The following are equivalent

I have been reviewing Rosen’s Discrete Mathematics and Its Applications textbook for a course this fall, and I noticed an interesting potential pitfall for students in the first chapter on logic and proofs.

Many theorems in mathematics are of the form, “p if and only if q,” where p and q are logical propositions that may be true or false.  For example:

Theorem 1: An integer m is even if and only if m+2 is even.

where in this case p is “m is even” and q is “m+2 is even.”  The statement of the theorem may itself be viewed as a proposition p \leftrightarrow q, where the logical connective \leftrightarrow is read “if and only if,” and behaves like Boolean equality.  Intuitively, p \leftrightarrow q states that “p and q are (materially) equivalent; they have the same truth value, either both true or both false.”

(Think Boolean expressions in your favorite programming language; for example, the proposition p \land q, read “p and q,” looks like p && q in C++, assuming that p and q are of type bool.  Similarly, the proposition p \leftrightarrow q looks like p == q in C++.)

Now consider extending this idea to the equivalence of more than just two propositions.  For example:

Theorem 2: Let m be an integer.  Then the following are equivalent:

  1. m is even.
  2. m+2 is even.
  3. m-2 is even.

The idea is that the three propositions above (let’s call them p_1, p_2, p_3) always have the same truth value; either all three are true, or all three are false.

So far, so good.  The problem arises when Rosen expresses this general idea of equivalence of multiple propositions p_1, p_2, \ldots, p_n as

p_1 \leftrightarrow p_2 \leftrightarrow \ldots \leftrightarrow p_n

Puzzle: What does this expression mean?  A first concern might be that we need parentheses to eliminate any ambiguity.  But perhaps unfortunately, it can be shown that the \leftrightarrow connective is associative, so this is a perfectly well-formed propositional formula even without parentheses.  The problem is that it doesn’t mean what it looks like it means.
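To see the problem concretely, here is a quick truth-table check in Python (a sketch of my own, not from Rosen), comparing the chained connective against genuine mutual equivalence of three propositions:

from itertools import product

for p1, p2, p3 in product([True, False], repeat=3):
    chain = (p1 == p2) == p3    # (p1 <-> p2) <-> p3, with == playing <->
    all_same = p1 == p2 == p3   # what "the following are equivalent" means
    if chain != all_same:
        print(p1, p2, p3, chain, all_same)

The output includes assignments such as (True, False, False), where the chained formula is true even though the three propositions do not all have the same truth value.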

Reference:

  • Rosen, K. H. (2011). Discrete Mathematics and Its Applications (7th ed.). New York, NY: McGraw-Hill. ISBN-13: 978-0073383095

Dice puzzle

I recently encountered the following interesting problem:

Suppose that I put 6 identical dice in a cup, and roll them simultaneously (as in Farkle, for example).  Then you take those same 6 dice, and roll them all again.  What is the probability that we both observe the same outcome?  For example, we may both roll one of each possible value (1-2-3-4-5-6, but not necessarily in order), or we may both roll three 3s and three 6s, etc.

I like this problem as an “extra” for combinatorics students learning about generating functions.  A numeric solution likely requires some programming (but I’ve been wrong about that here before); the implementation is not overly complex, while the construction of the solution is slightly beyond the “usual” type of homework problem.
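In fact, a direct computation is feasible by summing over all possible multisets of dice values; the following is a minimal Python sketch of my own (not part of the original problem statement):

from math import factorial
from itertools import combinations_with_replacement
from collections import Counter

def multiset_prob(ms, sides=6):
    # Probability that rolling len(ms) fair dice yields exactly this multiset.
    ways = factorial(len(ms))
    for count in Counter(ms).values():
        ways //= factorial(count)
    return ways / sides ** len(ms)

# The two rolls are independent, so P(same outcome) = sum over
# multisets m of P(m)^2.
print(sum(multiset_prob(ms) ** 2
          for ms in combinations_with_replacement(range(1, 7), 6)))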


A number concatenation problem

Introduction

Consider the following problem: given a finite set of positive integers, arrange them so that the concatenation of their decimal representations is as small as possible.  For example, given the numbers {1, 10, 12}, the arrangement (10, 1, 12) yields the minimum possible value 10112.

I saw a variant of this problem in a recent Reddit post, where it was presented as an “easy” programming challenge, referring in turn to a blog post by Santiago Valdarrama describing it as one of “Five programming problems every software engineer should be able to solve in 1 hour.”

I think the problem is interesting in that it seems simple and intuitive– and indeed it does have a solution with a relatively straightforward implementation– but there are also several “intuitive” approaches that don’t work… and even for the correct implementation, there is some subtlety involved in proving that it really works.

Brute force

First, the following Python 3 implementation simply tries all possible arrangements, returning the lexicographically smallest:

import itertools

def min_cat(num_strs):
    # Try every arrangement, returning the lexicographically smallest
    # concatenation.
    return min(''.join(s) for s in itertools.permutations(num_strs))

(Aside: for convenience in the following discussion, all inputs are assumed to be a list of strings of decimal representations of positive integers, rather than the integers themselves.  This lends some brevity to the code, without adding to or detracting from the complexity of the algorithms.)

This implementation works because every concatenated arrangement has the same length, so a lexicographic comparison is equivalent to comparing the corresponding numeric values.  It’s unacceptably inefficient, though, since we have to consider all n! possible arrangements of n inputs.

Sorting

We can do better, by sorting the inputs in non-decreasing order, and concatenating the result.  But this is where the problem gets tricky: what order relation should we use?

We can’t just use the natural ordering on the integers; using the same earlier example, the sorted arrangement (1, 10, 12) yields 11012, which is larger than the minimum 10112.  Similarly, the sorted arrangement (2, 11) yields 211, which is larger than the minimum 112.

We can’t use the natural lexicographic ordering on strings, either; the initial example (1, 10, 12) fails again here.

The complexity arises because the numbers in a given input set may have different lengths, i.e., numbers of digits.  If all of the numbers were guaranteed to have the same number of digits, then the numeric and lexicographic orderings would coincide, and both would yield the correct solution.  Several users in the Reddit thread, and even Valdarrama, propose “padding” each input in various ways before sorting to address this, but this is also tricky to get right.  For example, how should the inputs {12, 121} be padded so that a natural lexicographic ordering yields the correct minimum value 12112?

There is a way to do this, which I’ll leave as an exercise for the reader.  Instead, consider the following solution (still Python 3):

import functools

def cmp(x, y):
    # Negative, zero, or positive as the concatenation xy is numerically
    # less than, equal to, or greater than yx.
    return int(x + y) - int(y + x)

def min_cat(num_strs):
    # Sort by the relation x < y iff xy < yx, then concatenate.
    return ''.join(sorted(num_strs, key=functools.cmp_to_key(cmp)))
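A few quick checks against the earlier examples (my own additions):

print(min_cat(['1', '10', '12']))  # 10112
print(min_cat(['2', '11']))        # 112
print(min_cat(['12', '121']))      # 12112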

There are several interesting things going on here.  First, a Python-specific wrinkle: we need to specify the order relation \prec by which to sort.  This actually would have looked slightly simpler in the older Python 2.7, where you could specify a binary comparison function directly.  In Python 3, you can only provide a unary key function to apply to each element in the list, and sort by that.  It’s an interesting exercise in itself to work out how to “convert” a comparison function into the corresponding key function; here we lean on the built-in functools.cmp_to_key to do it for us.  (This idea of specifying an order relation by a natural comparison without a corresponding natural key has been discussed here before, in the context of Reddit’s comment ranking algorithm.)

Second, recall that the input num_strs is a list of strings, not integers, so in the implementation of the comparison cmp(x, y), the arguments are strings, and the addition operators are concatenation.  The comparison function returns a negative value if the concatenation xy, interpreted as an integer, is less than yx, zero if they are equal, or a positive value if xy is greater than yx.  The intended effect is to sort according to the relation x \prec y defined as xy < yx.

It works… but should it?

This implementation has a nice intuitive justification: suppose that the entire input list contained just two strings x and y.  Then the comparison function effectively realizes the “brute force” evaluation of the two possible arrangements xy and yx.

However, that same intuitive reasoning becomes dangerous as soon as we consider input lists with more than two elements.  That comparison function should bother us, for several reasons:

First, it’s not obvious that the resulting sorted ordering is even well-defined.  That is, is the order relation \prec a strict weak ordering of the set of (decimal string representations of) positive integers?  It certainly isn’t a total ordering, since distinct values can compare as “equal:” for example, consider (1, 11), or (123, 123123), etc.

Second, even assuming the comparison function does realize a strict weak ordering (we’ll prove this shortly), that ordering has some interesting properties.  For example, unlike the natural ordering on the positive integers, there is no smallest element.  That is, for any x, we can always find another strictly lesser y \prec x (as a simple example, note that x0 \prec x, e.g., 1230 \prec 123).  Also unlike the natural ordering on the positive integers, this ordering is dense; given any pair x \prec y, we can always find a third value z in between, i.e., x \prec z \prec y (for example, 1 \prec 12 \prec 2).

Finally, and perhaps most disturbingly, observe that a swap-based sorting algorithm will not necessarily make “monotonic” progress toward the solution: swapping elements that are “out of order” in terms of the comparison function may not always improve the overall situation.  For example, consider the partially-sorted list (12, 345, 1), whose concatenation yields 123451.  The comparison function indicates that 12 and 1 are “out of order” (121>112), but swapping them makes things worse: the concatenation of (1, 345, 12) yields the larger value 134512.

Proof of correctness

Given all of this perhaps non-intuitive weirdness, it seems worth being more rigorous in proving that the above implementation actually does work.  We do this in two steps:

Theorem 1: The relation \prec defined by the comparison function cmp is a strict weak ordering.

Proof: Irreflexivity follows from the definition.  To show transitivity, let x, y, z be positive integers with a, b, c digits, respectively, with x \prec y and y \prec z.  Then

10^b x+y < 10^a y+x and 10^c y+z < 10^b z+y

Rearranging, x(10^b-1) < y(10^a-1) and y(10^c-1) < z(10^b-1).  Multiplying the second of these inequalities by x/y and the first by z/y, we can chain them:

x(10^c-1) < x \frac{z}{y}(10^b-1) < z(10^a-1)

10^c x-x < 10^a z-z

10^c x+z < 10^a z+x

i.e., x \prec z.  Incomparability of x and y corresponds to xy=yx; this is an equivalence relation, with reflexivity and symmetry following from the definition, and transitivity shown exactly as above (with equality in place of inequality).

Theorem 2: Concatenating positive integers sorted by \prec yields the minimum value among all possible arrangements.

Proof: Let x_1 x_2 \ldots x_n be the concatenation of an arrangement of positive integers with minimum value, and suppose that it is not ordered by \prec, i.e., x_i \succ x_{i+1} for some 1 \leq i < n.  Then the concatenation x_1 x_2 \ldots x_{i+1} x_i \ldots x_n is strictly smaller, a contradiction.

(Note that this argument is only “simple” because x_i and x_{i+1} are adjacent.  As mentioned above, swapping non-adjacent elements that are out of order may not in general decrease the overall value.)


Cutting crown molding

This post captures my notes on how to determine the miter and bevel angles for cutting crown molding with a compound miter saw.  There are plenty of web sites with tables of these angles, and even formulas for calculating them, but I thought it would be useful to be more explicit about sign conventions, orientation of the finished cut pieces, etc., and to provide a slightly different version of the formulas that, unlike the typical presentation, doesn’t involve a discontinuity right in the middle of the region of interest.

Instructions

Let s be the measure of the spring angle, i.e., the angle made by the flat back side of the crown molding with the wall (typically 38 or 45 degrees).  Let w be the measure of the wall angle (e.g., 90 degrees for an inside corner, 270 degrees for an outside corner, etc.).

To cut the piece on the left-hand wall (facing the corner), set the bevel angle b and miter angle m to

b = \arcsin(-\cos\frac{w}{2}\cos s)

m = \arcsin(-\tan b \tan s)

where positive angles are to the right (i.e., positive miter angle is counter-clockwise).  Cut with the ceiling contact edge against the fence, and the finished piece on the left side of the blade.

To cut the piece on the right-hand wall (facing the corner), reverse the miter angle,

m' = -m = \arcsin(\tan b \tan s)

and cut with the wall contact edge against the fence, and the finished piece still on the left side of the blade.
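As a sanity check, these formulas are straightforward to evaluate numerically; the following is a small Python sketch of my own (the original notes do not include code):

import math

def crown_angles(spring_deg, wall_deg):
    # Bevel and miter angles, in degrees, for the left-hand piece,
    # per the formulas above (positive angles to the right).
    s = math.radians(spring_deg)
    w = math.radians(wall_deg)
    b = math.asin(-math.cos(w / 2) * math.cos(s))
    m = math.asin(-math.tan(b) * math.tan(s))
    return math.degrees(b), math.degrees(m)

# Standard 38-degree spring angle, inside 90-degree corner:
print(crown_angles(38, 90))  # approximately (-33.9, 31.6)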

Derivation

Let’s start by focusing on the crown molding piece on the left-hand wall as we face the corner.  Consider a coordinate frame with the ceiling corner at the origin, the positive x-axis running along the crown molding to be cut, the negative z-axis running down to the floor, and the y-axis completing the right-handed frame, as shown in the figure below.  In this example of an inside 90-degree corner, the positive y-axis runs along the opposite wall.

Cutting crown molding for left-hand wall. Example shows an inside corner (w=90 degrees).

The desired axis of rotation of the saw blade is normal to the triangular cross section at the corner, which may be computed as the cross product of unit vectors from the origin to the vertices of this cross section:

\mathbf{u} = (0, 0, -1) \times (\cos\frac{w}{2}, \sin\frac{w}{2}, 0)

To cut with the back of the crown molding flat on the saw table (the xz-plane), with the ceiling contact edge against the fence (the xy-plane), rotate this vector by angle s about the x-axis:

\mathbf{v} = \left(\begin{array}{ccc}1&0&0\\0&\cos s&-\sin s\\0&\sin s&\cos s\end{array}\right) \mathbf{u}

It remains to compute the bevel and miter rotations that transform the axis of rotation of the saw blade from its initial (1,0,0) to \mathbf{v}.  With the finished piece on the left side of the blade, the bevel is a rotation by angle b about the z-axis, followed by the miter rotation by angle m about the y-axis:

\left(\begin{array}{ccc}\cos m&0&\sin m\\0&1&0\\-\sin m&0&\cos m\end{array}\right) \left(\begin{array}{ccc}\cos b&-\sin b&0\\ \sin b&\cos b&0\\0&0&1\end{array}\right) \left(\begin{array}{c}1\\0\\0\end{array}\right) = \mathbf{v}

Solving yields the bevel and miter angles above.  For the crown molding piece on the right-hand wall, we can simply change the sign of both s and w, assuming that the wall contact edge is against the fence (still with the finished piece on the left side of the blade).  The result is no change to the bevel angle, and a sign change in the miter angle.
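The algebra is also easy to verify numerically; here is a quick check of my own using NumPy, confirming that the composed bevel and miter rotations carry the blade axis (1, 0, 0) to \mathbf{v} for an example pair of spring and wall angles:

import numpy as np

s, w = np.radians(38), np.radians(90)  # example spring and wall angles
u = np.cross([0, 0, -1], [np.cos(w / 2), np.sin(w / 2), 0])
Rx = np.array([[1, 0, 0],
               [0, np.cos(s), -np.sin(s)],
               [0, np.sin(s), np.cos(s)]])
v = Rx @ u
b = np.arcsin(-np.cos(w / 2) * np.cos(s))
m = np.arcsin(-np.tan(b) * np.tan(s))
Rb = np.array([[np.cos(b), -np.sin(b), 0],
               [np.sin(b), np.cos(b), 0],
               [0, 0, 1]])
Rm = np.array([[np.cos(m), 0, np.sin(m)],
               [0, 1, 0],
               [-np.sin(m), 0, np.cos(m)]])
print(np.allclose(Rm @ Rb @ [1, 0, 0], v))  # True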


Iterative Poohsticks

The game of Poohsticks was invented by A. A. Milne, appearing in his book The House at Pooh Corner, where Pooh and Christopher Robin each simultaneously drop a stick into a river from the upstream side of a bridge.  The winner is the one whose stick emerges first from the downstream side.  Sounds like fun, right?

Right.  Now consider the following iterative version of the game: as each player’s stick emerges from under the bridge, he retrieves it, then runs back to the upstream side of the bridge and drops the stick again.  Both players continue in this way, until one player’s stick “laps” the other, having emerged from under the bridge for the n-th time before the other player’s stick has emerged n-1 times.

Let’s make this more precise by modeling the game as a discrete event simulation with positive integer parameters (a, b, c).  Both players start at time 0 by simultaneously dropping their sticks, each of which emerges from under the bridge an integer number of seconds later, independently and uniformly distributed between a and b (inclusive).  The river is random, but the players are otherwise evenly matched: each player then takes a constant c seconds to recover his stick from the water, run back to the upstream side of the bridge, and drop the stick again.

Suppose, for example, that (a,b,c)=(10,30,5).  If the game ends at the instant the winner’s stick emerges from under the bridge having first lapped the other player’s stick, then what is the expected time t(a,b,c) to complete a game of Iterative Poohsticks?

I think this is a great problem.  As is often the case here, it’s not only an interesting mathematical problem to calculate the exact expected number of seconds to complete the game, but in addition, this game can even be tricky to simulate correctly, as a means of approximating the solution.  The potential trickiness stems from an ambiguity in the description of how the game ends: what happens if the leading player’s stick emerges from under the bridge for the n-th time, at exactly the same time that the trailing player’s stick emerges for the (n-1)-st time?

There are two possibilities.  Under version A of the rules, the game continues, so that the leading player’s stick must emerge strictly before the trailing player’s stick.  Under version B of the rules, the game ends there, so that the leading player’s stick need only emerge as or before the trailing player’s stick emerges in order to win.

(It’s interesting to consider which of these versions of the game is easier to simulate and/or analyze.  I think version B admits a slightly cleaner exact solution, although my simulation of the game switches more easily between the two versions.  For reference, the expected time to complete the game with the above parameters is about 309.911 seconds for version A, and about 290.014 seconds for version B.)
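Here is a minimal discrete event simulation sketch of my own (not the original simulation), with the ending rule as a parameter:

import random

def play(a, b, c, version='B'):
    # Next emergence time and emergence count for each player's stick.
    t = [random.randint(a, b), random.randint(a, b)]
    n = [0, 0]
    while True:
        now = min(t)
        emerged = [i for i in (0, 1) if t[i] == now]
        for i in emerged:
            n[i] += 1
            t[i] = now + c + random.randint(a, b)
        if abs(n[0] - n[1]) >= 2:
            return now  # the leader emerged strictly before the trailer
        if version == 'B' and len(emerged) == 2 and abs(n[0] - n[1]) == 1:
            return now  # simultaneous emergence ends the game in version B

games = [play(10, 30, 5, 'B') for _ in range(10 ** 5)]
print(sum(games) / len(games))  # should approach 290.014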


Horse race puzzle

Introduction

This post is part pedagogical rant, part discussion of a beautiful technique in combinatorics, both motivated by a recent exchange with a high school student about an interesting dice game that seems to be a common introductory exercise in probability:

There are 12 horses, numbered 1 through 12, each initially at position 0 on a track.  Play consists of a series of turns: on each turn, the teacher rolls two 6-sided dice, where the sum of the dice indicates which horse to advance one position along the track.  The first horse to reach position n=5 wins the race.

At first glance, this seems like a nice exercise.  Students quickly realize, for example, that horse #1 is a definite loser– the sum of two dice will never equal 1– and that horse #7 is the best bet to win the race, with the largest probability (1/6) of advancing on any given turn.

But what if a student asks, as this particular student did, “Okay, I can see how to calculate the distribution of probabilities of each horse advancing in a single turn, but what about the probabilities of each horse winning the race, as a function of the race length n?”  This makes me question whether this is indeed such a great exercise, at least as part of an introduction to probability.  What started as a fun game and engaging discussion has very naturally led to a significantly more challenging problem, whose solution is arguably beyond most students– and possibly many teachers as well– at the high school level.

I like this game anyway, and I imagine that I would likely use it if I were in a similar position.  Although the methods involved in an exact solution might be inappropriate at this level, the game still lends itself nicely to investigation via Monte Carlo simulation, especially for students with a programming bent.

Poissonization

There is an exact solution, however, via several different approaches.  This problem is essentially a variant of the coupon collector’s problem in disguise: if each box of cereal contains one of 12 different types of coupons, then if I buy boxes of cereal until I have n=5 of one type of coupon, what is the probability of stopping with each type of coupon?  Here the horses are the coupon types, and the dice rolls are the boxes of cereal.

As in the coupon collector’s problem, it is helpful to modify the model of the horse race in a way that, at first glance, seems like unnecessary additional complexity: suppose that the dice rolls occur at times distributed according to a Poisson process with rate 1.  Then the advances of each individual horse (that is, the subsets of dice rolls with each corresponding total) are also Poisson processes, each with rate equal to the probability p_i of the corresponding dice roll.

Most importantly, these individual processes are independent, meaning that we can easily compute the probability of desired states of the horses’ positions on the track at a particular time, as the product of the individual probabilities for each horse.  Integrating over all time yields the desired probability that horse j wins the race:

P(j) = \displaystyle\int_{0}^{\infty} p_j \frac{e^{-p_j t}(p_j t)^{n-1}}{(n-1)!} \prod\limits_{i \neq j} \sum\limits_{k=0}^{n-1} \frac{e^{-p_i t}(p_i t)^k}{k!} dt

Intuitively, horse j advances on the final dice roll, after exactly n-1 previous advances, while each of the other horses has advanced at most n-1 times.
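This integral can be evaluated numerically; the following is a sketch of my own using SciPy quadrature (a numeric approximation rather than the exact computation):

from math import exp, factorial
from scipy.integrate import quad

def win_prob_poisson(j, probs, n):
    # Numerically integrate the Poissonized expression above for horse j.
    def integrand(t):
        f = (probs[j] * exp(-probs[j] * t) * (probs[j] * t) ** (n - 1)
             / factorial(n - 1))
        for i, p in enumerate(probs):
            if i != j:
                f *= sum(exp(-p * t) * (p * t) ** k / factorial(k)
                         for k in range(n))
        return f
    return quad(integrand, 0, float('inf'))[0]

# Horses 2 through 12, advancing on the corresponding two-dice total:
probs = [(6 - abs(7 - s)) / 36 for s in range(2, 13)]
print(win_prob_poisson(5, probs, 5))  # probability horse #7 wins a race to 5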

Generating functions

This “Poissonization” trick is not the only way to solve the problem, and in fact may be less suitable for implementation without a sufficiently powerful computer algebra system.  Generating functions may also be used to “encode” the possible outcomes of dice rolls leading to victory for a particular horse, as follows:

G_j(x) = p_j \frac{(p_j x)^{n-1}}{(n-1)!} \prod\limits_{i \neq j} \sum\limits_{k=0}^{n-1} \frac{(p_i x)^k}{k!} 

where the probability that horse j wins on the (m+1)-st dice roll is m! times the coefficient of x^m in G_j(x).  Adding up these probabilities for all possible m yields the overall probability of winning.  This boils down to simple polynomial multiplication and addition, allowing relatively straightforward implementation in Python, for example.
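Here is a short exact implementation of my own in Python, using rational arithmetic (a sketch of the approach described above, not the original code):

from fractions import Fraction
from math import factorial

def poly_mul(p, q):
    # Multiply two polynomials given as coefficient lists.
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for k, b in enumerate(q):
            r[i + k] += a * b
    return r

def win_prob_exact(j, probs, n):
    # Coefficient list for (p_j x)^(n-1) / (n-1)!.
    g = [Fraction(0)] * (n - 1) + [probs[j] ** (n - 1) / factorial(n - 1)]
    for i, p in enumerate(probs):
        if i != j:
            g = poly_mul(g, [p ** k / factorial(k) for k in range(n)])
    # Horse j wins on roll m+1 with probability p_j m! [x^m] G_j(x);
    # sum over all m.
    return probs[j] * sum(factorial(m) * c for m, c in enumerate(g))

probs = [Fraction(6 - abs(7 - s), 36) for s in range(2, 13)]
print(win_prob_exact(5, probs, 5))  # exact probability that horse #7 wins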

The results are shown in the following figure.  Each curve corresponds to a race length, from n=1 in black– where the outcome is determined by a single dice roll– to n=6 in purple.

Probability distribution of each horse winning, with each curve corresponding to a race length n from 1 to 6.

As intuition might suggest, the longer the race, the more likely the favored horse #7 is to win.  This is true for any non-uniformity in the single-turn probability distribution.  For a contrasting example, consider a race with just 6 horses, with each turn decided by a single die roll.  This race is fair no matter how long it is; every horse always has the same probability of winning.  But if the die is loaded, no matter how slightly, then the longer the race, the more advantage to the favorite.


The hardest 24 puzzles

Introduction

Once again motivated by a series of interesting posts by Mark Dominus: a “24 puzzle” is a set of 4 randomly selected numbers from 1 to 9, where the objective is to arrange the numbers in an arithmetic expression using only addition, subtraction, multiplication, and division, to yield the value 24.  For example, given the numbers (3, 5, 5, 9), one solution is

5(3 + \frac{9}{5}) = 24

Solutions are in general not unique; for example, another possibility is

3(9 - \frac{5}{5}) = 24

This is a great game for kids, and it can be played with no more equipment than a standard deck of playing cards: remove the tens and face cards, shuffle the remaining 36 cards, and deal 4 cards to “generate” a puzzle.  Or keep all 52 cards, and generate potentially more difficult puzzles involving numbers from 1 to 13 instead of 1 to 9.

Or you could play the game using a different “target” value other than 24… but should you?  That is, is there anything special about the number 24 that makes it more suitable as a target value than, say, 25, or 10, etc.?  And whatever target value we decide to use, what makes some puzzles (i.e., sets of numbers) more difficult to solve than others?  What are the hardest puzzles?  Finally, subtraction is one of the allowed binary operations; what about unary minus (i.e., negation)?  Is this allowed?  Does it matter?  These are the sort of questions that make a simple children’s game a great source of interesting problems for both mathematics and computer science students.

(Aside: Is it “these are the sort of questions” or “these are the sorts of questions”?  I got embarrassingly derailed working on that sentence.  I could have re-worded to avoid the issue entirely, but it’s interesting enough that I choose to leave it in.)

Enumerating possible expressions

Following is my Mathematica implementation of a 24 puzzle “solver”:

trees[n_Integer] := trees[n] =
  If[n == 0, {N},
   Flatten@Table[Outer[Star,
      trees[k], trees[n - 1 - k]],
     {k, 0, n - 1}]]

sub[expr_, values_List, ops_List] :=
 Quiet@Fold[
   ReplacePart[#1,
     MapThread[Rule, {Position[#1, First[#2]], Last[#2]}]] &,
   expr,
   {{N, values}, {Star, ops}}]

search[visit_, values_List, ops_List] :=
 Outer[visit,
  trees[Length[values] - 1],
  Permutations[values],
  Tuples[ops, Length[values] - 1], 1]

The function trees[n] enumerates all possible expression trees involving n binary operations, which are counted by the Catalan numbers.  Each expression tree is just a “template,” with placeholders for the numbers and operators that will be plugged in using the function sub.  For example, a standard 24 puzzle with 4 numbers requires n=3 operators, in one of the following 5 patterns:

N * (N * (N * N))
N * ((N * N) * N)
(N * N) * (N * N)
(N * (N * N)) * N
((N * N) * N) * N

The function search takes a puzzle represented as a set of numbers and set of available operators, and simply explores the outer product of all possible expression trees, permutations of numbers, and selections of operators, “visiting” each resulting expression in turn.

The choice of visitor depends on the question we want to answer.  For example, the following code solves a given puzzle for a given target value, with a visitor that checks each evaluated expression’s value against the target, and pretty-prints the expression if it matches:

show[expr_, values_List, ops_List] :=
 ToExpression[
  ToString@sub[expr, ToString /@ values, ToString /@ ops],
  InputForm, HoldForm]

solve[target_Integer, values_List, ops_List] :=
 Reap@search[
     If[sub[##] == target, Sow@show[##]] &,
     values, ops] // Last // Flatten

But another useful visitor is just sub itself, in which case search computes the set of all possible values that can be made from all possible arithmetic arrangements of the given numbers and operators.  We can use this information in the following sections.
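As an aside, for readers less familiar with Mathematica, the following is a rough Python analogue of this brute-force enumeration (my own sketch, not from the original implementation), using exact rational arithmetic to handle division:

from fractions import Fraction
from itertools import permutations

def exprs(nums):
    # Yield (value, string) for every expression tree over nums, in order.
    if len(nums) == 1:
        yield Fraction(nums[0]), str(nums[0])
        return
    for k in range(1, len(nums)):
        for lv, ls in exprs(nums[:k]):
            for rv, rs in exprs(nums[k:]):
                yield lv + rv, f"({ls} + {rs})"
                yield lv - rv, f"({ls} - {rs})"
                yield lv * rv, f"({ls} * {rs})"
                if rv != 0:
                    yield lv / rv, f"({ls} / {rs})"

def solve(target, nums):
    # All distinct expression strings, over all orderings, hitting the target.
    return {s for p in set(permutations(nums))
            for v, s in exprs(list(p)) if v == target}

print(solve(24, (3, 5, 5, 9)))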

Why 24?

Suppose that we draw 4 random cards from a deck of 36 cards (with tens and face cards removed); what is the probability that the resulting puzzle is solvable?  The answer depends on the target– are we trying to find an expression that evaluates to 24, or to some other value?  The following figure shows the probability that a randomly selected puzzle is solvable, as a function of the target value.

Probability that a randomly selected puzzle, as dealt from a 36-card deck, is solvable, vs. the target value (usually 24).

The general downward trend makes sense: it’s more difficult to make larger numbers.  But most interesting are the targets that are multiples of 12 (highlighted by the vertical grid lines), whose corresponding probabilities are distinctly higher than their neighbors.  This also makes sense, at least in hindsight (although I doubt I would have predicted this behavior): multiples of 12 have a relatively large number of factors, allowing more possible ways to be “built.”
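Using the solve function sketched above, each point on this curve can be reproduced by weighting each multiset of ranks by the number of ways it can be dealt (again my own code; it is a slow brute-force check, but workable):

from math import comb
from itertools import combinations_with_replacement
from collections import Counter

def p_solvable(target, max_rank=9):
    # Probability that 4 cards dealt from 4 suits of ranks 1..max_rank
    # form a solvable puzzle for the given target.
    total, good = comb(4 * max_rank, 4), 0
    for ms in combinations_with_replacement(range(1, max_rank + 1), 4):
        ways = 1
        for v in Counter(ms).values():
            ways *= comb(4, v)  # choose suits for each repeated rank
        if solve(target, ms):
            good += ways
    return good / total

print(p_solvable(24))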

So this explains at least in part why 24 is “the” target value… but why not 12, for example, especially since it has an even higher probability of being solvable (i.e., an even lower probability of frustrating a child playing the game)?  The problem is that the target of 12 seems to be too easy, as the following figure shows, indicating for each target the expected number of different solutions to a randomly selected solvable puzzle:

Expected number of solutions to a randomly selected puzzle, conditioned on the puzzle being solvable, vs. the target value.

Of course, this just pushes the discussion in the other direction, asking whether a larger multiple of 12, like 36, for example, wouldn’t be an even better target value, allowing “difficult” puzzles while still having an approximately 84% probability of being solvable.  And it arguably would be, at least for more advanced players or students.

More generally, the following figure shows these two metrics together, with the expected number of solutions on the x-axis, and the probability of solvability on the y-axis, for each target value, with a few highlighted alternative target values along/near the Pareto frontier:

Probability of solvability vs. expected number of solutions.

The hardest 24 puzzles

Finally, which 24 puzzles are the hardest to solve?  The answer depends on the metric for difficulty, but one reasonable choice is the number of distinct solutions.  That is, among all possible expression trees, permutations of the numbers in the puzzle, and choices of available operators, how many yield the desired target value of 24?  The fewer the possible arrangements that work, the more difficult the puzzle.

It turns out that there are relatively few puzzles that have a unique solution, with exactly one possible arrangement of numbers and operators that evaluates to 24.  The list is below, where for completeness I have included all puzzles involving numbers up to 13 instead of just single digits.  (It’s worth noting that Mark’s example– which is indeed difficult– of arranging (2, 5, 6, 6) to yield 17 would not make this list.  And some of the puzzles that are on this list are arguably pretty easy, suggesting that there is something more to “hardness” than just uniqueness.)

  • (1, 2, 7, 7)
  • (1, 3, 4, 6)
  • (1, 5, 11, 11)
  • (1, 6, 6, 8)
  • (1, 7, 13, 13)
  • (1, 8, 12, 12)
  • (2, 3, 5, 12)
  • (3, 3, 5, 5)
  • (3, 3, 8, 8)
  • (4, 4, 10, 10)
  • (5, 5, 5, 5)
  • (5, 5, 8, 8)
  • (5, 5, 9, 9)
  • (5, 5, 10, 10)
  • (5, 5, 11, 11)
  • (5, 5, 13, 13)

And one more: (3, 4, 9, 10), although this one is special.  It has no solution involving only addition, subtraction, multiplication, and division.  For this puzzle, we must expand the set of available operators to also include exponentiation… and then the solution is unique.
