## Dice puzzle

I recently encountered the following interesting problem:

Suppose that I put 6 identical dice in a cup, and roll them simultaneously (as in Farkle, for example).  Then you take those same 6 dice, and roll them all again.  What is the probability that we both observe the same outcome?  For example, we may both roll one of each possible value (1-2-3-4-5-6, but not necessarily in order), or we may both roll three 3s and three 6s, etc.

I like this problem as an “extra” for combinatorics students learning about generating functions.  A numeric solution likely requires some programming (though I’ve been wrong about that here before); the implementation is not overly complex, but the construction of the solution is slightly beyond the “usual” type of homework problem.
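The post doesn’t give the answer away, but the brute-force computation is short enough to sketch here (this is my own Python sketch, not the generating-function solution alluded to above; the function name is invented):

```python
from itertools import combinations_with_replacement
from math import factorial

def prob_same_outcome(n_dice=6, n_sides=6):
    # Both rolls are independent and identically distributed, so the
    # probability of a match is the sum, over all unordered outcomes
    # (multisets of face values), of the squared probability of that outcome.
    total = 0.0
    for outcome in combinations_with_replacement(range(n_sides), n_dice):
        # Multinomial coefficient: number of ordered rolls realizing
        # this multiset of face values.
        ways = factorial(n_dice)
        for face in set(outcome):
            ways //= factorial(outcome.count(face))
        p = ways / n_sides ** n_dice
        total += p * p
    return total
```

A useful sanity check: with a single die, the probability of matching is exactly 1/6.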

Posted in Uncategorized | 2 Comments

## A number concatenation problem

Introduction

Consider the following problem: given a finite set of positive integers, arrange them so that the concatenation of their decimal representations is as small as possible.  For example, given the numbers {1, 10, 12}, the arrangement (10, 1, 12) yields the minimum possible value 10112.

I saw a variant of this problem in a recent Reddit post, where it was presented as an “easy” programming challenge, referring in turn to a blog post by Santiago Valdarrama describing it as one of “Five programming problems every software engineer should be able to solve in 1 hour.”

I think the problem is interesting in that it seems simple and intuitive– and indeed it does have a solution with a relatively straightforward implementation– but there are also several “intuitive” approaches that don’t work… and even for the correct implementation, there is some subtlety involved in proving that it really works.

Brute force

First, the following Python 3 implementation simply tries all possible arrangements, returning the lexicographically smallest:

import itertools

def min_cat(num_strs):
    return min(''.join(s) for s in itertools.permutations(num_strs))


(Aside: for convenience in the following discussion, all inputs are assumed to be a list of strings of decimal representations of positive integers, rather than the integers themselves.  This lends some brevity to the code, without adding to or detracting from the complexity of the algorithms.)

This implementation works because every concatenated arrangement has the same length, so a lexicographic comparison is equivalent to comparing the corresponding numeric values.  It’s unacceptably inefficient, though, since we have to consider all $n!$ possible arrangements of $n$ inputs.

Sorting

We can do better, by sorting the inputs in non-decreasing order, and concatenating the result.  But this is where the problem gets tricky: what order relation should we use?

We can’t just use the natural ordering on the integers; with the earlier example, the sorted arrangement (1, 10, 12) yields 11012, which is larger than the minimum 10112.  Similarly, the sorted arrangement (2, 11) yields 211, which is larger than the minimum 112.

We can’t use the natural lexicographic ordering on strings, either; the initial example (1, 10, 12) fails again here.

The complexity arises because the numbers in a given input set may have different lengths, i.e., numbers of digits.  If all of the numbers were guaranteed to have the same number of digits, then the numeric and lexicographic orderings would coincide, and both would yield the correct solution.  Several users in the Reddit thread, and even Valdarrama, propose “padding” each input in various ways before sorting to address this, but this is also tricky to get right.  For example, how should the inputs {12, 121} be padded so that a natural lexicographic ordering yields the correct minimum value 12112?

There is a way to do this, which I’ll leave as an exercise for the reader.  Instead, consider the following solution (still Python 3):

import functools

def cmp(x, y):
    return int(x + y) - int(y + x)

def min_cat(num_strs):
    return ''.join(sorted(num_strs, key=functools.cmp_to_key(cmp)))


There are several interesting things going on here.  First, a Python-specific wrinkle: we need to specify the order relation $\prec$ by which to sort.  This actually would have looked slightly simpler in the older Python 2.7, where you could specify a binary comparison function directly.  In Python 3, you can only provide a unary key function to apply to each element in the list, and sort by that.  It’s an interesting exercise in itself to work out how to “convert” a comparison function into the corresponding key function; here we lean on the built-in functools.cmp_to_key to do it for us.  (This idea of specifying an order relation by a natural comparison without a corresponding natural key has been discussed here before, in the context of Reddit’s comment ranking algorithm.)

Second, recall that the input num_strs is a list of strings, not integers, so in the implementation of the comparison cmp(x, y), the arguments are strings, and the + operators denote string concatenation.  The comparison function returns a negative value if the concatenation $xy$, interpreted as an integer, is less than $yx$, zero if they are equal, or a positive value if $xy$ is greater than $yx$.  The intended effect is to sort according to the relation $x \prec y$ defined as $xy < yx$.

It works… but should it?

This implementation has a nice intuitive justification: suppose that the entire input list contained just two strings $x$ and $y$.  Then the comparison function effectively realizes the “brute force” evaluation of the two possible arrangements $xy$ and $yx$.

However, that same intuitive reasoning becomes dangerous as soon as we consider input lists with more than two elements.  That comparison function should bother us, for several reasons:

First, it’s not obvious that the resulting sorted ordering is even well-defined.  That is, is the order relation $\prec$ a strict weak ordering of the set of (decimal string representations of) positive integers?  It certainly isn’t a total ordering, since distinct values can compare as “equal”: for example, consider (1, 11), or (123, 123123), etc.

Second, even assuming the comparison function does realize a strict weak ordering (we’ll prove this shortly), that ordering has some interesting properties.  For example, unlike the natural ordering on the positive integers, there is no smallest element.  That is, for any $x$, we can always find another strictly lesser $y \prec x$ (as a simple example, note that $x0 \prec x$, e.g., $1230 \prec 123$).  Also unlike the natural ordering on the positive integers, this ordering is dense; given any pair $x \prec y$, we can always find a third value $z$ in between, i.e., $x \prec z \prec y$.

Finally, and perhaps most disturbingly, observe that a swap-based sorting algorithm will not necessarily make “monotonic” progress toward the solution: swapping elements that are “out of order” in terms of the comparison function may not always improve the overall situation.  For example, consider the partially-sorted list (12, 345, 1), whose concatenation yields 123451.  The comparison function indicates that 12 and 1 are “out of order” (121>112), but swapping them makes things worse: the concatenation of (1, 345, 12) yields the larger value 134512.
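This last example is easy to verify directly, using the comparison function from the solution above:

```python
def cmp(x, y):
    # The order relation from the sorting solution: x precedes y iff xy < yx.
    return int(x + y) - int(y + x)

# 12 and 1 compare as "out of order" (121 > 112)...
assert cmp('12', '1') > 0
# ...but swapping them across the intervening 345 makes the result larger
# (both concatenations have equal length, so lexicographic comparison works):
assert ''.join(['12', '345', '1']) < ''.join(['1', '345', '12'])  # 123451 < 134512
```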

Proof of correctness

Given all of this perhaps non-intuitive weirdness, it seems worth being more rigorous in proving that the above implementation actually does work.  We do this in two steps:

Theorem 1: The relation $\prec$ defined by the comparison function cmp is a strict weak ordering.

Proof: Irreflexivity follows from the definition.  To show transitivity, let $x, y, z$ be positive integers with $a, b, c$ digits, respectively, with $x \prec y$ and $y \prec z$.  Then

$10^b x+y < 10^a y+x$ and $10^c y+z < 10^b z+y$

Rearranging, $x(10^b-1) < y(10^a-1)$ and $y(10^c-1) < z(10^b-1)$.  Multiplying the second inequality by $x/y$ and the first by $z/y$,

$x(10^c-1) < x \frac{z}{y}(10^b-1) < z(10^a-1)$

$10^c x-x < 10^a z-z$

$10^c x+z < 10^a z+x$

i.e., $x \prec z$.  Incomparability of $x$ and $y$ corresponds to $xy=yx$; this is an equivalence relation, with reflexivity and symmetry following from the definition, and transitivity shown exactly as above (with equality in place of inequality).

Theorem 2: Concatenating positive integers sorted by $\prec$ yields the minimum value among all possible arrangements.

Proof: Let $x_1 x_2 \ldots x_n$ be the concatenation of an arrangement of positive integers with minimum value, and suppose that it is not ordered by $\prec$, i.e., $x_i \succ x_{i+1}$ for some $1 \leq i < n$.  Then the concatenation $x_1 x_2 \ldots x_{i+1} x_i \ldots x_n$ is strictly smaller, a contradiction.

(Note that this argument is only “simple” because $x_i$ and $x_{i+1}$ are adjacent.  As mentioned above, swapping non-adjacent elements that are out of order may not in general decrease the overall value.)
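As an empirical complement to the proof, the sorted implementation can be cross-checked against the brute force on small random inputs (a test harness of my own, not part of the post):

```python
import functools
import itertools
import random

def brute_min_cat(num_strs):
    # Exhaustive search over all arrangements.
    return min(''.join(p) for p in itertools.permutations(num_strs))

def sorted_min_cat(num_strs):
    # Sort by the relation x < y iff int(x + y) < int(y + x).
    key = functools.cmp_to_key(lambda x, y: int(x + y) - int(y + x))
    return ''.join(sorted(num_strs, key=key))

random.seed(17)
for _ in range(500):
    nums = [str(random.randint(1, 999)) for _ in range(5)]
    assert brute_min_cat(nums) == sorted_min_cat(nums)
```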

Posted in Uncategorized | 4 Comments

## Cutting crown molding

This post captures my notes on how to determine the miter and bevel angles for cutting crown molding with a compound miter saw.  There are plenty of web sites with tables of these angles, and even formulas for calculating them, but I thought it would be useful to be more explicit about sign conventions, orientation of the finished cut pieces, etc., as well as to provide a slightly different version of the formulas that doesn’t involve a discontinuity right in the middle of the region of interest, as typically seems to be the case.

Instructions

Let $s$ be the measure of the spring angle, i.e., the angle made by the flat back side of the crown molding with the wall (typically 38 or 45 degrees).  Let $w$ be the measure of the wall angle (e.g., 90 degrees for an inside corner, 270 degrees for an outside corner, etc.).

To cut the piece on the left-hand wall (facing the corner), set the bevel angle $b$ and miter angle $m$ to

$b = \arcsin(-\cos\frac{w}{2}\cos s)$

$m = \arcsin(-\tan b \tan s)$

where positive angles are to the right (i.e., positive miter angle is counter-clockwise).  Cut with the ceiling contact edge against the fence, and the finished piece on the left side of the blade.

To cut the piece on the right-hand wall (facing the corner), reverse the miter angle,

$m' = -m = \arcsin(\tan b \tan s)$

and cut with the wall contact edge against the fence, and the finished piece still on the left side of the blade.

Derivation

Let’s start by focusing on the crown molding piece on the left-hand wall as we face the corner.  Consider a coordinate frame with the ceiling corner at the origin, the positive x-axis running along the crown molding to be cut, the negative z-axis running down to the floor, and the y-axis completing the right-handed frame, as shown in the figure below.  In this example of an inside 90-degree corner, the positive y-axis runs along the opposite wall.

Cutting crown molding for left-hand wall. Example shows an inside corner (w=90 degrees).

The desired axis of rotation of the saw blade is normal to the triangular cross section at the corner, which may be computed as the cross product of unit vectors from the origin to the vertices of this cross section:

$\mathbf{u} = (0, 0, -1) \times (\cos\frac{w}{2}, \sin\frac{w}{2}, 0)$

To cut with the back of the crown molding flat on the saw table (the xz-plane), with the ceiling contact edge against the fence (the xy-plane), rotate this vector by angle $s$ about the x-axis:

$\mathbf{v} = \left(\begin{array}{ccc}1&0&0\\0&\cos s&-\sin s\\0&\sin s&\cos s\end{array}\right) \mathbf{u}$

It remains to compute the bevel and miter rotations that transform the axis of rotation of the saw blade from its initial $(1,0,0)$ to $\mathbf{v}$.  With the finished piece on the left side of the blade, the bevel is a rotation by angle $b$ about the z-axis, followed by the miter rotation by angle $m$ about the y-axis:

$\left(\begin{array}{ccc}\cos m&0&\sin m\\0&1&0\\-\sin m&0&\cos m\end{array}\right) \left(\begin{array}{ccc}\cos b&-\sin b&0\\ \sin b&\cos b&0\\0&0&1\end{array}\right) \left(\begin{array}{c}1\\0\\0\end{array}\right) = \mathbf{v}$

Solving yields the bevel and miter angles above.  For the crown molding piece on the right-hand wall, we can simply change the sign of both $s$ and $w$, assuming that the wall contact edge is against the fence (still with the finished piece on the left side of the blade).  The result is no change to the bevel angle, and a sign change in the miter angle.
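The formulas can also be checked numerically against the derivation (a Python sketch of my own; crown_angles and check are invented names), confirming that the miter and bevel rotations really do carry the blade axis $(1,0,0)$ onto $\mathbf{v}$:

```python
from math import asin, cos, degrees, radians, sin, tan

def crown_angles(spring_deg, wall_deg):
    # Bevel b and miter m (in degrees) for the left-hand piece,
    # per the formulas above.
    s, w = radians(spring_deg), radians(wall_deg)
    b = asin(-cos(w / 2) * cos(s))
    m = asin(-tan(b) * tan(s))
    return degrees(b), degrees(m)

def check(spring_deg, wall_deg):
    # Verify that miter(m) . bevel(b) applied to (1,0,0) equals v,
    # the rotated corner normal from the derivation.
    s, w = radians(spring_deg), radians(wall_deg)
    u = (sin(w / 2), -cos(w / 2), 0.0)        # (0,0,-1) x (cos w/2, sin w/2, 0)
    v = (u[0], u[1] * cos(s), u[1] * sin(s))  # rotate u by s about the x-axis
    b, m = (radians(a) for a in crown_angles(spring_deg, wall_deg))
    blade = (cos(m) * cos(b), sin(b), -sin(m) * cos(b))
    return max(abs(p - q) for p, q in zip(v, blade)) < 1e-12

# Typical 38-degree spring, inside 90-degree corner: the familiar table
# values, bevel about -33.9 and miter about 31.6 degrees.
print(crown_angles(38, 90))
```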

## Iterative Poohsticks

The game of Poohsticks was invented by A. A. Milne, appearing in The House at Pooh Corner: Pooh and Christopher Robin each simultaneously drop a stick into a river from the upstream side of a bridge.  The winner is the one whose stick emerges first from the downstream side.  Sounds like fun, right?

Right.  Now consider the following iterative version of the game: as each player’s stick emerges from under the bridge, he retrieves it, then runs back to the upstream side of the bridge and drops the stick again.  Both players continue in this way, until one player’s stick “laps” the other, having emerged from under the bridge for the $n$-th time before the other player’s stick has emerged $n-1$ times.

Let’s make this more precise by modeling the game as a discrete event simulation with positive integer parameters $(a, b, c)$.  Both players start at time 0 by simultaneously dropping their sticks, each of which emerges from under the bridge an integer number of seconds later, independently and uniformly distributed between $a$ and $b$ (inclusive).  The river is random, but the players are otherwise evenly matched: each player then takes a constant $c$ seconds to recover his stick from the water, run back to the upstream side of the bridge, and drop the stick again.

Suppose, for example, that $(a,b,c)=(10,30,5)$.  If the game ends at the instant the winner’s stick emerges from under the bridge having first lapped the other player’s stick, then what is the expected time $t(a,b,c)$ to complete a game of Iterative Poohsticks?

I think this is a great problem.  As is often the case here, it’s not only an interesting mathematical problem to calculate the exact expected number of seconds to complete the game, but in addition, this game can even be tricky to simulate correctly, as a means of approximating the solution.  The potential trickiness stems from an ambiguity in the description of how the game ends: what happens if the leading player’s stick emerges from under the bridge for the $n$-th time, at exactly the same time that the trailing player’s stick emerges for the $(n-1)$-st time?

There are two possibilities.  Under version A of the rules, the game continues, so that the leading player’s stick must emerge strictly before the trailing player’s stick.  Under version B of the rules, the game ends there, so that the leading player’s stick need only emerge as or before the trailing player’s stick emerges in order to win.

(It’s interesting to consider which of these versions of the game is easier to simulate and/or analyze.  I think version B admits a slightly cleaner exact solution, although my simulation of the game switches more easily between the two versions.  For reference, the expected time to complete the game with the above parameters is about 309.911 seconds for version A, and about 290.014 seconds for version B.)
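A minimal discrete-event sketch of the simulation (my own Python, not the author’s; the tie-handling encodes the version A/B distinction discussed above):

```python
import random

def game_time(draw1, draw2, c, version):
    # One game of Iterative Poohsticks.  draw1/draw2 return one pass
    # duration (time under the bridge); c is the constant recovery time.
    # Version 'A': a lap must be strict; version 'B': a tie on the lap
    # also ends the game.
    t1, t2 = draw1(), draw2()  # next emergence times
    c1 = c2 = 0                # emergence counts
    while True:
        t = min(t1, t2)
        e1, e2 = t1 == t, t2 == t
        c1 += e1
        c2 += e2
        if abs(c1 - c2) >= 2:
            return t  # strict lap: ends the game under either version
        if version == 'B' and e1 and e2 and abs(c1 - c2) >= 1:
            return t  # simultaneous emergence with the leader one ahead
        if e1:
            t1 = t + c + draw1()
        if e2:
            t2 = t + c + draw2()

def estimate(a, b, c, version, trials=10000, seed=0):
    # Monte Carlo estimate of the expected completion time.
    rng = random.Random(seed)
    draw = lambda: rng.randint(a, b)
    return sum(game_time(draw, draw, c, version) for _ in range(trials)) / trials
```

A handy deterministic unit test: if one stick always takes 1 second and the other always 3, with $c=1$, the game ends at time 5 under version A but at time 3 under version B.  With enough trials, estimate(10, 30, 5, 'A') should approach the quoted 309.911.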

Posted in Uncategorized | 3 Comments

## Horse race puzzle

Introduction

This post is part pedagogical rant, part discussion of a beautiful technique in combinatorics, both motivated by a recent exchange with a high school student, about an interesting dice game that seems to be a common introductory exercise in probability:

There are 12 horses, numbered 1 through 12, each initially at position 0 on a track.  Play consists of a series of turns: on each turn, the teacher rolls two 6-sided dice, where the sum of the dice indicates which horse to advance one position along the track.  The first horse to reach position $n=5$ wins the race.

At first glance, this seems like a nice exercise.  Students quickly realize, for example, that horse #1 is a definite loser– the sum of two dice will never equal 1– and that horse #7 is the best bet to win the race, with the largest probability (1/6) of advancing on any given turn.

But what if a student asks, as this particular student did, “Okay, I can see how to calculate the distribution of probabilities of each horse advancing in a single turn, but what about the probabilities of each horse winning the race, as a function of the race length $n$?”  This makes me question whether this is indeed such a great exercise, at least as part of an introduction to probability.  What started as a fun game and engaging discussion has very naturally led to a significantly more challenging problem, whose solution is arguably beyond most students– and possibly many teachers as well– at the high school level.

I like this game anyway, and I imagine that I would likely use it if I were in a similar position.  Although the methods involved in an exact solution might be inappropriate at this level, the game still lends itself nicely to investigation via Monte Carlo simulation, especially for students with a programming bent.

Poissonization

There is an exact solution, however, via several different approaches.  This problem is essentially a variant of the coupon collector’s problem in disguise: if each box of cereal contains one of 12 different types of coupons, then if I buy boxes of cereal until I have $n=5$ of one type of coupon, what is the probability of stopping with each type of coupon?  Here the horses are the coupon types, and the dice rolls are the boxes of cereal.

As in the coupon collector’s problem, it is helpful to modify the model of the horse race in a way that, at first glance, seems like unnecessary additional complexity: suppose that the dice rolls occur at times distributed according to a Poisson process with rate 1.  Then the advances of each individual horse (that is, the subsets of dice rolls with each corresponding total) are also Poisson processes, each with rate equal to the probability $p_i$ of the corresponding dice roll.

Most importantly, these individual processes are independent, meaning that we can easily compute the probability of desired states of the horses’ positions on the track at a particular time, as the product of the individual probabilities for each horse.  Integrating over all time yields the desired probability that horse $j$ wins the race:

$P(j) = \displaystyle\int_{0}^{\infty} p_j \frac{e^{-p_j t}(p_j t)^{n-1}}{(n-1)!} \prod\limits_{i \neq j} \sum\limits_{k=0}^{n-1} \frac{e^{-p_i t}(p_i t)^k}{k!} dt$

Intuitively, horse $j$ advances on the final dice roll, after exactly $n-1$ previous advances, while each of the other horses has advanced at most $n-1$ times.

Generating functions

This “Poissonization” trick is not the only way to solve the problem, and in fact may be less suitable for implementation without a sufficiently powerful computer algebra system.  Generating functions may also be used to “encode” the possible outcomes of dice rolls leading to victory for a particular horse, as follows:

$G_j(x) = p_j \frac{(p_j x)^{n-1}}{(n-1)!} \prod\limits_{i \neq j} \sum\limits_{k=0}^{n-1} \frac{(p_i x)^k}{k!}$

where the probability that horse $j$ wins on the $(m+1)$-st dice roll is $m!$ times the coefficient of $x^m$ in $G_j(x)$.  Adding up these probabilities for all possible $m$ yields the overall probability of winning.  This boils down to simple polynomial multiplication and addition, allowing relatively straightforward implementation in Python, for example.
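Here is a sketch of that implementation (the names are my own; coefficients are kept as exact fractions so the results are exact rationals):

```python
from fractions import Fraction
from math import factorial

def polymul(a, b):
    # Multiply two polynomials given as coefficient lists.
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def win_probabilities(p, n):
    # p[j]: single-turn advance probability of horse j (summing to 1);
    # n: race length.  Returns P(j) = sum over m of m! [x^m] G_j(x).
    wins = []
    for j in range(len(p)):
        # leading factor p_j (p_j x)^(n-1) / (n-1)!
        g = [Fraction(0)] * (n - 1) + [Fraction(p[j]) ** n / factorial(n - 1)]
        for i in range(len(p)):
            if i != j:
                g = polymul(g, [Fraction(p[i]) ** k / factorial(k)
                                for k in range(n)])
        wins.append(sum(factorial(m) * c for m, c in enumerate(g)))
    return wins

# Two dice, horses 1..12 (horse 1 never advances), race length n = 5:
p = [Fraction(k, 36) for k in [0, 1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1]]
probs = win_probabilities(p, 5)
```

For $n=1$ the computed distribution reduces to the single-turn probabilities, and in general the probabilities sum to exactly 1, since exactly one horse wins every race.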

The results are shown in the following figure.  Each curve corresponds to a race length, from $n=1$ in black– where the outcome is determined by a single dice roll– to $n=6$ in purple.

Probability distribution of each horse winning, with each curve corresponding to a race length n from 1 to 6.

As intuition might suggest, the longer the race, the more likely the favored horse #7 is to win.  This is true for any non-uniformity in the single-turn probability distribution.  For a contrasting example, consider a race with just 6 horses, with each turn decided by a single die roll.  This race is fair no matter how long it is; every horse always has the same probability of winning.  But if the die is loaded, no matter how slightly, then the longer the race, the more advantage to the favorite.

Posted in Uncategorized | 3 Comments

## The hardest 24 puzzles

Introduction

This post is once again motivated by a series of interesting posts by Mark Dominus.  A “24 puzzle” is a set of 4 randomly selected numbers from 1 to 9, where the objective is to arrange the numbers in an arithmetic expression using only addition, subtraction, multiplication, and division, to yield the value 24.  For example, given the numbers (3, 5, 5, 9), one solution is

$5(3 + \frac{9}{5}) = 24$

Solutions are in general not unique; for example, another possibility is

$3(9 - \frac{5}{5}) = 24$

This is a great game for kids, and it can be played with no more equipment than a standard deck of playing cards: remove the tens and face cards, shuffle the remaining 36 cards, and deal 4 cards to “generate” a puzzle.  Or keep all 52 cards, and generate potentially more difficult puzzles involving numbers from 1 to 13 instead of 1 to 9.

Or you could play the game using a different “target” value other than 24… but should you?  That is, is there anything special about the number 24 that makes it more suitable as a target value than, say, 25, or 10, etc.?  And whatever target value we decide to use, what makes some puzzles (i.e., sets of numbers) more difficult to solve than others?  What are the hardest puzzles?  Finally, subtraction is one of the allowed binary operations; what about unary minus (i.e., negation)?  Is this allowed?  Does it matter?  These are the sort of questions that make a simple children’s game a great source of interesting problems for both mathematics and computer science students.

(Aside: Is it “these are the sort of questions” or “these are the sorts of questions”?  I got embarrassingly derailed working on that sentence.  I could have re-worded to avoid the issue entirely, but it’s interesting enough that I choose to leave it in.)
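Many of the questions above can be explored with a short brute-force checker.  Here is a Python sketch of my own (ignoring unary minus and exponentiation for now; exact rational arithmetic matters for puzzles like (3, 3, 8, 8), solved by 8/(3 - 8/3)):

```python
from fractions import Fraction
from itertools import permutations

def solvable(numbers, target=24):
    # Repeatedly combine any two remaining values with +, -, *, /
    # (in both orders); pairwise reduction covers every expression tree.
    def solve(vals):
        if len(vals) == 1:
            return vals[0] == target
        for i, j in permutations(range(len(vals)), 2):
            rest = [v for k, v in enumerate(vals) if k not in (i, j)]
            x, y = vals[i], vals[j]
            candidates = [x + y, x - y, x * y]
            if y != 0:
                candidates.append(x / y)
            if any(solve(rest + [r]) for r in candidates):
                return True
        return False
    return solve([Fraction(v) for v in numbers])
```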

Enumerating possible expressions

Following is my Mathematica implementation of a 24 puzzle “solver”:

trees[n_Integer] := trees[n] =
If[n == 0, {N},
Flatten@Table[Outer[Star,
trees[k], trees[n - 1 - k]],
{k, 0, n - 1}]]

sub[expr_, values_List, ops_List] :=
Quiet@Fold[
ReplacePart[#1,
Thread[Position[#1, First@#2] -> Last@#2]] &,
expr,
{{N, values}, {Star, ops}}]

search[visit_, values_List, ops_List] :=
Outer[visit,
trees[Length[values] - 1],
Permutations[values],
Tuples[ops, Length[values] - 1], 1]


The function trees[n] enumerates all possible expression trees involving $n$ binary operations, which are counted by the Catalan numbers.  Each expression tree is just a “template,” with placeholders for the numbers and operators that will be plugged in using the function sub.  For example, a standard 24 puzzle with 4 numbers requires $n=3$ operators, in one of the following 5 patterns:

N * (N * (N * N))
N * ((N * N) * N)
(N * N) * (N * N)
(N * (N * N)) * N
((N * N) * N) * N


The function search takes a puzzle represented as a set of numbers and set of available operators, and simply explores the outer product of all possible expression trees, permutations of numbers, and selections of operators, “visiting” each resulting expression in turn.

The choice of visitor depends on the question we want to answer.  For example, the following code solves a given puzzle for a given target value, with a visitor that checks each evaluated expression’s value against the target, and pretty-prints the expression if it matches:

show[expr_, values_List, ops_List] :=
ToExpression[
ToString@sub[expr, ToString /@ values, ToString /@ ops],
InputForm, HoldForm]

solve[target_Integer, values_List, ops_List] :=
Reap@search[
If[sub[##] == target, Sow@show[##]] &,
values, ops] // Last // Flatten


But another useful visitor is just sub itself, in which case search computes the set of all possible values that can be made from all possible arithmetic arrangements of the given numbers and operators.  We can use this information in the following sections.

Why 24?

Suppose that we draw 4 random cards from a deck of 36 cards (with face cards removed); what is the probability that the resulting puzzle is solvable?  The answer depends on the target– are we trying to find an expression that evaluates to 24, or to some other value?  The following figure shows the probability that a randomly selected puzzle is solvable, as a function of the target value.

Probability that a randomly selected puzzle, as dealt from a 36-card deck, is solvable, vs. the target value (usually 24).

The general downward trend makes sense: it’s more difficult to make larger numbers.  But most interesting are the targets that are multiples of 12 (highlighted by the vertical grid lines), whose corresponding probabilities are distinctly higher than their neighbors.  This also makes sense, at least in hindsight (although I doubt I would have predicted this behavior): multiples of 12 have a relatively large number of factors, allowing more possible ways to be “built.”

So this explains at least in part why 24 is “the” target value… but why not 12, for example, especially since it has an even higher probability of being solvable (i.e., an even lower probability of frustrating a child playing the game)?  The problem is that the target of 12 seems to be too easy, as the following figure shows, indicating for each target the expected number of different solutions to a randomly selected solvable puzzle:

Expected number of solutions to a randomly selected puzzle, conditioned on the puzzle being solvable, vs. the target value.

Of course, this just pushes the discussion in the other direction, asking whether a larger multiple of 12, like 36, for example, wouldn’t be an even better target value, allowing “difficult” puzzles while still having an approximately 84% probability of being solvable.  And it arguably would be, at least for more advanced players or students.

More generally, the following figure shows these two metrics together, with the expected number of solutions on the x-axis, and the probability of solvability on the y-axis, for each target value, with a few highlighted alternative target values along/near the Pareto frontier:

Probability of solvability vs. expected number of solutions.

The hardest 24 puzzles

Finally, which 24 puzzles are the hardest to solve?  The answer depends on the metric for difficulty, but one reasonable choice is the number of distinct solutions.  That is, among all possible expression trees, permutations of the numbers in the puzzle, and choices of available operators, how many yield the desired target value of 24?  The fewer the possible arrangements that work, the more difficult the puzzle.

It turns out that there are relatively few puzzles that have a unique solution, with exactly one possible arrangement of numbers and operators that evaluates to 24.  The list is below, where for completeness I have included all puzzles involving numbers up to 13 instead of just single digits.  (It’s worth noting that Mark’s example– which is indeed difficult– of arranging (2, 5, 6, 6) to yield 17, would not make this list.  And some of the puzzles that are on this list are arguably pretty easy, suggesting that there is something more to “hardness” than just uniqueness.)

• (1, 2, 7, 7)
• (1, 3, 4, 6)
• (1, 5, 11, 11)
• (1, 6, 6, 8)
• (1, 7, 13, 13)
• (1, 8, 12, 12)
• (2, 3, 5, 12)
• (3, 3, 5, 5)
• (3, 3, 8, 8)
• (4, 4, 10, 10)
• (5, 5, 5, 5)
• (5, 5, 8, 8)
• (5, 5, 9, 9)
• (5, 5, 10, 10)
• (5, 5, 11, 11)
• (5, 5, 13, 13)

And one more: (3, 4, 9, 10), although this one is special.  It has no solution involving only addition, subtraction, multiplication, and division.  For this puzzle, we must expand the set of available operators to also include exponentiation… and then the solution is unique.

Posted in Uncategorized | 2 Comments

## Anagrams

Introduction

This was a fun exercise, motivated by several interesting recent posts by Mark Dominus at The Universe of Discourse about finding anagrams of individual English words, such as (relationships, rhinoplasties), and how to compute a “score” for such anagrams by some reasonable measure of the complexity of the rearrangement, so that (attentiveness, tentativeness), with a common 8-letter suffix, may be viewed as less “interesting” than, say, the more thoroughly shuffled (microclimates, commercialist).

The proposed scoring metric is the size of a “minimum common string partition” (MCSP): what is the minimum number of blocks of consecutive letters in a partition of the first word that may be permuted and re-concatenated to yield the second word?  For example, the above word attentiveness may be partitioned into 3 blocks, at+tent+iveness, and transposing the first two blocks yields tent+at+iveness.  Thus, the score for this anagram is only 3.  Compare this with the score of 12 for (intolerances, crenelations), where all 12 letters must be rearranged.

Computing MCSP

I wanted to experiment with this idea in a couple of different ways.  First, as Mark points out, the task of finding the anagrams themselves is pretty straightforward, but computing the resulting MCSP scores is NP-complete.  Fortunately, there is a nice characterization of the solution– essentially the same “brute force” approach described by Mark– that allows concise and reasonably efficient implementation.

Consider an anagram of two words $(w_1, w_2)$ with $n$ letters each, where the necessary rearrangement of letters of $w_1$ to produce $w_2$ is specified by a permutation

$\pi:\{1,2,\ldots,n\} \rightarrow \{1,2,\ldots,n\}$

where the $i$-th letter in $w_2$ is the $\pi(i)$-th letter in $w_1$.  This permutation of individual letters corresponds to a permutation of blocks of consecutive letters, where the number of such blocks– the MCSP score– is

$s(\pi) = n - \left|\{i : \pi(i+1) = \pi(i) + 1\}\right|$

Computing an MCSP is hard because this permutation transforming $w_1$ into $w_2$ is not necessarily unique; we need the permutation that minimizes $s(\pi)$.  The key observation is that each candidate permutation may be decomposed into $\pi = \pi_2 \pi_1^{-1}$, where $\pi_j$ transforms any canonical (e.g., sorted) ordering of letters into $w_j$.  So we can fix, say, $\pi_2$, and the enumeration of possible $\pi_1$ is easy to express, since we are using the sorted list of letters as our starting point.
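The same enumeration can be sketched in Python (a brute force of my own over the per-letter matchings; anagram_score is an invented name):

```python
from itertools import permutations, product

def anagram_score(w1, w2):
    # MCSP score: minimize s(pi) = n - #{i : pi(i+1) == pi(i) + 1}
    # over all permutations pi matching each letter of w2 to an
    # occurrence of the same letter in w1.
    assert sorted(w1) == sorted(w2)
    n = len(w1)
    where = {}
    for idx, ch in enumerate(w1):
        where.setdefault(ch, []).append(idx)
    letters = sorted(where)
    best = n
    for assignment in product(*(permutations(where[ch]) for ch in letters)):
        # For each letter, a fixed order in which its w1-occurrences are
        # consumed by successive occurrences of that letter in w2.
        queues = {ch: list(p) for ch, p in zip(letters, assignment)}
        pi = [queues[ch].pop(0) for ch in w2]
        best = min(best, n - sum(pi[i + 1] == pi[i] + 1
                                 for i in range(n - 1)))
    return best
```

This is exponential in the number of repeated letters, but fine for ordinary dictionary words; the attentiveness example above scores 3, as claimed.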

The following Mathematica function implements this approach:

anagramScore[w1_String, w2_String] :=
Module[
{s1 = Characters[w1], s2 = Characters[w2], p1, p2, i},
p1 = Ordering@Ordering[s1];
p2 = Ordering@Ordering[s2];
Length[s1] - Max@Outer[
Count[
Differences[Ordering[ReplacePart[p1, {##} // Flatten]][[p2]]],
1] &,
Sequence @@ Map[
(i = Position[s1, #] // Flatten;
Thread[i -> #] & /@ Permutations[p1[[i]]]) &,
Union[s1]
], 1]]


Using this, we find, as Mark does, that an anagram with maximum MCSP score of 14 is (cinematographer, megachiropteran)… along with the almost-as-interesting (involuntariness, nonuniversalist), but also other fun ones farther down the list, such as (enthusiastic, unchastities) with a score of 9.

Scoring Anagrams Using MCSP and Frequency

From Mark’s post:

Clearly my chunk score is not the end of the story, because “notaries / senorita” should score better than “abets / baste” (which is boring) or “Acephali / Phacelia” (whatever those are), also 5-pointers. The length of the words should be worth something, and the familiarity of the words should be worth even more [my emphasis].

The problem is that an MCSP score alone is a pretty coarse metric, since it’s an integer bounded by the length of the words in the dictionary.  So the second idea was to refine the ordering of the list of anagrams as Mark suggests, with a lexicographic sort first by MCSP score, then by (average) frequency of occurrence in language, as estimated using the Google Books Ngrams data set (methodology described in more detail here).  The expectation was that this would make browsing a long list easier, with more “recognizable” anagrams appearing together near the beginning of each MCSP grouping.

However, because I wanted to try to reproduce Mark’s results, I also needed a larger dictionary that contained, for example, megachiropteran (which, by the way, is a bat that can have a wing span of over 5 feet).  I used the American English version of the Spell Checker Oriented Word List (SCOWL), combined with the Scrabble and ENABLE2k word lists used in similar previous experiments– which, interestingly, alone contain many anagrams not found in the earlier list.  (The SCOWL was really only needed to “reach” megachiropteran; with the exception of it and nonuniversalist, all of the other examples in this post are valid Scrabble words!)  The resulting word lists and corresponding counts of occurrences in the Ngrams data set are available here.

The resulting list of anagrams is in the text file linked below, sorted by MCSP score, then by the average frequency of the pair of words in each anagram.  Interesting examples high on the list are (personality, antileprosy) with a score of 11, (industries, disuniters) with a score of 10, etc.

The full list of 82,776 anagrams sorted by MCSP and frequency

The individual word frequencies are included in the list, to allow investigation of other sorting methods.  For example, it might be useful to normalize MCSP score by word length.  Or instead of using the average frequency of the two words in an anagram, the ratio of frequencies would more heavily weight anagrams between a really common word and a relatively unknown one, such as (penalties, antisleep)– I have never heard of the latter, but both are Scrabble-playable.

References:

1. Dominus, M., I found the best anagram in English, The Universe of Discourse, 21 February 2017 [HTML]
2. Goldstein, A., Kolman, P., Zheng, J., Minimum Common String Partition Problem: Hardness and Approximations, Electronic Journal of Combinatorics, 12 (2005), #R50 [PDF]