## Analysis of the game of HIPE

### Introduction

Can you think of a word in which the letters HIPE appear consecutively?  What about a word containing HQ?  This “game” is described in a not-so-relevant chapter of Peter Winkler’s wonderful book Mathematical Mind-Benders, where he provides many more examples of what he calls HIPEs: a challenge consisting of a short sequence of letters, to which a response must be a word containing that sequence consecutively, with no intervening letters.  For example, BV has many solutions, one of which is, well, oBVious.

It turns out that I am really bad at playing this game.  My wife, on the other hand, is pretty good at it.  As it happens, I also lag behind her quite a bit when we work together on crossword puzzles, acrostics, anagrams, etc.; that is, in many situations where it is helpful to consider words as made up of letters.  After browsing more examples of HIPEs in Winkler’s book, I wondered what makes a given HIPE easy or difficult.  I will describe my attempt at “automating” the generation of difficult HIPEs… but mostly I want to share what I found to be a fascinating anecdote from Winkler’s discussion of the game.

### Word Lists

My idea was pretty simple: difficult HIPEs are most likely those sequences of letters that (1) occur very rarely in the dictionary, but (2) occur sufficiently often in natural language to be recognizable.  To compute these metrics, I used:

1. A word list consisting of the union of (a) the ENABLE2k word list (updated from the previous ENABLE1), and (b) the third and latest 2014 edition of the Scrabble Official Tournament and Club Word List.
2. The Google Books Ngrams data set (English Version 20120701) to map each word in my dictionary to a corresponding frequency of occurrence (details of methodology described in this previous post).

As usual, you can download the aggregated frequency data and component word lists here.
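
To make metric (1) concrete, here is a minimal Python sketch of the word-list side of the computation (the function name and toy word list are mine, not the actual ENABLE/Scrabble processing): it counts, for each two-letter sequence, the number of words containing it.

```python
from collections import Counter

def digraph_counts(words):
    """Count the number of words containing each 2-letter sequence."""
    counts = Counter()
    for word in words:
        # Collect each distinct digraph in the word, then tally it once.
        for digraph in {word[i:i + 2] for i in range(len(word) - 1)}:
            counts[digraph] += 1
    return counts

# Toy example; a real run would read the full word list from disk.
counts = digraph_counts(["obvious", "subvert", "hyphen"])
print(counts["bv"])  # 2: both "obvious" and "subvert" answer the HIPE "BV"
```

The same loop with three-letter slices `word[i:i + 3]` covers the 3-letter HIPEs discussed below.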

### 2- and 3-letter HIPEs

Let’s focus on two-letter HIPEs first.  The following figure shows all possible 2-letter HIPEs, arranged by frequency of occurrence in the Google Books dataset on the x-axis, and frequency of occurrence in the word list on the y-axis, with the example HIPEs from Winkler’s chapter shown in red.  Note that both axes are on a logarithmic scale to better “spread out” the infrequent HIPEs that we are interested in.

All digraphs with corresponding x=frequency of occurrence in the Google Books dataset and y=frequency of occurrence in the ENABLE+Scrabble word list. Winkler’s example HIPEs are shown in red.

As expected, the digraphs near the top of the figure aren’t very interesting, while Winkler’s examples are clustered near the bottom of the figure… although I don’t see much horizontal organization.  To get a better view, let’s zoom in on that lower-left corner of the figure, while still containing all of Winkler’s example HIPEs in red:

A zoomed-in view of the 2-letter HIPEs no more common than Winkler’s examples.

Unfortunately, closer inspection of this figure is a little disappointing: there are certainly “interesting” additional HIPEs in there (DK and GP, for example), but no clear separation between them and other unacceptably weird ones like QI, MK, etc.

We can do the same thing for 3-letter HIPEs, but things get messy quickly; there are just too many possible HIPEs with valid solutions, even if we again “zoom in” on just the bounding box of Winkler’s examples:

Similarly zoomed-in view of rare/difficult 3-letter HIPEs.

There are quite a few interesting HIPEs even in that very bottom row in the figure.  Following are some of my favorites, which appear at most twice in the entire word list: BSM, CEV, CTW, CYI, FCO, IKY, KGA, LFC, UCY, UIU, WDF, XEU, XII.

### Conclusion

Finally, back to Winkler’s discussion of what makes HIPEs easy or difficult.  This is where things get really interesting.  He points out that, for most people, it is the “kinetic sense” of producing a word with our mouths that dominates our sense of “knowing” the word.  Not its definition, not what it looks like on paper or how it sounds, but the association with the physical act of expressing the word.  If this is really true, then he suggests that perhaps deaf people, “especially those who customarily communicate in a sign language,” might play HIPE better than others:

> Resolved to test this hypothesis, I introduced HIPE to a group of hearing-impaired employees at a government agency, who sat together and conversed in ASL daily at lunch.  They found the game completely trivial; as fast as I wrote HIPEs on napkins, they wrote solutions around them.  To them it was a mystery why anyone would think HIPE was any kind of challenge.

This certainly sounds like a significant effect, enough so that I wonder whether a more rigorous study has been done.

References:

• Winkler, P., Mathematical Mind-Benders. Wellesley: A K Peters, Ltd., 2007 [PDF]

## Secret Santa solution with burritos

### Introduction

You and a group of co-workers all ordered burritos for lunch, and it’s your job to go pick them up.  When you arrive at the restaurant, you are given a cardboard box with 22 foil-wrapped burritos: 1 veggie, 8 chicken, 6 steak, and 7 carnitas.

However, when you return to work and open the box, none of the burritos are labeled!  And because they are, well, burritos, you can’t determine what’s inside each one without opening it up or taking a bite.  So you just pass out burritos randomly and hope for the best.

How well should you expect this to work?  That is, how many co-workers will get what they ordered, in distribution and on average?  What is the probability of the worst-case scenario where no one gets what they ordered?

This is exactly the same “Dinner-Diner Matching Problem” described by Margolius in [1], just dressed up a little differently.  It is also exactly the same problem as the Secret Santa drawing with families described in the previous post, where the “bad” outcome of a person drawing the name of someone in his own immediate family (including himself) corresponds to the “good” outcome here of a co-worker getting one of the burritos that she ordered.  The solution approach is the same in either case, but since the holidays have come and gone, let’s talk about burritos instead.

The objective here is not just to describe the mathematics involved– that’s been done– but to provide Python code to perform the actual calculations.

### Permutations with restricted positions

An assignment of exactly one burrito to each of the $n=22$ co-workers may be represented as a permutation, which we can visualize as a selection of $n$ squares to place non-attacking rooks on an $n \times n$ chess board.  Each row of the board represents a co-worker, and each column represents a specific burrito; a rook represents the corresponding co-worker receiving the corresponding burrito.  “Non-attacking” means nobody gets more than one burrito, and no two people have to share a single burrito.

In addition to the non-attacking requirement, we may impose further restrictions on which permutations are allowed: the figure below shows such a board, with the black squares indicating “restricted” positions corresponding to co-workers receiving their desired variety of burrito.  Note that the co-workers and burritos (rows and columns, respectively) have been conveniently ordered so that the restricted positions are square “sub-boards” along the diagonal.  We are interested in counting permutations that avoid these restricted positions.

Board representing “restricted” positions (shown in black) for the burrito problem.

(It seems backward to refer to someone getting their desired type of burrito as a restricted position.  This would have made more sense in the context of the Secret Santa drawing, where we really did want to avoid drawings within immediate families.  We could just as well “reverse” the board, changing white to black and black to white, and with some modified bookkeeping arrive at the same answer… but it will be useful to keep things as they are, as we will see shortly.)

### Rook polynomials

In how many of the $n!$ possible ways to hand out burritos does no one get what they ordered?  Using inclusion-exclusion, this number of permutations avoiding all restricted positions is

$\sum\limits_{k=0}^n (-1)^k r_k (n-k)!$

where $r_k$ is the number of ways to place $k$ non-attacking rooks on the board $B$ of restricted positions; i.e., on the black squares in the above figure.

So how can we compute the $r_k$?  In general this is a hard problem, but in this special case we get a lot of help by representing the sequence of $r_k$ as a generating function known as the rook polynomial of the board $B$:

$R_B(x) = \sum\limits_{k=0}^\infty r_k x^k$

There are two reasons why this is useful here.  First, if the board $B$ happens to be a disjoint union of sub-boards that have no rows or columns in common, then the rook polynomial of $B$ is the product of the rook polynomials of the sub-boards.  In this case, the sub-boards are the 1×1, 8×8, 6×6, and 7×7 “complete” boards of all-black squares along the diagonal.

Second, the rook polynomial for just such a complete $m \times m$ board is easy to compute:

$R_{m \times m}(x) = \sum\limits_{k=0}^m {m \choose k}^2 k! x^k$

(Intuitively, we are choosing $k$ of $m$ rows and $k$ of $m$ columns, then placing non-attacking rooks on the resulting $k \times k$ sub-board.)

### Solution

Putting it all together, given $n = m_1 + m_2 + ... + m_j$ co-workers ordering burritos (or family members in a Secret Santa drawing), made up of sub-groups of $m_i$ people ordering the same type of burrito (or in the same immediate family), the rook polynomial for the corresponding block-diagonal board is

$R_B(x) = \prod\limits_{i=1}^j R_{m_i \times m_i}(x)$

And the coefficients $r_k$ of the resulting polynomial may be used in the inclusion-exclusion formula above to compute the number of permutations where no one gets what they ordered (or no one draws the name of someone in his immediate family).

The following Python code implements these formulas:

import functools
from math import factorial

import numpy as np
from numpy.polynomial import polynomial as P

def binomial(n, k):
    """Binomial coefficient n "choose" k."""
    if 0 <= k <= n:
        return factorial(n) // factorial(k) // factorial(n - k)
    else:
        return 0

def rook_poly(m):
    """Rook polynomial for complete mxm board."""
    return np.array([binomial(m, k) ** 2 * factorial(k)
                     for k in range(m + 1)], dtype=object)

def secret_santa_burrito(groups):
    """Hit polynomial for permutations with block diagonal restrictions."""
    r = functools.reduce(P.polymul, [rook_poly(m) for m in groups])
    n = sum(groups)
    t_1 = np.array([-1, 1], dtype=object)
    return functools.reduce(P.polyadd, [P.polypow(t_1, k) * r[k] *
                                        factorial(n - k)
                                        for k in range(n + 1)])

print(secret_santa_burrito([1, 8, 6, 7]))


If you look closely, the last function does a bit more work than described so far.  It’s an exercise for the reader to show that we can actually compute the entire distribution of the number of permutations according to the number $k$ of “hits” on restricted positions (i.e., how many people get what they ordered), as coefficients of the hit polynomial given by

$N_B(t) = \sum\limits_{k=0}^n (t-1)^k r_k (n-k)!$

where the inclusion-exclusion formula above is simply $N_B(0)$.

Finally, note that derangements are a special case where $m_1=m_2=...=m_n=1$; that is, the board consists of single black squares along the diagonal.  As is well-known, the probability of a random permutation being a derangement is approximately $1/e$.  This generalizes nicely; Penrice in [2] describes a bounding argument that, with $n$ families each of size $k$, the probability of a successful Secret Santa drawing tends to $e^{-k}$ as $n$ grows large.
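
As a sanity check on this limit, the inclusion-exclusion machinery above can be run for equal-sized families in pure Python (a standalone sketch, independent of the NumPy code above; the function names are mine):

```python
from math import comb, exp, factorial

def rook_poly(m):
    # Rook polynomial coefficients for the complete m-by-m board.
    return [comb(m, k) ** 2 * factorial(k) for k in range(m + 1)]

def polymul(a, b):
    # Multiply two polynomials given as coefficient lists.
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def success_probability(groups):
    """Probability that a uniform random permutation avoids every
    diagonal block, by inclusion-exclusion over rook placements."""
    r = [1]
    for m in groups:
        r = polymul(r, rook_poly(m))
    n = sum(groups)
    good = sum((-1) ** k * r[k] * factorial(n - k) for k in range(n + 1))
    return good / factorial(n)

print(success_probability([1] * 10))   # derangements: ~0.367879, close to 1/e
print(success_probability([2] * 20))   # approaches exp(-2) ~ 0.1353
```

For ten singleton families this recovers the derangement probability, already within about $10^{-7}$ of $1/e$; for twenty families of two, the result is close to $e^{-2}$, consistent with Penrice’s limit.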

References:

1. Margolius, B., The Dinner-Diner Matching Problem, Mathematics Magazine, 76(2) April 2003, p. 107-118 [JSTOR]
2. Penrice, S., Derangements, Permanents, and Christmas Presents, American Mathematical Monthly, 98(7) August 1991, p. 617-620 [JSTOR]

## Secret Santa puzzle

This problem was inspired by a recent /r/dailyprogrammer exercise on Reddit.  It’s admittedly a little late for a holiday-themed puzzle, but then so was the exercise:

You and your large extended family participate in a Secret Santa gift exchange, where everyone is randomly– and secretly– assigned another person for whom to buy a gift.  To determine the gift assignments, everyone writes their name on a slip of paper and puts the slip into a hat.  Then each person in turn draws a slip from the hat, buying a gift for the person whose name is on the slip.

One obvious requirement is that no one should draw their own name.  However, there is an additional restriction: no one should be assigned to buy a gift for anyone in their immediate family: husbands cannot buy for their wives, parents cannot buy for their children, etc.  Roughly, you cannot buy for anyone that came in the same car with you.

Instead of drawing slips from a hat, the /r/dailyprogrammer exercise involved writing a program to generate the gift assignments for a group of 30 people with immediate family sizes of {1, 1, 2, 1, 2, 4, 3, 1, 1, 2, 1, 2, 1, 1, 3, 2, 2}.  I saw several different approaches in the submitted solutions: some programs were actually deterministic, forgetting that the assignments should be random.  Other programs constructed an assignment iteratively or recursively, essentially modeling the sequential draw-from-a-hat method… but sometimes getting “stuck” near the end in situations discussed here before, where all remaining unassigned persons are in the same family.  (And even when these programs don’t get stuck, not all of the resulting assignments are equally likely.)

Finally, what I thought were the cleanest solutions used rejection sampling to eliminate these problems: keep generating random unrestricted permutations until finding one that satisfies all of the intra-family buying restrictions.

Problem: Your family does the same thing: if at any point in the drawing process, anyone draws their own name or the name of anyone in their immediate family, everyone puts their slips back into the hat, and the process is re-started from the beginning.  On average, how long will this take?  That is, what is the expected number of times the drawing process must be started before it successfully completes?
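
For intuition (not a full solution), note that although a bad draw triggers a restart immediately, an attempt succeeds exactly when the full random assignment it would have produced is valid, so each start succeeds independently with the same probability.  A quick Monte Carlo sketch (the names and trial count here are mine) estimates the expected number of starts for the 30-person group above:

```python
import random

def expected_restarts(family_sizes, trials=20000, seed=1):
    """Monte Carlo estimate of the expected number of times the
    drawing must be started before it completes successfully."""
    rng = random.Random(seed)
    # Label each person by their family index (a person's own slip
    # is in their own family, so self-draws are covered too).
    family = [i for i, size in enumerate(family_sizes) for _ in range(size)]
    n = len(family)
    total_attempts = 0
    for _ in range(trials):
        attempts = 0
        while True:
            attempts += 1
            slips = list(range(n))
            rng.shuffle(slips)
            # The drawing succeeds iff no one draws a slip from their family.
            if all(family[person] != family[slip]
                   for person, slip in enumerate(slips)):
                break
        total_attempts += attempts
    return total_attempts / trials

sizes = [1, 1, 2, 1, 2, 4, 3, 1, 1, 2, 1, 2, 1, 1, 3, 2, 2]
print(expected_restarts(sizes))
```

The hit-polynomial machinery from the previous post gives the exact success probability, and hence the exact expected number of starts, for comparison.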


## Proofreading as “mark and recapture”

Last week I saw two different articles (at Futility Closet and DataGenetics) about exactly the same topic: suppose that two reviewers each proofread the same document.  The first reviewer finds $A=10$ errors, and the second finds $B=12$ errors, of which $C=8$ are in common, i.e., 8 of those 12 errors had already been found by the first reviewer.  Can we estimate how many additional errors remain in the document that were missed by both reviewers?

Both articles essentially reproduce the argument given by Pólya (see reference below) that a reasonable estimate for the total number of errors (both found and missed) is given by the following simple formula:

$\hat{N} = \frac{A B}{C} = 15$

This is the Lincoln-Petersen “mark-and-recapture” estimator, similar to that used, for example, to estimate the number of fish in a lake.  Intuitively, the first reviewer identifies and “marks” $A/N$ of the errors in the document (where $N$ is unknown), which should approximately equal the fraction $C/B$ of errors found by the second reviewer that were already marked.

However, neither article points out just how inaccurate this method of estimation can be, nor the fact that better alternatives are available.  For example, continuing with the example above as originally presented in the DataGenetics article, let us assume for the moment that

1. There really are a total of 15 errors in the document being reviewed.
2. The first reviewer really does find each error independently with probability 10/15=2/3.
3. The second reviewer really does find each error independently with probability 12/15=4/5.

Note that this example is arguably somewhat contrived to be “nice,” since the actual number $C=8$ of errors observed in common by both reviewers happens to equal the expected number of such errors.  This need not be the case; with this model of reviewer accuracy, the number of common errors may be as large as $N=15$… or as small as zero, in which case our estimator breaks down entirely.

Even if we condition against this unlikely difficulty, essentially asking the reviewers to both start over if they don’t find any errors in common (and to forget the errors they may have already found), there is still significant variance in the possible estimates that may result, as shown in the following figure.

Distribution of estimate of number of errors (N=15, p1=2/3, p2=4/5).

(This is not a sample from a simulation; we can calculate this distribution exactly.)  The mean of the estimate, shown in red, is approximately 15.17, which is pretty good.  However, we can do better– only slightly better in this already-fortunate case, but a lot better in other cases– using a slightly different estimator due to Chapman:

$\hat{N} = \frac{(A+1)(B+1)}{C+1} - 1$

This estimate has several advantages over the Lincoln-Petersen method.  It has less bias and less variance, particularly in situations like this where the “population” of errors is relatively small.  Also, it still works even when $C=0$, i.e., when no errors are found in common by both reviewers.

Having said that, it’s not clear how useful either method really is in this particular context, given how widely the resulting estimate may vary from the true value.  These estimation methods work much better when at least one of the two sample sizes $A$ and $B$ is pre-determined (e.g., first catch exactly 100 fish, mark them, then catch 100 fish again), and only $C$ varies randomly.
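
Both estimators, and the exact conditional mean quoted above, are easy to compute directly.  The following sketch (function names mine) assumes the model described above: $N=15$ errors, detection probabilities $2/3$ and $4/5$, conditioned on at least one common error ($C \geq 1$).

```python
from math import comb

def lincoln_petersen(A, B, C):
    return A * B / C

def chapman(A, B, C):
    return (A + 1) * (B + 1) / (C + 1) - 1

print(lincoln_petersen(10, 12, 8))  # 15.0
print(chapman(10, 12, 8))           # ~14.89

def estimate_mean(N=15, p1=2/3, p2=4/5):
    """Exact conditional mean of the Lincoln-Petersen estimate."""
    # Each error falls independently into one of four categories:
    probs = [p1 * p2,              # found by both (counts toward A, B, C)
             p1 * (1 - p2),        # found only by reviewer 1 (A)
             (1 - p1) * p2,        # found only by reviewer 2 (B)
             (1 - p1) * (1 - p2)]  # missed by both
    total_prob = 0.0
    total_mean = 0.0
    for c in range(1, N + 1):
        for a1 in range(N - c + 1):
            for b1 in range(N - c - a1 + 1):
                d = N - c - a1 - b1
                # Multinomial probability of this outcome.
                p = (comb(N, c) * comb(N - c, a1) * comb(N - c - a1, b1) *
                     probs[0] ** c * probs[1] ** a1 *
                     probs[2] ** b1 * probs[3] ** d)
                total_prob += p
                total_mean += p * lincoln_petersen(c + a1, c + b1, c)
    return total_mean / total_prob

print(estimate_mean())
```

For these inputs the conditional mean should come out near the 15.17 reported above, while the Chapman estimate for the observed $A=10$, $B=12$, $C=8$ is about 14.89.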

Reference:

• Pólya, G., Probabilities in Proofreading, American Mathematical Monthly, 83(1) January 1976, p. 42 [JSTOR]

## College football bowl confidence pools

### Introduction

For most of the last decade, I have followed an annual NCAA football bowl pool.  An entry into the pool consists of picking a winner of each of $n$ bowl games, and assigning each pick a unique “confidence” point value from 1 to $n$.  For each game picked correctly, you win the assigned number of confidence points; the entry with the most points wins, uh, bragging rights.

Certainly there is some opportunity for strategy in pools like this.  For example, knowledge of point spreads helps: one simple strategy is to pick the favored team in every game, with confidence points assigned in order of point spread.  In fact, this strategy maximizes the expected (i.e., average) number of points won among all possible entries.

However, expectation isn’t everything: the objective here is not to maximize average number of points, it’s to maximize the probability of winning, i.e., scoring more points than all other submitted entries.  The objective of this post is to describe some of the mathematics involved in trying to solve this problem.  The techniques described here have won a lot of bragging rights, but there are still a lot of unanswered questions.  (And unfortunately, this basic pool structure may soon be obsolete, with the introduction of the 4-team playoff last year.)

First, let’s assume that we actually know the “true” probabilities of outcomes of every bowl game in a season.  We don’t, really… but we do know the point spread for every game.  For example, the following figure shows the spreads for the bowl games over the past 9 years, shown in ascending order of spread for each season (not chronologically):

Point spreads for college football bowl games 2006-2014. The 38 games in 2014 exclude the final championship game from the 4-team playoff.

Intuitively, we expect the probability for a “pick’em” (i.e., spread near zero) to be about 1/2 either way, and the probability of a two-touchdown favorite winning to be much closer to 1.  There is an interesting analysis by Hal Stern (see reference at the end of this post) suggesting that a reasonable model of the actual margin of victory in a football game is a normal distribution with mean equal to the point spread, and standard deviation of slightly less than 14 points.  We can use this model to convert the above point spreads into corresponding probabilities of winning each game, as shown in the following figure:

Estimated probability of favored team winning each bowl game 2006-2014.

(Aside: Although this model is certainly reasonable– using it has won pools in the past– it would be interesting to see some further analysis to validate or refine it.  Stern’s original analysis is nearly 30 years old, and focused on NFL games, not college football.)
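
Under Stern’s model, the favorite’s win probability is just the normal CDF evaluated at the point spread.  A minimal sketch, taking the standard deviation to be exactly 14 points for simplicity (the model suggests slightly less):

```python
from math import erf, sqrt

def win_probability(spread, sigma=14.0):
    """P(favorite wins) when margin of victory ~ Normal(spread, sigma)."""
    # P(margin > 0) = Phi(spread / sigma), via the error function.
    return 0.5 * (1.0 + erf(spread / (sigma * sqrt(2.0))))

print(win_probability(0))    # 0.5: a pick'em is a coin flip
print(win_probability(14))   # ~0.84: a two-touchdown favorite
print(win_probability(3))
```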

### Comparing two (or more) pool entries

Armed with these probabilities, what do we do with them?  As a first step, let’s suppose, temporarily, that there are just two entries in the entire pool.  What is the probability of one entry scoring more points than the other?  At first glance, even this simplified problem looks difficult: for $n$ bowl games, there are $2^n$ possible outcomes to evaluate.  But it turns out we can “compare” two entries reasonably efficiently, using probability generating functions.

Let’s specify a pool entry as an ordered pair of $n$-vectors $(\mathbf{q}, \mathbf{w})$, where $q_i \in \{0, 1\}$ indicates whether the underdog (0) or favorite (1) is picked to win game $i$, and $1 \leq w_i \leq n$ is the corresponding confidence point assignment.  Then, given a second entry $(\mathbf{r}, \mathbf{v})$, consider the following probability generating function:

$g(x) = \prod\limits_{i=1}^n (p_i x^{q_i w_i-r_i v_i}+(1-p_i)x^{(1-q_i)w_i-(1-r_i)v_i})$

where $p_i$ is the assumed probability that the favored team wins game $i$.  Each term in the product corresponds to a game, and each summand corresponds to a possible outcome of that game.  Expanding this product as a polynomial in the formal variable $x$ yields only $O(n^2)$ non-zero coefficients, indicating the probability distribution of the overall difference in scores between the two entries.

(We are cheating a little by allowing negative exponents, but that’s not a problem here, since we are just using the generating function machinery for bookkeeping, and not actually evaluating $g(x)$ for specific values of $x$.)

This provides a means of quickly computing the probability that one entry scores more points than another: just sum the coefficients of the positive powers of $x$ in the corresponding $g(x)$.  For example, suppose that $(\mathbf{r}, \mathbf{v})$ is the entry described above that maximizes expected score.  If we consider this to be a “representative” entry in a larger pool, then we can optimize our own entry by searching for one that maximizes the probability of beating $(\mathbf{r}, \mathbf{v})$.
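
Here is a sketch of this comparison in Python (function and variable names are mine), representing $g(x)$ as a dictionary mapping each exponent to its coefficient:

```python
from collections import defaultdict

def win_probability_vs(p, q, w, r, v):
    """P(entry (q, w) outscores entry (r, v)), by expanding the
    generating function g(x) one game at a time."""
    g = {0: 1.0}
    for pi, qi, wi, ri, vi in zip(p, q, w, r, v):
        h = defaultdict(float)
        # Score difference if the favorite wins game i:
        d_fav = qi * wi - ri * vi
        # Score difference if the underdog wins game i:
        d_dog = (1 - qi) * wi - (1 - ri) * vi
        for e, prob in g.items():
            h[e + d_fav] += pi * prob
            h[e + d_dog] += (1 - pi) * prob
        g = dict(h)
    # Sum the coefficients of the positive powers of x.
    return sum(prob for e, prob in g.items() if e > 0)

# One game, favorite wins with probability 0.7; we pick the favorite,
# the opponent picks the underdog, both with confidence 1.
print(win_probability_vs([0.7], [1], [1], [0], [1]))  # 0.7
```

The coefficients of the zero and negative powers give the probabilities of a tie and of losing, respectively, so the full distribution of the score difference comes along for free.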

### Better representatives

This strategy is certainly not perfect.  The larger the number of entries in the actual pool, the more likely it is that someone submits an entry that wins the pool, despite deviating significantly from the “representative” entry.  Fortunately, we can attempt to account for the size of the pool by modifying the representative, expected score-maximizing entry slightly:

Suppose that instead of a single entry that picks the favorite for every game, there are $2^k$ entries that “buy” the $k$ toughest games with the smallest point spreads.  There is one entry for each possible combination of outcomes of those $k$ games… where each entry assigns the $k$ largest confidence values to those toughest games.  For example, buying the 3 lowest-spread games requires 8 distinct entries, covering all possible outcomes of those games, and guaranteeing that exactly one of those entries will score all $(n-2)+(n-1)+n$ points assigned to them.

The reason for choosing this form for the “family” of more powerful representative entries is that we can “compress” all $2^k$ entries into a single $(\mathbf{r}, \mathbf{v})$ pair, where the picks for the bought games have the form $r_i = 1/2$.  Using this convention, no efficiency is lost in evaluating candidate entries against this entire family of entries, by replacing the appearance of each $r_i$ by the unit step function $H[r_i]$:

$g(x) = \prod\limits_{i=1}^n (p_i x^{q_i w_i-H[r_i] v_i}+(1-p_i)x^{(1-q_i)w_i-H[1-r_i]v_i})$

The following figure shows how well this could be expected to work, as a function of the number of low-spread games bought in the representative entries.  The y-axis indicates the maximum probability of beating all corresponding representative entries, using an entry found by a simple steepest-ascent search within the space of all $n!2^n$ possible entries.

Estimated probability of winning pool against a “maximum expected points” entry with some number of “bought” games.

There are plenty of open questions here.  For example, I don’t see a nice way to efficiently compute the probability of an entry winning against more than one other entry (or against more than just the one family of similar entries as described above).  And even if there was such an efficient algorithm for optimizing against a large pool of entries, what is a reasonable way to model a large pool of entries?

Reference:

• Stern, H., On the Probability of Winning a Football Game, The American Statistician, 45(3) August 1991, p. 179-183 [PDF]

## Coin flip puzzle

There was a lot of unwarranted buzz this past week about the New England Patriots winning 19 of their past 25 coin tosses prior to each game, leading to semi-serious speculation of more foul play.  This episode is a great source of many interesting probability problems, none of which is the question asked by almost every related news article: “What is the probability that the Patriots would win at least 19 of their last 25 coin tosses purely by chance?”  (The answer is about 1/137, but the better answer is that this is the wrong question.)
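
The 1-in-137 figure is just a binomial tail probability, which is easy to check:

```python
from math import comb

# P(at least 19 heads in 25 flips of a fair coin)
p = sum(comb(25, k) for k in range(19, 26)) / 2 ** 25
print(p)      # ~0.0073, i.e., about 1 in 137
print(1 / p)  # ~136.7
```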

I think one slightly better and more interesting question is: how often does this happen by chance?  That is, suppose that we flip a fair coin repeatedly, and after each flip we observe whether at least 19 of the most recent 25 flips came up heads (a success) or fewer than 19 were heads (a failure).  Over time, for what fraction of flips should we expect success?

Another related question: what is the expected number of games until the first such success?


## Arbitrary-precision integer arithmetic in C++

This was mostly just for fun, motivated by questions about a student programming problem.  There are plenty of arbitrary- and extended-precision arithmetic libraries out there, and there are plenty of great references describing the algorithms involved.  But there can be a lot of added complexity in the translation from pseudo-code to actual lines of working code.  My goal here was to provide the latter, while maintaining the readability of the former.  I’m not sure I succeeded.  But it was still fun.

The end result is about 500 lines of code in a single C++ header file, defining an unsigned (i.e., non-negative) integer type with overloading of all sensible arithmetic, bitwise, and relational operators (basically everything except bitwise one’s complement), and implicit conversions to/from a decimal string representation.  The code follows the conventions and variable names of Knuth’s TAOCP Volume 2 very closely throughout, although the division algorithm is slightly simplified.

Following is the source code, which is free, unencumbered, released into the public domain; do what you want with it.

#ifndef MATH_UNSIGNED_H
#define MATH_UNSIGNED_H

#include <cstdint>
#include <vector>
#include <iostream>
#include <stdexcept>
#include <algorithm>
#include <sstream>
#include <cctype>
using std::size_t;

namespace math
{
class Unsigned
{
public:
typedef std::uint32_t Digit;
typedef std::uint64_t Wigit;
static const unsigned BITS = 32;

Unsigned(Digit u = 0) :
digits(1, u)
{
// empty
}

Unsigned(const std::string& s) :
digits(1, 0)
{
std::istringstream iss(s);
iss >> *this;
if (iss.fail() || !iss.eof())
{
throw std::runtime_error("Error: Unsigned::string");
}
}

Unsigned(const Unsigned& copy) :
digits(copy.digits)
{
// empty
}

Unsigned& operator= (const Unsigned& rhs)
{
digits = rhs.digits;
return *this;
}

Unsigned operator++ (int)
{
Unsigned w(*this);
++(*this);
return w;
}

Unsigned& operator++ ()
{
for (size_t j = 0; j < digits.size() && ++digits[j] == 0; ++j);
if (digits.back() == 0)
{
digits.push_back(1);
}
return *this;
}

Unsigned operator-- (int)
{
Unsigned w(*this);
--(*this);
return w;
}

Unsigned& operator-- ()
{
if (digits.back() == 0)
{
throw std::underflow_error("Error: Unsigned::underflow");
}
for (size_t j = 0; j < digits.size() && digits[j]-- == 0; ++j);
trim();
return *this;
}

friend Unsigned operator+ (const Unsigned& u, const Unsigned& v)
{
Unsigned w(u);
w += v;
return w;
}

Unsigned& operator+= (const Unsigned& rhs)
{
const size_t n = rhs.digits.size();
if (digits.size() < n)
{
digits.resize(n, 0);
}
size_t j = 0;
Wigit k = 0;
for (; j < n; ++j)
{
k = k + digits[j] + rhs.digits[j];
digits[j] = static_cast<Digit>(k);
k >>= BITS;
}
for (; k != 0 && j < digits.size(); ++j)
{
k += digits[j];
digits[j] = static_cast<Digit>(k);
k >>= BITS;
}
if (k != 0)
{
digits.push_back(1);
}
return *this;
}

friend Unsigned operator- (const Unsigned& u, const Unsigned& v)
{
Unsigned w(u);
w -= v;
return w;
}

Unsigned& operator-= (const Unsigned& rhs)
{
if ((*this) < rhs)
{
throw std::underflow_error("Error: Unsigned::underflow");
}
size_t j = 0;
Wigit k = 0;
for (; j < rhs.digits.size(); ++j)
{
k = k + digits[j] - rhs.digits[j];
digits[j] = static_cast<Digit>(k);
k = ((k >> BITS) ? -1 : 0);
}
for (; k != 0 && j < digits.size(); ++j)
{
k += digits[j];
digits[j] = static_cast<Digit>(k);
k = ((k >> BITS) ? -1 : 0);
}
trim();
return *this;
}

friend Unsigned operator* (const Unsigned& u, const Unsigned& v)
{
const size_t m = u.digits.size();
const size_t n = v.digits.size();
Unsigned w;
w.digits.resize(m + n, 0);
for (size_t j = 0; j < n; ++j)
{
Wigit k = 0;
for (size_t i = 0; i < m; ++i)
{
k += static_cast<Wigit>(u.digits[i]) * v.digits[j] +
w.digits[i + j];
w.digits[i + j] = static_cast<Digit>(k);
k >>= BITS;
}
w.digits[j + m] = static_cast<Digit>(k);
}
w.trim();
return w;
}

Unsigned& operator*= (const Unsigned& rhs)
{
*this = (*this) * rhs;
return *this;
}

friend Unsigned operator/ (const Unsigned& u, const Unsigned& v)
{
Unsigned q, r;
u.divide(v, q, r);
return q;
}

Unsigned& operator/= (const Unsigned& rhs)
{
Unsigned r;
divide(rhs, *this, r);
return *this;
}

friend Unsigned operator% (const Unsigned& u, const Unsigned& v)
{
Unsigned q, r;
u.divide(v, q, r);
return r;
}

Unsigned& operator%= (const Unsigned& rhs)
{
Unsigned q;
divide(rhs, q, *this);
return *this;
}

    void divide(Unsigned v, Unsigned& q, Unsigned& r) const
    {
        // Reject division by zero.
        if (v.digits.back() == 0)
        {
            throw std::overflow_error("Error: Unsigned::overflow");
        }
        r.digits = digits;
        const size_t n = v.digits.size();

        // Handle the special case m < n (quotient is zero).
        if (digits.size() < n)
        {
            q.digits.assign(1, 0);
            return;
        }

        // Normalize divisor (v[n-1] >= BASE/2).
        unsigned d = BITS;
        for (Digit vn = v.digits.back(); vn != 0; vn >>= 1, --d);
        v <<= d;
        r <<= d;
        const Digit vn = v.digits.back();

        // Ensure first single-digit quotient (u[m-1] < v[n-1]).
        r.digits.push_back(0);
        const size_t m = r.digits.size();
        q.digits.resize(m - n);
        Unsigned w;
        w.digits.resize(n + 1);
        const Wigit MAX_DIGIT = (static_cast<Wigit>(1) << BITS) - 1;
        for (size_t j = m - n; j-- != 0;)
        {
            // Estimate quotient digit.
            Wigit qhat = std::min(MAX_DIGIT,
                (static_cast<Wigit>(r.digits[j + n]) << BITS |
                 r.digits[j + n - 1]) / vn);

            // Compute partial product (w = qhat * v).
            Wigit k = 0;
            for (size_t i = 0; i < n; ++i)
            {
                k += qhat * v.digits[i];
                w.digits[i] = static_cast<Digit>(k);
                k >>= BITS;
            }
            w.digits[n] = static_cast<Digit>(k);

            // Check if qhat is too large (u - w < 0).
            bool is_trial = true;
            while (is_trial)
            {
                size_t i = n;
                for (; i != 0 && r.digits[j + i] == w.digits[i]; --i);
                if ((is_trial = (r.digits[j + i] < w.digits[i])))
                {
                    // Adjust partial product (w -= v).
                    --qhat;
                    k = 0;
                    for (size_t i = 0; i < n; ++i)
                    {
                        k = k + w.digits[i] - v.digits[i];
                        w.digits[i] = static_cast<Digit>(k);
                        k = ((k >> BITS) ? -1 : 0);
                    }
                    w.digits[n] = static_cast<Digit>(k + w.digits[n]);
                }
            }
            q.digits[j] = static_cast<Digit>(qhat);

            // Compute partial remainder (u -= w).
            k = 0;
            for (size_t i = 0; i < n; ++i)
            {
                k = k + r.digits[j + i] - w.digits[i];
                r.digits[j + i] = static_cast<Digit>(k);
                k = ((k >> BITS) ? -1 : 0);
            }
        }

        // Trim quotient and denormalize remainder.
        q.trim();
        r.digits.resize(n);
        r >>= d;
    }

    friend Unsigned operator<< (const Unsigned& u, size_t v)
    {
        Unsigned w(u);
        w <<= v;
        return w;
    }

    Unsigned& operator<<= (size_t rhs)
    {
        if (digits.back() != 0 && rhs != 0)
        {
            const size_t n = rhs / BITS;
            digits.insert(digits.begin(), n, 0);
            rhs -= n * BITS;
            Wigit k = 0;
            for (size_t j = n; j < digits.size(); ++j)
            {
                k |= static_cast<Wigit>(digits[j]) << rhs;
                digits[j] = static_cast<Digit>(k);
                k >>= BITS;
            }
            if (k != 0)
            {
                digits.push_back(static_cast<Digit>(k));
            }
        }
        return *this;
    }

    friend Unsigned operator>> (const Unsigned& u, size_t v)
    {
        Unsigned w(u);
        w >>= v;
        return w;
    }

    Unsigned& operator>>= (size_t rhs)
    {
        const size_t n = rhs / BITS;
        if (n >= digits.size())
        {
            digits.assign(1, 0);
        }
        else
        {
            digits.erase(digits.begin(), digits.begin() + n);
            rhs -= n * BITS;
            Wigit k = 0;
            for (size_t j = digits.size(); j-- != 0;)
            {
                k = k << BITS | digits[j];
                digits[j] = static_cast<Digit>(k >> rhs);
                k = static_cast<Digit>(k);
            }
            trim();
        }
        return *this;
    }

    friend Unsigned operator& (const Unsigned& u, const Unsigned& v)
    {
        Unsigned w(u);
        w &= v;
        return w;
    }

    Unsigned& operator&= (const Unsigned& rhs)
    {
        const size_t n = rhs.digits.size();
        if (digits.size() > n)
        {
            digits.resize(n);
        }
        for (size_t j = 0; j < digits.size(); ++j)
        {
            digits[j] &= rhs.digits[j];
        }
        trim();
        return *this;
    }

    friend Unsigned operator^ (const Unsigned& u, const Unsigned& v)
    {
        Unsigned w(u);
        w ^= v;
        return w;
    }

    Unsigned& operator^= (const Unsigned& rhs)
    {
        const size_t n = rhs.digits.size();
        if (digits.size() < n)
        {
            digits.resize(n, 0);
        }
        for (size_t j = 0; j < n; ++j)
        {
            digits[j] ^= rhs.digits[j];
        }
        trim();
        return *this;
    }

    friend Unsigned operator| (const Unsigned& u, const Unsigned& v)
    {
        Unsigned w(u);
        w |= v;
        return w;
    }

    Unsigned& operator|= (const Unsigned& rhs)
    {
        const size_t n = rhs.digits.size();
        if (digits.size() < n)
        {
            digits.resize(n, 0);
        }
        for (size_t j = 0; j < n; ++j)
        {
            digits[j] |= rhs.digits[j];
        }
        return *this;
    }

    friend bool operator< (const Unsigned& u, const Unsigned& v)
    {
        const size_t m = u.digits.size();
        size_t n = v.digits.size();
        if (m != n)
        {
            return (m < n);
        }
        for (--n; n != 0 && u.digits[n] == v.digits[n]; --n);
        return (u.digits[n] < v.digits[n]);
    }

    friend bool operator> (const Unsigned& u, const Unsigned& v)
    {
        return (v < u);
    }

    friend bool operator<= (const Unsigned& u, const Unsigned& v)
    {
        return !(v < u);
    }

    friend bool operator>= (const Unsigned& u, const Unsigned& v)
    {
        return !(u < v);
    }

    friend bool operator== (const Unsigned& u, const Unsigned& v)
    {
        return (u.digits == v.digits);
    }

    friend bool operator!= (const Unsigned& u, const Unsigned& v)
    {
        return (u.digits != v.digits);
    }

    // Return the low-order digit (the value modulo 2^BITS).
    Digit to_uint() const
    {
        return digits[0];
    }

    std::string to_string() const
    {
        std::ostringstream oss;
        Unsigned q(*this), r;
        do
        {
            q.divide(10, q, r);
            oss << r.digits[0];
        } while (q.digits.back() != 0);
        std::string s(oss.str());
        std::reverse(s.begin(), s.end());
        return s;
    }

    friend std::ostream& operator<< (std::ostream& os, const Unsigned& u)
    {
        os << u.to_string();
        return os;
    }

    friend std::istream& operator>> (std::istream& is, Unsigned& u)
    {
        char digit = '\0';
        is >> digit;
        // Cast to unsigned char to avoid undefined behavior in std::isdigit
        // when plain char is signed.
        if (is.good() && std::isdigit(static_cast<unsigned char>(digit)))
        {
            u = digit - '0';
            while (std::isdigit(is.peek()))
            {
                is >> digit;
                u = 10 * u + (digit - '0');
            }
        }
        else
        {
            is.setstate(std::ios_base::failbit);
        }
        return is;
    }

private:
    std::vector<Digit> digits;

    void trim()
    {
        while (digits.back() == 0 && digits.size() > 1)
        {
            digits.pop_back();
        }
    }
};
} // namespace math

#endif // MATH_UNSIGNED_H

