Time management in distributed simulation

Introduction

Much of my work involves discrete event simulation, where a simulation advances, in discrete jumps, from one “interesting” point in time (called an event) to the next.  Managing execution of such a simulation is pretty straightforward when everything happens in a single thread.  But what if multiple processes, each representing just one component of a larger simulation, want to play nicely together?  How do we control the advance of each process’s “logical time,” so that events are always handled in time order, with no one lagging behind or jumping too far ahead?  In particular– and the question that motivated this post– what is lookahead, and why does my simulation need to know about it?

My goal in this post is to describe how this works at an introductory level: the protocol for communicating events and time advances, and the constraints imposed by that protocol.  But just as writing a compiler or interpreter can improve understanding of programming languages, I think actually implementing this protocol can help to understand why those constraints exist.  And rather than stick to pseudo-code as in most of the relevant literature, I wanted to write something that actually runs.  The end result is roughly 100 lines of Python code, included below, which may be used either as a sandbox for single-step experimenting at the interpreter prompt, or to coordinate an actual distributed simulation:

"""Conservative time management for distributed simulation."""

import collections
import heapq
from functools import reduce

federations = collections.defaultdict(dict)
epsilon_time = 1e-9

class Federate:
    """Manage communication and time constraint/regulation for a federate."""

    def __init__(self):
        """Create an "orphan" federate not joined to any federation."""
        self.federation = None

    def join(self, federate_name, federation_name, lookahead):
        """Join federation as time-constrained/regulating federate."""
        assert(self.federation is None)
        assert(lookahead > epsilon_time)
        federation = federations[federation_name]
        assert(federate_name not in federation)
        self.name = federate_name
        self.federation_name = federation_name
        self.federation = federation
        time = reduce(max, [fed.time for fed in federation.values()], 0)
        self.time = max(time - lookahead + epsilon_time, 0)
        self.requested = self.time
        self.lookahead = lookahead
        self.events = []
        self.event_tag = 0
        federation[federate_name] = self
        self.grant(self.time)

    def resign(self):
        """Resign from federation."""
        self.federation.pop(self.name)
        if self.federation:
            self.push()
        else:
            federations.pop(self.federation_name)
        self.federation = None

    def send(self, event, time):
        """Send future event in timestamp order."""
        assert(time >= self.requested + self.lookahead)
        for fed in self.federation.values():
            if fed is not self:
                heapq.heappush(fed.events,
                               (time, self.name, self.event_tag, event))
        self.event_tag += 1

    def request(self, time):
        """Request advance to given future time."""
        assert(time > self.time)
        assert(self.requested == self.time)
        self.requested = time
        self.push()

    def receive(self, event, time):
        """Called when timestamp order event is received via send()."""
        print('{}.receive(event={}, time={})'.format(self.name, event, time))

    def grant(self, time):
        """Called when next event request() is granted."""
        print('{}.grant(time={})'.format(self.name, time))

    def next_grant_time(self):
        """Return the earliest time to which this federate could be granted."""
        time = self.requested
        if time > self.time and self.events:
            time = min(time, self.events[0][0])
        return time

    def advance(self):
        """Grant this federate's pending request if it is safe to do so."""
        # galt ("greatest available logical time") is a lower bound on the
        # time stamp of any event that another federate could still send.
        galt = reduce(min,
                      [fed.next_grant_time() + fed.lookahead
                       for fed in self.federation.values() if fed is not self],
                      float('inf'))
        if self.next_grant_time() < galt:
            # Deliver any queued events up to the grant time, which may be
            # earlier than the originally requested time.
            while self.events and self.events[0][0] <= self.requested:
                self.requested, source, tag, event = heapq.heappop(self.events)
                self.receive(event, self.requested)
            self.time = self.requested
            self.grant(self.time)

    def push(self):
        """Attempt to advance every federate with a pending request."""
        for fed in self.federation.values():
            if fed.requested > fed.time:
                fed.advance()

From one to many

Before jumping into distributed simulation, though, let’s start simple with just a single process, and consider what an event-based simulation framework might look like:

import heapq
import collections

class Simulation:
    def __init__(self):
        """Create an "empty" simulation."""
        self.time = float('-inf')
        self.events = []
        self.event_tag = 0
        self.listeners = collections.defaultdict(list)

    def publish(self, event, time):
        """Insert event into the local queue."""
        assert(time >= self.time)
        heapq.heappush(self.events,
                       (time, self.event_tag, event))
        self.event_tag += 1

    def subscribe(self, event_type, listener):
        """Register listener callback for events of the given type."""
        self.listeners[event_type].append(listener)

    def run(self):
        """Run simulation."""
        while self.events:
            self.time, tag, event = heapq.heappop(self.events)
            for listener in self.listeners[event.type]:
                listener(self, event, self.time)

The idea is to maintain a priority queue of future events, sorted by time.  As the simulation is run(), we “advance” time by iteratively popping the next time-stamped event from the queue, and notifying any listeners that subscribe() to events of that type, who may in turn publish() (i.e., insert) additional future events into the queue.
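For example, here is a minimal usage sketch; the Event type and the “ping” listener are hypothetical, assuming only that an event exposes the type attribute that run() inspects:

import collections

# A hypothetical event type; run() only assumes a .type attribute.
Event = collections.namedtuple('Event', ['type', 'data'])

def on_ping(sim, event, time):
    """Handle a 'ping' by scheduling another one a unit of time later."""
    print('ping at time {}'.format(time))
    if time < 3:
        sim.publish(Event('ping', None), time + 1)

sim = Simulation()
sim.subscribe('ping', on_ping)
sim.publish(Event('ping', None), 0)
sim.run()

Running this prints four pings, at times 0 through 3.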

There are several things to note here:

First, your discrete event simulation may not look like this at all.  It could be process-based, or a simple batch for-loop, or whatever.  That’s okay– this is just a convenient generic starting point, so that once we start interacting with other components in a distributed simulation, we can see how to modify the publish() and run() methods in a natural way.

Second, at this point the only real constraint is that we cannot publish events in the past (see the assertion in the publish() method).  That is, while handling an event that occurs at a given time t, a listener is free to publish new events that occur at any time greater than or even equal to t.  We will have to restrict this freedom somewhat when participating in a distributed simulation.

Finally, because multiple events can have the same time stamp, repeatability is a concern.  We want to ensure that “ties” are broken in a deterministic, repeatable way, independent of the particular implementation of the underlying priority queue.  To do this, note that elements of the queue are actually tuples of the form (time, tag, event), with the natural lexicographic ordering.  The tag is just a monotonically increasing counter that provides such a tie-breaker.  It will eventually be useful to add a source element to tuples like this, indicating the string name of the simulation generating the event… but only once we start receiving events from other external simulation components (with different names).

Federation of federates

In what follows, I will try to use terminology consistent with the High-Level Architecture (HLA), a rather ridiculously generic name conveying little information about what it actually is: an IEEE-standardized definition of services for managing a distributed simulation via a centralized run-time infrastructure (RTI).  But although this will be in HLA-speak, it’s worth noting that the ideas presented here are applicable to distributed simulation in general, not just to HLA in particular.

An HLA federation is a distributed simulation consisting of a collection of simulation components called federates, interacting via the RTI.  The RTI acts as both the mailman and timekeeper for the federation, coordinating the communication of events between federates and the granting of requests by each federate to advance their clocks forward in (simulated) logical time.

Simulation federates interacting via centralized RTI. (Public domain image created by Arichnad 2009, retrieved from http://en.wikipedia.org/wiki/File:RTI.svg)

Note that in the figure above, the federates never communicate directly with each other.  Everything goes through the RTI: we will assume that each federate communicates only with the RTI via its own reliable, in-order channel (e.g., a TCP socket).  This simplifies the use of the code provided here, since we can leave out the multi-threaded synchronization details of the actual inter-process communication, and instead focus on the single “main” thread in which the RTI responds to the already-serialized sequence of messages received from various federates.  (However, it is an exercise for the reader–or another post– to show that this centralization is not strictly necessary.)

Properties and states of a federate

To manage a federation of n federates from the RTI, let’s maintain the following properties of each federate:

  • t_i is the current logical time of federate i.  Note that at any given point in wall clock time, different federates may in general have different logical times.
  • r_i \geq t_i is the requested time to which federate i has requested advance.  A federate may be in one of two states: if r_i > t_i, then federate i is in the Time Advancing state.  If r_i = t_i, then federate i is in the Time Granted state.
  • \Delta_i > 0 is the lookahead for federate i; more on this shortly.
  • Q_i is the priority queue of future time-stamped events, received by the RTI from other federates, to be delivered to federate i.

The message protocol

A federate may send four types of messages to the RTI: it can (1) join a federation, (2) exit or “resign” from a federation, (3) request a time advance, or (4) send an event to other federates:

  • join(federate, federation, lookahead) sent by a simulation component indicates a request to join a named federation (or create it if it does not yet exist) as a federate with the given name and positive lookahead.  The name of the federate will be used to deterministically order identically time-stamped events as mentioned above.  The name of the federation may be used to allow a single RTI process to service multiple federations (with distinct names) simultaneously.  A joining federate is effectively in the Time Advancing state, and must wait until it receives an initial grant() message (see below) to initialize its logical time.
  • resign() sent by a federate indicates an exit from the currently joined federation.
  • request(t) sent by federate i indicates a request to advance to future logical time t > t_i.  Federate i must currently be in the Time Granted state (i.e., r_i = t_i), and upon sending this message the federate transitions to the Time Advancing state (i.e., r_i is updated to the given future time t).  Note that the federate’s current logical time t_i remains unchanged; the federate must wait until it receives a grant() message in response from the RTI (see below).
  • send(e, t) sent by federate i indicates that event e should be sent to all other federates in the federation with time stamp t \geq r_i + \Delta_i.  (The event message is not immediately delivered, however, but is instead inserted into the RTI’s central queues Q_j for all other federates j.)  A federate is free to send these event messages in either the Time Granted or Time Advancing states; however, note that all such “externally published” events must have strictly future time stamps, as specified by the positive lookahead \Delta_i.

In the other direction, the RTI may send just two types of messages back to a federate… and will only do so when a federate is in the Time Advancing state:

  • receive(e, t) sent to federate i indicates that external event e should be effectively inserted into the federate’s local event queue with time stamp t, where t_i < t \leq r_i.  The federate may receive zero or more additional receive() event messages, all with the same time stamp, followed by:
  • grant(t) sent to federate i indicates that the federate has transitioned from the Time Advancing to the Time Granted state, with its new logical time t_i– and requested time r_i– updated to the given time t \leq r_i.  Note that the granted time may be less than the originally requested time… but this will only be the case if one or more external events are received between the request() and the grant(), in which case the granted time will be equal to the time stamp on the received event(s).

The punch line is that, in return for federates adhering to this protocol, the RTI “promises” to maintain the following relatively simple invariant throughout execution of a federation:

A federate will only receive external events with strictly future time stamps.  That is, at all times, a federate has already encountered all possible external events with time stamps up to and including its current logical time.

The Python code above implements the RTI’s role in this protocol.  Call the Federate class methods join(), resign(), send(), and request() to “send” the corresponding messages to the RTI, and the receive() and grant() methods are callbacks indicating the corresponding response messages.
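For example, here is a short sandbox session at the interpreter prompt, with two federates named “A” and “B” in a federation named “demo” (these names, the lookahead of 1, and the event payload are all arbitrary choices for illustration); the comments show the output printed by the receive() and grant() callbacks:

a = Federate()
b = Federate()
a.join('A', 'demo', lookahead=1)   # A.grant(time=0)
b.join('B', 'demo', lookahead=1)   # B.grant(time=0)
a.send('hello', time=5)            # queued for B with time stamp 5
a.request(10)                      # no grant yet: B is at time 0, and
                                   #   could still send events at time 1
b.request(10)                      # B.receive(event=hello, time=5)
                                   #   B.grant(time=5)
b.request(10)                      # A.grant(time=10), B.grant(time=10)

Note how A’s request is not granted until the earliest time at which B could still send an event (B’s next possible grant time plus its lookahead) exceeds 10; this is exactly the constraint that lookahead imposes.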

Conclusion

Coming back now to the framework for a single simulation component, we can see how to modify the event loop to participate in a distributed simulation.  Before we process the next event in our local queue, we first “peek” at its time stamp, and request advance to that logical time.  While waiting for the time advance grant, we insert any externally received events into our local queue– possibly ahead of the event time to which we initially tried to advance.

def run(self):
    """Run simulation."""
    while self.events:

        # Peek at next event time stamp in local queue.
        next_time, tag, event = self.events[0]

        # Send request() message, wait for grant().
        self.request(next_time)
        self.time, tag, event = heapq.heappop(self.events)
        for listener in self.listeners[event.type]:
            listener(self, event, self.time)

def receive(self, event, time):
    """Insert external event into the local queue."""
    self.publish(event, time)

Note that this event loop is actually more “relaxed” than it could be.  That is, when we request() a time advance, we simply block until we get the grant(), storing any intervening receive() events into our local queue… without immediately processing any of them.  Thus, we only ever send() external events when we are in the Time Granted state, although the time management protocol supports sending when in the Time Advancing state as well.

I have obviously simplified things quite a bit here.  For example, in the actual HLA interface, there are complementary notions of time constraint (blocking for grant() callbacks) and time regulation (observing the lookahead restriction when sending events), that can be independently enabled or disabled.  A federate’s lookahead can be modified at run time.  And other time advance services are available beyond the one discussed here (typically referred to as the “Next Message Request” service), etc.  But hopefully this still serves as a useful starting point.

And I have not even mentioned what I actually find to be the most interesting aspect of this problem: a proof of correctness.  That is, how do we know that this implementation actually preserves the “Thou shalt receive only future events” invariant promised above?  In particular, why is strictly positive lookahead critical to that proof of correctness, and what goes wrong when we try to support zero lookahead?  And how can we “decentralize” the algorithm without changing the basic protocol from the perspective of the individual federates?  These are all interesting questions, maybe subject matter for a later post.

References:

  1. 1516.1-2010 – IEEE Standard for Modeling and Simulation (M&S) High Level Architecture (HLA)– Federate Interface Specification (Section 8). [HTML]
  2. Fujimoto, R. M., Parallel and Distributed Simulation Systems. New York: John Wiley and Sons, 2000 (Chapter 3).
  3. Lamport, L., Time, Clocks and the Ordering of Events in a Distributed System, Communications of the ACM, 21(7) July 1978, p. 558-565 [HTML]
  4. Misra, J., Distributed Discrete-Event Simulation, Computing Surveys, 18(1) March 1986, p. 39-65 [PDF]

 


Update: Chutes and Ladders is long, but not *that* long

It occurred to me, as I was failing to pay attention in a class this past week, that I neglected an important detail in my recent post analyzing the expected number of turns to complete the game Chutes and Ladders: namely, that one generally does not play the game alone.

Recall from the previous post that we can express the expected value x_i of the number of die rolls (or spins of the spinner) needed for a player to reach the final square 100, starting from square i, “recursively” as

x_i = 1 + \frac{1}{6} \sum_{j=1}^6 x_{f(i,j)}, i < 100

x_{100} = 0

where f(i,j) is the number of the square reached from square i by rolling j (thus encoding the configuration of chutes and ladders on the board).  Solving this system of 100 equations yields the value x_0 of about 39.5984 turns on average for our hypothetical player to finish the game.

However, it is a bit misleading to stop there, since the expected total number of turns in a game with multiple players does not simply scale directly as the number of players.  That is, for example, in a game with two players, we should not expect nearly 80 total rolls of the die, 40 for each player.  As simulation confirms, the game typically ends much more quickly than that, with an actual average of about 52.5188 total rolls, or only about 26 turns for each player.
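Here is a minimal Monte Carlo sketch of such a simulation, with the board configuration (the chutes dict) copied from the solution code in the original post:

import random

# Board configuration: square -> destination via chute or ladder,
# copied from the solution code in the original post.
chutes = {1: 38, 4: 14, 9: 31, 16: 6, 21: 42, 28: 84, 36: 44,
          48: 26, 49: 11, 51: 67, 56: 53, 62: 19, 64: 60, 71: 91,
          80: 100, 87: 24, 93: 73, 95: 75, 98: 78}

def play(num_players):
    """Play one game; return the total number of rolls by all players."""
    square = [0] * num_players
    rolls = 0
    while True:
        for player in range(num_players):
            rolls += 1
            move = square[player] + random.randint(1, 6)
            if move <= 100:
                square[player] = chutes.get(move, move)
            if square[player] == 100:
                return rolls

games = 100000
print(sum(play(2) for game in range(games)) / games)  # about 52.5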

(Why is this?  This is a good example of the common situation where we can gain insight by considering “extreme” aspects or versions of the problem.  In this case, first note that the shortest possible path from the start to the final square 100 is just seven moves.  It is very unlikely that any single player will actually take this short route, with a probability of only 73/46656, or about 1 in 640.  But now suppose that instead of just one or even two players, there are one million players.  It is now a near certainty that some one of those million players will happen to win the lottery and take that short route… and so we should expect that the average number of total turns should be “only” about 7 million, instead of 40 million as the earlier post might suggest.)

We can still compute this two-player expected value exactly, using the same approach as before… but it starts to get expensive, because instead of just 100 equations in 100 unknowns, we now need 100^2 or 10,000 equations, to keep track of the possible positions of both players:

x_{i,j} = 1 + \frac{1}{6} \sum_{k=1}^6 x_{j,f(i,k)}, i,j < 100

x_{i,100} = 0, i < 100

Solving yields the desired value x_{0,0} \approx 52.5188.
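A dense solve of a 10,000-by-10,000 system starts to get unwieldy, but each equation involves at most seven unknowns, so a sparse solver handles it easily.  The following is a sketch using SciPy’s sparse machinery (an added dependency beyond the NumPy used in the original post), indexing state (i, j) (the player about to move on square i, the opponent on square j) as i*100 + j:

import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

n, m = 100, 6
chutes = {1: 38, 4: 14, 9: 31, 16: 6, 21: 42, 28: 84, 36: 44,
          48: 26, 49: 11, 51: 67, 56: 53, 62: 19, 64: 60, 71: 91,
          80: 100, 87: 24, 93: 73, 95: 75, 98: 78}

A = lil_matrix((n * n, n * n))
b = np.ones(n * n)
for i in range(n):
    for j in range(n):
        A[i * n + j, i * n + j] = 1.0
        for k in range(1, m + 1):
            t = i if i + k > n else chutes.get(i + k, i + k)
            if t < n:
                # The mover did not win; the opponent moves next, so
                # the successor state is (j, t).
                A[i * n + j, j * n + t] -= 1.0 / m
x = spsolve(A.tocsr(), b)
print(x[0])  # x_{0,0}, about 52.5188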

 


Analysis of Chutes and Ladders

Introduction

A friend of mine has been playing the board game Chutes and Ladders recently with his son.  The game is pretty simple: starting off of the board (effectively on “square zero”), each player in turn rolls a die (actually, spins a spinner) and– if possible– moves forward the indicated number of squares in serpentine fashion on the board shown below:

Chutes and Ladders board layout. In some variants, the chute from square 48 to 26 starts at square 47 instead.

If a player lands on the tail of an arrow, he continues along the arrow to the indicated square, ending his turn there.  (“Chutes” are arrows leading backward, and “ladders” are arrows leading forward.)  The first player to land exactly on square 100 wins the game.

My friend emailed me with the question, “What is the expected number of turns before the game is over?”  I think this is a nice problem for students, since it yields to the usual two-pronged attack of (1) computer simulation and (2) “cocktail napkin” derivation of an exact solution.

This is certainly not a new problem.  See the references at the end of this post for just a few past analyses of the game as a Markov chain.  My goal here is to provide an intuitive derivation of the exact expected length of the game as a solution to a system of linear equations, while staying away from the more sophisticated machinery of Markov chains, fundamental matrices, etc., that younger students may not be familiar with.

Unbounded vs. infinite

Before getting to the exact solution, though, it is worth addressing a comment in the referenced DataGenetics blog post about Monte Carlo simulation of the game:

“Whilst the chances of a game running and running are essentially negligible (constantly landing on snakes [i.e., chutes] and going around in circles), there is a theoretical chance that the main game loop could be executing forever. This is bad, and will lock-up your code. Any smart developer implementing an algorithm like this would program a counter that is incremented on each dice roll. This counter would be checked, and if it exceeds a defined threshold, the loop should be exited.”

I strongly disagree with this.  The valid concern is that there is no fixed bound on how many turns a particular simulated game might take; that is, for any chosen positive integer r, there is a positive probability that the game will require at least r turns, due to repeatedly falling backward down chutes.  However, the probability is zero that the game will execute forever; we can be certain that the game will always eventually terminate.  There is a difference between unbounded and infinite execution time.  In fact, such explicit “safety clamping” of the allowed number of moves implicitly changes the answer, modifying the resulting probability distribution (and thus the expected value) of the number of moves, albeit by an admittedly small amount.

There are situations where games can take an infinitely long time to finish– Chutes and Ladders just isn’t one of them.  For example, in this multi-round card game, everything is fine as long as p > 1/2.  And even when p = 1/2, the game is still guaranteed to finish, although the expected number of rounds is infinite.  But when p < 1/2, there is a positive probability that the game never finishes, and indeed continues “forever.”

Recursive expected values

To see how to analyze Chutes and Ladders, first consider the following simpler problem: how many times should you expect to flip a fair coin until it first comes up heads?  Let x be this desired expected value.  Then considering the two possible outcomes of the first flip, either:

  • It comes up heads, in which case we are done after a single flip; or
  • It comes up tails… in which case we are right back where we started, and so should expect to need x additional flips (plus the one we just “spent”).

That is,

x = \frac{1}{2}(1) + \frac{1}{2}(1 + x)

Solving yields x=2.  More generally, if P(success) is p, the expected number of trials until the first success is 1/p.

Solution

We can apply this same idea to Chutes and Ladders.  Let x_i be the expected number of turns needed to finish the game (i.e., reach square 100), starting from square i, where 0 \leq i \leq 100 (so that x_0 is the value we really want).  Then

x_i = 1 + \frac{1}{6}\sum_{j=1}^6 x_{f(i,j)}, i < 100

x_{100} = 0

where f(i,j) is the number of the square reached from square i by rolling j– which is usually just equal to i+j, but this function also “encodes” the configuration of chutes and ladders on the board, as well as the requirement of landing exactly on square 100 to end the game.

So we have a system of 100 linear equations in 100 unknowns, which we can now talk the computer into solving for us (in Python):

import numpy as np

n, m = 100, 6
chutes = {1: 38, 4: 14, 9: 31, 16: 6, 21: 42, 28: 84, 36: 44,
          48: 26, 49: 11, 51: 67, 56: 53, 62: 19, 64: 60, 71: 91,
          80: 100, 87: 24, 93: 73, 95: 75, 98: 78}

A = np.eye(n)
b = np.ones(n)
for i in range(n):
    for j in range(1, m + 1):
        # Stay put on an overshoot; otherwise follow any chute or ladder.
        k = i if i + j > n else chutes.get(i + j, i + j)
        if k < n:
            A[i, k] -= 1.0 / m
x = np.linalg.solve(A, b)

Results

The following figure shows all of the resulting values x_i.  For example, the expected number of turns to complete the game from the start is x_0 = 39.5984.

Expected number of (additional) turns to finish game starting from the given square.

Note that we can easily analyze the “house rule” used in the DataGenetics post– allowing a player to win even when “overshooting” square 100– simply by changing the k = i (staying put on an overshoot) in the solution code above to k = n.  The result is a modest reduction in game length, to about 36.1931 turns.  (Edit: However, this is not the whole story.  See this subsequent post addressing the fact that the game involves multiple players.)

References:

  1. Althoen, S. C., King, L., and Schilling, K., How Long Is a Game of Snakes and Ladders?  The Mathematical Gazette, 77(478) March 1993, p. 71-76 [JSTOR]
  2. DataGenetics blog, Mathematical Analysis of Chutes and Ladders [HTML]
  3. Hochman, M., Chutes and Ladders [PDF]

Searching for chocolates

The following problem occurred to me, with Valentine’s Day approaching next week:

A couple of years ago, I bought a box of chocolates for Valentine’s Day.  The box contained n chocolates, indistinguishable from the outside, but each containing one of n different fillings (e.g., cherry, caramel, etc.).  On the cover of the box was a map, with labels indicating the type of each chocolate in the corresponding position.  Using the map, it was easy for me to find my favorite type of chocolate: I just found it on the map, picked it out of the box, and ate that one single chocolate.

Last year, I bought another box of chocolates.  It was the same box, containing the same n different types of chocolate… but the map was no longer printed on the box cover.  So to find my single favorite type of chocolate, I was forced to eat chocolates one at a time until I found the one I was looking for.  With no information about which chocolate was which, I expected to have to eat (n+1)/2 chocolates on average to find my favorite, by simply picking one after the other at random.

This year, I have bought another box of chocolates.  Again, it is the same box, with the same n types of chocolate, and this time the map is printed on the box cover… but as a puzzle, my wife secretly opened the box and rearranged all of the chocolates, so that all of the labels on the map are incorrect.  Now what is the optimal strategy for finding my favorite type of chocolate, and what is the corresponding (minimized) expected number of chocolates that I have to eat?


One-card draw poker

Introduction

Let’s play another simple card game: after a $1 ante, you (the dealer) shuffle a standard 52-card poker deck, and deal one card face down to each of us.  After looking at our respective cards, we each have the option either to “stay” and keep our card, or to “switch,” discarding and drawing a new card from the top of the deck.  The player with the highest-ranked card (ace low through king high) wins the pot, where a tie splits the pot.  What is the optimal strategy and expected value of playing this game?

(Note that although all of the games discussed here are playable– and are even more interesting and complex– with three or more players, I am focusing on just two players here.)

One-card Guts

There are two interesting variations on these basic rules.  First, suppose that we declare our choice of whether to stay or switch simultaneously, as in the game Guts: each player takes a chip or bottle cap under the table, either places it in a closed fist (to stay) or not (to switch), then holds the closed fist over the table.  All players simultaneously open their hands palm-down to indicate their choice.

This is the simpler version of the game.  With two players, it can be shown that the optimal strategy for both players is to stay if they are dealt a nine or higher.  That the game is fair (i.e., the expected value is zero) is clear from symmetry; no one player is “preferred” or distinguished in any way by the rules of the game.
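Strategies in this simultaneous version are easy to experiment with via simulation.  Here is a minimal sketch, using the “infinite deck” simplification discussed at the end of this post, where stay1 and stay2 are the minimum ranks (ace low as 1 through king high as 13) on which the two players stay:

import random

def guts(stay1, stay2, trials=1000000):
    """Estimate player 1's expected value of one-card Guts with an
    infinite deck, given each player's stay-or-switch threshold."""
    total = 0
    for trial in range(trials):
        card1, card2 = random.randint(1, 13), random.randint(1, 13)
        if card1 < stay1:
            card1 = random.randint(1, 13)  # player 1 switches
        if card2 < stay2:
            card2 = random.randint(1, 13)  # player 2 switches
        total += (card1 > card2) - (card1 < card2)
    return total / trials

print(guts(9, 9))  # approximately zero, consistent with a fair game
print(guts(9, 7))  # nonnegative: player 2's deviation cannot help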

One-card draw poker

Now suppose instead that we play more like a typical draw poker game: after the initial deal, each player in turn decides whether to discard and draw a new card.  Remember that you are the dealer, so you get to make your decision after I have made mine.  Can you exploit this advantage?

This is the version of the game that motivated this post.  I came up with this while trying to find examples of games for a student, games that were “small” enough to be tractable to analyze and experiment with via simulation, but still with interesting strategic complexity.  It’s worth noting that the name “One-Card Poker” is also used to refer to a similar “stud” form of the game, where the card play is simpler (no discarding), but the betting rounds are as in normal poker, resulting in a game that is much more complex to analyze.

(One final note: use of an actual 52-card deck complicates the analysis slightly, in a mostly non-interesting way.  I find it convenient to present the game using an “infinite deck,” where the probability distribution of card ranks remaining in the deck does not change as cards are dealt.)

 


Calories in, calories out revisited

“All models are wrong, but some are useful.” George E. P. Box

A couple of months ago, I wrote about my experience “counting calories,” particularly about the accuracy of a very simple model of daily weight changes as a recurrence relation, converting each day’s net calorie deficit (or excess) into a 3500 calorie-per-pound weight loss (or gain).  The resulting predictions agreed very closely with my actual weight loss… however, I raised some additional questions at the end of the post, and the post itself generated some interesting comments as well.  Since it’s New Year’s resolution time, I thought this would be an appropriate time to follow up on these.

The following figure shows my predicted weight loss (in blue) over a period of 136 days, compared with each day’s actual measured weight (in red).  The details of the predictive model are provided in the earlier post; briefly, the idea is simple: start with a “zero day” measured weight, then predict weight changes for all subsequent days using only daily calorie intake (from eating) and expenditure (based on weight and, in my case, running).
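For concreteness, here is a sketch of that recurrence.  The functional form follows the description in the earlier post: daily expenditure is modeled as a coefficient alpha times current body weight (the default value below is purely a placeholder assumption, and running expenditure is omitted for simplicity), and each day’s net calorie excess is converted to weight change at 3500 calories per pound:

def predict(weight, intakes, alpha=15.0):
    """Predict a daily sequence of weights (pounds) from a starting
    weight and a list of daily calorie intakes.  Expenditure is
    modeled as alpha calories per pound of body weight per day; the
    default alpha is only a placeholder, to be estimated from
    measurements as described in the earlier post."""
    weights = [weight]
    for intake in intakes:
        weight = weight + (intake - alpha * weight) / 3500.0
        weights.append(weight)
    return weights

# For example: starting at 160 pounds, 10 weeks of 1800 calories per day.
print(predict(160.0, [1800.0] * 70)[-1])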

Predicted and actual measured weight over 136 days.

For side-by-side comparison, the following figure shows the corresponding estimated daily calorie intake over the same time period.

Daily calorie intake over the same 136 days.

The first 75 days (the focus of the earlier post) show reasonably consistent behavior: relatively aggressive calorie deficits that yield almost 2.5 pounds lost each week.  But what if I were not so consistent?  How well does this simple model handle more radical changes in diet?

As you can see from both figures, days 76 through 79 get messy.  During that long weekend I had to travel to give a talk, and thus I didn’t have ready access to a scale, hence the missing weight measurements; and I also had less convenient control over the food that I ate.  There is a banquet buffet lunch in there, some restaurant food, etc., where my best-effort calorie estimates are obviously much less accurate.

But although I certainly expected to have gained weight after returning from my trip, I was surprised at how much I had gained, much more than the predictive model could possibly account for.  However, over the next several weeks, when I returned to a more well-behaved diet (more on this later), my weight seemed to “calm down,” returning to reasonably close agreement with the simple “calories in, calories out” model.  It’s not clear to me what causes wild swings like this.  For example, there are also a couple of 4 am wake-up calls for early flights, late nights, and the general stress of travel during those four days.  Perhaps those departures from my normally routine lifestyle might also contribute to the fluctuation in some way?

Also, note the planned incremental increases in calorie intake over the last couple of months, and the resulting slowdown in the rate of weight loss.  I didn’t stop losing weight, I just started losing less weight each week as I approached my goal.  This ability to eat more while still losing weight may be counter-intuitive, but the math makes sense: it’s really hard to lose weight… but it’s much easier to maintain weight once you’re where you want to be.  (On the other hand, it’s an exercise for the reader to verify using the model that it’s also dangerously easy to gain weight, and to do so much more quickly than you lost it.)

Finally, this subject generated quite a bit of discussion about the initial question of whether “it’s really as simple as calories in, calories out.”  In particular, several commenters insisted that no, it is not that simple, that the human body “is not a bomb calorimeter,” but a much more complex machine where weight is influenced by many other factors, including genetic variation, gut flora, etc.

I don’t disagree with this.  In fact, while we’re at it, let’s point out several other limitations as well: this model treats calorie expenditure as a linear function of total body weight, instead of lean body mass, which is arguably a better fit (but is not nearly as convenient to actually measure).  It also treats calorie expenditure from running as a function of weight and distance, but not speed.

Which brings me to the quotation at the top of this post.  No, this simple recurrence relation does not reflect the full complexity of the biological processes occurring in the human body that contribute to weight loss or gain… but so what?  Don’t use a complicated model when a simple one will do.  In this case, most of that additional complexity is “rolled up” into the single coefficient \alpha reflecting the individual’s “burn rate” based on gender, genetic variation, flora in the gut, etc.  Granted, that coefficient may be unknown ahead of time, but at worst it can be estimated using a procedure similar to that described in the original post.

 


Pick any two cards

Let’s play a game: thoroughly shuffle a standard pack of 52 playing cards.  Now name any two card ranks, e.g., seven and jack.  If a seven and a jack happen to appear next to each other in the deck, you pay me a dollar, otherwise I pay you a dollar.  Should you be willing to play this game?  How likely do you think it is that I will win?

This game is discussed in a Grey Matters blog post, which in turn refers to a “Scam School” video describing the game as a bar trick.  I think it’s interesting because it’s not intuitive– it doesn’t seem very likely at all that an arbitrary pre-selected pair of ranks, just 4 cards each, should appear adjacent in a random shuffle of the 52 total cards in the deck.

But it’s also interesting because it’s not as likely as the Scam School video seems to suggest.  It turns out to be slightly worse than a coin flip, where the probability that two given card ranks appear adjacent is 284622747/585307450, or about 0.486.  Not very useful as a bar bet, I guess.

So, to improve the odds a bit, consider the following slight variation: shuffle the deck, and name any single card rank that is not a face card (i.e., not a jack, queen, or king).  If a card of the named rank appears next to a face card somewhere in the deck, you pay me a dollar.  Now what is the probability that I win?  It turns out this is a much better bet, with a probability of about 0.886 of such an adjacency.

Finally, we don’t have to resort to simulation as in the Grey Matters post.  It’s a nice puzzle to compute the probability in general, where n=a+b+c is the total number of cards in the deck, of which a are of rank A and b are of rank B.  (For example, in the original problem, n=52 and a=b=4; in the second variant, a=4 and b=12.)  Then the probability of at least one A/B adjacency is

1-\frac{1}{{n \choose a,b,c}} \sum_{k=0}^{a-1} {a-1 \choose k} {n-2a+k \choose b} ({c-1 \choose a-k} + 2{c-1 \choose a-k-1} + {c-1 \choose a-k-2})

the derivation of which I think is a nice puzzle in itself.
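As a check, the formula is easy to evaluate exactly with a few lines of Python, using a guarded binomial coefficient so that out-of-range terms in the sum vanish:

from fractions import Fraction
from math import comb

def choose(n, k):
    """Binomial coefficient, zero for out-of-range arguments."""
    return comb(n, k) if 0 <= k <= n else 0

def p_adjacent(n, a, b):
    """Probability that some card of rank A (a of them) appears next
    to some card of rank B (b of them) in a shuffled n-card deck."""
    c = n - a - b
    non_adjacent = sum(choose(a - 1, k) * choose(n - 2 * a + k, b) *
                       (choose(c - 1, a - k) +
                        2 * choose(c - 1, a - k - 1) +
                        choose(c - 1, a - k - 2))
                       for k in range(a))
    return 1 - Fraction(non_adjacent, comb(n, a) * comb(n - a, b))

print(p_adjacent(52, 4, 4))          # 284622747/585307450, about 0.486
print(float(p_adjacent(52, 4, 12)))  # about 0.886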
