My confusion is still about why, in general (not just from the solved state), “(cq[mp] + mq) % 3” correctly updates cq. (Could you elaborate on the “key observation” mentioned in the latest comment?)

Is there a proof for why “(cq[mp] + mq) % 3” works?

Thanks again!

That early peak in the middle of the deck makes sense: the probability of moving the card in position 26 to the bottom in a single shuffle is C(51,25)/2^52, since the only constraints are that (a) there are exactly 26 zeros in the encoding for the shuffle, and (b) one of those zeros is in the last/bottom position, leaving C(51,25) ways to place the remaining 25 zeros among the other 51 positions.
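A quick numerical check of that count, assuming the standard GSR model in which each of the 2^52 binary interleaving sequences is equally likely (the model is my assumption here, not stated explicitly in the comment):

```python
from math import comb

# A shuffle of a 52-card deck is encoded as a 52-bit string. Constraints
# from the comment: exactly 26 zeros, with one of those zeros in the last
# (bottom) position, so the remaining 25 zeros fall among the other 51
# positions.
favorable = comb(51, 25)
total = 2 ** 52  # GSR model assumption: all bit sequences equally likely

p = favorable / total
print(p)  # roughly 0.055
```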

I imagine doing that will show broad distributions at the ends, and narrower distributions in the middle. But maybe I am wrong.

Now consider a +X rotation, for example, when applied to that initial solved state. Recompute the encoded orientation of each of the cubies, and verify that they are unchanged (i.e., they are still all zero). Similarly, a -X rotation also leaves the encoding of the orientations unchanged.

However, for each of the other four quarter turn moves (always starting from the initial solved state), some orientation codes *will* change, to the (appropriately permuted) values described in the penultimate paragraph of the 12 December comment.

Now… instead of starting from the solved state, scramble the cube, and record the (codes for the) orientations of the cubies; call this vector q. Then, for each of the six possible moves, apply the move, inspect the cube, and write down the updated vector of encoded orientations; call this q2. The key observation is that the vector *difference* (q2 - q[p]) % 3 (call this mq) for a given move is always the same, independent of the starting state of the cube to which you applied the move. That is, no matter what state the cube is in (solved or scrambled), we can compute the new vector of encoded orientations resulting from a move as q2 = (q[p] + mq) % 3.
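As a sketch of why this composes correctly (a hypothetical toy, not the actual cube code: it uses random permutations and twist vectors rather than the real moves), represent a move as a pair (p, mq), where p permutes the cubie slots and mq is the per-slot orientation increment mod 3. Then applying one move to any state q, and collapsing two moves into one, both reduce to the same update rule:

```python
import random

def apply_move(q, p, mq):
    # New orientation in slot i: orientation of the cubie moved into slot i
    # (q[p[i]]), plus that move's twist increment for slot i, mod 3.
    return [(q[p[i]] + mq[i]) % 3 for i in range(len(q))]

def compose(p1, mq1, p2, mq2):
    # Move 1 followed by move 2, collapsed into a single (p, mq) pair:
    # q2[i] = q[p1[p2[i]]] + mq1[p2[i]] + mq2[i]  (mod 3).
    p = [p1[p2[i]] for i in range(len(p1))]
    mq = [(mq1[p2[i]] + mq2[i]) % 3 for i in range(len(p1))]
    return p, mq

n = 8  # e.g., the 8 corner cubies
random.seed(0)
q = [random.randrange(3) for _ in range(n)]  # arbitrary scrambled state
p1 = random.sample(range(n), n); mq1 = [random.randrange(3) for _ in range(n)]
p2 = random.sample(range(n), n); mq2 = [random.randrange(3) for _ in range(n)]

# Applying the two moves in sequence...
step = apply_move(apply_move(q, p1, mq1), p2, mq2)
# ...matches applying their composition in one shot, for *any* starting q:
p12, mq12 = compose(p1, mq1, p2, mq2)
assert step == apply_move(q, p12, mq12)
```

The point is that mq never depends on q: it is a property of the move alone, which is exactly why the same (q[p] + mq) % 3 update works from any starting state.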

You’re right that this is an equivalent representation; more strongly, the generating functions themselves are the same, and thus the effective multipliers are the same as well. It’s a fair question which form is clearer, arguably depending on how one interprets the underlying counting problem. For example, in the original form, consider counting ordered sequences of 2n Skittles, with the n Skittles in the first half of the sequence coming from the first pack, and the second half of the sequence from the (identical) second pack. Then the denominator clearly enumerates the sample space of size d^(2n), and the g.f. coefficient is the number of such sequences corresponding to identical packs, with each term in a sum in the g.f. corresponding to arranging k Skittles of a given color in each half/pack of the sequence.

Thanks for the explanation for 2, 3, 4.

My confusion for 1 is not about the implementation; conceptually, how does “(cq[mp] + mq) % 3” correctly update the orientation of each cubie?

I understand that cq[mp] permutes our record of the orientations to match the moved cubies, but why does the vector addition (+ mq, mod 3) work?

Thank you!

One small detail: Wouldn’t it be clearer (unless, of course, I missed some easier way to derive it) to write p(n,d) as

It took me a while to get why the function is correct; maybe it isn’t obvious to some others either.
