The subject of this post is a puzzle, a variant of the so-called secretary problem: suppose that you need to hire one of n candidates for a job. You must interview one candidate at a time in random order, and after each interview you must decide whether to accept the candidate and offer him or her the position, or to reject the candidate and move on to the next interview. You cannot return to a previously rejected candidate.
Suppose that the candidates’ interview results are not quantifiable, but they are comparable, so that at any point in the process you can rank in strict preference order the candidates you have interviewed so far. What should be your strategy for selecting the “best” candidate?
In the classical statement of the problem, the objective is to maximize the probability of selecting the overall best candidate. The well-known, elegant result is that the optimal strategy has a very simple form and a nearly fixed success rate, no matter the total number of candidates: interview the first n/e candidates (about 36.8% of them), rejecting them outright, then accept the first subsequent candidate that is the best so far. The resulting probability of selecting the best overall candidate is approximately 1/e, or about 0.368.
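The 1/e rule is easy to check empirically. The following is a minimal Monte Carlo sketch (the function name and trial count are my own choices, not from the original problem statement); it skips roughly the first n/e candidates, then takes the first subsequent candidate better than everyone seen so far, falling back to the last candidate if no one qualifies.

```python
import random

def classic_rule_success_rate(n, trials=50_000):
    """Estimate P(select the best) under the classical 1/e stopping rule."""
    cutoff = round(n / 2.718281828459045)  # skip about n/e candidates
    wins = 0
    for _ in range(trials):
        ranks = random.sample(range(n), n)   # arrival order; 0 = best overall
        best_seen = min(ranks[:cutoff]) if cutoff else n
        pick = ranks[-1]                     # forced to take the last one
        for r in ranks[cutoff:]:
            if r < best_seen:                # first candidate best-so-far
                pick = r
                break
        wins += (pick == 0)
    return wins / trials
```

For n = 100, the estimate should land near the theoretical 0.368.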
I remember first encountering this problem and thinking that the goal was unrealistic: for example, consider the extreme situation where you have interviewed the 99th of 100 candidates, and he or she is the second-best so far. The optimal strategy dictates rejecting that candidate, who is at worst third-best overall, and rolling the dice on the final candidate.
Problem: Instead, suppose that the objective is to minimize the expected overall rank of the selected candidate. That is, if R is the random variable indicating the final overall rank of the selected candidate (rank 1 being best), then instead of maximizing P(R = 1), we want to minimize E[R]. What is the optimal strategy and resulting expected rank for, say, one hundred candidates? One million candidates?
(Note that this seemingly natural variant of the problem is missing from the Wikipedia page, despite the inclusion of several other similar excursions.)
I think this is a great problem, particularly as a programming exercise. It can be solved with a relatively simple recursive implementation in fewer than a dozen lines of code. But with one million candidates, that simple implementation bogs down and requires some refactoring. The resulting more efficient solution still involves fewer than a dozen lines of code… and I think the answer is just as interesting as the result in the original problem.
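If you want to try the exercise before reading further, stop here. One way to set up the simple version is backward induction, using the standard fact that a candidate with relative rank s among the first t interviewed has expected overall rank s(n+1)/(t+1). The sketch below (my own formulation, not necessarily the implementation the post has in mind) computes the optimal expected rank by working backwards from the last candidate; the inner sum makes it O(n²) overall, which is exactly the sort of thing that bogs down at one million candidates.

```python
def expected_rank(n):
    """Optimal expected overall rank of the selected candidate, n candidates."""
    # At the last candidate we must accept; relative rank is uniform on
    # 1..n, so the expected overall rank is (n+1)/2.
    v = (n + 1) / 2
    for t in range(n - 1, 0, -1):
        # Candidate t's relative rank s is uniform on 1..t.  Accepting
        # yields expected overall rank s*(n+1)/(t+1); continuing yields v.
        v = sum(min(s * (n + 1) / (t + 1), v) for s in range(1, t + 1)) / t
    return v
```

Small cases are easy to verify by hand: with one candidate the answer is 1, and with two candidates you cannot do better than 3/2.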