(Following is an attempt to capture some of my thoughts following an interesting student discussion. I think I mostly failed during that discussion, but it was fun.)

How do we know when two sets $A$ and $B$ have the same “size”? The layman answer is, “When they both have the same number of elements,” or even better, “When we can ‘pair up’ elements from $A$ and $B$, with each element a part of exactly one pair.”

The mathematician formalizes this idea as a bijection, that is, a function $f : A \to B$ that is both:

- Injective, or “one-to-one”: if $f(a_1) = f(a_2)$, then $a_1 = a_2$. Intuitively, distinct elements of $A$ map to distinct elements of $B$.
- Surjective, or “onto”: for every element $b \in B$, there exists some $a \in A$ such that $f(a) = b$. Intuitively, we “cover” all of $B$.
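
For finite sets, both properties can be checked mechanically. Below is a minimal sketch (my own illustration, not part of the original discussion), representing a function as a Python dict from its domain to its codomain:

```python
# Brute-force checks of injectivity and surjectivity for a function on
# finite sets, encoded as a dict mapping each domain element to its image.

def is_injective(f):
    # No two domain elements share an image: the values are all distinct.
    return len(set(f.values())) == len(f)

def is_surjective(f, codomain):
    # Every element of the codomain is hit by some domain element.
    return set(f.values()) == set(codomain)

def is_bijective(f, codomain):
    return is_injective(f) and is_surjective(f, codomain)

f = {1: 'a', 2: 'b', 3: 'c'}
assert is_bijective(f, {'a', 'b', 'c'})            # a perfect pairing
assert not is_injective({1: 'a', 2: 'a'})          # 1 and 2 collide
assert not is_surjective(f, {'a', 'b', 'c', 'd'})  # 'd' is never hit
```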

Bijections are handy tools in combinatorics, since they provide a way to count elements in $A$, which might be difficult to do directly, by instead counting elements in $B$, which might be easier.
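
As a toy illustration of this counting technique (my own example): the subsets of $\{0, \dots, n-1\}$ pair up with binary strings of length $n$, with bit $i$ recording whether $i$ belongs to the subset, so there are exactly $2^n$ subsets. A sketch of the two directions of the bijection:

```python
# Counting via a bijection: subsets of range(n) <-> binary strings of
# length n, where bit i says whether i is in the subset.

def subset_to_bits(subset, n):
    # Forward map: a subset of range(n) to its indicator string.
    return ''.join('1' if i in subset else '0' for i in range(n))

def bits_to_subset(bits):
    # Inverse map, showing the pairing really is one-to-one and onto.
    return {i for i, b in enumerate(bits) if b == '1'}

assert subset_to_bits({0, 2}, 3) == '101'
assert bits_to_subset('101') == {0, 2}
```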

So far, so good. Now suppose that a student asks: *why* are injectivity and surjectivity *the* critical properties of functions that “pair up” elements of two sets?

This question struck a chord with me, because I remember struggling with it myself while learning introductory algebra. The *ideas* behind these two properties seemed vaguely “dual” to me, but the structure of the actual *definitions* looked very different. (I remember thinking that, if anything, the properties that most closely “paired” with one another were *injective* and *well-defined* (i.e., if $a_1 = a_2$, then $f(a_1) = f(a_2)$; compare this with the above definition of an injection). Granted, well-definedness might seem obvious most of the time, but sometimes an object can have many names.)
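
To make the “many names” point concrete (a hypothetical illustration of my own): represent the rational $p/q$ as the pair $(p, q)$, so that $(1, 2)$ and $(2, 4)$ name the same number. A rule defined on names need not respect this:

```python
# One rational number, many names: (1, 2) and (2, 4) both name 1/2.

def same_rational(r, s):
    # (p, q) and (p2, q2) name the same rational iff p*q2 == p2*q.
    return r[0] * s[1] == s[0] * r[1]

def f(r):
    # Candidate "function": add numerator and denominator.
    # NOT well-defined: it gives different answers for names of the same number.
    return r[0] + r[1]

def g(r):
    # A genuinely well-defined function: the value of the fraction itself.
    return r[0] / r[1]

assert same_rational((1, 2), (2, 4))
assert f((1, 2)) != f((2, 4))  # 3 vs. 6: f depends on the name
assert g((1, 2)) == g((2, 4))  # both give 0.5: g depends only on the number
```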

One way to answer such a question is to provide alternative definitions for these properties, equivalent to the originals, but in which that elegant duality is more explicit:

- A function $f : A \to B$ is injective if and only if, for all sets $C$ and all functions $g, h : C \to A$, $f \circ g = f \circ h$ implies $g = h$.
- A function $f : A \to B$ is surjective if and only if, for all sets $C$ and all functions $g, h : B \to C$, $g \circ f = h \circ f$ implies $g = h$.

It’s a great exercise to show that these new definitions are indeed equivalent to the earlier ones. But if that’s the case, and if these are more pleasing to look at, then why don’t we *always* use/teach them instead?
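
For small finite sets, at least, one can check the equivalence by brute force rather than by proof. The sketch below (my own; functions are encoded as dicts) enumerates every function between small sets and confirms that injectivity coincides with left-cancellability and surjectivity with right-cancellability:

```python
# Brute-force sanity check (not a proof) that the cancellation-style
# definitions agree with the usual ones on small finite sets.
from itertools import product

def functions(dom, cod):
    # Enumerate all functions dom -> cod, each encoded as a dict.
    dom, cod = list(dom), list(cod)
    for outputs in product(cod, repeat=len(dom)):
        yield dict(zip(dom, outputs))

def is_injective(f):
    return len(set(f.values())) == len(f)

def is_surjective(f, cod):
    return set(f.values()) == set(cod)

def left_cancellable(f, dom, C):
    # For all g, h : C -> dom, f.g == f.h implies g == h.
    return all(g == h
               for g in functions(C, dom)
               for h in functions(C, dom)
               if all(f[g[c]] == f[h[c]] for c in C))

def right_cancellable(f, cod, C):
    # For all g, h : cod -> C, g.f == h.f implies g == h.
    return all(g == h
               for g in functions(cod, C)
               for h in functions(cod, C)
               if all(g[f[a]] == h[f[a]] for a in f))

A, B, C = [0, 1], ['x', 'y', 'z'], [0, 1]
for dom, cod in [(A, B), (B, A)]:
    for f in functions(dom, cod):
        assert is_injective(f) == left_cancellable(f, dom, C)
        assert is_surjective(f, cod) == right_cancellable(f, cod, C)
```

This settles nothing in general, of course, but it makes the exercise tangible before attempting the proof.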

I think there are at least two practical explanations for this. First, these definitions are *inconvenient*. That is, to use them to show that $f : A \to B$ is a bijection, we end up having to drag a *third* (and fourth) arbitrary hypothetical set $C$ (and $D$) into the mix, with corresponding arbitrary hypothetical functions to compose with $f$. With the usual definitions, we can focus all of our attention solely on the two sets under consideration.

Second, these new definitions can be *dangerous*. Often the function $f$ we are trying to show is a bijection preserves some useful structural relationship between the sets $A$ and $B$ (e.g., $f$ may be a homomorphism)… but to use the new definitions, we must consider *all possible* additional functions $g$ and $h$, not just the restricted class of functions that behave similarly to $f$. We have strayed into category theory, where monomorphisms and epimorphisms generalize the notions of injections and surjections, respectively. I say “generalize” because, although these definitions are equivalent for sets, things get more complicated in other categories (rings, for example), where a monomorphism may not be injective, or (more frequently) an epimorphism may not be surjective. (A classic example: the inclusion $\mathbb{Z} \hookrightarrow \mathbb{Q}$ is an epimorphism of rings, yet it is far from surjective.)