Wanted: Theorem about Cocomplete Categories

I’m pretty sure that a certain theorem about cocomplete categories must be true, and I’m even pretty sure that I know how to write down a proof. (Famous last words, I know.) But I have the feeling that the result is already known, and I just haven’t seen it. I thought I would state the result here (in somewhat vague terms), and hopefully someone can point me to the result, if it already exists.

Every cocomplete category that is co-well-powered and has a set of generators can be constructed explicitly as follows. Each object X can be represented as:

  1. A family of sets X_i, indexed by a set. Each set represents a different sort, in the sense of multisorted algebras.
  2. A family of relations R_j, defined on the X_i. The relations can be of arbitrary arity and signature (so you can have relations on X_1 x X_2, etc.). Infinite arities are allowed. The number of relations of a fixed arity and signature is a set, but the family of all relations can be a proper class.
  3. A family of partially-defined operations. Each operation has as its domain all tuples that satisfy a certain relation.
  4. The relations are required to satisfy a collection of specified Horn clauses. The left-hand side of the Horn clauses can contain infinite conjunctions.

The arrows of this category are all families of functions X_i -> X’_i that preserve the R_j and the partial operations.

An easy example of this is the category of small categories. Here X_1 is the set of objects and X_2 is the set of arrows. The structure has four operations: the id operation that sends an object to its identity arrow, the dom operation that sends an arrow to its domain, the cod operation that sends an arrow to its codomain, and the partial operation of composition, which is defined for all f and g such that cod f = dom g. The Horn clauses it satisfies include the requirement that identity arrows are identities under composition, along with the associativity of composition. (This example is unusual in that the relations are equalities between operations; in general the relations can be arbitrary.)
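To make the data concrete, here is a minimal Python sketch of this two-sorted structure (the class and field names are mine, purely for illustration). The relation cod f = dom g carves out the domain of the partial composition, and the unit Horn clause can be checked directly:

    class SmallCategory:
        def __init__(self, objects, arrows, dom, cod, ident, comp):
            self.objects = objects  # sort X_1
            self.arrows = arrows    # sort X_2
            self.dom = dom          # dom : X_2 -> X_1
            self.cod = cod          # cod : X_2 -> X_1
            self.ident = ident      # id  : X_1 -> X_2
            self._comp = comp       # partial: keys are exactly the pairs with cod f = dom g

        def comp(self, f, g):
            # the domain of the partial operation is cut out by a relation
            assert self.cod[f] == self.dom[g], "composition undefined"
            return self._comp[(f, g)]

        def unit_law_holds(self):
            # Horn clause: comp(id(dom f), f) = f and comp(f, id(cod f)) = f
            return all(self.comp(self.ident[self.dom[f]], f) == f
                       and self.comp(f, self.ident[self.cod[f]]) == f
                       for f in self.arrows)

    # The one-object, one-arrow (terminal) category.
    C = SmallCategory(objects={'*'}, arrows={'1'},
                      dom={'1': '*'}, cod={'1': '*'},
                      ident={'*': '1'}, comp={('1', '1'): '1'})
    print(C.unit_law_holds())  # True

Dictionaries stand in for the operations, so partiality is just a dictionary whose keys are exactly the tuples satisfying the defining relation.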

Non-standard analysis in economics

I see, via Yet Another Sheep, that nonstandard analysis has spread to mathematical economics. Robert Anderson has a book manuscript available, Infinitesimal Methods in Mathematical Economics, which explains how to apply nonstandard analysis to approximate economies with large numbers of agents. The main technique is Loeb measures, which are something I plan to write a post about, once I actually know anything about them.

Brouwer Fixed Point Theorem

One idiosyncratic interest of mine is mathematical economics. I was looking through Volume 2 of the Handbook of Mathematical Economics when I spotted a paper by Scarf called “The Computation of Equilibrium Prices: An Exposition”. The real subject of the paper is an incredibly clear exposition of how to find fixed points of maps of the unit n-cube to itself. The Brouwer Fixed Point Theorem promises that at least one fixed point exists. I knew that there was a combinatorial approach based on Sperner’s lemma, but it had always struck me as rather technical. Not so; Scarf gives a straightforward algorithm for finding the fixed point. Sperner’s lemma is just the result that dictates that the algorithm terminates.

The proof is stated for an n-simplex, which is the n-dimensional analogue of a triangle. The algorithm works by cutting up the simplex into smaller simplices and identifying one of the smaller simplices that contains a fixed point. It then repeats, trapping the fixed point in smaller and smaller simplices that converge on it. What’s interesting is that the test for whether a particular simplex contains a fixed point is fantastically crude; it amounts to just checking a simple condition on the map at the vertices. (The condition is not satisfied by every simplex containing a fixed point, and in fact the algorithm will miss some fixed points. At least one fixed point will satisfy the condition, though.)
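To see how crude the vertex test is, here is a short Python sketch for a map of the standard 2-simplex to itself. It searches the subdivision by brute force rather than following Scarf’s pivoting path, so it illustrates only the labeling condition and the Sperner guarantee, not the efficiency of the algorithm; the function names and the sample map are my own.

    def sperner_label(v, f):
        # Label v with an index i such that v[i] > 0 and f(v)[i] <= v[i].
        # Such an i exists: both v and f(v) sum to 1, so f cannot strictly
        # increase every coordinate of v that is positive.
        w = f(v)
        return next(i for i in range(3) if v[i] > 0 and w[i] <= v[i])

    def completely_labeled_cell(f, n):
        # Search the n-fold subdivision for a small triangle whose vertices
        # carry all three labels; Sperner's lemma guarantees one exists.
        # Vertices are (i, j, n - i - j) / n in barycentric coordinates.
        def lab(i, j):
            return sperner_label((i / n, j / n, (n - i - j) / n), f)
        for i in range(n):
            for j in range(n - i):
                # upward triangle
                if {lab(i, j), lab(i + 1, j), lab(i, j + 1)} == {0, 1, 2}:
                    return [(i, j), (i + 1, j), (i, j + 1)]
                # downward triangle
                if i + j <= n - 2:
                    if {lab(i + 1, j), lab(i, j + 1), lab(i + 1, j + 1)} == {0, 1, 2}:
                        return [(i + 1, j), (i, j + 1), (i + 1, j + 1)]

    # A cyclic rotation of the simplex; its unique fixed point is the
    # barycenter (1/3, 1/3, 1/3), and the returned cell lies near it.
    f = lambda v: (v[1], v[2], v[0])
    print(completely_labeled_cell(f, 30))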

The article does everything from scratch. Brouwer’s theorem is derived as a consequence of the algorithm, and the treatment is simple enough that it could easily be included in an undergraduate analysis textbook. The whole article is so simple that it makes me wonder if there is an elementary combinatorial subject lurking under the intimidating algebra of modern-day homology theory. An interesting test case would be a constructive version of the Lefschetz fixed point theorem. (Lefschetz’s original proof was apparently combinatorial, but extremely difficult to follow. I doubt it was constructive.)

Here is two artists’ take on the Brouwer fixed point theorem.

Update. Commenter Mio spotted this elementary introduction to the topic on Herb Scarf’s web page. Poking around some more, I found the original article I mentioned above here.

Nilpotent Infinitesimals I

I’ve been writing a post explaining the practical difference between the synthetic and nonstandard notions of infinitesimals. It was getting a bit long, so I’m splitting it into two posts, of which this is the first.

Synthetic differential geometry (or smooth infinitesimal analysis) is a way to add infinitesimals to the reals, one that is an alternative to the nonstandard analysis approach. In SDG, infinitesimals can be nilpotent: their square or some higher power can be zero. This allows you to formalize arguments such as “this quantity is so small that we can treat its square as zero”. You can also formalize these arguments in nonstandard analysis, but with more care (you can’t actually set the square to be zero, but you can treat it as an even smaller infinitesimal). Nonstandard analysis cannot have nonzero nilpotent infinitesimals, because it is required to preserve first-order theorems about the reals (which include the theorem that the only nilpotent real number is zero).
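The simplest algebraic picture of a first-order nilpotent is the ring of dual numbers R[e]/(e^2). Here is a minimal Python sketch (my own illustration; SDG itself works in a topos, not with floating-point numbers) of an element that is nonzero yet squares to zero:

    class Dual:
        """Dual numbers a + b*e, where e*e = 0."""
        def __init__(self, a, b=0.0):
            self.a, self.b = a, b

        def __add__(self, other):
            return Dual(self.a + other.a, self.b + other.b)

        def __mul__(self, other):
            # (a + b e)(c + d e) = ac + (ad + bc) e, since the e^2 term vanishes
            return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    e = Dual(0.0, 1.0)
    ee = e * e
    print(ee.a, ee.b)   # 0.0 0.0 -- e is nilpotent but not zero

    x = Dual(3.0, 1.0)  # the point 3 displaced by e
    xx = x * x
    print(xx.a, xx.b)   # 9.0 6.0 -- f(x + e) = f(x) + f'(x) e exactly

The second print is the point: for f(x) = x^2, Taylor’s formula f(x + e) = f(x) + f'(x)e holds exactly rather than up to a higher-order error, because the higher-order terms are literally zero.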

You can formalize infinitesimal arguments at the level of calculus equally well using either approach, so why would you ever want nilpotent infinitesimals? Here are some examples with a differential-geometric flavor. Consider two points on the real line, and move them together so that they coalesce into one point. In ordinary differential geometry, that’s all they are: one point. In the synthetic approach, you can treat this as a double point, with defining equation x^2 = 0. You can imitate this in nonstandard analysis; I’ll explain how in the next post.
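Ideal-theoretically (a standard computation, spelled out here for concreteness), the double point is the limit of the ideal cutting out the two points a and -a:

    \[
      \{a, -a\} = V\bigl((x - a)(x + a)\bigr) = V(x^2 - a^2)
      \;\longrightarrow\; V(x^2) \qquad (a \to 0),
    \]

so the double point carries the coordinate ring R[x]/(x^2), in which x is a nonzero nilpotent.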

Here’s an example that’s considerably harder to simulate in nonstandard analysis. Consider four lines in the plane, two horizontal and two vertical. Collectively they intersect in four points. If we let the two horizontal lines move towards each other, they become a double line, which intersects each of the two vertical lines in a double point. If we now let the two vertical lines degenerate into a double line, we have two double lines intersecting in a quadruple point. But not just any quadruple point (there is more than one kind): they intersect in the quadruple point with defining equations x^2 = 0 and y^2 = 0.
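The same computation, run in each variable separately (again standard, included for concreteness):

    \[
      V(y^2 - b^2) \longrightarrow V(y^2) \quad (b \to 0), \qquad
      V(x^2 - a^2) \longrightarrow V(x^2) \quad (a \to 0),
    \]
    \[
      V(x^2) \cap V(y^2) = V(x^2, y^2).
    \]

The coordinate ring R[x, y]/(x^2, y^2) of the intersection has basis 1, x, y, xy, which is where the “quadruple” in quadruple point comes from.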