Linear Bestiary of Francois Pottier

Ugh, I suck at this blogging thing. I periodically get ambitious, and make big plans. That doesn’t actually lead to any completed posts, just many long half-finished posts, and hundreds of open tabs in Firefox. I think I’ll start with some short posts.

Linear types are one of those things that I’d always wanted to learn more about. The idea seems somewhat natural (practically speaking, the amount of resources an object uses is part of its signature), but the details are sufficiently complex that I’ve never quite mastered it. This presentation by Francois Pottier seems like a nice place to start.

I find typing rules in theoretical computer science hard to read, and I just realized that it’s for a completely trivial reason: I subconsciously read “:” as having lower precedence than ⊨ and “,”, which is completely wrong, so I have to concentrate to group everything the right way.

The only reason I can think of for this is that way back when, I learned Pascal, which allows declarations like “x, y : integer”, which is somewhat like “,” having higher precedence than “:”.
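To spell out the grouping (a sketch using the standard turnstile ⊢ in place of the symbols above):

```latex
% The judgment
\Gamma,\, x : \tau \vdash e : \sigma
% should be parsed as
(\Gamma,\, (x : \tau)) \vdash (e : \sigma)
% i.e. ":" binds tightest, then ",", then the turnstile --
% the opposite of my instinct.
```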

Machine Learning Book

Machine learning is an exciting new area of applied mathematics and computer science. It’s philosophically interesting because it requires making a precise mathematical formulation of what it means to learn, and of what it means to distinguish concepts from each other. Nils Nilsson has an early draft of a textbook on the subject.

I Have Marshall McLuhan Right Here

Here’s a post and comment thread at Computational Complexity from way back in November that struck me as funny when I came across it. In the post itself, Lance complains about the scarcity of publicly-accessible proofs of Barrington’s Theorem. In the comments, the eponymous David Mix Barrington appears, and gives the proof.

Sierpinski carpet

This is a test post to see what’s involved in uploading images.

This is of course the Sierpinski carpet. What’s interesting to me is that many objects that, in a previous age dominated by a picture of the physical world as a continuum, seemed deeply pathological, have straightforward computer-language descriptions. For example, you can check whether or not a point in the plane is on the Sierpinksi plane by looking at the ternary expansion of its coordinates, which is a couple of lines of computer code. From the point of view of the computer, the Sierpinksi carpet is not much more complicated than a parabola. I suspect that the popularity of fractals marks a change in the popular imagination of the dominant metaphor for mathematics, from mathematics as mechanics to mathematics as computer program.
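That couple of lines might look like this (a sketch in Python; it only examines finitely many ternary digits, so points near the boundary are handled approximately at float precision):

```python
def in_carpet(x, y, depth=20):
    """Check whether (x, y) in [0, 1)^2 lies in the Sierpinski carpet,
    looking at the first `depth` ternary digits of each coordinate.

    A point is removed from the carpet exactly when, at some level,
    both coordinates have ternary digit 1 (the deleted middle square).
    """
    for _ in range(depth):
        dx, dy = int(x * 3), int(y * 3)   # current ternary digits
        if dx == 1 and dy == 1:
            return False                  # falls in a deleted square
        x, y = x * 3 - dx, y * 3 - dy     # shift to the next digit
    return True
```

For instance, (1/2, 1/2) sits in the very first deleted square (1/2 = 0.111…₃), while (0, 0) is a corner of every level and survives.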

Gröbner bases as sparse matrices

While thin on details, I found the article A new efficient algorithm for computing Gröbner bases intriguing. While a naive implementation of Gröbner bases is easy to come by, as a practical algorithm it is highly sensitive to the order in which you add new polynomials to the basis. This has given rise to a whole literature on strategies for adding new polynomials. In the paper, Faugere seems to suggest that the whole question can be bypassed, and that polynomials can be added to the basis in bulk by using sparse linear algebra techniques.
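For a sense of what the naive approach looks like, here is a minimal Buchberger-style sketch in Python over Q[x, y] with lex order (polynomials represented as dicts from exponent pairs to rational coefficients). This is the textbook algorithm, not Faugere’s; the point where it pops the next pair is exactly the strategy question the literature worries about.

```python
from fractions import Fraction

# Polynomials in Q[x, y]: dicts mapping exponent pairs (i, j) -> coefficient.
# Monomials compare lexicographically with x > y (tuple comparison).

def lt(p):
    """Leading monomial and coefficient under lex order."""
    m = max(p)
    return m, p[m]

def mul_term(p, c, m):
    """Multiply p by the term c * x^m[0] * y^m[1]."""
    return {(i + m[0], j + m[1]): c * q for (i, j), q in p.items()}

def sub(p, q):
    """Return p - q, dropping zero coefficients."""
    r = dict(p)
    for m, c in q.items():
        r[m] = r.get(m, Fraction(0)) - c
        if r[m] == 0:
            del r[m]
    return r

def spoly(f, g):
    """S-polynomial: cancel the leading terms of f and g."""
    (mf, cf), (mg, cg) = lt(f), lt(g)
    m = (max(mf[0], mg[0]), max(mf[1], mg[1]))  # lcm of lead monomials
    a = mul_term(f, Fraction(1) / cf, (m[0] - mf[0], m[1] - mf[1]))
    b = mul_term(g, Fraction(1) / cg, (m[0] - mg[0], m[1] - mg[1]))
    return sub(a, b)

def reduce_poly(p, G):
    """Head-reduce p by G until its leading term is irreducible (or p = 0)."""
    p = dict(p)
    reducing = True
    while p and reducing:
        reducing = False
        m, c = lt(p)
        for g in G:
            mg, cg = lt(g)
            if m[0] >= mg[0] and m[1] >= mg[1]:
                p = sub(p, mul_term(g, c / cg, (m[0] - mg[0], m[1] - mg[1])))
                reducing = True
                break
    return p

def buchberger(F):
    """Complete F to a Groebner basis by adding reduced S-polynomials.
    The choice of which pair to process next is the whole strategy
    question; here we naively use a stack."""
    G = [dict(f) for f in F]
    pairs = [(i, j) for j in range(len(G)) for i in range(j)]
    while pairs:
        i, j = pairs.pop()
        s = reduce_poly(spoly(G[i], G[j]), G)
        if s:
            pairs += [(k, len(G)) for k in range(len(G))]
            G.append(s)
    return G
```

On the ideal generated by x² + y and xy − 1, this completion produces a basis containing x + y² and y³ + 1 (up to scaling), so both reduce to zero against it.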

This is more of a computer programming post than a math post, but since we all use TeX, I thought it might be of general interest. I came across this paper by Heckmann and Wilhelm, which describes how to implement TeX’s algorithm for typesetting math formulas in a functional language (namely, ML). We see the output of this algorithm every time we look at a math paper, but until now it lacked a precise formal description other than what was embedded in Knuth’s Pascal code. Since Knuth wrote TeX back when computers were much slower and compilers were much dumber, the code contains many hand optimizations that make the logic hard to follow. (The algorithm itself is too complicated to lend itself to a purely verbal description.) By rewriting it in a functional language, the authors are able to turn the algorithm into a well-defined mathematical function.

BHK Interpretation

The BHK interpretation of intuitionistic logic articulates the sense in which intuitionistic logic captures constructive reasoning. Statements that involve logical connectives are interpreted as constructions that transform proofs of simpler statements into proofs of more complicated statements. The interpretation requires a primitive notion of what it means to give a construction: a function that turns objects and proofs into another proof.
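The propositions-as-types reading makes this concrete: under BHK, a proof of an implication is a program. As an illustrative sketch (in Python, with type hints standing in for propositions), a proof of (A ∧ B) → (B ∧ A) is a construction turning any proof of the premise, a pair, into a proof of the conclusion:

```python
from typing import Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def swap(p: Tuple[A, B]) -> Tuple[B, A]:
    """A BHK-style proof of (A and B) -> (B and A): given a proof of
    the premise (a pair of proofs), construct a proof of the conclusion."""
    a, b = p
    return (b, a)
```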

Generally, the BHK interpretation is invoked as a piece of metamathematical reasoning, with the actual definition of a constructive function left to the particular theory under development. This puzzled me, since there is a natural notion of construction: one given by an arbitrary Turing machine. At first, I chalked it up to a philosophical unwillingness to accept the Church-Turing thesis, but I think I’ve identified the real problem: it doesn’t work.

For the BHK interpretation to work, Turing machines that represent constructions must be total functions. Worse, to satisfy the philosophical aim of constructivism, they must be provably total in a constructive system. But a simple diagonal argument shows that there exist total functions that are not provably total in any constructive system, since we would expect the valid proofs of such a system to be recursively enumerable. You could simply identify constructions with the functions provably total in some specific system, but philosophically the diagonal argument is constructively valid: given any recursively enumerable axiomatization of valid constructions, I can give you an explicit construction that is different from every element of that enumeration, and, for any given element, an explicit proof that it is different.
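The diagonal construction itself is short. Here is a sketch in Python, where `nth_provably_total` and `run` are hypothetical helpers (an enumeration of the system’s provably total machines, and an interpreter for them):

```python
def diagonal(n, nth_provably_total, run):
    """Differ from the n-th provably total machine at input n.

    If every machine in the enumeration is total, `diagonal` is total
    too, yet it cannot occur in the enumeration: it disagrees with
    machine n on input n.
    """
    machine = nth_provably_total(n)  # hypothetical: n-th provably total machine
    return run(machine, n) + 1       # hypothetical: run it, then differ
```

As a toy instantiation, enumerate the machines “multiply by n”: then `diagonal(3, lambda n: n, lambda m, x: m * x)` computes 3·3 + 1 = 10, which no “multiply by n” machine returns on input 3.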

The way this issue is handled in the realizability interpretation (and therefore the effective topos) is by identifying the constructions as those functions that are provably total in Heyting arithmetic, the intuitionistic analogue of Peano arithmetic.

Eliminate Cut Elimination

Sorry for the radio silence; I’ve been under the weather lately.

Is there any major theorem in mathematics drier than Gentzen’s cut elimination theorem? I feel guilty for picking on proof theory, since it is an undeservedly unfashionable area of mathematics (it’s probably better known by computer scientists than by mathematicians at this point). I feel guilty for picking on the cut elimination theorem in particular, since it’s an important ingredient in the proof that Goodstein’s theorem is independent of Peano arithmetic, which is one of my favorite theorems. But I had to read the statement of the theorem forty-nine times before I got what it was trying to say. I have yet to make it all the way through the proof before blacking out.
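For the record, the rule the theorem eliminates is simple enough to state. In sequent-calculus notation, cut is

```latex
\frac{\Gamma \vdash \Delta, A \qquad A, \Sigma \vdash \Pi}
     {\Gamma, \Sigma \vdash \Delta, \Pi}
\;(\text{cut})
```

and the theorem says that every proof using it can be rewritten into one that doesn’t; it’s the rewriting that takes the stamina.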

I was trying to think of a theorem in a more-fashionable area of mathematics that is particularly dry. The best I could come up with was the Riemann-Roch theorem (for curves). I know what it says. I know what it’s good for. I just can’t bring myself to like it.