The 24-Cell in 3-D

The 24-cell is a mysterious regular polytope in 4 dimensions, in that it has no analogue among the Platonic solids. Since 2005, the campus at Penn State has featured a sculpture based on a 3-dimensional projection of this 4-dimensional object. The sculpture, known as the Octacube, was designed by Penn State mathematics professor Adrian Ocneanu. It was sponsored by Jill Grashof Anderson in honor of her husband, who was killed in the September 11th attacks.
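
If you want to get your hands on the object itself, one standard coordinate realization of the 24-cell takes as vertices the 24 vectors in R^4 with two coordinates equal to ±1 and the other two equal to 0. Here's a minimal Python sketch that generates them; it only lists the vertices, and doesn't attempt the particular projection Ocneanu designed for the sculpture.

```python
from itertools import combinations, product

# One standard coordinate realization of the 24-cell: the 24 vectors in R^4
# with two coordinates equal to +1 or -1 and the other two equal to 0.
vertices = []
for positions in combinations(range(4), 2):    # which two coordinates are nonzero
    for signs in product((1, -1), repeat=2):   # sign choices for those coordinates
        v = [0, 0, 0, 0]
        for pos, sign in zip(positions, signs):
            v[pos] = sign
        vertices.append(tuple(v))

print(len(vertices))   # 24 -- hence the name
```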

Penn State has put out a shorter and a longer article on the sculpture.

The Tragedy of Blogging

Like most blogs, this blog gets a lot of spam, most of which gets caught by the spam filter, but not all. Some of it I have to go through manually. Sadly, I have come to learn that if a message says something like “great job”, it’s spam with probability 1. Probably the next generation of spambots will say “Completely wrong, you idiot!” and compare you to Hitler for greater verisimilitude.

Between that and the revelation that I accidentally turned down the Clay Prize, it's been a discouraging day for this blogger.

Mathematics Interchange Format

While typing up an interminable LaTeX formula, I started to muse on the idea of a universal mathematics interchange format. As we use computers more and more, we'll have to slop formulas from one program to another, say from LaTeX to Maxima, or, in the distant future, from LaTeX to our friendly neighborhood proof checker. It would be nice if there were some sort of standard format for representing formulas. For many purposes, LaTeX is that format, but LaTeX doesn't preserve the semantic content of its formulas. For example, in the formula \int e^x dx, there is nothing that associates the \int with the dx.

MathML is probably closer, but I'm imagining something with a type system built on top of it. Sometimes when you write x y, you mean a polynomial in x and y, but sometimes you mean that x and y are elements of a nonabelian group multiplied together. An ideal interchange format should preserve that kind of semantic information, and in principle should let you check that your formulas are well-formed. It's not that the format would understand noncommutativity — that is the domain of computer algebra systems and theorem provers — but that the format would have a declaration that there is a type GroupElement, and that two group elements are multiplied using group multiplication. A more ambitious proposal would allow you to say that G has type Group, that x and y have type GroupElement(G), and that group multiplication is a binary operation taking two terms of type GroupElement(G) to another term of type GroupElement(G). The free computer algebra system Axiom has a type system like this.
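
To make that concrete, here's a toy sketch in Python of what such a type discipline might look like. All the names here (Group, GroupElement, Mul) are made up for illustration; they don't come from MathML or any actual standard.

```python
from dataclasses import dataclass
from typing import Union

# Hypothetical toy vocabulary for a "semantic" formula format.

@dataclass(frozen=True)
class Group:
    name: str                   # e.g. the group G

@dataclass(frozen=True)
class GroupElement:
    group: Group                # the group this element belongs to
    name: str                   # e.g. x

@dataclass(frozen=True)
class Mul:                      # a group-multiplication node
    left: "Expr"
    right: "Expr"

Expr = Union[GroupElement, Mul]

def typecheck(expr: Expr) -> Group:
    """Return the group an expression lives in; raise if it's ill-formed."""
    if isinstance(expr, GroupElement):
        return expr.group
    g, h = typecheck(expr.left), typecheck(expr.right)
    if g != h:
        raise TypeError(f"can't multiply elements of {g.name} and {h.name}")
    return g                    # GroupElement(G) times GroupElement(G) is a GroupElement(G)

G = Group("G")
x, y = GroupElement(G, "x"), GroupElement(G, "y")
print(typecheck(Mul(x, y)).name)   # "G": the product x y is well-formed
```

A checker like this has no idea that G might be noncommutative; it only knows that multiplying two terms of type GroupElement(G) is legal and yields another, which is exactly the division of labor described above.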

The Beatles! Sort-of

I have a bunch of old half-written drafts, many of which are now stale. (For example, did you know that the Kervaire invariant one problem has been solved? I'm guessing that if you care, you probably do.) But here's an older post that's still interesting. It's a video about mathematician Jason Brown, who has written a "new Beatles song" based on mathematical analysis of the Beatles oeuvre. (Brown's main research area appears to be graph theory.) The song in the video does indeed sound pretty Beatlesesque.

Via Cognition and Culture, which has some more musings on the idea of authenticity.

Weird Sums of Random Variables

I was reading Feller, and found that sums of random variables have some weird properties. Univariate distributions form a semigroup under sum in the following sense. Let F and G be distributions, and X and Y be independent random variables with those distributions. Then X + Y gives you a new univariate distribution (given by the convolution).
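
Concretely, if F and G are the distribution functions of X and Y, then X + Y has distribution function given by the convolution (F * G)(x) = \int_{-\infty}^{\infty} F(x - y) \, dG(y), and since convolution is associative and commutative, you get a commutative semigroup.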

I'd sort-of assumed that it would be a cancellative semigroup: if X + Y ~ X' + Y, then X ~ X' (for independent X, X', Y), but Feller has a counterexample. He also has a related example. Let X and X' be independent with X ~ X', and let Y and Y' be independent with Y ~ Y'. Suppose X + X' ~ Y + Y'. Then X ~ Y, right? Wrong.

In both cases, the critical property you need is for the characteristic function to have a zero. If you restrict to distributions whose characteristic functions are never zero, then you get a cancellative semigroup. Many nice distributions have this property, including all infinitely divisible distributions.
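
The mechanism is easy to see with characteristic functions: for independent variables, \varphi_{X+Y}(t) = \varphi_X(t) \varphi_Y(t). So X + Y ~ X' + Y means \varphi_X(t) \varphi_Y(t) = \varphi_{X'}(t) \varphi_Y(t) for every t, and if \varphi_Y never vanishes you can divide to get \varphi_X = \varphi_{X'}; since a distribution is determined by its characteristic function, X ~ X'. A zero of \varphi_Y is exactly the loophole the counterexamples exploit.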

Cleaning Up the Mess

Minhyong Kim has an interesting post on how mathematical theories can develop messily, but then the mess gets cleaned up. Minhyong wonders what’s lost in the transition to neatness.

Class field theory is an example of a subject that has been polished to a high gloss, but its very neatness makes it hard to understand what it’s about, which is why I prefer expositions that talk about the history and start with inspirational examples. I suspect that a streamlined but mostly-historical introduction to class field theory would be incredibly interesting.

Algebraic geometry is another good example. Linear systems are basically an elementary idea (the Wikipedia page doesn't get across how elementary), but in practice they get coded up in terms of ample line bundles, which seem a long way from the original idea.

Geometrization Conjecture Coming to a Runway Near You

The same post at Quomodocumque has this completely odd video of an interview with William Thurston and fashion designer Dai Fujiwara. Apparently, Thurston provided the inspiration for Issey Miyake’s fall fashions, “8 Geometry Link Models as Metaphor of the Universe”.

You can see the finale of their Paris fashion show here, including Thurston joining Fujiwara on stage. (The whole thing didn’t seem real to me until I saw Thurston walk on-stage in that clip. I have no idea what the designs have to do with the Geometrization Conjecture, but the title of the show certainly suggests that’s what they had in mind.)

The proof of the Geometrization Conjecture was sketched by Perelman (it implies the Poincaré Conjecture). I wonder if the brief burst of publicity in the wake of Perelman's work led to the show.

Legal System Versus Probability

This article in the Washington Monthly tells a startling story of a conflict between the legal system and probability. DNA tests use a certain number of genetic markers to match DNA samples. The samples are well short of a complete sequencing, which means any randomly selected person has a 1 in a million chance of matching. When DNA testing is used on a few suspects, the chances of matching an innocent person are pretty low. But now governments are compiling large databases of DNA and data mining them for matches, which radically increases the odds of matching an innocent person. This would not necessarily lead to false convictions as long as juries are made aware of the consequences, but at least one judge ruled inadmissible the evidence that the DNA match was found by mining a database, so the jury never heard it.

(In the particular case of the article, the victim was raped and murdered, and the defendant was found in a database of sex offenders. The article claims the correct probability of a match in this case is 1 in 3. It's not clear from the article whether this is the unconditional probability of finding a match in a database of that size, or the conditional probability given that it was a database of sex offenders.)
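
To see how fast "one in a million" erodes under a database trawl, here's a back-of-the-envelope Python sketch. The per-person match probability is the article's figure; the database size is a made-up illustrative number, not one from the article.

```python
# Chance that trawling a database hits at least one innocent person,
# treating profiles as independent draws.
random_match_prob = 1e-6    # the article's 1-in-a-million figure
database_size = 400_000     # hypothetical size, purely for illustration

p_at_least_one_match = 1 - (1 - random_match_prob) ** database_size
print(f"{p_at_least_one_match:.2f}")   # about 0.33, i.e. roughly 1 in 3
```

This is the usual multiple-comparisons effect: a tiny per-comparison false-positive rate times hundreds of thousands of comparisons is no longer tiny.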

Via Quomodocumque.