I might be the last person in the world to find out, but I just found a website, Detexify, that clearly works on magic. You draw a symbol, and it looks for the closest LaTeX symbol. It works surprisingly well. I drew a terrible approximation of the Weierstrass p symbol, and the actual Weierstrass p symbol was the 4th hit.
I like to read the Low-Dimensional Topology blog, despite the fact that I know almost nothing about the subject. (It’s possible I like to read it because I know nothing about the subject.)
Over the past year, several posts have conveyed palpable excitement over a series of preprints proving two conjectures: the virtually Haken conjecture and its generalization, the virtually fibered conjecture. These were apparently the major outstanding open conjectures after the proof of geometrization. This post in particular describes the techniques involved in the proof. To see how fast things have changed over the past year, this post on the Wise conjecture (an important ingredient of the proof) makes it clear that as of March of this year it was still very much an open question which way the result would turn out.
I’d been meaning to learn more about the subject, just to have a better idea of what happened. (For example, I still don’t really understand what a Haken manifold is, even though I’ve read the definition.) Fortuitously, Erica Klarreich has written a long general-audience article that gives at least some of the flavor of what’s going on.
I was never really sure I believed Weibel’s famous footnote that a proof of the Snake Lemma appeared in a 1980 romantic comedy, It’s My Turn, but Oliver Knill has put together a gallery of math clips from movies and TV shows, and it’s there.
What’s interesting about the clip is that it’s clear to a math audience that the student who keeps interrupting is a blowhard who has no idea what he’s talking about. While it would be clear to any audience that the student is arrogant, I don’t know if it would be clear that the student doesn’t know what he’s talking about.
Nate Eldridge has written a program, Mathgen, that randomly generates nonsense math papers. (It’s based on an older program, SCIgen, which generates random computer science papers.) While they don’t make any sense, the Mathgen papers capture the typical style of mathematical writing pretty well. The main quirk that gives them away is that where a real math paper would repeat terminology, Mathgen invents new mathematical terms in every sentence. (This is an inevitable consequence of the fact that the underlying algorithm is context-free.)
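To see why a context-free generator can’t repeat terminology, here’s a toy sketch (a made-up grammar, nothing to do with Mathgen’s actual rules): each nonterminal is expanded with no memory of earlier expansions, so every sentence can coin a fresh term.

```python
import random

# Hypothetical toy grammar. Keys are nonterminals; anything not a key is a
# terminal word. Each expansion is independent of all earlier expansions,
# which is exactly the context-free property.
GRAMMAR = {
    "S": [["Let", "TERM", "be", "ADJ", ".", "S"],
          ["Then", "TERM", "is", "ADJ", "."]],
    "TERM": [["a", "ADJ", "NOUN"]],
    "ADJ": [["left-Noetherian"], ["quasi-compact"], ["almost", "ADJ"]],
    "NOUN": [["functor"], ["subgroup"], ["morphism"]],
}

def expand(symbol, rng):
    """Recursively expand a symbol into a list of terminal words."""
    if symbol not in GRAMMAR:  # terminal word
        return [symbol]
    production = rng.choice(GRAMMAR[symbol])
    return [word for part in production for word in expand(part, rng)]

def generate(seed=0):
    """Generate one nonsense passage, deterministically from the seed."""
    return " ".join(expand("S", random.Random(seed)))
```

Because the expansion of `TERM` never consults what was generated before, two occurrences of `TERM` will agree only by coincidence — which is precisely the tell the post describes.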
Apparently it doesn’t give it away for everyone, though. A Mathgen-generated paper was submitted to a journal, Advances in Pure Mathematics, where it was accepted with revisions. I’ve never heard of this journal, so I would assume that it’s like the mathematical version of a vanity publisher that makes money from publication fees. But what’s amazing is that the paper was peer reviewed! The suggested revisions are of the form “please make this make sense”, but still, out there somewhere there’s a person who read this paper, and tried to make constructive comments. Who was this person?
As probably most of you have heard, Shinichi Mochizuki has announced a proof of the abc conjecture. At some point I decided to stop posting about announcements of solutions to famous unsolved problems, after several high-profile retractions. This time, it’s been long enough to wonder if the proof will hold up. The papers are of daunting technical complexity, so it sounds like it will be some time before we hear the verdict.
I find typing rules in theoretical computer science hard to read, and I just realized that it’s for a completely trivial reason: I subconsciously read “:” as having lower precedence than “⊢” and “,”, which is completely wrong, so I have to concentrate to group everything the right way.
The only reason I can think of for this is that way back when, I learned Pascal, which allows declarations like “x, y : integer”, which is somewhat like “:” having lower precedence than “,”.
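To make the grouping explicit, here is a schematic typing judgment (generic notation, not from any particular system): “:” binds tightest, then “,”, then the turnstile.

```latex
\Gamma,\; x : A \;\vdash\; e : B
\qquad\text{reads as}\qquad
\bigl(\Gamma,\ (x : A)\bigr) \;\vdash\; (e : B)
```

The Pascal habit would instead group the comma under the colon, which is exactly the wrong reading here.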
This article from the New York Times has a startling statistic: sales of the print edition of the Encyclopedia Britannica have dropped from 120,000 in 1990 to 12,000 today. (The article says 8,000, but a later article says the whole print run of 12,000 sold out.) I knew that Wikipedia had seriously hurt the sales of encyclopedias, but I had no idea it was on the order of 90%.
When I was a kid, I had the Funk & Wagnalls encyclopedia. (They sold it at the supermarket, one letter at a time.) I remember vividly reading the article on algebra, where it had a big table of axioms, like “the commutative axiom,” and “the associative axiom.” I was fascinated to find out that someone had isolated a list of properties of numbers, and that these properties had names.
David Li’s Gaussian copula formula has been called the formula that killed Wall Street, because of how it was used in pricing mortgage-backed securities.
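For reference, the bivariate Gaussian copula (the standard textbook statement, not a quotation from Li’s paper) couples two marginal distributions through the bivariate normal CDF $\Phi_2$ and the normal quantile function $\Phi^{-1}$:

```latex
% Gaussian copula with correlation parameter \rho:
C_\rho(u, v) = \Phi_2\!\left(\Phi^{-1}(u),\, \Phi^{-1}(v);\, \rho\right)
% In the credit application, the marginals are default-time distributions,
% giving a joint default probability before horizon t:
\Pr[T_A \le t,\ T_B \le t]
  = \Phi_2\!\left(\Phi^{-1}(F_A(t)),\, \Phi^{-1}(F_B(t));\, \rho\right)
```

The single parameter $\rho$ is what made the model so convenient as an industry standard — and so crude as a description of joint defaults.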
Sociologists Donald MacKenzie and Taylor Spears have a new paper on the history of the Gaussian copula model, based on detailed interviews with quants before and after the crisis. They find that the limitations of the Li model were well understood by financial modelers on Wall Street, and that none of them took the model as literally true. The formula became widely used for institutional reasons outside of the quant community — for example, once the industry had a standard model, the model could be used in evaluating profit and loss. This is further evidence for the thesis that mathematics is only dangerous when it falls into the wrong hands.
Cathy O’Neil highlights the parts of the paper that address the questions of institutional politics and blame.
With all of its benefits, there are many difficulties of life that are unique to civilization: traffic, taxes, pollution. Thanks to blogging, I’ve experienced a new one: having your WordPress site hacked. The hosting company shut down the site because it was receiving unusual traffic. Upon further investigation, it turned out the site had been hacked. (Presumably the traffic was from the site being made part of a botnet.) It should now be fixed.
I had a question that I was going to ask on Math Overflow, but after some research I managed to find the answer.
Finite simple groups have a complete classification. I was wondering whether any weakening of the group axioms also allows a complete classification of the simple objects. (Here, by “simple” I mean having no nontrivial quotients.) Surprisingly, there’s a classification for semigroups. In the theory of semigroups, the term “simple” is used for a weaker notion; semigroups with no nontrivial quotients are known as “congruence-free”. The classification of finite congruence-free semigroups splits into two cases: for semigroups with a zero (an element 0 such that 0x = x0 = 0) there’s an explicit construction, while a finite congruence-free semigroup without a zero must be a simple group.
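The definition can be checked by brute force for small semigroups given as multiplication tables. Here’s a toy sketch (my own naive code, not from any library): a semigroup is congruence-free iff, for every pair of distinct elements, the smallest congruence identifying them is the universal relation.

```python
from itertools import product

def congruence_generated(table, a, b):
    """Smallest congruence identifying a and b, as canonical class labels.

    table[i][j] is the product i*j in a semigroup on {0, ..., n-1}.
    """
    n = len(table)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):
        rx, ry = find(x), find(y)
        if rx != ry:
            parent[rx] = ry
            return True
        return False

    union(a, b)
    changed = True
    while changed:  # close under compatibility with multiplication
        changed = False
        # x ~ y forces x*c ~ y*c and c*x ~ c*y for every c
        for x, y, c in product(range(n), repeat=3):
            if find(x) == find(y):
                changed |= union(table[x][c], table[y][c])
                changed |= union(table[c][x], table[c][y])
    return [find(i) for i in range(n)]

def is_congruence_free(table):
    """True iff every congruence collapsing two distinct elements is universal."""
    n = len(table)
    for a in range(n):
        for b in range(a + 1, n):
            if len(set(congruence_generated(table, a, b))) > 1:
                return False
    return True

# Z_3 under addition mod 3: a simple group, hence congruence-free.
z3 = [[(i + j) % 3 for j in range(3)] for i in range(3)]
# Z_4 under addition mod 4: has the quotient Z_2, so not congruence-free.
z4 = [[(i + j) % 4 for j in range(4)] for i in range(4)]
# 3-element left-zero semigroup (x*y = x): every partition is a congruence.
lz = [[i] * 3 for i in range(3)]
```

Consistent with the classification above, the cyclic group Z_3 (no zero element) passes, while Z_4 and the left-zero semigroup have proper quotients.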
Another direction to generalize is to weaken the form of associativity. The most-studied weakening is the Moufang property, which includes the octonions as a non-trivial example. Here, the complete classification is also known: a finite simple Moufang loop is either a group or a Paige loop, which is a non-associative construction closely related to the octonions, but defined over a finite field. It’s interesting that in this case, the one non-associative family resembles simple groups of Lie type, in that it’s parameterized by the finite fields. This classification relies non-trivially on the classification of simple groups, in that the explicit classification is used to rule out any other non-associative examples.
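For the record, the Moufang property consists of the Moufang identities (the standard statement, given here for context; in any loop, each one implies the others):

```latex
z(x(zy)) = ((zx)z)y, \qquad
x(z(yz)) = ((xz)y)z, \qquad
(zx)(yz) = (z(xy))z
```

A group satisfies these trivially by associativity; the octonions satisfy them without being associative.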
The paper Octonions, simple Moufang loops and triality, by Gábor Nagy and Petr Vojtěchovský, explains Moufang loops and how the classification of non-associative Moufang loops reduces to a question about finite simple groups.