A new revision of the standard for the Scheme programming language, the Revised6 Report on the Algorithmic Language Scheme, has been approved. The draft standard and other information can be found at r6rs.org.
The changes seem to be aimed at enhancing portability between implementations of Scheme. Opinions, both pro and con, can be gleaned from the online ratification process.
I just discovered a shocking lacuna in my linking strategy. I have never linked to the acknowledgements page for the Scheme shell reference manual. If you ever write a book, this is the only appropriate model to follow.
Does anyone understand the claim being made here in this New York Times article? There is some sense in which the creators of the Chinook checkers-playing program have shown that Chinook cannot ever lose at checkers, but the article includes the caveat:
Even with the advances in computers over the past two decades, it is still impossible, in practical terms, to compute moves for all 500 billion billion board positions. Instead, the researchers took the usual starting position and then looked only at the positions that would occur during the normal course of play.
Do they mean that Chinook cannot lose from the normal starting position? Or does checkers have stylized openings the way chess does, and they mean that Chinook cannot lose from any of those openings?
I neglected to put up a post about the first issue of the Bulletin of the AMS for 2007. Here are some of the highlights:
Now to write up the second issue before the third issue comes out…
People always post interesting links in the comments to Scott Aaronson’s weblog. For example, the other day Paul Beame posted two links that explain the connections between random walks on graphs and electrical networks. One is a complete book on the subject by Doyle and Snell. The other is an article by Chandra, Raghavan, Ruzzo, Smolensky, and Tiwari that further develops the theory.
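One of the central results Doyle and Snell explain is that harmonic functions tie the two subjects together: if you hold one node of a resistor network at 1 volt and ground another, the voltage at any node x equals the probability that a random walk started at x reaches the 1-volt node before the grounded one. A minimal sketch of that correspondence, using my own toy example (a 4-node path graph with unit resistances, not an example from the book):

```python
import numpy as np

# Path graph 0-1-2-3 with unit conductance on each edge.
n = 4
adj = np.zeros((n, n))
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0

# Solve the harmonic (Kirchhoff) equations: at interior nodes the voltage
# is the conductance-weighted average of its neighbours; boundary
# conditions pin v[0] = 0 (grounded) and v[3] = 1.
A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = 1.0
b[0] = 0.0
A[n - 1, n - 1] = 1.0
b[n - 1] = 1.0
for i in range(1, n - 1):
    A[i] = -adj[i]          # -1 for each neighbour
    A[i, i] = adj[i].sum()  # node degree on the diagonal

v = np.linalg.solve(A, b)
print(v)  # [0, 1/3, 2/3, 1]: v[1] = 1/3 is the probability that a walk
          # from node 1 hits node 3 before node 0
```

On a path the voltages come out linear, matching the classic gambler's-ruin answer for the corresponding random walk.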
David Corfield highlights two articles by Samson Abramsky: one on Temperley-Lieb algebras and the other on concurrency.
The concurrency article, What are the fundamental structures of concurrency? We still don’t know!, discusses the profusion of formalisms for representing concurrency. Abramsky passes on the following anecdote:
The mathematician André Weil apparently compared finding the right definitions in algebraic number theory, which was like carving adamantine rock, to making definitions in the theory of uniform spaces (which he founded), which was like sculpting with snow.
Jim Gray, the 1998 Turing Award winner, has gone missing on a sailing trip off the coast of California to scatter his mother’s ashes.
His book on the inner workings of databases, Transaction Processing: Concepts and Techniques, is very good.
Update. Cosmic Variance has more about Gray. He was a major contributor to the Sloan Digital Sky Survey.
I’ve been insanely busy lately, which means I have a gigantic backlog of stuff that I meant to link to, but didn’t.
Scott Aaronson has a knack for taking interesting but not obviously revolutionary concepts in complexity theory and, in good Russian-formalist fashion, making them strange. Here are some examples:
- The Fable of the Chessmaster suggests that a perfect chess master could demonstrate their mastery to a high degree of certainty without revealing their strategies. Scott goes on to suggest that we can think of complexity theory as “mathematical theology”.
- Logicians on safari connects complexity to super-intelligent aliens, relates the Riemann hypothesis to your car trunk, and argues that a computer that crashes once a second is no excuse for not finishing your work.
- More tender nuggets points out, among other things, that if babies can learn languages by example alone, they can also learn to break RSA.
This is probably well-known to everyone in physics, but less well-known to pure mathematicians. For an open-source, high-quality implementation of linear algebra computations such as finding eigenvalues, the standard is LAPACK (distributed by Netlib, a numerical source code repository). It is so standard, in fact, that tuned versions exist for individual architectures, and LAPACK performance is frequently used to benchmark processors.
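If you use NumPy, you are already calling LAPACK: its linear-algebra routines are thin wrappers over the LAPACK drivers (the symmetric eigensolver `numpy.linalg.eigh`, for instance, dispatches to LAPACK's `syevd` family). A minimal sketch with a matrix whose eigenvalues are easy to check by hand:

```python
import numpy as np

# A small symmetric matrix; its eigenvalues are 2 +/- 1 = 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh is the symmetric/Hermitian eigensolver, backed by LAPACK.
# It returns eigenvalues in ascending order and orthonormal eigenvectors
# as the columns of V.
w, V = np.linalg.eigh(A)
print(w)  # [1. 3.]
```

The same LAPACK routines can also be reached lower down, e.g. via `scipy.linalg.lapack`, but for most work the NumPy-level interface is all you need.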
Lambda the Ultimate links to a cool little paper, An Incremental Approach to Compiler Construction. It’s literally a short tutorial on how to write a compiler. It concentrates on the least-complicated but most-intimidating phase: machine code generation.
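The paper's first increment is delightfully small: compile a program consisting of a single integer literal by emitting code that moves the value into the return register and returns. A toy sketch of that first step, in Python rather than the paper's Scheme, emitting x86-64 assembly text (the entry-point name is my own choice):

```python
def compile_program(x: int) -> str:
    """Compile the 'program' consisting of one integer literal
    to x86-64 assembly text."""
    return "\n".join([
        "    .globl scheme_entry",
        "scheme_entry:",
        # Place the immediate in %rax, the System V return register.
        f"    movq ${x}, %rax",
        "    ret",
    ])

print(compile_program(42))
```

From there the paper grows the compiler one tiny language feature at a time (unary primitives, conditionals, and so on), keeping it runnable at every step.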