Though it is thin on details, I found the article A new efficient algorithm for computing Gröbner basis intriguing. A naive implementation of Gröbner bases is easy to come by, but as a practical algorithm it is highly sensitive to the order in which you add new polynomials to the basis, which has given rise to a whole literature on selection strategies. In the paper, Faugère seems to suggest that the whole question can be bypassed, and that polynomials can be added to the basis in bulk by using sparse linear algebra techniques.
Back in November Anton Geraschenko had an interesting post at the Secret Blogging Seminar based on a preprint by George Bergman. Homological algebra is full of diagram chasing arguments that lead to scary-looking theorems like the Snake Lemma. Bergman claims that these are all special cases of an even scarier-looking but more obvious result he christens the Salamander Lemma. I’ll definitely be looking more closely at this when I get a chance.
I had a pet theory that a large part of what made some people good at mathematics was simply memory. A large part of mathematical practice is remembering bits of trivia: standard counterexamples, definitions, little technical tricks. BBC News is reporting on a discovery that makes that seem less plausible: chimpanzees may have better memories than people do. Scientists compared the performance of chimpanzees and their closest relatives, university students, when tested on their ability to remember a random pattern of numbers on a screen. The BBC story comes with two videos of the chimps in action that are worth watching.
There’s a quote that I have rattling around in my head that I can’t quite remember. The quote was of the form “___ is more interesting as a source of questions than a source of answers.” I don’t remember what went in the blank (the Brauer group, maybe?) or who said it, so if anyone happens to know I’d appreciate it.
I was browsing through Wikipedia today when I came across the definition of pretopological space. The notion seemed very exotic until I thought of a family of examples, which I’m christening ennui spaces.
A pretopological space prescribes for each point the set of (not-necessarily open) neighborhoods of that point. The set of neighborhoods of a given point is required to satisfy some natural axioms, but neighborhoods of one point can be completely unrelated to neighborhoods of another point. A sequence in a pretopological space converges if, for any neighborhood, the sequence eventually enters that neighborhood and never leaves it again. A topological space can be turned into a pretopological space by taking the neighborhoods of a point to be all sets that contain an open set containing that point. You can try to reverse the process by borrowing the characterization of the closure of a set in terms of sequences (or nets), but usually the topological space you construct will have a coarser notion of convergence than the pretopological space.
An ennui space has the same underlying set as a metric space. A neighborhood of a point is any set that contains the unit ball around that point. A sequence in an ennui space converges to a point if it is guaranteed to be eventually within one unit of the point. The mental image I have is that the sequence gets close to its destination, but then gets bored. If you try to construct a topology out of this space, you get the indiscrete topology, where all sequences converge to all points. Essentially, all information about the convergence properties of the ennui space is lost.
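The definition is easy to play with on the real line. Here is a finite-prefix sketch in Python, where “eventually” is approximated by checking all terms between a settling index and a finite horizon (both bounds are artifacts of the sketch, not part of the definition):

```python
# Sketch of ennui-space convergence on the real line: a sequence
# "converges" to a point if it eventually stays within the unit ball
# around that point.  "Eventually" is approximated by checking every
# term between a settling index and a finite horizon.
def ennui_converges(seq, point, settle=100, horizon=10_000):
    return all(abs(seq(n) - point) <= 1 for n in range(settle, horizon))

a = lambda n: 1 / n          # an honestly convergent sequence
b = lambda n: (-1) ** n      # a bounded, non-convergent sequence

# 1/n ennui-converges not just to its true limit 0 but to every point
# within one unit of where it settles down...
print(ennui_converges(a, 0))     # True
print(ennui_converges(a, 0.9))   # True
print(ennui_converges(a, 2.5))   # False
# ...and even (-1)^n ennui-converges to 0, since it never strays more
# than one unit from it.
print(ennui_converges(b, 0))     # True
```

The last line is the information loss in miniature: a sequence with no limit at all still ennui-converges, because boredom sets in before the unit ball is ever left.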
A practical example of an ennui space would be your computer whenever it simulates a convergent sequence of operations, such as numerical integration or Newton’s method. The computer gets within machine precision of the correct answer, and then stops to light up a Gauloise and discuss L’Être et le néant in a cafe.
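You can watch the boredom set in with Newton’s method for the square root of 2 (my own illustration, not anything from the definition):

```python
import math

# Newton's method for sqrt(2): iterate x <- (x + 2/x) / 2.  Once the
# iterate is within machine precision of sqrt(2), further iterations
# make essentially no progress: the sequence has settled into the
# "unit ball" of floating-point rounding and gotten bored.
def newton_sqrt2(x0, steps):
    x = x0
    history = []
    for _ in range(steps):
        x = (x + 2 / x) / 2
        history.append(x)
    return history

hist = newton_sqrt2(1.0, 10)
# After a handful of steps the history stops changing in any
# meaningful way; the tail sits next to sqrt(2) indefinitely.
print(math.isclose(hist[-1], math.sqrt(2), rel_tol=1e-12))  # True
```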
A reader emailed me about a new online mathematics journal, Rejecta Mathematica. As far as I can tell from reading the FAQ, they only publish articles that have been rejected by peer-reviewed journals. The twist is that with your submission, you must include a letter that describes the reasons the paper was rejected as well as any other flaws the paper might have. Submissions are not peer reviewed, and results are not required to be either correct or new, but submissions can be rejected if they are deemed either incomprehensible or not mathematical.
This comment by klein4g helped me clarify for myself exactly what my objection is to the examples-then-definitions style of teaching. It’s that the author or speaker is pretending that we’re collectively coming up with the common definition as an act of creativity, when in reality there’s a right answer and the author knows it. It’s the pretense that annoys me.
When I started reading Tim’s post, I expected to agree completely with his point. I think I understand subjects almost entirely through examples. I expected him to advocate putting examples before theorems, but he’s actually advocating putting examples before definitions. I dislike that style fairly strongly; it’s like being led down a road by someone who already knows the destination, but they won’t tell me where we’re going until we actually get there.
The BHK interpretation of intuitionistic logic articulates the sense in which intuitionistic logic captures constructive reasoning. Statements that involve logical connectives are interpreted as constructions that transform proofs of simpler statements into proofs of more complicated statements. The interpretation requires a primitive notion of what it means to give a construction: a function that turns objects and proofs into another proof.
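The connective clauses can be rendered as a toy in Python, with proofs as data (my own illustration, not a formal system): a proof of a conjunction is a pair of proofs, a proof of a disjunction is a tagged proof of one disjunct, and a proof of an implication is a construction turning proofs into proofs.

```python
# Toy BHK-style proof objects (an illustration, not a formal system).
def and_intro(proof_a, proof_b):
    return (proof_a, proof_b)        # a proof of A and B is a pair

def or_intro_left(proof_a):
    return ("left", proof_a)         # a tagged proof of the left disjunct

def imp_intro(construction):
    return construction              # a proof of A -> B *is* a construction

def imp_elim(proof_imp, proof_a):    # modus ponens: apply the construction
    return proof_imp(proof_a)

# Example: (A and B) -> (B and A), proved by the swapping construction.
swap = imp_intro(lambda pair: (pair[1], pair[0]))
print(imp_elim(swap, and_intro("pf_A", "pf_B")))  # ('pf_B', 'pf_A')
```

Note how modus ponens is literally function application here; that is the heart of the interpretation.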
Generally, the BHK interpretation is invoked as a piece of metamathematical reasoning, with the actual definition of a construction left to the particular theory under development. This puzzled me, since there is a natural notion of construction: one given by an arbitrary Turing machine. At first, I chalked it up to a philosophical unwillingness to accept the Church-Turing thesis, but I think I’ve identified the real problem: it doesn’t work.
For the BHK interpretation to work, Turing machines that represent constructions must be total functions. Worse, to satisfy the philosophical aim of constructivism, they must be provably total in a constructive system. But a simple diagonal argument shows that there exist total functions that are not provably total in any constructive system, since we would expect the valid proofs in such a system to be recursively enumerable. You can simply identify constructions with the functions provably total in a specific system, but philosophically the diagonal argument is constructively valid: given any recursively enumerable axiomatization of valid constructions, I can give you an explicit construction that is different from every element of that enumeration, and for any given element an explicit proof that it’s different.
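The diagonal construction itself is short enough to sketch. Here the enumeration is a finite stand-in chosen for illustration; the real argument applies to a recursively enumerable list of provably total constructions:

```python
# Sketch of the diagonal argument: given an enumeration of total
# functions, build a total function that differs from the n-th
# enumerated function at input n.  The enumeration below is a toy
# stand-in for a recursive enumeration of provably total constructions.
def enumerate_functions(n):
    """Stand-in enumeration: the n-th function maps k to (n+1)*k."""
    return lambda k: (n + 1) * k

def diagonal(n):
    # Differs from enumerate_functions(n) at input n by construction.
    return enumerate_functions(n)(n) + 1

# For each n there is an explicit witness that diagonal is not the
# n-th enumerated function: they disagree at n itself.
print(all(diagonal(n) != enumerate_functions(n)(n) for n in range(1000)))  # True
```

The point of the blog argument is that `diagonal` is just as explicit and just as total as anything in the enumeration, so no single recursively enumerable system can capture all constructions.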
The way this issue is handled in the realizability interpretation (and therefore the effective topos) is by identifying the constructions as those functions that are provably total in Heyting arithmetic, the intuitionistic analogue of Peano arithmetic.