# Reversible Markov Chains

Here’s a pretty idea. A Markov chain is one of the simplest forms of dependence among random variables: an infinite sequence of dependent random variables, where the probability distribution of the next random variable depends only on the value of the current one. If you reverse the sequence of variables, you get another Markov chain, the reverse Markov chain. Some Markov chains, reversible Markov chains, have the property that when you reverse them, you get back the same chain. Markov chains represent processes that have no history, in that the future is determined solely by the present, not the past. In a reversible Markov chain, not only is there no history, but time has no direction.
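As a concrete sketch (not from the post): reversibility can be checked numerically via the detailed balance condition, π(i)P(i,j) = π(j)P(j,i) for all states i and j, where π is the stationary distribution. The function names and example matrices below are mine, chosen for illustration.

```python
def stationary_distribution(P, iters=2000):
    # Power iteration: repeatedly push a distribution through the
    # transition matrix until it settles at the stationary one.
    # (Assumes the chain is irreducible and aperiodic.)
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def is_reversible(P, tol=1e-9):
    # Detailed balance: pi[i] * P[i][j] == pi[j] * P[j][i] for all i, j.
    pi = stationary_distribution(P)
    n = len(P)
    return all(abs(pi[i] * P[i][j] - pi[j] * P[j][i]) <= tol
               for i in range(n) for j in range(n))

# A symmetric "lazy" chain is reversible; a biased walk around a
# 3-cycle is not, since probability flows preferentially one way around.
lazy = [[0.5, 0.25, 0.25],
        [0.25, 0.5, 0.25],
        [0.25, 0.25, 0.5]]
biased_cycle = [[0.0, 0.7, 0.3],
                [0.3, 0.0, 0.7],
                [0.7, 0.3, 0.0]]
```

The biased cycle illustrates the "time has no direction" remark: run backwards, it circulates the other way, so it cannot equal its own reversal.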

Here is a draft of a book by Aldous and Fill on the theory of reversible Markov chains.

# The Algebra of Possibilities

There is a notion in symbolic dynamics of a “topological Markov chain” that is analogous to a Markov chain in probability theory. It occurred to me that the analogy with probability theory can be made complete. We’re still interested in sets of events, but now we’re no longer interested in the probability of an event, just in whether or not an event is possible.

Start with a σ-algebra of sets, as usual. Instead of associating a probability with each set, associate a member of the set {Not Possible, Possible}. The empty set is assigned the value Not Possible, while the whole space is assigned the value Possible. A countable disjoint union of sets is Possible if and only if at least one of the individual sets is Possible.

A measure takes values in the semigroup of nonnegative real numbers under addition. Here, we’ve replaced that semigroup with the semigroup {Not Possible, Possible} under a commutative binary operation +, with addition table:

| +            | Not Possible | Possible |
| ------------ | ------------ | -------- |
| Not Possible | Not Possible | Possible |
| Possible     | Possible     | Possible |
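One way to see how little structure this semigroup has: identify Not Possible with False and Possible with True, and the + of the table is just Boolean “or” (my encoding, for illustration; the post doesn’t fix one).

```python
# Identify the two possibility values with Booleans.
NOT_POSSIBLE, POSSIBLE = False, True

def plus(a, b):
    # The "+" from the table: Possible absorbs, Not Possible is the identity.
    return a or b

def union_possibility(pieces):
    # A countable disjoint union is Possible iff at least one piece is,
    # mirroring countable additivity of an ordinary measure.
    return any(pieces)
```

Note that, unlike a probability measure, nothing here distinguishes a union of two Possible sets from a union of one, which is exactly the information being thrown away.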

I’ll explain the relationship with topological Markov chains in a future post.

# Numb3rs Canceled

I was never a big fan, which is why I only just heard that Numb3rs was canceled for the fall. I guess the law will have to go back to fighting crime the old-fashioned way: calling in Batman to help.

# Martin Gardner and Hinton’s Cubes

Martin Gardner has just recently passed away. I remember really liking his books when I was in high school, but I haven’t looked at them since then.

One of his essays convinced me back in high school that trying to visualize the fourth dimension was dangerous. Charles Hinton invented a system of cubes to teach you to visualize the fourth dimension. Gardner printed a letter (copies at banubula and waggish) from someone who said that the cubes were bad for your mental health. It wasn’t until sometime after taking linear algebra that the feeling dissipated.

More on the cubes can be found at The Fairyland of Geometry.

# History of Loops

I came across this history of loops, a generalization of groups, where I picked up an interesting tidbit: loops are named after the Chicago Loop, the central business district of Chicago. (The elevated train tracks form a loop that encloses it.)

# Horn Clause Example: Graphs and Partial Orders

Classical algebraic structures such as groups and rings can be given entirely in terms of equations, so they don’t require the full expressiveness of Horn clauses.

Directed graphs (with at most one edge between nodes) can be represented by their adjacency relation: for two vertices x and y, the adjacency relation R(x,y) holds if there is an edge from x to y. Undirected graphs can be modeled by requiring that directed edges between two vertices go in both directions: R(x,y) implies R(y,x).
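On a finite relation, stored as a set of ordered pairs, the Horn clause for undirectedness can be checked or enforced directly. A small sketch (function names mine):

```python
def satisfies_symmetry(R):
    # Checks the Horn clause R(x, y) -> R(y, x) on a finite relation.
    return all((y, x) in R for (x, y) in R)

def symmetric_closure(R):
    # The smallest relation containing R that satisfies the clause:
    # the undirected graph generated by a set of directed edges.
    return R | {(y, x) for (x, y) in R}
```

The closure operation exists precisely because the axiom is a Horn clause: you can always repair a violation by adding the missing reversed edge, without ever being forced to remove anything.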

Partially-ordered sets can also be modeled in terms of Horn clauses. The three properties of a partial order are all Horn clauses:

• reflexivity — x ≤ x.
• transitivity — x ≤ y and y ≤ z implies x ≤ z.
• antisymmetry — x ≤ y and y ≤ x implies x = y.

(Thus preorders, which drop the antisymmetry axiom, are also given by Horn clauses.)

Total orders, which impose the requirement x ≤ y or y ≤ x, cannot be given by Horn clauses: that disjunction has two unnegated atoms, while a Horn clause allows at most one.
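On a finite set, all three axioms — and the failure of totality — can be checked by brute force. A sketch (names and the divisibility example are mine):

```python
def is_partial_order(elements, leq):
    # leq is a finite relation, a set of pairs (x, y) read as "x <= y".
    reflexive = all((x, x) in leq for x in elements)
    transitive = all((a, d) in leq
                     for (a, b) in leq for (c, d) in leq if b == c)
    antisymmetric = all(x == y for (x, y) in leq if (y, x) in leq)
    return reflexive and transitive and antisymmetric

def is_total(elements, leq):
    # The non-Horn condition: every pair of elements is comparable.
    return all((x, y) in leq or (y, x) in leq
               for x in elements for y in elements)

# Divisibility on {1, 2, 3, 6} is a partial order, but 2 and 3
# are incomparable, so it is not a total order.
elems = (1, 2, 3, 6)
divides = {(a, b) for a in elems for b in elems if b % a == 0}
```

The divisibility example also shows why totality can fail silently: every axiom of a partial order is a conditional you can verify pair by pair, while totality demands a positive fact about every pair.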

Previous post here. Next post here.

# Signatures

My normal tendency is to write long posts that I never finish. I’ll start off this series with small posts to see if I can break the habit.

The idea of Horn clauses emerged from model theory, so I will begin there. Model theory considers ideas that can expressed in very limited languages. You begin with a small vocabulary of constants, function symbols and relation symbols, known as the signature. For example, you can express the theory of groups in terms of two function symbols and one constant: the group product, the inverse function, and a constant that represents the unit. The theory of directed graphs can be described by a single relation symbol R(x,y) that expresses whether a directed edge begins at x and ends at y.

When coupled with first-order logic, even a very limited language can be very expressive. Set theory itself, for example, can be expressed using a single relation symbol (set membership). Horn clauses are special first-order logical statements that are not nearly as expressive, but still cover very many cases, as we shall see.

Previous post here. Next post here.

# The ubiquitous Horn clause

I was musing about the foundations of mathematics the other day, when it occurred to me that you could make a pretty good case that the key foundational idea of mathematics is that of Horn clauses (also known as universal Horn sentences). Horn clauses, despite being obscure outside certain areas, are ubiquitous. Many (perhaps most?) basic mathematical objects can be described by Horn clauses. Fundamental category-theoretic notions have Horn clause interpretations. Even first-order logic, which contains Horn clauses as a special case, can be viewed as having inference rules in the form of Horn clauses applied at the level of proofs.

I thought I’d spend a couple of posts describing Horn clauses, and laying out the case for their ubiquity.

Next post here.

# Tremellius and Naibod

God Plays Dice has a post that answers a question I’ve long had about the Mathematics Genealogy Project: just how far back can you go? The answer is 1572, when Immanuel Tremellius and Valentine Naibod advised Rudolph Snellius. Snellius was the father of Willebrord Snellius, who discovered Snell’s law.

Tremellius was a Bible translator who was briefly jailed for being a Calvinist. It sounds like he was forced to move frequently as the prevailing winds for Protestants changed. (This was the early Reformation.) Naibod was an astrologer who had a book banned by the Catholic Church. An astrological prediction told him that his life was in danger, so he tried holing up in his house until the danger passed. Since the house showed no external signs of life, thieves thought the house was abandoned and broke in. Discovering Naibod, they murdered him. Apparently astrology works after all.

The Genealogy Project has a page dedicated to what it calls extrema. I would support a campaign to rename the Guinness Book of World Records the Guinness Book of Extrema.

Update. In between when I hit “Post” and now, the Mathematics Genealogy site updated its database, making this post completely obsolete.