Is Math Getting Too Hard?

The Edge Foundation has collected over a hundred essays in response to the question “What is your dangerous idea?”.

I haven’t yet read all of them (75,000 words!), but I thought that Steven Strogatz’s idea was worth mentioning. With reference to the four-colour theorem, the classification of finite simple groups, and sphere packing, he worries that mathematics might be getting too hard: that the use of computer programs in mathematical proofs leaves mathematicians able to show something is true without understanding why.

Obviously the use of computers as an aid in proofs is relatively new. But is it new that there are results where we don’t really understand why they are true? I’ve always thought that at the frontiers things are usually not well understood; but as the body of knowledge grows, new tools are developed, new insights are achieved, and what was hard becomes easier. Computer proofs may have skewed this progression somewhat, but do they signal a more fundamental change? It’s worth wondering whether, without computers, mathematicians might have continued working on the four-colour theorem, and whether we might have a “real proof” by now.

I don’t think I’m quite ready to accept the idea that we are now reaching the limits of the human brain.

Aside: Professor Strogatz mentions a recent article by Brian Davies, “Whither Mathematics?”, which discusses similar issues, including the formal verification of computer programs when they form part of a mathematical proof. Until now I’ve not paid much attention to such things, but if mathematical proofs are going to rely on computer programs, then we’ll need techniques to verify that those programs are correct, so that the whole argument can be checked as rigorously as a traditional proof.
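
As a concrete illustration of what such checking looks like (my own sketch, in a modern proof assistant; neither the Edge essays nor the Davies article uses Lean): in a system like Lean 4, a tactic may find a proof by search or brute computation, but a small trusted kernel re-checks every step, so “is this proof correct?” reduces to “is this small checker correct?”.

```lean
-- Machine-checked proofs in Lean 4 (illustrative only; any proof
-- assistant would do). The trusted kernel verifies whatever proof
-- the tactic produces before the theorem is accepted.
theorem my_add_comm (m n : Nat) : m + n = n + m := by
  omega    -- a decision procedure for linear arithmetic finds the proof
theorem two_pow_ten : 2 ^ 10 = 1024 := by
  decide   -- brute-force evaluation, certified by the kernel
```

Of course, this only pushes the trust down onto the proof checker itself, which then needs the same scrutiny.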

Another aside: Not Even Wrong and Cosmic Variance have some comments about a few of the physics-related Edge essays.

8 thoughts on “Is Math Getting Too Hard?”

  1. In case anyone reading this is interested, there’s now a whole branch of computer science which studies formal verification of computer programs and program specifications, including methods for automating verification. Much of this work uses formal logic, which is one reason why modal, temporal, epistemic, deontic, and other logics now play such a large role in CS.

    A fascinating and easy-to-read account of the conflicts between mathematicians and computer scientists in the development of formal verification can be found in this book by Donald MacKenzie (a sociologist of technology at the University of Edinburgh, Scotland): “Mechanizing Proof: Computing, Risk, and Trust” (MIT Press, 2004).

  2. “the use of computer programs in mathematical proofs leaves mathematicians with the ability to show something is true without understanding why.”

    I was always under the impression that people still don’t really understand why the Monstrous Moonshine conjectures are true despite Borcherds’s proof. His proof works, but it doesn’t seem to satisfy people’s intuitions. So I think there’s nothing really new about proof without understanding. In fact, ever since mathematicians first figured out how to do algebraic manipulation there have been proofs without understanding.
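
    (An editorial illustration, not sigfpe’s own example: even a schoolbook identity shows the gap. A proof by induction certifies that 1 + 2 + … + n = n(n+1)/2 mechanically, without conveying any insight, while Gauss’s pairing of the sum with its own reversal shows why it holds.)

    ```latex
    % S written forwards plus S written backwards, added column by
    % column: each of the n columns sums to n+1.
    \[
      2S \;=\; \underbrace{(1+n) + \bigl(2+(n-1)\bigr) + \cdots + (n+1)}_{n \text{ columns, each summing to } n+1}
         \;=\; n(n+1)
      \qquad\Longrightarrow\qquad
      S \;=\; \frac{n(n+1)}{2}.
    \]
    ```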

  3. I second sigfpe’s opinion, and take it one further – all through grad school I was able to show things were true with absolutely no understanding of why :)

    Kidding aside, this “problem” has been around since the Hilbert basis theorem and I worry about it not one bit. Even if you look at a computer proof as an oracle, it isn’t as if you only get one question. As Robbie said, knowledge accretes around a fact, and eventually comprehension dawns. I feel that the people who worry about this have never actually done any mathematical research and felt the process happen in their heads.

  4. It depends on your level of paranoia. If a very useful theorem has been proved only by a computer (because it would take years for a human to verify the proof), then you can build truckloads of maths on top of it, possibly creating other computer-only proofs along the way.
    What if one day you reach some insane conclusions and realize that you have to check every one of these robot-proofs?
    The situation is very similar to compiling programs: you have to trust the compiler, or else you can’t get anything done. If every native binary executable had to be checked by a human compiler writer, I wouldn’t be able to post this message.

  5. “You have to trust the compiler”

    Actually, I think it’s different. I don’t trust the compiler (and this isn’t just abstract theoretical doubt; it’s based on actual experience). Binary executables do need to be checked. This checking takes the form of testing, using the principle of philosophical induction: once I’ve tested it enough, I’ll decide the code is good enough to release. But I’ll never say “this absolutely definitely works”. In mathematics I need to say “this absolutely definitely works”.
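
    (To make the testing-as-induction point concrete, a minimal sketch; `spec_mul` and `fast_mul` are made-up names for illustration. Each passing trial raises confidence, but no number of trials yields the mathematician’s “absolutely definitely”.)

    ```python
    import random

    def spec_mul(a, b):
        """Trusted-but-slow specification: multiplication as repeated addition."""
        if b < 0:
            return -spec_mul(a, -b)
        return sum(a for _ in range(b))

    def fast_mul(a, b):
        """Implementation under test (a stand-in for some compiled output)."""
        return a * b

    # "Philosophical induction": after enough passing trials we *decide*
    # to trust the code -- we never prove it.
    for _ in range(10_000):
        a, b = random.randint(-10**6, 10**6), random.randint(-100, 100)
        assert fast_mul(a, b) == spec_mul(a, b), (a, b)
    print("10,000 trials passed: confidence gained, certainty not")
    ```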

  6. I must admit that this handwringing doesn’t make a whole lot of sense to me.

    After all, the idea of a true but unprovable statement is not exactly new, even if people are still in denial about it. And that is a much more devastating thing than the “true, but only provable by a messy proof” situation of the four-colour theorem and things like it. Where is the logical requirement that truth be neat and human-understandable?

    Now, intuition is of course a good thing. One point being missed by Strogatz is that a major purpose of “Wolframian” experiments is exactly to build intuition by exploring the space of what is possible, instead of sticking to the easy cases. And by exploring the larger space, you get a sense of where known knowledge fits.
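
    (A sketch of what such a “Wolframian” experiment can look like, my illustration rather than the commenter’s: simulate an elementary cellular automaton and watch complex behaviour fall out of a very simple rule.)

    ```python
    def step(cells, rule):
        """One generation of an elementary cellular automaton (wraps around)."""
        n = len(cells)
        return [(rule >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % n])) & 1
                for i in range(n)]

    cells = [0] * 31 + [1] + [0] * 31   # a single live cell in the middle
    for _ in range(16):                 # rule 30: a trivial rule, complex output
        print("".join(".#"[c] for c in cells))
        cells = step(cells, 30)
    ```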

  7. “I need to say ‘this absolutely definitely works’.”

    I’d rephrase that as: I’d like to say “this absolutely works”. There are only finitely many proofs that fit on one page, so at some point we’re guaranteed to spill onto a second page. I’d like to hope that I’ll never reach my personal limits in my lifetime, but that ought to become less and less likely with every generation.

  8. PeterMcB: I was vaguely aware of formal verification in computer science. However, until now, I never thought it would be useful for any programs I’d be interested in, and so I never paid much attention to it. (I always imagined it as something for control systems on machines that can’t afford to fail.)

    sigfpe: Not trusting the compiler seems reasonable. But where does it stop? Do you trust the microcode in the processor? (I remember that about 10–15 years ago the Pentium processors had a bug in their floating-point arithmetic.) Do we need formal verification of everything from the source code down to the logic gates (i.e. the compiler and the microcode)? And then what about the logic gates themselves? We know they work from theoretical and experimental physics, and from decades of processors doing what we expect them to, but we can’t verify mathematically that they work.
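
    (For the record, that was the 1994 Pentium FDIV bug, and the classic detection test is short enough to sketch here: divide, multiply back, and look at the residue.)

    ```python
    # The well-known FDIV-bug operands: on a flawed Pentium the residue
    # came out at roughly 256; on a correct FPU it is 0 (or very nearly).
    x, y = 4195835.0, 3145727.0
    print(x - (x / y) * y)
    ```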

    I’m sure these questions have already been tackled by mathematicians and computer scientists. I’m curious what the answers are.

    (Sorry for the late reply; I’ve been traveling).

Comments are closed.