Scott Aaronson has posted an extremely funny parable on his weblog about physicists and impossibility proofs.
The University of Liverpool (UK) has a vacancy for a post-doc researcher and for a PhD student, both in automated mechanism design. The expertise we are looking for includes game theory, mechanism design and auction theory, mathematical economics, and computational approaches to these areas. These posts are part of a major UK research project on market-based control of complex computational systems.
I’ve been poking around Cosma Shalizi‘s website recently. He has a little bit of everything: a weblog, a set of book reviews he’s written, and a large collection of mini-essays (which he calls “notebooks”). The bulk of the material revolves around the related subjects of probability, machine learning, and dynamical systems (all with a strong physics flavor), but he touches on many other topics.
Many authors put up drafts of their books as they write them, only to take them down once the book is published. I understand the reasons, but it’s nice when such a book stays available in electronic format. David MacKay’s book Information Theory, Inference, and Learning Algorithms is still available, even though it’s been published by Cambridge University Press.
Sanjeev Arora is writing a new book on computational complexity, and he’s posting the draft chapters online. Since the standard textbook on the subject, Papadimitriou’s Computational Complexity, is from 1993, a more-recent take on the subject is much appreciated.
I’m a total amateur in the subject, but my impression looking at the notes is that the big difference between 1993 and now is that in 1993 researchers were already beginning to suspect that the best idea in years for settling P versus NP (circuit complexity) wasn’t going to work, and now they’re pretty sure it won’t work.
In the introduction, Arora provides a more succinct summary of the post-1993 work than I possibly could:
The list of surprising and fundamental results proved in the 1990s alone could fill a book: these include new probabilistic definitions of classical complexity classes (IP = PSPACE and the PCP Theorems) and their implications for the field of approximation algorithms; Shor’s algorithm to factor integers using a quantum computer; an understanding of why current approaches to the famous P versus NP will not be successful; a theory of derandomization and pseudorandomness based upon computational hardness; and beautiful constructions of pseudorandom objects such as extractors and expanders.
We’ve previously linked to some expository material on expanders.
Via Scott Aaronson.
On his blog, Sigfpe has proposed SIGFPE’s Law: that unlike the exponential growth of ordinary computers, the number of qubits in a quantum computer will only grow linearly. So when the mafia hacks into your bank account a la Sneakers because of their quantum public-key-encryption-cracking supercomputer, blame Sigfpe for getting it so terribly wrong. Also, blame him for the lack of flying cars.
(One subtlety to keep in mind — which Sigfpe alludes to — is that a 2-qubit computer is not the same thing as two 1-qubit computers. The states of the two qubits must be able to enter superpositions with each other — that is, become entangled — to really count.)
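That subtlety can be made concrete with a little linear algebra. Two independent 1-qubit computers can only produce "product states" (tensor products of single-qubit states), while a true 2-qubit computer can also reach entangled states like the Bell state, which cannot be factored that way. Here is a minimal sketch (the state names and helper functions are my own illustration, not from Sigfpe's post):

```python
from math import sqrt

# Single-qubit states as amplitude lists: |0> and |+> = (|0>+|1>)/sqrt(2)
zero = [1.0, 0.0]
plus = [1 / sqrt(2), 1 / sqrt(2)]

def tensor(a, b):
    """Joint state of two independent qubits: the tensor product."""
    return [x * y for x in a for y in b]

# Two separate 1-qubit computers can only produce product states.
product_state = tensor(zero, plus)

# A genuine 2-qubit computer can also hold entangled states,
# e.g. the Bell state (|00> + |11>)/sqrt(2).
bell_state = [1 / sqrt(2), 0.0, 0.0, 1 / sqrt(2)]

def is_product(state):
    """A 2-qubit state (a00, a01, a10, a11) factors into two 1-qubit
    states exactly when the 2x2 amplitude matrix has determinant 0."""
    a00, a01, a10, a11 = state
    return abs(a00 * a11 - a01 * a10) < 1e-12

print(is_product(product_state))  # True: two independent qubits
print(is_product(bell_state))     # False: entangled, no factorization
```

This is exactly why n qubits are worth more than n separate 1-qubit machines: the joint state space has 2^n amplitudes, not 2n.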
This post on Scott Aaronson’s weblog is every internet discussion thread in microcosm (except for ours, which are shockingly polite by internet standards). Scott thought he was explaining the notion of non-constructive proof to a dense but argumentative student. If you read the comment thread, though, you’ll discover that the “student” thought Scott was some sort of nut who imagined he’d invented a computer more powerful than a Turing machine.
Epistemic modal logic was invented by the Finnish philosopher Jaakko Hintikka to represent knowledge and belief (in a book published in 1962), and is now used by computer scientists to model and design systems of autonomous software agents. It uses modal operators to indicate which propositions are known to which agents.
A common modal system for beliefs is C. I. Lewis‘ system S5, which (among other axioms) assumes that agents know what it is they know (positive introspection) and know what it is that they don’t know (negative introspection). (In other words, if an agent does not know whether or not some proposition is true, then the agent knows that he does not know whether or not that proposition is true.) These are quite strong assumptions, and have been criticized as being unrealistic. Two computer scientists, Joseph Halpern and Leandro Chaves Rego, have now identified negative introspection as the axiom responsible for the satisfiability problem for S5 being NP-complete — perhaps surprisingly, easier than the PSPACE-complete satisfiability problem of weaker modal logics.
As an aside, because epistemic logicians have long discussed positive and negative introspection, they fully understood Donald Rumsfeld’s statements about known unknowns versus unknown unknowns.
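The S5 semantics behind these axioms can be sketched in a few lines. In S5's Kripke semantics the accessibility relation is an equivalence relation, so it partitions worlds into cells the agent cannot tell apart, and an agent knows a proposition exactly when it holds in every world of the current cell. The toy model below (my own illustrative example, not from Halpern and Rego's paper) shows negative introspection falling out of that definition:

```python
# Two worlds the agent cannot distinguish; proposition p holds only in w1.
worlds = {"w1", "w2"}
indistinguishable = {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}}
p = {"w1"}  # a proposition is modeled as the set of worlds where it is true

def knows(prop, world):
    """K(prop) holds at `world` iff prop is true in every world
    the agent considers possible from there (S5 semantics)."""
    return all(w in prop for w in indistinguishable[world])

# At w1, p is actually true, but the agent does not know p,
# because the indistinguishable world w2 falsifies it.
print(knows(p, "w1"))  # False

# Negative introspection: "I don't know p" is itself a proposition --
# the set of worlds where knows(p, .) fails -- and the agent knows it.
not_know_p = {w for w in worlds if not knows(p, w)}
print(knows(not_know_p, "w1"))  # True
```

Because the equivalence cells make every world in a cell "look the same," what the agent knows is constant across the cell — which is precisely why both introspection axioms hold automatically in S5.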
Scott Aaronson and Greg Kuperberg have put together a website, the Complexity Zoo, that describes 443 (!) computational complexity classes.