I just spotted an interesting paper on arXiv: Computing over the Reals: Foundations for Scientific Computing, which suggests a new model of computation with real numbers.
I couldn’t figure out what was new about their model of computation. Do you have an explanation?
I looked at the paper more closely today, and it’s not very impressive. It seems like an introduction to their preferred approach, and not anything particularly new.
A more interesting approach, IMHO, is to use lazy evaluation to compute as many digits as required.
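To make the lazy-evaluation idea concrete, here is a minimal sketch (my own illustration, not anything from the paper): a Python generator that produces the decimal digits of sqrt(2) one at a time using exact integer arithmetic, so a consumer can pull exactly as many digits as it needs.

```python
from itertools import islice

def sqrt2_digits():
    """Lazily yield the decimal digits of sqrt(2): 1, 4, 1, 4, 2, 1, ...

    Uses the classic digit-by-digit square-root algorithm with exact
    integer arithmetic: p is the number built from the digits emitted
    so far, r is the scaled remainder still to be accounted for.
    """
    p, r = 0, 2
    while True:
        # Largest digit d such that appending it doesn't overshoot.
        d = 9
        while (20 * p + d) * d > r:
            d -= 1
        yield d
        # Subtract the contribution of d and shift in the next
        # two decimal places of the remainder.
        r = (r - (20 * p + d) * d) * 100
        p = p * 10 + d

# Pull only as many digits as required:
print(list(islice(sqrt2_digits(), 6)))  # → [1, 4, 1, 4, 2, 1]
```

The point is that precision is decided by the consumer, not fixed up front: asking for more digits just resumes the generator, which is the essence of the lazy approach to exact real arithmetic.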
I think that from a theoretical point of view their approach is equivalent to the lazy evaluation approach. They're contrasting it with a different model of computation in which you can do real arithmetic with infinite precision.
While I've only just started reading this, I think it probably merits some attention, especially given the pedigree of one of the authors (Stephen Cook, of NP-completeness fame).