Seven years ago Stephen Wolfram published *A New Kind of Science*. I remember the hype surrounding this book. Journalists jumped at the chance to praise a heavy tome that was too complex for most of them to fully understand, but that shipped with an ambitious title and the implicit guarantee that comes from a genius like Wolfram.

It was “buzz worthy” for sure, and all the attention quickly attracted the interest of numerous scientists from many disciplines. As soon as the mathematicians, and particularly the computer scientists, managed to get through its 1000+ pages, the first negative reviews began to pour in. In all fairness, a few scientists had a little too much fun with this book and managed to showcase their comedic abilities by writing some of the most hilarious reviews known to humankind.

In this controversial best-seller, Stephen Wolfram systematically dissects the subject of cellular automata and its relevance to other scientific disciplines. It’s a book that covers a lot of ground and is arguably a remarkable piece of writing. Yet the scientific community greeted it with a fair dose of criticism.

So what went wrong? The main problem with *A New Kind of Science* is that it set very high expectations due to its author, its title, and the numerous reminders throughout the book of how important this material is.

The main accusations ranged from the book being called a display of Wolfram’s ego, to having very little “new” content, all the way to the more severe claims of not crediting other people’s work. For example, the idea of the universe as a cellular automaton was first presented by Konrad Zuse, so Wolfram’s “new” idea of a discrete, computable universe was anything but groundbreaking. On top of that, the most remarkable technical achievement revealed in this book was arguably the proof that the rule 110 cellular automaton is Turing complete. While this was conjectured by Wolfram, it was actually proven by his assistant Matthew Cook, who was prevented from publishing his results elsewhere by Wolfram’s lawyers.
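For readers curious to experiment, here is a minimal sketch (mine, not from the book) of an elementary cellular automaton; passing `rule=110` runs the rule 110 automaton discussed above. The zero boundary condition and the text rendering are illustrative choices.

```python
# Illustrative sketch: an elementary (one-dimensional, two-state) cellular
# automaton. Each cell's next state depends on itself and its two
# neighbors; the rule number's binary expansion serves as the lookup
# table (110 in binary is 01101110).
def step(cells, rule=110):
    """Advance one generation, treating cells beyond the edges as 0."""
    n = len(cells)
    out = [0] * n
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        neighborhood = (left << 2) | (cells[i] << 1) | right
        out[i] = (rule >> neighborhood) & 1
    return out

# Start from a single live cell on the right and print a few generations;
# rule 110's characteristic structures grow toward the left.
row = [0] * 20
row[19] = 1
for _ in range(10):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```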

It’s important to understand that, while perhaps not accepted as the breakthrough that Wolfram had hoped for, this book – and the methods for studying computational systems illustrated within it – is far from gibberish. Wolfram’s ambitious project failed in the eyes of the community due to the extremely high expectations that were set for this book. When you claim to have something radically new, you must be able to back that claim up in a convincing enough manner or else you’re bound to end up with egg on your face.

To be fair to Wolfram (for the few who are not familiar with his work): NKS is a controversial project, but he was already famous for having created the excellent program Mathematica (whose 7th version was recently released), one of the world’s most complete and advanced pieces of mathematical software.

Now Wolfram is at it again. According to his recent announcement, he is about to unleash something called WolframAlpha on the world, which combines his work on both Mathematica and NKS. In Wolfram’s own words:

> I had two crucial ingredients: Mathematica and NKS. With Mathematica, I had a symbolic language to represent anything—as well as the algorithmic power to do any kind of computation. And with NKS, I had a paradigm for understanding how all sorts of complexity could arise from simple rules.

The project has been kept on the down-low for the past few years, while some of the brightest mathematicians and engineers employed by Wolfram Research, Inc. worked on it. It’s currently in private beta, but will go live in May of this year. At first glance, it would seem to be just another search engine à la Google. But is it? Not quite. It’s labeled as a “computational knowledge engine”, whose aim is to compute answers from the human knowledge available on the web. Whereas on Google you search for strings and the results are a series of relevant links, WolframAlpha will supposedly be able to parse and “understand” a query entered in English, and compute an answer based on the extensive knowledge stored in its system (assuming that a univocal answer exists). Conceptually speaking, this is leaps and bounds harder to get right than what Google does, which is essentially to look for matching strings and rank the results based on the link-based popularity of the matching pages (for more information about the mathematics behind Google, read this book).
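To make the “popularity” ranking mentioned above concrete, here is a minimal sketch of PageRank power iteration on a tiny hand-made link graph. The graph, damping factor, and iteration count are illustrative choices, not Google’s actual implementation.

```python
# Illustrative sketch of PageRank: a page's score is the stationary
# probability of a random surfer landing on it, where the surfer follows
# links with probability `damping` and jumps to a random page otherwise.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Everyone gets the "random jump" share up front.
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outgoing in links.items():
            if not outgoing:
                # Dangling page: spread its rank evenly over all pages.
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:
                share = damping * rank[p] / len(outgoing)
                for q in outgoing:
                    new[q] += share
        rank = new
    return rank

# Toy web: "c" is linked to by both "a" and "b", so it ranks highest.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
```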

According to Nova Spivack, who had a chance to try out WolframAlpha, the service is able to compute factual answers to questions such as “What is the location of Timbuktu?”, “How many protons are in a hydrogen atom?”, “What was the average rainfall in Boston last year?”, “What is the 307th digit of Pi?”, “Where is the ISS?” or “When was GOOG worth more than $300?”. This project has the potential to change the world as we know it, just like Google did. Several years ago Altavista was fine for most people’s search needs – or so we thought. It took Google to show us how much better off we could be search-wise, how much we needed Google, and ultimately how inadequate Altavista was. Unlike the case of Google and Altavista though, WolframAlpha would not replace Google, since the two services cover complementary needs. Having access to a service that’s able to compute answers out of the chaos of the factual information that’s available to man would be a major breakthrough for humanity and computer science. And if an API (Application Programming Interface) were to become available, other developers would be able to tap into that with their applications.
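To illustrate what tapping into such an API might look like, here is a purely hypothetical sketch of a client encoding a natural-language question as an HTTP query. The endpoint and parameter name are invented for illustration; no official API has been announced.

```python
# Hypothetical sketch: what querying a computational knowledge engine
# over HTTP might look like. The base URL and the "input" parameter
# name are invented, not a real API.
from urllib.parse import urlencode

def build_query_url(base_url, question):
    """Encode a natural-language question as an HTTP GET request URL."""
    return base_url + "?" + urlencode({"input": question})

url = build_query_url("http://example.com/query",
                      "What is the 307th digit of Pi?")
print(url)
```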

Bold claims, high expectations. You can understand why, two months away from experiencing something so potentially revolutionary, there is a lot of hype surrounding this project – but also major skepticism. For many this is *A New Kind of Science* all over again, especially since natural language processing and “computing knowledge” are extremely ambitious challenges in a realm where many have failed before. Pulling this one off would be a major accomplishment (one that would dwarf Wolfram’s past achievements, including Mathematica), and, at long last, it would be the hard-earned, practical validation of some of the methods and philosophies expressed by Wolfram in NKS.

I fully expect people to find bugs and to ask many simple questions for which we will see bizarre answers. We’ll read blog posts about the whole thing and perhaps have a good laugh. But what interests me the most is whether, as Google did in the past, this new engine will prove practical and useful on an everyday level. Bugs are fair play and expected, but what we’re looking for here is a spark of true innovation thanks to the mathematical modelling of human knowledge.

I suspect that this engine will either have us in awe like Mathematica did, or leave us with mixed feelings – if not downright disappointment, like *A New Kind of Science* did for many. I can’t help but hope for the former, as I wait for my chance to try it out.


> It’s labeled as a “computational knowledge engine”, whose aim is to compute answers from the human knowledge available on the web.

Perhaps someday in the future, but for now, as I understand it, it won’t access public web data, but rather private databases. My guess is that they’re the same ones that Mathematica 6+ can access under the name of “data on demand”.

I hope for the best, but I have serious doubts that it will be a revolutionary thing. So I hope I am mistaken.

Hi Bo,

Based on Doug Lenat’s article, it looks like WolframAlpha will mostly rely on a large internal database and some ad hoc real-time web services. This gives me more confidence regarding the accuracy of the data, but less hope when it comes to the revolutionary aspect of this service. That said, I’ll wait and see before making any judgement calls. If their database is large enough, this could still become an extremely useful service. I share your skepticism, but truly hope it will be unjustified.

When natural language processing (and the pattern recognition it entails) is involved, I’m a bit skeptical. It looks like the system’s capabilities will be limited by a “data on demand”-like system, which is useful but far from revolutionary.

It will be hard to try it in an unbiased way after reading NKS… I actually studied with one of the people whom Wolfram forgot to cite in his book, and was pretty pissed about the lack of credit to people like Konrad Zuse.

And the issue with Matthew Cook and the Rule 110 proof, which to me is the main result in the book (together with the equivalence between one-dimensional CAs and Turing machines), just made me more disappointed.

I agree with you, Lucas. And thanks for your excellent comment.