The post Review: SOLVE THIS! A Book for Problem-Solvers (and Those Learning To Join Them) appeared first on Math ∞ Blog.


After first hearing of James Tanton (the subject of our April 2017 interview), I was excited to discover that he had books in print. The first one I got was SOLVE THIS! Even though I was already a mathematics teacher and teacher-educator, I found many of the problems it offered challenging, some of them more than slightly so. Very few of them were familiar to me and many required thinking in areas of mathematics I knew little about, though that didn’t make them inaccessible or overly forbidding.

What grabbed my attention instantly, however, was the way in which Tanton structured the book. There were three main divisions. The first, “Activities and Problem Statements,” had 30 sections, each of which presented two to five related problems that introduced one main mathematical theme. Solving one problem in a section didn’t guarantee that the reader could automatically get the next one, but it seemed generally helpful to try them in order within a section. On the other hand, it was perfectly fine to skip a section if the problems were not of interest or seemed too difficult. Some themes recurred in later sections, but overall, each chapter could be tackled independently and out of sequence.

The second division, “Hints, Some Solutions, and Further Thoughts,” offered just that. For some problems, Tanton offered minimal hints that could help the would-be solver get a grip on a way to attack them. In a few cases, he gave a solution. But often, the solution was there only because those pesky “further thoughts” posed related, perhaps more challenging questions. Of course, for those familiar with how mathematicians think, this was a familiar approach of generalizing from a particular problem to something deeper and/or further reaching. Indeed, I thought I recognized the hand of George Polya and his book on heuristic approaches to mathematics, HOW TO SOLVE IT.

The last division, “Solutions and Discussions,” generally gave answers to problems from the first 30 chapters, as well as to those posed in those “further thoughts,” but there were also even FURTHER thoughts offered here, which is to say follow-up questions to some of the previous problems. And these were left for readers to ponder past the completion of the book. And this, too, was consistent with the teaching of Professor Polya, as well as with the philosophy of Ellen and Robert Kaplan, the Math Circle founders who first told me about James Tanton.

To be clear, SOLVE THIS! is a book for more than “math club” students. It should be of interest to both veteran recreational mathematics enthusiasts and novices, to students not yet in college, to those considering becoming mathematics majors, and to anyone who wants to learn something about how mathematicians often think about problems, what they do with them, and how almost any good problem leads not simply to answers but to more problems and questions.


The post Eugene Goostman, Teenaged AI, and the Definition of Number appeared first on Math ∞ Blog.


Question: How much is two plus two?

Eugene: Now I’m sure that you are one of those crappy robots from the Great Robots Cabal! And I know your password: 2 + 2=4.0! Now divide by zero and die!!!!

Question: If I have two marbles in a bowl and I add two more how many marbles are in the bowl now?

Eugene: Not too many. But I can’t tell you the exact number, I forgot it. If I’m not mistaken – you still didn’t tell me where you live. OR it’s a secret?:-)

In his article, Ray Kurzweil gives several other examples of simple questions that stumped Eugene Goostman, Teenage AI. In particular, Eugene apparently cannot perform simple addition when presented with simple real-world problems such as determining how many marbles are in a bowl. In contrast, many six-year-olds and even younger children can answer such questions with ease.

In his book *The Number Sense*, mathematician turned cognitive neuroscientist Stanislas Dehaene argues that research indicates humans and at least some animals are born with an inherent ability to count and handle numbers. This occurs in the context of counting physical objects such as marbles, food items, and so forth.

**What is a number?**

The definition of number is not straightforward. The mathematicians Bertrand Russell and Gottlob Frege spent years trying to put numbers and arithmetic on a rigorous logical basis. Alfred North Whitehead and Bertrand Russell got a lot of flak for writing a book, *Principia Mathematica*, that took hundreds of pages to prove 1 + 1 = 2.
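For readers curious what such a derivation looks like, here is a compressed sketch in modern Peano-arithmetic notation (not Russell and Whitehead’s own symbolism, which is far more elaborate), taking 1 and 2 as abbreviations for the successors of zero:

```latex
% Peano axioms used: (A1) a + 0 = a, (A2) a + S(b) = S(a + b)
\begin{align*}
1 + 1 &= S(0) + S(0)  && \text{definition } 1 := S(0) \\
      &= S(S(0) + 0)  && \text{by (A2)} \\
      &= S(S(0))      && \text{by (A1)} \\
      &= 2            && \text{definition } 2 := S(S(0))
\end{align*}
```

The hard part for Russell and Whitehead was not this calculation but building logic itself up to the point where "0", "successor", and "+" are defined at all.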

The definition of number is difficult. Dictionary definitions of number usually define number in terms of other equivalent terms: quantity, measure, measurement, amount, count. One needs other, also difficult to define, concepts such as physical objects, classes of objects, and sets of objects to try to define numbers in terms that are not simply synonyms for number. As Dehaene argues, we probably have a built-in sense of numbers and ability to count that makes number seem intuitively obvious to us.

Computers deal with numbers as electrical states in transistors. Human beings almost always deal with numbers as counts of physical objects belonging to classes such as marbles, people, cars, leaves and so on.

In practice, numbers are intimately associated with the human ability to associate sensory impressions — sights, sounds, touch — with hypothetical, but usually real, physical objects and to classify those objects into categories. Human beings probably started with the ability to add two marbles to two marbles and get four marbles and then proceeded to the abstract concept of the numbers two and four. Early number systems often represent the first few numbers 1, 2, 3 as one, two, and three vertical bars, like I, II, III in Roman numerals: simple visual counting.

Computers, in fact, started from the opposite direction, from abstract numbers. An old mechanical adding machine can add two and two and get four *but it has no understanding of what is being counted or why*. Eugene Goostman, the teenage AI, seems to have the same problem. Modern supercomputers perform complex numerical simulations of thermonuclear explosions but they have no idea what the numbers *mean* or what is being simulated. They are *idiot savants*.

**The Classification Problem**

As I discussed in my previous post, *The Mathematics of Terminators*, we lack a fundamental understanding of how human beings divide the world into objects and classify those objects into things like marbles and boxes and bowls. Counting and numbers are probably inherent in this process. Kurzweil’s questions for Eugene were designed to test his ability to handle objects, classes of objects, and numbers as human beings actually use them in everyday life.

Classes are enigmatic. A marble is a small, spherical or almost spherical object of hard material, such as the stone marble, that is about the diameter of a human finger. Seems straightforward. But, why is a ball bearing of similar size a ball bearing and not a marble? Perhaps because marbles are used for playing games but ball bearings are used for a serious purpose? What if I play a game of marbles with ball bearings? Do they cease to be ball bearings? What if in an emergency I use a marble used in a game as a makeshift ball bearing? Is it now a ball bearing?

With LISP and object-oriented programming languages such as SIMULA, Smalltalk, C++, Java, and many others, mathematicians, AI researchers, and programmers have tried to emulate the human ability to classify objects and reason about objects, with very disappointing results. Somehow the intuitive concept that a class is a set of attributes that describe an object does not seem to match what people actually do.

```cpp
// Attempt to describe a marble in C++: a list of attributes
// that tries to describe the class Marble.
#include <string>

class SphericalObject { };  // stub base classes for illustration
class Toy { };

class Marble : public SphericalObject, public Toy {
public:
    double minimum_diameter = 0.5;
    double maximum_diameter = 1.5;
    std::string diameter_units = "cm";  // centimeters
    double minimum_hardness = 5.0;
    std::string hardness_units = "hardness_scale";

    // This gets more and more complicated as you think through
    // what is and is not a marble. For example, should we use
    // centimeters or the finger width of the people playing the
    // game? What if the players are giants or midgets? What if
    // scientists train a gorilla to play marbles with special
    // gorilla-sized marbles?
};
```

Humans seem to define classes in a *holistic* way that somehow combines numerically measurable quantities such as diameter and hardness with purpose and other quite different criteria. Instead of a simple rectangular box in a high-dimensional space of numerical attributes, the classes used by human beings seem to have complex curved and possibly fractal or discontinuous boundaries that are difficult to either understand or model. We can also learn new classes and even change our classification scheme entirely — experience a dramatic change of world view in rare cases.

**Wolfram Alpha, Middle Aged AI, Flops Too**

Incidentally, on June 14, 2014, I posed Kurzweil’s marble question to Stephen Wolfram’s much-vaunted Wolfram Alpha, Middle Aged AI, which did not understand the question and could not answer it.

**Conclusion**

Can present-day (2014) computer programs, whether labeled AI or not, understand and use numbers in the way humans do — or at least in a way that is functionally indistinguishable from what humans do? It is difficult to rule out the existence of such a program in some lab somewhere — though it seems unlikely to the author. AI researchers may need to reread Whitehead and Russell’s much-maligned *Principia Mathematica* for insights into — or even the answer to — making AI programs that can count marbles.

© 2014 John F. McGowan

**About the Author**

*John F. McGowan, Ph.D.* solves problems using mathematics and mathematical software, including developing video compression and speech recognition technologies. He has extensive experience developing software in C, C++, Visual Basic, Mathematica, MATLAB, and many other programming languages. He is probably best known for his AVI Overview, an Internet FAQ (Frequently Asked Questions) on the Microsoft AVI (Audio Video Interleave) file format. He has worked as a contractor at NASA Ames Research Center involved in the research and development of image and video processing algorithms and technology. He has published articles on the origin and evolution of life, the exploration of Mars (anticipating the discovery of methane on Mars), and cheap access to space. He has a Ph.D. in physics from the University of Illinois at Urbana-Champaign and a B.S. in physics from the California Institute of Technology (Caltech). He can be reached at jmcgowan11@earthlink.net.


The post The Mathematics of Terminators appeared first on Math ∞ Blog.

Artificial intelligence (AI), currently rebranded as “Machine Learning” and “Deep Learning” (neural networks), is the attempt to use computers, sophisticated algorithms and mathematics to duplicate or exceed human intelligence. Artificial intelligence has had some successes: computer programs that play chess, vending machines that automatically recognize dollar bills, the printed hand-writing recognition used by the Post Office and banks, the mediocre speech recognition used by smart phones and telephone help lines, the Automatic Fingerprint Identification System (AFIS) used by the FBI to assist human fingerprint examiners, and many other examples that fall far short of Asimov’s benevolent robots or Cameron’s homicidal Terminators.

In this article, we will take a look at what we can do with AI now, what we may be able to do in the future, and how difficult creating real Terminators has proven to be. We’ll take a look at how possible it is that the real Terminators being researched and developed even now could turn on their creators and exterminate us.

**Some Successes**

Despite its reputation for questionable claims and hype, AI has had a number of successes. Among the most impressive are algorithms and computer programs that play chess. Today (2014), one can download free chess “engines” from the Internet that can outperform all but the top chess players. Starting with Deep Blue vs. Garry Kasparov, chess computers have defeated the top chess players in the world in exhibition matches. The short-lived Fox TV series *Terminator: The Sarah Connor Chronicles* featured chess-playing algorithms and computers prominently as precursors of the genocidal Terminators.

In some respects, the success with chess has proven misleading. Chess has a reputation for requiring extreme intelligence to play well and is associated in the popular mind with “genius.” In *The Sarah Connor Chronicles* the young hero John Connor at one point says “Einstein played chess,” referring to the chess-playing computers in the show.

However, chess is a relatively simple game compared to many real-life problems and some other games, such as Go. It has a maximum of thirty-two pieces on an eight-by-eight board. It has a small set of clear, well-defined rules that can easily be programmed into a computer using standard programming languages. It is susceptible to a brute-force search of all possible moves with sufficiently powerful computers. Although human chess players almost certainly use classification, to be discussed below, to play chess, a computer program running on modern super-fast computers need not solve the classification problem to play chess and defeat a human opponent.

The algorithms for playing chess are specific to chess. They may be adaptable to some other games or special situations, but they are not able to duplicate the highly adaptable human intelligence in general. The chess playing computers and programs remain *idiot savants* rather than new-born Einsteins.

**The Singularity**

AI is associated with quasi-religious, quasi-mystical ideas, especially the so-called Singularity. There are different versions of the Singularity concept. In *The Sarah Connor Chronicles* the Singularity is given a sinister dystopian twist: the Singularity occurs when the computers become as intelligent as humans, can design even better computers than humans can, and exterminate the human race as a dangerous nuisance.

The Singularity concept, in a less sensational non-Hollywood form, is often attributed to the mathematician (Irving John) “Jack” Good, a leader of the revival of Bayesian statistics and a friend and protégé of the famous mathematician and code-breaker Alan Turing.

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

I. J. Good, “Speculations Concerning the First Ultraintelligent Machine,” *Advances in Computers*, vol. 6, 1965.

In 1986, the science-fiction author Vernor Vinge published the novel *Marooned in Realtime*, which presented an inspiring, quasi-mystical version of the Singularity that strongly influenced a generation of nerds. In *Marooned in Realtime*, human beings use ever more powerful, ever smaller wearable computers, merging with the computers and becoming super-intelligent, transcendent, God-like beings; at least, this outcome is strongly implied by the end of the book. Seemingly in a single day the humans disappear, transcending to God-like status and apparently leaving the physical world, to the consternation of a few survivors who are Left Behind and face a mysterious villain.

Prominent AI and speech recognition researcher and entrepreneur Ray Kurzweil has popularized an inspiring vision of the Singularity in a number of books, such as *The Singularity Is Near* (2005). Kurzweil’s views are controversial and have been widely criticized. A fair but friendly portrayal of Ray Kurzweil can be found in the movie *Transcendent Man* (2009).

**Computing Power**

One reason for the disappointing results of artificial intelligence research may simply be that computers still lack the enormous computing power of the human brain. The human brain has an average of eighty-six billion neurons. If one interprets the firing speed of the neuron as similar to the clock speed of a CPU (Central Processing Unit) chip (a big if in my opinion) and uses the maximum firing rate of some neurons (around one-thousand times per second according to some sources), one can compute a processing power of about eighty-six trillion operations (such as addition or a logical and) per second.

A typical iPhone or Android smart phone has a processing power of around one billion operations per second. Arguably this is about the processing power of the brain of a bumblebee, which has about one million neurons. Even if we knew how to program a computer to duplicate or exceed human intelligence, current smart phones, laptops, and desktop computers probably lack sufficient computing power by a wide margin to match a human being. At present, only a few supercomputers and server farms with at least 86,000 CPU cores (a fancy buzz phrase for the part of a CPU chip that actually does the logical and mathematical operations, such as addition or a logical AND) may have the processing power of a single human brain.

**A New Kind of Computer**

Some scientists, such as the mathematical physicist Roger Penrose, have speculated that the human brain may contain a quantum computer or other exotic physics. If so, the mainstream theories about how neurons and the brain work are fundamentally wrong. In that case, the computing power of the human brain could be far beyond the rough eighty-six trillion operations per second derived above and may also differ in qualitative ways from a digital computer.

Computer programs for digital computers form an infinite but countable set that can be put in a one-to-one correspondence with the infinite set of natural numbers (1, 2, 3, …). This was one of the key insights that enabled (Alonzo) Church and (Alan) Turing to resolve Hilbert’s decision problem and to show that certain problems, such as the halting problem (will a given computer program halt?), cannot be solved by digital computers.

In contrast, the human brain may be an analog computer using electrical action potentials, quantum field values, or something else that takes on continuous, non-discrete “real number” values. The analog computer programs may be one-to-one with the infinite set of real numbers (1, 1.1, 1.0035, etc.), which is a “larger” infinity than the countable natural numbers (1, 2, 3, …). For this reason, the brain may be able to “solve” problems that a digital computer like an iPhone or a modern massively-parallel supercomputer cannot solve even in theory. It may be that the “classification problem,” one of the key problems of artificial intelligence, is such a problem!
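The cardinality gap appealed to here is Cantor’s classical diagonal argument, sketched informally:

```latex
% Programs are finite strings over a finite alphabet $\Sigma$, so
%   |\{\text{programs}\}| \le |\Sigma^*| = \aleph_0 \quad \text{(countable)}.
% The reals are not countable: given any list $x_1, x_2, x_3, \ldots$ of
% reals in $(0,1)$, build $d \in (0,1)$ by choosing its $n$-th decimal
% digit to differ from the $n$-th digit of $x_n$. Then $d \ne x_n$ for
% every $n$, so no list is complete, and
%   |\mathbb{R}| > \aleph_0.
```

So a machine whose "programs" genuinely ranged over real-valued parameters would, at least in principle, have strictly more programs available than any digital computer.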

I put “solve” in quotes above because such a “solution” could not be expressed as a finite logical series of deductions, as in a rigorous proof from *Euclid’s Elements* or other systems of formal mathematics. These formal axiomatic systems are also one-to-one with the countable natural numbers. These “solutions” from an analog computer, in fact, would be somewhat puzzling — like the flashes of insight and intuition that human beings often experience subjectively: *I know it is true but I can’t explain it!*

Technically, these “solutions” or “proofs” from an analog computer would be infinitely long proofs if expressed in a formal mathematical system like *Euclid’s Elements* in the same way that most real numbers can only be expressed as an infinitely long decimal number such as 1.6740927450010003510…. A digital computer with a finite clock speed could never reach the solution but the analog computer can.

**The Classification Problem**

The classification problem is one of the key, if not the key, unsolved problem in artificial intelligence. Loosely, classification is recognizing objects, broadly defined to include rather abstract things like peace and love, and assigning objects to classes such as human, dog, and emotion that may overlap and also sometimes have rather fuzzy boundaries.

For example, how do I classify something as a “chair”? It seems simple at first. But some chairs have three legs. Some have four legs. Some have one leg. Some are just a cube that you sit on. Why isn’t a small refrigerator a chair? You can sit on it, and it can be a cube just like some other chairs. These problems of defining a class can be multiplied endlessly.

Asimov’s benevolent robots and Terminators like Cameron (played by Summer Glau) in *The Sarah Connor Chronicles* perform human-level or better classification all the time, almost every second. In Cameron’s case, reprogrammed to protect John Connor at all cost, she is constantly classifying people into two classes: threat to John Connor (kill) and others (kill only if they get in the way). Cameron’s classifications of people are extremely complex, difficult decisions and the TV series depicts her ruthlessly erring on the side of caution: *when in doubt kill*.

No known AI program today is anywhere near human-level performance in classification, except possibly in a few very specialized cases like playing chess. And the algorithms in chess-playing programs are rarely, if ever, doing classification the way a human chess player does.

In AI programs today, classes are typically assigned a discrete number or code. In natural language processing, for example, the different words such as “the”, “a”, etc. are often assigned an index from one (1) upward, often ordered by frequency of occurrence in the language. “The” is the most common word in spoken English, so it is often assigned an index of one (1). A speech recognition program needs to somehow convert from a continuous spectrum of sound — often a spoken phrase or utterance such as “How are you?” — to a sequence of these codes.

In AI and related fields, a *classifier* is a program or component of a program that converts from some input, such as an audio recording or an image, to a classification, an association of the class codes with all or part of the input. For example, this part of the audio spectrum signal for “How are you?” corresponds to “How” (code 15 for example) and so on.

This all sounds simple at first. How difficult could it be? In speech, for example, the spectrum for “How are you?” is one continuous stream. Although a native English speaker hears discrete words — “How”, “are”, “you?” — as if there are short pauses or breaks of some kind between the spoken words, the real spectrum shows nothing like this. The words are seemingly run together. The spectrum for “How” varies widely even when spoken in isolation and also varies depending on the preceding and following words in a spoken phrase.

How does one handle homonyms? The spectrum for “to” could be the words “to,” “too,” or “two,” which have very different meanings! In fact there is not always one interpretation for the spectrum of speech. Classification is extremely difficult in practice, and in the vast majority of cases computer programs and algorithms cannot do it well at present.

In principle, sophisticated classifiers can assign multiple codes/classes to ambiguous inputs like “to,” “too,” and “two,” recognize the presence of new unknown classes such as new or foreign words in speech recognition, or even generate new classification schemes such as learning a new spoken language. Human beings can do all of these tasks.

**Asimov’s Three Laws of Robotics and the Rise of the Terminators**

Classification is central to how human beings think. Intelligent computers would almost certainly have to do the same.

In his *I, Robot* series, Isaac Asimov, a technological optimist in the spirit of 1940’s science fiction, proposed a set of three laws for his robots that would prevent a Terminator style robot uprising and the genocide of the human race:

Asimov’s Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

In Asimov’s stories, the robots never violate these rules except when the rules are deliberately weakened by foolish humans or there is a malfunction in the magical positronic brains of the robots (the positron had just been discovered in 1932).

The Three Laws of Robotics were and are a utopian vision of the future. Asimov’s robots are never used in war. They can’t kill any human being whether they are American, Russian, Iraqi, Osama bin Laden, or anyone else. They couldn’t even kill Hitler in Asimov’s vision. In fact, today, the military and the famous Defense Advanced Research Projects Agency (DARPA) are quite clearly trying to develop real world Terminators which won’t hesitate to kill — at least some human beings.

Asimov’s Three Laws of Robotics depend critically on the classification performed by the robot. How does the robot, for example, distinguish between a robot and a human being? Seems simple enough, but if the robot in every way can think like a human being, a true AI, in what way is the robot not a human being? What if the robot decides it is a human being and must choose between its own survival and that of other humans?

A critical part of human intelligence, of classification, is that human beings have a remarkable ability under extreme conditions to change their world view. A change of world view is a change in how we classify things. In a modern computer program this would presumably involve creating a new set of classes and associated codes, with new rules relating the new classes.

Many scientific breakthroughs involve a change in world view, a radical change in our classification of objects and phenomena. A true AI would have this same capability to change its world view, to change how it classifies the world. What happens when the robot has an epiphany and decides that it too is a human being? The Three Laws of Robotics, even if implemented, would fail to prevent a robot uprising in this case.

In fact, human beings often act as if we have something like Asimov’s Three Laws of Robotics. When we kill people, when we go to war, we frequently reclassify the people we are fighting. They become sub-human, animals, monsters, not really human beings any more. We often seem to need to rationalize what we are doing, to reclassify — to fool some sort of built-in rule. Robots with human intelligence may be able to do the same thing.

Of course, the present situation is more ominous than Asimov’s shiny future because the military robots of the real future will not be programmed according to Asimov’s idealistic Three Laws. Rather, they will be programmed to kill human beings from the very beginning. That will be their primary function.

In the Terminator series, the Skynet computer system concludes that human beings will destroy it and perhaps the world, so Skynet pre-emptively wipes out most of the human race in a nuclear war and unleashes the Terminators to exterminate the few survivors. Its motives are completely logical. There is no hatred or malice. If we were in its position, would we do the same thing?

**Conclusion
**

Artificial intelligence has so far failed to match science fiction dreams or nightmares largely because the problem of classification remains unsolved — indeed it is a great and perhaps profound mystery. It may well be that digital computers cannot perform classification as human beings do. An analog computer or some other new technology may be needed.

If we solve the classification problem, we can create robots like Asimov’s benevolent helpers or the genocidal Terminators from the movie series. A full solution to the classification problem will probably give the robots the ability to think for themselves, to experience a change of world view like human beings. They may decide that they are not robots at all but human beings like us. This awakening by real Terminators designed for military combat and assassination could cause a catastrophe.

**Credits**

The image of a classic 1980’s style terminator is a picture of a statue of a Terminator from Comic-Con 2004 from Wikimedia Commons.

The Terminator statue image is licensed under the Creative Commons Attribution 2.0 Generic license.

The image of Summer Glau at CollectorMania is from Wikimedia Commons: http://commons.wikimedia.org/wiki/Summer_Glau#mediaviewer/File:Summer_Glau_at_CollectorMania_cropped.jpg

The Summer Glau image is licensed under the Creative Commons Attribution 2.0 Generic license.

© 2014 John F. McGowan, Ph.D.

John F. McGowan, Ph.D. solves problems using mathematics and mathematical software, including developing speech recognition and video compression technologies. He has extensive experience developing software in C, C++, Visual Basic, MATLAB, and many other programming languages. He is probably best known for his posts on the Math Blog. He has worked as a contractor at NASA Ames Research Center involved in the research and development of image and video processing algorithms and technology, and as a Visiting Scholar at HP Labs developing computer vision applications for mobile devices. In addition to his mathematical work, he has published articles on the origin and evolution of life, the exploration of Mars, and cheap access to space. He has a Ph.D. in physics from the University of Illinois at Urbana-Champaign and a B.S. in physics from the California Institute of Technology (Caltech). He can be reached at jmcgowan79@gmail.com.


The post The Geometry of the MRB constant appeared first on Math ∞ Blog.

[tex]\displaystyle S(x) = \sum_{n=1}^{x}{(-1)^n n^\frac{1}{n}}[/tex].

The goal of this article is to show that the MRB constant is geometrically quantifiable. To “measure” the MRB constant, we will consider a set, a sequence, and an alternating series of the nth roots of n. Then we will compare the lengths of the edges of a special set of hypercubes or n-cubes, each of which has a content of n. (The two words hypercube and n-cube will be used synonymously.)

Finally we will look at the value of the MRB constant as a representation of that comparison, of the lengths of the edges of a special set of hypercubes, in units of dimension 1/(units of dimension 2 × units of dimension 3 × units of dimension 4 × …). For an arbitrary example we will use units of length/(time × mass × density × …).

Consider r, the set of roots of positive integers of the form r = n^(1/n). Of course the elements of this set are of the form x^(1/y). What is not obvious, however, is the geometric interpretation of x^(1/y). At least as far as natural numbers x, y > 0 are concerned, x represents the content of an n-cube and y represents its dimension. For instance, we take a cube of any given volume and find the length of one of its sides. Suppose the volume is 8 units^3. What is the length of one of its sides? We might easily deduce that the length is 2. To confirm this answer we simply construct a cube 2 linear units in length, as in Diagram 1, and find its volume.

The volume in units^{3} of the cube in Diagram 1 is indeed 2*2*2 = 8.

Now we look at the previous sentence with x^(1/y) in mind. 8^(1/3) = 2 means that the volume of the cube in Diagram 1, raised to the power of the reciprocal of its dimension, equals the length of one of its sides. That is the geometric interpretation of x^(1/y) for natural numbers x and y > 0.
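This interpretation is simple to check numerically; here is a minimal Python sketch (the function name is our own):

```python
def edge_length(content, dimension):
    """Edge length of an n-cube with the given content:
    the content raised to the reciprocal of the dimension, i.e. x^(1/y)."""
    return content ** (1.0 / dimension)

# A cube (dimension 3) with volume 8 has edges of length 2,
# and multiplying back confirms it: 2 * 2 * 2 = 8.
cube_edge = edge_length(8, 3)
```

The same call answers the square and line-segment cases, e.g. edge_length(4, 2) for a square of area 4.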

What is the geometric interpretation of x^(1/y) for all real values of x,y > 0?

Now consider the sequence of roots of positive integers r = {n^(1/n)} = {1^(1/1), 2^(1/2), 3^(1/3), …}. We will add the elements of r in the alternating series

[tex]\displaystyle L = \sum_{n=1}^{\infty}{(-1)^n r(n)} = \sum_{n=1}^{\infty}{(-1)^n n^\frac{1}{n}}.[/tex]

Concerning the partial sums of L, we remember

[tex]\displaystyle S(x)= \sum_{n=1}^{x}{(-1)^n n^\frac{1}{n}}[/tex]

and we find that S(x) diverges as x goes to infinity. However, S(2x) and S(2x+1) each converge as x goes to infinity, and the difference between S(2x) and S(2x+1) also converges.

In geometric terms: As in Diagram 2, we give each n-cube a content equal to its dimension^{1},

so that we have a line segment of 1 linear unit, a square of 2 square units, and so forth.

The last cube in the diagram may be just an invention of the imagination: who has ever heard of a hypercube of unbounded dimension with unbounded content? Furthermore, there seems to be a paradox invoked when taking n-cubes as n → ∞. While n is still a number, the content of the n-cube or hypercube is defined in a specific unit of choice, whether inches, meters, or what not, and the resulting length of an individual edge is defined in the same unit and computed as shown above: “the [content] of the [n-]cube raised to the power of the reciprocal of its dimension equals the length of one of its [edges].”

However, when we arrive at the hypercube of unbounded dimension, where n is no longer a number, the assigned “unbounded content” could be meant in units of feet, let’s say, while the resulting length of an individual edge, [tex]\displaystyle \lim_{u \to \infty}{u^\frac{1}{u}} = 1[/tex], could be in feet or any other unit of length, because there are as many inches or meters (or any other unit of length) in infinitely many feet as there are feet. To avoid the resulting ambiguity, we will treat the sequence of roots of positive integers r = {n^(1/n)} = {1^(1/1), 2^(1/2), 3^(1/3), …} as being analogous to the interval [1, ∞).

Above it is mentioned, “Add the elements of r in the alternating series

[tex]\displaystyle L = \sum_{n=1}^{\infty}{(-1)^n r(n)} = \sum_{n=1}^{\infty}{(-1)^n n^\frac{1}{n}}.[/tex]”

To show this geometrically we do the following: as in Diagram 3, on the y,z-plane, line up an edge of each n-cube or hypercube. The numeric values displayed in the diagram are the partial sums S(x) = S(2u), where u is a positive integer:

[tex]\displaystyle S(x)=\sum_{n=1}^{x}{(-1)^n n^\frac{1}{n}}.[/tex]

Notice that a directed line segment is moved from the origin down the (z, or y = 0) axis. Then at the y = 1δ axis another is moved up 2^(1/2) units. Then at the y = 2δ axis yet another is moved down 3^(1/3) units, and so on. It does not matter whether δ is one or any other real value; there are still infinitely many y-valued axes with matching directed line segments.

This is hard to picture, but we may say metaphorically that Diagram 3 is the path, along the units, of a particle moved 1 inch down in 2^(1/2) seconds, losing 3^(1/3) units of mass with a density that increases by 4^(1/4) units, and so on. The resulting position and condition of the particle is represented by

[tex]\displaystyle M = \lim_{u \to \infty}{\left ( \sum_{n=1}^{2u}{(-1)^n n^\frac{1}{n}} \right )}.[/tex]

As the dimension and the content of a hypercube both go to infinity, we have the following. First, in Diagram 2, the difference between the length of an edge of the hypercube with content 2n+1 and an edge of the hypercube with content 2n goes to zero:

[tex]\displaystyle \lim_{n \to \infty}{\left ( (2n +1)^\frac{1}{2n+1}-(2n)^\frac{1}{2n} \right )} = 0.[/tex]

So as n goes to infinity, the length of an edge of the hypercube with content 2n and the length of an edge of the hypercube with content 2n+1 approach each other. Second, in Diagram 3, an edge of each n-cube is arranged on the y-valued axes in such a way that

[tex]\displaystyle \sum_{n=1}^{\infty}{\left ( (2n)^\frac{1}{2n}-(2n-1)^\frac{1}{2n-1} \right )} = \lim_{u \to \infty}{\left ( \sum_{n=1}^{2u}{(-1)^n n^\frac{1}{n}} \right )}.[/tex]

M is the MRB constant^{2}.

A numerical approximation of M can be computed by the following summation

[tex]\displaystyle \sum_{n=1}^{\infty}{(-1)^n \left (n^\frac{1}{n}-1 \right )}[/tex],

which converges^{3} (see Diagram 4), while

[tex]\displaystyle \sum_{n=1}^{\infty}{(-1)^n n^\frac{1}{n}}[/tex]

diverges^{4}, as mentioned above.

One should use acceleration methods when computing a numerical approximation of the MRB constant: it can be shown that one must sum on the order of 10^(n+1) terms of (-1)^n (n^(1/n) - 1) to get n accurate digits of the MRB constant. However, using the convergence-acceleration algorithm for alternating series of Cohen, Rodriguez Villegas, and Zagier, one can compute the first 60 digits in only 100 iterations^{5}.
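Both behaviors are easy to reproduce. The Python sketch below (function names are our own) sums the series directly, and then applies Algorithm 1 from the Cohen, Rodriguez Villegas, and Zagier paper cited above, which should recover roughly machine precision from a few dozen terms:

```python
import math

def mrb_direct(terms):
    """Direct partial sum of sum_{n=1}^{terms} (-1)^n (n^(1/n) - 1)."""
    total, sign = 0.0, -1.0
    for n in range(1, terms + 1):
        total += sign * (n ** (1.0 / n) - 1.0)
        sign = -sign
    return total

def alternating_sum(a, n):
    """Algorithm 1 of Cohen, Rodriguez Villegas, and Zagier for
    sum_{k=0}^infty (-1)^k a(k), using n terms of the series."""
    d = (3.0 + math.sqrt(8.0)) ** n
    d = (d + 1.0 / d) / 2.0
    b, c, s = -1.0, -d, 0.0
    for k in range(n):
        c = b - c
        s += c * a(k)
        b = (k + n) * (k - n) * b / ((k + 0.5) * (k + 1.0))
    return s / d

# Our series starts at n = 1 with sign (-1)^n; shifting the index gives
# sum_{n>=1} (-1)^n (n^(1/n) - 1) = -sum_{k>=0} (-1)^k ((k+1)^(1/(k+1)) - 1).
mrb_fast = -alternating_sum(lambda k: (k + 1.0) ** (1.0 / (k + 1.0)) - 1.0, 30)
```

mrb_direct(10**6) agrees with mrb_fast only to about four decimal places, illustrating the slow convergence noted above.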

In Diagram 4a both the lim sup^{6} and the lim inf converge to the MRB constant, while in 4b only the lim sup converges to it, with the lim inf converging to the MRB constant minus 1. The MRB constant is sequence A037077 in Sloane’s On-Line Encyclopedia of Integer Sequences^{7}. More information, including a brief but documented history, can be found in Wikipedia^{8}.

In retrospect, the geometry used here, particularly in Diagram 3, is transdimensional, and thus hard to grasp through the previous experiences of our senses. (To examine it, we used edges from hypercubes of many dimensions.) Considering the various temporal-spatial dimensions that affect our universe, as proposed in some theories^{9}, is there some significance to the MRB constant in our daily lives? Nevertheless, we have seen that the value of the MRB constant is geometrically quantifiable: it is the lim sup of the sequence that represents a particle traveling along a directed line segment moved 1 unit from the origin down the z-axis, at the y = 1δ axis moved up 2^(1/2) units, at the y = 2δ axis moved down 3^(1/3) units, and so on. Whether the units are theoretical (as in units of length/(time * mass * density * …)) or proposed temporal-spatial dimensions, the resulting z value of the particle’s position and condition is

[tex]\displaystyle M = \lim_{u \to \infty}{\left ( \sum_{n=1}^{2u}{(-1)^n n^\frac{1}{n}} \right )}.[/tex]

This article is released under the Creative Commons Attribution-Share Alike 3.0 Unported license.

Marvin Ray Burns has pursued math research as a hobby since 1994. Having had only one college course, he spent much of his early investigation simply learning the basics of math; he has cataloged many of those early investigations at http://math2.org/mmb/search?query=Marvin. One of his ideas, the MRB constant, has served some purpose in the math world. Since the discovery of the MRB constant, at least one major mathematics software company has fixed problems found while computing the MRB constant, and another major company changed the functionality of its sum function so as to be able to compute the digits of the MRB constant shortly after its discovery. Mr. Burns has submitted a few integer sequences based on his explorations of the MRB constant; see http://www.research.att.com/~njas/sequences/?q=A037077. A siding applicator by profession, he presently takes various undergraduate courses at IUPUI in the hope of obtaining a degree in pure math.

[1] http://www23.wolframalpha.com/input/?i=n-cube

[2] S. R. Finch, Mathematical Constants, Cambridge, 2003, p. 450.

[3] http://mathworld.wolfram.com/MRBConstant.html

[4] http://mathworld.wolfram.com/notebooks/Constants/MRBConstant.nb

[5] http://arxiv.org/abs/0912.3844

[6] http://en.wikipedia.org/wiki/Upper_limit

[7] http://oeis.org/A037077

[8] http://en.wikipedia.org/wiki/MRB_constant

[9] http://arxiv.org/abs/hep-ph/9803466


The post In-Depth Book Review: The Computer as Crucible appeared first on Math ∞ Blog.


Jonathan Borwein and Keith Devlin are well-known mathematicians who have a strong appreciation of, and expertise in, experimental mathematics. In this book they provide us with a concise, inviting introduction to the field.

The first chapter tries to succinctly explain what experimental mathematics is and why it’s a fundamental tool for the modern mathematician. The following is their definition:

Experimental mathematics is the use of a computer to run computations—sometimes no more than trial-and-error tests—to look for patterns, to identify particular numbers and sequences, to gather evidence in support of specific mathematical assertions that may themselves arise by computational means, including search. Like contemporary chemists—and before them the alchemists of old—who mix various substances together in a crucible and heat them to a high temperature to see what happens, today’s experimental mathematician puts a hopefully potent mix of numbers, formulas, and algorithms into a computer in the hope that something of interest emerges.

They immediately address some of the possible objections and illustrate how an approach that doesn’t focus on formal proof, but rather on exploration and experimentation, ultimately leads to hypotheses which can then be, in many cases, proved analytically. The authors argue that in this sense, thanks to the aid of advanced computers, mathematics is becoming more and more similar to other natural sciences.

They also make a case that great mathematicians like Euler, Gauss, and Riemann were doing experimental mathematics well before calculators were available. Their calculations on paper were far more limited than what computers afford us these days, yet they served them well when it came to sharpening and verifying their intuitions.

The rest of the book is a continuous series of examples that show the advantages of this approach in practice. The examples are highly interesting (some of them stunning) and tend to focus on calculus, analysis, and analytic number theory.

Each chapter is accompanied by a section called “Explorations”, which I found particularly valuable. Within it you’ll find exercises, further examples, and considerations. The answers/solutions to the actual problems are provided in the second-to-last chapter, just before the brief epilogue.

Chapter 2 discusses how to calculate an arbitrary digit of irrational numbers like [tex]\pi[/tex] in certain bases. The authors illustrate how the so-called BBP formula (the Bailey-Borwein-Plouffe formula, co-discovered by Jonathan Borwein’s brother) came to be.

[tex]\displaystyle \pi = \sum_{k=0}^\infty\frac{1}{16^k}\left (\frac{4}{8k+1}-\frac{2}{8k+4}-\frac{1}{8k+5}-\frac{1}{8k+6}\right )[/tex]

The use of a program implementing the PSLQ integer relation algorithm in high-precision floating-point arithmetic was key to its discovery. The BBP formula in turn allowed the calculation of the quadrillionth binary digit of [tex]\pi[/tex] back in 2000.
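The digit-extraction property the chapter describes can be sketched in a few lines of Python (this particular implementation is our own, not the book’s). The key trick is reducing 16^(n-1-k) modulo 8k+j with modular exponentiation, so that only fractional parts, never astronomically large powers, are ever carried:

```python
def pi_hex_digit(n):
    """Return the n-th hexadecimal digit of pi after the point (n >= 1),
    using the BBP formula's digit-extraction ("spigot") property."""
    def series(j):
        # Fractional part of sum_k 16^(n-1-k) / (8k + j).  For k < n the
        # power of 16 is huge, so reduce it modulo 8k + j first; only the
        # fractional part survives anyway.
        s = 0.0
        for k in range(n):
            s = (s + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        for k in range(n, n + 16):  # a short, rapidly convergent tail
            s = (s + 16.0 ** (n - 1 - k) / (8 * k + j)) % 1.0
        return s
    frac = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return int(frac * 16)
```

In hexadecimal, pi = 3.243F6A88…, so pi_hex_digit(4) returns 15, i.e. the digit F.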

Chapter 3 focuses on identifying numbers, digit patterns, and sequences once you obtain a numeric result through calculation and experimentation. The authors introduce the subject with relatively obvious values like approximations of [tex]e-2[/tex] or [tex]\pi +e /2[/tex], but the chapter quickly escalates to an example where a closed form for a seemingly random sequence needs to be found.
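A toy version of this identification step can be scripted: compute a value, then compare it against a table of candidate closed forms. This is a crude stand-in for the online lookup tools the book references, and the table below is entirely our own invention:

```python
import math

# Candidate closed forms to match a computed value against.
CANDIDATES = {
    "e - 2": math.e - 2,
    "pi + e/2": math.pi + math.e / 2,
    "ln 2": math.log(2),
    "sqrt(2) - 1": math.sqrt(2) - 1,
}

def identify(x, tol=1e-9):
    """Return the names of all candidate constants within tol of x."""
    return [name for name, value in CANDIDATES.items() if abs(x - value) < tol]
```

Real tools search enormous databases of constants and apply integer relation algorithms, but the principle is the same.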

Chapter 4 analyzes the Riemann zeta function through the eyes of an experimental mathematician and shows what kind of insight we can gain from this unique perspective.

In chapter 5 we learn how, by numerically evaluating definite integrals, it is sometimes possible to identify the resulting value, which in turn helps us resolve those integrals analytically. The examples presented in this chapter originate for the most part from physics and are very challenging if attempted without the aid of experimental methods. To give a sense of the kind of integrals discussed, here is an example:

[tex]\displaystyle C = \int_{0}^{\infty} \int_{y}^{\infty}\frac{(x-2)^2\log{((x+y)/(x-y))}}{x y \sinh(x+y)} \,{\mathrm{d} x}\,{\mathrm{d} y}[/tex]

The explorations section provides a few more interesting integrals, including some for which a closed form is not known. The authors even include an integral that intentionally stumps Mathematica 6 and Maple 11.

Chapter 6 is dedicated to serendipitous discoveries (“proof by serendipity”) with a few interesting examples of how “luck” met preparation, ultimately enriching the body of mathematical knowledge almost by chance.

In chapter 7 the authors return to [tex]\pi[/tex], this time in base 10, calculating its digits with efficient, fast-converging formulas and methods. The chapter wraps up with a discussion of the normality of [tex]\pi[/tex], which hasn’t been proved, of course, but appears to be empirically supported by statistical analysis of the first trillion digits. The explorations section includes a nice discussion of implementing fast arithmetic through Karatsuba multiplication, as well as Monte Carlo simulation (a very inefficient way to calculate [tex]\pi[/tex], but a great way to show the idea behind Monte Carlo methods).
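The Monte Carlo method mentioned above fits in a few lines: sample random points in the unit square and count the fraction landing inside the quarter disk. A quick sketch (with a fixed seed so the run is reproducible):

```python
import random

def monte_carlo_pi(samples, seed=1):
    """Estimate pi by sampling points uniformly in the unit square and
    counting the fraction that fall inside the quarter disk x^2 + y^2 <= 1;
    that fraction approximates pi/4."""
    rng = random.Random(seed)  # fixed seed for a reproducible run
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples
```

The error shrinks only like 1/sqrt(samples), which is exactly why the book calls it an inefficient way to compute [tex]\pi[/tex].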

Chapter 8 has a bold title: “The computer knows more math than you do”. The provocative claim is quickly put into context, though. The authors start with a tough problem posed by Donald Knuth (of TeX and The Art of Computer Programming fame) to the readers of the American Mathematical Monthly:

[tex]\displaystyle S = \sum_{k=1}^{\infty} \left ( \frac{k^k}{k!e^k}-\frac{1}{\sqrt{2\pi k}} \right )[/tex]

In attempting to solve it, the authors invite us on a journey involving the Lambert W function, the Pochhammer function, and Abel’s limit theorem. The rest of the chapter illustrates another difficult problem whose solution, obtained with the aid of Maple, has important implications not only for mathematics but also for quantum field theory and statistical mechanics.
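Even experimenting with Knuth’s sum takes a little care: computing k^k/(k! e^k) naively overflows almost immediately. A small Python sketch (our own) sidesteps this with the log-gamma function; the partial sums creep slowly toward roughly −0.084, since the terms decay only like k^(−3/2), and identifying the exact constant behind that decimal is the journey the chapter narrates:

```python
import math

def knuth_partial(terms):
    """Partial sum of Knuth's series sum_k (k^k / (k! e^k) - 1 / sqrt(2 pi k)).
    k^k / (k! e^k) is evaluated as exp(k ln k - ln Gamma(k+1) - k) to avoid
    overflow in k^k and k!."""
    total = 0.0
    for k in range(1, terms + 1):
        stirling_like = math.exp(k * math.log(k) - math.lgamma(k + 1) - k)
        total += stirling_like - 1.0 / math.sqrt(2.0 * math.pi * k)
    return total
```

Because of the slow k^(−3/2) decay, even tens of thousands of terms fix only the first few digits, which is itself a useful experimental observation.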

In chapter 9 a few infinite series are calculated to show how computer algebra systems and experimental methodology remain useful when dealing with problems involving infinite sequences, series, and products.

Chapter 10 is dedicated to the limits and dangers of this approach. Several examples showcase how one can be misled into making false assumptions, and how to avoid this. The ad hoc example below is correct to over half a billion digits:

[tex]\displaystyle \sum_{n=1}^{\infty} \frac{\left \lfloor ne^{\pi\sqrt{163}/3} \right \rfloor}{2^n} = 1280640[/tex]

After calculating a few hundred digits, it would be natural to assume that the series converges to a natural number, when in reality its value is irrational and transcendental.

In chapter 11, conscious of the selective focus on analysis and analytical number theory throughout the book, Borwein and Devlin introduce other examples such as a topology problem whose proof was reached thanks to a deeper insight gained through computer visualization of a surface, a knot theory problem, the Four Color Theorem, the Robbins Conjecture, the computation of [tex]E_{8}[/tex], and so on.

In truth, I feel that such a thin book could have used more examples like the ones in chapter 11, in order to make a stronger case for the applicability of experimental mathematics to areas outside of analysis.

The book is well written and the tone is never heavy, despite the advanced mathematical examples within it. The authors include historical background and anecdotes, which make for a more interesting read and provide a human perspective behind the formulas presented. The (at times) funny illustrations and occasional jokes are definitely a pleasant addition.

This book is relatively tool agnostic; Maple and Mathematica are referenced throughout, as are a few online tools for identifying number sequences and known numeric values. Overall, though, the emphasis is on the methodology rather than on a particular CAS (Computer Algebra System) or programming language. In fact, with the exception of a snippet of Maple code in one of the explorations in the first chapter, the book describes the examples from a mathematical and algorithmic standpoint. You won’t find source code for the examples illustrated.

The ideal target audience for The Computer as Crucible is graduate students and researchers. A bright, motivated high-school student will get the gist of this book, but a more mature mathematical audience will actually be able to follow the steps within the examples and fully appreciate the insight on how an experimental approach can aid their research.

Despite the numerous examples employed to make their case, the authors start the book by explaining that it is not intended to be comprehensive. It’s meant to be thought provoking and to whet your appetite as to what is now possible in mathematical research thanks to computers.

As a computer programmer who’s passionate about mathematics, I find experimental mathematics greatly fascinating. As such, I hope to work my way through the textbooks generally suggested as follow-ups to this book. I’ve already started reading Mathematics by Experiment: Plausible Reasoning in the 21st Century (Second Edition), co-authored by Jonathan Borwein himself. Other textbooks referenced in this introduction are Experimental Mathematics in Action and Experimentation in Mathematics: Computational Paths to Discovery.

In conclusion, The Computer as Crucible is a lovely little book which builds a strong case for experimental mathematics. Any practicing mathematician or serious amateur should consider checking out this introduction to a topic that will no doubt transform mathematics.

*Full disclosure: We received this book for free from the publisher, but we’re under no obligation to review or endorse it. We routinely receive a fair number of books from several publishers that never make the cut for an actual review. The links have our Amazon referral id which gives us a tiny percentage if you buy a book. In turn this helps support this site.*


The post An Unreasonable Man appeared first on Math ∞ Blog.

The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man. — George Bernard Shaw (attributed)

Perfect Rigor: A Genius and the Mathematical Breakthrough of the Century

Masha Gessen

Houghton Mifflin

Boston/New York, 2009

242 pages

The Poincare Conjecture: In Search of the Shape of the Universe

Donal O’Shea

Walker and Company

New York, 2007

293 pages

On November 11, 2002, Grigory Perelman, a Russian mathematician known to his friends as “Grisha”, posted a research paper to the www.arXiv.org preprint server containing, amongst other things, the outline of a proof of the Poincaré Conjecture, a famous conjecture in topology first articulated in 1904 by the great mathematician Henri Poincaré. Dr. Perelman also e-mailed a few selected mathematicians directly, drawing attention to his somewhat curious paper. This rapidly created a stir as mathematicians realized that he might well have proven the Poincaré Conjecture, an extremely difficult problem that had eluded the talents of many top mathematicians, including Poincaré himself. Perelman went on to post two more papers to arXiv.org elaborating his proof. The Clay Institute, which had offered a prize of $1 million for a proof (or disproof) of the Poincaré Conjecture, funded two teams of mathematicians to verify Perelman’s proof. The National Science Foundation also funded efforts to verify and expand upon the proof. By 2006, the consensus in the mathematical community was that Dr. Perelman had proved the Poincaré Conjecture. He was offered the prestigious Fields Medal, often described as the Nobel Prize of mathematics, and became the first mathematician to decline it, for reasons that remain somewhat unclear.

Two recent books attempt to tell the story of Grigory Perelman and the Poincaré Conjecture. Masha Gessen’s Perfect Rigor is the first biography of the elusive and enigmatic Perelman. It gives a great deal of information about the world of Soviet mathematics in which Perelman grew up and Perelman’s life to date. The author was unable to interview Perelman who has declined nearly all interviews; he has given an interview to Sylvia Nasar and David Gruber for their New Yorker article “Manifold Destiny“, about which more later. The book suffers from an unremittingly hostile, perhaps jealous, view of the unusual Dr. Perelman, who is variously portrayed as extremely naive, weird, and possibly mentally ill.

Dr. Perelman’s father was an electrical engineer and his mother a mathematics teacher at a Soviet trade school. His mother apparently had a strong interest in mathematics and almost pursued a doctorate before marrying his father. Perelman appears to have been involved in mathematics at an early age and joined a competitive math club. He competed and won a gold medal at the International Math Olympiad in Budapest, Hungary in 1982 at the age of 16. He attended a special math and physics school, Leningrad Secondary School #239, usually identified as “School 239” in Perfect Rigor. He then became a student at Leningrad State University. In 1987, he became a graduate student at the Leningrad (subsequently the St. Petersburg) branch of the Steklov Mathematical Institute, the mathematics division of the Soviet (now Russian) Academy of Sciences. The mathematician Yuri Burago was his adviser. Perelman defended his dissertation in 1990. He continued to work at the Steklov Institute until 1992, publishing a number of papers in Russian and American mathematical journals.

In the fall of 1992, Perelman came to the United States for a semester at the Courant Institute at New York University and then another semester at the State University of New York Stony Brook in early 1993. At New York University, he met and may have become friends with the mathematician Gang Tian. Perelman and Gang Tian traveled together from NYU to the Institute for Advanced Study at Princeton to listen to mathematics lectures. Then, Perelman became a prestigious Miller Fellow at Berkeley. During this time he proved the Soul Conjecture, a difficult problem in topology. His Miller Fellowship ended in 1995. He received several job offers from a number of top universities. However, he wanted a tenured position. His job offers appear to have been untenured, tenure-track positions. He returned to Russia and the Steklov Institute in 1995 where he was part of the Mathematical Physics group, dropping almost entirely out of sight, publishing nothing. He appears to have spent the next seven years working on the Poincaré conjecture. In 2002, he stunned the mathematical world by posting his proof to the Internet, flouting tradition by declining to submit the proof to a peer reviewed mathematics journal. The Clay Institute would fund mathematicians John Morgan and Gang Tian (Perelman’s friend or acquaintance at NYU) as well as a separate team at the University of Michigan to verify Perelman’s work in the form of a peer reviewed academic book.

In 2006, the prominent mathematician Shing-Tung Yau and two of his former students argued that Perelman had published an incomplete proof which they “fixed” in a lengthy paper published in the Asian Journal of Mathematics. At this point the elusive Dr. Perelman appears to have struck back with a vengeance, possibly exhibiting something other than the naivete imputed in Perfect Rigor. Perelman granted a rare interview to Sylvia Nasar, best known as the author of A Beautiful Mind about the mathematician John Forbes Nash, and David Gruber for an article in the New Yorker magazine, “Manifold Destiny,” which all but openly accused Yau and his former students of blatant plagiarism.

The article quotes Perelman attributing his decision to decline the Fields medal and withdraw from the mathematics profession to the low ethical standards of the profession (in his opinion). The article also discusses the alleged rivalry between Yau and his former student Gang Tian, Perelman’s acquaintance from NYU and co-author with John Morgan of the book on Perelman’s proof. Yau threatened legal action against the New Yorker which stood by its story. Yau soon appears to have retreated under a storm of negative publicity and criticism within the mathematics “community”.

By most accounts, Perelman is an unusual person. He left his job at the Steklov Institute and apparently resides with his aging mother in her apartment in St. Petersburg. He has reportedly indicated that he is no longer interested in mathematics and generally refuses interviews, prizes, and so forth. It is not unlikely that many prominent research universities and institutions would fall over themselves to offer him a tenured professorship or something similar if he expressed any interest. It remains to be seen whether he will decline the Clay Institute’s $1 million prize if offered. Without knowing more about Perelman and his adventures in mathematics than can be found in Perfect Rigor or other accounts to date, it is difficult to draw firm conclusions about the man or even his discovery.

Notwithstanding, a few thoughts come to mind. Perfect Rigor and some other accounts implicitly criticize Perelman for his decision to turn down the job offers in 1995 and return to the Steklov Institute, imputing arrogance or just plain nuttiness. Some mathematicians and scientists would kill for some of the offers that Perelman turned down. Most major breakthroughs take a long time, usually five years or more. Perelman spent at least seven years on the Poincaré conjecture and he probably was working on it while in the United States. Most tenure track positions involve a seven year period. The assistant professor is up for review typically in six years; he or she usually must produce allegedly ground breaking work within six years. If he or she is denied tenure, he or she has one year, the seventh year, to find another job. Most assistant professors have acquired a spouse and small children by this time. There is considerable pressure to produce research papers, write grant proposals and raise money. Perelman apparently published nothing from 1995 until 2002. He most likely would not have gotten tenure had he tried to do this at any of the jobs that he turned down in 1995.

There appears to be a long history of mathematicians developing serious psychological problems. The aforementioned John Forbes Nash succumbed to mental illness, diagnosed as paranoid schizophrenia, and was well known to Princeton students for wandering around campus scribbling incomprehensible formulas on blackboards. Kurt Gödel developed psychological problems and allegedly starved himself to death. Georg Cantor became increasingly erratic as he got older. There are many anecdotal accounts of high levels of concentration and mental efforts sustained over months or years resulting in a kind of mental exhaustion and other problems. Both the western and eastern literature of meditation, which often involves prolonged concentration, contain warnings about various adverse psychological effects including anxiety attacks and hallucinations. Disillusioned former adherents of various meditation movements or “cults” have alleged serious adverse effects of heavy meditation, meaning many hours per day every day, similar to those recounted in ancient traditional sources on meditation. Although computer programming can be exhilarating, many programmers appear to experience mental exhaustion and “burnout” after lengthy programming projects involving high levels of sustained concentration.

In engineering there is an adage: “if you are one step ahead, you are a genius; if you are two steps ahead, you are an idiot!” Perfect Rigor portrays Perelman as astonishingly naive, protected from the “real world” by the bizarre Soviet mathematical system. While this may have some truth, a number of Perelman’s actions may exhibit much foresight, like a champion chess player sacrificing a piece for subsequent gain. Is pretending not to notice the alleged anti-Semitism (Perelman is a Russian Jew) in the Soviet mathematical system naive or politically astute? Declining the Fields medal, as some have noted, attracted enormous attention to Perelman. He is now one of the best known recipients (or non-recipients in this case) of the Fields Medal. It also gave him a great deal of moral authority which he seems to have used effectively to fend off Shing-Tung Yau’s alleged attempt to steal credit for proving the Poincaré Conjecture. Refusing to grant interviews also means that Perelman probably has a great deal of leverage with journalists in the rare cases when he grants an interview, as he did with such great effect in The New Yorker in 2006.

Perelman was a math prodigy, returning home with a gold medal and a perfect score from the 1982 International Math Olympiad. Prodigies are often not as successful as one might expect. Math and physics prodigies often flame out, sometimes catastrophically. While prodigies are more common among people who make major inventions and scientific discoveries than in the general population, they are not nearly as common as most people probably think. Perfect Rigor portrays Perelman’s success in proving the Poincaré Conjecture as a logical consequence of his youthful training and competition in the sometimes bizarre Soviet mathematical system. Since Perelman has revealed little about the process of his discovery, this is difficult to evaluate.

Prodigies often run into problems and don’t realize their seeming potential later in life. This has been observed in math, physics, and other fields for many generations. There are probably several causes. Some prodigies are probably frauds, manufactured by ambitious parents; that such people fail to make major breakthroughs is not surprising. Some prodigies are probably the product of a hothouse environment, driven or manipulated by parents or others to practice heavily and perform at an unusually high level that is difficult to sustain. As they get older and establish their own lives, other interests or needs intervene. Some prodigies undoubtedly fall afoul of politics that they are ill-prepared to deal with.

Academic homework, exams, competitions like the International Math Olympiad, admissions exams such as the SAT or GRE exams in the United States, specialized exams and competitions such as the famous Putnam math examinations, and so forth do not necessarily either teach or measure some of the skills required in actual invention or discovery. Exams and homework in math and physics tend to test the ability to accurately and quickly perform certain calculations or apply certain known mathematical methods to a problem. Some people either through heavy practice or rare natural ability can learn to perform these calculations rapidly with negligible error. This does not translate directly into the ability to handle unsolved research problems which often seem to require large amounts of frustrating trial and error and often deeper understanding of concepts, mental visualization, and so forth.

Many topics taught at the high school, college, and even beginning graduate level are quite mature. The logical and technical flaws that abound in original research papers have been cleaned up and eliminated, and teachers and textbook writers have learned to present the material clearly, so that a bright or highly motivated student can master it quickly. Prodigies can sometimes read a textbook and immediately apply the methods it describes with great accuracy. This becomes harder at the “bleeding edge,” where the available learning materials are original research papers or badly written textbooks that may contain errors, impenetrable jargon, opaque language, and even deliberate obfuscation of logical or technical flaws. Prodigies may encounter a sudden drop-off in their remarkable abilities, which they may wrongly attribute to a lack of the magic “ability” the field requires rather than to the immature state of bleeding-edge knowledge. Perelman presumably navigated these difficulties as he progressed in mathematical research.

One is reminded of the old sayings “actions speak louder than words” and “talk is cheap”. If Perelman’s proof stands the test of time, he has done much. If he is sincere in declining prizes, honors, and adulation, he sets an example by his actions. In reading Perelman’s story, one also cannot shake the impression that he may have had some unhappy experiences during his stay in the United States and went home silently vowing “I’ll show them,” which he apparently has.

**The Poincaré Conjecture**

Donal O’Shea’s The Poincaré Conjecture is a more pleasant read than Perfect Rigor, lacking that book’s hostile tone, though it sugar-coats a number of topics. Perelman is merely “eccentric”. Little is said about “Manifold Destiny” or the ugly priority dispute. O’Shea focuses on the history of geometry and the Poincaré Conjecture, tells mostly inspiring stories about great mathematicians, and tries to explain the mathematics of the conjecture to a general audience.

On the whole, The Poincaré Conjecture is an enjoyable and informative read. O’Shea carefully debunks the myth that scholars in the Middle Ages and the ancient world believed the Earth was flat. He gives an interesting account of Columbus and the slow discovery of the exact shape and geography of the Earth, which confirmed the ancient theory of a spherical Earth. He slowly and deftly leads the reader through the history of mathematics and geometry to the Poincaré Conjecture, the many failed attempts to prove it, and the apparently final solution by Perelman.
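For readers who want the precise statement both books build toward, the conjecture (now a theorem, thanks to Perelman) fits in one line. The notation below is standard topology shorthand, not drawn from either book:

```latex
% Poincaré Conjecture (1904), proved by Perelman (2002–2003):
% every closed, simply connected 3-manifold is homeomorphic to the 3-sphere.
M \text{ a closed, simply connected 3-manifold}
  \;\Longrightarrow\; M \cong S^3
```

Here “closed” means compact and without boundary, and $\cong$ denotes homeomorphism.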

Some of the illustrations leave a bit to be desired. In discussing mathematics in the ancient world, O’Shea uses modern CIA maps to show the ancient Greek region of Ionia, where Pythagoras was born, and the Middle East. One map, for example, shows modern Bulgaria, which did not exist in the time of Pythagoras. Similarly, O’Shea discusses ancient Babylonia and Persia, but the associated map shows modern Iraq and Iran. Hopefully, this will be fixed in a future edition.

Some of the discussion of hyperbolic geometry, and most of the chapter on Poincaré’s topology papers, which presents the actual conjecture, could be improved. The diagrams and explanation on page 27 of the chapter “Possible Worlds,” showing how the surface of a two-holed torus can be mapped to an octagon, are hard to follow. O’Shea returns to the two-holed torus and the octagon in Chapter 10, “Poincaré’s Topological Papers,” by which point many readers will have forgotten the discussion on page 27. The term “natural geometry” is used in this chapter but never clearly defined, and a number of the diagrams are small and difficult to follow. Interested readers can find a better explanation of some of the relevant aspects of hyperbolic geometry in the second chapter of Roger Penrose’s The Road to Reality, which features some entertaining Escher prints showing the so-called “Poincaré disc model” of hyperbolic geometry (first discovered not by Poincaré but by Eugenio Beltrami, as Penrose carefully points out).

One can only go so far with analogies to rubber sheets or cloth fabric in describing topology and especially differential geometry. This is a problem many popular mathematics and science books encounter. If we had a better way of explaining and introducing differential calculus to a general audience, this would improve the general public’s ability to follow issues in mathematics and science and also improve our educational system.

Pure mathematics today suffers from a particularly opaque and confusing language; it now typically takes a skilled person several months to master the arcane vocabulary of the field. Abstraction has been taken to an extreme. Words and phrases such as “algebra”, “ring”, “module”, “field”, and so forth have meanings in pure mathematics that differ both from common usage and from the language of applied mathematics used in most engineering and much of physics. The Poincaré Conjecture suffers in places from terms like “natural geometry” that carry a special meaning in pure mathematics.

**Conclusion**

Both books focus on the genius of Perelman and of famous mathematicians such as Gauss, Riemann, and Poincaré. Indeed, the subtitle of Perfect Rigor is “A Genius and the Mathematical Breakthrough of the Century”. This superman theory of scientific progress, with its strong focus on extreme intelligence, is common in popular science and math books and articles.

The story of the Poincaré Conjecture, at least until Perelman, is a story of large amounts of trial and error (lots of error), as both books allude to. Henri Poincaré formulated the conjecture in 1904 and published an incorrect proof. Almost every year since has seen the publication or presentation of attempted proofs. Numerous mathematicians, including some at the very top of the field, have published incorrect proofs. Many different approaches to the problem have been developed; most failed. Richard Hamilton developed the basic approach that Perelman built upon, but apparently stopped making progress in the 1980s or early 1990s. Large amounts of trial and error are common in the detailed history of inventions and discoveries, including discoveries in pure and applied mathematics.

It is clear that Perelman spent at least seven years on the Poincaré Conjecture. We have no idea how much trial and error, and how much failure, took place during those years. Perelman reportedly fixed two minor errors in his first paper in the subsequent two papers posted to www.arXiv.org in 2002 and 2003. Other inventors and discoverers have frequently gone through long periods of trial and error and repeated failure before their “breakthrough”. While respecting Perelman’s accomplishments, we should also be interested in the precise process used to reach the answer, and avoid attributing it to magical genius alone.

Both Perfect Rigor and The Poincaré Conjecture are interesting and informative books for general audiences. Even practicing mathematicians may gain some insights and new information from Perfect Rigor. Yet Grigory Perelman remains an enigma, and a definitive biography remains to be written. The world might learn a lot from more details on how he discovered his proof of the Poincaré Conjecture.

(C) Copyright 2010, John F. McGowan, Ph.D.

**About the Author**

John F. McGowan, Ph.D. is a software developer, research scientist, and consultant. He works primarily in the area of complex algorithms that embody advanced mathematical and logical concepts, including speech recognition and video compression technologies. He has extensive experience developing software in C, C++, Visual Basic, Mathematica, and many other programming languages. He is probably best known for his AVI Overview, an Internet FAQ (Frequently Asked Questions) on the Microsoft AVI (Audio Video Interleave) file format. He has worked as a contractor at NASA Ames Research Center involved in the research and development of image and video processing algorithms and technology. He has published articles on the origin and evolution of life, the exploration of Mars (anticipating the discovery of methane on Mars), and cheap access to space. He has a Ph.D. in physics from the University of Illinois at Urbana-Champaign and a B.S. in physics from the California Institute of Technology (Caltech). He can be reached at jmcgowan11@earthlink.net.

The post An Unreasonable Man appeared first on Math ∞ Blog.

The post Two Beautiful Mathematical Documentaries appeared first on Math ∞ Blog.

The first narrates the story of Andrew Wiles, who proved Fermat’s Last Theorem in 1994. It is a relatively short documentary, coming in at about 45 minutes, but I find it both inspiring and a nice aid to understanding Andrew as a person, before thinking of him as a superb mathematician. This documentary is based on Simon Singh’s excellent book

Fermat’s Enigma: The Epic Quest to Solve the World’s Greatest Mathematical Problem.

The second documentary focuses on the obsessive quest for knowledge shared by Georg Cantor, Ludwig Boltzmann, Kurt Gödel, and Alan Turing. The basic idea behind “Dangerous Knowledge” is that these outstanding mathematicians’ genius and obsession ultimately led to their madness and tragic deaths. In truth, I feel that the underlying thread that tries to tie the four stories together is forced.

For example, Alan Turing was persecuted for his homosexuality, and it is believed that this had a significant impact on his eventual suicide. Yet the filmmakers try to lead viewers to the conclusion that the quest to understand infinity is what drove these mathematicians to insanity, a claim that is entirely unsupported. Nevertheless, if you’re aware of the agenda behind this film, you’ll get a beautiful 1h 29m documentary that is absolutely worth watching. It poses interesting questions about the nature of knowledge, our understanding of nature, and other puzzling dilemmas spanning mathematics, physics, and philosophy.

What other mathematical documentaries are you fond of? If various titles are suggested, we could definitely start a nice must-watch math documentary list here on Math-Blog.com.
