In an enterprise such as the building of the atomic bomb the difference between ideas, hopes, suggestions and theoretical calculations, and solid numbers based on measurement, is paramount. All the committees, the politicking and the plans would have come to naught if a few unpredictable nuclear cross sections had been different from what they are by a factor of two.
Emilio Segre (Nobel Prize in Physics, 1959, key contributor to the Manhattan Project) quoted in The Making of the Atomic Bomb by Richard Rhodes (Simon and Schuster, 1986)
It is widely believed that invention and discovery, especially breakthroughs, revolutionary technological advances and scientific discoveries, are largely the product of genius, of the exceptional intelligence of individual inventors and discoverers. This is one of the lessons frequently inferred from the success of the wartime Manhattan Project which invented the atomic bomb and nuclear reactors. It is often argued that the Manhattan Project succeeded because of the exceptional intelligence of the physicists, chemists, and engineers who worked on the atomic bomb such as Emilio Segre, quoted above. The scientific director J. Robert Oppenheimer is often described as a genius, as are many other key contributors.
Since World War II, there have been numerous “new Manhattan Projects” which have recruited the best and the brightest as conventionally defined and mostly failed to replicate the astonishing success of the Manhattan Project: the War on Cancer, tokamaks, inertial confinement fusion, sixty years of heavily funded research into artificial intelligence (AI), and many other cases. As discussed in the previous article “The Manhattan Project Considered as a Fluke,” the Manhattan Project appears to have been a fluke, atypical of major inventions and discoveries, especially in the success of its first full system tests: the Trinity test explosion (July 16, 1945) and the atomic bombings of Hiroshima and Nagasaki (August 6 and 9, 1945), which cost the lives of over 100,000 people and which are, fortunately, so far the only uses of atomic weapons in war.
With rising energy prices, possibly due to “Peak Oil,” a dwindling supply of inexpensive oil and natural gas, there have already been many calls for “new new Manhattan Projects” for various forms of alternative energy. If “Peak Oil” is correct, there is an urgent and growing need for new energy sources. Given the long history of failure of “new Manhattan Projects,” what should we do? This article argues that the importance of genius in breakthroughs is heavily overstated both in scientific and popular culture. Much more attention should be paid to other aspects of the breakthrough process.
To a significant extent, the issue of human genius in inventions and discovery overlaps the topic of the previous article “But It Worked in the Computer Simulation!” which argues that computer simulations have many limitations at present. Frequently, when people refer to human genius they are referring to the ability of human beings to simulate their ideas in their head without actually building a machine or performing a physical experiment. Many of the limitations that apply to theoretical mathematical calculations and computer simulations apply to human beings as well.
One important difference at present is that human beings think conceptually and computers at present cannot. This article argues that many historical breakthroughs were due to an often unpopular contrarian mental attitude that is largely uncorrelated with “genius” as conventionally defined — not due to exceptional conceptual reasoning skills. The success of this contrarian mental attitude is often dependent on the acceptance, which is usually grudging at first, of society at large.
A Note to Readers: The issue of genius and breakthroughs is highly relevant to invention and discovery in mathematics, both pure and applied. This article discusses many examples from applied mathematical fields such as physics, aerospace, power, propulsion, and computers. Nonetheless, it is not a mathematics specific article.
What is Genius?
Genius is difficult to define. It is usually conceived as an innate ability, often presumed to be genetic in origin, to solve problems through reasoning better than most people. It is often discussed as if it referred to a simple easily quantifiable feature of the mind such as the speed at which people think consciously (in analogy to the clock speed of a computer) or the number of items that one can keep track of in the conscious mind at once (in analogy to the number of registers in a CPU or the amount of RAM in a computer). People have tried to quantify a mysterious “general intelligence” through IQ tests. In practice, genius is often equated with a high IQ as measured on these tests (e.g. an IQ of 140 or above on some tests is labeled as “genius”).
Genius is an extremely contentious topic. Political conservatives tend to embrace genius and a genetic basis for genius. Political liberals tend to reject genius and especially a genetic basis for genius. Some experts, such as the psychologist K. Anders Ericsson, essentially deny that genius exists as a meaningful concept. The science writer Malcolm Gladwell, who has heavily popularized Ericsson’s ideas, stops just short of “denying” genius in his writings and public presentations.
Many people, including the author, have a subjective impression that some people are smarter than other people. The author has met a number of people who seemed clearly smarter than the author, something difficult to explain in purely environmental terms. In practice, it is extremely difficult to separate environment from possible genetic factors, or other as yet unknown factors, that may contribute to perceived or measured “intelligence.” Sometimes really smart people do extremely dumb things: why?
Genius is almost always conceived as an individual trait, similar to height or hair color, something largely independent of our present social environment. Geniuses are exceptional individuals independent of their friends, family, coworkers and so forth. Genius may be the product of environment in the sense of better schooling and so forth. Rich kids generally go to better schools or so most people believe. Nonetheless, in practice, in the scientist’s laboratory or the inventor’s workshop, “genius” is viewed as an individual trait. This conception of individual genius coexists with curious rhetoric about “teams” in business or “scientific communities” in academic scientific research today.
In particular, genuine breakthroughs usually take place in a social context, as part of a group. Historically, prior to World War II and the transformation of science that occurred during the middle of the twentieth century, these were often small, loose-knit, informal groups. James Watt collaborated loosely with some professors at the University of Glasgow in developing the separate condenser steam engine. Octave Chanute and the Wright Brothers seem to have collaborated informally without a written contract or clear team leader. Albert Einstein participated in a physics study group while at the patent office and worked closely at times with his friend and sometimes co-author the mathematician Marcel Grossmann. In his work on a unified field theory, in a different social context at the Institute for Advanced Study at Princeton, Einstein largely failed.
After success, there were often bitter fallings out over credit: “I did it all!” The “lone” inventor or discoverer who is now remembered and revered is typically the individual who secured the support of a powerful institution or individual, as James Watt did with the wealthy industrialist Matthew Boulton, as the Wright Brothers (minus Octave Chanute) did with the infamous investment firm of Charles Flint and Company, and as Einstein did with the powerful German physicist Max Planck and later the British astronomer and physicist Arthur Eddington. In a social context, the whole can be greater than the sum of the parts. A group of mediocrities who work well together (whatever that may mean in practice) can outperform a group of “stars” who do not work well together. There may be no individual genius as commonly conceived.
This article accepts that individual genius probably exists as a meaningful concept, but genius is poorly understood. It argues that genius is not nearly as important in genuine scientific and technological breakthroughs as generally conceived.
Genius and Breakthroughs in Popular Culture
In the United States, popular culture overwhelmingly attributes scientific and technological breakthroughs to genius, to extreme intelligence. This is especially true of science fiction movies and television such as Eureka, Numb3rs, Star Trek, The Day the Earth Stood Still (1951), The Absent-Minded Professor (1961), Real Genius (1985), and many others. Movies and television frequently depict extremely difficult problems being solved with little or no trial and error very quickly, sometimes in seconds. It is common to encounter a scene in which a scientist is shown performing some sort of symbolic manipulation on a blackboard (sometimes a modern white board or a see-through sheet of plastic) in seconds on screen and then solving some problem, often making a breakthrough, based on the results of this implied computation or derivation. This is also extremely common in comic books. There are a number of materials in popular culture aimed specifically at children, such as the famous Tom Swift book series and the Jimmy Neutron movie and TV show (The Adventures of Jimmy Neutron: Boy Genius), which communicate the same picture. Many written science fiction books and short stories convey a similar image.
Many of these popular culture portrayals are extremely unrealistic, particularly where genuine breakthroughs are concerned. In particular, most genuine breakthroughs took many years, usually at least five years, sometimes decades, even if one only considers the individual or group who “crossed the finish line.” Most genuine breakthroughs, on close examination, have involved large amounts of trial and error, anywhere from hundreds to tens of thousands of trials or tests of some sort.
Ostensibly factual popular science is often similar. It is extremely common to find the term “genius” in the title, sub-title, or cover text of a popular science book or article as well as the main body of the book or article. The title of James Gleick’s biography of the famous physicist Richard Feynman (Nobel Prize in Physics, 1965, co-discoverer of Quantum Electrodynamics aka QED) is… Genius. Readers of the book remain shocked to this day to read that Feynman claimed that his IQ had been measured as a mere 125 in high school; this is well above average but not what is usually identified as “genius.” A genius IQ is at least 140. Feynman scoffed at psychometric testing, perhaps with good reason. One should exercise caution with Feynman’s claims. Richard Feynman was an entertaining storyteller. Some of his accounts of events differ from the recollections of other participants (not an uncommon occurrence in the history of invention and discovery). Feynman’s non-genius IQ is not as surprising as it might seem. One can seriously question whether a number of famous figures in the history of physics were “geniuses” as commonly conceived: Albert Einstein, Michael Faraday, and Niels Bohr, for example.
Popular science often creates a similar impression to the science fiction described above without, however, making demonstrably false statements. Often, the long periods of trial and error and failure that precede a breakthrough are simply omitted or discussed very briefly. The reported flashes of insight, the so-called “Eureka moments,” which can be very fast and abrupt if the reports are true, are generally emphasized and extracted from the usual context of years of study and frequent failure that precede the flash of insight. Popular science books tend to focus on personalities, politics, the big picture scientific or technical issues, and… the genius of the participants. The discussions of the trial and error, if they exist at all, are extremely brief and easy to miss: a paragraph or a few pages in a several hundred page book for example. In the 886 page The Making of the Atomic Bomb, the author Richard Rhodes devotes a few paragraphs to the enormous amount of trial and error involved in developing the implosion lens for the plutonium atomic bomb (page 577, emphasis added):
The wilderness reverberated that winter to the sounds of explosions, gradually increasing in intensity as the chemists and physicists applied small lessons at a larger scale. “We were consuming daily,” says (chemist George) Kistiakowsky, “something like a ton of high performance explosives, made into dozens of experimental charges.” The total number of castings, counting only those of quality sufficient to use, would come to more than 20,000. X Division managed more than 50,000 major machining operations on those castings in 1944 and 1945 without one explosive accident, vindication of Kistiakowsky’s precision approach.
While a close reading of The Making of the Atomic Bomb reveals an enormous amount of trial and error at the component level, it is easy to miss this given how short and oblique the references are, buried in 886 pages. The term “trial and error” is not listed in the detailed 24 page index of the book. The index on page 884 lists Tregaskis, Richard, Trinity, tritium, etc. in sequence — no “trial and error”.
In most cases, popular science books don’t point out the obvious interpretation of these huge amounts of trial and error. One is not seeing the results of genius, certainly not as frequently depicted in popular culture, but rather the results of vast amounts of trial and error. This trial and error is extremely boring to describe in detail, so it is either omitted or discussed very briefly. Where the popular science has the goal of “inspiring” students to study math and science, a detailed exposition of the trial and error is probably a good way to convince a student to go play American football (wimpy American rugby with lots of padding) or soccer (everybody else’s football) instead.
On a personal note, the author read The Making of the Atomic Bomb shortly after it was first published and completely missed the significance of Segre’s quote and the passage above. After researching many inventions and discoveries in detail, it became apparent that the most common characteristic of genuine breakthroughs is vast amounts of trial and error, usually conducted over many years. What about the Manhattan Project? Rereading the book closely reveals occasional clear references to the same high levels of trial and error, in this case at the component level. The Manhattan Project is quite unusual in that the first full system tests were great successes: they worked right the first time. Many of the theoretical calculations appear to have worked better than is typically the case in other breakthroughs.
Remarkably, the Manhattan Project appears to have been unusually “easy” among major scientific and technological breakthroughs. The first full system tests, the Trinity, Hiroshima, and Nagasaki bombs, were spectacular successes which ended World War II in days. This is very unusual. Attempts to replicate the unusual success of the Manhattan Project have mostly failed. It may well be that even in most successful inventions and discoveries the equivalents of the critical nuclear cross sections that Segre mentions in the quote above are less convenient than they proved to be in the Manhattan Project.
The Rapture for Geeks
In 1986, the science fiction writer and mathematician Vernor Vinge published the novel-length story “Marooned in Realtime” in Analog Science Fiction/Science Fact magazine; it was shortly thereafter published as a book by St. Martin’s Press/Bluejay Books. This novel introduced the notion of a technological singularity to a generation of geeks.
The basic notion that Vinge presented in the novel was that rapidly advancing computer technology would increase or amplify human intelligence. This in turn would accelerate both the development of computer technology and other technology, resulting in an exponential increase, eventually reaching a mysterious “singularity” somewhat in analogy to the singularities in mathematics and physics (typically a place in a mathematical function where the function becomes infinite or undefined). In the novel, most of the human race appears to have suddenly disappeared, possibly the victims of an alien invasion. A tiny group of survivors have been “left behind.” By the end of the novel, it is strongly implied that the missing humans have transcended to God-like status in a technological singularity.
Vinge’s notion of a technological singularity has had considerable influence and it probably also helps sell computers and computer software. It has been taken up and promoted seriously by inventor, entrepreneur, and futurist Ray Kurzweil, the author of such books as The Age of Spiritual Machines and The Singularity is Near. Kurzweil is, for example, the chancellor of the Singularity University, which charges hefty sums to teach the Singularity doctrine to well-heeled individuals, likely Silicon Valley executives and zillionaires. Kurzweil’s views have been widely criticized, notably by former Scientific American editor John Rennie and others. The recent movie “Transcendent Man,” available on Netflix and iTunes, gives a friendly but fair portrait of Ray Kurzweil.
The Singularity concept implicitly assumes the common notion that intelligence and genius drive the invention and discovery process. It also assumes that computer technology can amplify or duplicate human intelligence. Thus, increase intelligence and the number and rate of inventions and discoveries will automatically increase. An exponential feedback loop follows logically from these assumptions.
If invention and discovery is largely driven by large amounts of physical trial and error (for example), none of this is true. To be sure, fields such as computers and electronics, with small-scale devices on which physical trial and error can be performed rapidly and cheaply, will tend to exhibit higher rates of progress than fields with huge, expensive devices that take years to build, such as modern power plants, tokamaks, particle accelerators and so forth. This is, in fact, what we see at the moment. But there will be no Singularity.
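The contrast between the two assumptions can be made concrete with a toy model. This sketch is purely illustrative and not from any source: the function names, the 10% gain per period, and the fixed trial budget are all arbitrary choices made up for the example. If each advance amplifies the capability that produces the next advance, growth compounds exponentially; if progress is instead gated by a fixed rate of physical trial and error, growth is roughly linear.

```python
def feedback_model(steps, gain=0.1):
    """Toy Singularity loop: each advance amplifies the rate of the next."""
    capability = 1.0
    for _ in range(steps):
        capability *= 1.0 + gain  # amplified intelligence compounds
    return capability

def trial_limited_model(steps, gain_per_step=0.1):
    """Toy trial-and-error loop: a fixed physical trial budget per period."""
    capability = 1.0
    for _ in range(steps):
        capability += gain_per_step  # fixed increment, no compounding
    return capability

# Compounding feedback is exponential; trial-limited progress is linear.
print(round(feedback_model(50), 1))       # roughly 1.1**50, about 117.4
print(round(trial_limited_model(50), 1))  # 1.0 plus fifty increments of 0.1
```

The point of the sketch is only that the Singularity argument hinges on which loop better describes reality: the same fifty periods yield wildly different outcomes depending on whether progress feeds back into itself or is rationed by slow, expensive experiments.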
There is now over forty years of experience in fundamental physics and aerospace, both early adopters of computer technology, in using computers to supposedly enhance human intelligence and accelerate the rate of progress. Both of these fields visibly slowed down around 1970 coincident with the widespread adoption of computers in these fields. This is particularly noticeable in aviation and rocketry where modern planes and rockets are only slightly better than the planes and rockets of 1971 despite the heavy use of computers, computer simulations, computer aided design, and so forth. NASA’s recent attempt to replicate the heavy lift rocket technology of the 1970s (the Saturn V rocket), the modern Ares/Constellation program, has foundered despite extensive use of computer technologies far in advance of those used in the Apollo program, which quite possibly owed much of its success to engineers using slide rules.
Similarly, the practical results of fundamental physics, comparable to the nuclear reactors that emerged from the Manhattan Project, have been disappointing. It is even possible that the prototype miniature nuclear reactors and engines of the cancelled nuclear reactor/engine projects of the 1960s exceed what we can do today; knowledge has been lost due to lack of use.
Are computers and computer software amplifying effective human intelligence? If one looks outside the computer/electronics fields, the evidence for this is generally negative, poor at best. Are computers and computer software accelerating the rate of technological progress, invention and discovery, increasing the rate of genuine breakthroughs? Again, if one looks outside the computer/electronics fields, the evidence is mostly negative. This is particularly noticeable in the power and propulsion areas, where progress appears to have been faster in the slide rule and adding machine era. Rising gasoline and energy prices reflect the negligible progress since the 1970s. The relatively high rates of progress observed in some metrics (e.g. Moore’s Law, the clock speed of CPU’s until 2003, etc.) in computers/electronics can be attributed to the ability to perform large amounts of trial and error rapidly and cheaply combined with cooperative physics, rather than an exponential feedback loop.
Genius and Breakthroughs in Scientific Culture
“Hard” scientists like physicists or mathematicians tend to act as if they believe in “genius” or “general intelligence”. In academia, such scientists tend to be liberal Democrats in the United States. Consciously, they probably do not believe that this genius is an inborn, genetic characteristic. Nonetheless, the culture and institutions of the hard sciences are built heavily around the notion of individual measurable genius.
Many high school and college math and science textbooks have numerous sidebars with pictures and brief biographical sketches of famous prominent mathematicians and scientists. These often include anecdotes that seem to show how smart the mathematician or scientist was. A particularly common anecdote is the account of the young Gauss figuring out how to quickly add the numbers from 1 to 100. (The trick: 1 plus 100 is 101, 2 plus 99 is 101, 3 plus 98 is 101, and so on, so the sum is 50 times 101, which is 5050.)
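Gauss's pairing trick is equivalent to the closed form n(n + 1)/2. A minimal sketch, checking the formula against brute-force addition (the function name is just for illustration):

```python
def gauss_sum(n):
    """Sum 1 + 2 + ... + n by Gauss's pairing: n/2 pairs, each totaling n + 1."""
    return n * (n + 1) // 2

# The young Gauss's case: 50 pairs each summing to 101.
assert gauss_sum(100) == sum(range(1, 101)) == 5050
```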
Much of the goal of the educational system in math and science is ostensibly to recruit and select the best of the best, in the supposed spirit of the Manhattan Project. There are tests and exams and competitions all designed to select the very best. In modern physics, for example, this means that the very top graduate programs such as the graduate program at Princeton are largely populated by extreme physics prodigies: people who have done things like publish original papers on quantum field theory at sixteen and who, by any reasonable criterion, could, in principle, run rings around historical figures like Albert Einstein or Niels Bohr. But, in practice, they usually don’t.
Psychologists like K. Anders Ericsson, sociologists, anthropologists, and other “softer” scientists are indeed more likely to seriously question the notion of genius and its role in invention and discovery, at least more broadly than most physicists or mathematicians. Even here, though, Ericsson’s theory, for example, attributes breakthroughs to individual expertise acquired through many years of deliberate practice.
It is common in discussions of breakthroughs to find circular reasoning about the role of genius. How do you know genius is needed to make a breakthrough? Bob discovered X and Bob was a genius! How do you know Bob was a genius? Only a genius could have discovered X!
The belief that genius is the essential driving force behind breakthroughs — the more significant the breakthrough, the more brilliant the genius must have been — is so strong and pervasive that the inventor or discoverer is simply assumed to have obviously been a genius and any contrary evidence dismissed. Richard Feynman’s claim to have had a measured IQ of only 125 often provokes incredulity. It is simply assumed that the discoverer of QED had to have been a genius. James Gleick titled his biography of Feynman Genius in spite of knowing Feynman’s claim.
So too Albert Einstein is almost always assumed to have been a remarkable genius. The author can recall a satirical practice at Caltech, a celebration of a special day for a high school teacher who allegedly flunked Einstein: “What an idiot!” But Einstein was in fact an uneven student. He made many mistakes both in school and in his published papers. He ended up at the patent office, working on his Ph.D. part time at the less prestigious University of Zurich, because his academic record was not strong. His erstwhile professor Hermann Minkowski was famously astounded that Einstein accomplished such amazing things. Einstein seems to have worked on his discoveries over many years, and he seems to have had the contrarian mental attitude so common among people who make major breakthroughs. He also probably would have gone nowhere had Max Planck not become intrigued with several of his papers and heavily promoted them.
Niels Bohr was infamously obscure in his talks and writings. He had very limited mathematical skills and relied first on his brother Harald, a mathematician, and later on younger assistants such as Werner Heisenberg. Many of his papers and writings are impenetrable. His response in Physical Review to the 1935 Einstein, Podolsky, and Rosen paper, which is now taken to clearly identify the non-local nature of quantum mechanics while questioning the foundations of quantum theory, is complete gibberish. Yet Bohr acquired such a mystique as a brilliant physicist and genius that many of these dubious writings were uncritically accepted by his students and many other physicists — even to this day.
It is clear that if breakthroughs were usually the product of a short period of time, such as six months or less, and little or no trial and error, as often implied in popular science and explicitly portrayed in much science fiction, something like real genius would be absolutely necessary to explain the breakthroughs. But this is not the case. Almost all major breakthroughs took many years of extensive trial and error. Most inventors and discoverers seem to have been of above average intelligence, like the IQ of 125 that the physicist Richard Feynman claimed, but not clearly geniuses as conventionally defined. Some were definitely geniuses as conventionally defined.
Intelligence or Social Rank?
In discussions of intelligence or genius, one needs to ask whether one is really talking about intelligence, whatever it may be, or social rank. Most societies rely heavily on a hierarchical military chain of command structure. This structure is found equally in government, academia, business, capitalist nations, socialist nations, and communist nations. In military chains of command there is almost always an implicit concept of a simple linear scale of social rank or status as well as specific roles. A general outranks a colonel even though the colonel may not report to the general. A four star general outranks a three star general and so forth. One of the practical reasons for this is so that in a confused situation such as a battle, it is always clear who should assume command, the ranking officer.
In many respects, in the United States, the concept of intelligence is often used as a proxy or stand-in for social rank or status. In academic scientific research, the two are often equated implicitly. An eminent scientist such as Richard Feynman must be a genius, hence the astonishment at his claim to a mere 125 IQ. England in 1776 had a very status-conscious society. Everyone was very aware of their linear rank in society. To give some idea of this, in social dances the dances would be chosen in sequence, starting with the highest-ranking woman at the dance choosing the first dance, followed by the second-ranking woman, and so forth. Somehow everyone knew exactly how each person was ranked in their community. When the United States broke away from England, this notion of rank was questioned and even rejected. Americans actually drew lots at dances to decide who would choose the dances, in an explicit rejection of the English notions of status. This is not to portray the early United States as some egalitarian utopia; surely it was not. Nonetheless, from the early days, the United States tended to reject traditional notions of social status and rank, and substituted notions like “the land of opportunity.”
But the United States and the modern world have social ranks and status, sometimes by necessity, sometimes not. How to justify this and perhaps also disguise the reality? Aha! Some people are smarter than other people, and their position in society is due to their innate intelligence, which (surprise, surprise) is a linear numeric scale, and hard work! All animals are equal, but some animals are more equal than others.
Genius or Mental Attitude?
Clearly there is more to breakthroughs than pure trial and error. Blind trial and error could never find the solution to a complex difficult problem in even hundreds of thousands of attempts. It is clear that inventors and discoverers put a great deal of thought into what to try and what lessons to derive from both failures and successes. Many inventors and discoverers have noted down tens, even hundreds of thousands of words of analysis in their notebooks, published papers, books, and so forth. Something else is going on as well. There is often a large amount of conceptual analysis and reasoning, as well as the trial and error. Can we find real genius here? Maybe.
However, the most common and best understood kind of conceptual reasoning leading to a genuine breakthrough does not particularly involve recognizable genius. Indeed, one can argue the inventors and discoverers are doggedly doing something rather dumb. In many, many genuine breakthroughs the inventors or discoverers try something that seems like it ought to work over and over again, failing repeatedly. They are often following the conventional wisdom, what “everyone knows”: the motion of the planets is governed by uniform circular motion, rockets have always been made using powdered explosives, Smeaton’s coefficient (aviation) is basic textbook know-how measured accurately years ago for windmills, etc. How smart is it to try something that fails over and over and over again for years? How much genius is truly involved in finally stopping and saying: “you know, something must be wrong; some basic assumption that seems sensible can’t be right.”
At this point, one should make a detailed list of assumptions, both explicit and implicit, and carefully examine the experimental data and theory behind each assumption. Not infrequently in history, this process has revealed that something “everyone knew” was not well founded. Then one needs to find a replacement assumption or set of assumptions. Sometimes this is done by conscious thought or yet more trial and error: what if the motion of the planets follows an ellipse, one of the few other known mathematical functions in 1605 when Kepler discovered the elliptical motion of Mars?
Sometimes the new assumption or group of assumptions seems to pop out of nowhere in a “Eureka” moment. The inventor or discoverer often cannot explain consciously how he or she figured it out. This latter case raises the possibility of some sort of genius. But is this true? Many people experience little creative leaps or solutions to problems that they cannot consciously explain. This usually takes a while. For everyday problems the lag between starting work on the problem and the leap is measured in hours or days or maybe weeks. The lag is generally longer the harder the problem. Breakthroughs involve very difficult, complex problems, much larger in scope than these everyday problems. In this case, the leap takes longer and is more dramatic when it happens. This is a reasonable theory, although there is currently no way to prove it. Are we seeing genius, exceptional intelligence, or a common subconscious mental process operating over years — the typical timescale of breakthroughs?
Is the ultimate willingness to question conventional wisdom after hundreds or thousands of failures genius or simply a contrarian mental attitude, which, of course, must be coupled with a supportive environment? If people are being burned at the stake either figuratively or literally for questioning conventional wisdom and assumptions, this mental attitude will fail and may be tantamount to suicide. In this respect, society may determine what happens and whether a breakthrough occurs.
Historically, inventors and discoverers often turn out to have been rather contrarian individuals. Even so, it often took many years of repeated failure before they seriously questioned the conventional wisdom, despite an often clear propensity to do so. Is it correct to look upon this mental attitude as genius, or as something else? In many cases, extremely intelligent people as conventionally measured were, and are, demonstrably unwilling to take this step, even in the face of thousands of failures. In the many failed “new Manhattan Projects” of the last forty years, the best and the brightest, recruited in the supposed spirit of the Manhattan Project on the theory that genius is the driver of invention and discovery, are often unwilling to question certain basic assumptions. Are genuine breakthroughs driven by individual genius or by a social process which is often uncomfortable to society at large and to the participants?
The rhetoric of “thinking outside the box” and “questioning assumptions” is pervasive in modern science and modern society. The need to question assumptions is evident even from a cursory examination of the history of scientific discovery and technological invention. It is not surprising that people and institutions say they are doing this and may sincerely believe that they are. Many modern scientific and technological fields do exhibit fads and fashions that are presented as “questioning assumptions,” “thinking outside the box,” and “revolutionary new paradigms.” In fact some efforts that have yielded few demonstrable results such as superstrings in theoretical physics or the War on Cancer are notorious for rapidly changing fads and fashions of this type. On the other hand, on close examination, certain basic assumptions are largely beyond question such as the basic notion of superstrings or the oncogene theory of cancer. In the case of superstrings, a number of prominent physicists have publicly questioned the theory including Sheldon Glashow, Roger Penrose, and Lee Smolin, but it remains very dominant in practice.
The role of genius as commonly defined in genuine breakthroughs appears rather limited. Breakthroughs typically involve very large amounts of trial and error over many years. This alone can create the illusion of exceptional intelligence if the large amounts of trial and error and calendar time are neglected. There is clearly a substantial amount of conceptual analysis and reasoning in most breakthroughs. Some kind of genius, probably very different from normal concepts of genius, may be involved in this. Unlike common portrayals in which geniuses solve extremely difficult problems rapidly, the possible genius in breakthroughs usually operates over a period of years. While inventors and discoverers usually appear to have been above average in intelligence (like Richard Feynman, who claimed a measured IQ of only 125), they are often not clearly geniuses as commonly defined. The remarkable flashes of insight, the “Eureka” experiences, reported by many inventors and discoverers may well be examples of relatively ordinary subconscious processes operating over an extremely long period of time — the many years usually involved in a genuine breakthrough.
The most common and best understood form of conceptual reasoning involved in many breakthroughs is not particularly mysterious nor indicative of genius as commonly conceived. Developing serious doubts about the validity of commonly accepted assumptions after years of repeated failure is neither mysterious nor unusual nor a particular characteristic of genius. Actually, many geniuses as commonly defined often have difficulty taking this step even with the accumulation of thousands of failures. This is more indicative of a certain mental attitude, a willingness to question conventional wisdom and society. Identifying and listing assumptions, both stated and unstated, and then carefully checking the experimental and theoretical basis for these assumptions is a fairly mechanical, logical process; it does not require genius. Most people can do it. Most people are uncomfortable with doing it and often avoid doing so even when it is almost certainly warranted. This questioning of assumptions is also likely to fail if society at large is too resistant, unwilling even grudgingly to accept the results of such a systematic review of deeply held beliefs.
In the current economic difficulties, which may be due to “Peak Oil,” a dwindling supply of inexpensive oil and natural gas, there may well be an urgent and growing need for new energy sources and technologies. This has already led to calls for “new new Manhattan Projects” employing platoons of putative geniuses to develop or perfect various hoped-for technological fixes such as thorium nuclear reactors, hydrogen fuel cells, and various forms of solar power. The track record of the “new Manhattan Projects” of the last forty years is rather poor and should give everyone pause. The original Manhattan Project was certainly unusual in the success of its first full system tests and perhaps in other ways as well. This alone argues for assuming that many full system tests, probably hundreds, will generally be needed to develop a new technology. Success is more likely with inexpensive, small-scale systems where the many, many trials and errors usually needed for a breakthrough can be performed quickly and cheaply.
But what about genius? Many breakthroughs may be due in part to powerful subconscious processes found in most people but operating over many years rather than genius as commonly defined. Genius of some kind may be necessary, but if the contrarian mental attitude frequently essential to breakthroughs is lacking or simply rejected by society despite the pervasive modern rhetoric about “questioning assumptions” and “thinking outside the box,” then failure is in fact likely, an outcome which would probably be bad for almost everyone, perhaps the entire human race. It is not inconceivable that we could experience a nuclear war over dwindling oil and natural gas supplies in the Middle East or elsewhere — certainly an irrational act but really smart people sometimes do extremely dumb things.
© 2011 John F. McGowan
About the Author
John F. McGowan, Ph.D. solves problems by developing complex algorithms that embody advanced mathematical and logical concepts, including video compression and speech recognition technologies. He has extensive experience developing software in C, C++, Visual Basic, Mathematica, MATLAB, and many other programming languages. He is probably best known for his AVI Overview, an Internet FAQ (Frequently Asked Questions) on the Microsoft AVI (Audio Video Interleave) file format. He has worked as a contractor at NASA Ames Research Center involved in the research and development of image and video processing algorithms and technology. He has published articles on the origin and evolution of life, the exploration of Mars (anticipating the discovery of methane on Mars), and cheap access to space. He has a Ph.D. in physics from the University of Illinois at Urbana-Champaign and a B.S. in physics from the California Institute of Technology (Caltech). He can be reached at [email protected].
It is an interesting argument, but I think it fails to consider some useful elements. First, regardless of any IQ test, we have independent evidence of Feynman’s intellectual horsepower. He was a Putnam Fellow. Being a Putnam Fellow means being at the very top of your cohort in mathematical reasoning, as expressed in exactly six hours of work. No room for trial and error on the Putnam exam. You see the solution and can work out (and express) the details really fast or you zero out.
The correlation between way above average intelligence and discovery is far from perfect, but even casual observation is convincing that they are strongly related.
Second, if great discovery were mostly a trial-and-error process, then it should be quite uncommon for an individual to make very high grade discoveries two or more times. But, in practice, the contributions of scientists are often thought of as logarithmically distributed (to borrow Landau’s concept). The top rank does 10x what the next rank does, which does 10x what the rank below it does, and so forth. By and large, there are only a few one-shot wonders. Feynman came up with innovative and deep work throughout his life. So did Shannon. So did Bardeen. So did Hilbert. So did many others.
The points about social context are well taken. Great work very rarely comes out of a vacuum. There are usually many contenders at work. Einstein barely finished in front of Poincare and Hilbert.
I disagree about the observations on rate of progress. It is certainly true that propulsion has slowed. The main reason is that, after massive trial and error, we’ve settled on propellants with just about as high a specific impulse (Isp) as can be achieved with any chemical reactions. There is nowhere else to go but nuclear. We are not doing that for political reasons, not technical ones.
Similarly, nothing is stopping Ares/Constellation except budget. The record of space systems as the computer era has progressed is quite good. The record of first time success for most classes of space system is excellent (typically better than 95%) and hugely better than in the slide rule era. Moreover, rigorous mission assurance processes demonstrably improve first-time success, although one can have a long argument if it is cost effective or not relative to lighter weight processes.
While I think the singularitarians’ claims are overblown, the right place to look for other examples of exponential growth is in biology, not energy. The rapid growth occurs in immature fields. Look at the rate of growth for DNA sequencing, protein synthesis, neuroscience, and the like.
Do the rapture of the geeks and the singularity doctrine play some of the roles religions used to play?
I would say definitely. I am a sympathetic skeptic. I read “Marooned in Realtime” a couple of times and really enjoyed it. At the time I read it I was largely unaware of the parallels to various religious doctrines and mysticism. I can say the same about unified field theories in particle physics, which have a strong resemblance to various Neo-Platonic and Kabbalistic ideas. My Ph.D. is in experimental particle physics.
I think people really need to learn math. Math is becoming more important as computers make it possible to use mathematics in many new ways. I think mathematics and science can solve some of the problems that people like Ray Kurzweil hope it can. On the other hand, I think people including senior scientists and engineers are carrying over, sometimes consciously and more often subconsciously, ideas from magic, mysticism, and religion to mathematics and science. These ideas lead to very unrealistic ideas and expectations of real math and science. One of the major differences is the large amount of trial and error and fumbling around in real mathematical research and real scientific research.
Intelligence and mathematics are being conceived of as a power like the power of a traditional magician. In the traditional Neo-Platonic magic of the West, the magician is conceived of as channeling a power, the Platonic world spirit (not unlike the chi in China, the prana in India, the “Force” in Star Wars, and so on), in analogy to something like water flowing through pipes or other hydraulics. This is the explicit analogy found in ancient magical writings. In a lot of ways, the notion of IQ or general intelligence as a sort of mental horsepower is not unlike this notion. You apply this power, turn the crank by performing magical rituals (symbolic manipulation), and out pop the desired results. Real math and science usually don’t work like that.
In response to Mark’s comments:
I am skeptical about the interpretation of tests, whether they are the IQ test or the Putnam. It is very difficult to tell whether a test is measuring some innate mental “horsepower” or whether the test taker knows, has studied, perhaps practiced heavily the type of problems that appear on the test. IQ tests are actually scaled for the chronological age of the test taker, because in absolute terms people clearly get better on IQ tests with age and experience. This actually can explain why Richard Feynman could have a non-genius IQ test score of 125 and a top score on the Putnam exam. He was comparatively unfamiliar with the types of problems on the IQ test (he was well above average) and very familiar with the problems on the Putnam test. Indeed, time spent studying and practicing the types of problems that appear on the Putnam exam might well have cut into and reduced the time spent studying and practicing the types of problems that appear on an IQ test.
On a short test of a few hours with very advanced, complex problems, the test need not reward high intelligence even where it exists. Rather, the problems may be too complex and difficult for even an extremely intelligent person to solve in the limited time. The best-scoring test takers will be the people who have specifically studied and learned this type of problem and already know the steps to take, without any trial and error.
Highly intelligent people may actually be unprepared for this. They may have a history of being able to solve simpler problems, e.g., high school level problems, on the fly without drilling or practice. They may assume they can do the same on more advanced tests and attribute the success of others to high intelligence rather than to higher levels of practice or study.
I have seen this directly in graduate school in physics. In graduate school, there are tests usually known as “qualifying exams,” which in our case were two three-hour exams with advanced physics problems. The word of mouth was to practice past tests heavily to learn the types of problems on these tests. Graduate students who studied general principles, physics textbooks, etc. almost always failed the exams. The people who passed and did well specifically practiced the types of problems on the qualifying exams by working through old exams. The problems were simply too advanced for even very bright students to figure out by reasoning from first principles in the short time available. I have talked to other physicists who had the same experience in other graduate programs in physics. This is very much like K. Anders Ericsson’s concept of “deliberate practice” rather than the conventional notion of “genius” or mental horsepower.
Even if someone can absorb existing knowledge, how to solve known problems, very quickly due to some innate mental ability, this does not mean they can discover new knowledge rapidly. The two abilities may be distinct or mostly distinct. Academic courses in math and science do not teach the trial and error process, whatever it may be, involved in genuine breakthroughs. Many Ph.D.’s are awarded for measuring or calculating some quantity X to slightly greater accuracy and similar sorts of research projects which have a limited resemblance to the extremely unpredictable, error prone, frustrating process of most breakthrough research and development.
I think there is a strong selection effect today, more so than before World War II. Math and science underwent a major transformation in the middle of the twentieth century, part of which involved a markedly increased reliance on tests and formal academic credentials. So, today, someone can very rarely play the game unless they have extremely high test scores and very high intelligence as conventionally measured. Looking back into the past, inventors and discoverers usually appear to have had above average intelligence as conventionally measured, sometimes much more so, but the relationship is not as clear as today.
There are a lot of one-shot cases in breakthroughs. There are some repeaters. It seems to take between five years and a few decades to make a major breakthrough. People live 60-100 years. It is quite possible for someone to make multiple breakthroughs. Looking at repeaters, they have to engage in large amounts of trial and error each time over a period of several years. Kepler figured out his first two laws of planetary motion (in modern terminology) in 1605. He figured out his third law of planetary motion in 1619, after studying Tycho Brahe’s data for another fourteen years. If someone works in a field like physics from age 20 to age 65, they could potentially make nine breakthroughs at five years each. Was Feynman really a repeater? He certainly published many papers and did many good things after QED, but another breakthrough? Bardeen certainly was: he won two Nobel Prizes for two major discoveries, the transistor and the theory of superconductivity.
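Kepler’s third law, mentioned above, is concrete enough to check numerically: the square of a planet’s orbital period is proportional to the cube of its orbit’s semi-major axis, so T²/a³ should come out (nearly) the same for every planet. A minimal sketch in Python, using well-known approximate orbital data (periods in years, semi-major axes in astronomical units); the specific planet list is my choice for illustration:

```python
# Kepler's third law: T^2 is proportional to a^3, so the ratio T^2 / a^3
# should be (nearly) the same constant for every planet orbiting the Sun.
# Approximate, well-known orbital data: (period in years, semi-major axis in AU).
planets = {
    "Mercury": (0.241, 0.387),
    "Venus":   (0.615, 0.723),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.881, 1.524),
    "Jupiter": (11.86, 5.203),
}

# Compute T^2 / a^3 for each planet; in these units the constant is ~1.
ratios = {name: T**2 / a**3 for name, (T, a) in planets.items()}
for name, r in ratios.items():
    print(f"{name:8s} T^2/a^3 = {r:.3f}")
```

The ratios all land within about one percent of 1.0, which is part of why the law has survived centuries of increasingly precise measurement.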
I basically think there is “real genius” and it is correlated with making genuine breakthroughs. I think the correlation is significantly less than 100% and the “real genius” is probably not a simple linear scale, something like the horsepower of a car engine. It probably has multiple independent components, some of which are more important in some breakthroughs than in other breakthroughs. The mental attitude of genuinely questioning assumptions isn’t really anything like the concept of individual intelligence or genius; it is a social process and both my personal experience and research indicate it is pretty unrelated to how smart someone is as conventionally defined.
Regarding progress in rocketry, one should expect very high failure rates when a technology such as a rocket is being developed. This occurred in the 1950s and early 1960s. Once people know how to build a new technology, these failure rates should drop, sometimes to near zero. This is unrelated to slide rules versus computers. It happened with steam engines and automobiles long before computers. Once you know what to do, you can repeat it. A 5% catastrophic failure rate (95% success) is pretty high. No one in 1920 would have purchased automobiles with failure rates at that level, not that autos in 1920 didn’t have significant reliability problems. People before computers were able to refine their manufacturing processes to achieve much lower failure rates than seen in rocketry in the United States. Who apparently has the lowest failure rates in modern rocketry? Reportedly Russia, with the Vostok family of rockets — something like a 99% success rate. Russia had and has very limited use of computers compared to the United States. What is the difference? From what I understand, the Russians don’t tinker with a proven rocket design; they just keep building the same rocket over and over again. If it ain’t broke, don’t fix it. The United States is constantly tinkering with its designs using our sophisticated computers and computer-aided design tools.
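To put those success rates in perspective, a quick back-of-the-envelope calculation helps (a sketch; the 95% and 99% figures are the rates discussed above, and the launch counts are arbitrary choices of mine). The chance of at least one catastrophic failure in n independent launches at per-launch success rate p is 1 − pⁿ:

```python
# At per-launch success rate p, the probability of at least one
# catastrophic failure over n independent launches is 1 - p**n.
p_fail = {}
for p in (0.95, 0.99):
    for n in (10, 20, 50):
        p_fail[(p, n)] = 1 - p ** n
        print(f"success rate {p:.0%}, {n:2d} launches: "
              f"P(at least one failure) = {p_fail[(p, n)]:.0%}")
```

At a 95% per-launch success rate, a 20-launch program has roughly a 64% chance of at least one catastrophic failure; at 99%, roughly 18%. That is the practical gap between the two reliability levels.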
I would agree that the problems with Ares/Constellation are partly political. But is that the whole story? NASA went off and spent many billions of dollars designing and simulating a very advanced rocket, instead of, for example, dusting off the proven Saturn V design and building something very quickly within the political cycle. They were finally starting static tests of the Ares just before the program was cancelled. If people are seduced into spending years performing detailed computer simulations of rockets and planes and tokamaks and reactors that never fly or never really work, and the programs are eventually cancelled in part because policy makers and the public start asking the reasonable questions “where is my rocket?” or “where is my fusion reactor?” or “where is my cure for cancer?”, is this computers accelerating progress or slowing it?
Is biology a good example of computers accelerating progress? I focused on fundamental physics and aerospace because these fields were heavy early adopters of computer technology in the United States back in the 1970s; to some degree, there is forty years of experience with computers in these fields. Biology has begun to computerize much more recently, which makes it hard to evaluate the success of computerized and mathematical methods. It is too early to tell if huge speedups in DNA sequencing, for example, will translate into practical results: a cure for cancer, longer lifespans, etc.
Modern high-tech genetic biology has not had many practical successes. So far there may have been some significant advances in quite rare diseases associated with a single gene and protein, such as cystic fibrosis. These diseases are all so rare that it is difficult to independently confirm the progress. I believe there has been major progress in treating cystic fibrosis, but I don’t know anyone with cystic fibrosis. I know people with cancer, and I know the official statistics for this common disease are massaged to look better than they really are. These advances for single gene and protein diseases are certainly a good thing. Hopefully, throwing powerful computers at major diseases like cancer will achieve practical results, but the track record in physics and aerospace should give people pause. I think everyone wants real results and not empty quantitative measures of progress like numbers of genes sequenced and so forth.
A few follow-up thoughts.
I can agree that the correlation between “genius,” intellectual horsepower, and making important scientific discoveries is very substantial, but less than 1.0. I agree that a lot of smart people are poor at questioning assumptions, but my experience is that being an effective questioner (not just a reflexive skeptic) is correlated with intelligence.
Perhaps Putnam Fellows are just people who practice a lot, but the Putnam Fellow who lived down the hall from me at Caltech didn’t spend much time practicing for it. My roommate didn’t prepare for it more than a few hours and came within a hair’s breadth of being a Fellow. The guy who was a Fellow has something quite big named after him now. Obviously, it is a trivially small data set, but that is my experience.
You can certainly design tests that are unpassable except by studying old versions. One might wonder what the point of such a test is, since a qualifying exam should test basic understanding. That’s the point of a PhD qualifier. In my program the word-of-mouth was to take certain classes, since they were regarded as the source for what the faculty considered “fundamental.” It certainly appeared to work that way when I took the exam. Getting through a PhD program has a lot of tricks that are pretty orthogonal to actually conducting a scientific career.
On spacecraft, my point is that in the space of systems that aren’t and can’t be turned into manufacturing line items (exploration probes) the record of first-time, only-time success is much better now in the computer era than it was in the slide rule era. It is much better in spite of much higher complexity. Moreover, software-rich space probes have an excellent record of being recoverable from in-flight anomalies, something that virtually never happened in the old-days.
On Constellation, I think you have a misperception of what NASA chose to do. If NASA wanted a conservative heavy-lift design it would hardly make sense to dust off Saturn V stuff, for which the industrial base has been dead for 40 years. It would make more sense to integrate heavy-lift components that are extensions of items currently in serial manufacture, like RD-180 or RS-68 engines and SRBs. In fact, the Ares V design was almost exactly that, except for the J-2X upper stage engine, which was a dust-off of the Saturn V engine. Of the propulsion components it was the J-2X that required the most work, since you can’t buy 1960s electronics anymore and many of the manufacturing technologies are obsolete.
The Ares rockets had a political problem much like that which killed the Saturn V. Really large rockets have really large carrying costs in infrastructure (launch processing, etc.). This makes sense economically only if you launch a lot of really large payloads fairly frequently (like the Apollo pace at its peak). The carrying costs are largely independent of launch rate. So, the case for heavy lift makes sense only if your total program is large enough (in payload mass) to beat out smaller systems flying more often.
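The carrying-cost argument above can be sketched with a toy model. All the numbers below are hypothetical placeholders of my own, chosen only to illustrate how a large fixed infrastructure cost makes cost per kilogram fall steeply with flight rate:

```python
# Toy launch-economics model. ALL numbers are hypothetical, for illustration only.
fixed_cost = 2.0e9      # $/year carrying cost of heavy-lift infrastructure (assumed)
launch_cost = 0.5e9     # marginal $ per launch (assumed)
payload_kg = 100_000    # payload delivered per launch (assumed)

# The fixed cost is paid regardless of flight rate, so cost per kg
# drops sharply as the annual launch rate rises.
cost_per_kg = {}
for launches in (1, 2, 6, 12):
    total = fixed_cost + launch_cost * launches
    cost_per_kg[launches] = total / (launches * payload_kg)
    print(f"{launches:2d} launches/yr -> ${cost_per_kg[launches]:,.0f}/kg")
```

With these made-up numbers, cost per kilogram drops from $25,000 at one launch per year to under $7,000 at twelve, which is the sense in which heavy lift makes sense only if the total program is large enough.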
I’m really curious to see if SpaceX can change the equation. They are trying (and they use highly automated and streamlined processes), but whether it works or not is still open. SpaceX is largely developing their subsystems (like engines) from scratch, though of course they reuse established component approaches.
I think a better test of information technology applied to biology is what basic scientific breakthroughs result, not the progress of medicine. If you want to assess whether IT is useful in medicine, examine the direct application, not the indirect one through basic biology. As far as basic biology goes, I think the verdict is in. The computer era (including computer-enabled laboratory devices) has led to an explosion of knowledge in genetics and neuroscience. Application to medicine will depend, as Segre noted of the Manhattan Project, on luck: whether or not there are useful low-complexity pathways to disease processes. There may not be.
I think a better lesson to take from the Manhattan Project, and the less famous but no less impressively staffed MIT Radiation Lab, is the power of research groups that have all the critical factors:
1. A critical mass of highly intelligent and creative people embedded in a highly interactive social milieu.
2. Resources and leadership that translate ideas to trial-and-error practice at scale very rapidly, with lots of feedback, without the bureaucratic delays of budget cycles.
3. A support staff and culture that support and drive experimental results and scale-up.
so you don’t think grisha perleman is
imho, all nontrival progress in math is driven by the brilliant. Yes, people
have collaborators, but they are also
math is going through a golden age–famous
historical open problems are being solved.
could it be that the engineers and scientists working on the saturn rocket (a cutting-edge tech) were more talented and better educated than the ares engineers?
i like your blog.
off the thread. i have to get back to research, and lunch!
In Masha Gessen’s biography of Grigoriy Perelman, Perfect Rigor, she describes math clubs, math coaches, all kinds of deliberate practice type activities from an early age, leading up to a top score at the International Math Olympiad and much greater achievements after that.
yes, but the olympiad is at the high school level. grisha’s solution of the soul problem in geometry astonished the people in the field. his work on the classification of three-manifolds was more astonishing. he worked very hard.
The purpose of this article and the preceding articles is to achieve better practical results in research and development than have been achieved in most of the “new Manhattan Projects” of the last forty-plus years, most of which have not reproduced the spectacular performance of the original Manhattan Project. I argue this is partly because the Manhattan Project was a fluke.
Some points of clarification in response to Mark’s second comment.
I don’t subscribe to K. Anders Ericsson’s “deliberate practice” theory of expert performance. I think it is mostly correct for some sports, games, and intellectual activities. As Mark indicated, one can design tests that can only be passed by practicing old tests — I believe I have encountered this. I argue that Ericsson has generalized this true observation to a general theory of expertise that probably does not hold up. In my own personal experience, I achieved the level of academic performance needed to get into Caltech without the high levels of practice that Ericsson describes. I encountered a couple of prodigy types at Caltech much as Mark describes. One person seemed to study a lot. The other person did not seem to study much at all. Appearances can be deceiving. Both had unusual backgrounds. Their parents were university professors which probably gave them a significant boost.
I suspect some people do indeed achieve high scores on the Putnam or other contests through heavy levels of practice and some do not. Where practice is the explanation, it probably takes years of practice to reach the top level, so the amount of practice one sees right before the test may not give a proper indication of the total amount that has occurred. As noted above, Masha Gessen’s biography of Grigoriy Perelman, Perfect Rigor, describes math clubs, math coaches, and all kinds of deliberate-practice activities from an early age, leading up to a top score at the International Math Olympiad and much greater achievements after that.
It is difficult to separate improvements due to computers replacing slide rules from those due to growing knowledge of rocketry and space. If for some reason computers had stopped progressing in 1971, what would we be able to do today with slide rules, adding machines, and room-sized computers with 64K of RAM and clock speeds of 500,000 cycles per second? How reliable would first-time space probes have become? How powerful would the rockets and engines be? The reason I ask is that if one naively extrapolates the progress seen in aviation and rocketry from 1903 to 1971 out to 2011, one would expect something more like the future depicted in 2001: A Space Odyssey: routine supersonic transport on Earth, nuclear or ion drives in space, and manned exploration of the outer planets. We have better rockets and space probes today than in 1971, but not that much more powerful or different. We can probably do some things today that would be impossible without modern computers, software, and electronics. On the other hand, it is not clear that, overall, computers have accelerated technological progress in aviation and rocketry; they may have slowed it.
I am sure that Mark is more familiar with the details of the Ares/Constellation program than I am. Nonetheless, I continue to have the impression that Ares/Constellation was seduced into the morass of excessive planning and computer simulation that seems to have consumed numerous previous NASA and aerospace programs. I grant that I could be mistaken.
I am not arguing that computers, all other factors held equal, would not be an improvement over slide rules. I suspect they have become a de facto substitute for the conceptual reasoning that computers cannot do, and probably for other things as well. Proposing a giant computer simulation program to the Nth decimal point does not raise the social and political issues involved in questioning certain basic assumptions. If we want to get better results with the very powerful computers of today, we may need to return to some customs, practices, and ideas of the past, or invent new ones.
With respect to biology, I tend to focus on practical results which can be independently verified. Fundamental scientific breakthroughs have a particularly high “mortality” rate; that is, many don’t hold up over the centuries. We can be pretty confident that Kepler made a major advance both because his laws of planetary motion have stood the test of centuries in a scholarly and scientific sense, and because they now have practical benefits: we actually use them in satellites and space systems today. There are many putative scientific breakthroughs, like the “cancer worms” for which Johannes Andreas Grib Fibiger won the Nobel Prize in 1926, and the oxygen deprivation theory of cancer associated with Otto Warburg, who received a Nobel Prize in 1931, which have not held up. From looking at Google, it looks like there may be an effort to revive Warburg’s theory 🙂 Indeed, it might have had some truth in it. It was absolute orthodoxy for many years, then it went out of favor, and now it may be coming back as cancer researchers continue to struggle.
This infant mortality of putative breakthroughs is not so much a problem with technological inventions, at least once they are clearly in widespread use. Even if the theory behind the invention is wrong, one can still see that the invention works. You can watch videos on YouTube or make Skype calls and see the new video compression that appeared in 2003 (H.264) in action. There is no question that it is a major technological advance.
Modern computers enable the collection of vast amounts of data and the implementation of extreme complexity. One can collect gene sequencing and other data on a scale unimaginable even a few decades ago. So too, modern computers enable extremely complex mathematical models with, in some cases, hundreds of thousands of adjustable parameters. If one chooses quantitative metrics, one can make this look very good: an “explosion of knowledge”. That argument appears explicitly in blue-ribbon scientific panel reports and the like. Does this explosion of data and complexity really translate into practical results? Sometimes yes (video compression algorithms that work are quite complex). Sometimes maybe (our mediocre speech recognition algorithms have hundreds of thousands of tunable parameters). Sometimes maybe not (superstrings, the oncogene theory of cancer, …?).
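The gap between fitting data and genuinely explaining it can be seen in a toy sketch (a hypothetical illustration of my own, not from the text above): polynomials of increasing degree fitted to noisy samples of a simple curve. The in-sample error always shrinks as parameters are added; the error against the true underlying curve need not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy measurements of a simple underlying law, here sin(pi * x).
x_train = np.linspace(-1, 1, 15)
y_train = np.sin(np.pi * x_train) + rng.normal(0.0, 0.2, x_train.size)
x_test = np.linspace(-1, 1, 400)
y_true = np.sin(np.pi * x_test)

def fit_errors(degree):
    """Least-squares polynomial fit; return (in-sample, vs-truth) RMSE."""
    coeffs = np.polyfit(x_train, y_train, degree)
    in_sample = np.sqrt(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
    vs_truth = np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_true) ** 2))
    return in_sample, vs_truth

for degree in (1, 3, 14):
    in_sample, vs_truth = fit_errors(degree)
    print(f"degree {degree:2d}: in-sample RMSE {in_sample:.3f}, "
          f"error vs true curve {vs_truth:.3f}")
```

With 15 data points, the degree-14 polynomial interpolates the noise exactly, so its in-sample error is essentially zero while its error against the true curve remains large. By the in-sample metric alone, the most overfitted model looks the best.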
Computers would have made it possible to expand the Ptolemaic theory of the solar system to hundreds of thousands of epicycles and very high levels of seeming accuracy. If one adds enough adjustable parameters, one can get fairly good agreement with almost any data. Copernicus, Galileo, and Kepler had considerable difficulty overcoming the seeming accuracy, detail, and sophistication of the Ptolemaic model; how much harder it would have been to compete with a computerized model with thousands or even hundreds of thousands of tunable parameters. Many fields show strong evidence that massive computerized slicing and dicing of data is yielding spurious results. Every year or two, particle physics produces yet another five-sigma bump: pentaquarks and all kinds of other entities. This is not unique to physics; I think it is actually much worse in medicine and biology today.
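The mechanism behind such spurious bumps, often called the look-elsewhere effect, is easy to simulate (again a toy sketch of my own, not from the text): scan enough bins of pure noise and seemingly significant excesses appear regularly.

```python
import numpy as np

rng = np.random.default_rng(1)

n_bins, n_experiments = 100, 2000
mean_counts = 100.0  # expected background per bin; there is no signal anywhere

# For each simulated "experiment", find the most significant-looking bin.
counts = rng.poisson(mean_counts, size=(n_experiments, n_bins))
z = (counts - mean_counts) / np.sqrt(mean_counts)  # local significance per bin
max_z = np.abs(z).max(axis=1)                      # best "bump" per experiment

frac = np.mean(max_z > 3.0)
print(f"experiments with a >3 sigma 'bump' in pure noise: {frac:.1%}")
```

Roughly a fifth of these pure-noise "experiments" contain a bin that locally looks like a three-sigma discovery, simply because a hundred places were searched. With automated pipelines scanning millions of histograms, even five-sigma local excesses become routine.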
What should we do to achieve better results? That is a difficult question. Attempts to clone the Manhattan Project have mostly failed and will probably continue to fail. Widely held conceptions about genius and IQ probably contribute to that failure.
One further point of clarification.
The Wikipedia entry for Otto Warburg discusses some of his work on cancer. As the topic is controversial, particular caution with Wikipedia is warranted: entries on controversial topics often seem to be taken over by partisans of one view or another.
I have not researched Warburg’s work heavily. It is my impression from my reading that his ideas were taken quite seriously in the 1930s and 1940s, hence the Nobel Prize. I have seen articles from this period that describe his theory of cancer causation as a proven fact, or in similar terminology. His theory seems to have been exiled to quackland for a while and to be currently enjoying some revival.
Warburg’s cancer theory is an example of the difficulty of evaluating purely scientific or scholarly breakthroughs in the absence of practical results.
“His response in Physical Review to Einstein, Podolsky, and Rosen’s 1935 paper, which is now taken to clearly identify the non-local nature of quantum mechanics in the process of questioning the foundations of quantum theory, is complete gibberish.”
I’m curious about this. More details or sources?
I mean, why was it gibberish?
This is the original Einstein-Podolsky-Rosen (usually referred to as EPR) paper available online at the Physical Review web site:
Phys. Rev. 47, 777–780 (1935)
Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?
This is Arthur Ruark’s letter in partial rebuttal to Einstein, which is actually quite clear:
Phys. Rev. 48, 466–467 (Issue 5, September 1935)
Is the Quantum-Mechanical Description of Physical Reality Complete?
Arthur E. Ruark
University of North Carolina, Chapel Hill, North Carolina
Unfortunately, Ruark’s letter does not appear to be available for free download.
Bohr’s Famous But Rarely Read Response
Bohr, N., 1935a, “Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?”, Physical Review, 48: 696–702 (Issue 8, October 1935)
Ruark’s letter, which is not widely known, is actually a clearer partial rebuttal of Einstein’s argument. Bohr’s response, by contrast, is complete double talk: you cannot follow the argument even if you really try to trace it in detail.
What was going on here? “Completeness” has a very nitpicky technical meaning in theoretical physics; I am not actually sure that I understand the sense in which Einstein was using it in 1935. It is now taken to mean that Einstein was arguing there must be hidden variables in quantum mechanics, not expressed in the mathematical formalism associated with Bohr and his colleagues. That may or may not have been what Einstein intended. Einstein could have been referring to the lack of a symbolic mathematical formula or rule describing what we now call the quantum mechanical measurement process. Such a rule was clearly lacking in the quantum theory of 1935; I believe it is still lacking today, despite some contrary claims. My guess is that in using the term “complete” Einstein was trying to put his intuitive feeling that quantum mechanics was “not right” into more precise, rigorous language. “It just doesn’t feel right” is not an acceptable title for a physics paper, though maybe it should be :-).
What is for sure is that the Copenhagen Interpretation of quantum mechanics, and the concept of “complementarity” with classical physics that Bohr discusses in his response, does not provide a rigorous, predictive mathematical account of what constitutes a “measurement” in quantum mechanics. This is the problem that all QM paradoxes such as Schrödinger’s Cat illustrate, and which Bohr could never explain.
EPR is a particularly extreme example of the measurement problem, since it implies faster-than-light communication if there are hidden variables. A lot of discussions in mainstream physics dance around this point, but that is what it means. There do not have to be hidden variables and faster-than-light communication in quantum mechanics; that is partly the implication of Ruark’s letter. The universe could just be that way. In the Copenhagen Interpretation there is a wave function which supposedly collapses when a measurement occurs, and there are no hidden variables. But the EPR effect would require this wave function, which we cannot directly measure, to collapse faster than light; it is usually implied to collapse instantaneously, which is hard to reconcile with relativity. Indeed, mathematical attempts to reconcile quantum mechanics and relativity (quantum field theory) introduce all sorts of infinities, renormalization, and other problems.
The problem that Bohr is dodging in his response to EPR, whether deliberately or not, is what constitutes an actual measurement, and how a measurement performed on only one particle can somehow affect, instantaneously, a physical system of two particles separated by an arbitrary distance (in principle, a separation of light years). There are now many experiments that seem to confirm this astonishing result predicted by quantum mechanics, a result which would imply faster-than-light communication if there are hidden variables.
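For concreteness, the situation can be written down compactly in the standard modern spin-1/2 version of the EPR setup (due to Bohm; this is my illustration, not the position and momentum variables of the original EPR paper):

```latex
% Two spin-1/2 particles, A and B, prepared in the singlet state:
\[
  \lvert \psi \rangle \;=\; \frac{1}{\sqrt{2}}
  \left( \lvert \uparrow \rangle_{A} \lvert \downarrow \rangle_{B}
       - \lvert \downarrow \rangle_{A} \lvert \uparrow \rangle_{B} \right)
\]
% A spin measurement on A along any axis gives up or down with equal
% probability, but a measurement on B along the same axis is then perfectly
% anti-correlated, no matter how far apart A and B are: this is the
% "collapse" that must somehow act instantaneously across the separation.
```

The formalism predicts the perfect anti-correlation; what it does not supply is a rule for when and how the measurement on A turns the joint state into a definite outcome.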
I have a much longer discussion of these issues and the bitter history of the dispute in this article:
In Bohr’s defense, he was a very conceptual thinker, like Einstein, with comparatively weak mathematical skills. That put him, and ironically Einstein as well, at a disadvantage as quantum mechanics grew in mathematical complexity and began to give birth to what is now quantum field theory. A reader can see there is very little math in Bohr’s paper, just a footnote. There is somewhat more math in EPR, but it is still quite basic: some integrals, at the level of first-year calculus. EPR, too, is mostly about conceptual issues. EPR is very clear, and Bohr’s response is incoherent.
Thanks a lot for your answer.