In an enterprise such as the building of the atomic bomb the difference between ideas, hopes, suggestions and theoretical calculations, and solid numbers based on measurement, is paramount. All the committees, the politicking and the plans would have come to naught if a few unpredictable nuclear cross sections had been different from what they are by a factor of two.
Emilio Segre (Nobel Prize in Physics, 1959, key contributor to the Manhattan Project) quoted in The Making of the Atomic Bomb by Richard Rhodes (Simon and Schuster, 1986)
It is widely believed that invention and discovery, especially breakthroughs, revolutionary technological advances and scientific discoveries, are largely the product of genius, of the exceptional intelligence of individual inventors and discoverers. This is one of the lessons frequently inferred from the success of the wartime Manhattan Project which invented the atomic bomb and nuclear reactors. It is often argued that the Manhattan Project succeeded because of the exceptional intelligence of the physicists, chemists, and engineers who worked on the atomic bomb such as Emilio Segre, quoted above. The scientific director J. Robert Oppenheimer is often described as a genius, as are many other key contributors.
Since World War II, there have been numerous “new Manhattan Projects” which have recruited the best and the brightest as conventionally defined and mostly failed to replicate the astonishing success of the Manhattan Project: the War on Cancer, tokamaks, inertial confinement fusion, sixty years of heavily funded research into artificial intelligence (AI), and many other cases. As discussed in the previous article “The Manhattan Project Considered as a Fluke,” the Manhattan Project appears to have been a fluke, atypical of major inventions and discoveries, especially in the success of the first full system tests: the Trinity test explosion (July 16, 1945) and the atomic bombings of Hiroshima and Nagasaki (August 6 and 9, 1945), which cost the lives of over 100,000 people and which are, fortunately, so far the only examples of the use of atomic weapons in war.
With rising energy prices, possibly due to “Peak Oil,” a dwindling supply of inexpensive oil and natural gas, there have already been many calls for “new new Manhattan Projects” for various forms of alternative energy. If “Peak Oil” is correct, there is an urgent and growing need for new energy sources. Given the long history of failure of “new Manhattan Projects,” what should we do? This article argues that the importance of genius in breakthroughs is heavily overstated both in scientific and popular culture. Much more attention should be paid to other aspects of the breakthrough process.
To a significant extent, the issue of human genius in inventions and discovery overlaps the topic of the previous article “But It Worked in the Computer Simulation!” which argues that computer simulations have many limitations at present. Frequently, when people refer to human genius they are referring to the ability of human beings to simulate their ideas in their head without actually building a machine or performing a physical experiment. Many of the limitations that apply to theoretical mathematical calculations and computer simulations apply to human beings as well.
One important difference at present is that human beings think conceptually and computers at present cannot. This article argues that many historical breakthroughs were due to an often unpopular contrarian mental attitude that is largely uncorrelated with “genius” as conventionally defined — not due to exceptional conceptual reasoning skills. The success of this contrarian mental attitude is often dependent on the acceptance, which is usually grudging at first, of society at large.
A Note to Readers: The issue of genius and breakthroughs is highly relevant to invention and discovery in mathematics, both pure and applied. This article discusses many examples from applied mathematical fields such as physics, aerospace, power, propulsion, and computers. Nonetheless, it is not a mathematics specific article.
What is Genius?
Genius is difficult to define. It is usually conceived as an innate ability, often presumed to be genetic in origin, to solve problems through reasoning better than most people. It is often discussed as if it referred to a simple easily quantifiable feature of the mind such as the speed at which people think consciously (in analogy to the clock speed of a computer) or the number of items that one can keep track of in the conscious mind at once (in analogy to the number of registers in a CPU or the amount of RAM in a computer). People have tried to quantify a mysterious “general intelligence” through IQ tests. In practice, genius is often equated with a high IQ as measured on these tests (e.g. an IQ of 140 or above on some tests is labeled as “genius”).
Genius is an extremely contentious topic. Political conservatives tend to embrace genius and a genetic basis for genius. Political liberals tend to reject genius and especially a genetic basis for genius. Some experts such as the psychologist K. Anders Ericsson essentially deny that genius exists as a meaningful concept. The science writer Malcolm Gladwell, who has heavily popularized Ericsson’s ideas, stops just short of “denying” genius in his writings and public presentations.
Many people, including the author, have a subjective impression that some people are smarter than others. The author has met a number of people who seemed clearly smarter than he is, a difference that seemed hard to explain in purely environmental terms. It is extremely difficult in practice to separate environment from possible genetic factors or other as yet unknown factors that may contribute to perceived or measured “intelligence.” Sometimes really smart people do extremely dumb things: why?
Genius is almost always conceived as an individual trait, similar to height or hair color, something largely independent of our present social environment. Geniuses are exceptional individuals independent of their friends, family, coworkers and so forth. Genius may be the product of environment in the sense of better schooling and so forth. Rich kids generally go to better schools or so most people believe. Nonetheless, in practice, in the scientist’s laboratory or the inventor’s workshop, “genius” is viewed as an individual trait. This conception of individual genius coexists with curious rhetoric about “teams” in business or “scientific communities” in academic scientific research today.
In particular, genuine breakthroughs usually take place in a social context, as part of a group. Historically, prior to World War II and the transformation of science that occurred during the middle of the twentieth century, these were often small, loose-knit, informal groups. James Watt collaborated loosely with some professors at the University of Glasgow in developing the separate condenser steam engine. Octave Chanute and the Wright Brothers seem to have collaborated informally without a written contract or clear team leader. Albert Einstein participated in a physics study group while at the patent office and worked closely at times with his friend and sometimes co-author the mathematician Marcel Grossmann. In his work on a unified field theory, in a different social context at the Institute for Advanced Study at Princeton, Einstein largely failed.
After success, there were often bitter fallings out over credit: “I did it all!” The “lone” inventor or discoverer that is now remembered and revered is typically the individual who secured the support of a powerful institution or individual, as James Watt did with the wealthy industrialist Matthew Boulton, the Wright Brothers (minus Octave Chanute) did with the infamous investment firm of Charles Flint and Company, and Einstein did with the powerful German physicist Max Planck and later the British astronomer and physicist Arthur Eddington. In a social context, the whole can be greater than the sum of the parts. A group of mediocrities that work well together (whatever that may mean in practice) can outperform a group of “stars” who do not work well together. There may be no individual genius as commonly conceived.
This article accepts that individual genius probably exists as a meaningful concept, but genius is poorly understood. It argues that genius is not nearly as important in genuine scientific and technological breakthroughs as generally conceived.
Genius and Breakthroughs in Popular Culture
In the United States, popular culture overwhelmingly attributes scientific and technological breakthroughs to genius, to extreme intelligence. This is especially true of science fiction movies and television such as Eureka, Numb3rs, Star Trek, The Day the Earth Stood Still (1951), The Absent Minded Professor (1961), Real Genius (1985), and many others. Movies and television frequently depict extremely difficult problems being solved with little or no trial and error very quickly, sometimes in seconds. It is common to encounter a scene in which a scientist is shown performing some sort of symbolic manipulation on a blackboard (sometimes a modern white board or a see-through sheet of plastic) in seconds on screen and then solving some problem, often making a breakthrough, based on the results of this implied computation or derivation. This is also extremely common in comic books. There are a number of materials in popular culture aimed specifically at children such as the famous Tom Swift book series and the Jimmy Neutron movie and TV show (The Adventures of Jimmy Neutron: Boy Genius) which communicate the same picture. Many written science fiction books and short stories convey a similar image.
Many of these popular culture portrayals are extremely unrealistic, particularly where genuine breakthroughs are concerned. In particular, most genuine breakthroughs took many years, usually at least five years, sometimes decades, even if one only considers the individual or group who “crossed the finish line.” Most genuine breakthroughs, on close examination, have involved large amounts of trial and error, anywhere from hundreds to tens of thousands of trials or tests of some sort.
Ostensibly factual popular science is often similar. It is extremely common to find the term “genius” in the title, sub-title, or cover text of a popular science book or article as well as the main body of the book or article. The title of James Gleick’s biography of the famous physicist Richard Feynman (Nobel Prize in Physics, 1965, co-discoverer of Quantum Electrodynamics aka QED) is… Genius. Readers of the book remain shocked to this day to read that Feynman claimed that his IQ had been measured as a mere 125 in high school; this is well above average but not what is usually identified as “genius.” A genius IQ is at least 140. Feynman scoffed at psychometric testing, perhaps with good reason. One should exercise caution with Feynman’s claims. Richard Feynman was an entertaining storyteller. Some of his accounts of events differ from the recollections of other participants (not an uncommon occurrence in the history of invention and discovery). Feynman’s non-genius IQ is not as surprising as it might seem. One can seriously question whether a number of famous figures in the history of physics were “geniuses” as commonly conceived: Albert Einstein, Michael Faraday, and Niels Bohr, for example.
Popular science often creates a similar impression to the science fiction described above without, however, making demonstrably false statements. Often, the long periods of trial and error and failure that precede a breakthrough are simply omitted or discussed very briefly. The reported flashes of insight, the so-called “Eureka moments,” which can be very fast and abrupt if the reports are true, are generally emphasized and extracted from the usual context of years of study and frequent failure that precede the flash of insight. Popular science books tend to focus on personalities, politics, the big picture scientific or technical issues, and… the genius of the participants. The discussions of the trial and error, if they exist at all, are extremely brief and easy to miss: a paragraph or a few pages in a several hundred page book for example. In the 886 page The Making of the Atomic Bomb, the author Richard Rhodes devotes a few paragraphs to the enormous amount of trial and error involved in developing the implosion lens for the plutonium atomic bomb (page 577, emphasis added):
The wilderness reverberated that winter to the sounds of explosions, gradually increasing in intensity as the chemists and physicists applied small lessons at a larger scale. “We were consuming daily,” says (chemist George) Kistiakowsky, “something like a ton of high performance explosives, made into dozens of experimental charges.” The total number of castings, counting only those of quality sufficient to use, would come to more than 20,000. X Division managed more than 50,000 major machining operations on those castings in 1944 and 1945 without one explosive accident, vindication of Kistiakowsky’s precision approach.
While a close reading of The Making of the Atomic Bomb reveals an enormous amount of trial and error at the component level, it is easy to miss this given how short and oblique the references are, buried in 886 pages. The term “trial and error” is not listed in the detailed 24 page index of the book. The index on page 884 lists Tregaskis, Richard, Trinity, tritium, etc. in sequence — no “trial and error”.
In most cases, popular science books don’t point out the obvious interpretation of these huge amounts of trial and error. One is not seeing the results of genius, certainly not as frequently depicted in popular culture, but rather the results of vast amounts of trial and error. This trial and error is extremely boring to describe in detail, so it is either omitted or discussed very briefly. Where the popular science has the goal of “inspiring” students to study math and science, a detailed exposition of the trial and error is probably a good way to convince a student to go play American football (wimpy American rugby with lots of padding) or soccer (everybody else’s football) instead.
On a personal note, the author read The Making of the Atomic Bomb shortly after it was first published and completely missed the significance of Segre’s quote and the passage above. After researching many inventions and discoveries in detail, it became apparent that the most common characteristic of genuine breakthroughs is vast amounts of trial and error, usually conducted over many years. What about the Manhattan Project? Rereading the book closely reveals occasional clear references to the same high levels of trial and error, in this case at the component level. The Manhattan Project is quite unusual in that the first full system tests were great successes: the bombs worked the first time. Many of the theoretical calculations appear to have worked better than is typically the case in other breakthroughs.
Remarkably, the Manhattan Project appears to have been unusually “easy” among major scientific and technological breakthroughs. The first full system tests, the Trinity, Hiroshima, and Nagasaki bombs, were spectacular successes which ended World War II in days. This is very unusual. Attempts to replicate the unusual success of the Manhattan Project have mostly failed. It may well be that even in most successful inventions and discoveries the equivalents of the critical nuclear cross sections that Segre mentions in the quote above are less convenient than occurred in the Manhattan Project.
The Rapture for Geeks
In 1986, the science fiction writer and mathematician Vernor Vinge serialized the novel-length story “Marooned in Real Time” in Analog Science Fiction/Science Fact magazine; it was shortly thereafter published as a book by St. Martin’s Press/Bluejay Books. This novel introduced the notion of a technological singularity to a generation of geeks.
The basic notion that Vinge presented in the novel was that rapidly advancing computer technology would increase or amplify human intelligence. This in turn would accelerate both the development of computer technology and other technology, resulting in an exponential increase, eventually reaching a mysterious “singularity” somewhat in analogy to the singularities in mathematics and physics (typically a place in a mathematical function where the function becomes infinite or undefined). In the novel, most of the human race appears to have suddenly disappeared, possibly the victims of an alien invasion. A tiny group of survivors have been “left behind.” By the end of the novel, it is strongly implied that the missing humans have transcended to God-like status in a technological singularity.
Vinge’s notion of a technological singularity has had considerable influence, and it probably also helps sell computers and computer software. It has been taken up and promoted seriously by the inventor, entrepreneur, and futurist Ray Kurzweil, the author of such books as The Age of Spiritual Machines and The Singularity Is Near. Kurzweil is, for example, the chancellor of Singularity University, which charges hefty sums to teach the Singularity doctrine to well-heeled individuals, likely Silicon Valley executives and zillionaires. Kurzweil’s views have been widely criticized, notably by former Scientific American editor John Rennie and others. The recent movie “Transcendent Man,” available on Netflix and iTunes, gives a friendly but fair portrait of Ray Kurzweil.
The Singularity concept implicitly assumes the common notion that intelligence and genius drive the invention and discovery process. It also assumes that computer technology can amplify or duplicate human intelligence. Thus, increase intelligence and the number and rate of inventions and discoveries will automatically increase. An exponential feedback loop follows logically from these assumptions.
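The role of the assumed feedback loop can be made explicit with a schematic sketch (the notation and the specific growth laws are illustrative choices of this article, not equations from Vinge or Kurzweil). If intelligence I feeds back linearly on its own growth, the result is ordinary exponential growth, which is large but finite at every finite time; only a stronger-than-linear feedback produces a true finite-time mathematical singularity:

```latex
% Linear feedback: exponential growth, finite at every finite time t
\frac{dI}{dt} = kI \quad\Longrightarrow\quad I(t) = I_0\, e^{kt}

% Stronger-than-linear feedback: hyperbolic growth that diverges
% at the finite time t_s = 1/(k I_0)
\frac{dI}{dt} = kI^{2} \quad\Longrightarrow\quad I(t) = \frac{I_0}{1 - k I_0 t}
```

The distinction matters because popular accounts slide from “exponential increase” to “singularity” as if they were the same thing; an actual singularity requires the second, much stronger, form of feedback to hold without limit.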
If invention and discovery is largely driven by large amounts of physical trial and error (for example), none of this is true. To be sure, fields such as computers and electronics with small scale devices where physical trial and error can be performed rapidly and cheaply will tend to exhibit higher rates of progress than fields with huge, expensive, time-consuming to build devices such as modern power plants, tokamaks, particle accelerators and so forth. This is, in fact, what we see at the moment. But there will be no Singularity.
There is now over forty years of experience in fundamental physics and aerospace, both early adopters of computer technology, in using computers to supposedly enhance human intelligence and accelerate the rate of progress. Both of these fields visibly slowed down around 1970 coincident with the widespread adoption of computers in these fields. This is particularly noticeable in aviation and rocketry where modern planes and rockets are only slightly better than the planes and rockets of 1971 despite the heavy use of computers, computer simulations, computer aided design, and so forth. NASA’s recent attempt to replicate the heavy lift rocket technology of the 1970s (the Saturn V rocket), the modern Ares/Constellation program, has foundered despite extensive use of computer technologies far in advance of those used in the Apollo program, which quite possibly owed much of its success to engineers using slide rules.
Similarly, if one looks at the practical results of fundamental physics, comparable to the nuclear reactors that emerged from the Manhattan Project, the results have been similarly disappointing. It is even possible the prototype miniature nuclear reactors and engines of the cancelled nuclear reactor/engine projects of the 1960’s exceed what we can do today; knowledge has been lost due to lack of use.
Are computers and computer software amplifying effective human intelligence? If one looks outside the computer/electronics fields, the evidence for this is generally negative, poor at best. Are computers and computer software accelerating the rate of technological progress, invention and discovery, increasing the rate of genuine breakthroughs? Again, if one looks outside the computer/electronics fields, the evidence is mostly negative. This is particularly noticeable in the power and propulsion areas, where progress appears to have been faster in the slide rule and adding machine era. Rising gasoline and energy prices reflect the negligible progress since the 1970s. The relatively high rates of progress observed in some metrics (e.g. Moore’s Law, the clock speed of CPU’s until 2003, etc.) in computers/electronics can be attributed to the ability to perform large amounts of trial and error rapidly and cheaply combined with cooperative physics, rather than an exponential feedback loop.
Genius and Breakthroughs in Scientific Culture
“Hard” scientists such as physicists and mathematicians tend to act as if they believe in “genius” or “general intelligence.” In academia, such scientists tend to be liberal Democrats in the United States. Consciously, they probably do not believe that this genius is an inborn, genetic characteristic. Nonetheless, the culture and institutions of the hard sciences are built heavily around the notion of individual, measurable genius.
Many high school and college math and science textbooks have numerous sidebars with pictures and brief biographical sketches of prominent mathematicians and scientists. These often include anecdotes that seem to show how smart the mathematician or scientist was. A particularly common anecdote is the account of the young Gauss figuring out how to quickly add the numbers from 1 to 100 (the trick: 1 plus 100 is 101, 2 plus 99 is 101, 3 plus 98 is 101, and so on, so the sum is 50 times 101, which is 5050).
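The pairing trick generalizes from 100 to any n; a minimal sketch in code (the function name is ours):

```python
def gauss_sum(n: int) -> int:
    """Sum 1 + 2 + ... + n via Gauss's pairing trick:
    n/2 pairs, each pair summing to n + 1."""
    return n * (n + 1) // 2

# The case from the anecdote: 50 pairs, each summing to 101.
print(gauss_sum(100))  # 5050
```
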
Much of the goal of the educational system in math and science is ostensibly to recruit and select the best of the best, in the supposed spirit of the Manhattan Project. There are tests and exams and competitions all designed to select the very best. In modern physics, for example, this means that the very top graduate programs such as the graduate program at Princeton are largely populated by extreme physics prodigies: people who have done things like publish original papers on quantum field theory at sixteen and who, by any reasonable criterion, could, in principle, run rings around historical figures like Albert Einstein or Niels Bohr. But, in practice, they usually don’t.
Psychologists like K. Anders Ericsson, sociologists, anthropologists, and other “softer” scientists indeed are more likely to seriously question the notion of genius and its role in invention and discovery, at least more broadly than most physicists or mathematicians. Even here though, Ericsson’s theory, for example, attributes breakthroughs to individual expertise acquired through many years of deliberate practice.
It is common in discussions of breakthroughs to find circular reasoning about the role of genius. How do you know genius is needed to make a breakthrough? Bob discovered X and Bob was a genius! How do you know Bob was a genius? Only a genius could have discovered X!
The belief that genius is the essential driving force behind breakthroughs — the more significant the breakthrough, the more brilliant the genius must have been — is so strong and pervasive that the inventor or discoverer is simply assumed to have obviously been a genius and any contrary evidence dismissed. Richard Feynman’s claim to have had a measured IQ of only 125 often provokes incredulity. It is simply assumed that the discoverer of QED had to have been a genius. James Gleick titled his biography of Feynman Genius in spite of knowing Feynman’s claim.
So too Albert Einstein is almost always assumed to have been a remarkable genius. The author can recall a satirical practice at Caltech, a celebration of a special day for a high school teacher who allegedly flunked Einstein: “What an idiot!” But Einstein in fact was an uneven student. He made many mistakes both in school and in his published papers. He ended up at the patent office, working on his Ph.D. part time at the less prestigious University of Zurich, because his academic record was undistinguished. His erstwhile professor Minkowski was famously astounded that Einstein accomplished such amazing things. Einstein seems to have worked on his discoveries over many years, and he seems to have had the contrarian mental attitude so common among people who make major breakthroughs. He also probably would have gone nowhere had not Max Planck become intrigued with several of his papers and heavily promoted them.
Niels Bohr was infamously obscure in his talks and writings. He had very limited mathematical skills and relied first on his brother Harald, a mathematician, and later younger assistants like Werner Heisenberg. Many of his papers and writings are impenetrable. His response in Physical Review to Einstein, Podolsky, and Rosen’s 1935 paper, which is now taken to clearly identify the non-local nature of quantum mechanics in the process of questioning the foundations of quantum theory, is complete gibberish. Yet Bohr acquired such a mystique as a brilliant physicist and genius that many of these dubious writings were uncritically accepted by his students and many other physicists — even to this day.
It is clear that if breakthroughs were usually the product of a short period of time, such as six months or less, and little or no trial and error, as often implied in popular science and explicitly portrayed in much science fiction, something like real genius would be absolutely necessary to explain the breakthroughs. But this is not the case. Almost all major breakthroughs took many years of extensive trial and error. Most inventors and discoverers seem to have been of above average intelligence, like the IQ of 125 that the physicist Richard Feynman claimed, but not clearly geniuses as conventionally defined. Some were definitely geniuses as conventionally defined.
Intelligence or Social Rank?
In discussions of intelligence or genius, one needs to ask the question and be aware whether one is really talking about intelligence, whatever it may be, or social rank. Most societies rely heavily on a hierarchical military chain of command structure. This structure is found equally in government, academia, business, capitalist nations, socialist nations, and communist nations. In military chains of command there is almost always an implicit concept of a simple linear scale of social rank or status as well as specific roles. A general outranks a colonel even though the colonel may not report to the general. A four star general outranks a three star general and so forth. One of the practical reasons for this is so that in a confused situation such as a battle, it is always clear who should assume command, the ranking officer.
In many respects, in the United States, the concept of intelligence is often used as a proxy or stand-in for social rank or status. In academic scientific research, the two are often equated implicitly. An eminent scientist such as Richard Feynman must be a genius, hence the astonishment at his claim to a mere 125 IQ. England in 1776 had a very status-conscious society. Everyone was very aware of their linear rank in society. To give some idea of this, in social dances, the dances would be chosen in sequence starting with the highest-ranking woman at the dance choosing the first dance, followed by the second-ranking woman, and so forth. Somehow everyone knew exactly how each person was ranked in their community. When the United States broke away from England, this notion of rank was questioned and even rejected. Americans actually deliberately drew lots at dances as to who would choose the dances, in an explicit rejection of the English notions of status. This is not to portray the early United States as some egalitarian utopia; surely it was not. Nonetheless, from the early days, the United States tended to reject traditional notions of social status and rank, substituting notions like “the land of opportunity.”
But the United States and the modern world has social ranks and status, sometimes by necessity, sometimes not. How to justify this and perhaps also disguise the reality? Aha! Some people are smarter than other people and their position in society is due to their innate intelligence, which (surprise, surprise) is a linear numeric scale, and hard work! All animals are equal, but some animals are more equal than others.
Genius or Mental Attitude?
Clearly there is more to breakthroughs than pure trial and error. Blind trial and error could never find the solution to a complex difficult problem in even hundreds of thousands of attempts. It is clear that inventors and discoverers put a great deal of thought into what to try and what lessons to derive from both failures and successes. Many inventors and discoverers have noted down tens, even hundreds of thousands of words of analysis in their notebooks, published papers, books, and so forth. Something else is going on as well. There is often a large amount of conceptual analysis and reasoning, as well as the trial and error. Can we find real genius here? Maybe.
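The gap between blind and guided trial and error can be illustrated with a deliberately artificial toy model (entirely our construction, not a model of any historical discovery): finding a hidden eight-letter word. Blind guessing of whole words faces 26^8 (over 200 billion) possibilities; a guided search that keeps what worked and varies only what failed needs only a few dozen trials:

```python
import random

random.seed(0)
TARGET = "elliptic"  # the hidden "right answer"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def blind_trials(max_trials=100_000):
    """Blind search: guess whole 8-letter strings at random.
    Returns the trial number on success, None on failure."""
    for t in range(1, max_trials + 1):
        guess = "".join(random.choice(ALPHABET) for _ in TARGET)
        if guess == TARGET:
            return t
    return None

def guided_trials():
    """Guided search: lock in each character that works,
    vary only the positions that have not yet succeeded."""
    solved = []
    trials = 0
    for i in range(len(TARGET)):
        for ch in ALPHABET:
            trials += 1
            if ch == TARGET[i]:  # a "successful experiment"
                solved.append(ch)
                break
    return trials

print(blind_trials())   # virtually certain to print None (26**8 possibilities)
print(guided_trials())  # prints 86: at most one alphabet scan per position
```

The toy is unfair to blind search on purpose; the point is that a search space with an astronomical number of combinations becomes tractable as soon as each trial yields information that constrains the next one, which is what the notebooks full of analysis are for.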
However, the most common and best understood kind of conceptual reasoning leading to a genuine breakthrough does not particularly involve recognizable genius. Actually, one can argue the inventors and discoverers are doggedly doing something rather dumb. In many, many genuine breakthroughs the inventors or discoverers try something that seems like it ought to work over and over again, failing repeatedly. They are often following the conventional wisdom, what “everyone knows”: the motion of the planets is governed by uniform circular motion, rockets have always been made using powdered explosives, Smeaton’s coefficient (aviation) is basic textbook know-how measured accurately years ago for windmills, and so on. How smart is it to try something that fails over and over and over again for years? How much genius is truly involved in finally stopping and saying: “you know, something must be wrong; some basic assumption that seems sensible can’t be right”?
At this point, one should make a detailed list of assumptions, both explicit and implicit, and carefully examine the experimental data and theory behind each assumption. Not infrequently in history this process has revealed that something “everyone knew” was not well founded. Then, one needs to find a replacement assumption or set of assumptions. Sometimes this is done by conscious thought or yet more trial and error: what if the motion of the planets follows an ellipse, one of the few other well-known mathematical curves in 1605, when Kepler discovered the elliptical motion of Mars?
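Kepler's replacement assumption can be stated compactly in modern notation: the orbit is an ellipse with the Sun at one focus,

```latex
% Ellipse in polar form with the Sun at the focus;
% p is the semi-latus rectum, e the eccentricity
r(\theta) = \frac{p}{1 + e\cos\theta}, \qquad 0 \le e < 1
```

Uniform circular motion is recovered as the special case e = 0, which is part of why the old assumption survived so long: for most planets the eccentricity is small, so a circle is very nearly right, and only careful work on a comparatively eccentric orbit like that of Mars exposed the discrepancy.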
Sometimes the new assumption or group of assumptions seems to pop out of nowhere in a “Eureka” moment. The inventor or discoverer often cannot explain consciously how he or she figured it out. This latter case raises the possibility of some sort of genius. But is this true? Many people experience little creative leaps or solutions to problems that they cannot consciously explain. This usually takes a while. For everyday problems the lag between starting work on the problem and the leap is measured in hours or days or maybe weeks. The lag is generally longer the harder the problem. Breakthroughs involve very difficult, complex problems, much larger in scope than these everyday problems. In this case, the leap takes longer and is more dramatic when it happens. This is a reasonable theory, although there is currently no way to prove it. Are we seeing genius, exceptional intelligence, or a common subconscious mental process operating over years — the typical timescale of breakthroughs?
Is the ultimate willingness to question conventional wisdom after hundreds or thousands of failures genius or simply a contrarian mental attitude, which, of course, must be coupled with a supportive environment? If people are being burned at the stake either figuratively or literally for questioning conventional wisdom and assumptions, this mental attitude will fail and may be tantamount to suicide. In this respect, society may determine what happens and whether a breakthrough occurs.
Historically, inventors and discoverers often turn out to have been rather contrarian individuals. Even so, it often took many years of repeated failure before they seriously questioned the conventional wisdom, despite a frequently clear propensity on their part to do so. Is it correct to regard this mental attitude as genius, or as something else? In many cases, extremely intelligent people as conventionally measured have been demonstrably unwilling to take this step, even in the face of thousands of failures. In the many failed “new Manhattan Projects” of the last forty years, the best and the brightest, recruited in the supposed spirit of the Manhattan Project on the theory that genius is the driver of invention and discovery, have often been unwilling to question certain basic assumptions. Are genuine breakthroughs driven by individual genius or by a social process that is often uncomfortable both for society at large and for the participants?
The rhetoric of “thinking outside the box” and “questioning assumptions” is pervasive in modern science and modern society. The need to question assumptions is evident even from a cursory examination of the history of scientific discovery and technological invention. It is not surprising that people and institutions say they are doing this and may sincerely believe that they are. Many modern scientific and technological fields do exhibit fads and fashions that are presented as “questioning assumptions,” “thinking outside the box,” and “revolutionary new paradigms.” In fact, some efforts that have yielded few demonstrable results, such as superstring theory in theoretical physics or the War on Cancer, are notorious for rapidly changing fads and fashions of this type. On the other hand, on close examination, certain basic assumptions remain largely beyond question, such as the basic notion of superstrings or the oncogene theory of cancer. In the case of superstrings, a number of prominent physicists, including Sheldon Glashow, Roger Penrose, and Lee Smolin, have publicly questioned the theory, yet it remains dominant in practice.
The role of genius as commonly defined in genuine breakthroughs appears rather limited. Breakthroughs typically involve very large amounts of trial and error over many years; this alone can create the illusion of exceptional intelligence if the trial and error and the calendar time are overlooked. There is clearly a substantial amount of conceptual analysis and reasoning in most breakthroughs, and some kind of genius, probably very different from the popular conception, may be involved in it. Unlike common portrayals in which geniuses solve extremely difficult problems rapidly, the possible genius in breakthroughs usually operates over a period of years. While inventors and discoverers usually appear to have been above average in intelligence (Richard Feynman, for example, claimed a measured IQ of only 125), they are often not clearly geniuses as commonly defined. The remarkable flashes of insight, the “Eureka” experiences reported by many inventors and discoverers, may well be examples of relatively ordinary subconscious processes operating over an extremely long period of time: the many years usually involved in a genuine breakthrough.
The most common and best understood form of conceptual reasoning involved in many breakthroughs is not particularly mysterious, nor is it indicative of genius as commonly conceived. Developing serious doubts about the validity of commonly accepted assumptions after years of repeated failure is neither mysterious nor unusual nor a particular characteristic of genius. Indeed, many geniuses as commonly defined have difficulty taking this step even after thousands of failures. This is more indicative of a certain mental attitude, a willingness to question conventional wisdom and society. Identifying and listing assumptions, both stated and unstated, and then carefully checking the experimental and theoretical basis for each is a fairly mechanical, logical process; it does not require genius. Most people can do it, yet most people are uncomfortable doing it and often avoid it even when it is almost certainly warranted. This questioning of assumptions is also likely to fail if society at large is too resistant, unwilling to accept, even grudgingly, the results of such a systematic review of deeply held beliefs.
In the current economic difficulties, which may be due to “Peak Oil,” a dwindling supply of inexpensive oil and natural gas, there may well be an urgent and growing need for new energy sources and technologies. This has already led to calls for “new new Manhattan Projects” employing platoons of putative geniuses to develop or perfect various hoped-for technological fixes such as thorium nuclear reactors, hydrogen fuel cells, and various forms of solar power. The track record of the “new Manhattan Projects” of the last forty years is rather poor and should give everyone pause. The original Manhattan Project was certainly unusual in the success of its first full system tests, and perhaps in other ways as well. This alone argues for assuming that many full system tests, probably hundreds, will generally be needed to develop a new technology. Success is more likely with inexpensive, small-scale systems in which the many, many trials and errors usually needed for a breakthrough can be performed quickly and cheaply.
But what about genius? Many breakthroughs may be due in part not to genius as commonly defined but to powerful subconscious processes found in most people, operating over many years. Genius of some kind may be necessary. But if the contrarian mental attitude frequently essential to breakthroughs is lacking, or is simply rejected by society despite the pervasive modern rhetoric about “questioning assumptions” and “thinking outside the box,” then failure is in fact likely, an outcome that would probably be bad for almost everyone, perhaps for the entire human race. It is not inconceivable that we could experience a nuclear war over dwindling oil and natural gas supplies in the Middle East or elsewhere: certainly an irrational act, but really smart people sometimes do extremely dumb things.
© 2011 John F. McGowan
About the Author
John F. McGowan, Ph.D. solves problems by developing complex algorithms that embody advanced mathematical and logical concepts, including video compression and speech recognition technologies. He has extensive experience developing software in C, C++, Visual Basic, Mathematica, MATLAB, and many other programming languages. He is probably best known for his AVI Overview, an Internet FAQ (Frequently Asked Questions) on the Microsoft AVI (Audio Video Interleave) file format. He has worked as a contractor at NASA Ames Research Center on the research and development of image and video processing algorithms and technology. He has published articles on the origin and evolution of life, the exploration of Mars (anticipating the discovery of methane on Mars), and cheap access to space. He has a Ph.D. in physics from the University of Illinois at Urbana-Champaign and a B.S. in physics from the California Institute of Technology (Caltech). He can be reached at firstname.lastname@example.org.