The Manhattan Project, which developed the atomic bomb during World War II, is the exemplar of modern Big Science. Numerous, mostly unsuccessful, modern scientific megaprojects have been promoted and justified by invoking the successful example of the Manhattan Project. Even sixty-six years after the first atomic bombs were dropped on Hiroshima and Nagasaki, the Manhattan Project continues to have a powerful influence on public policy, the practice of science and engineering, and public consciousness. This article argues that the Manhattan Project was atypical of technological breakthroughs, major inventions and discoveries, and atypical of the successful use of mathematics in breakthroughs. To achieve successful breakthroughs — life extension, cures for cancer, cheaper energy sources — and to reduce the costs of these attempts, other models should be followed.
The Manhattan Project is remarkable. The project appears to have begun on a small scale about 1939. It expanded dramatically after the attack on Pearl Harbor, which brought the United States into World War II. In less than four years, the project went from small-scale experiments and calculations by theoretical physicists such as Hans Bethe and J. Robert Oppenheimer, some famous and some less famous, to full-scale atomic bombs that worked the first time. The Manhattan Project cost billions of dollars in 1940s dollars, tens of billions of 2011 dollars, and employed tens of thousands of scientists, engineers, and others.
The Manhattan Project coincided with and played a major role in the transition of science in the middle of the twentieth century. The role of governments in funding and directing scientific research expanded dramatically. Science became much more professionalized and institutionalized. The importance of formal credentials such as the Ph.D. increased. The role of so-called amateurs declined sharply. Today, the United States federal government spends about $100 billion per year on activities labeled as research and development, most channeled through huge government bureaucracies such as the Department of Energy (DOE), National Aeronautics and Space Administration (NASA), the National Institutes of Health (NIH), the Defense Advanced Research Projects Agency (DARPA), the National Science Foundation (NSF), and a number of other funding agencies.
The Manhattan Project has often been taken as showing “where there is a will, there is a way” in science. With the political will, huge funding, platoons of the best and brightest scientists, and a large helping of theoretical mathematical calculations (often computer simulations today), any problem can be solved. In the 1940s and 1950s, this could be contrasted with the shoestring-budget research of the nineteenth and early twentieth centuries. Since World War II, the Manhattan Project has frequently been explicitly and implicitly invoked in support of a series of scientific megaprojects such as the “War on Cancer”, tokamaks for nuclear fusion power, inertial confinement fusion power, and many, many others. Most of these programs, including most in physics, have failed to replicate the stunning success of the Manhattan Project.
The Manhattan Project is unusual among major breakthroughs, not only in its size and scope, but more importantly in the success of the first full system tests: the Trinity test explosion and the first and only wartime uses of two atomic bombs. This is quite unusual among major inventions and discoveries, which usually involve large amounts of trial and error at both the component and full-system level. It has provided a prominent, well-known example in which scientists were able to make theoretical calculations, solve cryptic equations about neutron scattering and interactions in uranium and plutonium, and build a superweapon that worked right the first time.
This is the ideal of professional scientists and engineers often taught in schools and universities. Students learn to solve equations quickly and accurately. They are evaluated by exams and tests including the SAT, Advanced Placement exams, GRE exams, qualifying exams in graduate school, and so forth. What is sometimes explicitly asserted and pervasively implied is the central importance of mathematical derivations and calculations: solve the mathematical problem, build a machine according to the results of the solution, and the new machine will work. If you are really, really good, it will work right the first time!
This picture of the role of mathematics in invention and discovery is pervasive in popular culture as well. As the author has noted in previous articles such as Symbolmania, it is common to encounter a scene in movies and television in which a scientist or mathematician solves a very difficult problem and makes a major breakthrough, usually on camera in a few seconds, by performing some mysterious symbolic manipulations on a blackboard (sometimes a whiteboard or a see-through sheet of plastic, but the blackboard is still the most common icon). For example, the popular and entertaining science fiction television series Eureka, which significantly presents a glamorous Hollywood/giant defense contractor public relations department version of military research and development, features several scenes of this type, with glamorous photogenic superscientists making breakthroughs in almost every episode.
All About Breakthroughs
The Manhattan Project is an example of a technological breakthrough. Breakthroughs are somewhat difficult to define. A scientific or technological breakthrough is a bit like the infamous definition of pornography: you know it when you see it.
Genuine breakthroughs are quite rare. Breakthroughs, both explicitly claimed and implied, are reported all the time. The popular Slashdot web site carries a report of a breakthrough every few days. In the computer industry, the term breakthrough has been applied to such gimmicks as tabbed browsers and the latest hot programming language recycling techniques first invented and implemented in the 1960’s (or even the 1950’s). This article is concerned with genuine breakthroughs that stand the test of time.
Breakthroughs typically involve a radical increase, a “quantum leap”, in measured performance or the introduction of new capabilities. In mechanical invention, breakthroughs frequently involve the invention of a new component or a radical redesign of the system. The atomic bomb is an example of the latter in which new materials and principles replaced the traditional chemical explosives entirely. Some breakthroughs are really just the accumulation of many small incremental improvements.
Although breakthroughs are clearly quite rare, they are frequently implicitly or explicitly invoked in politics and public policy. As mentioned, many giant research programs have been funded based on hopes of a major breakthrough similar to the Manhattan Project, such as tokamaks for nuclear fusion. More generally, funding for science and technology research and education is usually justified by explicit or implicit invocation of breakthroughs. Large corporations often claim to be engaged in massive forward looking research programs. In 2008, as gas prices and profits skyrocketed, Exxon Mobil launched a high profile advertising campaign on television and the Internet portraying the oil company as a public spirited Eureka-style research lab filled with idealistic photogenic scientists laboring to produce energy breakthroughs for the United States (not very well it seems, judging from current gas prices). The pharmaceutical industry in the United States likewise frequently invokes its research and development activities to defend high prices and high profits.
Breakthroughs are very rare. Despite the current fascination with the Internet and computers, there may not have been a breakthrough comparable to the Manhattan Project since the 1960s. That said, the major advance in video compression technology that reached the market in 2003 may ultimately have profound economic effects. Despite many billions of dollars expended on everything from tokamaks to solar power, there has been very limited progress in power and propulsion technology since 1970, as current gas and energy prices demonstrate.
Breakthroughs vary a lot. There are some common patterns that recur across many cases. But nonetheless each major invention or discovery has its own unique story. It is often difficult to be sure what really happened. The enormous financial gains, professional benefits, glory, and even political power associated with a genuine breakthrough give extreme and exceptional motives for deception and dishonesty by participants. A close examination of many breakthroughs often reveals controversy, lawsuits, and other complications. Did Alexander Graham Bell really invent the telephone, or did he rip it off from another inventor, as some claim? Was Marconi a boy genius who invented the radio, or did he rip off the work of others, as more than a few have concluded? What is the truth of the many lawsuits and bitter conflicts between the Wright Brothers, their rivals, and Octave Chanute? Who invented the laser? Many other controversies may be cited. Is the official history of the Manhattan Project, taught in schools and widely accepted, actually true? Was it altered for legitimate national security reasons, to hide the secret of how to make the atomic bomb, or to further the careers of some of the participants?
The Failure Rate of Breakthrough Research
The failure rate of attempts to make breakthroughs, sometimes referred to as “breakthrough research” as in NASA’s short-lived Breakthrough Propulsion Physics (BPP) program, appears to be extremely high. Scientists (and venture capitalists) often glibly claim that an 80% failure rate is the norm in research and development, mainly when making excuses for an obvious failure. It is not clear where these widely quoted numbers come from: personal experience, a wild guess, a factoid from the scientist’s thesis advisor dutifully repeated without thought ever since, or detailed studies. The goal of much modern research is to produce publications. If success is defined as a published paper, well then, the success rate of research could easily match or exceed 20% (an 80% failure rate). A great deal of modern research in fact consists of measuring something to slightly greater accuracy (“measuring X to another decimal point” is the standard put-down) or calculating some theoretical quantity to slightly greater accuracy or detail. This sort of research may well have a 20% success rate (or higher, the author suspects). Genuine breakthrough research, however, may well have a failure rate exceeding 99% or indeed, more accurately, have a 100% failure rate until a key enabling technology or method is developed, something obvious only in retrospect.
In the late sixteenth and early seventeenth centuries, at the time of William Shakespeare and Galileo Galilei, the Holy Roman Emperor Rudolf II funded what must have been one of the most ambitious breakthrough research programs in human history, ultimately bankrupting his empire, leading to war with the Turks (he could not pay the tribute that kept the Turks from invading), and ending in his overthrow by his royal Habsburg siblings. Rudolf II funded research by hundreds of astrologers, alchemists, magicians, philosophers, and others, including famously the astronomer/astrologer Tycho Brahe and the mathematician/astronomer/astrologer Johannes Kepler. This incredible effort produced only one major scientific breakthrough: Kepler’s discovery of the laws of planetary motion, including the elliptical orbits of the planets, something still taught in science classes today. This breakthrough fell far short of what Rudolf II hoped for; he was seeking the very secrets of the universe: converting base metals to gold, the elixir of life, accurate prediction of the future through astrology, and so forth. The failure rate of Rudolf’s attempts easily exceeded 99%.
Rudolf II could have poured even more money and manpower into his effort, but he would simply have failed even more. In fact, the “science” of the time, meaning such things as alchemy and astrology, was simply too backward and on the wrong track to produce the breakthroughs that Rudolf hoped for.
In his book Progress in Flying Machines, Octave Chanute, the Wright Brothers’ largely forgotten mentor, cataloged hundreds of failed attempts to develop powered flying machines. There are fifty-seven illustrations of different major serious attempts that Chanute studied. One can argue on this basis that the failure rate of early attempts to develop powered flight exceeded 98%. It is likely even this daunting figure is misleading. Until about 1890, steam engines and internal combustion engines lacked the combination of high power and light weight needed for flight. So it is likely it was nearly impossible to develop powered flight prior to 1890 without also making major advances in the engines, an even more daunting task.
It is generally believed that the Russian mathematician Grigoriy Perelman recently proved the Poincare Conjecture, a major breakthrough in pure mathematics. It is difficult to be certain, as this is a recent discovery that has only been checked by a small number of expert mathematicians. However, what one can say is that there have been on the order of at least one hundred serious failed attempts to prove the Poincare Conjecture since it was first proposed by Henri Poincare. This does not even count the many attempts that probably remained forever locked in a mathematician’s file drawer. Again, this is something on the order of a 99% failure rate. Here too, Perelman’s discovery depended on key advances made by the mathematician Richard Hamilton (Perelman carefully cites Hamilton’s work in his arxiv.org postings and has been quoted in some press reports as giving plenty of credit to Hamilton’s work). Again, it is likely the proper “failure rate” to use for planning purposes would have been close to 100% prior to Hamilton’s work. And it is only certain in retrospect that Hamilton’s work was a key enabling technique.
Thus, there is good reason to think the failure rate of breakthrough research is very high, well above the 80 (sometimes 90) percent failure rate often cited by scientists when explaining a failure. Further, this failure rate should not be thought of as an independent identically distributed probability like the outcome of flipping a coin or throwing some dice. Scientists (including the author) often argue, usually unsuccessfully, for more diversified research programs. The implicit argument is that if there is an 80% failure rate, then a research program with ten independent efforts is likely to succeed. If there is a 99% failure rate, then a research program with 200 independent efforts is likely to succeed. However, this is probably in error in most cases. Until some enabling technology, method, or concept is developed, something usually obvious only in retrospect, the failure rate of breakthrough research is likely to be 100% or nearly so as the cases above illustrate.
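The portfolio argument above can be made concrete with a short calculation. The sketch below is purely illustrative of the independence assumption, not a model of any real research program: it computes the chance that at least one of n independent attempts succeeds, using the failure rates quoted in the text.

```python
# Probability that at least one of n independent attempts succeeds,
# given a fixed per-attempt failure rate. This is the implicit argument
# for diversified research portfolios discussed in the text.

def chance_of_success(failure_rate: float, attempts: int) -> float:
    """P(at least one success) = 1 - P(all attempts fail independently)."""
    return 1.0 - failure_rate ** attempts

# An 80% failure rate with ten independent efforts:
print(round(chance_of_success(0.80, 10), 3))   # about 0.893

# A 99% failure rate with two hundred independent efforts:
print(round(chance_of_success(0.99, 200), 3))  # about 0.866

# But if failures are perfectly correlated -- e.g. every attempt is
# blocked by the same missing enabling technology -- parallel attempts
# do not help: the portfolio failure rate stays at 100% until the
# enabler exists, which is the caveat the text raises.
```

The calculation shows why the independence assumption matters so much: the optimistic ~90% portfolio success figure collapses entirely when all attempts share a common blocker.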
Trial and Error in Breakthroughs
One of the most consistent characteristics of breakthroughs, major inventions and discoveries, is large amounts of trial and error. In all cases that the author has studied in sufficient detail to determine whether large amounts of trial and error were involved, there was a large amount of trial and error, meaning anywhere from hundreds to tens of thousands of trials of some kind. The only possible exception to this pattern is some of Nikola Tesla’s early inventions. Tesla claimed to have unusual visualization abilities such that he could literally see the operation of his inventions in his head without having to build them. He claimed to have built some of his inventions and they worked correctly the first time, something particularly relevant to the remarkable case of the atomic bomb. Tesla did describe a large amount of mental trial and error followed by a mysterious flash of insight while walking in a park in which he literally saw the correct design for his invention.
In the vast majority of mechanical inventions, there have been thousands of trials at the component level and hundreds of partial (e.g. static tests of a rocket in which the engine is run but the rocket is not actually flown) or complete trials of the full system. It usually takes many attempts before a full system such as an atomic bomb actually works. Mechanical inventions that work right the first time are clearly the exception in the history of invention and discovery. Some possible exceptions are Tesla’s alternating current motor (if Tesla is to be believed), the atomic bomb, and the first flight of the Space Shuttle. Inventions that work right the first time do appear to occur, but they are rare exceptions, outliers, flukes. They probably should not be treated as typical or likely for planning purposes or investment decisions.
The Manhattan Project as Fluke
The Manhattan Project stands revealed as a fluke, atypical of most breakthroughs. Indeed, many attempts to replicate the success of the Manhattan Project by physicists, including veterans of the Manhattan Project, have failed to make comparable breakthroughs since World War II: tokamaks, inertial confinement fusion, various particle accelerator megaprojects, and so on. So too, attempts by other scientists, such as the War on Cancer, have largely failed. Despite some limited successes, much-ballyhooed attempts such as the Human Genome Project have failed to produce the great benefits that the general public would like to see: cures for cancer and other diseases, for example. Like Rudolf II four hundred years ago, the public is rewarded with knowledge of scientific or scholarly interest but of little practical use, at least today.
Looking at the broad history of invention and discovery, this is not surprising. First, the failure rate of breakthrough research appears to be very high, much higher than the 80-90% failure rate frequently cited by scientists and venture capitalists. Nor do breakthroughs appear to be amenable to simply throwing money and manpower at the problems, as Rudolf II discovered. Without certain key enabling technologies, methods, or concepts, which may lie far in the future, success may simply be impossible. These key enablers are often clear only in retrospect.
Secondly, projects that succeed on essentially the first attempt are rare; in this, the Manhattan Project is quite unusual. Yet the success of the Manhattan Project has greatly helped fund scientific R&D megaprojects that implicitly assume the full system will work on the first try or with only a few attempts, something that is historically rare. Full-scale systems like the ITER tokamak, particle accelerators like the Large Hadron Collider (LHC), and so forth are extremely expensive, and each trial of the full system is likely to cost anywhere from millions to billions of dollars. Thus, one hundred full system trials, perhaps a more realistic planning number, implies vast costs. Not surprisingly, many scientific megaprojects, like the recent NASA Ares/Constellation program or the Superconducting Super Collider (SSC), have foundered in a sea of rising costs.
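The planning arithmetic here can be sketched in a few lines. Assuming, purely for illustration, a fixed cost per full-system trial and a constant per-trial success probability (so the number of trials until the first success follows a geometric distribution), the expected program cost is easy to compute; the probability and cost figures below are hypothetical, not historical data for any project named above.

```python
# Back-of-the-envelope expected cost for a program whose full-system
# trials repeat until the first success. Figures are illustrative
# assumptions, not data about ITER, the LHC, or any other project.

def expected_program_cost(success_prob: float, cost_per_trial: float) -> float:
    """Expected total cost when trials repeat until the first success.

    The trial count follows a geometric distribution, so the expected
    number of trials is 1 / success_prob.
    """
    expected_trials = 1.0 / success_prob
    return expected_trials * cost_per_trial

# A 1% chance that each full-system trial works, at $500 million per
# trial, implies an expected program cost of $50 billion:
print(expected_program_cost(0.01, 500e6))
```

Even generous per-trial success probabilities lead to staggering totals when each trial costs hundreds of millions of dollars, which is the article's point about planning around many full-system trials rather than one.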
Why Was the Manhattan Project Different?
It is difficult to know for sure why the Manhattan Project differed from most breakthroughs. Several possibilities exist. It may be that the official history is simply not correct: failed atomic bomb tests may have been kept classified and never mentioned, for national security or other reasons, for example, to hide the true cost of a program that is already admitted to have run far over its original budget. Given the long history of fraud and deception associated with major breakthroughs as well as secret government military programs, one should keep this possibility in mind.
The Manhattan Project involved an explosive device, a bomb, rather than an engine or typical instrument. With most inventions, most engines, and most instruments, it is a major failure if the invention explodes. Indeed, in the history of engines and power sources, an undesired explosion is one of the common types of failure. Hence it may simply have been “easier” to develop a bomb than a typical machine such as, for example, a steam engine in the past or a fusion power source in the hoped for future.
Finally, the Manhattan Project may have been a case where the theoretical mathematical calculations worked well, something that is often not the case. Instead of running into the intractable problems that aeronautical engineers and fluid dynamics scientists have run into solving the Navier-Stokes equations for aircraft and other machines, in this case, the theory and calculations worked well. But this should probably be considered an exception as it has proven to be, rather than proof that mathematics or computer simulations have finally eliminated the need for actual trial and error in the real world.
The Manhattan Project should be considered a fluke. In particular, in genuine breakthrough research and development, one should generally plan for many full system trials before success. Now, occasionally a new invention may work right the first time. This appears to have happened a few times, but it is generally the exception. This argues strongly in favor of using scale models or other inexpensive prototyping methods for the full system tests to minimize costs and maximize the likelihood of success. This differs from common practice in many areas such as particle physics and aerospace.
It is also unwise to plan on sophisticated mathematical methods or computer simulations providing a total or nearly total substitute for physical testing. Mathematical methods are helpful and, in some cases, such as the Manhattan Project, may prove highly successful. They are not a panacea and they rarely perform anywhere near the magical performance depicted in popular culture.
Finally, the failure rate of breakthrough research is probably much higher than the 80-90% failure rates frequently cited by scientists and venture capitalists. This failure rate should not be thought of as an independent identically distributed random variable such as the outcome of flipping a coin or throwing dice. Rather, it is usually closer to an extremely high failure rate, 100% or nearly so, until certain enabling discoveries and conditions occur, something usually clear only in retrospect.
© 2011 John F. McGowan
About the Author
John F. McGowan, Ph.D. solves problems by developing complex algorithms that embody advanced mathematical and logical concepts, including video compression and speech recognition technologies. He has extensive experience developing software in C, C++, Visual Basic, Mathematica, MATLAB, and many other programming languages. He is probably best known for his AVI Overview, an Internet FAQ (Frequently Asked Questions) on the Microsoft AVI (Audio Video Interleave) file format. He has worked as a contractor at NASA Ames Research Center involved in the research and development of image and video processing algorithms and technology. He has published articles on the origin and evolution of life, the exploration of Mars (anticipating the discovery of methane on Mars), and cheap access to space. He has a Ph.D. in physics from the University of Illinois at Urbana-Champaign and a B.S. in physics from the California Institute of Technology (Caltech). He can be reached at firstname.lastname@example.org.