The Manhattan Project, which developed the atomic bomb during World War II, is the exemplar of modern Big Science. Numerous mostly unsuccessful modern scientific megaprojects have been promoted and justified by invoking its successful example. Even sixty-six years after the first atomic bombs were dropped on Hiroshima and Nagasaki, the Manhattan Project continues to have a powerful influence on public policy, the practice of science and engineering, and public consciousness. This article argues that the Manhattan Project was atypical of technological breakthroughs, major inventions and discoveries, and atypical of the successful use of mathematics in breakthroughs. To achieve successful breakthroughs — life extension, cures for cancer, cheaper energy sources — and to reduce the costs of these attempts, other models should be followed.
The Manhattan Project is remarkable. The project appears to have begun on a small scale around 1939. It expanded dramatically after the attack on Pearl Harbor brought the United States into World War II. In less than four years, the project went from small scale experiments and calculations by theoretical physicists such as Hans Bethe, J. Robert Oppenheimer, and others, some famous and some less famous, to full scale atomic bombs that worked the first time. The Manhattan Project cost billions of 1940s dollars (tens of billions of 2011 dollars) and employed tens of thousands of scientists, engineers, and others.
The Manhattan Project coincided with, and played a major role in, the transition of science in the middle of the twentieth century. The role of governments in funding and directing scientific research expanded dramatically. Science became much more professionalized and institutionalized. The importance of formal credentials such as the Ph.D. increased. The role of so-called amateurs declined sharply. Today, the United States federal government spends about $100 billion per year on activities labeled as research and development, most of it channeled through huge government bureaucracies such as the Department of Energy (DOE), the National Aeronautics and Space Administration (NASA), the National Institutes of Health (NIH), the Defense Advanced Research Projects Agency (DARPA), the National Science Foundation (NSF), and a number of other funding agencies.
The Manhattan Project has often been taken as showing that "where there is a will, there is a way" in science: with the political will, huge funding, platoons of the best and brightest scientists, and a large helping of theoretical mathematical calculations (often computer simulations today), any problem can be solved. In the 1940s and 1950s, this could be contrasted with the shoestring-budget research of the nineteenth and early twentieth centuries. Since World War II, the Manhattan Project has frequently been explicitly and implicitly invoked in support of a series of scientific megaprojects such as the "War on Cancer", tokamaks for nuclear fusion power, inertial confinement fusion power, and many, many others. Most of these programs, including most in physics, have failed to replicate the stunning success of the Manhattan Project.
The Manhattan Project is unusual among major breakthroughs, not only in its size and scope, but more importantly in the success of the first full system tests: the Trinity test explosion and the first and only uses of two atomic bombs in war. This is quite unusual among major inventions and discoveries which usually involve large amounts of trial and error both at the component and full system level. It has provided a prominent, well-known example where scientists were able to make theoretical calculations, solve cryptic equations about neutron scattering and interactions in uranium and plutonium, and build a superweapon that worked right the first time.
This is the ideal of professional scientists and engineers often taught in schools and universities. Students learn to solve equations quickly and accurately. They are evaluated by exams and tests including the SAT, Advanced Placement exams, the GRE, qualifying exams in graduate school, and so forth. What is sometimes explicitly asserted and pervasively implied is the central importance of mathematical derivations and calculations: solve the mathematical problem, build a machine according to the results of the solution, and the new machine will work. If you are really, really good, it will work right the first time!
This picture of the role of mathematics in invention and discovery is pervasive in popular culture as well. As the author has noted in previous articles such as Symbolmania, it is common to encounter a scene in movies and television in which a scientist or mathematician solves a very difficult problem, usually making a major breakthrough, on camera in a few seconds by performing some mysterious symbolic manipulations on a blackboard (sometimes a whiteboard or a see-through sheet of plastic, but the blackboard is still the most common icon). For example, the popular and entertaining science fiction television series Eureka, which significantly presents a glamorous Hollywood/giant defense contractor public relations department version of military research and development, features several scenes of this type, with glamorous, photogenic superscientists making breakthroughs in almost every episode.
All About Breakthroughs
The Manhattan Project is an example of a technological breakthrough. Breakthroughs are somewhat difficult to define. A scientific or technological breakthrough is a bit like the infamous definition of pornography: you know it when you see it.
Genuine breakthroughs are quite rare. Breakthroughs, both explicitly claimed and implied, are reported all the time. The popular Slashdot web site carries a report of a breakthrough every few days. In the computer industry, the term breakthrough has been applied to such gimmicks as tabbed browsers and the latest hot programming language recycling techniques first invented and implemented in the 1960s (or even the 1950s). This article is concerned with genuine breakthroughs that stand the test of time.
Breakthroughs typically involve a radical increase, a “quantum leap”, in measured performance or the introduction of new capabilities. In mechanical invention, breakthroughs frequently involve the invention of a new component or a radical redesign of the system. The atomic bomb is an example of the latter in which new materials and principles replaced the traditional chemical explosives entirely. Some breakthroughs are really just the accumulation of many small incremental improvements.
Although breakthroughs are clearly quite rare, they are frequently implicitly or explicitly invoked in politics and public policy. As mentioned, many giant research programs have been funded based on hopes of a major breakthrough similar to the Manhattan Project, such as tokamaks for nuclear fusion. More generally, funding for science and technology research and education is usually justified by explicit or implicit invocation of breakthroughs. Large corporations often claim to be engaged in massive forward looking research programs. In 2008, as gas prices and profits skyrocketed, Exxon Mobil launched a high profile advertising campaign on television and the Internet portraying the oil company as a public spirited Eureka-style research lab filled with idealistic photogenic scientists laboring to produce energy breakthroughs for the United States (not very well it seems, judging from current gas prices). The pharmaceutical industry in the United States likewise frequently invokes its research and development activities to defend high prices and high profits.
Breakthroughs are very rare. Despite the current fascination with the Internet and computers, there may not have been a breakthrough comparable to the Manhattan Project since the 1960s. That said, the major advance in video compression technology that reached the market in 2003 may ultimately have profound economic effects. Despite many billions of dollars expended on everything from tokamaks to solar power, there has been very limited progress in power and propulsion technology since 1970, as current gas and energy prices demonstrate.
Breakthroughs vary a lot. There are some common patterns that recur across many cases, but nonetheless each major invention or discovery has its own unique story. It is often difficult to be sure what really happened. The enormous financial gains, professional benefits, glory, and even political power associated with a genuine breakthrough give extreme and exceptional motives for deception and dishonesty by participants. A close examination of many breakthroughs often reveals controversy, lawsuits, and other complications. Did Alexander Graham Bell really invent the telephone, or did he rip it off from another inventor, as some claim? Was Marconi a boy genius who invented the radio, or did he rip off the work of others, as more than a few have concluded? What is the truth of the many lawsuits and bitter conflicts between the Wright Brothers, their rivals, and Octave Chanute? Who invented the laser? Many other controversies may be cited. Is the official history of the Manhattan Project, taught in schools and widely accepted, true? Was it altered, whether for legitimate national security reasons, to hide the secret of how to make the atomic bomb, or to further the careers of some of the participants?
The Failure Rate of Breakthrough Research
The failure rate of attempts to make breakthroughs, sometimes referred to as "breakthrough research" as in NASA's short-lived Breakthrough Propulsion Physics (BPP) program, appears to be extremely high. Scientists (and venture capitalists) often glibly claim that an 80% failure rate is the norm in research and development, mainly when making excuses for an obvious failure. It is not clear where these widely quoted numbers come from: personal experience, a wild guess, a factoid from one's thesis advisor dutifully repeated without thought ever since, or detailed studies. The goal of much modern research is to produce publications. If success is defined as a published paper, then the success rate of research could easily match or exceed 20% (an 80% failure rate). A great deal of modern research in fact consists of measuring something to slightly greater accuracy ("measuring X to another decimal point" is the standard put-down) or calculating some theoretical quantity to slightly greater accuracy or detail. This sort of research may well have a 20% success rate (or higher, the author suspects). Genuine breakthrough research, however, may well have a failure rate exceeding 99% or, more accurately, a 100% failure rate until a key enabling technology or method is developed, something obvious only in retrospect.
In the late sixteenth and early seventeenth centuries, at the time of William Shakespeare and Galileo Galilei, the Holy Roman Emperor Rudolf II funded what must have been one of the most ambitious breakthrough research programs in human history, ultimately bankrupting his empire, leading to war with the Turks (he could not pay the tribute that kept the Turks from invading), and ending in his overthrow by his royal Habsburg siblings. Rudolf II funded research by hundreds of astrologers, alchemists, magicians, philosophers, and others, including, famously, the astronomer/astrologer Tycho Brahe and the mathematician/astronomer/astrologer Johannes Kepler. This incredible effort produced only one major scientific breakthrough: Kepler's discovery of the laws of planetary motion, including the elliptical orbits of the planets, something still taught in science classes today. This breakthrough fell far short of what Rudolf II hoped for; he was seeking the very secrets of the universe: converting base metals to gold, the elixir of life, accurate prediction of the future through astrology, and so forth. The failure rate of Rudolf's attempts easily exceeded 99%.
Rudolf II could have poured even more money and manpower into his effort, but he would simply have failed even more. In fact, the "science" of the time, meaning such things as alchemy and astrology, was simply too backward and on the wrong track to produce the breakthroughs that Rudolf hoped for.
In his book Progress in Flying Machines, Octave Chanute, the Wright Brothers' largely forgotten mentor, cataloged hundreds of failed attempts to develop powered flying machines. There are fifty-seven illustrations of different major serious attempts that Chanute studied. One can argue on this basis that the failure rate of early attempts to develop powered flight exceeded 98%. It is likely even this daunting figure is misleading. Until about 1890, steam engines and internal combustion engines lacked the combination of high power and light weight needed for flight. So it was likely nearly impossible to develop powered flight prior to 1890 without developing major advances in the engines as well, an even more daunting task.
It is generally believed that the Russian mathematician Grigoriy Perelman recently proved the Poincaré Conjecture, a major breakthrough in pure mathematics. It is difficult to be certain, as this is a recent discovery that has been checked by only a small number of expert mathematicians. However, one can say that there have been at least one hundred serious failed attempts to prove the Poincaré Conjecture since it was first proposed by Henri Poincaré, not counting the many attempts that probably remain forever locked in a mathematician's file drawer. Again, this is something on the order of a 99% failure rate. Here too, Perelman's discovery depended on key advances made by the mathematician Richard Hamilton (Perelman carefully cites Hamilton's work in his arxiv.org postings and has been quoted in some press reports as giving plenty of credit to Hamilton's work). Again, it is likely the proper "failure rate" to use for planning purposes would have been close to 100% prior to Hamilton's work. And it is only certain in retrospect that Hamilton's work was a key enabling technique.
Thus, there is good reason to think the failure rate of breakthrough research is very high, well above the 80 (sometimes 90) percent failure rate often cited by scientists when explaining a failure. Further, this failure rate should not be thought of as an independent, identically distributed random variable like the outcome of flipping a coin or throwing dice. Scientists (including the author) often argue, usually unsuccessfully, for more diversified research programs. The implicit argument is that if there is an 80% failure rate, then a research program with ten independent efforts is likely to succeed, and if there is a 99% failure rate, then a research program with 200 independent efforts is likely to succeed. However, this reasoning is probably in error in most cases. Until some enabling technology, method, or concept is developed, something usually obvious only in retrospect, the failure rate of breakthrough research is likely to be 100% or nearly so, as the cases above illustrate.
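For readers who want the arithmetic behind this diversification argument, here is a minimal Python sketch; the function and the rates are illustrative assumptions for this article's argument, not figures from any study.

```python
# Naive i.i.d. model of breakthrough research: each of n independent
# efforts fails with probability p_fail, so the chance that at least
# one succeeds is 1 - p_fail**n. All rates below are illustrative.

def p_any_success(p_fail: float, n: int) -> float:
    """Probability of at least one success in n independent efforts."""
    return 1.0 - p_fail ** n

print(p_any_success(0.80, 10))   # ~0.89: ten efforts "should" suffice
print(p_any_success(0.99, 200))  # ~0.87: 200 efforts "should" suffice
print(p_any_success(1.00, 200))  # 0.0: no enabling technology yet,
                                 # so parallel efforts buy nothing
```

The third case, not the first two, is the realistic model until an enabling advance appears; that is why diversification alone does not rescue breakthrough research.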
Trial and Error in Breakthroughs
One of the most consistent characteristics of breakthroughs, major inventions and discoveries, is large amounts of trial and error. In every case that the author has studied in sufficient detail to make the determination, large amounts of trial and error were involved, meaning anywhere from hundreds to tens of thousands of trials of some kind. The only possible exception to this pattern is some of Nikola Tesla's early inventions. Tesla claimed to have unusual visualization abilities such that he could literally see the operation of his inventions in his head without having to build them. He claimed that some of his inventions, once built, worked correctly the first time, something particularly relevant to the remarkable case of the atomic bomb. Tesla did, however, describe a large amount of mental trial and error followed by a mysterious flash of insight, while walking in a park, in which he literally saw the correct design for his invention.
In the vast majority of mechanical inventions, there have been thousands of trials at the component level and hundreds of partial trials (e.g., static tests of a rocket in which the engine is run but the rocket is not actually flown) or complete trials of the full system. It usually takes many attempts before a full system such as an atomic bomb actually works. Mechanical inventions that work right the first time are clearly the exception in the history of invention and discovery. Some possible exceptions are Tesla's alternating current motor (if Tesla is to be believed), the atomic bomb, and the first flight of the Space Shuttle. Inventions that work right the first time do appear to occur, but they are rare exceptions, outliers, flukes. They probably should not be treated as typical or likely for planning purposes or investment decisions.
The Manhattan Project as Fluke
The Manhattan Project stands revealed as a fluke, atypical of most breakthroughs. Indeed, many attempts to replicate the success of the Manhattan Project by physicists, including veterans of the Manhattan Project, have failed to make comparable breakthroughs since World War II: tokamaks, inertial confinement fusion, various particle accelerator megaprojects, etc. So too, attempts by other scientists, such as the War on Cancer, have largely failed. Despite some limited successes, much ballyhooed attempts such as the Human Genome Project have failed to produce the great benefits that the general public would like to see: cures for cancer and other diseases, for example. Like Rudolf II four hundred years ago, the public is rewarded with knowledge of scientific or scholarly interest but of little practical use, at least today.
Looking at the broad history of invention and discovery, this is not surprising. First, the failure rate of breakthrough research appears to be very high, much higher than the 80-90% failure rate frequently cited by scientists and venture capitalists. Nor do breakthroughs appear to be amenable to simply throwing money and manpower at the problems, as Rudolf II discovered. Without certain key enabling technologies, methods, or concepts, which may lie far in the future, success may simply be impossible. These key enablers are often clear only in retrospect.
Second, projects that succeed on essentially the first attempt are rare; in this, the Manhattan Project is quite unusual. Yet this success of the Manhattan Project has greatly helped fund scientific R&D megaprojects that implicitly assume that the full system will work on the first try or with only a few attempts, something that is historically rare. Full scale systems like the ITER tokamak, particle accelerators like the Large Hadron Collider (LHC), and so forth are extremely expensive, and each trial of the full system is likely to cost anywhere from millions to billions of dollars. Thus, one hundred full system trials, perhaps a more realistic planning number, implies vast costs. Not surprisingly, many scientific megaprojects, like the recent NASA Ares/Constellation program or the Superconducting Super Collider (SSC), have foundered in a sea of rising costs.
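To make this planning arithmetic concrete, here is a minimal sketch under a simple geometric model; the per-trial costs are hypothetical round numbers, not actual project budgets.

```python
# If each full system trial succeeds independently with probability
# p_success, the expected number of trials to first success is
# 1/p_success (geometric distribution), so the expected program cost
# scales as cost_per_trial / p_success. Figures are hypothetical.

def expected_cost(p_success: float, cost_per_trial: float) -> float:
    """Expected total cost to first success under a geometric model."""
    return cost_per_trial / p_success

print(expected_cost(0.01, 1e6))  # scale model trials at $1M each: $100 million
print(expected_cost(0.01, 1e9))  # full scale trials at $1B each: $100 billion
```

The expected number of trials (one hundred) is the same in both cases; only the cost per trial changes, which is the argument made below for scale models and rapid prototyping.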
Why Was the Manhattan Project Different?
It is difficult to know for sure why the Manhattan Project differed from most breakthroughs. Several possibilities exist. It may be that the official history is simply not correct: failed atomic bomb tests may have been kept classified and never mentioned, for national security or other reasons, for example, to hide the true cost of a program that is already admitted to have run far over its original budget. Given the long history of fraud and deception associated with major breakthroughs as well as secret government military programs, one should keep this possibility in mind.
The Manhattan Project involved an explosive device, a bomb, rather than an engine or a typical instrument. With most inventions, most engines, and most instruments, it is a major failure if the invention explodes. Indeed, in the history of engines and power sources, an undesired explosion is one of the common types of failure. Hence it may simply have been "easier" to develop a bomb than a typical machine such as, for example, a steam engine in the past or a fusion power source in the hoped-for future.
Finally, the Manhattan Project may have been a case where the theoretical mathematical calculations worked well, something that is often not the case. Instead of running into the intractable problems that aeronautical engineers and fluid dynamics scientists have encountered in solving the Navier-Stokes equations for aircraft and other machines, in this case the theory and calculations worked well. But this should probably be considered an exception, as it has proven to be, rather than proof that mathematics or computer simulations have finally eliminated the need for actual trial and error in the real world.
Conclusion
The Manhattan Project should be considered a fluke. In particular, in genuine breakthrough research and development, one should generally plan for many full system trials before success. Now, occasionally a new invention may work right the first time. This appears to have happened a few times, but it is generally the exception. This argues strongly in favor of using scale models or other inexpensive prototyping methods for the full system tests to minimize costs and maximize the likelihood of success. This differs from common practice in many areas such as particle physics and aerospace.
It is also unwise to plan on sophisticated mathematical methods or computer simulations providing a total or nearly total substitute for physical testing. Mathematical methods are helpful and, in some cases, such as the Manhattan Project, may prove highly successful. They are not a panacea and they rarely perform anywhere near the magical performance depicted in popular culture.
Finally, the failure rate of breakthrough research is probably much higher than the 80-90% failure rates frequently cited by scientists and venture capitalists. This failure rate should not be thought of as an independent identically distributed random variable such as the outcome of flipping a coin or throwing dice. Rather, it is usually closer to an extremely high failure rate, 100% or nearly so, until certain enabling discoveries and conditions occur, something usually clear only in retrospect.
© 2011 John F. McGowan
About the Author
John F. McGowan, Ph.D. solves problems by developing complex algorithms that embody advanced mathematical and logical concepts, including video compression and speech recognition technologies. He has extensive experience developing software in C, C++, Visual Basic, Mathematica, MATLAB, and many other programming languages. He is probably best known for his AVI Overview, an Internet FAQ (Frequently Asked Questions) on the Microsoft AVI (Audio Video Interleave) file format. He has worked as a contractor at NASA Ames Research Center involved in the research and development of image and video processing algorithms and technology. He has published articles on the origin and evolution of life, the exploration of Mars (anticipating the discovery of methane on Mars), and cheap access to space. He has a Ph.D. in physics from the University of Illinois at Urbana-Champaign and a B.S. in physics from the California Institute of Technology (Caltech). He can be reached at jmcgowan11@earthlink.net.
Some recent invocations of the Manhattan Project:
(Congressman J. Randy Forbes) Forbes Introduces New Manhattan Project to Tackle Energy Dependence, Rising Gas Prices
https://www.forbes.house.gov/News/DocumentSingle.aspx?DocumentID=94607
H.R.301 – New Manhattan Project for Energy Independence
To ensure the energy independence of the United States by promoting research, development, demonstration, and commercial application of technologies through a system of grants and prizes on the scale of the original Manhattan Project.
https://www.opencongress.org/bill/112-h301/show
Obama could kill fossil fuels overnight with a nuclear dash for thorium
If Barack Obama were to marshal America’s vast scientific and strategic resources behind a new Manhattan Project, he might reasonably hope to reinvent the global energy landscape and sketch an end to our dependence on fossil fuels within three to five years.
By Ambrose Evans-Pritchard, International Business Editor 6:55PM BST 29 Aug 2010
https://www.telegraph.co.uk/finance/comment/7970619/Obama-could-kill-fossil-fuels-overnight-with-a-nuclear-dash-for-thorium.html
A New Manhattan Project
November 12, 2009
By Thomas J. Espenshade and Alexandria Walton Radford
Inside Higher Ed
https://www.insidehighered.com/views/2009/11/12/radford
Michael McCarthy: Needed, a new Manhattan Project
The more one knows, the more one is likely to conclude that global warming is not a threat the world will master
Monday, 7 March 2005
https://www.independent.co.uk/opinion/commentators/michael-mccarthy-needed-a-new-manhattan-project-527515.html
Search Google, Yahoo, Bing, etc. for "new Manhattan Project" to see numerous additional examples.
I am very surprised by the bias of this paper, which ignores many other "big science" or "big novel technology" projects that were or are eminently successful: the Tevatron at Fermilab (the first superconducting accelerator), the LHC at CERN, the Hubble space telescope (in spite of its early problems), the new generation of wide-aperture, segmented-mirror, adaptive-optics Earth-based telescopes, the Apollo program, the (many) unmanned outer space missions handled by NASA and ESA, the WWW, the high-speed trains and railway networks in Europe, the Boeing 747… Writing only about (very few) sensational failures should be left to bad journalists.
In response to Philippe LeBrun’s comment, a few clarifications:
I do not argue in the article that Big Science should be eliminated or that it has had no successes. The latter is clearly not the case. I would classify the development of the atomic bomb, hydrogen bomb, and nuclear reactors as genuine technological breakthroughs. I would also classify the development of orbit capable rockets (1957) as a genuine technological breakthrough, leading to the successful Apollo project which I consider more of a logical continuation of the new technology than a breakthrough in itself. There are some other examples.
I do stand by my conclusion that most scientific megaprojects have either failed outright or produced knowledge of scientific or scholarly importance, but of no practical use, at least today. If success is defined as scientific publications, then I believe nearly all such projects can be defined as successes.
There is no doubt that Rudolf II, Tycho Brahe, and Kepler made a major scientific breakthrough that has substantial practical use (communications satellites, GPS navigation, etc.) four hundred years later. This was of scant use to the Renaissance Germans who suffered in the war with the Turks. In these difficult economic times, who can eat publications on dark matter or Mars rover images? (I do support research into both.)
I think almost everyone would like to see genuine scientific and technological breakthroughs that lead to practical benefits to humanity in our lifetimes.
I argue three major connected points in the article:
1. The Manhattan Project was atypical of major breakthroughs especially in the success of the first full system tests.
2. In general, theoretical mathematical calculations are not nearly as successful as they appear to have been in the Manhattan Project.
3. Partly for this reason, breakthroughs and other research and development generally require many full system tests to achieve success.
Therefore, one should plan on a large number of full system tests and use scale models or other rapid prototyping methods where possible to reduce costs and substantially increase the likelihood of success, which we would all like to see.
I am aware that in Big Science, scientists and engineers often produce arguments and equations that “bigger is better,” and that they have no choice but to build and test large scale systems. In rocketry, there are arguments that giant rockets are more efficient than small rockets. In fusion research, one encounters various scaling arguments that tokamaks and other putative fusion reactors (see the late Robert Bussard’s promotion of his polywell device for example) must be implemented at large scale to work. These arguments are often difficult to evaluate without extensive technical knowledge of the specific field.
Success, which we all desire, would almost certainly be much more likely if a fast, reliable, inexpensive way to perform small scale physical tests could be found.
Finally, this is not an argument that theoretical mathematical calculations are not helpful, only that their seeming spectacular success in the Manhattan Project is atypical.
Sincerely,
John
Rather than just trial and error by a single inventor or organization, I think it is important to emphasize that successful practical technologies usually come about after an extended evolutionary process involving multiple players.
Initially the optimum solution to a complex systems goal is simply not known, even by a group of really smart people. It is crucial that many different groups of smart people pursue many different approaches to the challenge. Gradually as flawed approaches are discarded and successful ones enhanced and cross-bred, the “fittest” solution(s) arise.
There were, for example, many aviation companies in the 20s and 30s, all offering many variations in plane designs. Learning what worked and didn't work led to the introduction in 1936 of the DC-3, the first truly practical and successful airliner. It's extremely unlikely that a single govt crash program starting in the 1920s would have created a vehicle anywhere near as optimum as the DC-3. It's also unlikely Douglas could have developed it without seeing what worked and didn't work for other planes from other firms.
We see similar processes all the time. E.g., the $20k flat 32 inch TVs of the mid 1990s gradually evolved through many tech cycles and vicious competition into the $800 48 inch screens in Walmart today.
OTOH, the basics of how to build an atomic bomb or to get to the Moon could in fact be laid out by a group of smart people in a room. By throwing them enough money, they could get an implementation built and operating. However, as we see with Apollo, the implementation is likely to be flawed and terrifically expensive.
If the money spent on Apollo, or later the Shuttle, had instead funded multiple approaches and incremental development of space transport systems, we would have had fully reusable, fast turnaround, low-cost systems for access to space a long time ago.
Successful technologies today seldom come about due to a grand “Eureka” event. Rather, they come from lots of people all having little Eurekas and little “damn, I was sure that would work” moments.
I think that the uniqueness of the Manhattan Project revolves largely around the personnel.
I. I. Rabi, consultant
Niels Bohr, consultant
Hans Bethe, Los Alamos participant
Enrico Fermi, Los Alamos participant
Richard Feynman, Los Alamos participant
Ernest Lawrence, Los Alamos participant
Glenn Seaborg (not positive where he worked)
Compton and Millikan were involved too, though I think Millikan only peripherally.
Nobel Prize winners all (and I suspect I left a few out!)
J. Robert Oppenheimer, Lab Director
(who probably would have won a Nobel had he lived long enough for his and Hartland Snyder's astrophysical predictions to be verified); a lab director of prodigious intellect, someone of whom even Bethe said "there was no one who came close".
The Hungarian crew:
John von Neumann, Leo Szilard, Eugene Wigner, Edward Teller
No project since the Manhattan Project has had such a concentration of talent. Nor does there seem to have been a project of the same magnitude with the same lack of funding difficulties (or a bastard like Groves to negotiate the stupidity of our corrupt pork barrel bureaucracy). That bureaucracy has reached the point of ridiculousness: case in point, the SSC going down to Waxahachie instead of using the Fermilab infrastructure as the booster rings. Why? Pork.
Consider also that Henry Stimson was the Secretary of War, a man of impeccable credentials and honesty, unlike the executive branch appointees and lackeys of today, who are schooled in political backroom dealing and regard a backroom deal with pride of accomplishment rather than the shame and embarrassment that such deals should engender.
If it were done today, with the morals and honesty of the current crop of bureaucrats and legislators, you can rest assured that World War II would still be going on. As Country Joe McDonald said, "There's money, good money, to be made supplying the Army with tools of the trade," and Halliburton would still be on the gravy train.
No, there was less dishonesty in the world then.
And there was the motivation: the specter of Nazi Germany getting there first.
I don't see the success as a fluke. I think the concentration of talent under the tremendous motivation of the specter of Hitler's success was unique.
Imagine the government, or even private investors, getting together: Gates and Buffett pool 25 billion into a private company dedicated to solving the engineering problems of solar power. They, or at least their foundations, end up with a company supplying all the energy needed, they go down in history as the most farsighted and biggest philanthropists in history, and they save the planet and the human race!
However, I do not believe our government can perform or fund such a task because of the intense lobbying that would go on; it would sabotage the project from the beginning.
If you can name another project that has had the same talent with the same funding that has failed then I might yield to your argument.
Otherwise, I stand by the position that the Manhattan Project was uniquely successful because of the uniqueness of its participants and its funding.
A few comments and clarifications in response to the comment by Joe Pendergast.
Just to restate, the point of the article is that the Manhattan Project was a fluke, highly exceptional, in that the first full system tests were successful. Theoretical mathematical calculations, including computer simulations today, are generally not nearly as successful as they appear to have been in the Manhattan Project. Research and development projects and programs should not assume this experience will be typical. I believe the historical record to date now strongly supports this position. It probably should not be controversial at this point. As a consequence, research and development projects in general should plan on a large number of full system tests and use scale models or other rapid prototyping methods where possible to reduce costs and increase the likelihood of success.
I also make some critical comments about the success of scientific megaprojects since World War II in making actual technological breakthroughs comparable to the Manhattan Project — with practical demonstrable benefits in our lifetime. There have clearly been some successes but not what has been hoped for based on the experience with the Manhattan Project.
At the end of World War II, it was reasonable to think that the Manhattan Project constituted a new model for research and development, one that compared well to the historical pattern of sometimes kooky inventors and discoverers in small labs or workshops operating on shoestring budgets. The Manhattan Project was not isolated at the time; the German V-2 rocket program and a few other wartime successes could also be cited. We have now accumulated sixty-six years of experience with many scientific megaprojects, and the results are disappointing. I am not sure any has replicated the stunning success of the Manhattan Project, certainly not in power and propulsion. The biggest other success would be the development of the orbit capable rocket (1957) and its follow-ons in the Apollo program and other space programs (many of which are cited by Philippe LeBrun in his comment). Unlike the Manhattan Project, the success of the rocket programs involved hundreds (that is, hundreds) of full system tests; in fact, possibly thousands, if the roughly 3,000 V-2 rocket launches during World War II are counted as part of the R&D effort, which I think they should be.
The Manhattan Project reportedly cost about $2 billion in 1940s dollars, roughly $20 billion today. This was certainly a big program. There have been comparable programs by physicists that have failed, notably tokamaks and inertial confinement fusion (at least $15 billion in the United States, not adjusting for inflation). These have involved veterans of the Manhattan Project, their students, students of their students, and so forth. Physics became, if anything, more competitive after World War II. Yet the actual practical results have declined, especially since 1970. The caliber of the people is very high, comparable to the Manhattan Project, quite possibly much better if one uses conventional measures such as IQ tests, academic test scores, SATs, and so forth. Now, that could mean that these conventional measures are not predictive of performance in breakthrough research. That may well be part of the reason. Some famous, highly successful physicists, including Michael Faraday (technically a chemist, with no formal training or mathematical knowledge), Niels Bohr, and Albert Einstein, would probably not be competitive today in the modern Big Science system.
Experimental particle physics has also consumed a total budget comparable to the Manhattan Project. Projects like the LHC, the SSC, the Tevatron, and so forth have had huge costs. The implicit justification for particle physics research, both experimental and theoretical, that is frequently invoked publicly is the quest for a unified field theory, a breakthrough comparable to the work of Michael Faraday and James Clerk Maxwell in the nineteenth century. It is frequently implied in Scientific American articles, PBS Nova specials, and so forth that the unification of the fields would eventually yield practical benefits comparable to the unification of electricity, magnetism, and light in the nineteenth century. This analogy to Faraday and Maxwell has been made repeatedly in popular science articles by particle physicists promoting their research programs. Although there have been successes of scientific or scholarly importance (the standard model of particle physics), Nobel Prizes, and so forth, these hoped-for immediate practical benefits have been just as elusive as tokamaks or inertial confinement fusion. Again, the caliber of the people in experimental and theoretical particle physics is very high, certainly comparable to the people who worked on the Manhattan Project (indeed, it includes many of those people).
One of the reasons for this disappointing performance is that the Manhattan Project was a fluke. Theoretical calculations and computer simulations are generally not nearly as reliable as they seemed to be in the Manhattan Project. Huge machines like tokamaks, inertial confinement fusion devices, and particle accelerators usually do not work the first time, or even, say, the fifth time. They require many full system trials or tests to achieve the desired performance, or even to determine that they are a dead end, that the particular design or architecture of the machine has a serious non-obvious flaw. Since these are huge machines, these full system tests are very costly and time consuming. This occurred very visibly with the Large Hadron Collider (LHC) recently, where there were very serious startup problems, including the explosion of many of the magnets.
From researching the history of research and development, this is not unusual. The people at the LHC are very talented and passionate (maybe too passionate, but that is another story). These failures are normal. Cases like the Manhattan Project, which are held up in popular culture and in science classes as exemplars, are atypical. To achieve success, one should in general plan on many more full system tests. One should not expect theoretical mathematical calculations to work as well as they seem to have done in the Manhattan Project, although that does appear to occur occasionally.
I suspect that the quite rapid rate of progress that we have seen in electronics and computers for many decades is partly due to the small size of the machines. It is quite inexpensive and fast to perform the many trials and errors needed to achieve progress. This has been true going back to the discovery of the battery, which enabled small, inexpensive, fast, safe tabletop experimentation on electricity and magnetism. In power and propulsion, many of the engines, power plants, and R&D prototypes like tokamaks are anywhere from large to gigantic.
With oil and energy costs rising, possibly due to "Peak Oil," a dwindling supply of inexpensive oil and natural gas, these issues are far from academic. We have been down this road before. During the energy crisis of 1974-1982, there were several "new Manhattan Projects" which failed (tokamaks are one of those). Today, we are again seeing calls for new "new Manhattan Projects." We know more about scientific megaprojects than we did in the 1970s. To succeed, we should not simply try to clone the Manhattan Project. That was a reasonable idea years ago, but we know more now.
We should not expect theoretical mathematical calculations to work as well as they seemed to in the Manhattan Project. They may but the odds are against it. We should plan on a large number of full system tests as well as large amounts of trial and error at the component level. This argues strongly for using scale models or other rapid prototyping methods where possible.
Some numbers
The United States Department of Energy Fiscal Year 2012 Congressional Budget Request (https://www.cfo.doe.gov/budget/12budget/content/volume4.pdf) gives the Fiscal Year 2010 Current Appropriation for High Energy Physics as $790,811,000 (page 249). Adjusting for inflation this has been the approximate level of support for high energy physics (experimental and theoretical particle physics) for many decades. This means about $23 billion in 2010 dollars has been spent since 1981 alone; the programs are much older, some dating back to the Manhattan Project era.
The same budget request gives the 2010 current appropriations for Fusion Energy Sciences, predominantly tokamaks and inertial confinement fusion, at $417,650,000 (page 211). I believe fusion funding has had more ups and downs than particle physics, but this is still typical of annual funding levels. The programs have been heavily funded since 1974 (Energy Crisis I). This means approximately $15 billion in 2010 dollars spent since 1974 on fusion research, mostly tokamaks and inertial confinement fusion devices.
Wikipedia (not a primary source) gives the total cost of the Large Hadron Collider (LHC) as approximately $9 billion as of June 2010. Only a small fraction of this is from the United States. LHC and CERN are largely funded by European nations. Thus, this amount should be added to the United States DOE budget for the total amount invested in experimental particle physics (23 + 9 = $32 billion).
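The rough arithmetic behind these totals can be reproduced with a back-of-the-envelope sketch; it treats each year's appropriation as constant in 2010 dollars, the same simplification made above.

```python
# Back-of-the-envelope totals from the appropriations cited above,
# holding each year's funding roughly constant in 2010 dollars.

hep_annual = 790_811_000     # FY2010 High Energy Physics appropriation
hep_total = hep_annual * (2010 - 1981)
print(hep_total / 1e9)       # ~22.9, i.e. "about $23 billion since 1981"

fusion_annual = 417_650_000  # FY2010 Fusion Energy Sciences appropriation
fusion_total = fusion_annual * (2010 - 1974)
print(fusion_total / 1e9)    # ~15.0, i.e. "about $15 billion since 1974"

lhc_total = 9e9              # LHC total cost per Wikipedia, June 2010
print((hep_total + lhc_total) / 1e9)  # ~31.9, i.e. "23 + 9 = $32 billion"
```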
These numbers are comparable to the budget of the Manhattan Project, although not concentrated into a four year period as the original was. In fact, the amount spent on experimental particle physics to date appears to substantially exceed the budget of the Manhattan Project. These numbers are especially significant because these are physics programs, to some degree direct descendants of the Manhattan Project, involving some of the same personnel: veterans of the Manhattan Project and physicists trained by those veterans.
I think we would all like to see these vast sums yield substantial practical benefits for humanity. That is the goal of my article.
One more number.
According to published reports, the “War on Cancer” in the United States has consumed an astonishing $200 billion since the National Cancer Act of 1971 under President Nixon (40 years). This is roughly ten (10) times the inflation adjusted budget of the Manhattan Project. The rate of spending, $5 billion/year, is just about the same as the wartime Manhattan Project.
https://www.cancer.gov/aboutnci/servingpeople/nci-budget-information/requests
This has produced over one million (1,000,000) published papers and several Nobel Prizes (Harold Varmus and J. Michael Bishop, for example). While there have been some very limited successes, the practical results have clearly been very disappointing. The caliber of the many top cancer researchers, as conventionally measured, must be extremely high.
John
I think this is naïve.
The MP succeeded because the right science existed at the right time to support the effort. That science existed from 1905 at the latest (when Einstein speculated on the possibility), an entire generation before the MP. I've read more than once that the Germans were splitting the atom before the onset of WWII, before the MP.
To say "if the right science hadn't existed then it wouldn't have succeeded" is not insightful or even useful. The unknown, and the challenge, is determining when the right science exists. Once it does, I think the 80/20 rule (or 90/10, which I see more often now) is completely realistic. But someone has to correctly guess that one critical component (or 50) to make the effort successful. No one knows what that critical component is, so no one can tell when to attempt the effort.
The proposal process for all the major funding organizations is about making a pitch that you know what that critical component is for some important problem. Without some expectation that what is proposed is viable, an idea will never get funded. That doesn’t mean it will succeed, but it does need to be a viable idea. Our whole system of funding science is based on this system of proposal evaluation.
The idea that all or even most science is "large" is foundationally ridiculous; most R&D grants are << $100k (vs. the billions mentioned here).
The idea that breakthrough science exists at all is also naïve. All science is incremental, even the MP.
In response to Chris’s comment,
I do not think the right science existed for the Manhattan Project prior to the splitting of the uranium atom by Otto Hahn, Lise Meitner, and Fritz Strassmann in 1938 at the earliest. It is true that Einstein had derived the famous E = mc^2 equation in 1905. However, one has to find specific nuclear reactions, such as the fission of U-235, to turn this famous equation into a practical power source or weapon. There had been several failed attempts prior to Hahn and Meitner, including Millikan's dubious cosmic ray studies and some equally dubious results at Lawrence's cyclotron. Many people were aware that if a proper fission or fusion reaction could be found, then a formidable new energy source might be feasible.
https://www.chemheritage.org/discover/chemistry-in-history/themes/atomic-and-nuclear-structure/hahn-meitner-strassman.aspx
Was that it? Was everything after Hahn, Meitner, and Strassmann "just engineering," as physicists like to say? Probably not. The project had to develop methods to purify U-235 and plutonium-239 in bulk quantities sufficient for a weapon and, according to the official history, find a way to compress the uranium or plutonium to unusual densities to cause an explosion. Thus several significant leaps had to occur in a few short years. The official account indicates that a large number of possibilities were tried in parallel. Lawrence tried unsuccessfully to use cyclotrons to separate U-235; another method proved successful. An enormous amount of trial and error went into developing the "implosion lens," the explosive system used to compress the plutonium core. The Trinity test supposedly tested this mechanism in the only full system test prior to Hiroshima and Nagasaki.
I contend that breakthroughs as commonly conceived have occurred in history. They are rare, and the term is grossly overused in the mass media and by scientists today. I am not sure Chris's view is that far from mine. In many genuine breakthroughs, the inventor or discoverer realizes that he or she can combine, sometimes with substantial modifications, existing technology or knowledge developed (usually) for another purpose and resolve the problem. In most cases they do not realize this until they have been immersed in the problem for many years and after many failures. Consequently, they could not have written a grant proposal up front describing the solution or correctly identifying the existing knowledge or technology that they would ultimately use.
Johannes Kepler is a pretty clear example of this. He worked for almost five years without success, eventually demonstrating that the geocentric theory of Ptolemy, the heliocentric theory of Copernicus, and the hybrid theory of Tycho Brahe were mathematically equivalent. He abruptly realized that the orbit of Mars was something like an ellipse and looked up the mathematics of the ellipse in Apollonius of Perga's Conics. Kepler pulled in a body of mathematics that no one had previously connected to planetary motion. It was this that made a dramatic leap in accuracy possible. There are many other cases like this in the history of invention and discovery. This is why I say that success depends on other "right science" or technology. In many cases, there is no reasonable way that the inventor or discoverer could have written a grant proposal in advance of the discovery identifying the right science or technology.
Grant Proposal to National Science Foundation
Revolutionary Breakthrough in Astronomy and Cosmology
J. Kepler
I will start from the crackpot ideas of the late Nicolaus Copernicus which contradict all established physics and the works of all eminent, heavily cited experts including Aristotle, Thomas Aquinas, and many others. Even though Copernicus’s theory fails to predict the motions of Mars and the planets as well as the well-established, extensively studied theory of K. Ptolemy, I have a mystical feeling that Copernicus is right. I expect to show that my idea is right in eight days using the time-honored method of epicycles and the uniform circular motion of the planets which we all know is correct.
🙂 Kepler actually bet that he could make sense of Tycho Brahe’s data on the motions of Mars in eight days using the heliocentric theory. He was wrong. He struggled for almost five years and ended up with a very different answer from what he expected at the start. He probably would have failed if someone else had not already worked out the mathematics of the ellipse, which neither he nor anyone else saw had any connection to the motion of the planets.
Nor is this quirky saga unusual in the history of invention and discovery. Modern proposal driven research is not well suited to these types of breakthroughs since it is rarely possible to map out the breakthrough in advance. A crazy Emperor like Rudolf II might fund such a “proposal” (he essentially did although there does not appear to have been a formal proposal), but it is doubtful the present day National Science Foundation or similar funding agencies would. Hence, we are probably seeing fewer genuine breakthroughs and more “incremental” research. When science or technology reaches a plateau in performance, a genuine breakthrough is often needed; “incremental” approaches often fail.
Just to restate the main point of the article. The Manhattan Project was unusual in that the first full system tests (Trinity, Hiroshima, and Nagasaki) worked right the first time. This is unusual and in general research and development projects should not assume this will happen. I note that many gigantic projects in aerospace and particle physics frequently seem to assume this. I argue that to achieve success R&D projects should assume that many full system tests will be needed and plan accordingly. This argues strongly for the use of scale models or other rapid prototyping methods where feasible.
Sincerely,
John
One point of clarification. The Nobel Prize-winning physicist Robert Millikan (the long-time president of Caltech as well) argued that cosmic rays were due to nuclear fusion of hydrogen into helium and other heavier elements in deep space, releasing vast energies. Millikan's cosmic ray research occupied much of his career in the 1920s and 1930s. Here is a Time article from 1932:
https://www.time.com/time/magazine/article/0,9171,743063,00.html
Millikan was almost certainly wrong about this and his theory is not accepted today and largely forgotten. If cosmic rays had been due to fusion in deep space this would have proven the existence of powerful fusion reactions that occurred not inside the center of stars but under conditions in deep space that could much more easily have been replicated in a laboratory or machine on Earth — a working fusion power source. Physicists were certainly aware that nuclear reactions might exist which could form the basis for a new power source or weapon well before the Manhattan Project. A number including Leo Szilard, the original mastermind of the Manhattan Project according to many accounts, were actively looking for such reactions. However, they had not found any genuine candidates until the work of Otto Hahn, Lise Meitner, and Fritz Strassmann.
Prior to World War II, most research in physics was funded by individuals and companies involved in the electric power and lighting industries, such as the Rockefeller family in the United States who funded a great deal of physics research through the Rockefeller Foundation.
There are a number of serious questions about Millikan’s research including the questionable cosmic ray research and his data selection for the oil drop experiment demonstrating that electric charge is quantized (and also measuring the charge on the electron) that won him the Nobel Prize. One of his graduate students claimed that Millikan excluded him from proper credit and authorship on the oil drop experiment as well. Similarly, Lise Meitner did not share the Nobel Prize in 1945 (awarded 1944, received 1945) with Otto Hahn for discovering nuclear fission. She received very little credit until relatively recently. It is common to encounter such controversies over who did what, what actually happened, what was actually discovered, and so forth in scientific and technological breakthroughs.
https://nobelprize.org/nobel_prizes/chemistry/laureates/1944/
In the context of my article, this means that the entire research and development of the atomic bomb took place between February 1939, when Lise Meitner and Otto Frisch's paper identifying the reaction as the fission of uranium into barium was published in Nature, and the Trinity test on July 16, 1945 (about 6 1/2 years total, with most of the work between 1942 and 1945). This is an unusually short time to go from a tabletop laboratory experiment to a full scale functioning invention, let alone one that works right the first time. It is not typical, and attempts to replicate the unusual experience of the Manhattan Project have mostly failed.
Lise Meitner and O. R. Frisch, Disintegration of Uranium by Neutrons: a New Type of Nuclear Reaction, Nature, Volume 143, Number 3615, 239–240 (16 February 1939)
I would not call the Manhattan Project a fluke of modeling and engineering; I would call it a fluke of the underlying physics. As it turns out, if you put a sufficient quantity of sufficiently pure U-235 together, you get a lot of energy released. It doesn't take much subtlety of design to make it happen: you get enough stuff, bring it together suddenly, and things blow up. A plutonium implosion is harder, but the record shows that conservative designs work pretty robustly.
The process of making sufficiently pure U-235 and plutonium is an entirely different matter. As the Richard Rhodes book makes clear, getting the manufacturing complex to work was much more difficult than the bomb design, involved huge amounts of trial and error, and meant exploring many blind alleys. The factory machines did not work as planned the first time out, and years of refinement were required to get them to behave according to calculation. If there was a fluke, beyond the random chance that the physics of U-235 and plutonium makes it easy to make a bomb, it was getting the factories running by 1945.
The hydrogen bomb may be a bigger fluke. From what is in Richard Rhodes's books, it is probably a bigger fluke that it worked as well as it did the first time out, given what are apparently vastly less robust physical margins.
If success is judged solely on how well a project delivers against its initial projections then any project pursuing uncertain ends is bound to measure out badly. When the MP started nobody knew if a weapon would result at all, much less one with the specific characteristics it did. In the case of Tokamaks and Inertial Confinement Fusion we don’t have cheap energy, but nobody knows if there is any configuration that will ever yield useful results. The fact that we don’t know just makes it a real research project, instead of a project to get the next decimal point.
In terms of first time success and matching models, I think there are elements of the space program that easily beat out the MP. I think a successful landing on and return from the Moon in 1969 was easily a less certain first time success, with minimal prior end-to-end test history, than the early fission explosions. Every Saturn V launch succeeded, too.
Building a large, complex system out of immature technology under “ultraquality” constraints (to use Rechtin’s term) is undoubtedly a huge challenge. It has been done. The MP is not the only case, and probably not the most impressive. You can’t model your way to success, but neither can you succeed solely through empirical trials. The successful examples always blend the two in clever ways. Breakthrough systems that are long-term revolutions are virtually always so because they become part of a process of coupled change in operations as well as technology. It is true this is often an unappreciated aspect when projects start up.
The truly amazing thing about the Manhattan Project is that it succeeded twice: gun-type U-235 and implosion Pu-239.
So two completely different detonation structures, two completely different materials to isolate and purify, and both worked the first time.
Mik
Even the discovery of fission was preceded by the discovery of the nucleus and of the nuclear force, a force of nature many orders of magnitude stronger than any earlier understood force (i.e., gravity and electromagnetism).
H.G. Wells understood this and predicted in print powerful nuclear weapons in the 1920s, or at least the early 1930s, well before the specifics of uranium fission were discovered.
So the MP was a success because it exploited NEW pieces of science – the nuclear force and specifically the odd behavior of U-235.
I'm an engineer. Give me a new piece of science and I can exploit it in ways not seen before. Easy peasy.
Discovering NEW science is the key! Building electric cars or windmills, for two examples, involves the exploitation or application of NO new science. Batteries are limited by electrochemistry, and we don't have new reactants. Wind is wind. These are the sort of boondoggles that politicians regularly sell to the taxpayers, as you present, using the MP as the promised role model.
Long after the debate… I also view large projects critically, especially as they tend to grow around mainstream "hypes," where it seems easy to get funding, instead of out of the personal research interests of individual researchers. Thus, I generally agree with this article.
However, I do not consider the Manhattan Project or the Apollo Project great scientific breakthroughs; both were "mere" engineering. The scientific breakthroughs making the MP possible were, e.g., the discovery of the nucleus by Rutherford, of the neutron by Chadwick, and, above all, the development of Quantum Mechanics (by a large number of scientists), which could make sense of the experimental findings. I suspect none of this research was done with the aim of producing something useful, but just because the researchers had fun figuring things out.
Also, I do not consider nuclear weapons great progress, and fortunately they have not yet impacted our lives a lot (still, they could at any time). The transistor (and with it the computer, IT, and the Internet) is the unparalleled breakthrough of the 20th century. Actually, it was a poor century regarding new science brought to practical use, compared to the 19th century (steam engines, internal combustion engines, electric motors; steam ships, trains, cars, planes; telegraphs, telephones, broadcasts; photographs, phonographs, electric light).
Concerning engineering: I saw a television show in 2000 asking ladies aged 100 about the inventions with the greatest public benefit, and after some chatting they agreed on the washing machine.
I sense a disrespect for fundamental research in your article. Michael Faraday answered a question from the Royal Minister of Finance about what electricity was good for with: "You will raise a tax on it in 200 years."
There is a possibility that you have not explored. There are many reports about the bad results in the Manhattan Project. It is known that in November 1944 only a few grams of plutonium had been obtained. The methods of transforming uranium 235 into plutonium were very slow and unreliable, not to mention the biggest problem: the necessary detonators, which had to be electronic, a technology that was not available. A few months later, nothing had changed, of course. They were at a stalemate, until providence appeared in the form of a German submarine, to be more exact the U-234, with a load of 500 kilograms of U-235 and the famous infrared detonators designed by the German doctor von Ardenne. The declaration of the cargo in the German submarine's hold remains under national security secrecy, which suggests it must be something very damaging, even after more than 75 years. Reality is very stubborn. Everything tells us that the US bought the German atomic secrets in order to assemble two plutonium bombs, and that the Hiroshima bomb was a gift from the Germans. Remember that all the research of the Manhattan Project went into building a plutonium device and that the Trinity test was plutonium. Where did a uranium bomb come from, with a completely different technology, with a critical mass system different from plutonium's? That was the first and last time the US used a uranium bomb… a word to the wise is enough.