People often assume that theoretical mathematical calculations and computer simulations will work well enough that machines or experiments will succeed the first time, or at most within a few tries (or reach similar levels of performance in other contexts). This belief is often implicit in the promotion of scientific and engineering megaprojects such as the NASA Ares/Constellation program or CERN’s Large Hadron Collider (LHC). One reason for this belief is the apparent success of theoretical mathematical calculations and primitive computer simulations during the Manhattan Project, which produced the first atomic bombs in World War II, as discussed in the previous article “The Manhattan Project Considered as a Fluke”. The belief appears in many contexts. In the debate over the Comprehensive Test Ban Treaty (CTBT), which bans all nuclear tests on Earth, proponents (sincerely or not) argued that sophisticated computer simulations could substitute for actual tests of nuclear weapons in the United States nuclear arsenal. After the terrorist attacks of September 11, 2001, federal, state, and local government officials apparently decided to dispose of most of the wreckage of the World Trade Center and rely on computer simulations to determine the cause of the three major building collapses that occurred (instead of physically reconstructing the buildings, as has been done in other major accident investigations). Space entrepreneur Elon Musk apparently believed he could achieve a functioning orbital rocket on the first attempt; he did not succeed until the fourth attempt, even though he was recreating a known, if extremely challenging, technology. This article discusses the many reasons why theoretical mathematical calculations and computer simulations often fail, especially in frontier engineering and science where many unknowns abound.

This article does not argue that theoretical mathematical calculations and computer simulations are unhelpful or should not be performed. That is clearly not the case. Occasionally, as in the Manhattan Project, theoretical mathematical calculations and computer simulations have worked right the first time, even in frontier areas of engineering and science. In frontier areas such as major inventions and scientific discoveries, however, this appears to be the exception rather than the rule. Research and development programs and projects that implicitly or explicitly assume that theoretical mathematical calculations and computer simulations will work right the first time, or even within the first few attempts, are likely to be disappointed and may fail for this reason. Rather, in general, we should plan on combining theoretical mathematical calculations and computer simulations with a substantial number of physical tests or trials. There is evidence from the history of major inventions, such as the orbit-capable rocket, that one should plan on hundreds, even thousands, of full system tests, and many more partial system tests and component tests. This argues strongly for using scale models or other rapid prototyping methods where feasible — or focusing research and development efforts on small-scale machines, as in the computer/electronics industry today, again where feasible.

**Let Me Count the Ways**

There are many reasons why theoretical mathematical calculations and computer simulations fail. Indeed, given the sheer number, it is somewhat remarkable that they do work at all. This section discusses most of the major reasons for failure.

**Simple Error**

Scientists, engineers, and computer programmers are human beings. Even the best of the best make mistakes. This is worth some elaboration. Most scientists and engineers today are professionally trained in schools and universities until their twenties (sometimes even longer). Much of this training involves solving problems in classes, homework, and exams that typically take anywhere from seconds to, in rare cases, several full days (say eight hours per day). In the vast, vast majority of cases, these problems have been solved many, many times before by other students; it is often possible to look up, learn, and practice the appropriate method to solve the problem — something not possible with genuine frontier science and engineering problems.

An “order of magnitude” is a fancy way of saying a “factor of ten”. Two orders of magnitude is a fancy way of saying a factor of 100. Three orders of magnitude is a fancy way of saying a factor of 1000. And so on. Even the most difficult problems solved in an advanced graduate level science or engineering course are typically orders of magnitude simpler than the problems in “real life,” especially in frontier science and engineering. At a top science and engineering university such as MIT, Caltech or (fill in your alma mater here), scoring 99% (1 error in 100) is phenomenal performance. Yet a frontier engineering or science problem can easily involve thousands, even millions, of steps. The Russian mathematician Grigoriy Perelman’s arxiv.org postings, which are generally thought to prove the Poincaré Conjecture, are hundreds of pages in length; Perelman left many steps out as “obvious”. A modern computer simulation, such as the highly classified nuclear weapon simulation codes involved in the Comprehensive Test Ban Treaty debate, can involve millions of lines of computer code. Even a single subtle error can invalidate a theoretical mathematical proof or calculation or a computer simulation. On complex “real world” problems, even the very best are *likely* to make mistakes because of the size and complexity of the problems. Computer programmers spend most of their time debugging their programs.

In computer simulations, consider a sophisticated numerical simulation program with one million (1,000,000) lines of code written by a team of top programmers with an error rate of one error per 1000 lines of code. If a computer program were implemented as a physical machine like a traditional mechanical clock (a very complex and sophisticated machine in its heyday), each line of code would be at least one moving part (gear, switch, lever, etc.). A computer program with one million lines of code is far more complex than a traditional pre-computer automobile or a nautical chronometer used to measure longitude (John Harrison’s first successful nautical chronometers had a few thousand parts). The Space Shuttle Main Engine (SSME), one of the most powerful and sophisticated engines in the world, has approximately 50,000 parts.

By one error in 1000 lines of code, we mean the programmer can write 1000 lines of code with only one error (bug) before any testing or debugging. This is truly phenomenal performance, but let us assume it for the sake of argument. This simulation program will then contain approximately 1000 errors! In general, it will take extensive debugging, testing, and comparison with real world data and trials to find and fix these 1000 errors. A subtle error may evade detection despite very extensive efforts.
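The arithmetic above can be sketched in a few lines of Python. The 99% catch rate used below is an assumed figure for illustration only; it is not from the article:

```python
# Back-of-the-envelope bug count for a one-million-line simulation,
# assuming the (phenomenal) rate of one error per 1000 lines of code.
lines_of_code = 1_000_000
bugs_per_line = 1 / 1000        # one bug per 1000 lines, before any testing

expected_bugs = lines_of_code * bugs_per_line
print(expected_bugs)            # 1000.0

# Even if testing and debugging later catch 99% of these bugs (an
# assumed, optimistic figure), roughly ten would still remain.
assumed_catch_rate = 0.99
remaining_bugs = expected_bugs * (1 - assumed_catch_rate)
print(round(remaining_bugs))    # 10
```

The point of the sketch is the scale: even heroic defect rates leave a large simulation with hundreds of latent errors before testing begins.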

The modern professional training in science and engineering produces some seemingly phenomenal individuals, such as the winners of the International Math Olympiad (IMO). Most of these people perform extremely well in school and university classes, homework, exams, and so forth. Witnessed in an academic setting, their performance resembles the magical mathematics depicted in popular culture, for example in television shows such as Numb3rs or Eureka (which depict the same kind of performance on very complex real world problems). Nonetheless they are *likely* to make errors on extremely complex real world problems, something they are not used to. They can become puzzled or, worse, angry when this occurs. *It couldn’t be me; it must be those idiots in the next office — how did they ever graduate from MIT, Caltech, or (fill in your alma mater here)?*

Many real world systems such as aircraft, rockets, particle accelerators, and the human body are complex integrated systems in which a very large number (thousands to millions) of parts must work together within very tight tolerances for the entire system to work correctly (fly, collide beams, stay alive and healthy). Even *one* undetected error can be fatal. This is beyond the performance level of even the very best students in school where the problems are generally simpler and the solutions are known; the proper methods can be studied and practiced prior to taking a test or exam. This near perfect performance in complex real world systems is usually achieved by an iterative process of trial and error in which some errors are found the hard way (the rocket blew up on the launch pad, the accelerator magnets exploded, the patient died 🙁 ) and eliminated. The final example is not a snide comment; the author’s father passed away in 2008 participating in yet another unsuccessful clinical trial of a new cancer treatment.

A great deal of modern research consists of measuring some quantity to slightly greater accuracy (known disparagingly as “measuring X to another decimal point”) or computing some theoretical quantity to slightly greater accuracy. Despite the popular image of graduate students like the mathematician John Nash in A Beautiful Mind or the physicist Albert Einstein *part-time* at the University of Zurich performing path-breaking research, graduate students are frequently assigned or manipulated into projects of this type in modern research, even at top research universities like MIT, Caltech, or (fill in your alma mater here). These projects often involve repeating something that has been done many times before, only just a little better (hopefully). Although the error rates are noticeably higher than in academic coursework, they are still far from representative of true frontier or breakthrough research and development. Hence, many graduate students, post-doctoral research associates, all the way up to full professors who have built a career measuring X to another decimal point have negligible experience with the truly high error rates frequently encountered in frontier research and development.

For example, in measuring X to another decimal point, one is often reusing complex simulations or analysis software that has been developed incrementally over many years, even decades (some programs now date back to the 1960s and 1970s). Thus much of the testing and debugging is largely done, and one encounters far fewer errors. If one ventures into a frontier or breakthrough area, one may need to develop a new computer program *from scratch*, where the probability of serious errors at first is likely to be near one (1.0, unity), for the reasons discussed above, even for truly exceptional individuals and teams.

It is worth understanding that popular science materials such as PBS/Nova specials, Scientific American articles, and Congressional testimony by leading scientists rarely describe the research as “measuring X to another decimal point” or anything similar. Popular science materials usually focus on the quest for some “Holy Grail” such as unifying the fields in particle physics, a cure for cancer in biology and medicine, cheap access to space in aerospace, and so forth. The quest for the “Holy Grail” captures the imagination and is generally the public reason for funding the research. The Holy Grails have also proven exceedingly difficult to achieve and not necessarily amenable to throwing money and manpower at the problems. And often *exceptional* intelligence as conventionally measured has proven inadequate to find an answer. The “War on Cancer,” for example, has consumed about $200 billion in the United States alone since 1971, when President Nixon signed the National Cancer Act, a level of inflation-adjusted funding comparable to the wartime Manhattan Project continued for forty years to date.

I should add that measuring X to another decimal point can be quite important. The astronomer/astrologer Tycho Brahe successfully measured the position of the planet Mars in its path through the Zodiac to another decimal point. While it may have been possible to infer the laws of planetary motion correctly prior to this measurement, there is no question that this improved measurement was essential for Johannes Kepler to discover the correct laws of planetary motion, a major scientific breakthrough that now has practical use in the computation of the orbits of communication satellites, GPS navigation, Earth observing satellites, and so forth. Nonetheless, I will take the position that measuring X to another decimal place has gone to an unhealthy extreme in modern research. It fills curricula vitae, produces millions of published papers, rarely leads to genuine breakthroughs and practical advances, and provides poor, misleading training for students in genuine breakthroughs, amongst other things by giving a misleading sense of the actual error rates that occur in real breakthroughs.

**Most Theoretical Calculations and Simulations Are Approximations**

Most theoretical calculations and simulations are approximations. A few grams of matter have on the order of 10^23 (ten raised to the twenty-third power) atoms or molecules. This is about one hundred billion trillion atoms or molecules. By definition, one mole of carbon-12 is 12 grams of carbon. One mole of a substance contains Avogadro’s number, 6.02214179(30)×10^23, of atoms or molecules. Even small machines, e.g. computer chips, weigh grams. Automobiles weigh thousands of kilograms (a kilogram is 1000 grams). Airplanes and rockets weigh many thousands of kilograms. Nuclear power plants probably weigh millions of kilograms. Each atom or molecule has, in general, several protons and neutrons in the atomic nucleus or nuclei, and several electrons in complex quantum mechanical “orbitals”. Even with thousands of supercomputers, it is impossible to simulate matter at this level of detail. Thus, on close examination, the vast majority of theoretical mathematical calculations and computer simulations are making significant approximations. Sometimes these approximations introduce serious errors — sometimes subtle errors that are very difficult or impossible to detect in advance. The errors may become obvious only after a difference between the theory and experiment (real data, physical trials) is detected (e.g. the rocket blew up on the launch pad).
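The scale gap can be made concrete with a short back-of-the-envelope calculation in Python. The figure of 10^12 simulated particles is a round number assumed for illustration, not a claim about any specific code:

```python
import math

# Molecules in a few grams of water versus particles in a very
# large simulation.
AVOGADRO = 6.02214179e23      # molecules per mole
MOLAR_MASS_WATER = 18.0       # grams per mole (approximate)

grams = 3.0
molecules = grams / MOLAR_MASS_WATER * AVOGADRO
print(f"{molecules:.2e}")     # about 1.00e+23 molecules

# Suppose a heroic molecular-dynamics run tracks on the order of
# 1e12 particles (an assumed round figure for illustration).
simulated = 1e12
shortfall = math.log10(molecules / simulated)
print(round(shortfall))       # 11
```

Even under that generous assumption, the simulation falls short of atom-by-atom fidelity by roughly eleven orders of magnitude, which is why approximations are unavoidable.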

**Computers and Symbolic Math Cannot Reason Conceptually**

The Webster’s New World Dictionary (Third College Edition) defines a concept as (page 288):

*An idea or thought, especially a generalized idea of a thing or class of things; a notion.*

Most human beings think almost entirely conceptually. The vast majority of human beings rarely if ever use abstract mathematical symbols to think, and then only in specialized contexts. A “cat” is a concept: a special kind of “animal,” another concept, distinguishable from, for example, a “dog,” yet another concept. Many things that scientists and engineers deal with are concepts: particle accelerators, rockets, airplanes, electrons, cancer, and so forth. In only a few special cases, such as simple geometrical forms like the perfect sphere, can we express the concept in purely symbolic mathematical terms that can be programmed on a computer.

Most major inventions or scientific discoveries started out as a concept in the inventor or discoverer’s mind: James Watt’s separate condenser for his steam engine, Kepler’s hazy notion of an elliptical orbit, Faraday’s mental picture of pressure and motion in the mysterious aether to explain electricity and magnetism, eccentric (to put it mildly) rocket pioneer Jack Parsons’s concept of combining a smooth fuel such as asphalt with a powdered oxidizer such as potassium perchlorate to overcome the severe problems with powdered explosives, and so forth. To this day, we cannot express most concepts in mathematical symbols that can be programmed on a computer. In some cases, we can simulate a specific instance of the concept on a computer or through traditional pencil and paper derivations or calculations.

Johannes Kepler was able to find a mathematical formula that corresponded to his hazy concept of an elliptical orbit in Apollonius of Perga’s Conics. He was lucky that the mathematics of the ellipse had already been worked out and corresponded closely to the motion of the planets. James Clerk Maxwell, after many years of effort, was able to find a set of differential equations, Maxwell’s Equations, that corresponded to Faraday’s mental concepts of pressure and motion in the aether. Even in cases where specific mathematics can be found (in a book, for example) or developed for a concept (from a detailed mechanical model as Maxwell did with Faraday’s ideas, for example), we still cannot represent the process of the transformation from the mental concept to the mathematics either in formal symbolic mathematics or in a computer program.

Computers and symbolic mathematics cannot reason conceptually. Most of the research in artificial intelligence (machine learning, pattern recognition, etc.) has been an attempt to find a way to do this. Most of this research tries to replicate the process by which human beings identify classes and their relationships (concepts) and correctly assign objects (cats, dogs, speech sounds, etc.) to these classes. So far, we have been unable to either understand or duplicate what human beings do, in many everyday cases effortlessly. A conceptual error is often beyond the ability of either formal symbolic mathematics or computer simulations to detect or identify; it can show up in real world tests very dramatically as in a rocket exploding on launch or a miracle cancer drug failing in clinical trials.

Conceptual reasoning is poorly understood. It is not clear how to teach it, if it can be taught, or how to measure it, or even if it can be measured. Very basic questions about its nature are unresolved. Conceptual reasoning appears to play a major role in many major inventions and scientific discoveries, so-called breakthroughs. In this context, it is particularly mysterious. Many inventors and discoverers describe a flash of insight, usually following many years of failure and frequently occurring on a break such as a recreational walk, in which a key concept or even the entire answer occurs to them. These are reports, anecdotal data. We cannot be absolutely sure they are true, just like reports of UFO sightings (which are actually more common than breakthroughs). To be clear, there is a possible motive for inventors or discoverers to make up the story of a “Eureka” experience: they may, in fact, have stolen the work from someone else and need to explain a sudden leap forward in another way. There are inventions and discoveries where there are serious questions about what really happened and who did what, and where the work may well have been stolen. Even so, reports of “Eureka” experiences are extremely common in the history of invention and discovery, and they resemble the less dramatic flashes of insight or creative leaps reported and experienced by many people (including the author).

These conceptual skills or phenomena may help explain why some inventors and discoverers do not seem as intelligent as one might expect (and certainly not as intelligent as inventors and discoverers are depicted in popular culture), and why platoons of the best and brightest scientists, as conventionally measured, have so far failed in such heavily funded efforts as the War on Cancer.

**The Math is Intractable**

In some cases, we believe that we have the correct math and physical theory to solve a problem, but the math has proven intractable (so far) either through traditional pencil and paper calculations and symbolic manipulations or through numerical simulation on a computer. The Navier-Stokes equations are thought to govern fluids (liquids and gases such as water and air). Nonetheless, the general solution of the Navier-Stokes equations has proven intractable to date. This is one of the reasons the Navier-Stokes equations are included in the Clay Mathematics Institute’s Millennium Problems. Sometimes it may not even be clear that the math is intractable, resulting in reliance on spurious theoretical mathematical calculations or computer simulations.

**New Physics**

This article is concerned with the use of mathematics and computer simulations for real world problems, not proving theorems in pure abstract mathematics. In this context, inevitably, one is trying to predict or simulate the actual physics of the real world. How do mechanical devices, electricity, magnetism, gravity, and so forth work in the real world? That is the question. If the theoretical mathematical calculations or computer simulations are based on incorrect physics, they will probably fail. In some cases, the fundamental physics may be known but the implications, the theory derived from the fundamental laws of physics, are somehow in error. In other cases, truly new physics may be involved.

One tends to assume that new physics would stand out, that it would be obvious that it is present. Yet this is not always the case. Human beings tend to be conservative. We do not embrace new ideas quickly or easily, especially as we get older. Small discrepancies and anomalies can occur and accumulate for long periods of time without the presence of new physics being recognized. This occurred, for example, with the Ptolemaic theories of the solar system. These theories had predictive power, but they kept making errors. It took about a century of work by Nicolaus Copernicus, Galileo Galilei, Tycho Brahe, Johannes Kepler, Isaac Newton, and many others to overturn this theory and develop a superior, much more accurate theory. It did not happen overnight for solid scientific reasons — Copernicus’s original heliocentric theory was measurably inferior to the prevailing Ptolemaic theory, contrary to the impression given in science classes. Galileo’s extreme arrogance and grossly inaccurate theory of the tides did not help either.

Electricity and magnetism had been known for thousands of years, both large scale phenomena like lightning and small scale effects such as static electricity or lodestones. Nonetheless, without the battery and the ability to control and study electricity and magnetism in a laboratory, it was almost impossible to make progress or discover the central role electricity and magnetism play in chemistry and matter. New physics can be hiding in plain sight and causing anomalies that are persistently attributed to selection bias, instrument error, or other mundane causes.

**Conclusion**

There are many reasons that theoretical mathematical calculations or computer simulations may fail, especially in frontier science and engineering where many unknowns abound. The major reasons include:

- simple error (almost certain to occur on large, complex projects)
- most theoretical mathematical calculations and simulations are approximations
- symbolic math and computers cannot reason conceptually and may not detect conceptual errors
- the math may be intractable
- new physics.

In the history of invention and discovery, it is rare to find theoretical mathematical calculations or computer simulations working right the first time as seemingly occurred in the Manhattan Project which invented the first atomic bombs during World War II. Indeed, it often takes many full system tests or trials to achieve success and to refine the theoretical mathematical calculations or simulations to the point where they are reliable. Even after many full system tests or trials, theoretical mathematical calculations or simulations may still have significant flaws, known or unknown.

This argues for planning on many full system tests of some type in research and development. In turn, this argues strongly in favor of focusing research and development efforts on small-scale machines, or using scale models or other rapid prototyping methods where feasible. This does not mean that theoretical mathematical calculations and computer simulations should not be used. They can be helpful and, in some cases, such as the Manhattan Project, may prove highly successful. However, one should not plan on the exceptional level of success apparently seen in the Manhattan Project or a few other cases.

In these difficult economic times, almost everyone would like to see more immediate tangible benefits from our vast ongoing investments in research and development. If current rising oil and energy prices reflect “Peak Oil,” a dwindling supply of inexpensive oil and natural gas, then we have an urgent and growing need for new and improved energy technologies. With increasing economic problems and several bitter wars, it is easy to succumb to fear or greed. Yet it is in these difficult times that we need to think most clearly and calmly about what we are doing to achieve success.

© 2011 John F. McGowan

### About the Author

*John F. McGowan, Ph.D.* solves problems by developing complex algorithms that embody advanced mathematical and logical concepts, including video compression and speech recognition technologies. He has extensive experience developing software in C, C++, Visual Basic, Mathematica, MATLAB, and many other programming languages. He is probably best known for his AVI Overview, an Internet FAQ (Frequently Asked Questions) on the Microsoft AVI (Audio Video Interleave) file format. He has worked as a contractor at NASA Ames Research Center involved in the research and development of image and video processing algorithms and technology. He has published articles on the origin and evolution of life, the exploration of Mars (anticipating the discovery of methane on Mars), and cheap access to space. He has a Ph.D. in physics from the University of Illinois at Urbana-Champaign and a B.S. in physics from the California Institute of Technology (Caltech). He can be reached at jmcgowan11@earthlink.net.

### Comments

Software models often get things right for the wrong reasons.

A fellow software developer once told me the story of an attempt to build an automatic satellite recon system which could spot tanks.

Worked fine in development, then failed totally on its first full scale test.

On investigation, it was discovered that the majority of aerial “training” photos shown to the system which contained a tank were taken on cloudy days, while the training photos which didn’t contain a tank were taken on sunny days.

So what they had actually built was a satellite system capable of spotting whether the day was cloudy.

It is not simply the world of research where overreliance on computer models occurs – in the very practical world of engineering, your observations are also true. I lecture a class each fall to senior undergraduate engineering students on exactly this topic and also provide similar guidance to people inside my own organization. The problems of not having sufficient data, having to make assumptions, having incorrect code, having correct code that doesn’t behave the way you expect occur very regularly.

Also, engineers have fallen victim to what I term the “curse of precision”, which you call measuring X to another decimal point. Rerunning the calculations because the answer changes in the nth decimal place takes time, and is usually performed regardless of whether the change in the result actually affects anything.

As for the reference to the Manhattan Project – I think this is misleading. The Manhattan Project had the very best minds on the problem, and in that era they understood the “concepts” of what they were doing far better than many people today who lean too heavily on computer models. Further, they did do a fair bit of experimentation and spent an enormous amount of money in a very short time.

Further, the Manhattan calculations were not “right”, just “right enough”. IIRC, the yield was underestimated by about half.

I think many engineering and other models and calcs are used because they are (apparently) “right enough”. Until they aren’t.

A few points from my own experience:

(1) Researchers who rely on published data do so at their own peril; error limits are rarely published on empirical measurements. Often the last digit (or two) in a published number is insignificant, but the reader is not told so.

(2) Every computer model has hidden assumptions and approximations concerning nearly all the interactions of the variables. The most subtle of errors in these can completely change the character of a problem.

(3) Computation introduces errors. Numbers get rounded off, errors and uncertainties are propagated (carried through) to the next calculation, cyclic feedback mechanisms involving three or more variables are almost impossible to identify, let alone characterize.

(4) Humans have an inescapable tendency to ignore information that either makes no immediate sense or that contradicts expectations, resulting in user-introduced bias in the ‘tweaking’ of the model.
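The round-off issue in point (3) can be seen in a few lines of Python. This is a generic illustration of IEEE 754 double-precision behavior, not taken from the original comment:

```python
# In IEEE 754 double precision, 0.1 has no exact binary representation,
# so summing it ten times does not give exactly 1.0.
total = 0.0
for _ in range(10):
    total += 0.1

print(total == 1.0)        # False
print(abs(total - 1.0))    # on the order of 1e-16

# The error here is tiny, but each such error is carried into the next
# calculation; over millions of steps in a long simulation, the
# accumulated error can become significant.
```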

While your statement of limitations of computers today is accurate in practice, it’s becoming less accurate as time goes on. Computers are getting better at using logic to simulate reason. Computers are getting better at using statistics to create hunches. Feedback neural nets can be trained to perform jobs with a good set of input and expected outputs. Genetic algorithms are good at searching multidimensional problem spaces for solutions. These techniques and others have been used to routinely get practical solutions to otherwise intractable problems. They’re still not common.

A bug per thousand lines of code is pretty much the norm, though I’ve encountered much worse. With effort, a bug per two thousand lines is achievable. The Space Shuttle main code, about 500,000 lines, is thought to have achieved a bug per 20,000 lines — measured using a statistical approach. That’s still flying with 25 bugs. This was achieved by spending $20,000 per line of code. It’s not practical.

There are something like 30 computer models for weather. None of them are good for predicting weather past 36 hours. None of them are consistent in predicting where a hurricane might go next.

Perhaps we need to back up and talk about exactly what a breakthrough is. I’m not sure that it exists. Engineers (I’m an engineer) talk about how many problems need to be solved to achieve some goal. There might be, for example, “only” about a thousand problems. And it’s not that any one problem is especially easy or hard.

When one writes software, one thinks about what libraries there are to build on, and therefore how many problems are left. Once you’re in the process of designing a solution to a problem, you might discover that there are other problems to be solved. As long as none of the individual problems are too difficult, even quite complex problems are solvable. The end result, oh maybe the first spreadsheet, might turn out to be a breakthrough. But none of the individual problems were difficult.

One of Newton’s quotes is “standing on the shoulders of giants” — presumably Galileo and Kepler among them. Newton invented calculus, but so did Leibniz at about the same time, and I note that we use the Leibniz notation today. Many advances were bound to happen soon.

While Navy rockets were blowing up on the pad, Von Braun’s Army team was ready, and the US got into the space race. While the Soviet N-1 rocket failed in all four of its launches, Von Braun’s Saturn V got Apollo to the Moon, arguably without a failure. But Von Braun’s approach was “build a little, test a little”. It was an evolutionary approach to engineering management. And he stood on the shoulders of Goddard, who himself started small and tested frequently.

The SpaceX team didn’t succeed on their first flight. But they did succeed on a budget that is an order of magnitude smaller than similar efforts. Burt Rutan also had setbacks, but also made his achievements on unbelievably small budgets.

I'm no biologist, but the War on Cancer seems different. Cancer is not one disease but many, and we are hardly even at the point of "I'd recognize it if I saw it." As far as I know, we're just starting to use quantum mechanics to simulate how many basic biological chemical reactions really work, and it matters. We're not at the point where we know how cells work in detail, so it's no real surprise that we don't know how they can fail. I doubt that there is a realistic estimate of the number of issues to understand. What we can hope for is to gain some understanding of a critical cancer function that differs from normal operation. Then either that function could be interrupted, or a cancer cell could be recognized at the cellular level. Perhaps a virus or bacterium that infects only cancer cells could work. One promising cancer cure turned out to cure only testicular cancer. A good idea with partial success. I'd call it a breakthrough.

It turns out that there are few adequate technical managers. Early in my career, people told me that managing computer programmers was like "herding cats." I've found instead that computer programmers and other engineers are generally people who can be relied on to work on the problems assigned, and even to ask for help when they need it, with minimal guidance and oversight. Hardly herding cats. The lessons of the good technical managers (von Braun, Kelly Johnson, Igor Sikorsky, Seymour Cray, Fred Brooks, and others) are mostly ignored. Even the pretty good technical managers are ignored. There are lots of sociological reasons for this, but we are not doomed as a species to be at the complete mercy of politics forever.

On the other hand, the Strategic Defense Initiative ("Star Wars") was never going to work, if the goal was a shield against incoming nuclear missiles. Some projects have the smell of death about them.

In response to Stephen’s comment:

For the sake of argument, I postulated the existence of a programmer or team of programmers who could write code with an error rate of one bug per 1,000 lines of code before any testing or debugging whatsoever. Such teams are extremely rare; most programs achieve an error rate of one bug per 1,000 lines only after extensive testing and debugging.

Yet even with such a team of exceptional programmers, there will still be errors, both at the start and probably even after extensive testing and debugging. Indeed, despite massive effort, the Space Shuttle is probably still flying with some bugs.
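One way to see why some bugs always survive is a toy model in which each test-and-fix cycle removes a fixed fraction of the remaining defects. The 50% find rate below is my own illustrative assumption, not an empirical figure:

```python
# Toy model: why testing reduces but never eliminates defects.
# Assumes each test-and-fix cycle finds 50% of remaining bugs
# (an illustrative assumption, not a measured rate).

def residual_bugs(initial_bugs, cycles, find_rate=0.5):
    """Expected bugs remaining after repeated test-and-fix cycles."""
    bugs = float(initial_bugs)
    for _ in range(cycles):
        bugs -= bugs * find_rate
    return bugs

# 500 initial bugs (one per 1,000 lines in a 500,000-line program):
print(residual_bugs(500, 4))   # about 31 bugs remain after 4 cycles
print(residual_bugs(500, 10))  # a fraction of a bug expected even after 10
```

The decay is geometric, so the expected bug count approaches zero but never reaches it in finite cycles, which is why "probably still flying with some bugs" is the realistic assumption.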

Sincerely,

John

With respect to Stephen’s second longer comment:

It is my view that breakthroughs as commonly conceived do occur. They are quite rare, and the term is heavily overused in the mass media and by scientists today. In mechanical invention, a breakthrough is typically a conceptual leap that, in practical terms, takes the form of a new component or a radical redesign of the system.

In the case of rocketry, the true breakthroughs that I have been able to identify took place prior to 1950. A rocket scientist may correct me, but by 1957 rockets were being developed “incrementally” to reach the power and performance level needed for orbit, deep space, and the Moon.

Some of the major leaps in rocketry between 1900 and 1950:

1. Abandoning powdered explosives such as traditional black powder or more modern "gunpowders" as the fuel. For hundreds of years rockets had used black powder, but it was essentially impossible to adequately control the release of energy with powders. This meant either liquid-propellant rockets (Robert Goddard, the German VfR, etc.) or the precursors of modern solid-fuel rockets (Jack Parsons and the "Suicide Squad"). (radical redesign of system)

2. Development of the torpedo layout that we now take for granted, with the engine at the back, the tanks in the middle, and the payload at the tip. (radical redesign of system)

3. The multiple stage concept to achieve orbit. (radical redesign of system)

4. Introduction of turbopumps to achieve the high rates of fuel and oxidizer consumption that liquid-propellant rockets need for long range and ultimately orbit. (introduction of new component)

I may be missing some other leaps. "Build a little, test a little" is important for revolutionary leaps as well. Particular system designs and architectures often reach plateaus in performance. With many tests, the plateaus become clear, and the inventors or discoverers are led to reevaluate the design, the underlying assumptions, and so forth. Then they may make a leap, a true breakthrough. Something is learned from the failures.

Perhaps especially relevant to the "War on Cancer": it is necessary to realize that repeated failures and a lack of progress indicate that something fundamental is wrong, often that some underlying, widely accepted assumption or assumptions are incorrect.

Just to restate the main point of the article: in general, it is not realistic to assume that theoretical mathematical calculations and/or computer simulations will work right the first time, or even within the first few tries. There are several reasons, listed in the article, why theoretical mathematical calculations and computer simulations often fail, especially in frontier or breakthrough research. For this reason, one should plan on a large number of full system tests or trials, as well as partial system tests and probably many more component tests.

Sincerely,

John

Distinguishing between errors that are merely inconvenient boo-boos and those that point to a conceptual flaw is much to be desired. The latter offer a chance for learning and advancement, and should be welcomed when discovered.