[Image: nuclear bomb exploding over Nagasaki]

Bioethics Forum Essay

Oppenheimer’s Nuclear Value Judgment Wasn’t the First

Christopher Nolan’s film “Oppenheimer,” which opens in theaters on July 21, highlights a value judgment that the Manhattan Project scientists had to make before Trinity, the test of the first atomic bomb. They had to calculate the odds that the “gadget” would not initiate a catastrophic chain reaction that could ignite Earth’s atmosphere. In so doing, they implicitly invoked two dimensions of risk, probability and magnitude: in this case the probability of an error was low, but the magnitude of an error was immense. They also needed to factor in another dimension of risk: the suspicion that Nazi Germany’s formidable scientific establishment was on course toward its own atomic device, which would have been a catastrophe for human civilization of a different sort.

But that fateful judgment in Los Alamos, New Mexico, in 1945 wasn’t the first such determination that Project scientists needed to make. The first took place in 1942 as part of the work in the top-secret Metallurgical Laboratory on the University of Chicago campus, where other great physicists, including Arthur Compton, Enrico Fermi, and Leo Szilard, were building the first nuclear reactor. Known as the Chicago Pile, the reactor’s immediate purpose was to generate enough plutonium to fuel a bomb. As they tested when the material would reach criticality, they controlled the reaction by inserting and withdrawing cadmium control rods. If their calculations had been incorrect, a large part of Chicago would have ceased to exist.

We don’t know whether the experience with the Pile figured into the judgment of Robert Oppenheimer and his team, but scientists make value determinations all the time. In a classic 1953 paper, “The Scientist Qua Scientist Makes Value Judgments,” the philosopher of science Richard Rudner (who was my dissertation director) used the Pile as an example of the way scientists inevitably make moral choices. He argued that accepting any scientific hypothesis requires that the evidence in its favor be strong enough for the risks to be morally acceptable. To the atomic scientists in charge of the Pile and the Trinity test, the hypotheses were strong enough for them to accept the risks of an error. In other words, they made a value judgment.

Rudner argued that moral choices in science are so routine that we don’t notice them. Take an example from drug research. Imagine that, of two potential medications, one would be far more beneficial to future patients if it worked, but an experiment with that one would pose a higher risk to human volunteers. The experimenters would have to find the risk-benefit balance favorable enough to offer the drug to human subjects. Though far less spectacular than the Manhattan Project experiments, the consequences of an error would be significant, placing people in a study at risk. Unlike the Pile and Trinity, in which neither the people of Chicago nor the inhabitants of the whole planet were asked for their permission, the people who agree to be in a drug study must give their informed consent.
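The risk-benefit reasoning above can be sketched as a toy expected-value comparison, weighing each outcome by its probability and its magnitude. This is a minimal illustration only: the function names and all the numbers are hypothetical, and real research ethics review involves far more than a single inequality.

```python
# Toy sketch of probability-times-magnitude reasoning.
# All values are hypothetical, in arbitrary units of benefit/harm.

def expected_value(probability: float, magnitude: float) -> float:
    """Weight an outcome's size by its likelihood."""
    return probability * magnitude

def favorable(p_benefit: float, benefit: float,
              p_harm: float, harm: float) -> bool:
    """Crude test: expected benefit must exceed expected harm."""
    return expected_value(p_benefit, benefit) > expected_value(p_harm, harm)

# A very safe trial of a modestly beneficial drug:
# expected benefit 0.6 * 10 = 6 vs. expected harm 0.01 * 50 = 0.5
print(favorable(0.6, 10, 0.01, 50))    # prints True

# A riskier trial of a far more beneficial drug:
# expected benefit 0.6 * 100 = 60 vs. expected harm 0.2 * 500 = 100
print(favorable(0.6, 100, 0.2, 500))   # prints False
```

Even this crude sketch shows Rudner’s point: deciding what threshold counts as “favorable enough” to expose volunteers to risk is itself a value judgment, not a calculation.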

Human civilization is now engaged in a different sort of experiment, one that Oppenheimer predicted. This poorly managed experiment rests on the theory that human control of nuclear weapons can succeed indefinitely. Thus we are sleepwalking into disaster: the consequences of being wrong are too great, and the probabilities are not in our favor.

Jonathan D. Moreno is the David and Lyn Silfen Professor of Ethics at the University of Pennsylvania and a Hastings Center Fellow. (@pennprof)

Comments
  1. Thank you to Jonathan Moreno for this very insightful analysis of probability and risk. What must be stressed from his final point is that the time frame for assessing risks is not limited to the immediate consequences of Trinity or Fermi’s Pile in Chicago. Possible risks need to be understood as never-ending and enduring following the dawn of the nuclear age. If this were a drug trial the analogy might be Phase IV surveillance – in perpetuity. But even that is of insufficient duration. Instead the risk is more like germ-line interventions that span generations, climate change, or AI, each of which – like nuclear war – has the potential to pose existential risks. Given this, it is important for bioethicists to formalize rigorous methods that are cognizant of the temporal dimensions of risk, beyond the short term. We too often neglect this at our peril and discount future events for short-term gain. Given the stakes we must do better. History, ultimately, will be the judge of how well we have done.

  2. It is always important to be reminded that “ethical choices” / “should questions” are not necessarily limited to “Oh, my God” moments. (I acknowledge that this is only the lead-in point of the article, but…)
    For years, I asked first-year medical students to tell me how often practicing physicians would face ethical dilemmas. Their answers were often in the weeks-to-months range. Even now, when my students are mostly mid-career health professionals, many do not immediately acknowledge the ordinariness of practical decisions that prompt ethical uncertainty. Fewer still appreciate the systemic harms potentiated by thoughtlessly… or selfishly… blowing past those uncertainties.

  3. Oppenheimer is often seen as some kind of moral hero because he helped America win WWII (or at least, end the war against Japan sooner) while suffering deep emotional pain from the guilt of being involved in making the most destructive weapon of all time… But almost nothing is written about Heisenberg’s moral heroism in finding a way to decide not to make it at all – to deny Hitler the atomic bombs Heisenberg absolutely could have invented first, had his distaste for Nazis not outweighed his patriotism. There’s a good short book covering this (and many other insightful topics) at length and it’s free this weekend (through 7/23/23) to encourage people to “buy” the free ebook and hopefully give a brief five-star review on Amazon afterwards. Oppenheimer and Heisenberg: Friends, Enemies and Architects of Destiny

  4. Wonderful piece. With the absolute bias of hindsight – might it be said that at the time of the Manhattan Project, leadership’s contemplation of a(n acute) chain reaction (e.g., catastrophic ignition of the planet’s entire atmosphere) was given more weight than the prospect of the “chronic chain reaction (CCR)” that has become our lived experience? By CCR, I mean the production of massive stockpiles over 3-4 generations and the nefarious pursuit of nuclear weapons by both rogue and developed nations, to be used as both a shortcut and a geopolitical cudgel/sledgehammer/negotiating chip?

    Indeed the planet seems to have been ignited, and, whether through luck, restraint, divine intervention or some other force(s), the burn has been relatively contained. But as Oppenheimer did seem to predict, even the slow burn inches its way toward finding accelerants. Those accelerants, along with their extinguishers, btw, seem to be far more social than physical forces. We see now that the key isn’t more assurances of mutual destruction. It’s turning down the social temperature first through listening and finding shared values, even among adversaries. Emphasizing those (values) and building mutual trust is the true long-game that ensures our survival.
