Belmont’s Ethical Malpractice

“The language of the biomedical model lends itself to ethical malpractice.” With these words, Albert Reiss condemned the 1978 Belmont Report. The report, he complained in a 1979 essay, reduced “people” to “subjects” and required procedures – such as risk-benefit analysis – that were inapplicable to research that did not resemble medical experiments.

Reiss was not a random critic. Rather, he was a prominent sociologist who himself had participated in the 1976 conference at Belmont House that would give the report its name, and who had been hired as a consultant by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, the report’s creator. But throughout the discussions that led to the report, the members of that commission ignored objections put forth by Reiss and other social scientists. The result was a report that is a notable achievement in the exploration of the ethical challenges raised by medical research, but which serves as a poor guide to research in the social sciences and humanities.

There is nothing grossly wrong with the Belmont Report’s “basic ethical principles”: respect for persons, beneficence, and justice. But, as bioethicist Albert Jonsen – one of the authors of the report – told me in a 2007 interview in San Francisco, those terms are “fairly vapid . . . [They] hardly rise above common sense notions.” Rather, the problem lies in the meatier “Applications” section, which demands informed consent, an assessment of risks and benefits, and fair outcomes in the selection of research subjects. Drawn from the traditions of medical research, these terms are often inappropriate or simply inapplicable to much research in other areas. Yet the Belmont Report does not explicitly limit the application of those terms to medical and psychological experimentation.

The reasons for this mismatch lie in the history of the National Commission, which I discuss in my book, Ethical Imperialism: Institutional Review Boards and the Social Sciences, 1965-2009. Congress established the commission with eleven members, five of them researchers. Of these five, three were physicians and two were psychologists, but none were scholars in the social sciences and humanities. As a result, the commissioners did not themselves explore the ethics of those fields, and they were slow to heed the advice of their own expert consultants who challenged the concepts that would eventually appear as the Belmont applications.

In his report to the commission, Reiss warned that many of the assumptions of the “bio-medical model” did not apply to sociological work. For one thing, that model assumed that investigators were in full control, something much truer in a medical clinic than in a telephone survey, where the subject could always hang up. Second, it assumed that the investigator began with a fixed list of procedures, hardly the case in much of social science. Third, it assumed that all harms were to be minimized, not the case in “muckraking sociology or social criticism.”

Another consultant, law professor Charles Fried, insisted at the Belmont conference that “freedom of inquiry” should be a “very important, basic underlying principle” in any report. Other social scientists, not employed by the commission, offered their own warnings. For example, sociologist Carl Klockars described the writing of one of his books and noted, “I am not aware of any . . . weighing of risks and benefits ever occurring.”

The commissioners and their staff ignored such arguments throughout their monthly meetings in 1976 and 1977, while the Belmont Report was being drafted. Only in 1978, in the last months of the commission’s four-year existence, did they voice their own doubts about the universal applicability of their work. At the February 1978 meeting, Jonsen commented on a near-final draft: “There is nothing in here that tells us why we are about to make a great big step which we have made from the beginning. Namely, why ought the thing that we are calling research be subject to what we call review?”

At the next month’s meeting, Chairman Kenneth Ryan, a physician, noted that the commission’s September 1977 report on research involving children had been “worked out – largely in a biomedical model, if you will, biomedical and behavioral model,” and now federal officials were unsure if it was supposed to apply to education research. Robert Cooke, another physician, confessed his own uncertainty: “I think that some things are applicable, and I suspect some are not.” In the April meeting, Ryan lamented that “The Commission, over its long time, has not really had an opportunity to spend adequate time, on the social science research problem.”

Donald Seldin, the third physician-commissioner, argued at the March meeting that the Belmont Report should be a “document which deals with handling specific human beings from a medical point of view.” He was supported at the same meeting by staffer Stephen Toulmin, who argued that “the basic idea that we are concerned with [is] the protection of individual research subjects, who, after all, are the people who are exposed to the experimentation,” and by staffer Bradford Gray, who warned against forcing sociologists to “respect” noxious groups like the Ku Klux Klan. But by the time these debates took place – in the spring of 1978 – the commission was closing up shop, its staff departing. No one had the time or energy to rewrite the report.

The result was a report that is maddeningly imprecise in its intended scope. True, the report offers a definition of research: “an activity designed to test an [sic] hypothesis, permit conclusions to be drawn, and thereby to develop or contribute to generalizable knowledge (expressed, for example, in theories, principles, and statements of relationships).” But that definition appears in a section devoted to allowing physicians to offer innovative therapies without running into regulatory requirements. It does not distinguish scholarly research from journalism (as Gray repeatedly begged the commission to do). Nor does it distinguish behavioral science from social experimentation; a hastily drafted footnote to the report concedes that the commissioners were stumped by that one.

Indeed, the definition of research in the Belmont Report does not even match the definition in the commission’s report on institutional review boards, which does not mention the testing of hypotheses. This latter version became the definition used by today’s regulations.

In the absence of an adequate definition, one must find the true meaning of the Belmont Report in the medical language it uses. It mentions Nazi biomedical experiments, “medical or behavioral practice,” the Hippocratic Oath, childhood diseases, poor ward patients, the Tuskegee syphilis study, the withdrawal of health services, “populations dependent on public health care,” and vaccination as areas of concern. It cites work in medical research ethics: a statement by Claude Bernard, the Nuremberg Code of 1947, and the Helsinki Declaration of 1964. It also cites the ethical code of the American Psychological Association. By contrast, the report makes no mention of the research or ethical standards of fields outside of the therapeutic areas of medicine and psychology, or any examples of ethical missteps in social science.

In a 2007 interview with me in his Georgetown University office, Tom Beauchamp, a commission staffer and one of the key authors of the Belmont Report, conceded that the commission had failed to explore the ethics of research outside of medicine and psychology. “You cannot do good work in [professional ethics] unless you have a pretty good understanding of the area that you’re concerned with,” he said. “For example, if you’re into the ethics of genetics, if you don’t understand genetics, you can’t do it. And so on. Did we do that [good work] on the commission when it came to the social sciences? Absolutely not.”

Albert Jonsen concurred, telling me in San Francisco, “We really should have made much clearer distinctions between the various activities called research.” He explained that “the principles of the medical model are beneficence – be of benefit and do no harm. I simply don’t think that that applies to either the intent or the function of most people doing research.”

It is not surprising that the National Commission paid so little attention to such fields as sociology, anthropology, linguistics, political science, history, and journalism. Aside from a few brief remarks, Congress had not taken testimony about or expressed interest in these disciplines, nor had it required the commission to investigate them. It is surprising, and disheartening, that the commission did not acknowledge its lack of expertise in the problems of social science research and decline to make recommendations not grounded in careful investigation. Sound ethical advice requires some humility.

Zachary M. Schrag is an associate professor in the Department of History and Art History at George Mason University and author of Ethical Imperialism: Institutional Review Boards and the Social Sciences, 1965-2009.

Published on: November 30, 2010
Published in: Clinical Trials and Human Subjects Research