
Bioethics Forum Essay

Clicking ‘Accept’ Is Not Informed Consent

A recent Science article published the results of an experiment conducted on 20 million LinkedIn users over five years involving the “People You May Know” algorithm. The experiment randomly manipulated the algorithm to understand the effect on users’ likelihood of getting jobs. None of these people knew they were part of an experiment, nor did they consent to participate.

Informed consent is a bedrock of human subjects research ethics in the United States. Any study done with federal funds, or conducted at an institution that receives federal funds, must have all proposed human subjects research reviewed by an institutional review board. Part of the process is ensuring that there are adequate protections to prevent harm to the subjects and to make sure that they consent to participating in the experiment. Such consent means that they know the risks, benefits, and alternatives, and have an opportunity to ask questions and to refuse to participate.

Since LinkedIn is a private company, owned by Microsoft, it does not legally fall under the requirements for human subjects review. But these requirements are so widely accepted that most human research studies in the U.S. abide by them. Although the study was approved by the MIT Institutional Review Board, one must question what was included in the protocol application and which elements of this study were debated. After all, potential subjects should at the very least know that they are subjects. There are some experiments in which knowing the process would influence the outcome, and IRBs have mechanisms for such situations: people agree to being subjects and are debriefed afterward. Even that ethical practice is missing here.

Eight years ago, Facebook was criticized for conducting a social experiment that manipulated the emotional content of users’ news feeds and learned that people who saw more negative content displayed more traits of depression in their posts. In all this time, no standards or regulations have been created to address the gap in human research oversight involving studies conducted on social media.

Social media companies claim that their terms of use permit them to run research trials on users. The main user agreement for LinkedIn is 14 pages long. In addition, there are community policies, additional terms of service, a privacy policy, a cookie policy, a copyright policy, and California-specific policies. All told, people would have to read 46 pages of legalese to know the company can experiment on them. The privacy policy states: “We use data… to conduct research and development for our Services in order to provide you and others with a better, more intuitive and personalized experience, drive membership growth and engagement on our Services, and help connect professionals to each other and to economic opportunity.” In other words, the goal of making you an unwitting research subject is to help LinkedIn make more money.

Researchers and IRBs aim to have informed consent documents written in everyday language to protect the potential research participants and to provide a benefit to society at large. Both the LinkedIn and Facebook studies are violations of these basic human subjects research ethics. If you disagree with these companies’ terms of service (which include being a research subject), your only option is to close your account. These companies could voluntarily follow research ethics standards that have existed for over 50 years. They have chosen not to. The only possible response is either for individuals to stop using their products or for states and the federal government to develop regulations and require research oversight to protect social media users from the potential harms of these studies.

Craig Klugman, PhD, is the Vincent de Paul Professor of Bioethics and Health Humanities at DePaul University. @CraigKlugman

  1. There seems to me to be something intrinsically wrong with having terms of use agreements pull double duty as informed consent documents. It’s no secret that a great many people don’t even bother to read terms of service; numerous studies show that anywhere from 91% to 99% of people accept without reading. This type of agreement is antithetical to the idea of informed consent, which by its very nature requires a close reading and understanding of what is being agreed to. As you noted, there is also a marked difference in the language used in each type of document – informed consent uses everyday language, in stark contrast to the legalese that often riddles terms of use. Ethically, at least, the examples you’ve highlighted from LinkedIn and Facebook appear to be basically indefensible.

    I believe there are some interesting legal questions as well. Provisions in contracts can be rendered unenforceable due to their unconscionability; it’s not too far a stretch to imagine that requiring compulsory service as a potential research subject, with no idea of the nature or parameters of the research, in order to use a service might be enough to “shock the conscience.” The existence of 45 CFR 46 proves that the state has at least some interest in this. The manner in which the terms are presented plays a role as well, and courts have held terms of service unenforceable for being presented in too inconspicuous a manner (Cullinane v. Uber Technologies, Inc., No. 16-2023, 2018 WL 3099388 (1st Cir. June 25, 2018)).

    The more terms of use ask of a user, the more scrutiny they should receive. Asking for informed consent is quite a lot. If social media companies continue to insist on ignoring well-established research ethics standards, I agree that governmental regulation is required.
