
Bioethics Forum Essay

Fake News: A Role for Neuroethics?

Fake news proliferates on the internet, and it sometimes has real consequences. It may have played a role in the recent election of Donald Trump to the White House and in the Brexit referendum. Democratic governance requires a well-informed populace: fake news seems to threaten the very foundations of democracy.

How should we respond to its challenge? The most common response has been a call for greater media literacy. Fake news often strikes more sophisticated consumers as implausible. But there are reasons to think that the call for greater media literacy is unlikely to succeed as a practical solution to the problem of fake news. For one thing, the response seems to require what it seeks to bring about: a better-informed population. For another, while greater sophistication might allow us to identify many instances of fake news, some of it is well crafted enough to fool even the most sophisticated (think of the recent report that the FBI was fooled by a possibly fabricated Russian intelligence report).

Moreover, there is evidence that false claims have an effect on our attitudes even when we initially identify the claims as false. Familiarity – processing fluency, in the jargon of psychologists – influences the degree to which we come to regard a claim as plausible. Due to this effect, repeating urban legends in order to debunk them may leave people with a higher degree of belief in the legends than before. Whether for this reason or for others, people acquire beliefs from texts presented to them as fiction. In fact, they may be readier to accept that claims made in a fictional text are true of the real world than claims presented as factual. Even when they are warned that the story may contain false information, they may come to believe the claims it makes. Perhaps worst of all, when asked how they know the things they have come to believe through reading the fiction, they do not cite the fiction as their source: instead, they say it is ‘common knowledge’ or they cite a reliable source like an encyclopedia. They do this even when the claim is in fact inconsistent with common knowledge.

So we may come to acquire false beliefs from fake news. Once acquired, beliefs are very resistant to correction. For one thing, memory of the information and memory of the correction may be stored separately and decay at different rates: even after a correction, people may continue to cite the false claim because they do not recall the correction when they recall the information. If they recall the information as being common knowledge or as coming from a reliable source, knowing that Breitbart or Occupy Democrats is an unreliable source may not affect their attitudes. Even if they do recall the retraction, moreover, they may continue to cite the claim.

Finally, even when we succeed in rejecting a claim, the representation we form of it remains available to influence further cognitive processing. Multiple studies have found that attitudes persist even after the information that helped to form them is rejected.

All this evidence makes the threat of fake news – of false claims, whether from unreliable news sources or from politicians and others who seek to manipulate us – all the greater, and suggests that education is not by itself an adequate response. We live in an age in which information, true and false, spreads virally across the internet in an unprecedented way. We may need unprecedented solutions to the problem.

What are those solutions? I must confess I don’t know. An obvious response would be censorship: perhaps with some governmental agency vetting news claims. While my views on free speech are by no means libertarian, I can’t see how such a solution could be implemented without unacceptable limitations of individual freedoms. Since fake news has an international origin, the sources can’t effectively be regulated, so regulation would have to target individuals who would share the stories on social media. That kind of regulation would require incredibly obtrusive monitoring and unacceptable degrees of intervention, and would place too much power in the regulating agency.

A better solution might be to use the same kinds of psychological research that warn us about the dangers of fake news to design contrary sources of information. The research that shows how people may be fooled by false claims also provides guidance on how to make people more responsive to good evidence. We could use this research to design informational nudges, with the aim of ensuring that people are better informed.

This solution itself requires scrutiny. Are such nudges ethical? I think they are, or at least can be. Further, would good information crowd out bad? We aren’t in a position to confidently say right now. What we can say, however, is that fake news is a problem that cries out for a solution. If we can’t solve it, we may find that democratic institutions are not up to the job of addressing the challenges we face today.

Neil Levy is professor of philosophy at Macquarie University in Sydney, Australia, and a senior research fellow at the Uehiro Centre for Practical Ethics at the University of Oxford. A version of this essay originally appeared on The Neuroethics Blog.

 


