Imagine this: two news articles are shared simultaneously online.
The first is a deeply reported and thoroughly fact-checked story from a credible news-gathering organisation – perhaps Le Monde, The Wall Street Journal, or Süddeutsche Zeitung.
The second is a false or misleading story. But the article is designed to mimic content from a credible newsroom, from its headline to the way it is shared.
How do the two articles fare?
The first article – designed to inform – receives limited attention. The second article – designed for virality – accumulates shares. It exploits the way your brain processes new information, and the way social media decides what to show you. It percolates across the internet, spreading misinformation.
This isn’t a hypothetical scenario – it’s happening now in the United States, the United Kingdom, France, Germany and beyond. The Pope did not endorse a US presidential candidate, nor does India’s 2,000-rupee note contain a tracking device. But fabricated content, misleading headlines, and false context convinced millions of internet users otherwise.
This type of fraud is reaching epidemic proportions worldwide, at least in part because the online advertising economy that underlies much of today’s internet is terribly broken. The rise of misinformation discussed under today’s catch-all banner of ‘fake news’ needs to be understood in the context of unhealthy market realities that can reward malicious behaviour for profit or political gain.
Most people now get at least some of their news from social media. To maximise profits from displaying ads, news feeds and timelines surface the content that attracts the most attention. This ends up favouring headlines that scream for reactions (expressed as shares, ‘likes’ and comments). Add to this the ability to boost the visibility of any message by buying an ad and targeting the people most likely to react to it (based on interests, behaviours and relationships), and anyone can churn out disinformation at unbelievable rates and then track their success.
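The dynamic described above can be sketched in a few lines of code. This is a deliberately simplified illustration, not any platform's actual algorithm: the function names, weights and sample numbers are all invented for the example. The point it makes is structural: when a feed is ordered purely by predicted reactions, accuracy never enters the calculation.

```python
# Illustrative sketch only -- the weights and field names are assumptions,
# not any real platform's ranking system.

def engagement_score(post):
    """Score a post by the reactions it attracts, not by its accuracy."""
    return (post["shares"] * 3.0      # shares spread content furthest
            + post["comments"] * 2.0  # comments signal strong reactions
            + post["likes"] * 1.0)    # likes are the weakest signal

def rank_feed(posts):
    """Order a feed so the most reaction-grabbing posts appear first.

    Note what is missing: nothing here asks whether a post is true.
    """
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"title": "Thoroughly fact-checked report", "shares": 40, "comments": 25, "likes": 300},
    {"title": "Outrage-bait fabrication", "shares": 900, "comments": 400, "likes": 1200},
]

for post in rank_feed(posts):
    print(post["title"])  # the fabrication ranks first
```

In this toy model the fabricated story wins the top slot simply because it provokes more shares and comments, which is the incentive problem the rest of this section describes.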
Finding fixes
Online misinformation is a major threat to the health of the internet and all of the societies it touches. It can sow political disorder, undermine trust in the truth, and spread hatred and rumours in conflict or disaster zones. It also invites attempted quick fixes by politicians (with or without ulterior motives) that may threaten the openness of the internet.
For example, Germany’s reaction to misinformation and hate speech online was to make social media platforms responsible for taking down unlawful content. Other countries, including Russia and Kenya, have passed laws that follow suit. We should be wary of any solutions that make Facebook, Twitter or any other corporations (or their algorithms) the gatekeepers of the internet.
Instead of quick fixes, we need to take the time to better understand the problem and the kaleidoscope of actors and symptoms. We’re facing a mix of junk news, computational propaganda, information pollution and low digital literacy.
Numerous people are already working on ways to tackle parts of the problem. Developers and publishers are trying to build more thoughtful and balanced communities around their news. The Credibility Coalition is working on a web standard to support the detection of less trustworthy and unreliable content. Teachers are developing curricula to help their students grapple with misinformation. And social platforms are trying to make political ads more transparent, although with limited effect. These are still early days for many ideas.
Even if efforts like these succeed, many argue that we’ll still have to tackle a bigger internet health problem: the underlying online advertising and engagement model that rewards abuse, fraud and misinformation. It’s hard to imagine fixing this problem without regulation, radical changes in internet business models, or both.