
Deepfakes and Digital Disinformation: A Looming Threat

In recent years, democracies have increasingly come under attack by perpetrators of Digital Disinformation, popularly labeled Fake News. A European Union (EU) study conducted after the recent European Parliament elections documented a consistent pattern of malicious activity. Russia, most notably, used fake accounts and bots during the European campaign to amplify divisive content, promote extreme views, and polarize local debates. EU countries with strong cultures of independent journalism and governments actively fighting Russian disinformation campaigns were the most resistant to phony news stories. Countries lacking these institutional protections, such as Poland and Hungary, were more vulnerable.

A growing concern is a new technology that almost anyone can use to create increasingly convincing but false sound clips, videos, and photos. “Deepfakes,” media that have been digitally altered using artificial intelligence techniques, pose a major risk to both individuals and democratic institutions. Laypeople can now plug a photograph or video clip into prewritten code and produce an extremely lifelike but false image or video. Deepfakes are inherently hard to detect, and society so far is largely ill-equipped to deal with them.

Incentives to post Deepfakes on social media are likely to grow in the coming months as the US presidential election campaign heats up. Digital Disinformation and Deepfakes could also be used to inflame passions surrounding such highly divisive issues as abortion, gun control, and immigration because:

• Massive groups of people can be reached almost instantaneously.

• Deepfakes can be micro-targeted, focusing on those most open to persuasion.

• The perpetrators are rarely held accountable for what is posted.

The perpetrators of Digital Disinformation and Deepfakes appear particularly adept at manipulating perceptions by exploiting common cognitive biases, misapplied heuristics, and intuitive traps. Examples that the Russians and others have leveraged to promote their agendas include:

Confirmation Bias. Social media is a Confirmation Bias “machine.” People are predisposed to accept information that is consistent with the judgments, conclusions, and preferences they have already formed. Perpetrators of Digital Disinformation know how to tailor their messages to reinforce someone’s fears or influence how he or she votes.

Vividness Bias. The objective of much Digital Disinformation is to generate clicks. Clicks lead to increased site traffic, which in turn increases income from ad revenue and donations. The more salacious and outrageous the story, the more clicks it generates. When attention is focused on vivid scenarios, individuals are less likely to consider other possibilities or alternative hypotheses.

Groupthink. Social media creates echo chambers that enable the acceptance of a certain view without challenging it through critical thinking. This is especially easy when one is surrounded by others holding the same opinion.

Anchoring Effect. Once anchored on an assessment, people usually adjust their views as they learn more. But if the initial assessment is highly skewed, even people’s adjusted views will be influenced by first impressions, leading them to make decisions grounded in incorrect or misleading information. People are particularly susceptible to this bias if they are already predisposed to believe a certain idea.

Judging by Emotion. Accepting or rejecting new information because the recipient is predisposed to like or dislike the source is a classic trap that is easily manipulated. Much of the visceral hatred evidenced in political campaigns is likely to be a product of this intuitive trap.

Confusing Correlation with Causality. Many people readily jump to the conclusion that one variable causes another because they want it to be true or because they think the “connection” proves their beliefs or justifies their positions. This trap is a favorite tool of manipulators of social media.

Ignoring Inconsistent Evidence. When confronted with data that is inconsistent with one’s worldview, politics, or deeply held beliefs, a classic reaction is not to argue the facts but to avoid the discussion. The true measure of success for purveyors of Digital Disinformation is when people come to believe there is no truth or that the truth is unknowable.

The best antidote to such manipulation is to increase public awareness of how vulnerable people are to these behavior modification campaigns and to adopt the more deliberate and purposeful thought processes described by Daniel Kahneman in his book, Thinking, Fast and Slow. Structured Analytic Techniques, in particular, are effective in helping people recognize when they are being influenced by disinformation campaigns and in countering their impact. Four especially effective techniques for combating the growing scourge of Deepfakes and Digital Disinformation are the Key Assumptions Check, Analysis of Competing Hypotheses, Indicators, and Premortem Analysis with its companion, the Structured Self-Critique.
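To make one of these techniques concrete, here is a minimal sketch of the scoring step at the heart of Analysis of Competing Hypotheses (ACH), expressed in Python. The hypotheses, evidence items, and consistency ratings are invented for illustration only; what matters is the logic of the method: each piece of evidence is rated against every hypothesis, and the hypothesis with the least inconsistent evidence, rather than the most supporting evidence, is treated as the strongest.

```python
# Minimal sketch of the ACH scoring step. The hypotheses, evidence items,
# and ratings below are hypothetical examples, not data from this article.
# Ratings: "C" = consistent, "I" = inconsistent, "N" = neutral.

hypotheses = [
    "H1: The viral video is authentic",
    "H2: The viral video is a Deepfake",
]

# Each evidence item is rated against every hypothesis.
evidence = {
    "Unnatural blinking and lip-sync errors":        {"H1": "I", "H2": "C"},
    "No independent outlet can source the footage":  {"H1": "I", "H2": "C"},
    "File metadata shows recent re-encoding":        {"H1": "N", "H2": "C"},
    "Subject verifiably attended the claimed event": {"H1": "C", "H2": "N"},
}

# ACH emphasizes disconfirmation: tally the "I" ratings for each hypothesis.
# The hypothesis with the FEWEST inconsistencies survives, which counters the
# natural pull of Confirmation Bias toward counting only supporting evidence.
def inconsistency_scores(evidence):
    scores = {hyp.split(":")[0]: 0 for hyp in hypotheses}
    for ratings in evidence.values():
        for hyp_id, rating in ratings.items():
            scores[hyp_id] += rating == "I"
    return scores

scores = inconsistency_scores(evidence)
for hyp in hypotheses:
    print(f"{hyp}: {scores[hyp.split(':')[0]]} inconsistent item(s)")
```

In a real ACH exercise the matrix is larger and the ratings are debated by a team, but even this toy version shows why the technique blunts Confirmation Bias: the analyst must search for evidence that refutes each hypothesis instead of evidence that supports a favorite.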

You can learn more about cognitive biases, misapplied heuristics, and intuitive traps in the Handbook of Analytic Tools and Techniques by Randolph H. Pherson (available at shop.globalytica.com). To learn more about our associated training opportunities, including a Strategic Foresight Workshop to be taught in Australia in August 2019, click here.
