2. An Algorithmic About-Face

While disinformation's virulent spread on social media is alarming on its own, it's exacerbated by the fact that many Americans lack a diverse media diet. In a Pew Research Center survey, nearly one in five adults said they get their political news primarily from social media. And even among U.S. adults who also read news sites or watch television news, nearly half get at least some of their political news from social media.

The lack of media literacy education in the United States is a huge driver of misinformation. In a different Pew Research Center survey, participants were asked to distinguish five fact-based news articles from five opinion-based ones. Only one in three respondents correctly categorized all ten.

And social media companies aren’t helping. In 2018, Facebook made a big change to its News Feed algorithm: it deprioritized content from publishers and pushed up content that drives what it calls “meaningful interactions.” If you take Mark Zuckerberg at face value, a “meaningful interaction” means more posts from your grandma. But in reality, these “meaningful interactions” are just posts similar to ones you’ve already engaged with, posts that rack up likes and shares. And likes and shares are often driven by emotional responses.
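To make that ranking logic concrete, here is a minimal, hypothetical sketch of engagement-weighted feed ranking. Every field name and weight below is invented for illustration; Facebook’s real system is far more complex and not public. The basic idea, though, is the same: posts predicted to provoke reactions rise to the top.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_is_friend: bool    # a "meaningful" personal connection vs. a publisher
    predicted_likes: float    # model's guess at how many likes this post would draw
    predicted_shares: float
    predicted_comments: float

def engagement_score(post: Post) -> float:
    """Hypothetical 'meaningful interactions' score: reaction-provoking posts
    score higher, and friends get a boost over publishers (weights invented)."""
    score = (
        1.0 * post.predicted_likes
        + 3.0 * post.predicted_shares     # shares weighted heavily (assumed)
        + 2.0 * post.predicted_comments
    )
    if post.author_is_friend:
        score *= 1.5                      # friends over publishers (assumed)
    return score

def rank_feed(posts: list[Post]) -> list[Post]:
    # The feed is simply the posts sorted by that score, highest first.
    return sorted(posts, key=engagement_score, reverse=True)
```

Nothing in a score like this asks whether a post is true; it only asks whether you are likely to react to it.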

I spoke with Katya Vogt, an expert who works on international disinformation campaigns, and she told me about “the pause”: the moment between when you read a headline that enrages you and when you smash the share button. Not enough people actually take that pause, and that has a massive impact on how quickly misleading information spreads. Katya told me her organization has studied what happens when you make people pause before sharing an article; forcing them to check their emotional response decreased their likelihood of sharing disinformation.

So we’re supposed to pause, but algorithms are built to show us content that short-circuits the pause. We’re set up for failure. That’s especially true for content that traffics in anger and fear, like the content pushed by hate groups. In the past year, we’ve seen kidnapping plots, armed militias, and more organized through Facebook Groups, one of the things that Facebook’s “meaningful interactions” algorithm promotes. In an internal review, Facebook found that “64% of all extremist group joins are due to our recommendation tools.”

And it’s not just Facebook; the same thing happens on other sites that use algorithms to recommend content. Someone watches a YouTube video from Fox News, then the algorithm recommends something from Ben Shapiro, and eventually they end up watching videos from extremist, alt-right personalities. We know this happens from anecdotal accounts, such as the one told in The New York Times’s podcast Rabbit Hole. We also know that online communities on the right are tightly interconnected; the pathway to radicalization is just one YouTube click away.
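As a toy illustration of that rabbit-hole dynamic, imagine a recommender that always surfaces the single most-engaging “related” video. The graph and engagement numbers below are entirely made up; they just show how a greedy, engagement-chasing chain of recommendations can drift toward ever more extreme content.

```python
# Hypothetical "related videos" graph: each entry maps a video to candidate
# recommendations with an invented engagement score.
related = {
    "cable-news-clip":   [("pundit-monologue", 0.81), ("debate-highlights", 0.62)],
    "pundit-monologue":  [("culture-war-rant", 0.87), ("cable-news-clip", 0.40)],
    "culture-war-rant":  [("extremist-channel", 0.90), ("pundit-monologue", 0.55)],
    "extremist-channel": [("extremist-channel", 0.95)],
}

def rabbit_hole(start: str, clicks: int) -> list[str]:
    """Greedily follow the highest-engagement recommendation at each step."""
    path = [start]
    for _ in range(clicks):
        options = related.get(path[-1], [])
        if not options:
            break
        next_video, _ = max(options, key=lambda pair: pair[1])
        path.append(next_video)
    return path

print(rabbit_hole("cable-news-clip", 3))
# ['cable-news-clip', 'pundit-monologue', 'culture-war-rant', 'extremist-channel']
```

Three clicks, in this contrived example, is all it takes to go from a cable news clip to an extremist channel.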

In short, we are vulnerable, and algorithms are built to exploit that vulnerability. We need to ban the algorithmic amplification of content, period. There’s just one problem: the entire business model of the internet is built on it.

Next: An Advertising Intervention

Map 2: How does algorithmic amplification lead to misinformation?

[Diagram, in four stages:
Stage 1, personal interaction. Your friend: “I don’t like Joe Biden.” You: “Me neither.”
Stage 2, recommended post. Your feed: “BREAKING: Joe Biden hit a puppy.” You: “This makes me mad.”
Stage 3, like this page. Your feed: “Joe Biden hit three puppies.” You: “I hate Joe Biden so much.”
Stage 4, join this group. Your feed: “Joe Biden is a baby killer from China.” You: “Let’s overthrow Joe Biden.”]