In the wake of the Capitol insurrection on January 6, 2021, the federal government is scrambling to address the rising threat of extremism driven by online misinformation. In Congress, lawmakers are holding dozens of hearings and taking hours of testimony, and while there seems to be a growing desire to regulate the companies that have helped amplify radical content, there is no clear path forward.
Meanwhile, misinformation is spreading on a global scale, and it's happening fast: Chinese information operations used American-owned platforms like YouTube to influence Taiwan's 2020 election, according to the Atlantic Council's Digital Forensic Research Lab. And a study published in BMJ Global Health found a statistically significant link between falling vaccination rates and disinformation campaigns on social media.
We are in a crisis, and we need to address it now. Doing so will require a global effort to assess the problem and an audit of the algorithms that help spread it. Many people in government are already thinking about this, and many more are tasked with protecting us from misinformation. There's only one problem: they're scattered across the 700,000-person workforce that is the federal government. Legislators are trying to draft laws to make certain tech practices illegal; antitrust lawyers are trying to break up big companies; and intelligence agencies are trying to track harmful extremist networks online. But these agencies aren't talking to one another or collaborating in any structured way. There is no centralized place within the government for experts to share information; instead, the work happens in silos and is often treated as unrelated from one agency to the next. This fragmented federal response undermines our ability to address the full scope of the ongoing threat.
We need to urge Congress to create a mis- and disinformation task force within the federal government. Drawing on expertise from agencies across government, the task force could coordinate policy, research, and public-awareness efforts pertaining to misinformation. It could recommend laws for Congress to draft and pass. And it could help enforce those laws and hold large technology companies to account.
While a policy response isn't the only one that matters, it is one that companies care about. In the lead-up to the 2020 U.S. presidential election, companies like Twitter, Facebook, and YouTube worked to fight disinformation by labeling posts that contained potentially false claims, which led conservatives to claim they were being censored by social media. In reality, posts from conservative sources like Fox News, Donald Trump, and Dan Bongino were among the most-shared on Facebook, even posts containing disinformation about election fraud.
Following the Capitol insurrection, companies took a stronger stance. Many tech reporters described it as “all of the social media companies finally holding hands and jumping”: taking the leap to deplatform Donald Trump along with hundreds of other accounts tied to right-wing extremism.
Initial data shows that after Trump's deplatforming, misinformation and disinformation dropped dramatically on these platforms. It seems that, at least in the short term, the power to curb disinformation lies with the companies themselves, which can either choose to take action or be pressured into it through policy and activism. We know they won't choose it on their own, so we have to demand that our government mandate it.
Map 1: Why is the government's response to misinformation so fractured?