Republished with permission from Votebeat, by Carrie Levine
We are 135 days out from the 2024 presidential election. The fight against misinformation and disinformation is hopelessly politicized, and the infrastructure set up to push back against bad or misleading information online is far less robust than it was in 2020.
High-profile groups that combat bad information about elections, such as the Stanford Internet Observatory, have scaled back or shut down. Some face threats, as well as lawsuits and investigations from conservatives who say the groups are allied with the left and are targeting speech liberals disagree with.
The result of the Republican pushback, some experts say, is a climate friendlier to disinformation that baselessly undermines faith in elections and feeds political extremism.
“We may be less prepared 155 days out in 2024 than we were under President Trump” in 2020, Sen. Mark Warner told The Associated Press earlier this month. Warner, a Virginia Democrat, chairs the Senate Intelligence Committee.
There’s plenty of bad or misleading information about elections out there, but not many clear answers, practical or legal, about what can be done in response. Some direction could come soon from a potentially pivotal decision: The U.S. Supreme Court is expected to rule any day in Murthy v. Missouri, a closely watched case over whether the White House and government agencies may urge social media companies to take down what the government characterizes as misinformation.
Experts say the case and allegations of partisan-driven censorship have already had a chilling effect, and the federal government has pulled back its efforts to fight misinformation. During oral arguments, though, many of the justices seemed disinclined to rule that such efforts violated the First Amendment.
Other courts are wrestling with the limits of free speech.
The Michigan Supreme Court ruled last week in a case involving felony charges against two conservative political operatives, Jacob Wohl and Jack Burkman, who targeted Black voters in multiple states with robocalls in 2020. The robocalls falsely warned voters that the information they supplied when voting by mail could be used to pursue them over unpaid debts and other issues.
In the ruling, a majority of the justices found that a provision of Michigan law barring the use of threats or other improper means to deter a person from voting was too broad. They held that false speech about voting could be punished without violating free speech rights, but “only if it is intentionally false speech that is related to voting requirements or procedures and is made in an attempt to deter or influence an elector’s vote.”
The court returned the case to the lower court, instructing it to determine whether Wohl and Burkman’s conduct violated the law under that narrower construction.
The Michigan case is still pending, but other jurisdictions have already found that Wohl and Burkman’s robocalls crossed the line. The pair agreed to pay $1.2 million to settle a suit in New York, and pleaded guilty to related charges in Ohio. And the Federal Communications Commission issued a $5 million penalty against them for illegal robocalls, brushing aside the argument that political robocalls were exempt from a consumer protection law.
Few officials or groups are standing up to defend Wohl and Burkman’s robocall effort, and for good reason. But on other matters of disinformation, there’s less consensus.
Claire Wardle, an expert on misinformation who co-founded and co-directs the Information Futures Lab at Brown University, said the conservative attacks on experts and institutions researching and speaking out against misinformation have “made funders more tentative about supporting these types of projects.”
Individual researchers have either stopped doing this work or become less vocal, she said, and the relationships between researchers, social media platforms, and the government have “shut down almost completely. There is almost no communications between these groups, which means there is no knowledge sharing, learning or flagging issues.”
Nina Jankowicz, who earlier this year formed the American Sunlight Project, a new nonprofit aimed at fighting disinformation, made a similar point, saying the biggest change is in “the landscape of cooperation” on election disinformation, both inside and outside government.
Russia is now just one of several foreign actors trying to influence U.S. elections with bad information, Jankowicz pointed out, and these nations are “laundering” their messaging to conceal its true source and make it appear more trustworthy.
“It’s a much easier way to ensure that your message gets out there than trying to buy ads in rubles,” she said.
So given the threats from many directions, what can members of the public do?
The good news, Jankowicz said, is that Americans are now more aware that social media algorithms tailor the information aimed at them, and they are more information-literate. Armed with that knowledge, she said, people need to take a few beats and confirm that something comes from a reputable source before they share it.
“Everybody has a responsibility to be vetting their information,” she said.
Carrie Levine is Votebeat’s managing editor and is based in Washington, D.C. She edits and frequently writes Votebeat’s national newsletter. Contact Carrie at clevine@votebeat.org.
Votebeat
Votebeat is a nonprofit news organization committed to reporting the nuanced truth about elections and voting at a time of crisis in America.
Our mission is to help people understand our system of democracy so they can participate in strengthening it. Our approach is to cover and explain the mechanics of voting: no political polls, candidate platforms, or Election Day results. Instead, we focus on how elections are run, from early and mail-in voting to voter registration and election security. Because we believe that elections are fundamentally a local issue, our coverage is rooted in local communities.