Generative AI and How It Can Add New Dimensions to Election Interference

by Carl Smith | May 21, 2023 | Politics, Corruption & Criminality

Samuel Altman, CEO of OpenAI, testified about the promise and dangers of current AI technology during a Senate Judiciary Subcommittee meeting on Tuesday, May 16, 2023. (ANDREW CABALLERO-REYNOLDS/AFP/TNS)

Election administrators are still digging out from the mountains of misinformation left by the 2020 election cycle. Bad actors are using AI to ramp up for the next one.

In Brief:

  • Misinformation about election processes and election officials, spread and amplified through social media, has made the work of election officials much more difficult.
  • New artificial intelligence tools have the potential to intensify this problem.
  • Election officials have options to fight back, whether working with industry to detect bad actors or using AI tools to defend against misinformation.

Social media, powered in no small part by artificial intelligence (AI), was used to greatly amplify falsehoods and slander regarding the integrity of American elections and the people who run them. It played a major role in creating a climate in which election officials faced unprecedented harassment and threats, an atmosphere that emboldened some citizens to mount an assault on the nation’s Capitol building.

Lives were lost, and American democracy itself came uncomfortably close to unraveling. These outcomes—not to mention the muddying of the life-and-death pandemic response by the same technology—raise powerful, real-world questions about how much better off we might really be if machines make more of our decisions for us.

Concerns about the potential for disruption and unanticipated consequences from the latest advances in artificial intelligence have made headlines for weeks. At a Senate hearing on May 16, the CEO of OpenAI testified that AI has the potential to improve “nearly every aspect of our lives” but that it also poses “serious risks.”

AI has advanced in the private sector, with government only now catching up to what it can do. The Senate is considering the need for regulation, possibly under a newly created agency. The European Parliament has just approved a draft of a law that takes a risk-based approach; China has also created draft rules.

The potential applications of this technology touch on almost every aspect of public- and private-sector operations. The trouble it has already created for those preparing for a high-stakes election in 2024 hasn’t subsided; if anything, its roots are deeper.

What kind of trouble could AI add to the mix, and how can the public sector respond?

Thinking Machines

Stanford computer scientist John McCarthy described intelligence as “the computational part of the ability to achieve goals in the world.” Humans and animals have this ability.

The first computers that could solve problems by processing data (“digital computers”) were built in the 1940s. One of the earliest demonstrations that a machine could learn was Theseus, a robotic mouse created in 1950 that could solve a maze and remember how it did it.

The concept of AI entered the public vocabulary around this time, thanks in large part to the work of the late Alan Turing, widely regarded as the father of computer science. “May not machines carry out something which ought to be described as thinking but which is very different from what a man does?” he asked.

At issue today are the expanded, and expanding, capabilities of “generative AI”: technology that can “learn” from massive amounts of data, including interactions with humans, by way of large language models, and create something new. (It’s worth underscoring that at this point “new” is more a matter of “reshuffling” existing data than original thought.)
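
To make the “reshuffling” point concrete, consider a minimal sketch in Python: a toy word-level Markov chain. It is vastly simpler than an actual large language model, which uses deep neural networks trained on billions of documents, but it shows the underlying idea that generated text is a statistical recombination of what the model has seen. The corpus and names here are invented for illustration.

```python
# A toy word-level Markov chain: far simpler than a real large language
# model, but enough to show that generated "new" text is a statistical
# recombination of the training data. Corpus and names are invented.
import random
from collections import defaultdict

def train(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    follows = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(follows, start, length=12):
    """Walk the chain, sampling each next word from observed successors."""
    out = [start]
    for _ in range(length - 1):
        choices = follows.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

corpus = ("election officials count ballots and election officials certify "
          "results and officials publish results for voters to review")
model = train(corpus)
print(generate(model, "election"))  # e.g. "election officials certify results ..."
```

Every word the toy emits came from its training text; larger models blend patterns far more fluidly, but the same dependence on existing data applies.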

Generative AI is already powering chatbots, GIS mapping applications, search engines and other familiar tools. But as it becomes more powerful, so does its potential to give new capabilities to those who want to do harm.

Deepfake images and videos are familiar examples, but even as they improve, they may be easier to identify and defuse than some of the other things people with dubious intentions can create with generative AI.

The dramatic increase in artificial intelligence’s ability to understand language has multiple consequences, from producing convincing writing and translation to lowering the bar for users, who need nothing more than language to put AI to work for them.

The Wrong Kind of Efficiency

Paul Barrett, deputy director and senior research scholar at NYU’s Stern Center for Business and Human Rights, is the author of numerous reports about social media’s corrosive effects on social discourse and politics, including its role in spreading election misinformation. He’s currently working on a paper about the potential for generative AI to raise the volume.

“These programs will almost certainly help bad actors who want to confuse voters or put false ideas in their heads,” Barrett says. They will make it possible to create more personalized communication. They will give foreign actors the ability to write messages in English without the idiomatic or punctuation mistakes that can be tip-offs to their origin.

“These large language models produce realistic, grammatically smooth, nonrepetitive prose,” he says. Russians won’t need hundreds of people working for the Internet Research Agency in a St. Petersburg office building to produce fake posts tailored to specific social media platforms.

An application like ChatGPT could be used in combination with another AI system to command and control a disinformation campaign, says Todd Helmus, a behavioral scientist at RAND Corporation who studies disinformation. “We’re probably a bit far off from that right now, but if you look into the future that capability will certainly be there.”

Generative AI can also give users who are not technology experts or skilled communicators a tool to create sophisticated information (disinformation) operations, Barrett warns. “You can tell ChatGPT or its cousins that you want a message expressed in a vernacular that a certain audience will appreciate or understand, and it will do its darnedest to accommodate you.”

Bad actors may not have free rein, he notes. Designers are attempting to build in filters that detect illegitimate queries such as requests to create code for malware. But it’s possible that a clever and persistent person can still find ways to work around them. There’s talk on the dark web about AI and malware, Barrett says, and election systems are potentially vulnerable to cyber attacks.
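
As a crude illustration of why such filters are hard to make airtight, here is a sketch of the simplest possible layer, a keyword screen on incoming prompts. Production guardrails layer trained classifiers, policy models and human review on top of rules like this; the block list and examples below are hypothetical.

```python
# A deliberately naive sketch of one guardrail layer: a keyword screen.
# Real systems add trained classifiers and human review; this toy shows
# why a fixed block list alone is easy to evade. Terms are illustrative.
BLOCKED_TERMS = {"malware", "ransomware", "keylogger"}

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt should be refused under this naive rule."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

# A direct request trips the filter...
print(screen_prompt("Write malware that steals election data"))         # True
# ...but a reworded request for the same thing slips straight through.
print(screen_prompt("Write a program that quietly copies voter files"))  # False
```

The second prompt asks for the same harm without any blocked word, which is exactly the cat-and-mouse dynamic Barrett describes.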

The same technology that could bedevil election offices can be used by them to defend against attacks. Companies that hope to profit from broad implementation of generative AI should have a vested interest in finding ways to block users with bad intentions.

In March, Microsoft announced the release of a “next-generation” AI cybersecurity tool. Barrett hopes this is a sign that companies will move forward with a combination of profit seeking and civic responsibility.

That injecting AI into the communications infrastructure carries dangerous, real-world consequences is not merely theoretical. (Kent Nishimura/TNS)

The Election is the Thing

There are other proactive steps. Digital watermarks can help establish the authenticity of an image or video. Helmus points to the Content Authenticity Initiative, a collaborative of academics, nonprofit organizations, tech and media companies promoting an open standard for content provenance that will indicate when content has been created by AI.
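
The verification flow behind such provenance credentials can be sketched in a few lines. The example below is illustrative only: the open standard the initiative promotes uses signed manifests and certificate chains, whereas here a shared-secret HMAC from Python’s standard library stands in for the signature, and the key and byte strings are hypothetical.

```python
# Illustrative only: an HMAC over the content bytes stands in for the
# cryptographic signature a real content credential would carry. Any
# change to the content invalidates the tag, which is the property that
# lets a verifier detect tampering or missing provenance.
import hashlib
import hmac

SIGNING_KEY = b"publisher-secret-key"  # hypothetical key held by the creator

def attach_credential(content: bytes) -> bytes:
    """Compute a tag binding the content to its origin at publish time."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).digest()

def verify_credential(content: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(SIGNING_KEY, content, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

photo = b"...image bytes..."
tag = attach_credential(photo)
print(verify_credential(photo, tag))            # True: content intact
print(verify_credential(photo + b"edit", tag))  # False: provenance broken
```

Content that arrives without a valid credential, including AI-generated fakes, simply fails the check, which is the signal such standards are designed to surface.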

Noah Praetz, president of The Elections Group, consults with state, local and federal officials involved in election administration. As scary as AI might seem, he believes public servants will get the greatest ROI from putting their brainpower to work running great elections.

That said, he cautions, they can’t be silent. “They have to put their own record out there. They have to be out in the community and be extra transparent to compete with the craziness.”

All politics is local, says Marek Posard, a RAND sociologist. He can imagine a sort of renaissance of bottom-up politics if AI concerns push more members of the public to verify the things they hear with local officials, poll workers or neighbors. “Nobody’s accusing the League of Women Voters of stealing elections,” he says.

Posard agrees that, in some ways, AI is scary. “But there might also be opportunities that we don’t want to lose sight of.”

Republished with permission from Governing Magazine, by Carl Smith

Governing Magazine

Governing: The Future of States and Localities takes on the question of what state and local government looks like in a world of rapidly advancing technology. Governing is a resource for elected and appointed officials and other public leaders who are looking for smart insights and a forum to better understand and manage through this era of change.
