Artificial intelligence is increasingly blurring the line between falsehood and truth.

The ability to generate any image in seconds is a powerful tool with significant potential for harm. 

Take the recent situation in Mexico as an example.

Nemesio Rubén Oseguera Cervantes, known as “El Mencho,” the head of the Jalisco New Generation Cartel and one of Mexico’s most violent criminal figures, was reportedly killed by authorities on Feb. 22. 

Following the reported killing, cartel operatives rampaged across 20 Mexican states, according to news reports, causing widespread disruption in opposition to the government. Major outlets reported that gunmen had infiltrated the Guadalajara airport, and an image of an airplane ablaze on the tarmac spread rapidly on social media.  

Chaos appeared to follow. Dozens of passengers, residents and tourists alike, reportedly fled through airport doors and took cover behind desks.

Days later, however, reports emerged that the images and some accounts weren’t just false news, but had been generated using AI. 

AI poses a threat to the foundational truths and democratic values on which society relies, yet navigating that threat is becoming increasingly difficult.

Thousands of news outlets with varying biases already exist, and social media has become a primary way people consume information. 

That creates a problem: social media accounts can publish content without verification, and advances in AI compound the issue by pairing false claims with fabricated images and videos. 

Although figures vary by study, research suggests false information can make up a significant share of online content, much of it circulating on social media platforms. 

One of the most significant challenges AI poses is in American elections.

To understand what AI means for the future of U.S. politics, it helps to look at how political communication has evolved. 

In my first semester at Lehigh, I became interested in this subject through now-retired professor Luke Lule’s Media and Society class, where we examined the evolution of politics in the media. 

Newspapers were the earliest form of mass media, with many publications aligned with political parties and offering partisan coverage. 

Radio followed, allowing candidates such as Franklin D. Roosevelt to connect directly with the public through “fireside chats” from 1933 to 1944, building support by informing and reassuring listeners during the Great Depression and World War II. 

The rise of television in the 1950s brought major shifts in campaigning. As candidates appeared on screen, personality and physical presentation became more important.

Television also expanded the role of entertainment in American culture. As attention spans shortened, the average televised clip of a candidate reportedly shrank from about 40 seconds in the 1960s to about seven seconds by the 2000s.

Today, social media is a primary campaign tool, enabling constant, direct communications between candidates and voters. Viral messaging has helped turn candidates into public figures whose visibility can rival that of celebrities.

But the rapid spread of information also accelerates misinformation, a problem intensified by AI that can distort voters’ perception of reality. 

Anyone can now create convincing content depicting candidates saying or doing things that never occurred. That capability can be used to influence public opinion and raises national security concerns. 

During the 2024 election cycle, multiple incidents involving AI-generated content were reported. 

Robocalls using an AI-generated voice resembling former President Joe Biden urged primary voters in New Hampshire not to vote. Russian operatives were also reported to have created AI-generated deepfakes of former Vice President Kamala Harris making false, inflammatory remarks. 

Now that it’s widely accessible, AI is likely to remain a fixture in political campaigns, with the potential to erode trust in democratic institutions. 

If falsehood and truth haven’t already become indistinguishable, they may soon.