Dirty tricks are a staple of election season, including the use of misleading visuals now easily created with artificial intelligence. The Texas Legislature missed an opportunity this year to require the disclosure of altered images in election ads, including the use of AI. That policymaking failure is coming back to haunt candidates this election cycle.
A Republican candidate in Texas Senate District 9, Leigh Wambsganss, said a website published a manipulated photo of her wearing an upside-down cross necklace, a symbol widely associated with Satanism. Lt. Gov. Dan Patrick, who supports Wambsganss, quickly accused John Huffman, another GOP candidate in the North Texas race, of using “evil campaign tactics.” Huffman’s campaign consultant denied any involvement. A website linked to a mysterious political action committee attacking Wambsganss prominently features a photo of the candidate wearing a cross necklace, but our newspaper could not independently verify whether the site had previously displayed an image of Wambsganss with an inverted cross.
Patrick is right to object to the use of a fake image to deceive voters. However, he killed a bill earlier this year that would have required political ads to disclose the use of manipulated visuals. Labeling images as altered or created with AI should be the bare minimum of disclosure. This isn’t censorship, as some conservative hardliners have argued.
House Bill 366, authored by former House Speaker Dade Phelan, R-Beaumont, focused on the use of deceptive messaging and AI “with the intent to influence an election.” Phelan himself was targeted in a misinformation campaign in 2023 when he faced a contentious primary election.
The image at the center of this controversy has clearly been manipulated, stylized to look weathered, perhaps with photo-editing software. Disinformation will only get more sophisticated and deceptive as AI-manipulated visuals become disturbingly realistic. It’s only fair that voters understand who is behind these images in the context of an election. The fast evolution of AI technology means that the photos that land in people’s mailboxes or the videos that circulate online might not be real.
OpenAI, the company behind ChatGPT, recently unveiled Sora, an app that instantly creates realistic-looking videos. Although the app adds watermarks to help trace its videos, the clips can still easily be used for disinformation. And other apps do similar things.
Texas already has a law that criminalizes the use of deepfake videos in politics with the intent to influence the outcome of an election. Senate Bill 751 passed in 2019 with an overwhelming majority. The law, however, doesn’t cover photos.
Phelan’s bill to require the disclosure of AI use in election ads languished in the Texas Senate after easily clearing the House. Federal efforts so far haven’t gained much traction.
Disinformation is as old as politics, but the ubiquity of AI technology can now propel dirty campaign tricks to new levels.
Requiring the labeling of manipulated images is a worthwhile effort. At the end of the day, we are playing catch-up with fast-evolving technology that can make our politics even more toxic.
We welcome your thoughts in a letter to the editor. See the guidelines and submit your letter here.
If you have problems with the form, you can submit via email at letters@dallasnews.com.