Two lawmakers introduced a bipartisan proposal Tuesday to crack down on fraudsters’ increasing use of artificial intelligence, seeking to expand penalties for AI-assisted scams and criminalize AI impersonation of federal officials.

The AI Fraud Deterrence Act, proposed by Reps. Ted Lieu, D-Calif., and Neal Dunn, R-Fla., would update criminal definitions and penalties for fraud to account for the rise of AI.

“As AI technology advances at a rapid pace, our laws must keep up,” Dunn said in a statement announcing the bill.

“The AI Fraud Deterrence Act strengthens penalties for crimes related to fraud committed with the help of AI. I am proud to co-lead this legislation to protect the identities of the public and prevent misuse of this innovative technology,” Dunn said.

“The majority of American people want sensible guardrails on AI,” Lieu told NBC News last week. “They don’t think a complete Wild West is helpful.”

The proposed law would double the maximum penalty for defrauding financial institutions from $1 million to $2 million when AI is knowingly used as part of the crime.

The bill would also explicitly include AI-mediated deception in the definitions of both mail fraud and wire fraud, the latter better known for covering fraud involving “radio or television communication in interstate or foreign commerce.” That change would make explicit that individuals who use AI to commit either type of fraud can be charged.

Both crimes would be punishable by fines of up to $1 million, with prison terms of up to 20 years for mail fraud and up to 30 years for wire fraud.

The draft also criminalizes the impersonation of federal officials with AI deepfakes, citing attempts earlier this year to use AI to mimic White House chief of staff Susie Wiles and Secretary of State Marco Rubio.

While fraud has existed for millennia, experts say AI could exacerbate it by easing access to fraud-making tools and increasing the quality of fraudulent outputs.

People who, before AI, would not have expended the effort required to commit fraud can now simply enter a few phrases into image- or video-generation software to produce a fraudulent image or document.

By using AI, fraudsters can also create higher-quality faked media or documents than the often sloppy, obviously doctored products of manual efforts.

In December, the FBI warned that “generative AI reduces the time and effort criminals must expend to deceive their targets.” The alert further cautioned that AI “can correct for human errors that might otherwise serve as warning signs for fraud.”

As reported by The New York Times, expense- and reimbursement-management companies like Expensify, AppZen, and SAP’s Concur all implemented tools to screen for fraudulent, AI-generated receipts earlier this year.

AppZen said that roughly 14% of all fraudulent documents submitted in September were generated by AI, up from none a year earlier.

Maura R. Grossman, a research professor of computer science at the University of Waterloo, in Ontario, and a lawyer, told NBC News that AI enables a new era of deception: “AI presents a scale, a scope, and a speed for fraud that is very, very different from frauds in the past.”

Many observers worry that existing institutions, like the courts, cannot keep up with AI’s rapid development. “AI years are dog years,” said Hany Farid, professor of computer science at the University of California, Berkeley, and co-founder of GetReal Security, a leading digital-media authentication company, referring to the speed of AI progress.

Whereas AI-generated images could once be identified by telltale extra hands or feet, artifacts of earlier, cruder image-generation models, today’s models are far more accurate.

The FBI’s warning in December urged individuals to search for discrepancies in images and videos to identify AI-generated media: “Look for subtle imperfections in images and videos, such as distorted hands or feet.”

But to Farid, this 11-month-old advice is wrong and even harmful. “The multiple hands trick, that’s not true anymore,” Farid said. “You can’t look for hands or feet. None of that stuff works.”

Emphasizing the importance of labeling AI-generated content, Lieu and Dunn’s proposed bill clarifies that there is a time and place for AI-generated media.

Tuesday’s draft includes a carveout for AI in satire or other acts protected by the First Amendment, “provided such content includes clear disclosure that it is not authentic.”