Only half joking, Jennie DeSerio described herself as the definition of a helicopter parent.

A stay-at-home mom, she made her own organic baby food, joined the PTA, attended every sporting event and, when the time came for smart devices, implemented every child control she could find.

But her oversight, she said, was no match for the AI-driven algorithm she believes drove her son to suicide.

Over a 13-day spiral in the first half of November 2022, the TikTok content recommended to her son, Mason Edens, morphed from encouraging messages about depression to depictions of suicide.

Mason, still reeling from a breakup, was “inundated” by videos of suicidal ideation, DeSerio said. Not because he had searched for it, she said, but because the artificial intelligence had learned something about him.

On Nov. 14, Mason died by suicide in his Arkansas home, using a gun, just as depicted in the hundreds of videos he had seen.

As a plaintiff in one of several high-profile lawsuits against social media giants, DeSerio is convinced that her experience tells a much broader story about the unique threat AI poses to young people.

“It is sending our children not what they are looking for, but what they can’t look away from,” DeSerio told the Deseret News.

This strategy extends beyond social media powered by AI to the rapidly evolving world of AI models that accumulate personal information to mesmerize and manipulate, according to DeSerio.

Since 2022, DeSerio, who was born and raised in Ogden, Utah, has crisscrossed the country, urging local and congressional lawmakers to regulate AI companies despite overwhelming industry opposition.

It is sending our children not what they are looking for, but what they can’t look away from.

—  Jennie DeSerio

That mission has brought her back to the Beehive State where legislators are currently debating the balance between AI policy and parental rights. But she thinks this frame misses the real picture.

“There is not a parent in America that can outsmart an algorithm,” DeSerio said. “Parents do not stand a chance.”

As the legislative session comes to a close, lawmakers are advancing child protections for AI. But DeSerio worries the most impactful bill has already died at the hands of the nation’s most powerful AI bureaucrat.

Trump administration opposes Utah bill

On Feb. 12, the White House Office of Intergovernmental Affairs sent top lawmakers a one-line memo declaring the administration’s all-out opposition to HB286, sponsored by Rep. Doug Fiefia, R-Herriman.

The bill would require AI firms behind the latest “frontier models” to publish internal risk assessments, to post child protection plans on their websites and to report safety incidents to Utah’s AI office.

Unlike Utah’s prior policies, which offered liability protection for AI companies, HB286 would establish civil penalties of up to $3 million and would provide whistleblower protections for those who report safety concerns.

“We are categorically opposed to Utah HB286 and view it as an unfixable bill that goes against the administration’s AI agenda,” said the letter, viewed by the Deseret News, which was first reported by Axios.

The memo did not include further explanation. President Donald Trump had signed an order in December challenging state AI laws after failing to include an AI moratorium in legislation over the summer and fall.

The order empowered the Department of Justice to preempt state policies that would slow AI innovation by creating an unworkable patchwork of laws. However, it included carve-outs for “child safety protections.”

The White House’s February memo sparked the ire of child protection activists, including Melissa McKay, president of the Utah-based Digital Childhood Institute, who pointed the finger at Trump’s AI czar, David Sacks.

“I don’t know why an unelected bureaucrat from Silicon Valley like David Sacks is telling us how to protect our own families in Utah,” McKay told the Deseret News.

Last week, DeSerio sent a letter to Utah Gov. Spencer Cox, Senate President Stuart Adams, House Speaker Mike Schultz and Senate Majority Leader Kirk Cullimore asking them to reconsider HB286.

The letter, which was signed by eight other parents of children who were harmed or died after extended use of AI-driven platforms, called HB286 “one of the most important AI safety bills in the country.”

“We are writing because we have learned, at devastating cost, what happens when bills like this die,” it said. “When we look at what is happening with AI, and at who is trying to stop HB286, we are watching the same deadly cycle begin again.”

On Monday, Schultz told the Deseret News HB286 would likely not receive a floor vote because the bill “went well beyond protecting kids,” straying into “troubling” regulatory territory that lawmakers need more time to work through.

HB286 departed from Utah’s approach to AI policy by shaping a widely applicable technology instead of responding to a specific product application, said Cullimore, who sponsored most of the state’s prior AI bills, on Tuesday.

Who is responsible to monitor AI — parents or the government?

On Tuesday, Sutherland Institute convened some of Utah’s most influential voices on AI child safety for a panel discussion on whether state lawmakers should consider policies like HB286, or whether AI policy should be left to parents.

Sutherland also unveiled a new survey that found 80% of Utah voters thought it was primarily the responsibility of parents to monitor a child’s AI usage, compared to 8% who said tech companies and 3% who said state government.

A majority of voters also said it was the parents’ job to prevent access to harmful AI content. But one-third said this burden should fall on tech companies and 42% said tech companies should be responsible for protecting child data privacy.

I don’t trust that these technology companies … are ever going to come to the table unless they’re forced to. It’s going to take some regulation from government. It’s going to take parents speaking out.

—  Aimee Winder Newton, director of the Utah Office of Families

Voters also support some form of regulation: 87% said there should be “some” (40%) or “heavy” (47%) regulation of AI businesses. A similar share preferred regulation of AI applications in science, health care, education and government.

Aimee Winder Newton, who serves as the director of the Utah Office of Families, said the same principles behind Utah’s bold war against Big Tech, through social media age verification and lawsuits, should apply to Utah’s treatment of AI.

“I don’t trust that these technology companies … are ever going to come to the table unless they’re forced to,” Winder Newton said. “It’s going to take some regulation from government. It’s going to take parents speaking out.”

Utah Department of Commerce Director Margaret Busse, who oversees the office of AI policy, said the state’s position is that AI algorithms are not protected free speech, they are product features that ought to be regulated.

Busse echoed Cullimore in saying the state’s role is to regulate how AI is deployed, from a consumer protection standpoint, not how the underlying technology is developed. AI needs guardrails, she said, so it can be trusted instead of feared.

But hyperfocusing on AI, simply because it is a disruptive technology, is a mistake, according to Chris Koopman, the CEO of the Utah-based Abundance Institute.

Too often, conversations around AI treat the technology as a totally separate product from television or streaming services, which, Koopman pointed out, regulators leave alone as the purview of parental preference.

“Does it empower or replace parental decision making?” Koopman asked. “Parents are going to have to be the bad guy. We’re going to have to figure out as a society how do we operate with an entire generation of digital natives.”

But policies like HB286 don’t replace parents or hamper innovation; they require transparency from some of the world’s largest companies, which have been caught prioritizing engagement over child health, according to DeSerio.

“I don’t understand how anybody can say this is a bad parent, bad kid issue,” DeSerio said. “That’s just a scapegoat for the lack of regulation. And as long as that narrative keeps being fed, children are going to continue to die.”

Utah Legislature still considering AI regulations

Utah legislative leaders have backed away from HB286 after it became the clearest example yet of the White House targeting state-level AI policy. However, the Legislature is on track to pass several significant AI regulations this session.

Leadership is backing another bill from Fiefia, HB438, which requires AI chatbots to receive consent before using the data of minor users, to disclose when their content is an advertisement and to report to the AI policy office annually.

The bill also prohibits chatbots from generating responses that encourage suicidal ideation, self-harm, harm to others or illegal activity. If the user expresses suicidal thoughts, the chatbot must provide a referral to crisis services.

The bill passed the House last week and advanced through a Senate committee on Monday. Fiefia’s HB408, which passed the House Monday and a Senate committee Tuesday, requires social media platforms to let users transfer personal data to other platforms.

SB256, sponsored by Cullimore, R-Sandy, would build on laws passed last year that established rules for mental health chatbots, expanding prohibitions on AI abuse of personal identity and establishing AI disclosure requirements for businesses.

The bill would clarify that Utah’s defamation law applies to any AI-generated image, or “deep fake,” that is created without the subject’s permission. The bill would expand the definition of abuse of personal identity to include AI representations.

Senators and a House committee have advanced another bill that would make Utah only the second state in the country to create a new tax on targeted online advertising, which makes up nearly all of the revenue of internet behemoths like Facebook and Google.

SB287, sponsored by Sen. Mike McKell, R-Spanish Fork, would apply the state’s 4.7% sales tax to companies that derive at least $1 million from targeted advertising in Utah, and $100 million overall, if advertising is at least 50% of revenues.

These policies mark a promising step in the right direction, according to DeSerio. But without HB286, she said, they leave AI companies that interact daily with more and more children without a common-sense transparency framework.

A poll conducted in January by Public Opinion Strategies, and obtained by the Deseret News, found that more than 90% of Utah voters supported every component of HB286, with around 80% signaling strong support, DeSerio noted.

After years of recounting her loss to members of Congress and state legislators, DeSerio has grown used to policymakers siding with industry giants even as an increasing number of parents feel like they are “in a losing battle” with AI platforms.

“We haven’t even got social media under control, and now we’re introducing AI to our children,” DeSerio said. “So now here I am, just over three years later, and I cry every single day still, and my son should still be here.”