So far, only a fraction of Massachusetts school districts have policies addressing deepfakes or mandating punishments for students who create or spread them.
Last November, 14-year-old Grace Mancini was on her way to English class at Hingham Middle School when a group of girls warned her that an eighth-grade boy had made a fake AI-generated naked image of her.
The teen who created the image shared it with at least two friends. One saw it over a Snapchat call, took a screenshot, and admitted to showing it to other peers in the hallways at school, according to an investigation conducted by the school and shared with the Globe.
Grace Mancini told her mom, Megan Mancini, who said she reported the incident to the police and filed a sexual harassment, or Title IX, complaint with the school. The teen boy admitted to creating the image in a text message to Grace Mancini and confessed the same to school officials, Megan Mancini said.
Still, the boy received no formal punishment from the school district. The investigation concluded he had not violated the school’s policy because there was “insufficient evidence” that the images were shared with “other students at locations, events, or circumstances over which the school exercised substantial control.”
Since the boy did not violate the sexual harassment policy, “the school is precluded from taking disciplinary action against him,” Barbara Cataldo, interim director of student services, wrote to Megan Mancini in an email.
“I don’t understand because people get in trouble for drinking in the woods. They get suspended for that or can’t play sports,” Megan Mancini said. “This happened during school hours in the school hallway, and yet the school says it’s not under their jurisdiction.”
Experts studying the spread of deepfakes said the school’s response was all too common. School districts often fail to treat AI-generated sexually explicit images as child pornography, they said. Massachusetts was one of the last states to make it illegal to share nonconsensual deepfake images.
“Unless a parent goes on a warpath, these things really go unnoticed, or [school officials] say there’s nothing to be done here,” said Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence who studies sexually explicit deepfakes.
Some parents, lawyers, and researchers are pushing for schools to hold assemblies to explain to students the legal, disciplinary, and social implications of sharing or creating deepfakes.
Last fall, a group of eighth-grade boys used AI to create and spread fake naked images of three 13- and 14-year-old girls at Mountain View Middle School in Goffstown, N.H.
Krystal Labranche, the mother of one of the girls, expressed frustration that the school didn’t talk to students about deepfakes. Instead, she wrote on Facebook, officials remained “quiet” and pretended “it didn’t happen.”
Three days after the post, Goffstown Superintendent Brian Balke sent an email to parents warning that information circulating on social media about “a recent incident” at Mountain View “may be incomplete or not entirely accurate.” Balke declined to comment for this story.
For Labranche’s daughter, the incident was debilitating.
“My daughter is a social butterfly. She loved school, every aspect of it,” Labranche said in an interview. “But at least once a week now, she doesn’t want to go.”
After pushback, the school changed its policy to explicitly prohibit “creating, distributing and presenting” deepfakes and outlined disciplinary actions if the rules were violated.
Schools are twice as likely to have adopted policies and procedures addressing nonconsensual images after a deepfake of someone associated with the school has been shared, according to a 2024 report by the Center for Democracy and Technology.
In Massachusetts, AI-generated sexual harassment is mentioned in nine of 113 school district policies posted on the website of the Massachusetts Association of School Committees. Only five of those policies say disciplinary action will be taken against students who use AI to create harmful images of others.
In Medford, Chloe Dorcellus, 14, said her McGlynn Middle School classmate was the victim of a deepfake last spring when an eighth-grade boy created a sexually explicit image of her. School officials did not talk to the student body about what happened, Dorcellus said, a response experts say is common.
A spokesperson for the Medford Public Schools told the Globe that they were unaware of the alleged incident. A few months later, Medford Public Schools adopted an AI policy that said “abusive, harmful, or disrespectful conduct through AI platforms is unacceptable.”
In Hingham, the school district had a policy on AI, but it only mentioned plagiarism, not sexual harassment.
Across the country, less than one-quarter of teachers said their school had policies for how to address deepfake images, according to a recent report by the Center for Democracy and Technology.
Clients of TNG Consulting, a firm that helps schools comply with civil rights laws, often ask how to handle deepfakes only after an incident has occurred, said Mikiba Morehead, a senior supervising consultant at the firm.
Websites that generate deepfakes are rapidly growing. Hundreds are now available, making it easier for teens to find them online, even when mobile app stores ban the programs, according to the social network analysis company Graphika. Users simply upload a photo of someone’s face to generate the image.
When one site is taken down, another pops right back up, according to Tyler Williams, vice president of intelligence at Graphika. “It’s a little bit like playing whack-a-mole,” he said.
One nudify company alone saw upward of 5 million visitors on its sites in January, according to Graphika.
“This is affecting millions of kids right now,” said Elizabeth Laird, director of equity in civic technology at the Center for Democracy and Technology.
In the last school year, 15 percent of students reported seeing sexually explicit deepfakes of someone associated with their school, research by the center found.
Schools across the country have not focused on “preventing this from happening in the first place,” nor are they “providing more support for the victims,” Laird said.
At Hingham Middle School, Megan Mancini said the school offered to have her daughter see a school counselor and invited the district attorney to train students on the risks of misusing AI.
The mother expressed frustration because, she said, “they never talked about consequences for creating deepfakes.” The district emailed parents when kids left offensive comments on bathroom walls but sent out nothing after the deepfakes, she said.
“I didn’t feel supported,” Grace Mancini said. “Their reaction made it seem like it didn’t matter. And this does matter.”
The district has a policy that prohibits using AI to cheat but “no changes around deepfakes specifically have been made to our policies,” school committee secretary Alyson Anderson told Mancini in an email.
A study by Stanford University last year on deepfakes found “that most schools are not currently addressing the risks of nudify apps with students.”
For the report, researcher Riana Pfefferkorn studied deepfake incidents that took place in four public school districts across the country and conducted more than 52 interviews with educators, law enforcement, and state officials.
Out of the 113 Massachusetts school district policies reviewed by the Globe, Haverhill Public Schools was the only one that explicitly prohibited AI-generated deepfakes.
Haverhill crafted the rule a year ago to “stay on top of” the technology before an incident actually took place, said Superintendent Margaret Marotta.
The policy says any inappropriate use of the technology “will result in disciplinary action,” according to the student handbook. But it does not specify penalties or include a plan for providing specific training to students on AI and sexual harassment, said Doug Russell, director of IT at Haverhill Public Schools.
In Wellesley Public Schools, students study AI and sexual harassment in a required social and emotional learning class in which children work to manage their feelings, said Wellesley Superintendent David Lucier.
Some school leaders say they need more direction.
When state education officials put out guidelines last August for the responsible use of AI in classrooms, they offered no specific guidance on the technology and sexual harassment, according to the Department of Elementary and Secondary Education.
The Massachusetts Association of School Committees, which develops model policies for districts, has not issued recommendations for how to handle the issue.
There is no time to waste, experts said. Attorney Naomi Shatz of Boston said she has already handled several AI-related deepfake cases involving students over the last few years.
“Education is the best way to address it,” Shatz said. “Because you want to be preventing these situations, rather than reacting to them when they happen.”
Mariana Simões can be reached at mariana.simoes@globe.com. Follow her on X @MariRebuaSimoes.