HARRISBURG, Pa. —
Pennsylvania’s Senate on Tuesday passed legislation that would add new protections for minors using artificial intelligence chatbots, requiring those systems to disclose that they are not human and to implement safeguards against suicide, harm to oneself or others, and sexually explicit content.
The legislation passed 49 to 1 in the Pennsylvania Senate, sending it to the state House for consideration.
Senate Bill 1090, authored by state Sen. Tracy Pennycuick, R-Montgomery, Berks counties, would require artificial intelligence chatbots that could otherwise be mistaken for real people to have disclosures identifying that users are interacting with AI and not a person.
Additionally, if a chatbot is interacting with a minor, it would require:
Disclosure that a child or teen is not interacting with a human.
Notifications every three hours of use that the chatbot is not a human, along with a suggestion that the user take a break from using it.
Safeguards against producing sexually explicit conduct or material, or instructions regarding that type of behavior.
“I’m hearing parents say, actually, I’m really concerned that these chatbots are becoming a substitute for my child interacting in a healthy relationship with their peers, with their siblings, with their parents, with their grandparents. So there’s, I think, more concern than the positives behind it,” Pennycuick said in an interview with WGAL.
Supporters of the bill pointed to cases across the country in which parents have alleged self-harm and even suicide may have been connected to interactions with AI chatbots.
Charles Palmer, an associate professor of interactive media at Harrisburg University, said many chatbots do not check whether the user is a minor or an adult, presenting a challenge.
“This has been one of the biggest problems that I’ve had with the entire platform over the last two years, and the fact that there aren’t really good safeguards in place for this,” he said.
“We think about individuals who would very easily hand an app on their iPad off to a young child to be entertained. But we now have these devices that you can hand them off, and you have no idea the types of conversations or communications that are happening.”
Palmer said many chatbots will discourage illegal activity, but he questioned whether enough is being done to protect people who may be vulnerable.
“Maybe I’m in the midst of a mental health crisis, and I’m not sure of how I’m responding to different stimuli, and maybe I mentioned about ending my life, and the bot does not tell me not to. It maybe encourages me to dig into those feelings a bit more,” he said.
With the federal government eyeing a framework for AI regulation and policy that may seek to override state laws, Pennycuick said she is unconcerned and that Pennsylvania should move forward on protecting children and teens.
“We have always put Pennsylvania first. I’m not going to wait for the federal government to get on board,” she said.