Bill to Limit Youth Chatbot Usage Gains Support In The Senate
‘The bill means well, but the people behind it just do not understand how kids use the internet nowadays’
By Evan Symon, February 6, 2025 2:26 pm
A new bill that would require companies operating chatbots to frequently remind users under the age of 18 that they are not talking to a real person gained support this week.
Senate Bill 243, authored by Senator Steve Padilla (D-Chula Vista), would require AI companies and similar sites and online platforms to make chatbots, which are programs designed to imitate human conversation, safer for minor users. Specifically, the bill would require operators to avoid the addictive engagement patterns that keep users hooked on chatbots.
Young users would be reminded periodically that chatbots are AI-generated and not human. In addition, SB 243 would require a disclosure statement warning children and parents that chatbots might not be suitable for minors. Finally, the bill would require annual reporting on how chatbot usage can affect the mental health of frequent users.
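To make the reminder requirement concrete, here is a minimal sketch of a chat loop that restates an AI disclosure to a minor every few exchanges. It is illustrative only: the bill does not prescribe an implementation, and the interval, wording, and the generate_reply() helper are all hypothetical.

```python
# Illustrative sketch only: SB 243 does not prescribe an implementation.
# The 5-message interval, disclosure wording, and generate_reply() helper
# are all hypothetical stand-ins.

DISCLOSURE = "Reminder: you are chatting with an AI, not a real person."
REMINDER_INTERVAL = 5  # hypothetical: restate the notice every 5 user turns

def generate_reply(message: str) -> str:
    """Stand-in for whatever model or service produces the bot's reply."""
    return f"(bot reply to: {message})"

def chat_session(user_is_minor: bool) -> None:
    turns = 0
    while True:
        message = input("You: ")
        if not message:  # an empty line ends the session
            break
        turns += 1
        print("Bot:", generate_reply(message))
        # Periodically restate the AI disclosure for minor users.
        if user_is_minor and turns % REMINDER_INTERVAL == 0:
            print("Bot:", DISCLOSURE)

if __name__ == "__main__":
    chat_session(user_is_minor=True)
```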
Senator Padilla wrote the bill in response to the growing number of youths using social chatbots and the lack of safeguards around youth usage. Padilla said that chatbot usage without guardrails can be dangerous and cited multiple examples in a statement last week, shortly after introducing SB 243 in the Senate.
“There have been many troubling examples of how AI chatbots’ interactions with children can be dangerous,” said Senator Padilla in a statement. “In 2021, when a 10-year-old girl asked an AI bot for a ‘fun challenge to do,’ it instructed her to ‘plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs.’ In 2023, researchers posing as a 13-year-old girl were given instructions on how to lie to her parents to go on a trip with a 31-year-old man and lose her virginity to him. These interactions may seem trivial, but research conducted at the University of Cambridge shows that children are more likely to view AI chatbots as quasi-human and thus trust them more than adults. Thus, when dialogue between children and chatbots goes wrong, the consequences can be dire.
“In Florida, a 14-year-old child ended his life after forming a romantic, sexual, and emotional relationship with a chatbot. Social chatbots are marketed as companions that are helpful to people who are lonely or depressed. However, when 14-year-old Sewell Setzer communicated to his AI companion that he was struggling, the bot was unable to respond with empathy or the resources necessary to ensure Setzer received the help that he needed. Setzer’s mother has initiated legal action against the company that created the chatbot, claiming that not only did the company use addictive design features and inappropriate subject matter to lure in her son, but that the bot encouraged him to ‘come home’ just seconds before he ended his life. This is yet another horrifying example of how AI developers risk the safety of their users, especially minors, without the proper safeguards in place.”
Senator Padilla then noted that SB 243 would install multiple guardrails and provide some regulation to help prevent incidents like these from happening again.
“Our children are not lab rats for tech companies to experiment on at the cost of their mental health,” added Padilla. “We need common sense protections for chatbot users to prevent developers from employing strategies that they know to be addictive and predatory.”
Child welfare advocates and researchers studying the links between AI chatbots and youth health became the first supporters of SB 243 late last month when it was introduced in the Senate.
“We have growing reasons to be concerned about the risks that relational chatbots pose to the health of minors,” said Dr. Jodi Halpern, a UC Berkeley Professor of Bioethics. “We would never allow minors to be exposed to products that could harm them without safety testing and guardrails. This is the first bill we are aware of nationally to take an important first step toward creating those guardrails through safety monitoring. We commend Senator Padilla for bringing multiple stakeholders to the table to proactively address this emerging issue.”
This week, a growing number of Senators and Assembly members are backing the bill as well, preparing for a long fight this year. While no opposition has emerged yet, AI companies and sites that use chatbots are expected to fight the bill. Already, some industry analysts are calling the bill ultimately pointless simply because of how people actually use the internet and social media.
“This bill is going to require age verification and agreeing to new terms for permission,” Gus Stevenson, an advisor for tech companies on youth usage, told the Globe Thursday. “But you know what that entails? A popup for agreeing to the terms of usage and a box verifying that the user is over the age of 18. Almost no one reads those terms before using a site, and kids always click the box saying they are 18 or older. Did the people who wrote this ever meet a kid before?
“Now, if the chatbot reminder were for everyone, it would be noticeable, but all it would take is an annoyed click to get rid of that message. A much better solution would be requiring all companies to display a permanent warning in the chatbox that the chatbot is not human, so users would always see it. That would be a lot more effective. But if we can’t even get junk phone calls to tell us whether the person we are talking to is real or not, what hope is there here?
“There have been some ways of breaking AI. Like with drive-thrus. If you don’t like the fact that you are giving an order to an AI bot, just say you want 10,000 Big Macs or something, and they have to switch you over to a human. McDonald’s had to stop its AI drive-thru program last year partially because so many people were doing this. People stuck talking to an automated or AI voice on the phone have also found a quick way to reach a human operator by saying multiple incomprehensible things.
“With chatbots online and wanting to warn kids? You need permanent messages on there. Or, you know, limit screen time or block AI sites on their laptop or phone or tablet. The bill means well, but the people behind it just do not understand how kids use the internet nowadays.”
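Stevenson’s distinction between a dismissible popup and a permanently visible notice is straightforward to sketch. The snippet below contrasts the two rendering strategies for a plain-text chat window; the banner text, window size, and function names are invented for illustration and are not drawn from the bill or from any real chatbot product.

```python
# Sketch contrasting a dismissible, one-time notice with a pinned one.
# The wording, window size, and render functions are invented for
# illustration; neither SB 243 nor Stevenson specifies an implementation.

BANNER = "*** You are talking to an AI chatbot, not a human. ***"
WINDOW = 6  # hypothetical: how many recent messages fit on screen

def render_one_time(history: list[str]) -> str:
    """Popup-style: the notice was only the first message, so once the
    window fills up it scrolls out of view."""
    transcript = [BANNER] + history
    return "\n".join(transcript[-WINDOW:])

def render_pinned(history: list[str]) -> str:
    """Pinned-style: the notice is redrawn above the window on every
    refresh, so it stays visible no matter how long the chat runs."""
    return "\n".join([BANNER] + history[-WINDOW:])

history = [f"message {i}" for i in range(1, 21)]
print("--- one-time notice (already scrolled away) ---")
print(render_one_time(history))
print("--- pinned notice (always visible) ---")
print(render_pinned(history))
```

The pinned version never lets the disclosure scroll out of the visible window, which is the property Stevenson argues a dismissible popup or one-time reminder lacks.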