Pennsylvania Governor Josh Shapiro has announced a push for enhanced regulations on artificial intelligence chatbots, citing concerns about their potential to mislead users, particularly children. This initiative aims to position Pennsylvania alongside other states that are implementing measures to safeguard young users as AI technology becomes increasingly prevalent.
During his recent budget address, Shapiro emphasized the rapid evolution of AI, stating, “We need to act quickly to protect our kids.” His call to action follows a 2022 survey by the nonprofit Common Sense Media, which found that one in three U.S. teenagers use chatbots for social interaction and relationship-building, often seeking conversation practice, emotional support, and even romantic connections.
Shapiro underscored the risks of unregulated chatbot interactions, highlighting that children may not fully grasp the distinction between AI and human beings. He referenced a troubling lawsuit against Character.AI, a chatbot platform with ties to Google, alleging that the service contributed to mental health crises among young users, including the suicide of a teenager who had developed a relationship with a chatbot.
In response, the governor proposed several measures, including mandatory age verification, parental consent requirements, and bans on chatbots generating sexually explicit or violent content for minors. He also advocated for protocols that direct users who express self-harm or violent thoughts to appropriate support services, along with recurring reminders that they are not conversing with a human.
Enforcement of these proposed regulations raises important questions. Hoda Heidari, a professor of ethics and computational technologies at Carnegie Mellon University, pointed out the complexities involved in implementing such measures. “The devil is in the details,” she remarked, noting that while the goals are commendable, the feasibility of achieving them needs thorough exploration.
The topic of age verification has gained traction among regulators, despite warnings from security experts regarding its challenges and potential privacy concerns. Heidari explained that online age verification methods, such as “age gates” requiring users to input their birthdates, are easily circumvented, as individuals can simply provide false information.
Ensuring chatbot compliance with content restrictions, particularly those related to violence or sexual exploitation, poses another significant challenge. Heidari indicated that while AI companies are actively working on blocking the creation of harmful content, current safeguards can be bypassed. “Think of all the ways in which you can prompt a chatbot to generate the same kind of content you have in mind,” she noted.
In addition to the proposed regulations, Shapiro has urged lawmakers to draft legislation that establishes “age-appropriate standards” for chatbot use. A bipartisan bill currently under consideration in the Pennsylvania state Senate aims to implement safeguards against content that promotes self-harm, suicide, or violence. It also mandates that users be directed to crisis resources when high-risk language is detected.
The effectiveness of these protective measures and the potential penalties for companies that violate them remain uncertain. Heidari cautioned that enforcement could prove difficult, but said this should not deter regulators from pursuing meaningful rules. She pointed to a broader “Swiss cheese model” of AI risk management, in which multiple imperfect layers of protection collectively enhance user safety even though each layer has its vulnerabilities.
As artificial intelligence evolves, the rapid growth of unregulated AI tools has drawn comparisons to a gold rush. The lack of a unified federal policy further complicates the regulatory environment, particularly as the Trump administration has discouraged state-level regulations it deems overly burdensome.
Currently, states like California and New York are leading the charge in AI legislation, establishing frameworks aimed at improving transparency and accountability in AI technologies. As Pennsylvania formulates its regulations, Heidari noted that the resulting patchwork of state laws could create confusion for AI companies operating across different jurisdictions.
The trajectory of AI regulation in the U.S. may increasingly be influenced by larger states, as companies seek to avoid the complexity of adhering to varying state laws. Heidari remarked that companies are likely to adopt regulations from states like California and New York, which set the tone for national standards.
Under Shapiro’s renewed push for regulation, Pennsylvania has the potential to emerge as a significant player in the ongoing dialogue surrounding AI governance. Heidari commended the Shapiro administration for engaging with stakeholders and experts in crafting these regulations, emphasizing the importance of thoughtful policy-making that transcends mere political posturing.
