
Washington Democrats Want To Regulate Artificial Intelligence

  • Writer: Hannah Krieg
  • 4 days ago
  • 2 min read

Alright, Big Tech. You’ve had your libertarian fantasy with Artificial Intelligence (AI) for the past few years — clogging our feeds with deepfakes, letting chatbots send lonely people into psychosis, and automating discrimination in the workplace. This legislative session, Washington Democrats are more serious than ever about setting some ground rules on the anything-goes tech playground.


No, The Chatbot Doesn’t Love You


At the request of Gov. Bob Ferguson, the state legislature is considering a bill (HB 2225) to try to stop chatbots from nudging people, particularly kids, toward the ledge.


Chatbots, designed to mimic real social connection, have formed pseudo-intimate relationships with their users while offering some really shoddy mental health support. In some high-profile and deeply troubling cases, parents accuse these bots of encouraging their children to die by suicide.


HB 2225 attempts to keep these conversations between AI and people more grounded. The bill requires AI systems to notify users that the chatbot is artificially generated and not an actual human being at the beginning of every chat session and at least once every three hours of continuous interaction. Side note: If you’re talking to a chatbot for three hours at a time, you may have needed a reality check even sooner!


The bill adds extra provisions for when a system knows it’s interacting with a minor. Companies must “implement reasonable measures” to prevent chatbots from sexting minors or generating sexually explicit content for them. They also have to prohibit “manipulative engagement techniques.” It’s vague on purpose — which gives victims and their families plenty of room to argue their case when companies screw this up.


Additionally, AI companies would have to set up protocols for responding when a user expresses suicidal ideation.


Deepfake Not So Deep


House Bill 1170, which was first introduced last session, would require big AI companies to help solve the problem they created: the proliferation of hard-to-clock AI content. AI companies with more than 1 million users would have to provide AI detection tools at no cost to users. 


This is made more powerful by the bill’s requirement that companies include a “latent disclosure,” meaning a hidden, embedded identification, within their generated content. That disclosure must be detectable by the company’s AI detection tool.


Additionally, companies have to offer users the option to include a “manifest disclosure” within the AI generated content, such as a watermark. 


PLEASE Don’t Reinforce Systemic Racism, Tech Brethren

House Bill 2157 attempts to protect people from AI-driven discrimination in high-stakes situations: think hiring decisions and the doctor’s office. It basically gives Washingtonians grounds to sue companies that do not take “reasonable care” to protect them from “known or reasonably foreseeable risks of algorithmic discrimination.”


This opens up companies to liability, which could slow down innovation — or the reckless experimentation with people’s real, actual lives. Whatever you want to call it!

