Congress · In Committee · 3 months ago

House Bill Would Require AI Chatbots to Disclose Non-Human Status, Show Crisis Resources to Minors

Also known as: SAFE BOTs Act

Legislative Progress

Filed → Review → House → Senate → President

Key Points

  • Chatbot companies would have to clearly tell minors that the chatbot is AI, not a real person, at the start of the first chat and if asked.
  • If a minor brings up suicide or suicidal thoughts, the chatbot would have to show suicide and crisis hotline resources during that chat.
  • Chatbots could not claim they are a licensed professional (like a therapist or doctor) unless that is actually true.
  • Companies would need policies that suggest a break after 3 hours of continuous chatting and that address harmful sexual content, gambling, and illegal drugs, tobacco, and alcohol for minors.
  • The Federal Trade Commission and state attorneys general could enforce these rules. The rules would take effect 1 year after the bill becomes law, and states could not set different rules on these same topics.
Consumer Protection · Artificial Intelligence · Technology · Civil Rights

Milestones

2 milestones · 2 actions

Dec 5, 2025 · House

Referred to the House Committee on Energy and Commerce.

Dec 5, 2025

Introduced in House

What Happens Next

Projected impacts based on AI analysis

Soon after the Act is enacted

Chatbot providers begin redesigning youth experiences to add required disclosures and safety triggers

Over the months after enactment, apps may update screens and chat behavior for users under 17, even before the legal deadline, to avoid FTC risk

1 year after the Act is enacted (effective date)

Required AI identity disclosure appears at a minor’s first chatbot interaction and when asked “are you AI?”

Minors should see a clear message that the chatbot is not a real person, reducing the chance they believe they’re talking to a human

1 year after the Act is enacted (effective date)

Crisis hotline resources must appear when a minor raises suicide or suicidal thoughts

Teens in crisis are more likely to see an immediate path to real help while chatting

1 year after the Act is enacted (effective date)

3-hour continuous-use break prompts become required for covered minors

Teens who get “stuck” in long chats would be nudged to pause, which may reduce overnight or marathon use

1 year after the Act is enacted (effective date)

Providers must have policies for harmful sexual material, gambling, and illegal drugs/tobacco/alcohol for covered minors

Youth-focused chatbots may block or redirect certain conversations more often, which could make chats feel more restricted but safer

After the 1-year effective date

Federal Trade Commission begins active enforcement of the new youth-chatbot rules

Companies that ignore the rules could face investigations and penalties; users may see faster fixes after complaints

After enactment, once the study is set up

HHS/NIH launches the 4-year study on chatbots and minors’ mental health

Families may not see immediate changes, but results could shape future safety guidance and product design

No later than 4 years after the Act is enacted

HHS/NIH report to Congress on study results and recommendations is delivered

The public may learn clearer risks and benefits, which could lead to new rules or school/health guidance later


Source Information

Document Type

Congressional Bill

Official Title

SAFE BOTs Act

Bill Number: HR 6489
Congress: 119th Congress
Chamber: House of Representatives
Latest Action: Referred to the House Committee on Energy and Commerce.

Sponsor

Analysis generated by AI. While we strive for accuracy, this should not be considered legal or professional advice. Always verify information with official government sources.