Congress · In Committee · 4 months ago

Senate Committee Reviews GUARD Act to Force Age Checks, Disclosures on AI Chatbots

Also known as: GUARD Act

Legislative Progress

Filed → Review → Senate → House → President

Impacts

Negative Impacts (2)

  • Small Business Owner: Hurts
  • Gig Worker: Hurts

Mixed Impacts (7)

  • Child Tax Credit: Neutral
  • Mental Health: Neutral
  • Disability Benefits: Neutral
  • Student: Neutral
  • Chronic Illness: Neutral
  • Pregnant: Neutral
  • ACA Marketplace: Neutral

Key Points

  • AI chatbot services would have to make users create accounts and verify age with stronger proof than just entering a birthday.
  • Existing accounts would be frozen until the user completes age verification; accounts would also be re-checked from time to time.
  • If a user is under 18, the company must block them from using “AI companions” built to simulate friendship, companionship, or therapy-like chat.
  • Chatbots would have to clearly say they are not human at the start of each chat and every 30 minutes, and they can’t claim to be a professional like a therapist or doctor.
  • Companies could face fines up to $100,000 per violation, and the U.S. Attorney General or state attorneys general could enforce the rules.
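The disclosure requirement above is concrete enough to sketch in code. The sketch below is purely illustrative: the bill does not prescribe any implementation, and every name here (`ChatSession`, `DISCLOSURE_TEXT`, the injectable clock) is an assumption made for the example.

```python
import time

# Illustrative only: one way a chatbot service might track the bill's
# "not a human" disclosure (at the start of each chat and every 30 minutes).
DISCLOSURE_INTERVAL_SECONDS = 30 * 60
DISCLOSURE_TEXT = "Reminder: you are chatting with an AI, not a human."

class ChatSession:
    def __init__(self, now=time.monotonic):
        self._now = now              # injectable clock, so the logic is testable
        self._last_disclosure = None # None means no disclosure shown yet

    def pending_disclosure(self):
        """Return the disclosure text if one is due, else None."""
        t = self._now()
        if (self._last_disclosure is None
                or t - self._last_disclosure >= DISCLOSURE_INTERVAL_SECONDS):
            self._last_disclosure = t
            return DISCLOSURE_TEXT
        return None
```

In practice a service would call `pending_disclosure()` before emitting each chatbot reply and prepend the text when it is non-None, which covers both the start-of-chat notice and the 30-minute repeats with one check.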
Artificial Intelligence · Data Privacy · Consumer Protection · Criminal Justice · Civil Rights

Milestones

2 milestones · 2 actions

Oct 28, 2025 · Senate

Read twice and referred to the Committee on the Judiciary.

Oct 28, 2025

Introduced in Senate

What Happens Next

Projected impacts based on AI analysis

Right after the President signs the bill

Chatbot companies start a 180-day countdown to comply after the bill becomes law

Apps may begin updating sign-up screens, adding age checks, and changing how the chatbot talks to users so they don’t get hit with penalties later

On the law’s effective date (180 days after enactment)

Existing chatbot accounts get frozen until the user completes age verification

People may suddenly lose access to their chatbot history or features until they verify age; families may need to help teens understand why access changed

Starting on the effective date

New chatbot users must create an account and verify age during sign-up

Using a chatbot may feel more like opening a bank account than trying a website; some users will stop because they don’t want to share age data

Starting on the effective date

Minors are blocked from “AI companion” chatbots after age is verified

Teens who used companion-style bots for friendship or comfort may be locked out and may switch to other apps or seek human support instead

Starting on the effective date

Chatbots must repeatedly disclose they are not human and cannot provide professional services

Users will see regular reminders (at the start of chats and at intervals) that reduce deception and discourage taking medical/legal/financial advice from a bot

After the effective date, on a schedule set by each company and any Justice Department rules

Periodic re-checks of previously verified users begin

Even after you verify once, you may be asked again later, which can be annoying but is meant to stop kids from using an adult’s verified account
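A minimal sketch of what such a re-check policy could look like. The 180-day cadence is an assumption for illustration only: the bill leaves the actual schedule to each company and any future Justice Department rules.

```python
from datetime import datetime, timedelta

# Hypothetical company policy, not something the bill specifies:
# re-verify a user's age if the last verification is older than this.
REVERIFY_AFTER = timedelta(days=180)

def needs_reverification(last_verified: datetime, now: datetime) -> bool:
    """True if the user's age verification has gone stale."""
    return now - last_verified >= REVERIFY_AFTER
```

A service could run this check at login and route stale accounts back through the same verification flow used at sign-up.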

After the bill becomes law; timing depends on when rules are issued

The Justice Department issues rules and starts enforcement actions

Companies that do not comply could face court orders, required fixes, restitution, and civil penalties; users may notice sudden product changes if a service is investigated

After the effective date, once violations are identified

States’ attorneys general begin bringing cases under the law

Enforcement could vary by state priorities; companies might adopt one strict national standard to avoid different lawsuits


Source Information

Document Type

Congressional Bill

Official Title

A bill to require artificial intelligence chatbots to implement age verification measures and make certain disclosures, and for other purposes.

Bill Number: S. 3062
Congress: 119th Congress
Chamber: Senate
Latest Action: Read twice and referred to the Committee on the Judiciary.

Sponsor

Cosponsors (12)

D: 8 · R: 4

Analysis generated by AI. While we strive for accuracy, this should not be considered legal or professional advice. Always verify information with official government sources.