Protecting Consumers from Deceptive AI Act
Rep. Foushee Introduces Bipartisan Bill to Require Labels on AI-Generated Images and Videos
This bill is in the early stages of the legislative process and has been referred to two House committees, which will review its details before deciding whether it advances. No votes are currently scheduled.
Legislative Progress
While the bill has bipartisan support and public concern about deepfakes is high, major tech bills often face long delays and pushback from industry groups.
Key Points
Impact Analysis
Personal Impact
Small businesses that develop or distribute generative AI tools would need to comply with new labeling requirements, including embedding machine-readable disclosures and tamper-resistant watermarks in AI-generated content. This adds compliance costs, but the self-regulatory safe harbor provision could ease the burden. Small businesses that use AI content for marketing would benefit from clearer standards, though they may face new responsibilities to maintain AI labels on content they share.
“A person who makes available to users a software application based on generative artificial intelligence technology shall-- (A) ensure that audio or visual content created or substantially modified by such application incorporates (as part of such content and in a manner that may or may not be perceptible by unaided human senses) a disclosure”
Milestones
Referred to the Committee on Energy and Commerce, and in addition to the Committee on Science, Space, and Technology, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned.
Sent to a congressional committee for expert review. The committee decides whether this bill moves forward.
Introduced in House
The bill was officially filed and given a number. It now enters the legislative queue.
Votes
No votes have been recorded for this legislation yet.
Related News (6 articles)
House lawmakers introduce deepfake bill to require AI content labeling
A trio of US House lawmakers introduced the Protecting Consumers from Deceptive AI Act on April 24, 2026. The bill mandates that AI-generated images, video, and audio contain machine-readable disclosures and requires NIST to establish technical standards for watermarking and metadata.
Protecting Consumers From Deceptive AI Act – H.R. 8479
Introduced by Rep. Valerie Foushee (D-NC), the bill would establish technical standards and guidelines for generative AI content and ensure that the use of this technology is disclosed when it is used to create or modify audio and visual content.
Deepfakes Cross the Threshold: New Legislation and Market Impacts
Representatives Foushee, Beyer, and Moylan introduced the bipartisan Protecting Consumers From Deceptive AI Act on April 24, 2026. The bill establishes accountability standards for generative AI, requiring clear disclosure when the technology is used to create or modify media.
Source Information
Document Type
Congressional Bill
Official Title
Protecting Consumers from Deceptive AI Act
Data Sources
Sponsor
Cosponsors (2)
Analysis generated by AI. Always verify with official sources.