Content Origin Protection and Integrity from Edited and Deepfaked Media Act of 2025
Senators Propose New Labels for AI Deepfakes and Protections for Artists Against Unauthorized AI Training
Legislative Progress
Stalled: no legislative action in over 90 days.
Key Points
- This bill requires companies that make artificial intelligence tools to include 'digital fingerprints' or labels on the content those tools generate. These labels help people identify whether a photo, video, or piece of text was made by a human or generated by AI.
- Social media sites and other large internet platforms would be banned from removing these digital labels. This ensures that even when a deepfake is shared across different websites, the information about its AI origins stays attached to it.
- The policy protects artists, writers, and journalists by making it illegal for AI companies to use their work to train AI systems without their permission. If a creator includes a digital label on their work saying it cannot be used for training, AI companies must respect those terms and pay for the use if required.
- To help people spot fakes, the government will work with private companies to create official standards for watermarking and detection. There will also be a public education campaign to teach Americans how to recognize deepfakes and understand how AI-generated media works.
- If a company removes these digital labels or uses an artist's work without consent, they could face lawsuits from the creators or fines from the Federal Trade Commission. Most of these new requirements would go into effect two years after the bill is officially signed into law.
Impact Analysis
Personal Impact
Small businesses that develop or sell AI tools for creating images, videos, or text would need to build content provenance labeling into their products within two years. This creates new compliance costs and technical hurdles, especially for startups. On the other hand, small creative businesses and independent studios gain new protections against having their work used without permission to train AI systems.
Milestones
Read twice and referred to the Committee on Commerce, Science, and Transportation.
Sent to a congressional committee for expert review. The committee decides whether this bill moves forward.
Introduced in Senate
The bill was officially filed and given a number. It now enters the legislative queue.
Votes
No votes have been recorded for this legislation yet.
Related News
April 2025 US Tech Policy Roundup
Covers the reintroduction of the COPIED Act of 2025 (S. 1396) by Sen. Maria Cantwell. The bill requires transparency for AI content provenance and establishes protections for artistic content, ensuring creators can control and be compensated for the use of their work in AI training.
New Bill Proposes More Protection for Original Content From AI Companies
Explains the COPIED Act's goal to combat deepfakes and IP theft. It details how the bill requires AI tool makers to allow content owners to attach 'provenance information' and makes it illegal for AI developers to scrape or train on labeled content without permission.
How the 'COPIED Act' could make it unlawful to train AI using copyrighted material without permission
Analyzes the COPIED Act's impact on the music and creative industries. The article highlights the bill's mechanism that would effectively make it illegal to use copyrighted material for AI training if the owner has attached provenance information and opted out.
Source Information
Document Type
Congressional Bill
Official Title
Content Origin Protection and Integrity from Edited and Deepfaked Media Act of 2025
Data Sources
Sponsor
Sen. Maria Cantwell (D-WA)
Cosponsors (2)
Analysis generated by AI. Always verify with official sources.