Quick Answer
Consumer-protection regulators (the FTC, CFPB, and state attorneys general in the US; the European Commission's DG JUST; the UK CMA; and India's Central Consumer Protection Authority) now enforce against deceptive and unfair AI practices under pre-existing statutes such as FTC Act Section 5, the UK Consumer Protection from Unfair Trading Regulations, and India's Consumer Protection Act 2019.
- FTC's Operation AI Comply (September 2024) sweep included DoNotPay, Ascend Ecom, Rytr
- California's CCPA Automated Decision-Making Technology (ADMT) regulations take effect in 2026
- EU Unfair Commercial Practices Directive and DSA cover AI-powered dark patterns
What Is AI Consumer Protection Law?
AI consumer protection law is the enforcement of existing UDAP (unfair, deceptive, or abusive acts or practices) statutes and new AI-specific rules to protect consumers from deceptive AI claims, algorithmic manipulation, fake reviews, AI-powered scams, and opaque automated decisions.
Key Details / Requirements
US Federal Consumer Protection Authorities

| Authority | AI Focus |
| --- | --- |
| FTC | Section 5 UDAP, Endorsement Guides, COPPA |
| CFPB | Algorithmic credit, fair lending |
| SEC | AI washing, predictive analytics |
| DOJ | Price-fixing algorithms, deceptive practices |
| State AGs | Mini-FTC acts, CA ADMT regulations |
Selected FTC AI Enforcement Actions

| Target | Year | Issue |
| --- | --- | --- |
| DoNotPay | 2024 | "AI lawyer" claims (USD 193K settlement) |
| Rite Aid | 2023 | Facial recognition in stores |
| Rytr | 2024 | AI-generated fake reviews |
| Ascend Ecom | 2024 | AI business-opportunity scheme |
| Amazon (Alexa) | 2023 | COPPA violations in voice AI |
| CRI Genetics | 2023 | Deceptive "DNA + AI" marketing |
Global AI Consumer Protection Rules

| Jurisdiction | Rule |
| --- | --- |
| EU | Unfair Commercial Practices Directive, DSA, AI Act Art. 50 |
| UK | Consumer Protection from Unfair Trading Regs, DMCC Act 2024 |
| India | Consumer Protection Act 2019, CCPA guidelines on misleading ads |
| Australia | ACL Sec 18 (misleading conduct), ACCC digital platforms inquiry |
| Japan | Act against Unjustifiable Premiums and Misleading Representations |
| Singapore | Consumer Protection (Fair Trading) Act |
Real-World Examples / Case Studies
DoNotPay (FTC, 2024) — Settled for USD 193,000 over claims that its AI could "replace a lawyer"; the FTC alleged the company never tested whether the service actually performed at the level of a human attorney.
Rite Aid (FTC, 2023) — Banned from using facial recognition technology in stores for five years.
Rytr (FTC, 2024) — Ordered to stop providing services that facilitate mass-produced fake consumer reviews.
Amazon Alexa (FTC, 2023) — USD 25M settlement over indefinite retention of children's voice recordings in violation of COPPA.
Facebook (now Meta, FTC Consent Order) — In 2023 the FTC proposed expanding the 2019 consent order with new restrictions on how Meta may use data, including in AI products and services involving minors.
What This Means for Businesses
In 2026, companies marketing AI products must:
- Avoid "AI washing" — do not claim AI capabilities you cannot deliver
- Substantiate all AI performance claims (FTC's "advertising substantiation" doctrine)
- Disclose AI-generated content, including fake reviews and endorsements
- Build dark-pattern-free consent flows for AI features
- Comply with CCPA ADMT regulations effective 2026
- Implement child-safety measures (COPPA, UK Children's Code, India DPDP Act)
Compliance Checklist
- Audit all AI-related marketing claims against FTC's substantiation standard
- Flag AI-generated reviews and endorsements per FTC Endorsement Guides
- Implement ADA- and COPPA-compliant AI consumer flows
- Provide clear, prominent AI disclosures (the FTC's "clear and conspicuous" standard)
- Test for dark patterns using EDPB, FTC, OECD, and ICPEN taxonomies
- Build a consumer complaint handling SLA
- For California: prepare for CCPA ADMT compliance
FAQs
Q: What is AI washing?
Overstating AI capabilities in marketing; the SEC targeted it in 2024 settlements with two investment advisers, Delphia and Global Predictions.
Q: Does the FTC require AI disclosures?
Yes — under Section 5 and Endorsement Guides updated in 2023.
Q: What is CCPA ADMT?
The California Privacy Protection Agency's Automated Decision-Making Technology regulations under the CCPA, adopted in 2025 and effective in 2026.
Q: Are AI-generated reviews legal?
Not if they mislead consumers; the FTC's 2024 Rule on the Use of Consumer Reviews and Testimonials prohibits fake reviews, including AI-generated ones that misrepresent a reviewer's experience.
Q: Can chatbots impersonate humans?
Not without clear disclosure in most jurisdictions. California's B.O.T. Act has required disclosure since 2019.
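For teams building conversational products, the disclosure duty above can be enforced mechanically in code. A minimal, hypothetical sketch follows; the `BOT_DISCLOSURE` wording, function name, and session-tracking approach are illustrative assumptions, not language from any statute:

```python
# Hypothetical sketch: prepend a clear bot disclosure to the first
# reply of every chatbot session, in the spirit of disclosure rules
# like California's B.O.T. Act. Wording is illustrative only.

BOT_DISCLOSURE = "You are chatting with an automated AI assistant, not a human."

def deliver_reply(model_reply: str, disclosed_sessions: set, session_id: str) -> str:
    """Return the model's reply, prefixing the disclosure once per session."""
    if session_id not in disclosed_sessions:
        disclosed_sessions.add(session_id)
        return f"{BOT_DISCLOSURE}\n\n{model_reply}"
    return model_reply
```

Handling the disclosure server-side, rather than relying on the model to mention it, keeps the obligation auditable regardless of what the model generates.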
Q: Do children require special protection?
Yes — COPPA (US), UK Children's Code, EU GDPR Art. 8, India DPDP Act Sec 9.
Q: Are dark patterns illegal?
Increasingly yes — EU DSA Art. 25, FTC enforcement, and India CCPA guidelines all prohibit them.
Conclusion
AI consumer protection is where regulators enforce first because statutes are already on the books. Claim carefully, disclose fully.
Launch FTC-safe AI marketing with Misar AI's substantiation playbook.