K.G.M. v. Meta and Google: The Los Angeles Verdict That Ended 'Move Fast and Break Things' for Algorithmic Design
K.G.M. v. Meta Platforms, Inc. and Google LLC
Key Issue
Product liability for addictive algorithmic design — Section 230 bypassed
Key Takeaways for Practitioners
March 25, 2026: A Los Angeles jury finds Meta and Google negligent and liable for psychological harm to a minor caused by addictive algorithmic design — a platform liability verdict practitioners are already comparing to the first tobacco cases.
Section 230 bypass: the defective design theory treats the algorithm as a manufactured product, not hosted content — Section 230 protects publishers, not manufacturers.
The jury found fraud and malice — not just negligence — based on internal documents showing Meta and Google knew of addiction risks and prioritized engagement over safety.
Liability apportioned 70/30 between Meta and Google on a $6 million award — symbolic against companies valued in the hundreds of billions, but the precedent affects 10,000+ cases in MDL 3047.
The internal documents are the case: testimony from Mark Zuckerberg and Adam Mosseri showed corporate knowledge of harm to minors even as the companies pursued early user acquisition strategies.
The Kids Online Safety Act (KOSA) and analogous state laws now have a jury verdict to point to as evidence that the legislative findings are factually supported.
A Verdict Comparable to the First Tobacco Cases
On March 25, 2026, a Los Angeles jury returned a verdict that tort lawyers, platform policy advocates, and tech industry counsel will study for decades. In K.G.M. v. Meta Platforms, Inc. and Google LLC, the jury found both companies negligent, fraudulent, and malicious in causing serious psychological harm to a minor user through the addictive design features of Instagram and YouTube.
The comparison to the first tobacco verdicts is not rhetorical. Like those cases, K.G.M. involves:
- Internal corporate documents proving companies knew their product caused harm
- A design that was specifically engineered to create compulsive use
- A population of particularly vulnerable users — minors — whom companies deliberately targeted
- Years of industry denial followed by a single verdict that opens the floodgates
There are approximately 10,000 cases in MDL 3047, the federal multidistrict litigation coordinating social media harm claims against Meta, Google, TikTok, and others. K.G.M. is the first bellwether trial. Its outcome will define settlement values and litigation strategy for every case in that pipeline.
Facts: Kaley G.M.'s Story
Kaley G.M. was introduced to YouTube at an early age — before she was cognitively capable of evaluating the platform's design incentives. By the time she reached adolescence, she was a regular Instagram user. Over several years of algorithmic content delivery, she developed clinical depression, body dysmorphia, and self-harm tendencies.
The plaintiffs alleged — and the jury found — that these outcomes were not accidental. They were foreseeable products of specific design choices:
Infinite scroll: Eliminates natural stopping points, extending session length beyond what users would choose if given regular pause opportunities.
Push notifications: Designed to interrupt and re-engage users at intervals calibrated to maximize daily active use, not user wellbeing.
Beauty filters: Systematically present users with algorithmically enhanced images, creating comparison anxiety particularly acute in adolescent females.
Engagement optimization: The core algorithmic objective — maximizing time-on-platform — systematically surfaces emotionally activating content, including content that triggers anxiety, body image concerns, and social comparison.
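To make the engagement-optimization design choice concrete, here is a minimal, purely hypothetical sketch of a feed ranker whose only objective is predicted engagement. The field names, scores, and function are invented for illustration — they are not drawn from any actual Meta or Google implementation.

```python
# Hypothetical sketch of an engagement-maximizing ranker of the kind
# described above. All names and values are illustrative.
posts = [
    {"id": "a", "ts": 100, "engagement": 0.12},
    {"id": "b", "ts": 200, "engagement": 0.87},  # emotionally activating content often scores highest
    {"id": "c", "ts": 300, "engagement": 0.45},
]

def engagement_feed(posts):
    # Rank purely by model-predicted engagement, ignoring recency
    # and user wellbeing -- the design objective at issue in K.G.M.
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

print([p["id"] for p in engagement_feed(posts)])  # ['b', 'c', 'a']
```

The point of the sketch is that the ranking objective is a deliberate engineering decision, separate from any individual piece of user content.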
ISSUE: Does Section 230 Bar Product Liability Claims Against Platform Algorithms?
Rule
47 U.S.C. § 230(c)(1) bars treating platforms as "the publisher or speaker of any information provided by another information content provider." The statute has been interpreted broadly to immunize platforms from liability for third-party content.
The critical doctrinal question: Does § 230 protect the platform's algorithmic design choices — how it selects and sequences content — or only the underlying user-generated content itself?
The Design Defect Theory
Lemmon v. Snap, Inc. (9th Cir. 2021) established the framework the K.G.M. plaintiffs applied: Section 230 does not immunize platforms from product liability claims based on the design of the product itself, as distinct from the content the product delivers.
The theory has three components:
1. The algorithm is a product. Meta and Google designed, manufactured, and deployed recommendation systems that are independent of any specific piece of user content. These systems make choices — what to show, when to show it, how long to keep showing it — that are proprietary engineering decisions.
2. The product is defectively designed. A product is defectively designed when its foreseeable risks could have been reduced by a reasonable alternative design without substantially impairing utility. Chronological feeds, session time limits, and content diversity requirements are all technically feasible alternative designs. Meta and Google chose engagement-maximization designs over safety-protective designs.
3. Section 230 does not apply. Section 230 protects publishers from liability for content. It does not protect manufacturers from liability for product design. The algorithm is the product, not the content. The harm arises from the design, not from any specific third-party post.
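The "reasonable alternative design" prong is easy to illustrate: a chronological feed with a hard session cap is technically trivial to build. The sketch below is hypothetical, not drawn from any platform's code; its simplicity is the argument.

```python
# Hypothetical alternative design: newest-first ordering with a hard
# per-session cap. Both changes are technically trivial, which is the
# point of the "reasonable alternative design" prong.
def chronological_feed(posts, session_limit=2):
    newest_first = sorted(posts, key=lambda p: p["ts"], reverse=True)
    return newest_first[:session_limit]

posts = [
    {"id": "a", "ts": 100},
    {"id": "b", "ts": 200},
    {"id": "c", "ts": 300},
]
print([p["id"] for p in chronological_feed(posts)])  # ['c', 'b']
```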
The Fraud and Malice Finding
The jury went beyond negligence. It found fraud and malice — findings that require evidence of intentional or reckless disregard for known harm.
The evidentiary basis was the internal documents: emails, presentations, and research studies showing that Meta and Google had internal evidence that their platforms caused psychological harm to adolescent users, particularly girls, and made corporate decisions to suppress or minimize that research while expanding features designed for early user acquisition.
Mark Zuckerberg's and Adam Mosseri's testimony about corporate knowledge of these harms was central to the fraud finding. The jury's message: this was not an accident or an oversight. It was a choice.
The Damages Question: $6 Million Against Billion-Dollar Companies
The jury awarded $6 million in damages — a figure that is simultaneously a landmark verdict and a traffic ticket for companies with market capitalizations in the hundreds of billions.
The damages calculation matters in two ways:
Precedent for the MDL: Even a fraction of the $6 million K.G.M. award, applied across 10,000 cases, adds up to billions of dollars in aggregate exposure. Meta and Google will move aggressively to settle the MDL before additional bellwether trials establish a higher per-case value.
Punitive damages are still available: The fraud and malice findings open the door to punitive damages in subsequent cases — and punitive damages in California can be substantial multiples of compensatory awards. The K.G.M. verdict did not include punitives; future cases may.
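The arithmetic behind the damages discussion, worked out explicitly (the exposure bands are illustrative scenarios, not a prediction of MDL settlement values):

```python
# The 70/30 apportionment of the $6M award, computed with exact integers.
verdict = 6_000_000
meta_share = verdict * 70 // 100    # Meta's share: $4.2M
google_share = verdict * 30 // 100  # Google's share: $1.8M
print(meta_share, google_share)     # 4200000 1800000

# Illustrative aggregate-exposure bands for MDL 3047 at various
# per-case fractions of the K.G.M. award. Not a settlement forecast.
mdl_cases = 10_000
for fraction in (0.1, 0.5, 1.0):
    exposure = mdl_cases * verdict * fraction
    print(f"{fraction:.0%} of K.G.M. per case -> ${exposure / 1e9:.0f}B aggregate")
```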
What K.G.M. Means for Platform Counsel and AI Practitioners
The Section 230 design defect bypass is now trial-tested. The Lemmon theory — Section 230 doesn't protect algorithmic design, only content — has now survived a full trial and produced a verdict. Expect it to be applied broadly in other platform liability contexts, including AI-generated content cases.
Internal documents are the existential risk. The fraud and malice findings in K.G.M. came from internal corporate documents showing knowledge of harm. Any AI or platform company with internal research on user harm should treat that research as litigation-ready material.
KOSA and state kids' safety laws now have jury support. The Kids Online Safety Act's legislative findings — that platforms cause measurable psychological harm to minors — are now backed by a jury verdict rendered under the preponderance-of-the-evidence standard. Expect accelerated legislative momentum at both the federal and state levels.
MDL 3047 settlement pressure is now acute. With a bellwether verdict establishing liability, fraud, and malice, the settlement calculus for 10,000 cases has shifted dramatically. Platform counsel should be reassessing MDL exposure immediately.
AI recommendation systems are next. The K.G.M. framework applies wherever algorithmic design creates foreseeable harm — social media, AI companion apps, content recommendation systems, gaming. The design defect theory is not limited to Instagram and YouTube.
Legal Citation
K.G.M. v. Meta Platforms, Inc. & Google LLC, MDL No. 3047 (C.D. Cal. Mar. 25, 2026) (jury verdict)
This analysis is based on publicly available court records and reported trial proceedings. It does not constitute legal advice.