On March 25, 2026, a Los Angeles jury found Meta and YouTube negligent in the design of their platforms, awarding $6 million in compensatory and punitive damages to the plaintiff in K.G.M. v. Meta Platforms, Inc. The verdict came one day after a New Mexico jury ordered Meta to pay $375 million in a separate case for violating consumer protection laws by deceiving users about the safety of its platforms.
The plaintiff in the California case, a twenty-year-old identified in court as Kaley, grew up in Chico, far from the affluent enclaves of Silicon Valley. Her parents divorced when she was three; her father was abusive. Kaley started using YouTube at age six on an iPod Touch and Instagram at age nine on a hand-me-down phone that already had the app installed — bypassing her mother’s download restrictions. By elementary school she had posted 284 videos on YouTube. At her peak, she spent sixteen hours in a single day on Instagram.
What struck me reading about Kaley was how ordinary her experience sounds. Most of us have felt the pull of the scroll at our lowest moments, reaching for the feed precisely when we are least equipped to put it down. But for those facing the most difficult times — family instability, trauma, poverty — the scroll is not a momentary comfort. It is the only refuge available, making such users particularly vulnerable to addiction. K.G.M. is rightly celebrated as a breakthrough in product liability law. But the trial also exposed something the legal commentary has largely missed: the harms of addictive platform design are structured by class.
Who Bears the Cost?
As Tamara Piety once observed on this blog, addictive user behavior is good for business. In K.G.M., the first bellwether in over 1,600 consolidated cases, a jury found for the first time that it can also be the basis for liability. Most legal commentary has focused on the novel strategy employed in the case: by casting social media apps as defective products and targeting design features — infinite scroll, autoplay, variable-reward algorithms — rather than user content, the plaintiffs successfully sidestepped Section 230.
Yet the trial also revealed more. Meta's own internal research, Project MYST, found that parental controls and household rules had almost no effect on teens' compulsive use, and that children who had experienced adverse life events were the most susceptible to addiction. Meta never published those findings.
Instead, Meta’s lawyers spent weeks blaming Kaley’s family, arguing that her depression and anxiety stemmed from a “turbulent childhood” rather than platform design. Kaley’s mother had implemented blocking software, set download passwords, and repeatedly confiscated her daughter’s phone. None of it worked. As plaintiff attorney Mark Lanier told the jury, “the moment Kaley was locked into the machine, her mom was locked out.” The jury sided with the plaintiff on every question, finding that Meta and YouTube acted with malice, oppression, or fraud.
Kaley’s story is not anomalous. And much has been written about societal issues related to digital addiction. Zephyr Teachout has rightly warned that protecting children from addictive feeds is a democratic concern. Evelyn Atkinson has recovered the common-law tradition of imposing duties on companies whose products cause emotional harm. But what remains frustratingly unexamined in legal analysis is the distributional question: not just who uses these platforms most, but who is most vulnerable to their design. Social scientists have been flagging the issue for years. In what Candice Odgers has called the “new digital divide,” the harms of addictive platform design fall hardest on children who are already the most vulnerable — whether through family instability, trauma, economic disadvantage, or (as is often the case) all three.
Common Sense Media’s surveys show that lower-income tweens spend roughly three hours more per day on entertainment screen media than higher-income peers. Among children under eight, lower-income children’s screen time is more than double that of their more affluent peers. When lower-income families do get online, roughly 28 percent are smartphone dependent (compared to 4 percent of the highest-income adults), funneling users toward exactly the apps at issue in K.G.M. It is thus little wonder that lower-income parents are 50 percent more likely than upper-income parents to be “extremely or very worried” about their children’s mental health. Perhaps most worryingly, the 2026 World Happiness Report, drawing on 330,000 adolescents across 43 countries, confirmed that lower-SES adolescents bear the greatest costs of compulsive digital behaviors.
The Political Economy of Screen Time
None of this surprises me. Before law school, I taught at a public high school in New York City for nearly a decade. Around vacations, I noticed something that many others have observed: my most vulnerable students would often dread long breaks from school. While vacations meant summer camp, travel, and enrichment for their more privileged classmates, for others, vacation meant hours of unstructured time in front of screens — not necessarily because their parents were negligent, but because screens were often the best option in a landscape stripped of alternatives.
This was not the “digital divide” I had learned about while training as a teacher, which framed lack of access to technology as the central equity issue in education. In New York at least, that issue had been largely resolved: the city has done a decent job of getting laptops to most students and expanding free Wi-Fi access. But once we got laptops into the hands of students, another structural issue revealed itself: the kids with the fewest offline alternatives were now the most exposed to platforms engineered to keep them scrolling.
What looks like a parenting problem is a political economy problem. The screen-time gap tracks the defunding of afterschool programs (low-income participation fell from 4.6 million to 2.7 million between 2014 and 2020); nonstandard work schedules among lower-income parents; and an enrichment spending gap in which higher-income families spend roughly five times more on out-of-school activities. According to research by Lurie Children’s Hospital, one in four parents used screens because they could not afford childcare. An internal YouTube document presented at trial described the platform as a “short-term digital babysitter” while parents cook, clean, or do laundry — language with unmistakable class resonance.
The policy framework for addressing these harms, built almost entirely on parental responsibility, is regressive by design. While tech executives protect their own children from the products that made them rich — Steve Jobs said his children had not used the iPad, Bill Gates withheld smartphones until age fourteen, Peter Thiel limits his children’s screen time to an hour and a half per week — this is largely because they can afford alternatives. Many families cannot.
I do not think these companies should be expected to regulate themselves. Like firms in any concentrated market, they will maximize engagement within whatever boundaries society sets. The question is whether we will set those boundaries with attention to who bears the costs. As my co-authors and I have argued, the K.G.M. litigation is one mechanism, and public health regulation is another.
But the point I want to make here is that public duties are incomplete without public alternatives. An adequate response must pair platform-side obligations with investment in the public commons: affordable childcare, afterschool capacity, stable work schedules, and safe public spaces. Regulating the product is necessary. Building the world in which children do not need it is equally so.