I.
Today let’s talk about an important lawsuit against the platforms that begins this week, a related new investigation by the European Commission, and why the infinite-scroll apps that dominate the lives of so many teenagers might soon be a thing of the past. (For kids, at least.)
The old way of thinking about how to make social platforms safer was that you had to make them do more content moderation. Hire more people, take down more posts, put warning labels on others. Suspend people who posted hate speech or incitements to violence, or who led insurrections against their own governments.
At the insistence of lawmakers around the world, social platforms did all of this and more. But in the end they had satisfied almost no one. To the left, these new measures hadn’t gone nearly far enough. To the right, they represented an intolerable infringement of their freedom of expression.
Earlier in their existence, the social platforms had experimented with having principles of their own, rooted in expert opinion about the promotion of human rights. But this had proven costly, inviting lawsuits from governments around the world, and dangerous, as more authoritarian governments realized they could force the social networks to appoint local representatives and then throw those representatives into prison when human rights conflicted with the government’s objectives.
And so, as 2025 dawned, the platforms adjusted course. Except where required by law, they would no longer seek to build new and more effective forms of content moderation. And in the United States, human rights principles would take a backseat to the question that increasingly dominated policy questions inside tech companies: what does the Trump administration want us to do?
What this approach lacked in moral virtue it made up for in effectiveness. During the 2024 campaign, President Trump threatened to throw Meta CEO Mark Zuckerberg in prison; by mid-2025, Trump was championing Meta’s interests around the world. TikTok should have shut down in the United States after ByteDance failed to divest it by the deadline set by Congress; instead, Trump granted the company a series of unconstitutional delays via executive order, buying the time he needed to transfer its ownership to his allies. Google donated $1 million to Trump’s inaugural fund and then watched the Andreessen Horowitz wing of the Republican party push a deregulation agenda for AI around the world.
This state of affairs might have held a while longer, if not for an inconvenient truth recognized by both Republicans and Democrats: some significant number of children experience a wide range of harms on these platforms, and no amount of public pressure had managed to force meaningful change.
Whether time spent on social media worsens mental health problems for young people remains bitterly contested; studies that zoom out to the level of an entire population generally find only weak effects.
And yet it’s also true that millions of children are harmed on social platforms every year. They are bullied and harassed by their peers; they are introduced to groomers and predators; they tumble down rabbit holes leading them to eating disorders and self-harm; they fall victim to sextortion schemes. A steady drumbeat of notifications and anxiety over “streaks” disrupts their sleep, leaves them on edge, and causes them trouble in school. Screen time “nudges” are easily swiped away.
The child deletes the app, only to reinstall it days later after being beset with FOMO. She will feel bad about herself for what she perceives as a failure, unaware that whole teams at each platform are dedicated to increasing the amount of time that users like her spend on the platform. Nor will she understand just how good they are at their jobs.
For a long time, the platforms have gotten away with this on free speech grounds. What, you’re going to tell us we can’t rank posts in a feed? What, you’re going to tell someone how many posts they can view? “Social media addiction” is a media invention, they’ll say. There’s no proven causal link between using apps like these and mental health harms. And in any case, Section 230 of the Communications Decency Act prevents them from being held liable for what other users post on their platforms. Don’t like that video celebrating eating disorders? Take it up with the person who posted it.
And all of this mostly worked, because good democracies protect free expression. But by the mid-2020s, almost everyone knew both adults and children who struggled to regulate their usage of social apps and suffered as a result. The problem was almost never an individual act of speech on the platform. Rather, it was the way the products were designed.
Regulators and plaintiffs’ attorneys began new investigations into whether a social app might be held liable not for what people said on it, but for how it worked.
Increasingly, it appears they will.
II.
Several critical lawsuits are coming to trial this year alleging that the platforms have enabled widespread harm to young people. Opening statements for the first of them will take place in Los Angeles County Superior Court this week.
Here’s the Associated Press:
Instagram’s parent company Meta and Google’s YouTube will face claims that their platforms deliberately addict and harm children. TikTok and Snap, which were originally named in the lawsuit, settled for undisclosed sums.
At the core of the case is a 19-year-old identified only by the initials “KGM,” whose case could determine how thousands of other, similar lawsuits against social media companies will play out. She and two other plaintiffs have been selected for bellwether trials — essentially test cases for both sides to see how their arguments play out before a jury and what damages, if any, may be awarded, said Clay Calvert, a nonresident senior fellow of technology policy studies at the American Enterprise Institute.
It’s the first time the companies will argue their case before a jury, and the outcome could have profound effects on their businesses and how they will handle children using their platforms.
As the AP notes, the lawsuit seeks to sidestep questions of immunity under Section 230 by focusing on questions of exploitative product design. “Borrowing heavily from the behavioral and neurobiological techniques used by slot machines and exploited by the cigarette industry, Defendants deliberately embedded in their products an array of design features aimed at maximizing youth engagement to drive advertising revenue,” it says.
Meta and Google deny the claims. Meta put up a long blog post accusing the lawsuits of “oversimplif[ying] a serious issue.”
“Despite this complexity, plaintiffs’ lawyers have selectively cited Meta’s internal documents to construct a misleading narrative, suggesting our platforms have harmed teens and that Meta has prioritized growth over their well-being,” the company said. “These claims don’t reflect reality. The evidence will show a company deeply and responsibly confronting tough questions, conducting research, listening to parents, academics, and safety experts, and taking action.”
YouTube offered a blog post of its own. Among other things, it says autoplay is disabled by default on YouTube videos for teens.
“In collaboration with youth, mental health and parenting experts, we built services and policies to provide young people with age-appropriate experiences, and parents with robust controls,” spokesman José Castañeda told me over email. “The allegations in these complaints are simply not true.”
Not every social media trial generates such strong pushback. That Google and Meta have said as much as they have is a reflection, I think, of how serious this issue is. Note that TikTok and Snap have already settled the case that begins trial in LA this week, without commenting on the settlements. And the KGM case is only the first of dozens of similar cases faced by the platforms around the country. (Forty attorneys general have signed on to one of them aimed at Meta.)
Meanwhile, the European Commission has arrived at conclusions similar to those of US plaintiffs’ lawyers. Here’s Adam Satariano at the New York Times:
On Friday, the regulators released a preliminary decision that TikTok’s infinite scroll, auto-play features and recommendation algorithm amount to an “addictive design” that violates European Union laws for online safety. The service poses potential harm to the “physical and mental well-being” of users, including minors and vulnerable adults, the European Commission, the 27-nation bloc’s executive branch, said in a statement.
The findings suggest TikTok must overhaul the core features that made it a global phenomenon, or risk major fines. European officials said it was the first time that a legal standard for social media addictiveness had been applied anywhere in the world.
“TikTok needs to change the basic design of its service,” the European Commission said in a statement.
TikTok, for its part, called the findings “categorically false and entirely meritless.” The company will be given a chance to respond to the allegations in detail. But if found liable, it could be fined up to 6 percent of its global revenue under the Digital Services Act.
III.
It’s hard to predict the outcome of any individual trial or regulatory proceeding. But in their shared point of view and sheer volume, design-based critiques of social platforms have gathered unusual force. It’s rare to see plaintiffs’ lawyers in Los Angeles, European regulators in Brussels, and attorneys general across both red and blue states all arrive at the same conclusion. But they have here.
Some countries, of course, are going even further. France advanced a bill to bar social media for anyone under 15; Australia has already banned it for under-16s. Spain has gone further still, proposing an under-16 ban plus criminal liability for tech executives.
In such a world, eliminating the infinite scroll and other engagement-maxing features may come to be seen as the moderate position. So what might that look like?
The European Commission tried to sketch it out. In its preliminary findings against TikTok, regulators suggested that the platform should disable infinite scroll, make its screen time limits more robust, and make unspecified changes to its recommendation algorithms.
How far will this go in making teens’ lives better? As always, it depends on the individual child. But with 7 percent of children aged 12 to 15 spending between four and five hours a day on TikTok, and a commission finding that kids spent more time on TikTok after midnight than on any other platform, it’s clear that the app has a powerful hold on Europe’s kids. And you don’t have to believe that TikTok causes depression to believe an app that regularly keeps a 13-year-old scrolling past midnight is not working in her interest.
Of course, Instagram Reels and YouTube Shorts work in similar ways. And so, whether on the stand or before the commission, I hope platform executives are called to answer: if you did want to make your products addictive, how different would they really look from the ones we have now?
The platforms will surely fight back. They have to — infinitely scrolling, user-generated content is the business model. (And they have all those new friends in the Trump administration who might be able to help.)
But they are arriving at the fight in a weaker position than usual. In a polarized world, their failures around child safety are increasingly the one thing that partisans of every stripe can agree on. Regulators are no longer impressed by the bare minimum. (They have teenagers of their own now, and all the screen-time battles that come with them.)
I don’t know which trial or regulatory action will be the one that finally forces major changes to social platforms for teenagers. But it seems increasingly clear that change is in fact coming. And for the first time, some subset of users will find that the feed they are scrolling through suddenly comes to an end.
Elsewhere in social media trials: Another high-profile trial, this one against Meta, began in New Mexico, where the company is accused of failing to protect children from sexual predators. “Prosecutors say they’ll present evidence that Meta knew that some 500,000 inappropriate interactions with children take place daily on its platforms, and that the company doesn’t adequately track those interactions,” the AP reports.

Sponsored
Your Skills Could Shape the Future of AI
AI is evolving at breakneck speed — and the risks are growing just as fast. We’re not ready for what’s coming. 80,000 Hours has spent nearly a decade researching the biggest threats from advanced AI, long before ChatGPT made headlines. They believe this could be one of the most important challenges of our time — and they need people with all kinds of skills to help. Whether you’re into policy, safety research, governance, or another field entirely, you can be part of the solution. Their free career guide goes beyond the “follow your passion” clichés, giving you concrete, research-backed steps to build a career that truly matters. Everything is free because they’re a nonprofit. The only goal: help you use your career to solve global problems. Curious how your skills could shape the future of AI?
Following
AI comes for Super Bowl ads
What happened: This year’s Super Bowl ads (and the discourse around them) were dominated by AI. Brands ranging from big tech companies to retail businesses jumped at the chance to promote their latest AI products and air AI-generated ads.
As previewed last week, Anthropic aired its ad taking a veiled jab at OpenAI’s decision to bring ads to ChatGPT, which sparked a public feud with OpenAI CEO Sam Altman. (He called Anthropic’s ad “clearly dishonest.”)
The ad that aired did feature a change from the original tagline that made it less of a direct shot at OpenAI. Instead of “ads are coming to AI. But not to Claude,” the new tagline said “there is a time and place for ads. Your conversations with AI should not be one of them.” (So are they coming to Claude or …?)
Elsewhere, vodka brand Svedka aired a creepy 30-second ad that featured two robots dancing at a club, which it touted as the first Super Bowl ad “primarily” generated by AI.
Silverside AI, which generated Svedka’s ad, was also behind Coca-Cola’s recent AI-generated holiday commercials, which sparked backlash online for resembling AI slop. (Pepsi took aim at Coca-Cola in its own Super Bowl ad, which featured a CGI Coca-Cola polar bear doing a blind taste test and choosing Pepsi.) (Disclosure: my boyfriend is a VFX artist and worked on the polar bear in the Pepsi Super Bowl ad. Platformer boyfriends are really doing the most.)
Why we’re following: The Super Bowl represents a good chance to check in with the cultural zeitgeist — particularly those parts of it that can afford to spend $8 million on a 30-second spot. Unfortunately, this means we were inundated with ads about AI, prediction markets, and crypto.
What people are saying: “My takeaway from the super bowl ads is that the entire american economy is being propped up by AI, weight loss drugs, cryptocurrency and gambling,” Axios congress reporter Andrew Solender posted on X.
Others were creeped out by home security company Ring’s ad, which promoted a feature for locating missing pets: “every commercial was ‘gamble your life away, AI will live it for you. We’re watching you,’” wrote @zaydante in an X post that garnered more than 800,000 views.
Others prodded at Salesforce’s odd decision to feature YouTuber MrBeast in its ad, given the gulf between the company’s customer base and his audience. “My 9-year old and all his friends are creating Salesforce accounts right now. And they’re all making cold calls to B2B decision makers and generating SQLs for enterprise SaaS companies,” @bradcarryvc joked. “Mr. Beast just created 1 billion new CRM users.”
AI fatigue emerged as a topic of discourse. “Super bowl commercials so evil this year seeing a shitty bud light commercial felt healing like oh yes… bud light… a tangible object unrelated to ai or crypto or gambling,” @agneswickfields posted.
—Lindsey Choo
ChatGPT gets new users, launches ads
What happened: After a contentious couple of weeks, OpenAI launched ads — and told its team that the company has resumed growing.
In an internal memo, CEO Sam Altman said ChatGPT is “Back to exceeding 10% monthly growth.” That’s a nice sign for OpenAI, which declared a “code red” to improve the product in December after facing slowing growth and competition from Google’s Gemini 3.
The company is also preparing to launch “an updated Chat model” this week, he said.
Altman talked up the growth in OpenAI’s coding product, Codex, after the launch of its Mac app and a new coding model last week. The company is working to challenge Anthropic’s Claude Code, which has generated a lot of hype (and revenue) over the past few months.
Meanwhile, the company announced it has begun testing advertising in its cheapest tiers, Free and Go, starting today.
“Ads do not influence the answers ChatGPT gives you,” the company said. They’ll appear as links under the conversation.
They also won’t appear next to “sensitive topics” like mental health or politics.
Why we’re following: Since Altman got really annoyed about Anthropic’s Super Bowl ads last week, we’ve been watching his comms with fascination. (I’m obsessed with the fact that he called ChatGPT “Chat,” which is what high schoolers call it on TikTok.)
Altman’s memo looks like an attempt to rally the troops after some recent negative press, including slowing ChatGPT growth, drama with chipmaker Nvidia, and Anthropic’s attack ad.
And that’s fair! Prioritizing ChatGPT seems like the right call for OpenAI, and these are preliminary signs that the focus is paying off.
At the same time, OpenAI now has advertiser pressure to worry about. It will be fascinating to watch how the company reacts.
What people are saying: Tech news show TPBN offered a parody: “Claude with Ads.”
After the money I blew on Anthropic API credits last week, I was honestly kind of excited to try their Claude wrapper, which offers the high-end Opus 4.6 model for free — if you’re willing to subject yourself to TPBN’s (parody) ads.
Unfortunately, Platformer was very disappointed by the Claude with Ads user experience.
—Ella Markianos
Side Quests
A first look at the supposed Trump phone, which appears to be different from earlier promised versions. An investigation into how Binance, whose billionaire founder President Trump pardoned, has significantly boosted the Trump family’s crypto firm.
OpenAI is reportedly working with Abu Dhabi-based G42 to build a new ChatGPT version geared for the UAE. That will be a fun content moderation story.
The DOJ is investigating whether Netflix engaged in anticompetitive conduct as it probes the company’s acquisition of Warner Bros.
The US Patent and Trademark Office is shifting the way it views AI patent eligibility.
New York lawmakers introduced a bill that would pause data center development for three years amid an ongoing backlash. The White House is preparing a draft voluntary agreement for tech companies to pledge that data centers won’t raise household electricity prices.
A look at the various proposed social media bans proliferating across the EU. The EU gave Meta a warning over the company’s blocking of rival AI assistants in WhatsApp.
Goldman Sachs is working with Anthropic to create AI agents to automate accounting and onboarding tasks. Users can pay six times the normal price to use a faster version of Claude Opus 4.6. Anthropic’s bet on enterprise products is paying off.
X awarded $1 million to a user who shared racist posts in a creator contest, despite contest rules prohibiting “political, or religious statements.” It also launched a new pay-per-use model for its developer API.
Google, Amazon, Meta, and Microsoft have forecast a combined $650 billion in AI spending for 2026. Cloud giants Amazon, Google, and Microsoft collectively reported a $1.1 trillion backlog of revenue — that is, money they could make if they had the infrastructure to support it.
Google is set to raise $20 billion from a US dollar bond sale. Access to YouTube Music lyrics now requires a Premium account. Waymo said it’s using DeepMind’s Genie 3 AI model to create realistic worlds for training.
Republican tax cuts shaved billions off Amazon’s tax bill last year, which fell to $1.2 billion from $9 billion. A nice thank-you gift for spending $40 million to make MELANIA. Amazon shares dropped 8 percent after it posted mixed fourth-quarter earnings and raised its 2026 spending forecast to $200 billion.
ByteDance’s new Seedance 2.0 video generation model sparked a stock rally and buzz on social media.
OpenClaw is partnering with VirusTotal to detect malicious skills uploaded to ClawHub.
Spotify changed its developer mode API to require premium accounts and limit test users. Its new “About the song” feature lets users learn the stories behind the music they’re listening to.
A look at how Kalshi and Polymarket are increasingly luring pro gamblers.
Discord is rolling out age verification and will require a face scan or ID for full access.
An engineer’s struggle with AI fatigue. A look at the romance novel industry’s battle over the use of AI. AI intensifies workloads instead of reducing them, new research suggests.
Those good posts
For more good posts every day, follow Casey’s Instagram stories.

Talk to us
Send us tips, comments, questions, and expert witness testimony: casey@platformer.news. Read our ethics policy here.