Grok, the generative AI tool integrated into Elon Musk’s X, isn’t technically a dedicated “undressing app.” Yet over the past few weeks, lax and inconsistent safeguards made it abundantly clear that users could generate scantily clad or nearly nude images of real people with a simple prompt.
The result has been that sexualized images, including images of children, have been widely shared on a mainstream platform. Rather than simply hosting harmful user-generated content, the platform is actively producing nonconsensual sexual imagery.
Grok’s latest internet scandal has triggered significant pushback and legal scrutiny. UK media regulators have already opened a formal investigation, and other countries are threatening to suspend X. Some U.S. senators are urging Apple and Google to remove X from their app stores. In response, X initially limited these AI image requests to paid subscribers, but now says it has implemented measures to prevent Grok from undressing images of real people.
Synthetic Images, Real Harm
Young people deserve more than a world where a company’s first instinct is to monetize the ability to create and share sexualized images of real children and non-consenting adults. Research on image-based sexual abuse already tells us that the non-consensual distribution of intimate or sexual material can have significant psychological and emotional consequences. Images of real people suddenly clad in bikinis, underwear, or clear plastic undergarments may be AI-generated, but the anger, embarrassment, fear, and powerlessness that follow are very real.
Women and girls are disproportionately targeted. These images can become flash points for harassment, objectification, and exploitation. No matter what, this is tricky terrain for kids and teens who are still learning about relationships, power, consent, and boundaries.
Teens Deserve Safety-by-Design
While attention to Grok is important, this issue extends far beyond a single app or platform. AI-generated sexualized content and child sexual abuse material (CSAM) are becoming more common across the internet.
Responding after harm occurs isn’t enough. As writers Bruna Santos and Shirin Anlen note, “AI developers, tech companies, social media platforms, and regulators must treat nonconsensual sexualized imagery as a design-level risk, not a downstream moderation problem.”
In other words, it’s time to move beyond reactive band-aids and toward safety by design. Teens want, and deserve, an internet built with their privacy, safety, and dignity in mind. Let’s advocate for one.
Adolescence in the Age of AI
Much of the experimentation with undressing prompts, especially when the feature sits on a widely accessible platform, likely comes from curiosity, messing around, disbelief (“Will this work?”), or attempts to impress friends or make jokes. In other cases, young people may use these images to intentionally hurt, harass, or humiliate a peer. Regardless of intent, once an image is created, it has a real psychological impact.
The vast majority of teens, sitting at our kitchen tables or working on a paper in class, can easily explain why this is a harmful use of technology. That doesn’t mean they won’t experiment with it, be pressured to engage with it, or be pulled into it as a bystander.
That disconnect makes more sense when we remember that skills like resisting pressure, perspective-taking, and thinking ahead to future consequences are still developing during adolescence. Researchers often distinguish between “cool” (low-stakes) and “hot” (high-stakes, emotionally charged) executive function skills. Teens tend to do much better with cool skills than hot ones. This helps explain why thoughtful decision-making can fall apart in moments of excitement, pressure, or heightened emotion.
Plus, teens are navigating a confusing terrain where boundaries and expectations feel inconsistent: They’re told this isn’t OK, while watching the CEO of one of the world’s most powerful tech companies joke about it. This does not mean that poor decision-making is inevitable—far from it. It does mean that talking early and often, working through realistic scenarios, and building media literacy matter.
Skip the Catastrophic Lectures
We know from years of conversations about issues like sexting that simply threatening kids with legal consequences and worst-case scenarios doesn’t reliably reduce risk. Long lectures don’t either.
Generating sexual, non-consensual images does pose serious risks, and teens deserve to understand them. The problem is that fear-based messages alone (“One photo and you could go to jail”; “Never do this! Your reputation will never recover!”) don’t seem to stop young people from exploring sex and sexual imagery online.
The reality is that in lower-stakes, “cool” settings, most teens could deliver the lecture themselves. In addition, lectures that rely on fear, shame, or legal threats can backfire by pushing behavior underground and making it less likely that young people will reach out for help if they make a mistake.
Set Boundaries and Start Conversations
We want young people to go to a trusted adult when they hit a challenge online. That means moving away from catastrophic lectures and toward conversations grounded in clear boundaries, communication, and media literacy. Here are some ways to get started:
Start with curiosity. Ask teens whether they’ve ever asked for, received, or seen a sexualized photo that has been manipulated by AI. Is it common at their school? Do they think it’s a big deal? Why or why not? Listen to their perspective without rushing to correct or react.
Share the facts. Talk with teens about consequences without defaulting to catastrophic warnings. Laws matter, but legality is a low bar. A deeper conversation invites teens to consider what non-consensual image generation and sharing can do to another person’s sense of safety, dignity, and belonging.
Talk about sextortion. Explain what AI-enabled sextortion is and make it clear that if they are targeted, it isn’t their fault, and that talking about it right away can help prevent escalation.
Revisit consent. Name that consent applies even when images are AI-generated or altered. It applies even if it is a “joke.” Just because an image isn’t “real” doesn’t mean it isn’t harmful and wrong.
Practice upstander skills. Ask what they think someone should do if they see or are sent an AI-generated sexual image. Frame it as an upstander moment: How can you avoid causing harm, interrupt it, or get help? Brainstorm responses together, including choosing not to forward it, refusing to give it an audience, and looping in a trusted adult as soon as possible.
Generate strategies to resist pressure. Talk about strategies for resisting pressure to generate, view, or share images. Remind teens that adults are always happy to be the excuse: “I can’t even mess with that. My parent(s) always find out about everything.”
Activate media literacy. Step back and explore the bigger picture. Who is creating these tools? Who benefits? How do these platforms make money? Who is harmed? Who is responsible for harm? How does gender shape these interactions?
Double messages are OK. Hold both/and messages, such as: “I expect you to prioritize your safety and the safety of others by not using undressing or nudify apps ever. And, if you end up in a tough spot or make an unsafe choice, I can be your first call, and I won’t make you regret it. We will figure it out together.”
Make it about more than Grok. Nest conversations about Grok or nudify apps within broader discussions about relationships, including consent, communication, misogyny, self-worth, and decision-making.
Stay Connected
These moments make the distance between the internet as we want it to be and the internet as it is feel like a chasm that keeps getting wider. That’s why it is more important than ever that we keep engaging young people in the essential conversations and skill-building that will help them stay rooted in their self-worth and in our shared humanity, even when AI features seem designed to strip both away.