{"id":375285,"date":"2026-01-17T13:57:06","date_gmt":"2026-01-17T13:57:06","guid":{"rendered":"https:\/\/www.newsbeep.com\/uk\/375285\/"},"modified":"2026-01-17T13:57:06","modified_gmt":"2026-01-17T13:57:06","slug":"my-picture-was-used-in-child-abuse-images-ai-is-putting-others-through-my-nightmare-mara-wilson","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/uk\/375285\/","title":{"rendered":"My picture was used in child abuse images. AI is putting others through my nightmare | Mara Wilson"},"content":{"rendered":"<p class=\"dcr-130mj7b\">When I was a little girl, there was nothing scarier than a stranger.<\/p>\n<p class=\"dcr-130mj7b\">In the late 1980s and early 1990s, kids were told, by our parents, by TV specials, by teachers, that there were strangers out there who wanted to hurt us. \u201cStranger Danger\u201d was everywhere. It was a well-meaning lesson, but the risk was overblown: <a href=\"https:\/\/rainn.org\/facts-statistics-the-scope-of-the-problem\/statistics-children-teens\/#:~:text=93%25%20of%20victims%20under%2018,were%20strangers%20to%20the%20victim.&amp;text=1-,Department%20of%20Justice%2C%20Office%20of%20Justice%20Programs%2C%20Bureau%20of%20Justice,to%20Law%20Enforcement%20(2000).\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">most child abuse and exploitation is perpetrated by people the children know<\/a>. It\u2019s much rarer for children to be abused or exploited by strangers.<\/p>\n<p class=\"dcr-130mj7b\">Rarer, but not impossible. I know, because I was sexually exploited by strangers.<\/p>\n<p class=\"dcr-130mj7b\">From ages five to 13, I was a child actor. And while as of late we\u2019ve heard many horror stories about the abusive things that happened to child actors behind the scenes, I always felt safe while filming. Filmsets were highly regulated spaces where people wanted to get work done. 
I had supportive parents, and was surrounded by directors, actors, and studio teachers who understood and cared for children.<\/p>\n<p class=\"dcr-130mj7b\">The only way show business did endanger me was by putting me in the public eye. Any cruelty and exploitation I received as a child actor was at the hands of the public.<\/p>\n<p class=\"dcr-130mj7b\">\u201cHollywood throws you into the pool,\u201d I always tell people, \u201cbut it\u2019s the public that holds your head underwater.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Before I was even in high school, my image had been used for child sexual abuse material (CSAM). I\u2019d been featured on fetish websites and Photoshopped into pornography. Grown men sent me creepy letters. I wasn\u2019t a beautiful girl \u2013 my awkward age lasted from about age 10 to about 25 \u2013 and I acted almost exclusively in family-friendly movies. But I was a public figure, so I was accessible. That\u2019s what child sexual predators look for: access. And nothing made me more accessible than the internet.<\/p>\n<p class=\"dcr-130mj7b\">It didn\u2019t matter that those images \u201cweren\u2019t me\u201d, or that the fetish sites were \u201ctechnically\u201d legal. It was a painful, violating experience; a living nightmare I hoped no other child would have to go through. Once I was an adult, I worried about the other kids who had followed after me. Were similar things happening to the Disney stars, the Stranger Things cast, the preteens making TikTok dances and smiling in family vlogger YouTube channels? I wasn\u2019t sure I wanted to know the answer.<\/p>\n<p class=\"dcr-130mj7b\">When generative AI started to pick up a few years ago, I feared the worst. I\u2019d heard stories of \u201cdeepfakes\u201d, and knew the technology was getting exponentially more realistic.<\/p>\n<p class=\"dcr-130mj7b\">Then it happened \u2013 or at least, the world noticed that it had happened. 
Generative AI has already been used many times <a href=\"https:\/\/bsky.app\/profile\/eliothiggins.bsky.social\/post\/3mboy3hmcxs2q\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">to create sexualized images of adult women without their consent<\/a>. It happened to friends of mine. But recently, it was reported that X\u2019s AI tool Grok had been used, quite openly, <a href=\"https:\/\/www.axios.com\/2026\/01\/02\/elon-musk-grok-ai-child-abuse-images-stranger-things\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">to generate undressed images of an underage actor<\/a>. Weeks earlier, a girl was expelled from school for hitting a classmate who allegedly made deepfake porn of her, <a href=\"https:\/\/www.live5news.com\/2025\/11\/12\/girl-13-expelled-hitting-classmate-who-made-deepfake-porn-image-her-lawyers-say\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">according to her family\u2019s lawyers<\/a>. She was 13, about the same age I was when people were making fake sexualized images of me.<\/p>\n<p class=\"dcr-130mj7b\">In July 2024, <a href=\"https:\/\/www.iwf.org.uk\/media\/nadlcb1z\/iwf-ai-csam-report_update-public-jul24v13.pdf\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">the Internet Watch Foundation found more than 3,500 images of AI-generated CSAM on a dark web forum<\/a>. How many more thousands have been made in the year and a half since then?<\/p>\n<p class=\"dcr-130mj7b\">Generative AI has reinvented Stranger Danger. And this time, the fear is justified. It is now infinitely easier for any child whose face has been posted on the internet to be sexually exploited. 
Millions of children could be forced to live through the same nightmare I did.<\/p>\n<p class=\"dcr-130mj7b\">In order to stop the threat of a deepfake apocalypse, we need to look at how AI is trained.<\/p>\n<p class=\"dcr-130mj7b\">Generative AI \u201clearns\u201d by a repeated process of \u201clook, make, compare, update, repeat\u201d, says Patrick LaVictoire, a mathematician and former AI safety researcher. It creates models based on things it\u2019s memorized, but it can\u2019t memorize everything, so it has to look for patterns, and base its responses on those. \u201cA connection that\u2019s useful gets reinforced,\u201d says LaVictoire. \u201cOne that\u2019s less so, or actively unhelpful, gets pruned.\u201d<\/p>\n<p class=\"dcr-130mj7b\">What generative AI can create depends on the materials the AI has been trained on. A<a href=\"https:\/\/purl.stanford.edu\/kh752sm9123\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\"> study at Stanford University<\/a> in 2023 showed that one of the most popular training datasets already contained more than 1,000 instances of CSAM. 
The links to CSAM <a href=\"https:\/\/arstechnica.com\/tech-policy\/2024\/08\/nonprofit-scrubs-illegal-content-from-controversial-ai-training-dataset\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">have since been removed from the dataset<\/a>, but the researchers have emphasized that another threat is CSAM made by combining images of children with pornographic images, which is possible if both are in the training data.<\/p>\n<p class=\"dcr-130mj7b\"><a href=\"https:\/\/support.google.com\/transparencyreport\/answer\/10330933\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Google<\/a> and <a href=\"https:\/\/openai.com\/index\/combating-online-child-sexual-exploitation-abuse\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">OpenAI<\/a> claim to have safeguards in place to protect against the creation of CSAM: for instance, by taking care with the data they use to train their AI platforms. (It\u2019s also worth noting that many adult film actors and sex workers have had their images scraped for AI <a href=\"https:\/\/www.youtube.com\/watch?v=U0CFR2i_aTY\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">without their consent<\/a>.)<\/p>\n<p class=\"dcr-130mj7b\">Generative AI itself, says LaVictoire, has no way of distinguishing between innocuous and silly commands such as \u201cmake an image of a Jedi samurai\u201d and harmful commands, such as \u201cundress this celebrity\u201d. So another safeguard incorporates a different kind of AI that acts similarly to a spam filter, which can block those queries from being answered. 
xAI, which runs Grok, seems to have been careless with that filter.<\/p>\n<p class=\"dcr-130mj7b\">And the worst may be yet to come: <a href=\"https:\/\/about.fb.com\/news\/2024\/07\/open-source-ai-is-the-path-forward\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Meta<\/a> and <a href=\"https:\/\/stability.ai\/news\/stability-ai-announces-101-million-in-funding-for-open-source-artificial-intelligence\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">other companies<\/a> have proposed that future AI models be open source. \u201cOpen source\u201d means anyone can access the code behind it, download it and edit it as they please. What is usually wonderful about open-source software \u2013 the freedom it gives users to create new things, prioritizing creativity and collaboration over profit \u2013 could be a disaster for children\u2019s safety.<\/p>\n<p class=\"dcr-130mj7b\">Once someone downloaded an open-source AI platform and made it their own, there would be no safeguards, no AI bot saying that it couldn\u2019t help with their request. Anyone could \u201cfine-tune\u201d their own personal image generator using explicit or illegal images, and make their own infinite CSAM and \u201crevenge porn\u201d generator.<\/p>\n<p class=\"dcr-130mj7b\">Meta seems to have <a href=\"https:\/\/www.bloomberg.com\/news\/articles\/2025-12-10\/inside-meta-s-pivot-from-open-source-to-money-making-ai-model\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">stepped back from making its newer AI platforms open source<\/a>. 
Perhaps Mark Zuckerberg remembered that <a href=\"https:\/\/theguardian.com\/commentisfree\/2018\/sep\/12\/what-attracts-mark-zuckerberg-roman-hardman-augustus\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">he wants to be like the Roman emperor Augustus<\/a>, and that if he continued down this path, he might be remembered more as the Oppenheimer of CSAM.<\/p>\n<p class=\"dcr-130mj7b\">Some countries are already fighting against this. China was the first to enact <a href=\"https:\/\/www.chinalawtranslate.com\/en\/ai-labeling\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">a law that requires AI content to be labelled as such<\/a>. Denmark is working on legislation that would give citizens the copyright to their appearances and voices, and would impose fines on AI platforms that don\u2019t respect that. In other parts of Europe, <a href=\"https:\/\/theguardian.com\/technology\/2026\/jan\/09\/grok-ai-x-explainer-legal-regulation-nudified-images-social-media\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">and in the UK<\/a>, people\u2019s images may be protected by General Data Protection Regulation (GDPR).<\/p>\n<p class=\"dcr-130mj7b\">The outlook in the United States seems much grimmer. Copyright claims aren\u2019t going to help, because when a user uploads an image to a platform, they can use it however they see fit; it\u2019s in nearly every Terms of Service agreement. 
With <a href=\"https:\/\/www.whitehouse.gov\/presidential-actions\/2025\/12\/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">executive orders against the regulation of generative AI<\/a> and companies such as xAI <a href=\"https:\/\/theguardian.com\/technology\/2025\/jul\/14\/us-military-xai-deal-elon-musk\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">signing contracts with the US military<\/a>, the US government has shown that making money with AI is far more important than keeping citizens safe.<\/p>\n<p class=\"dcr-130mj7b\">There has been some recent legislation \u201cthat makes a lot of this digital manipulation criminal\u201d, says Akiva Cohen, a New York City litigator. \u201cBut also, a lot of those statutes are probably overly restrictive in what exactly they cover.\u201d<\/p>\n<p class=\"dcr-130mj7b\">For example, while making a deepfake of someone that makes them appear nude or engaged in a sexual act could be grounds for criminal charges, using AI to put a woman \u2013 and likely even an underage girl \u2013 into a bikini probably would not.<\/p>\n<p class=\"dcr-130mj7b\">\u201cA lot of this very consciously stays just on the \u2018horrific, but legal\u2019 side of the line,\u201d says Cohen.<\/p>\n<p class=\"dcr-130mj7b\">Maybe it\u2019s not criminal \u2013 that is to say, a crime against the state \u2013 but Cohen argues it could be a civil liability, a violation of another person\u2019s rights, for which a victim can seek restitution. 
He suggests that this falls under a \u201c<a href=\"https:\/\/www.law.cornell.edu\/wex\/false_light\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">false light<\/a>, invasion of privacy\u201d tort, a civil wrong in which offensive claims are made about a person, showing them in a false light, \u201cdepicting someone in a way that shows them doing something they didn\u2019t do\u201d.<\/p>\n<p class=\"dcr-130mj7b\">\u201cThe way that you can really deter this type of conduct is by imposing liability on the companies that are enabling this,\u201d Cohen says.<\/p>\n<p class=\"dcr-130mj7b\">There\u2019s legal precedent for that: the <a href=\"https:\/\/www.nysenate.gov\/legislation\/bills\/2025\/A6453\/amendment\/A\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Raise Act<\/a> in New York, and <a href=\"https:\/\/leginfo.legislature.ca.gov\/faces\/billTextClient.xhtml?bill_id=202520260SB53\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Senate Bill 53<\/a> in California, say that AI companies can be held accountable for harms they have done past a certain point. X, meanwhile, <a href=\"https:\/\/theguardian.com\/technology\/2026\/jan\/14\/elon-musk-grok-ai-explicit-images\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">will now block Grok<\/a> from making sexualized images of real people on the platform. 
But it <a href=\"https:\/\/www.washingtonpost.com\/technology\/2026\/01\/15\/grok-ai-image-generator-sexualized\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">appears that policy change<\/a> doesn\u2019t apply to the stand-alone Grok app.<\/p>\n<p class=\"dcr-130mj7b\">But Josh Saviano, a former practicing attorney in New York, as well as a former child actor, believes more immediate actions need to be taken, in addition to legislation.<\/p>\n<p class=\"dcr-130mj7b\">\u201cLobbying efforts and our courts are eventually going to be the way that this is handled,\u201d says Saviano. \u201cBut until that happens, there are two options: abstain entirely, which means take your entire digital footprint off the internet \u2026 or you need to find a technological solution.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Ensuring the safety of young people is of paramount importance to Saviano, who has known people who\u2019ve had deepfakes made of them, <a href=\"https:\/\/www.snopes.com\/fact-check\/wondering-about-marilyn\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">and \u2013 as a former child actor \u2013 knows a little about losing control<\/a> of one\u2019s own narrative. Saviano and his team have been working on a tool that could detect and notify people when their images or creative work are being scraped. The team\u2019s motto, he says, is: \u201cProtect the babies.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Regardless of how it may happen, I believe that protection against this threat is going to take a lot of effort from the public.<\/p>\n<p class=\"dcr-130mj7b\">There are many who are starting to feel an affinity with their AI chatbots, but for most people, tech companies are nothing more than utilities. We may prefer one app over another for personal or political reasons, but few feel strong loyalty to tech brands. 
Tech companies, and especially social media platforms like Meta and <a href=\"https:\/\/www.theguardian.com\/technology\/twitter\" data-link-name=\"in body link\" data-component=\"auto-linked-tag\" rel=\"nofollow noopener\" target=\"_blank\">X<\/a>, would do well to remember that they are a means to an end. And if someone like me \u2013 who was on Twitter all day, every day, for more than a decade \u2013 can quit it, anyone can.<\/p>\n<p class=\"dcr-130mj7b\">But boycotts aren\u2019t enough. We need to be the ones demanding that companies that allow the creation of CSAM be held accountable. We need to be demanding legislation and technological safeguards. We also need to examine our own actions: nobody wants to think that if they share photos of their child, those images could end up in CSAM. But it is a risk, one that parents need to protect their young children from, and warn their older children about.<\/p>\n<p class=\"dcr-130mj7b\">If our obsession with Stranger Danger showed anything, it\u2019s that most of us want to prevent child endangerment and harassment. It\u2019s time to prove it.<\/p>\n","protected":false},"excerpt":{"rendered":"When I was a little girl, there was nothing scarier than a stranger. 
In the late 1980s and&hellip;\n","protected":false},"author":2,"featured_media":375286,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[554,733,4308,86,56,54,55],"class_list":{"0":"post-375285","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology","12":"tag-uk","13":"tag-united-kingdom","14":"tag-unitedkingdom"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/375285","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/comments?post=375285"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/375285\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media\/375286"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media?parent=375285"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/categories?post=375285"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/tags?post=375285"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}