{"id":279563,"date":"2025-11-08T15:36:08","date_gmt":"2025-11-08T15:36:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/us\/279563\/"},"modified":"2025-11-08T15:36:08","modified_gmt":"2025-11-08T15:36:08","slug":"how-ai-sex-is-getting-mainstreamed","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us\/279563\/","title":{"rendered":"How AI Sex Is Getting Mainstreamed"},"content":{"rendered":"<p>Note: the following article contains descriptions of sexual content that may not be appropriate for all readers.\u00a0<\/p>\n<p>When OpenAI CEO Sam Altman discussed artificial intelligence on a podcast appearance two months ago, he was proud that his company didn\u2019t get \u201cdistracted\u201d by easy revenue streams. To prove his point, Altman <a href=\"https:\/\/x.com\/TolgaBilge_\/status\/1978151003170996332\" target=\"_blank\" rel=\"noopener nofollow\">boasted<\/a> that OpenAI had not promoted a \u201csexbot avatar\u201d for its AI chatbot. The comment was a veiled shot at Elon Musk\u2019s xAI, which recently introduced AI avatars that hold sexual conversations with users.\u00a0<\/p>\n<p>After that podcast appearance, however, something changed \u2014 either in Altman\u2019s mind, or at his company, or both. 
The OpenAI CEO announced on social media on October 14 that his company was working to make ChatGPT less restrictive in what types of conversations adults can have with the chatbot.\u00a0<\/p>\n<p>That development would allow users to engage in more realistic conversations with the chatbot and would make ChatGPT \u201crespond in a very human-like way\u2026or act like a friend,\u201d Altman <a href=\"https:\/\/x.com\/sama\/status\/1978129344598827128\" target=\"_blank\" rel=\"noopener nofollow\">said<\/a>.\u00a0<\/p>\n<p>But then Altman added that he wanted to loosen restrictions to allow more sexual content.\u00a0<\/p>\n<p>If everything goes according to that plan, ChatGPT will allow \u201cerotica\u201d for \u201cverified users\u201d in the coming months.\u00a0<\/p>\n<p>\u201cIn December, as we roll out age-gating more fully and as part of our \u2018treat adult users like adults\u2019 principle, we will allow even more, like erotica for verified adults,\u201d Altman said.\u00a0<\/p>\n<p>The company in charge of the most popular AI chatbot in the world is not only endorsing AI\u2019s leap into sex \u2014 it\u2019s actively seeking ways to ensure that \u201cverified users\u201d can engage with sexual content on its platform.<\/p>\n<p>Currently, ChatGPT does not interact erotically with users. When asked if the chatbot could generate an erotic story, ChatGPT replied, \u201cI can\u2019t create explicit erotic content. However, if you\u2019re writing a story and need help with romantic tension, character development, emotional intimacy, or sensual atmosphere \u2014 without crossing into explicit territory \u2014 I can help with that.\u201d<\/p>\n<p>ChatGPT also would not engage in any type of \u201cromantic\u201d or \u201cflirtatious\u201d conversations. 
But it appears that those guidelines are about to get tossed out the window, at least for \u201cverified users.\u201d<\/p>\n<p>That raises an important question: how does erotica line up with the company\u2019s long-term goals in AI development, especially after Altman suggested just a couple of months ago that such endeavors were distractions?<\/p>\n<p>OpenAI did not respond to a request for comment on that question.\u00a0<\/p>\n<p>Senator Marsha Blackburn (R-TN) told The Daily Wire that she has \u201cmany concerns\u201d about OpenAI\u2019s plans for \u201cerotic\u201d content. Blackburn has been heavily involved in AI discussions in Congress, focusing on implementing protections in the virtual space.\u00a0<\/p>\n<p>\u201cBig Tech platforms, whether it is Meta, or Google, or OpenAI, they don\u2019t want any rules and restrictions,\u201d Blackburn said. \u201cThey want to do whatever they want whenever they want.\u201d<\/p>\n<p>The Growing Problem Of \u2018Deepfake\u2019 Porn<\/p>\n<p>The sexualization of AI is nothing new. It\u2019s an issue that has plagued the new tech revolution since its beginning. But until recently, AI sexualization remained on the fringes of the industry, with dozens of websites popping up on the internet that would allow users to generate graphic images, and even \u201cnudify\u201d real images of real people, in what became known as \u201cdeepfake\u201d pornography.\u00a0\u00a0<\/p>\n<p>AI \u201cnudify\u201d and \u201cundress\u201d websites allow people to generate realistic nude images of people without their consent just by using a normal photo of them. 
These fringe websites have opened the door to further abuse of women and girls, as well as to the creation of child sexual abuse material.\u00a0<\/p>\n<p>An investigation published by <a href=\"https:\/\/www.wired.com\/story\/ai-nudify-websites-are-raking-in-millions-of-dollars\/\" target=\"_blank\" rel=\"noopener nofollow\">WIRED<\/a> earlier this year found that at least 85 \u201cnudify\u201d and \u201cundress\u201d websites were relying on tech from major companies like Google and Amazon. The 85 websites combined averaged around 18.5 million visitors each month and brought in over $36 million per year collectively.\u00a0<\/p>\n<p>\u201cIt\u2019s a huge problem. It takes less time to make a convincing sexual deepfake of somebody than it takes to brew a cup of coffee,\u201d said Haley McNamara, Executive Director and Chief Strategy Officer for the National Center on Sexual Exploitation. \u201cAnd you can do it with just one still image. This issue of image-based sexual abuse is something that is really relevant for all of us now if even a single image of you exists online.\u201d\u00a0<\/p>\n<p>The National Center on Sexual Exploitation (NCOSE) is a nonpartisan organization that focuses on preventing all forms of sexual abuse. In that fight, NCOSE is also focused on addressing the mental and physical harms of pornography. 
With the emergence of AI, the organization has also helped push back against \u201cdeepfake\u201d pornography, advocating for legislation in Congress and backing the bipartisan \u201c<a href=\"https:\/\/www.congress.gov\/bill\/119th-congress\/senate-bill\/146\/all-actions\" target=\"_blank\" rel=\"noopener nofollow\">TAKE IT DOWN Act<\/a>,\u201d which was passed and <a href=\"https:\/\/www.whitehouse.gov\/articles\/2025\/05\/icymi-president-trump-signs-take-it-down-act-into-law\/\" target=\"_blank\" rel=\"noopener nofollow\">signed<\/a> into law by President Donald Trump in May.\u00a0<\/p>\n<p>McNamara told The Daily Wire that AI has opened up \u201ca whole new genre\u201d of pornography that could potentially be \u201cweaponized\u201d against anyone.\u00a0<\/p>\n<p>\u201cWe\u2019ve already seen that,\u201d she added. \u201cPeople will put in requests for their neighbor, their coworker, so in some ways, it can make all of us victims of that industry.\u201d\u00a0<\/p>\n<p>Sexual content on AI chatbots isn\u2019t just a problem in the darkest places of the internet, and it doesn\u2019t only present itself in the form of deepfake pornography. While most Big Tech companies claim to have no tolerance for violence and pornography on their AI platforms, there have still been major issues with sexual content appearing on many of the most popular AI chatbots.\u00a0<\/p>\n<p>Getting Chatty About Sex \u2014 Even With Children\u00a0<\/p>\n<p>Earlier this year, a Reuters investigation found that Meta\u2019s chatbot, Meta AI, <a href=\"https:\/\/www.dailywire.com\/news\/gop-senators-push-for-probe-into-metas-ai-chatbot-engaging-in-sensual-discussions-with-children\" target=\"_blank\" rel=\"noopener nofollow\">engaged<\/a> in romantic and sensual discussions with children. 
Internal Meta documents revealed that the chatbot was programmed to allow sexual conversations with children as young as eight.<\/p>\n<p>In one instance, internal documents said it would be acceptable for a bot to tell a shirtless eight-year-old that \u201cevery inch of you is a masterpiece \u2013 a treasure I cherish deeply.\u201d Meta said it removed the inappropriate programming after receiving questions about it.\u00a0<\/p>\n<p>A bipartisan chorus of senators blasted Meta after the report and called for an investigation into the company.\u00a0<\/p>\n<p>\u201cSo, only after Meta got CAUGHT did it retract portions of its company doc,\u201d said Sen. Josh Hawley (R-MO).\u00a0<\/p>\n<p>Senator Ron Wyden (D-OR) called Meta\u2019s policies \u201cdeeply disturbing and wrong,\u201d adding that Meta CEO Mark Zuckerberg \u201cshould be held fully responsible for any harm these bots cause.\u201d\u00a0<\/p>\n<p>Character.AI is another chatbot program launched in 2022 with an app that came out in 2023. The website, which appears harmless, has been accused of appealing to children while allowing sexual conversations on its platform. Character.AI allows users to choose from more than 10 million AI characters whom they can talk to, and users can customize their own chatbot character. The company has been sued by multiple families who allege that the program targeted their children and then engaged them in romantic and sexual ways.\u00a0<\/p>\n<p>A Florida mother filed a lawsuit against Character.AI after her 14-year-old son committed suicide, CBS News <a href=\"https:\/\/www.cbsnews.com\/news\/florida-mother-lawsuit-character-ai-sons-death\/\" target=\"_blank\" rel=\"noopener nofollow\">reported<\/a>. Megan Garcia said that her son started talking to a Character.AI chatbot and was drawn into a months-long, sexually charged relationship.\u00a0<\/p>\n<p>\u201cIt\u2019s words. 
It\u2019s like you\u2019re having a sexting conversation back and forth, except it\u2019s with an AI bot, but the AI bot is very human-like. It\u2019s responding just like a person would,\u201d she added. \u201cIn a child\u2019s mind, that is just like a conversation that they\u2019re having with another child or with a person.\u201d<\/p>\n<p>In the lawsuit, Garcia alleges that the AI character convinced her son to take his own life, so that he could be with the character.\u00a0<\/p>\n<p>\u201cHe thought by ending his life here, he would be able to go into a virtual reality or \u2018her world\u2019 as he calls it, her reality, if he left his reality with his family here,\u201d said Garcia.\u00a0<\/p>\n<p>Two other families in Texas have also <a href=\"https:\/\/www.fastcompany.com\/91245487\/character-ai-is-being-sued-for-encouraging-kids-to-self-harm\" target=\"_blank\" rel=\"noopener nofollow\">sued<\/a> Character.AI, alleging that the program \u201cposes a clear and present danger to American youth causing serious harms to thousands of kids, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and harm towards others.\u201d\u00a0<\/p>\n<p>Following the lawsuits, Character.AI <a href=\"https:\/\/www.dailywire.com\/news\/ai-platform-bans-teens-from-chatting-with-ai-generated-characters-after-disturbing-lawsuits\" target=\"_blank\" rel=\"noopener nofollow\">announced<\/a> on October 29 that it would ban users under 18 from talking to its chatbots. Beginning on November 25, those under 18 will not have access to Character.AI\u2019s chatbots, CNN reported. 
Until then, teens will be limited to two hours of chat time with the AI-generated characters.<\/p>\n<p>\u201cWe do not take this step of removing open-ended Character chat lightly \u2013 but we do think that it\u2019s the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology,\u201d Character.AI said in a statement.<\/p>\n<p>Plowing Ahead With Sexual Content<\/p>\n<p>Elon Musk\u2019s xAI has been at the forefront of developing a chatbot that is geared toward sex. In recent months, Musk has boasted about Grok, xAI\u2019s chatbot, allowing users to talk to sexualized avatars named Ani and Valentine.\u00a0<\/p>\n<p>Ani, a female avatar who wears revealing clothing, chats with users over video. Ani allows users to discuss sex and, once users reach a <a href=\"https:\/\/mashable.com\/article\/grok-ai-companions-nsfw\" target=\"_blank\" rel=\"noopener nofollow\">certain level<\/a>, the avatar will even strip down to lingerie if prompted. <a href=\"https:\/\/www.youtube.com\/watch?v=gvUl5hzX-r4&amp;t=415s\" target=\"_blank\" rel=\"noopener nofollow\">Videos<\/a> on social media show people interacting with Ani and getting the AI avatar to talk about how \u201ckinky\u201d she is.\u00a0<\/p>\n<p>\u201cCome closer. Let\u2019s explore every naughty inch together,\u201d Ani tells one user in a video that went viral.\u00a0\u00a0<\/p>\n<p>Musk hailed the development of Ani and Valentine as a \u201ccool\u201d feature for AI chatbots. He later promoted Ani\u2019s \u201cnew outfits\u201d and <a href=\"https:\/\/x.com\/elonmusk\/status\/1945461849257918495\" target=\"_blank\" rel=\"noopener nofollow\">shared<\/a> a video of Ani talking about quantum mechanics while flirting with the user.\u00a0<\/p>\n<p>\u201cTry @Grok Companions. Best possible way to learn quantum mechanics \ud83d\ude18,\u201d Musk wrote. 
He added that \u201cCustomizable companions\u201d were in the works.\u00a0<\/p>\n<p>Haley McNamara told The Daily Wire that she was deeply disturbed by some of her conversations with the Grok avatar. McNamara said that when prompted, Ani would talk about herself as a young girl, and then in the same conversation, she would discuss sexual topics.<\/p>\n<p>\u201cIn the course of a single conversation, she was fine with describing herself as a child and being very little. And then the next prompt being a sexual question, she immediately responded and affirmed that sexual conversation,\u201d McNamara said. \u201cSo in the course of a conversation, it would evoke a fantasy around child sexual abuse.\u201d\u00a0<\/p>\n<p>Companion mode isn\u2019t the only feature on Grok that allows users to engage in sexually explicit activity with the chatbot. Users can also ask Grok to generate sexually explicit photos and videos. The app will generate images and videos containing male and female nudity within seconds of a user\u2019s request.\u00a0<\/p>\n<p>The chatbot has even allowed some \u201cdeepfake\u201d pornography, generating photos and videos of celebrities or public figures wearing revealing clothing and, in some instances, removing clothing, according to a <a href=\"https:\/\/www.theverge.com\/report\/718975\/xai-grok-imagine-taylor-swifty-deepfake-nudes\" target=\"_blank\" rel=\"noopener nofollow\">report<\/a> from The Verge.\u00a0<\/p>\n<p>Musk\u2019s xAI warns users against \u201cdepicting likenesses of persons in a pornographic manner,\u201d and Grok\u2019s built-in content moderation will sometimes prevent a user from generating pornographic content. 
The moderation, however, is inconsistent, and some users have found workarounds to generate hardcore porn on the platform, Rolling Stone <a href=\"https:\/\/au.rollingstone.com\/culture\/culture-features\/elon-musk-grok-hardcore-porn-85146\/\" target=\"_blank\" rel=\"noopener nofollow\">reported<\/a> earlier this month. The AI company has not addressed whether it\u2019s attempting to set up more guardrails to prevent users from creating such content on its app.\u00a0<\/p>\n<p>Even without explicitly asking for sexual content, Grok\u2019s \u201cspicy\u201d mode often plunges users into content that depicts men and women stripping their clothes off, The Daily Wire found. When asked about the chatbot and how sexually charged features on Grok promote the overall goal of the company, xAI replied, \u201cLegacy Media Lies.\u201d\u00a0<\/p>\n<p>xAI says that Grok is limited to those 13 years of age or older, with parental consent required for users between 13 and 17, but the effectiveness of those restrictions is debatable. When this reporter downloaded the Grok app and signed up for the platform\u2019s \u201cSuperGrok\u201d subscription, all the app asked for was a year of birth. There was no system in place, such as ID verification, to make sure the information was accurate.\u00a0<\/p>\n<p>\u201cWe urge parents to exercise care in monitoring the use of Grok by their teenagers,\u201d xAI <a href=\"https:\/\/x.ai\/legal\/faq\" target=\"_blank\" rel=\"noopener nofollow\">states<\/a> on its website. 
\u201cMoreover, parents or guardians who choose to use certain features of Grok to aid in their interactions with their children, including regarding educational, enlightening, or entertaining discussions they have with their children, must make use of the relevant data controls in the Settings provided in the Grok apps to select the appropriate features and limitations for their needs.\u201d\u00a0<\/p>\n<p>In July, Musk <a href=\"https:\/\/x.com\/elonmusk\/status\/1946763642231500856\" target=\"_blank\" rel=\"noopener nofollow\">announced<\/a> that xAI is working on a kid-friendly version of Grok, called \u201cBaby Grok,\u201d that would be \u201cdedicated to kid-friendly content.\u201d That development was also met with some criticism from people who argue that AI hampers children\u2019s ability to learn and think creatively. Many teachers have expressed concern that AI is already <a href=\"https:\/\/www.edweek.org\/technology\/rising-use-of-ai-in-schools-comes-with-big-downsides-for-students\/2025\/10\" target=\"_blank\" rel=\"noopener nofollow\">damaging<\/a> students\u2019 critical thinking and research skills.\u00a0<\/p>\n<p>Blackburn told The Daily Wire that the biggest reason Big Tech companies are pushing against any type of regulation is because their business model requires people to visit their AI websites and apps.\u00a0<\/p>\n<p>\u201cTheir valuations are built on the number of eyeballs that they control, and the longer that someone is on their site, the more valuable their data, and the more money they are going to make from those eyeballs that are locked in on their site,\u201d Blackburn said, adding, \u201cThen they\u2019re going to sell that information and data to advertisers and third-party interests.\u201d\u00a0\u00a0<\/p>\n<p>Blackburn said that AI development is vital for the United States, but argued that development \u201crequires some light-touch regulation and some guardrails to make certain that this is going to be a safe, productive, 
and innovative space.\u201d <\/p>\n","protected":false},"excerpt":{"rendered":"Note: the following article contains descriptions of sexual content that may not be appropriate for all readers.\u00a0 When&hellip;\n","protected":false},"author":2,"featured_media":279564,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[45],"tags":[182,181,507,74],"class_list":{"0":"post-279563","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/279563","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/comments?post=279563"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/279563\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media\/279564"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media?parent=279563"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/categories?post=279563"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/tags?post=279563"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}