{"id":373275,"date":"2026-04-03T17:02:09","date_gmt":"2026-04-03T17:02:09","guid":{"rendered":"https:\/\/www.newsbeep.com\/il\/373275\/"},"modified":"2026-04-03T17:02:09","modified_gmt":"2026-04-03T17:02:09","slug":"the-facebook-insider-building-content-moderation-for-the-ai-era","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/il\/373275\/","title":{"rendered":"The Facebook insider building content moderation for the AI era"},"content":{"rendered":"<p id=\"speakable-summary\" class=\"wp-block-paragraph\">When Brett Levenson left Apple in 2019 to lead business integrity at Facebook, the social media giant was in the thick of the <a href=\"https:\/\/techcrunch.com\/2019\/07\/25\/facebook-ignored-staff-warnings-about-sketchy-cambridge-analytica-in-september-2015\/\" rel=\"nofollow noopener\" target=\"_blank\">Cambridge Analytica<\/a> fallout. At the time, he thought he could simply fix Facebook\u2019s content moderation problem with better technology.\u00a0<\/p>\n<p class=\"wp-block-paragraph\">The problem, he quickly learned, ran deeper than technology. Human reviewers were expected to memorize a 40-page policy document that had been machine-translated into their language, he said. Then they had about 30 seconds per piece of flagged content to decide not just whether that\u00a0 content violated the rules, but what to do about it: block it, ban the user, limit the spread. Those quick calls were only \u201cslightly better than 50% accurate,\u201d according to Levenson.<\/p>\n<p class=\"wp-block-paragraph\">\u201cIt was kind of like flipping a coin, whether the human reviewers could actually address policies correctly, and this was many days after the harm had already occurred anyway,\u201d Levenson told TechCrunch.<\/p>\n<p class=\"wp-block-paragraph\">That sort of delayed, reactive approach is not sustainable in a world of nimble and well-funded adversarial actors. 
The rise of AI chatbots has only compounded the problem, as content moderation failures have resulted in a string of high-profile incidents, like chatbots providing teens with <a href=\"https:\/\/techcrunch.com\/2025\/11\/23\/chatgpt-told-them-they-were-special-their-families-say-it-led-to-tragedy\/\" rel=\"nofollow noopener\" target=\"_blank\">self-harm guidance<\/a> or <a href=\"https:\/\/techcrunch.com\/2026\/01\/16\/california-ag-sends-musks-xai-a-cease-and-desist-order-over-sexual-deepfakes\/\" rel=\"nofollow noopener\" target=\"_blank\">AI-generated imagery<\/a> evading safety filters.<\/p>\n<p class=\"wp-block-paragraph\">Levenson\u2019s frustration led to the idea of \u201cpolicy as code\u201d \u2014 a way to turn static policy documents into executable, updatable logic tightly coupled to enforcement. That insight became the basis for <a href=\"https:\/\/moonbounce.io\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Moonbounce<\/a>, which announced on Friday it has raised $12 million in funding, TechCrunch has exclusively learned. The round was co-led by Amplify Partners and StepStone Group.<\/p>\n<p class=\"wp-block-paragraph\">Moonbounce works with companies to provide an additional safety layer wherever content is generated, whether by a user or by AI. The company has trained its own large language model to look at a customer\u2019s policy documents, evaluate content at runtime, provide a response in 300 milliseconds or less, and take action. 
Depending on customer preference, that action could look like Moonbounce\u2019s system slowing down distribution while the content awaits human review, or it might block high-risk content in the moment.\u00a0<\/p>\n<p class=\"wp-block-paragraph\">Today, Moonbounce serves three main verticals: platforms dealing with user-generated content, like dating apps; AI companies building characters or companions; and AI image generators.\u00a0<\/p>\n<p class=\"wp-block-paragraph\">Moonbounce is supporting more than 40 million daily reviews and serving over 100 million daily active users on the platform, Levenson said. Customers include AI companion startup Channel AI, image and video generation company Civitai, and character roleplay platforms Dippy AI and Moescape.\u00a0<\/p>\n<p class=\"wp-block-paragraph\">\u201cSafety can actually be a product benefit,\u201d Levenson told TechCrunch. \u201cIt just never has been because it\u2019s always a thing that happens later, not a thing you can actually build into your product. 
And we see our customers are finding really interesting and innovative ways to use our technology to make safety a differentiator, and part of their product story.\u201d<\/p>\n<p class=\"wp-block-paragraph\">Tinder\u2019s head of trust and safety <a href=\"https:\/\/www.youtube.com\/watch?v=ViWAHYFjb90\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">recently explained<\/a> how the dating platform uses these types of LLM-powered services to achieve a 10x improvement in detection accuracy.<\/p>\n<p class=\"wp-block-paragraph\">\u201cContent moderation has always been a problem that plagued large online platforms, but now with LLMs at the heart of every application, this challenge is even more daunting,\u201d Lenny Pruss, general partner at Amplify Partners, said in a statement. \u201cWe invested in Moonbounce because we envision a world where objective, real-time guardrails become the enabling backbone of every AI-mediated application.\u201d<\/p>\n<p class=\"wp-block-paragraph\">AI companies are facing mounting legal and reputational pressure after chatbots have been accused of pushing teenagers and vulnerable users toward <a href=\"https:\/\/www.npr.org\/sections\/shots-health-news\/2025\/09\/19\/nx-s1-5545749\/ai-chatbots-safety-openai-meta-characterai-teens-suicide\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">suicide<\/a> and image generators like xAI\u2019s Grok have been used to create <a href=\"https:\/\/www.nytimes.com\/2026\/01\/22\/technology\/grok-x-ai-elon-musk-deepfakes.html\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">nonconsensual<\/a> nude imagery. Internal safety guardrails are clearly failing, and that failure is becoming a liability question. 
Levenson said AI companies are increasingly looking outside their own walls for help beefing up safety infrastructure.\u00a0<\/p>\n<p class=\"wp-block-paragraph\">\u201cWe\u2019re a third party sitting between the user and the chatbot, so our system isn\u2019t inundated with context the way the chat itself is,\u201d Levenson said. \u201cThe chatbot itself has to remember, potentially, tens of thousands of tokens that have come before\u2026We\u2019re solely worried about enforcing rules at runtime.\u201d<\/p>\n<p class=\"wp-block-paragraph\">Levenson runs the 12-person company with his former Apple colleague Ash Bhardwaj, who previously built large-scale cloud and AI infrastructure across the iPhone-maker\u2019s core offerings. Their next focus is a capability called \u201citerative steering,\u201d developed in response to cases like the <a href=\"https:\/\/techcrunch.com\/2024\/10\/23\/lawsuit-blames-character-ai-in-death-of-14-year-old-boy\/\" rel=\"nofollow noopener\" target=\"_blank\">2024 suicide of a 14-year-old Florida boy<\/a> who became obsessed with a Character AI chatbot. 
Rather than a blunt refusal when harmful topics arise, the system would intercept the conversation and redirect it, modifying prompts in real time to push the chatbot toward a more actively supportive response.<\/p>\n<p class=\"wp-block-paragraph\">\u201cWe hope to be able to add to our actions toolkit the ability to steer the chatbot in a better direction to, essentially, take the user\u2019s prompt and modify it to force the chatbot to be not just an empathetic listener, but a helpful listener in those situations,\u201d Levenson said.\u00a0<\/p>\n<p class=\"wp-block-paragraph\">When asked whether his exit strategy involved an acquisition by a company like Meta, bringing his work on content moderation full circle, Levenson said he recognizes how well Moonbounce would fit into his old employer\u2019s stack, as well as his own fiduciary duties as a CEO.\u00a0<\/p>\n<p class=\"wp-block-paragraph\">\u201cMy investors would kill me for saying this, but I would hate to see someone buy us and then restrict the technology,\u201d he said. 
\u201cLike, \u2018Okay, this is ours now, and nobody else can benefit from it.\u2019\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"When Brett Levenson left Apple in 2019 to lead business integrity at Facebook, the social media giant was&hellip;\n","protected":false},"author":2,"featured_media":373276,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[345,43595,179141,343,344,43750,6947,85,46,179142,179143,125],"class_list":{"0":"post-373275","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-ai-safety","10":"tag-amplify-partners","11":"tag-artificial-intelligence","12":"tag-artificialintelligence","13":"tag-content-moderation","14":"tag-exclusive","15":"tag-il","16":"tag-israel","17":"tag-moonbounce","18":"tag-stepstone-group","19":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/373275","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/comments?post=373275"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/373275\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media\/373276"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media?parent=373275"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/categories?post=373275"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp
-json\/wp\/v2\/tags?post=373275"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}