{"id":309485,"date":"2026-03-02T14:23:11","date_gmt":"2026-03-02T14:23:11","guid":{"rendered":"https:\/\/www.newsbeep.com\/nz\/309485\/"},"modified":"2026-03-02T14:23:11","modified_gmt":"2026-03-02T14:23:11","slug":"im-on-the-meta-oversight-board-we-need-ai-protections-now-suzanne-nossel","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/nz\/309485\/","title":{"rendered":"I\u2019m on the Meta oversight board. We need AI protections now | Suzanne Nossel"},"content":{"rendered":"<p class=\"dcr-130mj7b\">The speed with which AI is transforming our lives is head-spinning. Unlike previous technological revolutions \u2013 radio, nuclear fission or the internet \u2013 governments are not leading the way. We know that AI can be dangerous; chatbots advise teens on suicide and may soon be capable of instructing on <a href=\"https:\/\/www.theguardian.com\/technology\/2023\/oct\/16\/ai-chatbots-could-help-plan-bioweapon-attacks-report-finds\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">how to create biological weapons<\/a>. Yet there is no equivalent to the Federal Drug Administration, testing new models for safety before public release. Unlike in the nuclear industry, companies often don\u2019t have to disclose dangerous breaches or accidents. The tech industry\u2019s lobbying muscle, Washington\u2019s paralyzing polarization, and the sheer complexity of such a potent, fast-moving technology have kept federal regulation at bay. European officials are facing pushback against rules that some claim hobble the continent\u2019s competitiveness. 
Although several US states are piloting AI laws, they operate in a tentative patchwork and <a href=\"https:\/\/www.theguardian.com\/us-news\/2025\/dec\/11\/trump-executive-order-artificial-intelligence\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Donald Trump has attempted<\/a> to render them invalid.<\/p>\n<p class=\"dcr-130mj7b\">Heads of AI platforms like OpenAI\u2019s ChatGPT and Google\u2019s Gemini say they care about safety. But owning the future of AI means pouring billions into models that not even their creators fully understand, and making choices like adding ads \u2013 and the capabilities that the Pentagon is <a href=\"https:\/\/www.nytimes.com\/2026\/02\/23\/us\/politics\/pentagon-anthropic-ai.html\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">now seeking from Anthropic<\/a> \u2013 that raise risk. Anthropic, which styles itself as the most conscientious frontier AI company, <a href=\"https:\/\/www.anthropic.com\/constitution\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">says its model<\/a> is trained to \u201cimagine how a thoughtful senior Anthropic employee\u201d would weigh helpfulness against possible harm. The directive echoes criticisms levied years ago over Silicon Valley companies that shaped the lives of users worldwide from insular boardrooms. Consumers don\u2019t believe they are in good hands. 
<a href=\"https:\/\/yougov.com\/en-us\/articles\/53701-most-americans-use-ai-but-still-dont-trust-it\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Fully 77% of Americans<\/a> surveyed last year think AI could pose a threat to humanity.<\/p>\n<p>double quotation markAt least until legislators act, independent oversight offers the potential to adjudicate between AI\u2019s potential and its perils<\/p>\n<p class=\"dcr-130mj7b\">We are not stuck between the elusive hope of robust government regulation and having the most powerful companies in history police themselves. At least until legislators act, independent oversight offers the potential to adjudicate between AI\u2019s potential and its perils. By embracing independent oversight, AI companies can demonstrate that they are serious enough about public trust to be willing to fight for it.<\/p>\n<p class=\"dcr-130mj7b\">The logic behind independent oversight is straightforward. No matter the good intentions of corporate executives, their duties to shareholders and investors shape how they approach trade-offs between cost and safety, incentivizing revenue and profits. While long-term considerations of corporate reputation, customer loyalty and ethics can act as speedbumps, winning the AI race demands appetite for risk. Belated reckonings with how social media could fuel killings, throw elections and impair youth mental health illustrate how the intoxicating power of technology can obscure flashing warning signals.<\/p>\n<p class=\"dcr-130mj7b\">Independent oversight of AI offers the potential to surface, analyze and address its risks, giving advocates and communities a bit more control over how these technologies remake society. Social media provides an example. 
In 2020, bruised by <a href=\"https:\/\/www.theguardian.com\/world\/2018\/apr\/03\/revealed-facebook-hate-speech-exploded-in-myanmar-during-rohingya-crisis\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">accusations it helped fuel<\/a> the Rohingya crisis in Myanmar, Meta (then Facebook) <a href=\"https:\/\/transparency.meta.com\/oversight\/creation-of-oversight-board\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">created an oversight board<\/a>, hoping to get the company out of the hot seat. Early the following year the company adopted a policy <a href=\"https:\/\/about.fb.com\/wp-content\/uploads\/2021\/03\/Facebooks-Corporate-Human-Rights-Policy.pdf\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">committing to following human rights law<\/a>. While the board, now five years old, has fallen short of what some people hoped might serve as a \u201csupreme court of Facebook\u201d, its record offers key lessons as to the prospects for effective independent oversight for AI, and why it matters.<\/p>\n<p class=\"dcr-130mj7b\">Oversight demands diverse perspectives. Like other frontier AI companies, Meta has users on every populated continent. Deciding what they can and cannot post from the safety of a Menlo Park campus left blind spots and stoked resentments. The oversight board\u2019s 21 members bring broad cultural and professional expertise to the adjudication of sensitive questions of content moderation (such as whether a violent video should be sharable as news or removed as an affront to the victim\u2019s dignity). 
The board, with members who have <a href=\"https:\/\/about.fb.com\/news\/2020\/05\/welcoming-the-oversight-board\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">lived in more than 27 countries<\/a>, includes conservatives and liberals, journalists, legal scholars, a former prime minister of Denmark and a Nobel peace prize laureate.<\/p>\n<p class=\"dcr-130mj7b\">The oversight board uses Meta\u2019s own \u201ccommunity standards\u201d to assess whether posts violate rules including prohibitions against bullying or support for terrorists. The board holds <a href=\"https:\/\/www.theguardian.com\/technology\/meta\" data-link-name=\"in body link\" data-component=\"auto-linked-tag\" rel=\"nofollow noopener\" target=\"_blank\">Meta<\/a> to its vow to uphold international human rights law, including Article 19 of the International Covenant on Civil and Political Rights, which enshrines freedom of expression. AI companies should make the same commitment and establish oversight to hold them to it. Unlike the first amendment in the US or the European Union\u2019s \u201cright to be forgotten\u201d online, human rights law offers a common currency across borders. Its norms provide methods of reasoning to guide decisions on AI, such as whether a bot\u2019s refusal to answer a question unjustifiably denies a user\u2019s right to information, or whether the repurposing of user data violates privacy rights.<\/p>\n<p class=\"dcr-130mj7b\">Accessibility, consultation and transparency are key. The oversight board accepts appeals from the public, announces the cases it chooses to review, invites public comments, and convenes sessions with experts and relevant communities. It has issued more than 200 decisions in detailed written opinions that have been cited by courts around the world.<\/p>\n<p class=\"dcr-130mj7b\">A voluntary oversight body is only as strong as the powers vested in it by its originating company. 
While the oversight board would like broader powers, it has given credit to Meta for going well beyond the lightweight advisory councils that other tech players have periodically convened and dissolved. Meta\u2019s oversight board has jurisdiction to decide whether a specific piece of content stays up or comes down, though using that authority over individual posts can feel like fighting a wildfire by blowing out embers. Its more consequential impact lies in choosing emblematic cases of errant content, offering public reasoning for decisions, and issuing recommendations to which Meta must respond. Meta has implemented 75% of the board\u2019s more than 300 recommendations, as reported <a href=\"https:\/\/www.oversightboard.com\/news\/from-bold-experiment-to-essential-institution\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">in December<\/a>, leading to significant changes for billions of users.<\/p>\n<p class=\"dcr-130mj7b\">These include providing notifications about what policy a user is alleged to have violated when content disappears, ensuring that rhetorical taunts and satire don\u2019t get removed as threats, and making sure that the company surges resources in crises like natural disasters and armed conflict. The board also issues detailed advisory opinions on larger policy issues, such as Meta\u2019s extension of leniency for policy violations by high-profile posters, or how much Covid-related misinformation should be removed as the pandemic died down. Although the board operates independently in making its decisions and recommendations, it relies on Meta for crucial information such as whether specific content determinations are made by human beings or automation, and what precisely went wrong when content was mistakenly removed. AI companies will have to offer at least as much visibility for oversight to have any meaning.<\/p>\n<p class=\"dcr-130mj7b\">As ever, money matters. 
Meta periodically puts the oversight board\u2019s funding in a trust so that it cannot be cut off overnight. But more diversified and assured resources would enhance the board\u2019s independence. Oversight of cutting-edge tech costs money. It requires funding for an expert staff to support analysis and decision-making and consultants who bring specific cultural and linguistic expertise. Given the hundreds of billions being invested in AI, however, the price of even robust oversight is negligible.<\/p>\n<p class=\"dcr-130mj7b\">AI is taking over our classrooms, colleges and corporations. Independent oversight is the least AI companies can do to make sure that, wittingly or not, they do not take over our rights as well.<\/p>\n","protected":false},"excerpt":{"rendered":"The speed with which AI is transforming our lives is head-spinning. Unlike previous technological revolutions \u2013 radio, nuclear&hellip;\n","protected":false},"author":2,"featured_media":309486,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[365,363,364,111,139,69,145],"class_list":{"0":"post-309485","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-new-zealand","12":"tag-newzealand","13":"tag-nz","14":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts\/309485","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/comments?post=309485"}],"version-history":[{"count":0,"hre
f":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts\/309485\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/media\/309486"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/media?parent=309485"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/categories?post=309485"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/tags?post=309485"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}