{"id":385296,"date":"2026-01-23T02:15:16","date_gmt":"2026-01-23T02:15:16","guid":{"rendered":"https:\/\/www.newsbeep.com\/uk\/385296\/"},"modified":"2026-01-23T02:15:16","modified_gmt":"2026-01-23T02:15:16","slug":"anthropic-writes-23000-word-constitution-for-claude-the-register","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/uk\/385296\/","title":{"rendered":"Anthropic writes 23,000-word &#8216;constitution&#8217; for Claude \u2022 The Register"},"content":{"rendered":"<p>The Constitution of the United States of America is about 7,500 words long, a factoid The Register mentions because on Wednesday AI company Anthropic delivered an updated 23,000-word constitution for its Claude family of AI models.<\/p>\n<p>In an <a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.anthropic.com\/news\/claude-new-constitution\">explainer<\/a> document, the company notes that the 2023 version of its constitution (which came in at just ~2,700 words) was a mere \u201clist of standalone principles\u201d that is no longer useful because \u201cAI models like Claude need to understand why we want them to behave in certain ways, and we need to explain this to them rather than merely specify what we want them to do.\u201d<\/p>\n<p>The company therefore describes the updated constitution as two things:<\/p>\n<p>An honest and sincere attempt to help Claude understand its situation, our motives, and the reasons we shape Claude in the ways we do; and<\/p>\n<p>A detailed description of Anthropic\u2019s vision for Claude\u2019s values and behavior; a holistic document that explains the context in which Claude operates and the kind of entity we would like Claude to be.<\/p>\n<p>Anthropic hopes that Claude\u2019s output will reflect the content of the constitution by being:<\/p>\n<p>Broadly safe: not undermining appropriate human mechanisms to oversee AI during the current phase of development;<\/p>\n<p>Broadly ethical: being honest, acting according to 
good values, and avoiding actions that are inappropriate, dangerous, or harmful;<\/p>\n<p>Compliant with Anthropic\u2019s guidelines: acting in accordance with more specific guidelines from Anthropic where relevant;<\/p>\n<p>Genuinely helpful: benefiting the operators and users they interact with.<\/p>\n<p>If Claude is conflicted, Anthropic wants the model to \u201cgenerally prioritize these properties in the order in which they are listed.\u201d<\/p>\n<p>Is it sentient?<\/p>\n<p>Note the mention of Claude being an \u201centity,\u201d because the document later describes the model as \u201ca genuinely novel kind of entity in the world\u201d and suggests \u201cwe should lean into Claude having an identity, and help it be positive and stable.\u201d<\/p>\n<p>The constitution also concludes that Claude \u201cmay have some functional version of emotions or feelings\u201d and dedicates a substantial section to contemplating the appropriate ways for humans to treat the model.<\/p>\n<p>One part of that section considers Claude\u2019s moral status by debating whether Anthropic\u2019s LLM is a \u201cmoral patient.\u201d The counterpart to that term is \u201cmoral agent\u201d \u2013 an entity that can discern right and wrong and can be held accountable for its choices. Most adult humans are moral agents. Human children are considered moral patients because they are not yet able to understand morality. 
Moral agents therefore have an obligation to make ethical decisions on their behalf.<\/p>\n<p>Anthropic can\u2019t decide if Claude is a moral patient, or if it meets any current definition of sentience.<\/p>\n<p>The constitution settles for an aspiration for Anthropic to \u201cmake sure that we\u2019re not unduly influenced by incentives to ignore the potential moral status of AI models, and that we always take reasonable steps to improve their wellbeing under uncertainty.\u201d<\/p>\n<p>TL;DR \u2013 Anthropic thinks Claude is some kind of entity to which it owes something approaching a duty of care.<\/p>\n<p>Would The Register write narky things about Claude?<\/p>\n<p>One section of the constitution that caught this Vulture\u2019s eye is titled \u201cBalancing helpfulness with other values.\u201d<\/p>\n<p>It opens by explaining \u201cAnthropic wants Claude to be used for tasks that are good for its principals but also good for society and the world\u201d \u2013 a fresh take on Silicon Valley\u2019s \u201cmaking the world a better place\u201d platitude \u2013 that offers a couple of interesting metaphors for how the company hopes its models behave.<\/p>\n<p>Here\u2019s one of them:<\/p>\n<p>Elsewhere, the constitution points out that Claude is central to Anthropic\u2019s commercial success, which The Register mentions because the company is essentially saying it wants its models to behave in ways its staff deem likely to be profitable.<\/p>\n<p>Here\u2019s the second:<\/p>\n<p>The Register feels seen!<\/p>\n<p>Anthropic expects it will revisit its constitution, which it describes as \u201ca perpetual work in progress.\u201d<\/p>\n<p>\u201cThis document is likely to change in important ways in the future,\u201d it states. 
\u201cIt is likely that aspects of our current thinking will later look misguided and perhaps even deeply wrong in retrospect, but our intention is to revise it as the situation progresses and our understanding improves.\u201d<\/p>\n<p>In its explainer document, Anthropic argues that the document is important because \u201cAt some point in the future, and perhaps soon, documents like Claude\u2019s constitution might matter a lot \u2013 much more than they do now.\u201d<\/p>\n<p>\u201cPowerful AI models will be a new kind of force in the world, and those who are creating them have a chance to help them embody the best in humanity. We hope this new constitution is a step in that direction.\u201d<\/p>\n<p>It seems apt to end this story by noting that Isaac Asimov\u2019s <a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/en.wikipedia.org\/wiki\/Three_Laws_of_Robotics\">Three Laws of Robotics<\/a> fit into 64 words and open \u201cA robot may not injure a human being or, through inaction, allow a human being to come to harm.\u201d Maybe such brevity is currently beyond Anthropic, and Claude. 
\u00ae<\/p>\n","protected":false},"excerpt":{"rendered":"The Constitution of the United States of America is about 7,500 words long, a factoid The Register mentions&hellip;\n","protected":false},"author":2,"featured_media":385297,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[554,733,4308,86,56,54,55],"class_list":{"0":"post-385296","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology","12":"tag-uk","13":"tag-united-kingdom","14":"tag-unitedkingdom"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/385296","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/comments?post=385296"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/385296\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media\/385297"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media?parent=385296"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/categories?post=385296"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/tags?post=385296"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}