<h1>United States, China or Russia: Who writes the moral code for artificial intelligence?</h1>
<p>Who decides what values are embedded in artificial intelligence? This question may soon matter more than whether AI takes your job. Generative AI systems such as ChatGPT, Claude, Gemini and Grok are becoming the default gateway to knowledge. As people turn to them first (and often last) for information, these systems will shape what can be said and, over time, what can be thought. Their answers reflect embedded values that determine which perspectives are amplified, which are silenced, and how political events are framed.</p>
<p>In the United States and Europe, these values are contested but broadly shaped by traditions of individual rights, pluralism and free expression. Companies differ in the values they embed in AI. For example, OpenAI’s ChatGPT is cautious and rights-oriented, whereas Elon Musk’s Grok takes a more libertarian, free-speech-maximalist approach. All, however, operate within America’s political culture.</p>
<p>At the same time, Washington increasingly frames AI leadership as a front line in a geopolitical and civilisational contest. 
Success is tied to safeguarding “<a href="https://link.springer.com/article/10.1007/s00146-022-01499-8" rel="nofollow noopener" target="_blank">American values</a>”, including free expression and human rights, and successive administrations have held a firm belief that American private companies will out-innovate authoritarian nations and their state-centric AI development models.</p>
<p>America’s National Security Commission on Artificial Intelligence <a href="https://reports.nscai.gov/final-report/" rel="nofollow noopener" target="_blank">(NSCAI)</a>, formed in 2018 to address “national security and defence needs”, frames AI rivalry as a global “values competition” to be “embraced”, explicitly naming China. In 2021, bipartisan Senate bills followed, with <a href="https://www.heinrich.senate.gov/newsroom/press-releases/heinrich-portman-urge-national-science-foundation-to-prioritize-safety-and-ethics-in-artificial-intelligence-research-innovation" rel="nofollow noopener" target="_blank">Senators Martin Heinrich and Rob Portman</a> telling the National Science Foundation: “AI leadership by the United States is only possible if AI research, innovation, and use is rooted in American values … ethics and safety”.</p>
<p>The European Union broadly shares America’s liberal-democratic values, but is wary of US dominance in AI. Brussels argues American AI systems reflect distinctly US priorities, especially Silicon Valley’s commercial culture and engagement with America’s “culture wars”. Through the <a href="https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence" rel="nofollow noopener" target="_blank">AI Act</a>, the EU aims to encode European values, including dignity, privacy, transparency and the precautionary principle, into AI systems. 
The EU’s proposed AI sovereignty would ensure that technology used in Europe reflects its interpretation of liberalism, with stronger safeguards against harm than US models impose.</p>
<p>China: AI as a civilisational duty</p>
<p>Beyond Europe, resistance to American-built AI systems is stronger still. Authoritarian governments are already making the case that their societies should not have to accept Western values embedded in AI systems. The Chinese Communist Party, for example, insists AI must express <a href="https://digichina.stanford.edu/work/translation-measures-for-the-management-of-generative-artificial-intelligence-services-draft-for-comment-april-2023/" rel="nofollow noopener" target="_blank">“socialist core values”</a> to preserve harmony, stability and national security. This is presented as necessary to protect China’s “<a href="https://www.idcpc.gov.cn/english2023/ttxw/tttp/202307/t20230717_152741.html" rel="nofollow noopener" target="_blank">5000-year-old civilization</a>” from <a href="https://jamestown.org/program/ccp-cyber-sovereignty-contains-lessons-for-ais-future/" rel="nofollow noopener" target="_blank">“digital colonisation” and “Western ideological trends”</a>.</p>
<p><img decoding="async" loading="lazy" src="https://www.newsbeep.com/au/wp-content/uploads/2025/08/OpenAI%20comp.jpg" width="1200" height="800" alt="OpenAI" typeof="foaf:Image"/></p>
<p>Companies differ in the values they embed in AI. 
For example, OpenAI’s ChatGPT is cautious and rights-oriented (Jonathan Kemper/Unsplash)</p>
<p>China’s AI governance is deeply integrated into its <a href="https://www.loc.gov/item/global-legal-monitor/2023-07-18/china-generative-ai-measures-finalized/" rel="nofollow noopener" target="_blank">broader system of information control</a> and national security law. Developers must register algorithms with the authorities, pass security reviews, and filter politically sensitive content. For example, Chinese generative AI services <a href="https://www.theguardian.com/technology/2025/jan/28/we-tried-out-deepseek-it-works-well-until-we-asked-it-about-tiananmen-square-and-taiwan" rel="nofollow noopener" target="_blank">reject user queries</a> about Tiananmen Square, Taiwan’s independence, or Party leaders. Some models are programmed to steer conversations back to “positive” topics such as economic growth, technological progress, or traditional culture. Domestically, this is framed as a moral responsibility: ensuring AI reflects Chinese cultural and political values rather than alien Western norms. Internationally, Beijing argues every nation should <a href="https://jamestown.org/program/ccp-cyber-sovereignty-contains-lessons-for-ais-future/" rel="nofollow noopener" target="_blank">develop AI aligned to its own values</a>, rejecting the idea of universal values.</p>
<p>Russia: AI as civilisational defence</p>
<p>Russia likewise rejects liberal norms as universal, framing itself as a <a href="http://en.kremlin.ru/events/president/news/72444" rel="nofollow noopener" target="_blank">“state-civilisation”</a> rooted in Orthodoxy, traditional values and centralised authority. 
Foreign AI systems are portrayed as potential vectors of Western ideological influence, threatening Russia’s cultural integrity and political stability.</p>
<p>Russian AI governance aligns closely with its <a href="https://www.hrw.org/news/2020/06/18/russia-growing-internet-isolation-control-censorship" rel="nofollow noopener" target="_blank">“sovereign internet”</a> doctrine: mandatory domestic data storage, algorithmic filtering to block “harmful” content, and integration with state surveillance systems. These rules form part of wider laws restricting free expression and curbing debate on sensitive issues such as <a href="https://www.hrw.org/news/2020/06/18/russia-growing-internet-isolation-control-censorship" rel="nofollow noopener" target="_blank">LGBT rights, political freedoms and Covid-19</a>. Other measures erode privacy and online security, leaving no digital communication in Russia safe from state interference. By framing these controls as essential to defending Russian civilisation from ideological subversion, the Kremlin recasts digital authoritarianism as patriotic duty.</p>
<p>A diverging AI future</p>
<p>AI governance has moved beyond technical debates and is increasingly a contest over which set of values will define the boundaries of speech and political imagination.</p>
<p>American values are not universal, and many nations reject the idea that American norms should be hardwired into AI systems. Their desire to prevent Americanisation and to see their own norms reflected in the technology is understandable. Authoritarian governments, however, often invoke this desire as a civilisational necessity. By presenting AI as an extension of their civilisation, they can close off foreign influence, legitimise censorship, and present domestic information control as patriotic or morally essential. 
In China and Russia, this rhetoric entrenches surveillance, censorship and state-aligned AI development.</p>
<p>As AI governance becomes locked into this framing, we enter a world of fragmented digital spheres, each with its own boundaries of acceptable speech enforced by the politically powerful. The danger, then, is that the “clash of civilisations” framing becomes the architecture itself, turning political rhetoric into the hard-coded reality of the global digital order.</p>