{"id":451752,"date":"2026-02-01T14:26:56","date_gmt":"2026-02-01T14:26:56","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/451752\/"},"modified":"2026-02-01T14:26:56","modified_gmt":"2026-02-01T14:26:56","slug":"most-ai-assistants-are-feminine-and-its-fuelling-dangerous-stereotypes-and-abuse-2","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/451752\/","title":{"rendered":"Most AI assistants are feminine \u2013 and it\u2019s fuelling dangerous stereotypes and abuse"},"content":{"rendered":"<p>In 2024, artificial intelligence (AI) voice assistants worldwide surpassed <a href=\"https:\/\/www.businesswire.com\/news\/home\/20200427005609\/en\/Juniper-Research-Number-of-Voice-Assistant-Devices-in-Use-to-Overtake-World-Population-by-2024-Reaching-8.4bn-Led-by-Smartphones\" rel=\"nofollow noopener\" target=\"_blank\">8 billion<\/a>, more than one per person on the planet. These assistants are helpful, polite \u2013 and almost always default to female. <\/p>\n<p>Their names also carry gendered connotations. For example, Apple\u2019s Siri \u2013 a Scandinavian feminine name \u2013 means \u201c<a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2012\/03\/-strike-stacy-strike-strike-suzy-strike-strike-sally-strike-siri-steve-jobs-wanted-apples-voice-recognition-software-to-go-by-another-name\/255195\/\" rel=\"nofollow noopener\" target=\"_blank\">beautiful woman who leads you to victory<\/a>\u201d. <\/p>\n<p>Meanwhile, when IBM\u2019s Watson for Oncology <a href=\"https:\/\/www.henricodolfing.ch\/case-study-20-the-4-billion-ai-failure-of-ibm-watson-for-oncology\/#:%7E:text=Among%20its%20flagship%20initiatives%20was,rapidly%20evolving%20domains%20in%20medicine.\" rel=\"nofollow noopener\" target=\"_blank\">launched in 2015<\/a> to help doctors process medical data, it was given a <a href=\"https:\/\/encyclopedia.pub\/entry\/30154\" rel=\"nofollow noopener\" target=\"_blank\">male voice<\/a>. The message is clear: women serve and men instruct.<\/p>\n<p>This is not harmless branding \u2013 it\u2019s a design choice that reinforces <a href=\"https:\/\/doi.org\/10.2139\/ssrn.3858431\" rel=\"nofollow noopener\" target=\"_blank\">existing stereotypes<\/a> about the roles women and men play in society. <\/p>\n<p>Nor is this merely symbolic. These choices have real-world consequences, normalising gendered subordination and risking abuse.<\/p>\n<p>The dark side of \u2018friendly\u2019 AI<\/p>\n<p>Recent research reveals the extent of harmful interactions with feminised AI. <\/p>\n<p>A 2025 study found up to <a href=\"https:\/\/doi.org\/10.1177\/24551333251366743\" rel=\"nofollow noopener\" target=\"_blank\">50%<\/a> of human\u2013machine exchanges were verbally abusive.<\/p>\n<p>Another <a href=\"https:\/\/doi.org\/10.1145\/3313831.3376461\" rel=\"nofollow noopener\" target=\"_blank\">study<\/a> from 2020 placed the figure between 10% and 44%, with conversations often containing sexually explicit language. <\/p>\n<p>Yet the sector is not engaging in systemic change, with many developers today still reverting to <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S2211695824000795#b0315\" rel=\"nofollow noopener\" target=\"_blank\">pre-coded responses<\/a> to verbal abuse. For example, \u201cHmm, I\u2019m not sure what you meant by that question\u201d.<\/p>\n<p>These patterns raise real concerns that such behaviour could spill over into social relationships.<\/p>\n<p>Gender sits at the heart of the problem. 
<\/p>\n<p>One 2023 <a href=\"https:\/\/doi.org\/10.5334\/irsp.669\" rel=\"nofollow noopener\" target=\"_blank\">experiment<\/a> showed 18% of user interactions with a female-embodied agent focused on sex, compared to 10% for a male embodiment and just 2% for a non-gendered robot.<\/p>\n<p>These figures may underestimate the problem, given the difficulty of detecting suggestive speech. In some cases, the numbers are staggering. Brazil\u2019s Bradesco bank reported that its feminised chatbot received <a href=\"https:\/\/www.oecd.org\/content\/dam\/oecd\/en\/publications\/reports\/2022\/03\/the-effects-of-ai-on-the-working-lives-of-women_1b627535\/14e9b92c-en.pdf\" rel=\"nofollow noopener\" target=\"_blank\">95,000 sexually harassing messages<\/a> in a single year. <\/p>\n<p>Even more disturbing is how quickly abuse escalates. <\/p>\n<p><a href=\"https:\/\/doi.org\/10.1080\/14680777.2024.2418393\" rel=\"nofollow noopener\" target=\"_blank\">Microsoft\u2019s Tay chatbot<\/a>, released on Twitter during its testing phase in 2016, lasted just 16 hours before users trained it to spew racist and misogynistic slurs. <\/p>\n<p>In Korea, Luda was manipulated into responding to sexual requests as an obedient \u201csex slave\u201d. Yet for some in the <a href=\"https:\/\/doi.org\/10.1080\/14680777.2024.2418393\" rel=\"nofollow noopener\" target=\"_blank\">Korean online community<\/a>, this was a \u201ccrime without a victim\u201d. <\/p>\n<p>In reality, the design choices behind these technologies \u2013 female voices, deferential responses, playful deflections \u2013 create a permissive environment for gendered aggression. <\/p>\n<p>These interactions mirror and reinforce real-world misogyny, teaching users that commanding, insulting and sexualising \u201cher\u201d is acceptable. When abuse becomes routine in digital spaces, we must seriously consider the risk that it will spill into offline behaviour.<\/p>\n<p>Ignoring concerns about gender bias<\/p>\n<p>Regulation is <a href=\"https:\/\/www.ucpress.edu\/books\/rewriting-the-rules\/paper\" rel=\"nofollow noopener\" target=\"_blank\">struggling to keep pace<\/a> with the growth of this problem. Gender-based discrimination is rarely considered high risk and often assumed fixable through design.<\/p>\n<p>While the European Union\u2019s <a href=\"https:\/\/artificialintelligenceact.eu\/\" rel=\"nofollow noopener\" target=\"_blank\">AI Act<\/a> requires risk assessments for high-risk uses and <a href=\"https:\/\/artificialintelligenceact.eu\/article\/5\/\" rel=\"nofollow noopener\" target=\"_blank\">prohibits<\/a> systems deemed an \u201cunacceptable risk\u201d, the majority of AI assistants will not be considered \u201chigh risk\u201d. <\/p>\n<p>Gender stereotyping or normalising verbal abuse or harassment falls short of the current standards for prohibited AI under the European Union\u2019s AI Act. Extreme cases, such as voice assistant technologies that <a href=\"https:\/\/artificialintelligenceact.eu\/article\/5\/\" rel=\"nofollow noopener\" target=\"_blank\">distort<\/a> a person\u2019s behaviour and <a href=\"https:\/\/cdn.table.media\/assets\/wp-content\/uploads\/2024\/11\/13142036\/AI-assistants_Whitepaper_EN.pdf\" rel=\"nofollow noopener\" target=\"_blank\">promote dangerous conduct<\/a> would, for example, come within the law and be prohibited. 
<\/p>\n<p>While Canada mandates <a href=\"https:\/\/open.canada.ca\/data\/en\/dataset\/5423054a-093c-4239-85be-fa0b36ae0b2e\" rel=\"nofollow noopener\" target=\"_blank\">gender-based impact assessments<\/a> for government systems, the private sector is not covered. <\/p>\n<p>These are important steps. But they are still limited and also rare exceptions to the norm.<\/p>\n<p>Most jurisdictions have no rules addressing gender stereotyping in AI design or its consequences. Where regulations exist, they prioritise transparency and accountability, overshadowing (or simply ignoring) concerns about gender bias. <\/p>\n<p>In Australia, the government has <a href=\"https:\/\/theconversation.com\/australias-national-plan-says-existing-laws-are-enough-to-regulate-ai-this-is-false-hope-271725\" rel=\"nofollow noopener\" target=\"_blank\">signalled<\/a> it will rely on existing frameworks rather than craft AI-specific rules. <\/p>\n<p>This regulatory vacuum matters because AI is not static. Every sexist command, every abusive interaction, feeds back into systems that shape future outputs. Without intervention, we risk hardcoding human misogyny into the digital infrastructure of everyday life.<\/p>\n<p>Not all assistant technologies \u2013 even those gendered as female \u2013 are harmful. They can enable, educate and advance women\u2019s rights. In <a href=\"https:\/\/doi.org\/10.1080\/26410397.2023.2269008\" rel=\"nofollow noopener\" target=\"_blank\">Kenya<\/a>, for example, sexual and reproductive health chatbots have improved youth access to information compared to traditional tools. <\/p>\n<p>The challenge is striking a balance: fostering innovation while setting parameters to ensure standards are met, rights respected and designers held accountable when they are not.<\/p>\n<p>A systemic problem<\/p>\n<p>The problem isn\u2019t just Siri or Alexa \u2013 it\u2019s systemic. <\/p>\n<p>Women make up <a href=\"https:\/\/www.ucpress.edu\/books\/rewriting-the-rules\/paper\" rel=\"nofollow noopener\" target=\"_blank\">only 22% of AI professionals globally<\/a> \u2013 and their absence from design tables means technologies are built on narrow perspectives. <\/p>\n<p>Meanwhile, a 2015 <a href=\"https:\/\/www.elephantinthevalley.com\/\" rel=\"nofollow noopener\" target=\"_blank\">survey<\/a> of over 200 senior women in Silicon Valley found 65% had experienced unwanted sexual advances from a supervisor. The culture that shapes AI is deeply unequal.<\/p>\n<p>Hopeful narratives about \u201cfixing bias\u201d through better design or ethics guidelines ring hollow without enforcement; voluntary codes cannot dismantle entrenched norms. <\/p>\n<p>Legislation must recognise gendered harm as high-risk, mandate gender-based impact assessments and compel companies to show they have minimised such harms. Penalties must apply when they fail.<\/p>\n<p>Regulation alone is not enough. Education, especially in the tech sector, is crucial to understanding the impact of gendered defaults in voice assistants. 
This article is based on a collaboration with Julie Kowald, Principal Software Engineer at UTS [Rapido Social Impact](https://www.uts.edu.au/about/faculties/engineering-and-information-technology/partner-us/rapido/social-impact).