{"id":54044,"date":"2025-08-08T23:20:16","date_gmt":"2025-08-08T23:20:16","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/54044\/"},"modified":"2025-08-08T23:20:16","modified_gmt":"2025-08-08T23:20:16","slug":"stop-policing-punctuation-now-why-ai-detection-needs-a-rethink","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/54044\/","title":{"rendered":"Stop Policing Punctuation Now: Why AI Detection Needs a Rethink\u00a0"},"content":{"rendered":"<p>By copying the HTML below, you will be adhering to all our guidelines.<\/p>\n<p>By Andrew Welsman and Janine Arantes\u00a0<\/p>\n<p>Like many educators, our social media feeds have been filled with commentary on the impact of AI on teaching, learning, and assessment. One problem appears intractable; namely, how can we tell when students have used AI-generated text in their work? We\u2019re not writing to offer an answer to that question; indeed, at this point, it\u2019s clear that there isn\u2019t a reliable method of separating \u2018synthetic\u2019 text from \u2018organic\u2019. Instead, we want to bring attention to a troubling possibility, one that is of perhaps greater significance to our role as educators who don\u2019t necessarily teach the finer points of literacy and writing while teaching just about every subject. It\u2019s this: In our efforts to police the use of AI in learning and assessment, are we likely to diminish the quality and character of our own writing and that of our students in the process?\u00a0<\/p>\n<p>While many of us have tried to keep up with new tools, evolving policies, and changes with detection software, one strategy has become increasingly popular: look for stylistic \u2018tells\u2019. It appears to us that the search for shortcuts in AI detection has led to what psychologists Shah and Oppenheimer call the \u201c<a href=\"&quot;https:\/\/journals.sagepub.com\/doi\/abs\/10.1111\/j.1467-8721.2009.01642.x&quot;\">path of least resistance<\/a>\u201d. 
That is, we gravitate to cues that are easily perceived, easily evaluated, and easily judged. In other words, heuristics. Em dashes? Colons in titles? Specific words or expressions? All have been called out as signs of AI authorship. But here is the problem: these shortcuts don\u2019t work. Worse, they normalise suspicion of well-crafted, edited, and even creative writing. When we start to see polished punctuation and consistent tone as evidence of cheating, we inadvertently signal to students and our peers that good writing is suspect.\u00a0<\/p>\n<p>Why?<\/p>\n<p>Let\u2019s start with an example. On <a href=\"https:\/\/medium.com\/@brentcsutoras\/the-em-dash-dilemma-how-a-punctuation-mark-became-ais-stubborn-signature-684fbcc9f559\">social media<\/a>, we have seen commentators breezily claim that the use of the em dash\u2014the long dash that can be used in place of parentheses or a semi-colon\u2014is the \u201csmoking gun\u201d betraying that a text was authored by AI. As a self-professed fan of the em dash, Andrew went searching. A cursory googling of the phrase \u201cthe em dash as an indicator of AI content\u201d revealed that this is a <a href=\"https:\/\/www.plagiarismtoday.com\/2025\/06\/26\/em-dashes-hyphens-and-spotting-ai-writing\/\">popular topic<\/a>, with plenty of commentary traded for and against the notion. Some suggest that em dashes make for well-styled and cadenced writing, while others claim that em dashes appear so infrequently in so-called \u2018normal\u2019 writing that seeing one \u2018in the wild\u2019 is always suspect. But the conjecture and speculation don\u2019t end with the em dash.\u00a0<\/p>\n<p>The colon in titling: Another AI tell?\u00a0<\/p>\n<p>So, we dug deeper. Another purported \u201cgive-away\u201d is the use of the colon to separate titles and sub-titles in a body of text. 
This seemed a bit of a reach, as academic writing in particular often employs the colon to sub-divide or elaborate on concepts in titles and subtitles. At this point, we realized we needed to consult the proverbial source of these claims, so off to prominent large language models (LLMs) we went. <\/p>\n<p>We each tried out different LLMs. In ChatGPT, Andrew started with the prompt \u201cWhat are some common tells that a text is in fact AI-authored?\u201d It came back with a list of 10, ranging from \u201cOverly Formal or Polished Language\u201d and \u201cOveruse of Transitional Phrases\u201d to \u201cToo Balanced or Fence Sitting,\u201d all of which could be claimed to be common in academic writing. When Janine asked the same question, by contrast, Gemini (2.4 Flash) replied: \u201crepetitive phrasing, formulaic sentence structures, overly polished or generic language lacking deep insight or personal nuance, the frequent use of certain transitional phrases and words (e.g., &#8220;delve,&#8221; &#8220;tapestry&#8221;), and occasionally, factual errors or fabricated citations\u201d.\u00a0<\/p>\n<p>Great questions<\/p>\n<p>Admittedly, many of these were stylistic claims rather than observations about punctuation, so we decided to dig deeper. 
When ChatGPT was asked \u201cWhat about punctuation?\u201d it replied, \u201cGreat question \u2013 punctuation is another area where AI-generated text can give itself away.\u201d It seemed that ChatGPT was confirming the blogosphere\u2019s punctuation and style concerns about authenticity, noting that the overuse of things like commas, em dashes, colons, and parentheses are \u201cpunctuation-related tells.\u201d Janine asked the same question, and Gemini replied: \u201cAI-authored text can exhibit &#8220;tells&#8221; through the overuse of specific punctuation marks like the em dash or colon, the consistent application of flawlessly textbook-correct punctuation lacking human variation or &#8220;casual inconsistencies,&#8221; and the absence of the nuanced stylistic choices typical of human writers.\u201d <\/p>\n<p>Both lists of \u201cAI tells\u201d included overly polished work, the overuse of certain phrases, and the consistent, almost perfect use of specific punctuation like em dashes and colons. The similarities were obvious: consistently correct or textbook punctuation, and limited to no typos or casual inconsistencies. Was grammatically correct, proof-read work now concerning? Or worse, the sole domain of LLMs? Should we, as authors and educators, aim (as it were) to be more \u201ccasually inconsistent\u201d in our writing so as not to appear to have used an LLM? And teach our students this in turn?\u00a0<\/p>\n<p>On the spread of lazy AI detection heuristics\u00a0<\/p>\n<p>In a fascinating paper, \u201c<a href=\"https:\/\/journals.sagepub.com\/doi\/abs\/10.1111\/j.1467-8721.2009.01642.x\">The Path of Least Resistance: Using Easy-to-Access Information<\/a>,\u201d Princeton psychologists Shah and Oppenheimer propose a framework for understanding how people use highly accessible cues in everyday decision making. 
Their framework, in which they explain how people tend to use easily perceived, produced, and evaluated cues to make decisions, has particular relevance to a scenario in which a teacher is attempting to detect AI-generated text. As a visible linguistic marker, punctuation could be regarded as an example of a highly accessible cue. After all, types and patterns of punctuation are easily perceived and evaluated by readers, much more so than nebulous concepts such as \u201ctone\u201d and \u201ccomplexity\u201d of vocabulary. One can imagine why punctuation as a cue for detecting AI-generated work might make for a seductive proposition, and why it has become the subject of so much social media speculation.\u00a0<\/p>\n<p>Whether the punctuation \u201crules of thumb\u201d for AI detection being promoted on social media are credible or not is one matter. One thing is nevertheless certain: the idea of punctuation as a tool for AI detection is pernicious\u2014the em dash and other proposed AI detection heuristics are now in the public consciousness and being talked about as if they are useful, despite noteworthy appeals to reason <a href=\"https:\/\/www.rollingstone.com\/culture\/culture-features\/chatgpt-hypen-em-dash-ai-writing-1235314945\/\">here<\/a>, <a href=\"https:\/\/victoriafraise.medium.com\/no-em-dashes-are-not-a-sign-of-ai-f14629a4d217\">here<\/a>, and <a href=\"https:\/\/www.washingtonpost.com\/technology\/2025\/04\/09\/ai-em-dash-writing-punctuation-chatgpt\/\">here<\/a>. Our concern as educators is this: Collectively, we may be in real danger of assimilating these \u201ceasy\u201d cues and applying them (whether consciously or otherwise) to our own writing and when assessing the work of our students.\u00a0<\/p>\n<p>Where might this end?\u00a0<\/p>\n<p>Educators are not immune to bias. In the absence of certainty, it\u2019s natural for us to lean into intuition. 
But intuition shaped by social media tropes is not a sound basis for academic judgement. Perhaps the deeper danger is that lazy heuristics for AI detection reduce our ability to actually teach and lower our expectations, as they cast suspicion on students and peers who have worked hard to improve their writing.\u00a0<\/p>\n<p>What if we \u2018normalise\u2019 our expectations for authentic writing to be automatically suspicious of polish, punctuation, and proof-reading in our students\u2019 work? Rules of thumb and looking for \u201cAI tells\u201d are not the answer. They may make for seductive heuristics in the present AI-policing climate; but let\u2019s be clear\u2014they\u2019re lazy and specious within the domains of scholarship and academic writing. No one has an answer to AI detection; there is no silver bullet, algorithmic or otherwise, to help us. In some meaningful ways, <a href=\"https:\/\/futurism.com\/ai-model-turing-test\">the Turing test appears to have been passed<\/a>.\u00a0But one thing is for sure: we need a new baseline. <\/p>\n<p>Make sure you know the human<\/p>\n<p>What that looks like is currently being debated across the globe. Some have returned to pen and paper, handwritten notebooks, and oral defences, but in a context where good writing is suspect, this suggests one thing: if you can\u2019t distinguish AI-generated text from human, make sure you know the human (in this case, your students). That, at least, is what we should be aiming for in Higher Ed. Small classes help; getting to know your students early in a course\u2014even better. Whether you re-engage students with pen and paper, or use verbal presentations as components of teaching, learning, and assessment, one thing is clear: In the arms race of academic integrity in the age of AI, knowing your students, rather than relying on rules of thumb or expensive detection algorithms, is the path forward. 
And to Andrew\u2019s earlier point\u2014no, he will not stop using the em dash.\u00a0<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/blog.aare.edu.au\/wp-content\/uploads\/2025\/08\/Screenshot-2025-08-06-at-4.07.01\u202fpm.png\" alt=\"\" class=\"wp-image-26139 size-full\"\/><\/p>\n<p class=\"has-small-font-size\">Andrew Welsman is an educator in initial teacher education at Victoria University. A former secondary school teacher, his teaching and research interests centre on issues affecting STEM education and the pedagogical impacts of digital technologies.\u00a0Janine Arantes is a senior lecturer and Research Fellow at Victoria University with expertise in AI governance, education policy, and digital equity. <\/p>\n<p>This article was originally published on <a href=\"https:\/\/aare.edu.au\/blog\">EduResearch Matters<\/a>. Read the <a href=\"https:\/\/aare.edu.au\/blog\/?p=26130\">original article<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"
By Andrew Welsman and Janine&hellip;\n","protected":false},"author":2,"featured_media":54045,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[44951,256,44952,254,255,64,63,44953,105],"class_list":{"0":"post-54044","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-aare-blog","9":"tag-ai","10":"tag-andrew-welsman","11":"tag-artificial-intelligence","12":"tag-artificialintelligence","13":"tag-au","14":"tag-australia","15":"tag-janine-arantes","16":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/54044","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=54044"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/54044\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/54045"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=54044"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=54044"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=54044"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}