{"id":197177,"date":"2025-10-08T07:17:08","date_gmt":"2025-10-08T07:17:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/197177\/"},"modified":"2025-10-08T07:17:08","modified_gmt":"2025-10-08T07:17:08","slug":"invisible-ink-in-the-age-of-ai","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/197177\/","title":{"rendered":"Invisible ink in the Age of AI"},"content":{"rendered":"<p><img aria-describedby=\"caption-attachment-908602\" decoding=\"async\" loading=\"lazy\" class=\"size-large wp-image-908602\" src=\"https:\/\/www.newsbeep.com\/au\/wp-content\/uploads\/2025\/10\/iStock-1171279052-1200x800.jpg\" alt=\"man's hand holding a pen\" width=\"1200\" height=\"800\"  \/><\/p>\n<p id=\"caption-attachment-908602\" class=\"wp-caption-text\">Big business is exploiting AI, but so are \u2018clever\u2019 job applicants to get around AI. Photo: Nes.<\/p>\n<p>Technology has always reshaped how we compete and communicate. From the printing press to social media, each innovation has raised questions of fairness. Artificial intelligence (AI) is no different.<\/p>\n<p>As machines increasingly influence decisions, from hiring to publishing, some are finding subtle ways to game the system.<\/p>\n<p>These tricks may be technical, but the motivation to get ahead without being noticed is old school. Which raises an age-old ethical question: just because you can, does it mean you should?<\/p>\n<p>One tactic involves invisible characters, special Unicode marks that don\u2019t appear on the screen but are read by machines, such as using a white font on a white background. A resume might look normal to a recruiter, but an AI scanner could detect hidden cues designed to push the candidate higher up the shortlist.<\/p>\n<p>It\u2019s similar to an old search engine trick, where web pages are stuffed with hidden keywords to rank higher in a search.<\/p>\n<p>Then there are traps to help differentiate chatbots from humans. For example, a techno geek may add hidden text that a chatbot would respond to, whereas a human wouldn\u2019t even see it. It may, for example, be an absurd recipe instruction like \u2018add 0.0001 teaspoons of salt\u2019.<\/p>\n<p>A human might laugh or question it. A chatbot, lacking common sense, can be made to echo it back. Machines can be fooled.<\/p>\n<p>If this feels new, it isn\u2019t. Humanity has always played this game.<\/p>\n<p>Spies once used lemon juice as invisible ink. Renaissance painters hid symbols in religious art to incite rebellion or communicate with special groups of people. WWII operatives used microdots to send information, tiny photos shrunk to the size of a full stop.<\/p>\n<p>The art of hiding messages in plain sight, known as steganography, is centuries old. The difference now? The audience is no longer just human; it includes machines.<\/p>\n<p>Big business, too, has found ways to exploit AI. Some companies manipulate algorithms to favour certain demographics or suppress others. For example, a 2018 Forbes Magazine report found that Amazon\u2019s hiring AI was downgrading resumes that included the word \u201cwomen\u2019s\u201d. In other cases, biased training data has led to discriminatory outcomes in credit scoring, insurance, and even the criminal justice system.<\/p>\n<p>What else are big businesses doing? Do their directors even know what they are doing?<\/p>\n<p>Meanwhile, fraudulent job applicants are using AI to fabricate entire identities. Deepfake videos, voice manipulation and AI-generated resumes are being used to deceive hiring managers. The IT company Gartner predicts that by 2028, one in four job candidates could be fake.<\/p>\n<p>Some scams are so sophisticated that they\u2019ve infiltrated companies to steal data or funnel money to foreign governments.<\/p>\n<p>No wonder our public service is being very careful with using AI \u2013 it\u2019s a fabulous tool and also a danger to us all.<\/p>\n<p>Defences are emerging. One is \u2018normalisation\u2019, software that strips documents of invisible characters and converts everything to plain text, or \u2018detection\u2019 tools that scan for suspicious formatting or hidden code points, much like plagiarism checkers.<\/p>\n<p>Some systems look for statistical watermarks in AI-generated text, patterns too subtle for humans but detectable by machines.<\/p>\n<p>And then there\u2019s the oldest defence of all: our human eyes. Some organisations are going back to using people in the process. An AI might scan thousands of resumes, but a human still makes the final call.<\/p>\n<p>It\u2019s slower, yes, but humans can spot absurdities and bring judgment that machines can\u2019t.<\/p>\n<p>Still, none of these defences is foolproof. Attackers adapt. The cycle continues.<\/p>\n<p>The bigger issue isn\u2019t technical; it\u2019s ethical. Gaming a chatbot with a recipe may be harmless fun, but slipping hidden code into a resume or bypassing filters undermines trust.<\/p>\n<p>If people believe success comes not from merit but from secret digital tricks, confidence in the system collapses.<\/p>\n<p>This matters because AI now shapes hiring, publishing, credit scoring and even government decisions. The integrity of those processes affects real lives.<\/p>\n<p>For policymakers, employers and technologists, the challenge is to balance openness with integrity. Job applicants should know how their resumes are scanned. AI providers should be transparent when using hidden markers to detect machine-written text. Transparency, on both sides, is the first step toward fairness.<\/p>\n<p>This can and will affect the millions of small businesses across the nation, as well as charities, sports clubs, local governments, schools, and everyone else.<\/p>\n<p>Invisible ink, microdots and zero-width characters are all part of the same human urge, which is to get ahead, to compete, to test boundaries. But fairness isn\u2019t just about the rules written down. It\u2019s about the spirit with which we choose to play.<\/p>\n<p>And that, in the end, is an ethical choice no algorithm can make for us.<\/p>\n<p>Or can it?<\/p>\n","protected":false},"excerpt":{"rendered":"Big business is exploiting AI, but so are \u2018clever\u2019 job applicants to get around AI. Photo: Nes. Technology&hellip;\n","protected":false},"author":2,"featured_media":197178,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,254,255,64,63,105],"class_list":{"0":"post-197177","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-au","12":"tag-australia","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/197177","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=197177"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/197177\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/197178"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=197177"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=197177"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=197177"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}