Blockstars Technology CEO Kosala Aravinda said Aussie workers are risking massive repercussions with the shadow AI trend. (Source: Supplied/Getty)
A large majority of Australian workers are using artificial intelligence (AI) at work in the wrong way and it could have potentially disastrous repercussions for them, their business or their customers. The term ‘Shadow AI’ refers to workers using the technology even though they haven’t been authorised to.
Some are uploading confidential business information to platforms like ChatGPT, which could result in that data being made public. Kosala Aravinda, CEO of AI business implementation firm Blockstars Technology, said business owners needed to quickly come up with policies to keep workers in check.
“With the extremely fast pace of AI adoption, many organisations are desperate to keep up,” he said.
“But efficiency at any cost is a dangerous trade-off. Every confidential document or client record dropped into an AI system could ultimately end up outside your control.
“The reputational, legal and competitive risks are profound.”
Data from the 2025 HP Windows 11 SMB Study found 81 per cent of Aussie employees surveyed admitted to sharing confidential information with free AI tools.
Do you have a story? Email stew.perrie@yahooinc.com
Aravinda said that while ChatGPT and other tools can help with short-term efficiency, they also come with “potential severe long-term consequences”.
Workers could put their business’s trade secrets at risk, clients could revolt, or very private information could end up being leaked.
“Businesses that fail to take data governance seriously risk being caught out by this wave of reform,” Aravinda added.
“The cost of AI if deployed recklessly, far outweighs the costs of compliance. A serious breach can cripple a business financially, reputationally and operationally, hence the need for businesses to build their own in-house Ai tools.”
A temp worker at the NSW Reconstruction Authority (RA) recently found themselves in trouble for using ChatGPT, which is an “unsecured” platform, to upload data that contained the personal information of more than 2,000 people.
The incident happened back in March, but those affected weren’t told until this month.
Name and contact details, residential addresses, dates of birth, and other private information were all part of the data breach for those in the Northern Rivers Resilient Homes Program (RHP).
Story Continues
“We understand this news is concerning and we are deeply sorry for the distress it may cause for those involved in the program,” RA said in a statement.
“The data shared was a Microsoft Excel spreadsheet with 10 columns and more than 12,000 rows of information. All of it had to be thoroughly reviewed to understand what may have been compromised.
“The process was highly complex and time consuming and we acknowledge that it has taken time to notify people. Our focus has been on making sure we had all the information we needed to notify every impacted person correctly.”
It’s not as if people would be able to find your deepest, darkest ChatGPT questions like a Google search.
But AI models are trained on the queries it receives and the information people provide.
During large-scale training, the data uploaded by the temp RA worker could be studied by ChatGPT and absorbed into its ‘brain’.
If another person asked a question related to this topic, the AI model could spit out these personal bits of information.
“There is an area called prompt injection, where you try to manipulate your simple prompts into sophisticated prompts so that these models tend to reveal sensitive information,” CSIRO’s Data61 senior research scientist Chamikara Mahawaga Arachchige told the ABC.
Some AI platforms like OpenAI, which operates ChatGPT, allows users to delete the information they’ve provided so far to ensure those details can’t be found by others.
Using publicly accessible AI platforms to upload sensitive or confidential information can backfire. (Source: Getty) · alexsl via Getty Images
Aravinda said workers and businesses that get caught using shadow AI can be hit with severe penalties.
Fines can be issued for serious data breaches, with 2022 legislation setting a maximum of $50 million or 30 per cent of adjusted turnover, whichever is greater.
There can also be legal expenses, class actions, breach responses and loss of contracts.
The Blockstars Technology CEO said some businesses will suffer most from reputational damage.
“Companies spend decades building their brand,” he added.
“One mishandled dataset in a free AI tool can undo that work overnight. Once the public sees you as careless with their information, the cost of rebuilding that trust is almost insurmountable.
“The strategic question will be simple: do you host your own AI internally, or do you rely on public sources? Those who invest in secure, in-house AI will win the business. Those who don’t will be left behind.”
Get the latest Yahoo Finance news – follow us on Facebook, LinkedIn and Instagram.