U of G professor Ali Dehghantanha is warning companies of the risks when it comes to using AI
Artificial intelligence chatbots are putting businesses at risk, according to a University of Guelph professor.
It took cybersecurity professor and Canada Research Chair Ali Dehghantanha only 47 minutes of speaking with an internal chatbot to steal a Fortune 500 company’s sensitive client data and project information during a proactive security audit.
The method used is a non-destructive process for testing cyber security called red teaming, where they attempt to simulate what an attacker would do if they got access to an organization.
“We were testing the security of the internal chatbots that were used by the employees,” he told GuelphToday.
In less than an hour, they bypassed the guardrails that had been developed for the chatbot, which means they were able to make the chatbot give the answer to any question it had access to, including information from all internal projects and some executive-level communications.
It’s an approach attackers are regularly using to craft convincing phishing emails.
“For example, if you know about one internal project and you include that in your phishing email, the chances that the victims (fall for) that would be significantly higher,” he said.
Before the use of AI was widespread, he said cybersecurity wasn’t as much of a concern for entire companies, since only executive-level information would need to be secure.
But now, AI tools are available to interns, new and low-level employees.
A chatbot like this is especially helpful for an intern, for example, who might have a number of questions as they’re onboarded, and the chatbot can answer them.
“That has improved the new employees’ performance significantly,” he said. “So it’s understandable why they’re making them available.”
“But securing them is a challenge, and in this case, we could get access to a lot of company information.”
AI is like fast food, he said: companies are utilizing these tools because they’re cheap and accessible.
But most companies are “forgetting about the total cost of AI ownership,” he said.
“They just deploy the AI with the assumption that it is important to improve their performance and productivity, without thinking about how to make AI secure, how to test for security.”
His main advice for any company adopting AI is to consider the total cost when deploying something like a chatbot.
“The days that an intern machine was less valuable for an attacker than a CEO machine are long past, if the same AI is running on both machines,” he said. “So you need to make sure that you’re offering uniform security protection on almost all the assets that you have in the organization.”
But it’s not something that’s on the radar for most people, especially small and medium businesses, he said.
“They are underestimating the risk that the AI adoption is exposing their companies,” he said, adding that if they’re working to get the AI implemented quickly with no security checks, they may find themselves in the middle of a high profile attack that was caused through those AI systems.
Dehghantanha said he’s noticed with smaller businesses, sometimes the same AI that offers public facing questions and answers is serving the internal employees as well, which means if the attacker is able to bypass the guards on the public facing AI, they can get access to everything.
“That’s very common with smaller companies, which is unfortunate because it’s actually opening up a big attack vector,” he said.
The risk to the average person interacting with AI chatbots depends on the type of system being used, since in some cases you can interact anonymously, or ask non-personal questions.
But the risk increases if you need to log in to talk to AI, or are asking more sensitive questions.
“In this case it would be the responsibility of the providers to make sure that data AI systems are not storing the queries of people, anything private or sensitive about them, and that they are secure, because whether we want it or not, AI would have some memory. They need to know what they have told you before so they can continue answering you.”
One of the ways organizations can ensure their AI systems are secure is by complying with AI security standards and conducting regular tests, like the one Dehghantanha did for the Fortune 500 company.
Moving forward, he said regulation and policy changes pushing for security requirements would be helpful.
“But my advice to all businesses out there, public or private, is you should not wait for the policies to make you take the right action.
“These days, AI plays a critical role. I am not suggesting to stop deploying AI systems. They are really useful for improving performance and productivity,” he said. “But what is important is to make sure that security is a major evaluation and consideration when we are deploying these systems.”