Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don’t rely on Copilot for important advice. Use Copilot at your own risk.
The terms also require users to indemnify Microsoft against any losses and expenses caused by their use of Copilot, and add: “We can’t promise that any [of] Copilot’s Responses won’t infringe someone else’s rights (like their copyrights, trademarks, or rights of privacy) or defame them. You are solely responsible if you choose to publish or share Copilot’s Responses.”
Reeve was one of many to highlight updated wording in the tech giant’s terms and conditions for its AI assistant, even though the wording dates from October last year. Over the past 24 hours, it’s gone semi-viral on Reddit and elsewhere.
“It’s freaking wild,” an AI specialist at one of the Big Four consultancy firms told the Herald.
But other experts said the terms were typical of those for the free version of any AI chatbot.
Microsoft has different terms for its Copilot with commercial data protection, under which the AI can be pointed only at your company’s data and other trusted sources, and user inputs are not used to train Copilot.
A Microsoft spokeswoman confirmed the terms highlighted by Reeve were for the consumer version of Copilot.
‘Standard approach’
“This is a fairly standard approach,” privacy expert Frith Tweedie said.
“These terms seem to apply to the free version. On that basis, I don’t think they are unreasonable.
“Microsoft is essentially pointing to the limitations of the tool, which are – or should be – well known, particularly in respect of hallucinations and other accuracy challenges.
“The reference to Copilot being ‘for entertainment purposes only’ seems to be aimed squarely at individual users of the free version.”
The Simply Privacy principal forwarded terms from Claude maker Anthropic and ChatGPT maker OpenAI, which have similar wording to Microsoft’s various qualifiers around liability and the possibility that their output could be inaccurate.
“It blows my mind how underappreciated the accuracy issue tends to be. Particularly given how clearly it is addressed by the companies themselves,” Tweedie said.
Victoria University AI expert Dr Andrew Lensen also said the terms were just reflecting the reality of the technology.
“We are seeing a lot of people take the advice from these AI language models as gospel, when they can be wrong, often subtly,” he said.
While the terms were for the free version, Lensen added: “I think Microsoft also is trying to push Copilot onto individual users for productivity as well in their marketing, so I think it’s mixed messaging.”
Business protections
“Businesses get stronger privacy and security protections under M365 Copilot,” Tweedie said.
“But the warning ‘It can make mistakes, and it may not work as intended. Don’t rely on Copilot for important advice. Use Copilot at your own risk’ remains true for any generative AI chatbot, whether accessed under a subscription or otherwise.
“This appears to be an attempt by Microsoft to limit potential liability by pointing out the unreliability of Copilot outputs,” said Tweedie, who worked as a lawyer at Bell Gully and intellectual property specialist James and Wells earlier in her career, and is currently an adviser to the Department of Internal Affairs’ AI Advisory Panel.
“Businesses need to pay proper attention to the accuracy problems that are fundamental to LLMs, a risk that I often see downplayed.
“Techniques like RAG can help a lot with this, but it’s not typically a complete solution.”
RAG (retrieval-augmented generation) is a framework in which an AI model’s answers are grounded in material retrieved from a ring-fenced set of trusted sources, rather than drawn only from its training data, among other protections.
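As a rough sketch of the pattern, the basic loop is: retrieve the most relevant passages from a trusted corpus, then build a prompt that instructs the model to answer only from those passages. The corpus, keyword scoring and prompt template below are illustrative assumptions, not any vendor’s actual implementation:

```python
# Toy RAG sketch: ground a prompt in passages retrieved from a
# ring-fenced, trusted corpus (contents here are hypothetical).
TRUSTED_CORPUS = [
    "Policy: staff must not paste client data into public AI chatbots.",
    "Policy: M365 Copilot may be used with internal documents only.",
    "FAQ: free chatbot tiers may use prompts for model training.",
]

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(corpus, key=lambda p: -len(q_words & set(p.lower().split())))
    return scored[:k]

def build_prompt(question: str) -> str:
    """Compose a prompt so the model answers from the retrieved
    trusted passages rather than its general training data."""
    context = "\n".join(retrieve(question, TRUSTED_CORPUS))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("Can staff use free chatbot tiers?"))
```

Production systems replace the keyword overlap with vector-embedding search and pass the composed prompt to the model, but the ring-fencing idea is the same: the model is steered toward vetted material, which reduces, though does not eliminate, hallucinated answers.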
‘Steer clear of free versions’
“Having reviewed terms and conditions for various LLMs and the wrappers that sit over the top of them, anyone using them in a business or who is generally concerned over issues such as confidentiality and privacy should steer clear of free versions,” Lowndes Jordan partner Rick Shera said.
“Assurances that are given for paid versions around security, privacy, confidentiality and non-use for LLM training purposes are a must have where inputting business or sensitive personal information, particularly given recent cases suggesting that LLM platforms may be forced by court order to disclose user prompts and legal privilege may be lost where there is no expectation of confidentiality, as there cannot be with most free versions.”
There are wrinkles. An organisation can subscribe to the paid, data-protected version of an AI, but then have staff “BYO” their favourite chatbot to the office.
Other organisations, such as the Department of Corrections, have rules around the use of free AI chatbots, including prohibiting their use with sensitive data, only for some staff to ignore the guidelines.
Chris Keall is an Auckland-based member of the Herald’s business team. He joined the Herald in 2018 and is the technology editor and a senior business writer.