{"id":480395,"date":"2026-03-17T12:52:16","date_gmt":"2026-03-17T12:52:16","guid":{"rendered":"https:\/\/www.newsbeep.com\/uk\/480395\/"},"modified":"2026-03-17T12:52:16","modified_gmt":"2026-03-17T12:52:16","slug":"gartner-suggests-friday-afternoon-copilot-ban-the-register","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/uk\/480395\/","title":{"rendered":"Gartner suggests Friday afternoon Copilot ban \u2022 The Register"},"content":{"rendered":"<p>Gartner analyst Dennis Xu has half-jokingly suggested banning use of Microsoft\u2019s Copilot AI on Friday afternoons, because he fears that at that time of week users may be too lazy to properly check its possibly offensive output.<\/p>\n<p>Xu, a Gartner research vice-president, offered the advice at the end of a talk titled \u201cMitigating the Top 5 Microsoft 365 Copilot Security Risks\u201d at the firm\u2019s Security &amp; Risk Management Summit in Sydney on Tuesday.<\/p>\n<p>He raised the possibility of a Friday afternoon AI ban when advising on the fifth risk he has identified: Copilot producing toxic output \u2013 content that may be factually correct but is culturally unacceptable, either in the workplace or among customers. Xu recommended mitigating Copilot\u2019s tendency to produce toxic content by enabling the filters Microsoft supplies, and by training users to always validate the tool\u2019s output.<\/p>\n<p>The analyst reminded the audience that not all Copilot output is fit for sharing without review, making validation necessary for all users at all times. 
He suggested Friday afternoons are a time when workers might just want to get the job done and won\u2019t bother to check for errors in the output of Microsoft\u2019s chatbot, perhaps making that slice of the working week a fine time to ban use of Copilot.<\/p>\n<p>Xu\u2019s talk ran for 30 minutes, and he spent the first 20 discussing the risk of Copilot exposing content whose creators didn\u2019t set appropriate sharing permissions.<\/p>\n<p>\u201cCopilot makes over-shared documents more accessible,\u201d he warned. \u201cThis is not a net new risk, but a known risk amplified by AI.\u201d Xu illustrated the point with the example of a worker who uses Copilot to search for information about organizational changes and receives a response that includes a confidential document about an imminent re-org.<\/p>\n<p>Xu said such results are possible because Copilot can search data in SharePoint sites, and Microsoft\u2019s collaboration tool offers two overlapping mechanisms users can apply to control access to documents \u2013 labels and an access control list. Both, however, are susceptible to user error that allows unintended access, and fixing that can be laborious.<\/p>\n<p>Xu said Microsoft offers another tool that can apply a superseding access control list, plus automated discovery of over-shared content.<\/p>\n<p>\u201cI keep telling Microsoft to build a single de-risking layer,\u201d Xu said, before recommending that organizations reduce the risk of oversharing by monitoring users to watch for access to restricted content.<\/p>\n<p>His second risk is remote execution through malicious prompts that attempt code injection. Using instruction filters in Copilot, and restricting its access to likely sources of malicious prompts such as email, will help to mitigate such attacks.<\/p>\n<p>A third risk he identified is Copilot providing access to sensitive data, often when users link the AI tool to third-party SaaS apps. 
Xu said the Web content plugin Microsoft provides for Copilot is on by default, but the plugin allowing connections to third-party applications is off. He recommended allowing Copilot to chat with SaaS sources only when strictly necessary.<\/p>\n<p>His fourth risk is prompt injection, the practice of instructing LLM-powered chatbots to ignore their guardrails. Xu said organizations that encourage users to experiment with AI may inadvertently see them conduct prompt injection attacks. Policy and education should control this risk, he said, as should the content safety filters available in the Azure OpenAI service.<\/p>\n<p>Perhaps Friday morning is the time to set that up? \u00ae<\/p>\n","protected":false},"excerpt":{"rendered":"Gartner analyst Dennis Xu has half-jokingly suggested banning use of Microsoft\u2019s Copilot AI on Friday afternoons, because he&hellip;\n","protected":false},"author":2,"featured_media":480396,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[554,733,4308,86,56,54,55],"class_list":{"0":"post-480395","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology","12":"tag-uk","13":"tag-united-kingdom","14":"tag-unitedkingdom"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/480395","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/comments?post=480395"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp
\/v2\/posts\/480395\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media\/480396"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media?parent=480395"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/categories?post=480395"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/tags?post=480395"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}