Home

Gartner suggests Friday afternoon Copilot ban because users may be too lazy to check its mistakes

Gartner analyst Dennis Xu has half-jokingly suggested banning use of Microsoft’s Copilot AI on Friday afternoons, because he fears at that time of week users may be too lazy to properly check its possibly offensive output.

Xu, a Gartner research vice-president, offered the advice at the end of a talk titled “Mitigating the Top 5 Microsoft 365 Copilot Security Risks” at the firm’s Security & Risk Management Summit in Sydney on Tuesday.

He raised the possibility of a Friday afternoon AI ban when advising on the fifth risk he has identified: Copilot producing output that is toxic because while it may be factually correct it is culturally unacceptable either in the workplace or among customers. Xu recommended mitigating Copilot’s tendency to produce toxic content by enabling the filters Microsoft supplies, and by training users to always validate the tool’s output.

The analyst reminded the audience that all Copilot output isn’t fit for sharing without review, making validation necessary for all users at all times. He suggested Friday afternoons are a time when workers might just want to get the job done and won’t bother to check for errors that Microsoft’s chatbot produces, perhaps making that slice of the working week a fine time to ban use of Copilot.

Xu’s talk ran for 30 minutes, and he spent the first 20 discussing the risk of Copilot exposing content whose creators didn’t set appropriate sharing permissions.

“Copilot makes over-shared documents more accessible,” he warned. “This is not a net new risk, but a known risk amplified by AI.” Xu explained why with the example of a worker who uses Copilot to search for information about organizational changes receiving a response that includes a confidential document about an imminent re-org.

Xu said such results are possible because Copilot can search data in SharePoint sites, and Microsoft’s collaboration tool has two overlapping tools users can apply to control access to documents – labels and an access control list. Both, however, are susceptible to user error that allows unintended access, and fixing that can be laborious.

Xu said Microsoft offers another tool that can apply a superseding access control list, plus automated discovery of over-shared content.

“I keep telling Microsoft to build a single de-risking layer,” Xu said, before recommending the way to reduce the risk of oversharing is by monitoring users to watch for access to restricted content.

His second risk is remote execution through malicious prompts that attempt code injection. Using instruction filters in Copilot and restricting its access to likely sources of malicious prompts such as email will help to mitigate such attacks.

A third risk he identified is Copilot providing access to sensitive data, often when users link the AI tool to third-party SaaS apps. Xu said the Web content plugin Microsoft provides for Copilot is on by default, but the plugin allowing connections to third party applications is off. He recommended allowing Copilot to chat with SaaS sources only when strictly necessary.

His fourth risk is prompt injection, the practice of instructing LLM-powered chatbots to ignore guardrails. Xu said organizations that encourage users to experiment with AI may inadvertently see them conduct prompt injection attacks. Policy and education should control this risk, he said, as will the content safety filters available in the Azure OpenAI service.

Perhaps Friday morning is the time to set that up? ®

Source: The register

Previous

Next