Microsoft 365 Copilot spent roughly a month reading and summarising confidential emails. A code error tracked as CW1226324 let the AI assistant bypass data loss prevention policies and confidentiality labels from 21 January through late February, processing emails that organisations had explicitly marked off-limits to automated tools.
The bug affected Copilot’s “Work Tab” chat feature, which pulled content from users’ Sent Items and Drafts folders regardless of sensitivity labels. According to BleepingComputer, customers first reported the issue on 21 January, but Microsoft only began rolling out a fix in early February. As of 20 February, Microsoft stated that “the root cause of this issue has been addressed for most customers,” though deployment remains incomplete for some complex service environments.
This is precisely the type of AI governance failure that enterprise security teams feared when productivity AI tools started rolling out at scale. When the safeguards and the AI sit in the same platform, a single code error can disable every control simultaneously.
The Controls Were There — They Just Did Not Work
Microsoft confirmed that the affected emails carried “Confidential” sensitivity labels and were covered by DLP policies specifically configured to block AI processing. The Register reported that Microsoft’s own documentation acknowledges these controls should exclude content from Copilot access in Office apps, though it notes that “content remains available to Microsoft 365 Copilot for other scenarios” including Teams and Copilot Chat.
The distinction matters. Organisations that applied sensitivity labels believing they would comprehensively block AI access found that the labels worked inconsistently across Microsoft’s own ecosystem — even before this bug struck. According to Microsoft’s statement to The Register, “a code issue is allowing items in the sent items and draft folders to be picked up by Copilot even though confidential labels are set in place.”
Cybersecurity News reported that the UK’s National Health Service flagged the incident internally as INC46740412, indicating real-world impact for public sector users. The affected content included business agreements, legal communications, government correspondence, and protected health information — exactly the categories that sensitivity labels and DLP policies are designed to protect.
This Is What Happens When AI Adoption Outpaces Governance
Dr Ilia Kolochenko, CEO at ImmuniWeb and a member of Europol’s Data Protection Experts Network, told Cybernews that “incidents like this one will likely surge in 2026, possibly becoming the most frequent type of security incident.” His assessment reflects a broader pattern: organisations are deploying AI faster than they can secure it, with governance frameworks lagging behind productivity gains.
Microsoft’s own research supports this view. The company’s Cyber Pulse report found that more than 80% of Fortune 500 companies are deploying AI agents, but only 47% report having adequate security controls for generative AI platforms. Security Boulevard noted that “the visibility that organizations have on the agents is very limited,” according to Microsoft Security’s corporate vice president Vasu Jakkal.
The World Economic Forum’s 2026 Global Cybersecurity Outlook found that data leaks through generative AI are now the top cybersecurity concern among CEOs globally, cited by 30% of respondents. Among cybersecurity professionals, the concern jumped from 21% in 2024 to 34% in 2026. The same report found that roughly one-third of organisations still have no process to validate AI security before deployment.
What Organisations Should Do Now
Check your Microsoft 365 admin centre for updates under reference CW1226324 to confirm the fix has been deployed to your tenant. eSecurity Planet recommends validating that DLP policies and sensitivity labels are properly enforced within Copilot by testing how confidential content is handled across email and document workflows.
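If you prefer to track the advisory programmatically rather than through the portal, Microsoft Graph exposes service health issues by ID under its service announcement API. The sketch below is illustrative only: it assumes an Entra app registration with the ServiceHealth.Read.All permission and an access token acquired out of band (e.g. via MSAL), and it assumes CW1226324 surfaces as a service health issue in your tenant rather than as a message centre post.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ADVISORY_ID = "CW1226324"  # the Copilot advisory referenced in the admin centre

# Placeholder: acquire a token with ServiceHealth.Read.All out of band.
TOKEN = "<access-token>"


def fetch_advisory(advisory_id: str) -> dict:
    """Fetch a service health issue by ID from Microsoft Graph."""
    resp = requests.get(
        f"{GRAPH}/admin/serviceAnnouncement/issues/{advisory_id}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


issue = fetch_advisory(ADVISORY_ID)
print(issue.get("title"))
print("Status:", issue.get("status"))  # e.g. serviceRestored once the fix lands
print("Last updated:", issue.get("lastModifiedDateTime"))
```

Polling the status field until it reports the issue as restored gives you a tenant-level confirmation to file alongside your own DLP testing.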
Review Copilot activity logs for unusual access to labelled content during the affected period. Fox News also raised the question of whether confidential draft emails need to be retained at all, or whether a shorter retention policy would reduce exposure, particularly given that the bug specifically affected the Sent Items and Drafts folders.
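One workable approach to that log review is to export a Purview audit search covering the exposure window and filter it offline for Copilot interactions that touched labelled content. The Python sketch below assumes the standard audit export layout (CreationDate, UserIds, Operations and AuditData columns) and that Copilot events carry the CopilotInteraction operation; the nested field names inside AuditData vary by workload, so treat them as assumptions to adjust against your own export.

```python
import csv
import json
from datetime import datetime, timezone

# Exposure window reported in the advisory (21 Jan through late Feb);
# set the year to match your tenant's advisory.
WINDOW_START = datetime(2026, 1, 21, tzinfo=timezone.utc)
WINDOW_END = datetime(2026, 2, 21, tzinfo=timezone.utc)


def flag_copilot_events(export_path: str) -> None:
    """Print Copilot interactions from a Purview audit export that fall
    inside the exposure window and touched sensitivity-labelled content."""
    with open(export_path, newline="", encoding="utf-8-sig") as f:
        for row in csv.DictReader(f):
            if "CopilotInteraction" not in row.get("Operations", ""):
                continue
            # Parse just the date-time prefix; export timestamps are UTC.
            when = datetime.fromisoformat(row["CreationDate"][:19]).replace(
                tzinfo=timezone.utc
            )
            if not (WINDOW_START <= when <= WINDOW_END):
                continue
            detail = json.loads(row["AuditData"])
            # Assumed shape: resources Copilot touched are nested under
            # CopilotEventData.AccessedResources in the audit payload.
            resources = detail.get("CopilotEventData", {}).get(
                "AccessedResources", []
            )
            labelled = [r for r in resources if r.get("SensitivityLabelId")]
            if labelled:
                print(
                    when.isoformat(),
                    row.get("UserIds"),
                    f"{len(labelled)} labelled item(s)",
                )


flag_copilot_events("audit_export.csv")
```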
Consider restricting Copilot access using role-based controls and conditional access policies until you can verify that governance controls work as intended. Some organisations may want to pause Copilot deployment entirely for departments handling highly regulated data until independent validation is complete.
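As a starting point for such a restriction, a conditional access policy can target the Copilot application for specific groups. The sketch below, using the Microsoft Graph conditional access API, is a hypothetical example: the Copilot application ID and the group ID are placeholders you would need to resolve in your own tenant, and the policy is created in report-only mode so you can observe its impact before enforcing a block.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"  # needs Policy.ReadWrite.ConditionalAccess

# Placeholders: look up the Copilot service principal's appId and the
# target group in your tenant; these are not well-known IDs.
COPILOT_APP_ID = "<copilot-app-id>"
REGULATED_GROUP_ID = "<entra-group-id>"  # e.g. legal or health departments

policy = {
    "displayName": "Block Copilot for regulated-data departments",
    # Report-only mode: log what would be blocked without enforcing.
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeGroups": [REGULATED_GROUP_ID]},
        "applications": {"includeApplications": [COPILOT_APP_ID]},
        "clientAppTypes": ["all"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}

resp = requests.post(
    f"{GRAPH}/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print("Created policy:", resp.json()["id"])
```

Once sign-in logs confirm the policy scopes the right users and application, switching the state to "enabled" turns the report-only policy into an enforced block.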
References
- BleepingComputer: Microsoft says bug causes Copilot to summarize confidential emails
- The Register: Copilot Chat bug bypasses DLP on ‘Confidential’ email
- Cybersecurity News: Microsoft 365 Copilot Flaw Allows AI Assistant to Summarize Sensitive Emails
- eSecurity Planet: Microsoft 365 Copilot Bug Circumvented DLP Controls
- Kiteworks: Microsoft Copilot Bug Bypassed Confidential Email Controls
- Cybernews: Copilot was spying on confidential emails, Microsoft rushes to ship worldwide fix