Breaking: Microsoft Copilot flaw exposes confidential enterprise emails
Microsoft has confirmed a critical security failure in Microsoft 365 Copilot Chat that allowed the AI assistant to access and summarize emails marked with a confidential label, pulled from users' drafts and sent folders.

The flaw, first identified in January 2026 and reported by tech outlet BleepingComputer, affected enterprise customers globally, including NHS England, where the bug notice was shared on IT support dashboards. According to Microsoft's internal service alert, "users' email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat," specifically within the work tab that summarizes email messages.

Microsoft maintains that its access controls and data protection policies "remained intact" and that the tool "did not provide anyone access to information they weren't already authorised to see," but acknowledged that the behavior "did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access." The company has since deployed a global configuration update to fix the issue, attributing the root cause to a "code issue."

NHS England confirmed that patient information was not exposed. Even so, the incident reveals a significant gap between Microsoft's security claims and their actual implementation. What sets this apart from previous AI security incidents is that it occurred inside Microsoft's flagship enterprise AI product, one marketed specifically on the strength of its security controls in regulated environments. Early reactions include heightened scrutiny from enterprise security teams and data protection officers who had been evaluating Copilot for deployment in sensitive sectors.