AI Incident
An event where an AI system causes or nearly causes harm, produces unintended outputs, or fails to perform as expected in ways that affect individuals, organizations, or the public. AI incidents require a documented response and root cause analysis, and may trigger regulatory reporting obligations.
Why It Matters
AI incidents are inevitable — the question is whether you detect them quickly, respond effectively, and learn from them. Organizations without incident management processes repeat the same failures and face escalating regulatory consequences.
Example
An AI-powered content moderation system starts incorrectly flagging posts in Arabic as hate speech due to a data drift issue, affecting millions of users for 48 hours before the team detects and corrects the problem.
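A minimal sketch of the kind of monitoring that could have caught this drift sooner: comparing per-language flag rates against a historical baseline and alerting on large deviations. All rates, thresholds, and function names here are illustrative assumptions, not details from any real system.

```python
# Illustrative sketch: alert when a language's flag rate drifts far from its
# historical baseline. Baselines, rates, and the threshold are hypothetical.

def drift_alerts(baseline: dict, current: dict, ratio_threshold: float = 3.0) -> list:
    """Return languages whose current flag rate exceeds baseline by the threshold ratio."""
    alerts = []
    for lang, base_rate in baseline.items():
        cur_rate = current.get(lang, 0.0)
        if base_rate > 0 and cur_rate / base_rate >= ratio_threshold:
            alerts.append(lang)
    return alerts

# Hypothetical per-language hate-speech flag rates (fraction of posts flagged).
baseline = {"en": 0.010, "ar": 0.012, "es": 0.009}
current = {"en": 0.011, "ar": 0.095, "es": 0.010}  # Arabic rate has spiked

print(drift_alerts(baseline, current))  # the spike in "ar" triggers an alert
```

A check this simple, run hourly, would have surfaced the anomaly within the first hour rather than after 48.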
Think of it like...
An AI incident is like a food safety recall — the product seemed fine when it shipped, but something went wrong and now you need to figure out what, how far it spread, and how to prevent it next time.
Related Terms
Kill Switch
A mechanism to immediately stop or disable an AI system when it produces harmful, unsafe, or unauthorized outputs. Kill switches range from simple on/off controls to sophisticated graduated responses that can throttle, redirect, or degrade AI functionality without full shutdown.
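The graduated responses described above can be sketched as an escalation ladder: each level restricts the system further, and automatic transitions only ever move toward more restriction. The level names and fallback behavior below are illustrative assumptions, not a standard design.

```python
# Illustrative sketch of a graduated kill switch. Level names, actions, and
# response strings are assumptions for illustration only.

from enum import IntEnum

class Response(IntEnum):
    NORMAL = 0     # serve model output as usual
    THROTTLE = 1   # rate-limit requests to reduce blast radius
    DEGRADE = 2    # fall back to a simpler, safer model or rule set
    SHUTDOWN = 3   # hard stop: refuse all AI-generated output

class KillSwitch:
    def __init__(self):
        self.level = Response.NORMAL

    def escalate(self, level: Response) -> None:
        # Only ever move toward stronger restriction; de-escalation should be
        # a deliberate, audited human decision, not an automatic one.
        self.level = max(self.level, level)

    def handle(self, request: str) -> str:
        if self.level == Response.SHUTDOWN:
            return "service unavailable"
        if self.level == Response.DEGRADE:
            return f"fallback answer for: {request}"
        return f"model answer for: {request}"

switch = KillSwitch()
switch.escalate(Response.DEGRADE)
print(switch.handle("summarize this post"))  # served by the fallback path
```

Keeping intermediate levels matters operationally: a full shutdown of a widely used system can itself cause harm, so throttling or degrading often buys investigation time at lower cost.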
Post-Market Monitoring
Ongoing surveillance of an AI system's performance, safety, and compliance after it has been deployed to production. Required under the EU AI Act for high-risk systems, post-market monitoring ensures that AI systems continue to meet their intended specifications as real-world conditions change.
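One common building block of post-market monitoring is a rolling check of live performance against the level claimed at deployment. The sketch below assumes a hypothetical accuracy threshold and window size; real monitoring plans track many more signals (safety, bias, latency) than this.

```python
# Illustrative sketch: flag when rolling live accuracy falls below the
# deployed specification. Threshold and window size are hypothetical.

from collections import deque

class PerformanceMonitor:
    def __init__(self, threshold: float = 0.90, window: int = 100):
        self.threshold = threshold
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, correct: bool) -> None:
        self.outcomes.append(1 if correct else 0)

    def breached(self) -> bool:
        # Only judge once the window holds enough data.
        if len(self.outcomes) < self.outcomes.maxlen:
            return False
        return sum(self.outcomes) / len(self.outcomes) < self.threshold

monitor = PerformanceMonitor(threshold=0.90, window=10)
for ok in [True] * 8 + [False] * 2:   # 80% accuracy over the window
    monitor.record(ok)
print(monitor.breached())  # True: live accuracy is below the 90% threshold
```

A breach like this is exactly the kind of signal that should feed the incident process defined above, rather than sitting unread in a dashboard.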
AI Risk Register
A documented inventory of identified AI risks, their likelihood, severity, mitigation measures, and responsible owners. It serves as a living document that tracks risk across the AI portfolio and informs governance decisions about resource allocation and priority.
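The fields named in this definition map naturally onto a simple record structure. The sketch below uses a common likelihood-times-severity scoring scheme to rank entries; the field names, scales, and sample risks are illustrative assumptions, not a prescribed format.

```python
# Illustrative sketch of a risk register entry; fields mirror the definition
# above (risk, likelihood, severity, mitigation, owner). The 1-5 scales and
# multiplicative score are a common convention, assumed here for illustration.

from dataclasses import dataclass

@dataclass
class RiskEntry:
    risk: str
    likelihood: int   # 1 (rare) to 5 (almost certain)
    severity: int     # 1 (negligible) to 5 (critical)
    mitigation: str
    owner: str

    @property
    def score(self) -> int:
        # Likelihood x severity; informs prioritization and resourcing.
        return self.likelihood * self.severity

register = [
    RiskEntry("Training-data drift degrades moderation accuracy", 4, 4,
              "Automated drift monitoring with weekly review", "ML Platform Lead"),
    RiskEntry("Model exposes personal data in outputs", 2, 5,
              "Output filtering and red-team testing", "Privacy Officer"),
]

# Governance view: highest-scoring risks first.
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(entry.score, entry.risk)
```

Treating the register as data rather than a static document makes the "living" part practical: entries can be re-scored, re-sorted, and reviewed as incidents and monitoring results come in.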