Data Protection Impact Assessment (DPIA)
A systematic assessment of the potential impact of data processing activities on the rights and freedoms of individuals. Required under GDPR Article 35 when processing is likely to result in a high risk to individuals' rights and freedoms, a DPIA is particularly relevant for AI systems that process personal data at scale or make automated decisions about people.
Why It Matters
DPIAs are legally mandatory for many AI deployments in the EU and increasingly expected elsewhere. Skipping one doesn't just create legal risk — it means you haven't thought through how your system affects real people.
Example
A retailer deploying AI-powered customer behavior analysis in physical stores conducts a DPIA covering video surveillance data, facial analysis (even if not stored), data retention periods, and customer notification mechanisms.
Think of it like...
A DPIA is like a doctor taking a full medical history before prescribing treatment — you need to understand the patient's vulnerabilities before deciding what's safe.
Related Terms
Algorithmic Impact Assessment (AIA)
A systematic process to evaluate the potential impacts of deploying an algorithmic system on individuals, groups, and society. It identifies risks before deployment and maps out mitigation strategies, serving as both a compliance tool and a design checkpoint.
Fundamental Rights Impact Assessment (FRIA)
An assessment required under the EU AI Act for certain deployers of high-risk AI systems (notably public bodies and private entities providing public services) that evaluates the system's impact on fundamental rights — including non-discrimination, privacy, freedom of expression, and human dignity — before deployment begins.
Automated Decision-Making (ADM)
Decisions made solely by automated means without meaningful human involvement. Under GDPR Article 22, individuals have the right not to be subject to decisions based solely on automated processing — including profiling — that produce legal effects or similarly significant impacts on them.