AI Governance

MEASURE (NIST AI RMF)

The NIST AI RMF function that quantifies, assesses, and tracks identified AI risks using metrics, tests, and evaluation methods. MEASURE activities include bias testing, performance benchmarking, explainability assessment, and security evaluation across the AI lifecycle.

Why It Matters

What gets measured gets managed. The MEASURE function turns qualitative risk concerns into quantitative evidence that can inform decisions, satisfy auditors, and demonstrate due diligence.

Example

For a credit scoring model, the MEASURE function involves running disparate impact analysis across demographic groups, testing model robustness against adversarial inputs, measuring prediction accuracy on out-of-distribution data, and assessing explainability using SHAP values.
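One of those checks, disparate impact analysis, can be sketched as follows. This is a minimal illustration, not an implementation from the framework: the group labels, the toy approval data, and the commonly cited four-fifths (0.8) threshold are all assumptions for the example.

```python
# Hypothetical sketch: disparate impact ratio for a credit-scoring
# model's approval decisions across two demographic groups.
# Data, group labels, and the 0.8 threshold are illustrative only.

def disparate_impact_ratio(decisions, groups, privileged, unprivileged):
    """Ratio of the unprivileged group's approval rate to the privileged group's.

    decisions: list of 1 (approved) / 0 (denied) outcomes
    groups:    parallel list of group labels
    """
    def approval_rate(group):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(outcomes) / len(outcomes)

    return approval_rate(unprivileged) / approval_rate(privileged)

# Toy data: group A is approved 4/5 of the time, group B only 1/5.
decisions = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(decisions, groups, privileged="A", unprivileged="B")
print(ratio)                 # 0.25
print(ratio >= 0.8)          # False: below the four-fifths threshold
```

A ratio below the chosen threshold would be recorded as quantitative evidence of potential disparate impact and fed into MANAGE for mitigation and follow-up measurement.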

Think of it like...

MEASURE is like a doctor ordering specific blood tests after a checkup — the general exam (MAP) identified concerns, and now you need precise data to understand the actual severity and track whether treatment (MANAGE) is working.

Related Terms