NIST AI RMF
The NIST AI Risk Management Framework (AI RMF), published in 2023 by the National Institute of Standards and Technology, provides guidance to organizations on managing AI risks by promoting trustworthy AI development and use.
What is the NIST AI RMF
In 2023, the United States National Institute of Standards and Technology published NIST AI 100-1: Artificial Intelligence Risk Management Framework (NIST AI RMF). The AI Risk Management Framework is intended to help organizations manage the risks of AI by providing practical guidance on developing and using AI in ways that build trust with the users of AI systems.
The framework emphasizes managing AI risks in terms of likelihood and severity of harm, including unintended, negative outcomes. It promotes a proactive, iterative, and adaptive process that integrates risk management throughout the AI system's design, development, deployment, and monitoring phases.
The guidance in the framework is non-binding. However, NIST's global reputation as a premier institution for the development of science and technology standards exerts significant influence on both organizations seeking to implement AI and regulators responsible for AI use in their jurisdictions.
Since its release, organizations around the globe have adopted this framework, in whole or in part, as a way to mitigate the risks associated with AI use. As a result, the AI RMF has become a de facto international standard.
NIST AI RMF focuses on Trustworthy AI
The AI RMF defines seven key attributes of trustworthy AI. Organizations developing AI should seek to achieve these attributes in order to minimize the risk of harm to the individuals who use their products and services.
- Valid and reliable
- Safe
- Secure and resilient
- Accountable and transparent
- Explainable and interpretable
- Privacy enhanced
- Fair, with harmful biases managed
Potential Impacts Needing Mitigation
The AI RMF categorizes potential harms from AI systems into three areas:
- Harm to people: Includes harm to civil liberties, rights, physical or psychological safety, or economic opportunity.
- Harm to organizations: Includes harm to business operations, security breaches, monetary loss, or reputation.
- Harm to ecosystems: Includes harm to the global financial system, supply chain, natural resources, the environment, or the planet.
Impediments to Effective AI Risk Management
The NIST AI RMF identifies four overarching impediments, or challenges, to risk management that should be taken into account when implementing an effective AI compliance program that mitigates risk and enhances trust in AI.
Risk Management Challenges
Risk Measurement
AI-related risks or failures that are not well defined or adequately understood are difficult to measure quantitatively or qualitatively. Developing strategies for measuring the risk associated with individual AI efforts is critical to effectively managing the risk of AI use across the organization.
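To make this concrete, the sketch below scores hypothetical AI use cases as the product of estimated likelihood and severity of harm, the two dimensions the framework emphasizes. The ordinal scales, use-case names, and review threshold are illustrative assumptions, not values prescribed by NIST.

```python
from dataclasses import dataclass

@dataclass
class AIRiskEstimate:
    """One risk estimate for an AI use case (scales are illustrative)."""
    use_case: str
    likelihood: int  # 1 = rare ... 5 = almost certain
    severity: int    # 1 = negligible harm ... 5 = severe harm

    def score(self) -> int:
        # Risk expressed as likelihood x severity of harm.
        return self.likelihood * self.severity

# Hypothetical use cases for illustration only.
estimates = [
    AIRiskEstimate("resume screening model", likelihood=3, severity=4),
    AIRiskEstimate("internal document search", likelihood=2, severity=2),
]

REVIEW_THRESHOLD = 10  # assumed cut-off, set by the organization's risk tolerance

for estimate in sorted(estimates, key=AIRiskEstimate.score, reverse=True):
    action = "needs review" if estimate.score() >= REVIEW_THRESHOLD else "accept and monitor"
    print(f"{estimate.use_case}: risk score {estimate.score()} -> {action}")
```

Qualitative factors that resist this kind of scoring still need to be captured; the point of the sketch is only that each AI effort gets an explicit, comparable risk estimate.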
Risk Tolerance
Different organizations may have varied risk tolerances due to their particular organizational priorities and resource considerations. Getting agreement at the most senior levels of the organization around what level of legal and operational risk is acceptable is a key part of establishing a risk management framework.
Risk Prioritization
The magnitude of risk varies based on whether AI systems interact with humans and the sensitivity of their data. Systems handling sensitive data or impacting humans directly or indirectly require higher initial prioritization, while non-human-facing systems may need less.
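One way to express this initial triage is sketched below; the tier names and rules are illustrative assumptions rather than NIST-defined categories.

```python
def initial_priority(human_facing: bool, handles_sensitive_data: bool) -> str:
    """Assign an illustrative starting priority tier to an AI system.

    Systems that interact with humans or handle sensitive data start
    higher; purely internal, non-human-facing systems start lower.
    """
    if human_facing and handles_sensitive_data:
        return "high"
    if human_facing or handles_sensitive_data:
        return "medium"
    return "low"

print(initial_priority(human_facing=True, handles_sensitive_data=True))    # high
print(initial_priority(human_facing=False, handles_sensitive_data=False))  # low
```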
Organizational Processes
AI risks cannot be managed in isolation and must be integrated into broader enterprise-wide risk management practices, such as those covering privacy and cybersecurity risks. The AI RMF complements other risk management frameworks to address shared risks like privacy concerns, environmental impacts, and system security.
Effective AI Governance - Key Functions
The core of the framework focuses on four key functions that, taken together, can help organizations address the risks of AI systems in practice. Those functions are:
- Govern
- Map
- Measure
- Manage
The Govern Function
The Govern function focuses on fostering a risk-aware culture of accountability, transparency, and trustworthiness throughout the AI lifecycle.
Govern Function Summary
- Establishes Accountability: Defines roles, responsibilities, and accountability for AI risk management within the organization.
- Policy Development: Creates and enforces policies to guide the legal, effective, and ethical development and use of AI systems.
- Oversight Structures: Implements governance mechanisms to monitor and manage AI systems across their lifecycle.
- Promotes Trustworthiness: Focuses on ensuring AI systems are robust, secure, transparent, explainable, and unbiased.
- Mitigates Risks: Addresses risks like bias, privacy breaches, and unintended consequences to enhance system reliability.
- Stakeholders: Engages internal and external stakeholders, including affected communities, for inclusive decision-making.
- Adaptability: Facilitates continuous monitoring and improvement of governance practices as risks and systems evolve.
- Alignment with Values: Ensures AI systems align with organizational goals, societal values, and regulatory requirements.
- Resource Allocation: Provides the necessary resources and support for effective AI governance and risk management.
- Foundational: Supports and enhances the effectiveness of the Map, Measure, and Manage functions in the AI RMF.
The Map Function
The Map function serves as the first step in the iterative risk management process, feeding into the Measure, Manage, and Govern functions for comprehensive oversight.
Map Function Summary
- Establishes Context: The MAP function frames the risks related to an AI system by analyzing its lifecycle and associated interdependencies.
- Situational Analysis: Identifies the stakeholders, intended users, and operational environment to ensure the AI system aligns with its goals and avoids unintended outcomes.
- Risk Identification: Focuses on recognizing potential risks throughout the AI lifecycle, including ethical, technical, and operational challenges.
- System Objectives: Ensures the AI system’s goals are well-defined and aligned with the organization's mission, ethical principles, and societal values.
- Use Cases: Explores the specific application scenarios of the AI system, considering its scope and limitations.
- Stakeholders: Engages with diverse stakeholders to incorporate their needs, concerns, and perspectives into the system’s design and operation.
- Risk Sources: Analyzes both internal (e.g., design flaws) and external (e.g., regulatory changes) sources of risk that could impact system performance or trustworthiness.
- Risk Prioritization: Helps prioritize risks based on their likelihood and potential severity to allocate resources effectively.
- System Dependencies: Evaluates dependencies on data, infrastructure, or third-party tools that could influence system reliability and risks.
The Measure Function
The Measure function assesses and monitors AI risks on an ongoing basis.
Measure Function Summary
- Approach: Uses quantitative, qualitative, or mixed-method tools to analyze, assess, and monitor AI risks and negative impacts.
- Connection to Other Functions: Builds on risks identified in the MAP function and informs the MANAGE function for decision-making.
- Pre- and Post-Deployment Testing: Ensures AI systems are tested before deployment and monitored regularly during operation.
- Metrics for Trustworthiness: Tracks metrics related to system functionality, trustworthiness, social impact, and human-AI configurations (a minimal monitoring sketch follows this list).
- Rigorous Testing: Implements robust software testing, performance assessments, benchmarks, and formalized reporting to ensure reliability.
- Independent Reviews: Encourages external evaluations to mitigate biases and conflicts of interest, improving testing effectiveness.
- Decision Basis for Trade-offs: Provides traceable data to manage trade-offs among trustworthiness characteristics and inform risk management actions.
- Scientific and Ethical Standards: Measurement methodologies should follow scientific, legal, and ethical norms, emphasizing transparency.
- Scalable and Adaptable Methods: Develops new qualitative and quantitative measurement techniques as AI risks and negative impacts evolve.
- Support for Risk Management: Enables comprehensive evaluation of AI trustworthiness, identification of risks, and integration into the MANAGE function for ongoing risk monitoring and response.
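To make ongoing measurement concrete, the sketch below checks a few hypothetical post-deployment metrics against organization-set thresholds and flags anything out of bounds for the Manage function. The metric names and limits are illustrative assumptions; the AI RMF does not prescribe specific metrics or values.

```python
# Hypothetical post-deployment metrics for a deployed model, alongside the
# thresholds an organization might set for them (illustrative values only).
observed = {
    "accuracy": 0.91,                # functionality
    "demographic_parity_gap": 0.08,  # harmful bias / fairness
    "pii_leak_rate": 0.00,           # privacy
}

thresholds = {
    "accuracy": ("min", 0.90),                # must stay at or above
    "demographic_parity_gap": ("max", 0.05),  # must stay at or below
    "pii_leak_rate": ("max", 0.00),
}

findings = []
for metric, (direction, limit) in thresholds.items():
    value = observed[metric]
    breached = value < limit if direction == "min" else value > limit
    if breached:
        findings.append(f"{metric}={value} breaches {direction} limit of {limit}")

# Anything flagged here would feed the Manage function's response plans.
print("\n".join(findings) if findings else "all measured metrics within tolerance")
```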
The Manage Function
The Manage function addresses the need to allocate resources to mapped and measured risks.
Manage Function Summary
- Purpose: Allocates resources to address mapped and measured risks, guided by the GOVERN function.
- Risk Treatment: Develops plans to respond to, recover from, and communicate about incidents and events.
- Expert Input: Uses insights from experts and AI actors to reduce the likelihood of system failures and negative impacts.
- Documentation Practices: Incorporates systematic documentation from the GOVERN, MAP, and MEASURE functions to enhance transparency and accountability.
- Emergent Risk Assessment: Establishes processes to identify and assess new or evolving risks continuously.
- Continual Improvement: Implements mechanisms for ongoing risk management enhancement.
- Risk Prioritization: Creates plans for prioritizing risks and regular monitoring for deployed AI systems.
- Resource Allocation: Allocates risk management resources based on assessed and prioritized risks.
- Adaptability: Encourages ongoing application of the MANAGE function as risks, contexts, and stakeholder expectations evolve.
- Outcome: Enhances organizational capacity to manage risks effectively across the AI system lifecycle.
NIST and Generative AI
On July 26, 2024, NIST released NIST AI 600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile. This follow-on to the initial AI RMF is a set of guidelines designed to help organizations identify and manage specific risks associated with Generative AI technologies.
The guidance specific to Generative AI calls out twelve specific risks and lays out a set of mitigations associated with them. The guidance is additive to that laid out in the original framework.
Twelve GenAI-Specific Risks
- WMDs: Accessing materially nefarious information or design capabilities related to chemical, biological, radiological, or nuclear (CBRN) weapons of mass destruction (WMD) or other dangerous materials or agents.
- Hallucinations: Creating content that is a confabulation, hallucination, or fabrication.
- Hate Speech: Exposure to hateful and disparaging or stereotyping content.
- Data privacy: Violations due to the inclusion of personally identifiable information or sensitive data.
- Environmental Impacts: Negative impacts to society brought on by the high compute resource requirements of GAI models.
- Systemic Bias: Amplification and exacerbation of historical, societal, and systemic biases.
- Anthropomorphizing: Inappropriately assigning human characteristics to a GAI system.
- Conspiracy Propagation: Exchange or consumption of content which may not distinguish fact from opinion or fiction.
- Cyber Threats: Exploitation of IT system vulnerabilities to facilitate hacking, malware, phishing, offensive cyber operations, or other cyberattacks.
- IP: Using copyrighted, trademarked, or licensed content without authorization, or eased exposure of trade secrets.
- Pornography: Obscene, degrading, and/or abusive content.
- Transparency: Non-transparent or untraceable integration of upstream third-party components.
Minimum Viable Governance
Many organizations are unsure about the best way to address NIST guidance alongside other guidance and regulations, such as the EU AI Act. They want to address guidance and regulation for AI development and use in a way that doesn't require building a unique governance framework for each regulation. This is where the concept of Minimum Viable Governance (MVG) comes in.
The MVG approach to governance focuses on right-sizing the effort involved in establishing an AI governance framework: not too much, not too little, but just enough to protect the organization while maintaining AI innovation cycles.
MVG involves three core facets:
- Establishing a governance inventory to ensure visibility into all AI usage (see the inventory sketch after this list).
- Applying lightweight controls to manage verification, evidence, and approvals without overwhelming innovation.
- Implementing streamlined reporting to achieve transparency and understand how AI is being used.
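As one example of the inventory facet, the sketch below defines a minimal record for an AI system and registers two hypothetical systems; the fields, names, and statuses are assumptions for illustration, not an MVG-mandated schema.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Minimal governance-inventory entry; all fields are illustrative."""
    name: str
    owner: str
    source: str          # "in-house" or the third-party vendor
    use_case: str
    generative: bool
    approvals: list[str] = field(default_factory=list)  # lightweight-control evidence

inventory = [
    AISystemRecord("churn-model", "data-science", "in-house",
                   "customer retention scoring", generative=False,
                   approvals=["model-review-2024-q3"]),
    AISystemRecord("support-assistant", "customer-ops", "example-vendor",
                   "drafting replies to support tickets", generative=True),
]

# Streamlined reporting: a one-line view of what AI is in use and where it stands.
for record in inventory:
    status = "approved" if record.approvals else "pending review"
    print(f"{record.name} ({record.use_case}): {status}")
```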
Govern and Scale All Your Enterprise AI Initiatives with ModelOp Center
ModelOp is the leading AI Governance software for enterprises and helps safeguard all AI initiatives — including both traditional and generative AI, whether built in-house or by third-party vendors — without stifling innovation.
Through automation and integrations, ModelOp empowers enterprises to quickly address the critical governance and scale challenges necessary to protect and fully unlock the transformational value of enterprise AI — resulting in effective and responsible AI systems.