Module 05 · AI Governance
AI Governance Framework
AI adoption is outpacing AI governance in most organizations. The result: unmanaged risk, regulatory exposure, and shadow AI proliferating across business units.
A structured governance framework turns AI from an uncontrolled experiment into a managed, auditable capability.
The Urgency
Why AI governance matters now
Regulatory
Laws are here
The EU AI Act entered into force in 2024. US Executive Order 14110 on Safe, Secure, and Trustworthy AI. China's Interim Measures for Generative AI Services. Canada's proposed AIDA. Non-compliance penalties can exceed GDPR fines — up to 7% of global turnover under the EU AI Act.
Operational
Shadow AI is real
Employees use ChatGPT, Copilot, and dozens of AI tools daily — often without IT knowledge. Sensitive data flows to third-party models with no oversight or data processing agreements.
Reputational
Failures are public
Biased hiring algorithms, hallucinating customer-facing chatbots, and AI-generated misinformation create headlines. Governance failures become brand crises overnight.
The question is not whether you need AI governance — it's whether you build it proactively or reactively after an incident.
EU AI Act
EU AI Act risk tiers
| Risk tier | Examples | Requirements |
|---|---|---|
| Unacceptable | Social scoring, real-time biometric mass surveillance, manipulation of vulnerable groups | Banned entirely within the EU |
| High risk | Credit scoring, hiring/recruitment, critical infrastructure, law enforcement, education | Conformity assessments, risk management systems, human oversight, data governance, transparency |
| Limited risk | Chatbots, AI-generated content, emotion recognition | Transparency obligations — users must know they are interacting with AI |
| Minimal risk | Spam filters, AI in games, inventory management | No specific requirements (voluntary codes of conduct encouraged) |
Most enterprise AI deployments fall into high risk or limited risk. Classify every AI system by risk tier before deployment — this determines your compliance obligations.
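A risk-tier lookup can be automated as a first-pass triage step. The sketch below is illustrative, not legal advice: the use-case categories and the default-to-high-risk policy are assumptions, and any real classification must be validated against the Act itself.

```python
# Illustrative first-pass triage: map AI use-case categories to EU AI Act
# risk tiers. Categories and tier labels are simplified assumptions.
RISK_TIERS = {
    "social_scoring": "unacceptable",
    "credit_scoring": "high",
    "hiring": "high",
    "critical_infrastructure": "high",
    "chatbot": "limited",
    "spam_filter": "minimal",
}

def classify(use_case: str) -> str:
    """Return the presumed risk tier for a use-case category.

    Unknown categories default to 'high' so that unclassified systems
    receive the strictest practical review rather than slipping through.
    """
    return RISK_TIERS.get(use_case, "high")

print(classify("hiring"))        # high
print(classify("chatbot"))       # limited
print(classify("new_use_case"))  # high — unclassified defaults to strict review
```

Defaulting unknowns to the strict tier mirrors the guidance above: classification happens before deployment, never after.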
NIST AI RMF
NIST AI Risk Management Framework
The NIST AI RMF (AI 100-1) provides a voluntary, flexible framework organized around four core functions:
Function 1
Govern
Establish AI risk management culture, policies, and accountability structures. Define roles, responsibilities, and risk tolerances. This function is cross-cutting — it enables the other three.
Function 2
Map
Identify and contextualize AI risks. Understand the AI system's purpose, stakeholders, and operating environment. Catalog potential harms and benefits across the lifecycle.
Function 3
Measure
Assess identified risks using quantitative and qualitative methods. Track metrics for bias, accuracy, robustness, and security. Establish benchmarks and monitoring thresholds.
Function 4
Manage
Prioritize and act on risks. Implement controls, mitigations, and response plans. Continuously monitor and improve. Communicate residual risk to stakeholders.
The NIST AI RMF maps cleanly to existing NIST cybersecurity frameworks (CSF, SP 800-53), making it the natural choice for security teams already using NIST.
Discovery & Inventory
Shadow AI and AI system inventory
Problem
Shadow AI discovery
Survey business units for AI tool usage. Monitor network traffic for AI API endpoints (api.openai.com, etc.). Review SaaS contracts for embedded AI features. Check browser extensions and desktop apps. Analyze expense reports for AI subscriptions. You cannot govern what you cannot see.
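The network-monitoring step can be partially automated by scanning proxy or DNS logs for known AI API endpoints. This is a minimal sketch under stated assumptions: the domain list is a small sample, and the one-line `"<user> <domain>"` log format is hypothetical — adapt both to your actual telemetry.

```python
# Hypothetical sketch: flag requests to known AI API endpoints in log lines.
# The domain list is a sample and the log format ("<user> <domain>") is an
# assumption; real proxy/DNS logs will need their own parser.
AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_ai_traffic(log_lines):
    """Yield (user, domain) pairs for requests that hit known AI endpoints."""
    for line in log_lines:
        user, _, domain = line.partition(" ")
        if domain in AI_DOMAINS:
            yield user, domain

logs = [
    "alice api.openai.com",
    "bob intranet.example.com",
    "carol api.anthropic.com",
]
print(list(find_ai_traffic(logs)))  # [('alice', 'api.openai.com'), ('carol', 'api.anthropic.com')]
```

A domain blocklist alone misses embedded AI features in SaaS tools, which is why the survey, contract review, and expense-report steps above remain necessary.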
Solution
AI system inventory
Catalog every AI system: vendor, model type, data inputs, data outputs, risk tier, business owner, data classification, and deployment date. Include AI features embedded in existing tools (CRM AI, email AI assistants). Maintain a living register — updated quarterly at minimum.
| Inventory field | Why it matters |
|---|---|
| Data classification | Determines which data can flow to which AI systems |
| Risk tier (EU AI Act) | Drives compliance obligations and assessment depth |
| Business owner | Accountability — someone must own the risk |
| Third-party model? | Supply chain risk and data processing agreements required |
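A living register can start as a simple structured record mirroring the fields above. The sketch below is one possible shape — field names, tier values, and the example entry are all illustrative assumptions, not a prescribed schema.

```python
# Minimal inventory-record sketch mirroring the catalog fields above.
# Field names, tier values, and the sample entry are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    model_type: str
    risk_tier: str            # EU AI Act tier: unacceptable/high/limited/minimal
    data_classification: str  # e.g. public / internal / confidential
    business_owner: str       # accountable individual, not a team alias
    third_party_model: bool   # triggers DPA and supply-chain review
    deployment_date: date

register = [
    AISystemRecord("Resume screener", "AcmeAI", "LLM", "high",
                   "confidential", "j.doe", True, date(2024, 6, 1)),
]

# Quarterly review: surface high-risk third-party systems first.
priority = [r.name for r in register
            if r.risk_tier == "high" and r.third_party_model]
print(priority)  # ['Resume screener']
```

Even a flat list like this makes the quarterly review queryable — which systems are high risk, third party, and handling confidential data — instead of a spreadsheet hunt.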
Responsible AI
Responsible AI principles
Principle
Fairness
Test for bias across protected characteristics. Monitor model outputs for disparate impact. Document training data demographics and known limitations.
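One common disparate-impact heuristic is the four-fifths rule: the selection rate for any group should be at least 80% of the highest group's rate. The rule choice and the sample outcomes below are assumptions for illustration; real fairness audits need larger samples and multiple metrics.

```python
# Sketch of a disparate-impact check using the four-fifths rule — a common
# fairness heuristic chosen here as an illustration, not mandated above.

def selection_rate(outcomes):
    """Fraction of positive outcomes (1 = selected) in a group."""
    return sum(outcomes) / len(outcomes)

def four_fifths_check(group_a, group_b, threshold=0.8):
    """Return (ratio, passes): ratio of the lower selection rate to the higher.

    A ratio below the threshold signals potential disparate impact and
    should trigger deeper review, not an automatic verdict.
    """
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    ratio = min(ra, rb) / max(ra, rb)
    return ratio, ratio >= threshold

# Hypothetical hiring-model outputs for two demographic groups:
ratio, ok = four_fifths_check([1, 1, 0, 1, 0], [1, 0, 0, 0, 0])
print(round(ratio, 2), ok)  # 0.33 False — flags the model for review
```

A failed check is a signal for deeper review (documented per the principle above), not proof of bias on its own.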
Principle
Transparency
Explain how AI systems make decisions. Provide model cards and system documentation. Disclose AI use to affected individuals. Enable meaningful human review.
Principle
Accountability
Assign clear ownership for each AI system. Establish escalation paths for AI failures. Maintain audit trails. Define liability for AI-driven decisions.
Principle
Safety & Security
Red-team AI systems before deployment. Implement kill switches. Monitor for adversarial attacks. Plan for model degradation over time.
Principle
Privacy
Minimize data collection for AI training. Enforce data retention policies. Implement differential privacy where feasible. Honor data subject rights for AI-processed data.
Principle
Human oversight
Humans in the loop for high-stakes decisions. Clear override mechanisms. Prevent automation complacency. Regular human validation of AI outputs.
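Human-in-the-loop routing can be enforced as a simple gate in the serving path. The decision labels, confidence threshold, and routing values below are hypothetical — the point is that high-stakes outputs never bypass review, regardless of model confidence.

```python
# Sketch of a human-in-the-loop gate: route high-stakes or low-confidence
# AI outputs to manual review. Labels and the 0.95 threshold are assumptions.

def route(decision: str, confidence: float, high_stakes: bool) -> str:
    """Return 'auto' or 'human_review' for an AI-produced decision.

    High-stakes decisions always go to a human, regardless of confidence,
    which guards against automation complacency.
    """
    if high_stakes or confidence < 0.95:
        return "human_review"
    return "auto"

print(route("approve_loan", 0.99, high_stakes=True))   # human_review
print(route("tag_spam", 0.98, high_stakes=False))      # auto
```

Forcing the high-stakes branch to ignore confidence is deliberate: a confident model is exactly the case where reviewers are tempted to rubber-stamp.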
Key Takeaway
Building your AI governance program
Phase 1
Discover and inventory. Find every AI system in use — sanctioned and shadow. Classify by risk tier. Assign business owners. You cannot govern what you have not cataloged.
Phase 2
Establish policies. Acceptable use policy for generative AI. Data classification rules for AI inputs. Procurement requirements for AI vendors. Incident response procedures for AI failures.
Phase 3
Implement controls. Pre-deployment risk assessments (aligned to NIST AI RMF). Bias testing and fairness audits. Security testing including adversarial red teaming. Human oversight requirements by risk tier.
Phase 4
Monitor and improve. Continuous monitoring of AI system performance and drift. Regular reassessment as regulations evolve. Board-level reporting on AI risk posture. Feedback loops from incidents to policy updates.
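Continuous monitoring for drift can start with a rolling-accuracy alert. The metric, window size, and threshold below are assumptions — production monitoring would track several metrics (bias, robustness, input distribution) per the Measure function above.

```python
# Illustrative drift monitor: alert when rolling accuracy over a fixed
# window drops below a threshold. Metric, window, and threshold are
# assumptions; real monitoring tracks multiple signals.
from collections import deque

class DriftMonitor:
    def __init__(self, window: int = 100, threshold: float = 0.9):
        self.results = deque(maxlen=window)
        self.threshold = threshold

    def record(self, correct: bool) -> bool:
        """Record one prediction outcome; return True if an alert fires.

        Alerts only fire once the window is full, to avoid noisy alarms
        from a handful of early samples.
        """
        self.results.append(correct)
        accuracy = sum(self.results) / len(self.results)
        return len(self.results) == self.results.maxlen and accuracy < self.threshold

mon = DriftMonitor(window=4, threshold=0.75)
alerts = [mon.record(ok) for ok in [True, True, True, False, False]]
print(alerts)  # [False, False, False, False, True]
```

Each alert here would feed the incident-to-policy feedback loop described above: an alert triggers reassessment, and repeated alerts trigger a policy or model update.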
Remember this
AI governance is not about slowing innovation — it's about scaling AI safely. Organizations with strong governance deploy AI faster because they have clear guardrails, not endless debates about risk.