Skip to content

Chapter 22: Mitigation Tools

Image 22 - Mitigation Tools

Overview

Now that we've identified potential risks, we can implement ways to mitigate.

Mitigation Strategy

After mapping harms and measuring their presence, mitigation becomes systematic:

  • Layer Defenses: Apply multiple mitigation layers (model, safety system, prompt, UX)
  • Prioritize by Impact: Focus on high-severity, high-probability risks first
  • Test Effectiveness: Measure whether mitigations actually reduce harm
  • Iterate: Refine mitigations based on ongoing evaluation results
  • Document Decisions: Maintain clear records of what was mitigated and why

Azure AI Foundry Mitigation Tools

Azure provides comprehensive mitigation capabilities across all layers:

  • Model Selection: Choose models with built-in safety training
  • Content Safety: Configurable filters for inputs and outputs
  • Prompt Engineering: System message templates with safety guidelines
  • Grounding: RAG with Azure AI Search to reduce hallucinations
  • UX Controls: Rate limiting, disclaimers, human-in-the-loop

With Azure AI Foundry and GitHub working together, you can implement, test, and deploy mitigations confidently—knowing that each layer is measurable, auditable, and continuously improvable.

Next Steps

Continue your learning journey:

← Chapter 21 | Chapter 23 →


Questions or feedback? Join the discussion on our GitHub repository or connect with the community.