Responsible AI


Course Description


This 3-day course is designed to help participants understand the core principles of building and deploying AI responsibly. Covering key topics such as algorithmic bias, fairness, transparency, accountability, and explainability, the course provides practical strategies and tools for evaluating and improving AI systems. Participants will learn how to identify ethical risks, implement mitigation techniques, and use explainability frameworks to ensure AI systems align with human values, legal standards, and organizational goals.


Duration: 3 Days

Format: Instructor-led, interactive sessions with discussions, case studies, tool demonstrations, and ethical design labs



Course Outline


Day 1: Foundations of Responsible AI and Ethical Frameworks

Session 1: What is Responsible AI?


  • Definition and importance of Responsible AI
  • Overview of responsible AI principles: fairness, transparency, accountability, privacy, and safety
  • Key global frameworks: OECD, UNESCO, EU AI Act, NIST AI RMF


Session 2: Ethics in AI Design and Deployment


  • The role of ethics in the AI lifecycle
  • Value-sensitive design
  • Stakeholder impact assessment
  • Ethical dilemmas and risk trade-offs


Session 3: Identifying and Understanding Bias


  • Types of bias: data, model, labeling, and deployment bias
  • Sources and causes of bias in machine learning pipelines
  • Fairness metrics: demographic parity, equal opportunity, and equalized odds (see the short sketch after this list)
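
To make these metrics concrete, here is a minimal Python sketch of how demographic parity and equal opportunity can be computed from model predictions and a protected-attribute column. The function names and the synthetic data are illustrative, not part of the course materials.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Difference in positive-prediction rates between two groups.

    Demographic parity holds when P(y_hat = 1 | group) is equal across
    groups, so a value near 0 indicates parity under this metric.
    """
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_diff(y_true, y_pred, group):
    """Difference in true-positive rates between two groups.

    Equal opportunity compares P(y_hat = 1 | y = 1, group) across groups,
    i.e. whether qualified individuals are treated alike.
    """
    tpr_0 = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_1 = y_pred[(group == 1) & (y_true == 1)].mean()
    return tpr_0 - tpr_1

# Toy example on synthetic data (illustrative only)
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)   # protected attribute, encoded 0/1
y_true = rng.integers(0, 2, size=1000)  # ground-truth labels
y_pred = rng.integers(0, 2, size=1000)  # model predictions

print("Demographic parity diff:", demographic_parity_diff(y_pred, group))
print("Equal opportunity diff:", equal_opportunity_diff(y_true, y_pred, group))
```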


Lab Activities:


  • Ethical scenario analysis: Group discussion and decision-making
  • Bias audit using sample datasets
  • Map AI stakeholders and their responsibilities


Day 2: Explainability and Fairness in Practice

Session 1: Explainable AI (XAI) Concepts


  • What is explainability? Why does it matter?
  • Post-hoc vs. intrinsic explainability
  • Model transparency and interpretability trade-offs


Session 2: Tools for Explainability and Fairness


  • Hands-on with LIME, SHAP, and Google's What-If Tool (a SHAP sketch follows this list)
  • Fairness evaluation with IBM AI Fairness 360
  • Visualizing model decisions and feature importance
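
As a preview of the hands-on session, a minimal SHAP workflow might look like the sketch below. The dataset, model, and parameter choices are illustrative assumptions rather than prescribed course materials, and plotting details vary slightly across SHAP versions.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple regressor on a dataset bundled with scikit-learn
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Local explanation: per-feature contributions to the first prediction
print(dict(zip(X.columns, shap_values[0].round(2))))

# Global view: which features drive predictions overall (opens a plot)
shap.summary_plot(shap_values, X)
```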


Session 3: Building Trust in AI Systems


  • Human-AI interaction and trust
  • Communicating AI decisions to non-technical stakeholders
  • Explainability for high-risk domains (health, finance, law enforcement)


Lab Activities:


  • Use SHAP or LIME to explain a model prediction
  • Evaluate a trained model using AI Fairness 360 (see the sketch after this list)
  • Create an explainability report for a non-technical audience
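
A minimal sketch of the AI Fairness 360 step, assuming a small pandas DataFrame with a binary label and a binary protected attribute; the column names and group encodings are illustrative. (Comparing true labels against model predictions works similarly via aif360's ClassificationMetric.)

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Tiny illustrative dataset: 'sex' is the protected attribute (1 = privileged)
df = pd.DataFrame({
    "sex":   [1, 1, 0, 0, 1, 0, 1, 0],
    "label": [1, 0, 1, 0, 1, 0, 1, 1],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact: ratio of favorable-outcome rates (1.0 = parity);
# values below 0.8 are often flagged under the "four-fifths rule"
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```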


Day 3: Governance, Accountability, and Project Work

Session 1: AI Governance and Compliance


  • Policies, standards, and auditing AI systems
  • Internal governance frameworks: model cards and datasheets (a skeleton follows this list)
  • Legal compliance: GDPR, the EU AI Act, and industry-specific regulations
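
To make the model-card item concrete, here is a minimal skeleton expressed as a Python dict. The field layout loosely follows the structure proposed in "Model Cards for Model Reporting", and every value below is a placeholder.

```python
# Minimal model-card skeleton; every value is a placeholder to be filled
# in during governance review (all field names here are illustrative)
model_card = {
    "model_details": {
        "name": "credit-scoring-v2",   # hypothetical model name
        "version": "2.0.1",
        "owners": ["ml-platform-team"],
    },
    "intended_use": {
        "primary_uses": ["pre-screening loan applications"],
        "out_of_scope": ["final lending decisions without human review"],
    },
    "factors": {
        "protected_attributes_evaluated": ["sex", "age_band"],
    },
    "metrics": {
        "accuracy": None,                        # from evaluation runs
        "demographic_parity_difference": None,
        "equal_opportunity_difference": None,
    },
    "ethical_considerations": "Known risks and mitigations documented here.",
    "caveats": "Not validated for deployment outside the training region.",
}
```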


Session 2: Designing Ethical AI Systems


  • Ethics-by-design principles
  • Embedding fairness and accountability in data collection and model design
  • Risk mitigation strategies


Session 3: Capstone Project + Presentations


  • Develop a responsible AI framework for a use case (e.g., hiring, lending, healthcare)
  • Present the framework, including ethical considerations, mitigation strategies, and an explainability approach
  • Group reflection and peer feedback


Lab Activities:


  • Build a governance checklist for an AI project
  • Analyze a real-world AI failure case and suggest responsible redesign
  • Present capstone project and receive peer evaluation