Governing and managing risks throughout the lifecycle for trustworthy AI
This report presents research and findings on accountability and risk in AI systems,
providing an overview of how risk-management frameworks and the AI system lifecycle
can be integrated to promote trustworthy AI. It also explores processes and technical
attributes that can facilitate the implementation of values-based principles for trustworthy
AI, and it identifies tools and mechanisms to define, assess, treat, and govern risks
at each stage of the AI system lifecycle.
This report builds on OECD frameworks – including the OECD AI Principles, the AI system
lifecycle, and the OECD Framework for the Classification of AI Systems – as well as recognised
risk-management and due-diligence frameworks such as the ISO 31000 risk-management standard,
the OECD Due Diligence Guidance for Responsible Business Conduct, and the US National Institute
of Standards and Technology’s AI Risk Management Framework.