We are a group of humans who audit AI systems.

Modern AI systems based on deep learning, reinforcement learning, or hybrids thereof constitute a flexible, complex, and often opaque technology. Limits in our understanding of an AI system’s behavior create risks of system failure. Identifying failure modes in AI systems is therefore an important prerequisite for their reliable deployment in real-world settings. We are a cooperative of humans working at the intersection of machine learning research, regulation, software development, and application domains. We design methods, processes, and standards that contribute to AI technology that can be trusted in real-world applications.

This site is the group’s interface for presenting and organizing its work. We maintain information on current projects below. We are committed to equitable collaboration: project groups form in a self-organized way and are open to interested persons from research, industry, the general public, and beyond.

Core activities