Trustworthy AI


We aim to act responsibly by creating and promoting AI that is lawful, ethical, inclusive, and robust, and by ensuring the protection of fundamental (human) rights and user safety.

To that end, this document gathers a set of concrete guidelines structured around 7 principles of AI trustworthiness, primarily identified by the European Union (EU). We tackle AI trustworthiness by complying with these principles because we believe they provide a relevant overview of all the components involved in developing an AI system.

The primary audience of this document is AI practitioners, since it conveys a set of technical guidelines. However, each concept is introduced at a high level before being tackled at a lower, more technical level. We therefore strongly encourage non-AI practitioners to read it as well.

Going further

This document acts as a practical resource on AI trustworthiness, especially dedicated to AI practitioners (advice, practical methodologies, …).

Some broader questions still need to be answered when describing, evaluating, and regulating the socio-technical world surrounding AI:

  • What are our dependencies?

  • How do we make sense of our activity?

  • What do we participate in?

  • What shifts does our activity produce?
