Artificial intelligence delivers high performance in a wide range of applications. Nevertheless, because its decisions are difficult to interpret, it is rarely used in critical applications (e.g., in the medical industry). To change this, one of the department's groups focuses on methods for explaining and verifying AI systems, as well as for quantifying their uncertainty.
Explainability refers to the ability to comprehend the inner workings of an AI system and its decision-making process, for example by understanding its internal logic or a specific decision. Explanations can be provided globally, i.e., for the model as a whole, or locally, for a specific data instance. In some cases, explainability also involves recommendations for action in the form of conditional "if-then" statements. Last but not least, the explanation of an AI system must be tailored to its target audience: a developer often needs a different explanation than the end user.
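As a rough illustration of the global versus local distinction, the sketch below explains a generic tabular classifier in two ways: permutation importance for the model as a whole, and a simple occlusion-style check for a single instance. The dataset, model, and explanation techniques are assumptions chosen for the example, not the group's actual tooling.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative data and model; any tabular classifier would do.
data = load_breast_cancer()
X, y, feature_names = data.data, data.target, data.feature_names
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Global explanation: which features matter for the model as a whole?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top_global = np.argsort(result.importances_mean)[::-1][:5]
print("Globally most important features:", list(feature_names[top_global]))

# Local explanation: which features drive the prediction for ONE instance?
# Occlusion-style check: replace each feature by its training mean and
# record how much the predicted probability for class 1 changes.
x = X_test[:1]
baseline = model.predict_proba(x)[0, 1]
local_effect = np.zeros(X.shape[1])
for j in range(X.shape[1]):
    x_perturbed = x.copy()
    x_perturbed[0, j] = X_train[:, j].mean()
    local_effect[j] = baseline - model.predict_proba(x_perturbed)[0, 1]
top_local = np.argsort(np.abs(local_effect))[::-1][:5]
print("Locally most influential features:", list(feature_names[top_local]))
```

The global ranking addresses questions a developer might ask about the model overall, while the local effects explain one concrete decision, the kind of explanation an end user is more likely to need.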
Verification is concerned with assessing the safety-relevant properties and robustness of an existing AI model; for example, it can be checked how robustly an AI system behaves in the presence of faults or perturbed inputs. Uncertainty quantification measures the uncertainty in processes or data so that it can be used directly in algorithms. This improves decision-making and makes it possible to plan and predict situations with inherent uncertainty more effectively and reliably.
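To make these two notions slightly more concrete, the following sketch shows two simple, generic checks on an illustrative classifier: a perturbation test that measures how many predictions flip under small input noise (a crude robustness probe), and a bootstrap ensemble whose disagreement serves as a per-instance uncertainty estimate. The data, model, and noise level are assumptions made for the example, not the group's actual methods.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# (1) Robustness probe: fraction of test predictions that change
#     when small Gaussian noise is added to the inputs.
rng = np.random.default_rng(0)
noise = 0.05 * rng.standard_normal(X_test.shape)
flips = np.mean(model.predict(X_test) != model.predict(X_test + noise))
print(f"Predictions changed by small perturbations: {flips:.1%}")

# (2) Uncertainty quantification: train an ensemble on bootstrap samples
#     and use the spread of predicted probabilities as an uncertainty estimate.
probs = []
for seed in range(10):
    idx = rng.integers(0, len(X_train), len(X_train))  # bootstrap resample
    m = GradientBoostingClassifier(random_state=seed).fit(X_train[idx], y_train[idx])
    probs.append(m.predict_proba(X_test)[:, 1])
uncertainty = np.array(probs).std(axis=0)  # high spread = the ensemble disagrees
print("Most uncertain test instances:", np.argsort(uncertainty)[::-1][:5])
```

Instances with high ensemble disagreement are the ones where a downstream decision process should be most cautious, which is exactly the kind of signal uncertainty quantification is meant to provide.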
More information on reliable AI is available on the AI lead topic web page.