Transparency creates trust
The second basic prerequisite for the safe use of AI systems is that they are transparent. In line with the ethics guidelines of the European Commission's High-Level Expert Group on Artificial Intelligence (HLEG AI), transparency is one of the key elements for realizing trustworthy AI. In contrast to the criteria used to check reliability at the algorithmic level, transparency relates exclusively to human interaction at the system level. Based on the HLEG AI guidelines, transparent AI must fulfil three points: First, the decisions made by the algorithms must be traceable. Second, it must be possible for a person to explain these decisions in terms a human can fully understand.
And third, AI systems must communicate to their human users what the algorithm is capable of and which tasks lie beyond its capabilities.
“Users will only trust AI – whether it is being used in road traffic or in manufacturing – when it is possible to test the reliability of self-learning, autonomous AI systems with standardized processes that also take ethical considerations into account,” Wu predicts. “When this trust is in place, the chasm between research and application will be narrowed.”
The study “Dependable AI – Using AI in safety-critical industrial applications” is available for download at: https://www.ki-fortschrittszentrum.de/de/studien/zuverlaessige-ki.html