BSI White Paper - Transparency of AI Systems (2024)
White paper
Document history
Version | Date | Editor | Description
1.0 | 01.07.2024 | Dr. Oliver Müller and Veronika Lazar |
Table of Contents
1 Introduction
2 Definition
2.1 Elements
2.1.1 AI system
2.1.2 Ecosystem
2.1.3 Information
2.1.4 Life cycle
2.1.5 Needs and objectives
2.1.6 Stakeholders
3 Discussion
3.1 Approach and procedure
3.2 The aim of transparency
3.3 Transparency requirements in the AI Regulation
3.4 Opportunities through transparency
3.5 Dangers of transparency
4 Conclusions
Bibliography
1 Introduction
In this white paper, we present a definition of transparency for information technology systems that have
integrated artificial intelligence (AI). The aim of this publication is to develop a common understanding of the
term transparency and to highlight the relevance of transparency for various stakeholders and the BSI.
Therefore, the paper is addressed to all stakeholders of AI systems and is intended to show, among other things,
that different stakeholders may also have different transparency requirements.
1.1 Motivation
AI has now established itself as a digital tool in both the private and professional sectors. Whether it's
determining personal calorie needs using a smartwatch, automatically forwarding calls to improve the customer
experience, or detecting suspicious activity on computer networks: AI is omnipresent, and the list of examples of
possible areas of application is constantly growing. This has been made possible by the amounts of data (big data) now available for training, testing and validating AI models, as well as by hardware resources that provide the corresponding computing power. With these increased technical possibilities, the demand for AI-based solutions - in particular to increase efficiency and productivity - has grown as well. This
demand is currently met by an increasing number of AI start-ups. The result is a constantly growing number of
available AI systems, whose technologies are rapidly evolving and increasing in complexity.
On an abstract level, most of these systems operate in a black-box manner: only the inputs to and the outputs of the system are visible from outside (see, for example, Ribeiro et al., 2016). How the system arrives at its output usually remains unclear and is often not comprehensible. Moreover, system outputs often lack explainability, which makes them difficult to verify without expert knowledge. The increasing complexity of AI systems, combined with poor or missing information about them, makes both an assessment at a glance and an assessment of the system's trustworthiness difficult.
Due to the techniques used in the development of AI systems, additional information, such as information on
training data, also becomes relevant. For example, the origin and quality of the training data must be assessed
before AI systems are used in order to minimise the risk of poisoning attacks, in which attackers manipulate the
training data set used by a machine learning model.
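To make the poisoning scenario concrete, the following minimal sketch (our illustration, not part of the original paper; it assumes Python with NumPy and scikit-learn installed) simulates an attacker flipping a fraction of the training labels and compares the resulting model with one trained on clean data.

```python
# Minimal illustration of a data poisoning (label-flipping) attack.
# All data here is synthetic; the point is only that manipulated training
# data silently degrades the trained model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker flips 30% of the training labels before training takes place.
rng = np.random.default_rng(seed=0)
flip = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("accuracy with clean labels:   ", clean_model.score(X_test, y_test))
print("accuracy with poisoned labels:", poisoned_model.score(X_test, y_test))
```

Without information about the origin and integrity of the training data, a user of the poisoned model has no way of detecting this degradation from the outside.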
In summary, these factors require the development and use of AI systems that enable appropriate traceability
and explainability. Both often go hand in hand with the related criterion of transparency (see Section 2).
Transparency is thematically embedded in the broad field of trustworthiness of AI systems. The different criteria
cannot be clearly separated from one another, but each focuses on a different subject area. This overlap is shown in
Figure 1. This white paper deals with the topic of transparency of AI systems.
In the following, we define the concept of transparency in the context of AI systems and address the individual
elements of the definition. We then discuss our approach and procedure, make reference to the transparency
requirements in the EU AI Act of the European Parliament and of the Council and shed light on the opportunities
and risks of transparent AI systems.
Figure 1: Venn diagram to clarify the relationship between transparency, explainability and traceability in the
context of the trustworthiness of AI systems. The different areas overlap, but each topic has its own special focus.
2 Definition
Transparency of AI systems is the provision of information about the entire life cycle of an AI system and its
ecosystem. Transparency promotes the accessibility of information that enables all stakeholders to assess the system with regard to their different needs and objectives.
2.1 Elements
The above definition is based on the presentation of the transparency concept in (OECD, 2019) and (BSI, 2021a).
It is compliant with the transparency requirements in the EU AI Act (see section 3.3 for details) and represents
the position of the BSI. In the following subsections, the individual elements of the definition are described in
more detail and their relationship is illustrated graphically in Figure 2.
2.1.1 AI system
The EU AI Act, which governs the regulation of artificial intelligence, defines an AI system as “a machine-based
system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after
deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate
outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual
environments” (cf. Article 3 EU AI Act). In its own definition, the BSI explicitly includes the hardware component and defines AI systems as software and hardware systems that utilise artificial intelligence in order to behave "rationally" in the physical or digital dimension. Based on their perception and analysis of their environment, these systems act with a certain degree of autonomy in order to achieve certain goals (BSI, 2021b). The technology integrated into these systems, known as AI, draws on different disciplines such as machine learning, inference and robotics; the techniques used in AI systems include, for example, expert systems and neural networks. This is not an exhaustive list, but is intended to show the abundance of different techniques. An equally
wide range becomes clear when considering the different functionalities of AI systems, ranging from simple to
highly complex tasks. AI systems can perform tasks such as pattern recognition, classifications, forecasts,
recommendations, natural language processing or computer vision and can also be combined with each other in
a variety of ways. In addition, AI can be implemented in the systems in different ways depending on the
requirements and objectives of the application. On the one hand, it can be developed and used as a separate
application and represent the primary function of the system, as is the case, for example, in chatbot applications.
On the other hand, it can also be integrated into existing systems, for example in order to expand their
functionality and/or increase performance in background processes. Considering the way AI systems are
implemented, the degree of automation can also vary greatly. While some systems only use the output of AI as
recommendations and require humans as the final decision-making authority, other (sub-)systems autonomously
implement the decisions and classifications of AI without further human action. Overall, it can be summarised
that there is not one AI system, but rather a wealth of different techniques, functionalities and forms of
implementation.
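The following sketch (our illustration; all names are hypothetical and the fraud check merely serves as an example) contrasts the two degrees of automation described above: an AI output used only as a recommendation versus one that is implemented autonomously.

```python
# Illustrative contrast of two degrees of automation. 'ai_score' is a
# placeholder for an arbitrary AI model's risk score in [0, 1].

def ai_score(transaction: dict) -> float:
    """Placeholder for an AI model's output; fixed here for illustration."""
    return 0.87

def recommendation_only(transaction: dict) -> str:
    # The AI output is merely a recommendation; a human remains the
    # final decision-making authority.
    score = ai_score(transaction)
    if score > 0.8:
        return f"flag for human review (score={score:.2f})"
    return "no action suggested"

def fully_automated(transaction: dict) -> str:
    # The (sub-)system implements the AI decision without further human action.
    return "blocked" if ai_score(transaction) > 0.8 else "approved"

tx = {"amount": 9500, "country": "XY"}
print(recommendation_only(tx))
print(fully_automated(tx))
```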
2.1.2 Ecosystem
In this paper, the term ecosystem refers to the context in which an AI system is developed, deployed and
operated. The information regarding the ecosystem of an AI system goes beyond the actual AI system and
should, for example, also include details about the provider (e.g. location, contact details) or the development
process of the system. The term should also include the entire supply chain of the AI system. The decision to
include information about the ecosystem of an AI system in the definition of transparency is based on the fact
that there is a (conditional) dependency between the actual AI system and its ecosystem. For example, if the AI
system is developed and operated outside the European Union in a third country, corresponding questions and challenges arise with regard to the underlying level of IT security and data protection. This meta-information can support a well-founded assessment of the situation as well as an informed decision by
stakeholders.
2.1.3 Information
Information is the basis for the knowledge stakeholders need to form an assessment of the AI system and its ecosystem. It must be disclosed and made accessible so that it is available for this purpose. In addition, information must be relevant and appropriate for gaining knowledge. Section 3.3 explains the
transparency requirements in the EU AI Act and sets out the minimum information to be disclosed by providers
and operators of certain AI systems.
Based on the plan, the AI system is developed, implemented, tested and validated. If planned transparency
and response measures are considered, implemented and tested right from the start, this is referred to as
transparency by design. The response measures allow the AI system and/or the information situation to be
readjusted if transparency is not achieved to the desired extent.
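As a thought experiment, transparency by design could be supported by machine-readable transparency information maintained from the first development phase onwards. The following sketch shows one possible shape of such a record; the schema and all field names are our own illustrative assumptions (loosely inspired by model cards, cf. Liang et al., 2024), not a format prescribed by the BSI or the EU AI Act.

```python
# Hypothetical transparency record maintained from the planning phase onwards.
# The schema and field names are illustrative assumptions, not a prescribed format.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TransparencyRecord:
    system_name: str
    provider: str                       # ecosystem information (Section 2.1.2)
    provider_location: str
    intended_purpose: str
    training_data_sources: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)
    degree_of_automation: str = "recommendation-only"

record = TransparencyRecord(
    system_name="ExampleChatbot",
    provider="Example GmbH",
    provider_location="EU",
    intended_purpose="answering customer support questions",
    training_data_sources=["licensed support transcripts, 2020-2023"],
    known_limitations=["no legal or medical advice"],
)

# Disclosing the record alongside the system makes the information accessible
# to stakeholders and allows it to be re-validated after each retraining.
print(json.dumps(asdict(record), indent=2))
```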
If the AI system is retrained, further measures become relevant to ensuring transparency. These measures then relate, for example, to any new training data sets used and are particularly relevant to the topics of discrimination/bias and suitability.
2.1.4.6 Decommissioning
There are two options for decommissioning legacy systems: (i) shutting down the legacy system without a successor or (ii) migrating to a new system. In either case, it must be made transparent how data that is no longer required, or that is being migrated, will be handled and what changes and consequences the decommissioning will entail for the stakeholders. In addition, the stakeholders must also have options for responding here, e.g. to assert their
rights as data subjects.
2.1.6 Stakeholders
The term "stakeholder" refers to all parties who are either indirectly (e.g. through impacts) or directly (e.g.
through application) affected by an AI system or who interact with the system (e.g. developers). These can be
individual persons or groups of people. A stakeholder does not have to play an ‘active’ role. An overview of
possible different stakeholders and their relation to AI systems is presented in Table 2. While consumers and
users usually only use an AI system, experts, developers and companies/organisations may additionally provide one. Indirectly affected persons/third parties neither provide an AI system nor use
it. Nevertheless, they can be affected by the impact and thus become (passive) stakeholders. The presented list
of different stakeholders does not claim to be exhaustive and can be refined as desired. However, the chosen
representation is sufficient to show that there may be different interests with regard to an AI system, which can be reflected, among other things, in different transparency requirements - such as the type or level of detail of the information provided. Therefore, the various stakeholders must be taken into account
when defining the concept of transparency.
Table 2: Exemplary presentation of possible stakeholders of AI systems and their different requirements for transparency
due to different interests. Characters used: ‘+’ (applies), ‘-’ (does not apply).
3 Discussion
3.1 Approach and procedure
The existence of already published definitions of the concept of transparency raises the question of whether a further definition is needed. The sheer breadth of definitions on offer ultimately reflects the different requirements for transparency depending on the stakeholders and the area of application. In order to provide a basis for further work of the BSI with a focus on broad stakeholder groups and generic areas of application, it was decided not to adopt an existing definition.
In addition, the speed at which technologies in the AI sector are developing is enormous. This harbours the risk
that definitions, once established, may lose their validity, especially if they are too specific. In order to keep pace
with technical progress and to avoid constant renewal and adaptation of the definition to the current state of the
art, the definition of transparency presented in this white paper is as technology-neutral and future-proof as
possible. On the one hand, it should be easy to understand, cover all relevant aspects of transparency and at the
same time be open enough to allow individual interpretation depending on the respective stakeholder and the AI
technology used. On the other hand, it should serve as a generic basis for future work of the BSI in this area.
Furthermore, a holistic approach was taken to the definition. This is illustrated in Figure 2: transparency includes
both the provision of information about the AI system itself and about its ecosystem, such as the supply chain of
the AI system or details about the provider. The weighting of the information provided is the responsibility of the
respective stakeholder.
Figure 2: Schematic representation of the holistic transparency approach using the individual elements from the
definition: in addition to information on the AI system itself that is relevant and appropriate for the stakeholders,
information on its ecosystem is also provided/disclosed. This promotes a valid assessment of the suitability and
appropriateness of the AI system by the stakeholders.
The transparency requirements in the AI Regulation are in line with the definition of transparency presented in this white paper, as information about the ecosystem is included.
Apart from the specific transparency requirements, the purpose of the AI Regulation is set out in Article 1(1). In
addition to improving the functioning of the internal market, the AI Regulation aims to promote “the
introduction of human-centric and trustworthy” AI. At the same time, it aims to ensure a high level of protection
against the harmful effects of AI systems in the EU. The definition of transparency presented in this white paper
must not run counter to these objectives. The provision of information about the AI system and its ecosystem is
intended to help increase the trustworthiness of an AI system. In addition, transparency can facilitate the
interaction of different actors/stakeholders along the supply chain, which in turn can benefit the functioning of
the internal market. At the same time, transparency must not enable attackers to misuse AI systems through the disclosure of certain (e.g. security-relevant) information (more on this in Section 3.5). The EU AI Act also specifies that incidents or malfunctions related to AI shall not result in an AI system endangering the health or safety of persons or property. It further aims to promote “fundamental rights enshrined in the Charter of Fundamental Rights, including democracy, the rule of law and environmental protection”. Transparency can help
stakeholders to assess the suitability of an AI system. This empowerment enables stakeholders to decide for
themselves whether or not an AI system can/should be used in relation to the above-mentioned points. In the event of damage, transparency about responsibilities can help to contain the damage and prevent or minimise possible consequences. Transparency can also be a driver of innovation. Knowledge about limitations
of AI systems can lead to the development of applications/products that no longer have these limitations. For
example, interaction with the first chatbots was initially only possible in text form: inputs were made via a keyboard, to which the chatbot then responded in text on the screen. Today, chatbots are used in various areas (e.g. in telephone customer support in the insurance sector) in which interaction by voice input and voice output is possible.
In summary, transparency is taken into account in the EU AI Act and initial requirements are formulated. At the
same time, the concept of transparency in the EU AI Act is very broad. The definition of transparency presented in this white paper does not contradict the transparency requirements laid down in the EU AI Act, but is intended to formulate and define the term transparency more precisely.
assessment of risks associated with the use of the system. The identification of roles and responsibilities in the
event of damage and adverse events as part of transparency requirements can also help stakeholders to detect
faulty system behaviour, reduce response times and thus mitigate possible consequential damage.
If transparency is practised in the early phases of the life cycle of an AI system, inconsistencies can be avoided
within the development team from the outset, sources of error minimised and training phases shortened. AI systems are not only developed themselves but are increasingly used as development tools for new systems - e.g. in AI-supported programming for the automatic generation of program code. In the early development phases, it is also
important from a developer's point of view to know where the training/test/validation data come from, how
they are obtained and whether they are free of bias - e.g. to avoid discrimination. This information is important in
order to be able to prepare the data correctly before training/testing/validating (pre-processing). Current
development trends also show that pre-existing models are often reused, which makes the presence and accessibility of all safety-critical information on such models particularly relevant. In the absence of this information, there is a risk of carrying the security risks of the base models over into one's own products. Both systems are then interdependent, and any lack of transparency in the underlying system is transferred to the system built on top of it. This inheritance of security risks from different areas leads to an increased overall risk in the examples mentioned, which once again underlines how critical transparency is for security-relevant aspects when using AI systems.
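One basic safeguard against the inherited risks just described is to verify the integrity of a reused model artifact before loading it. The following sketch is our illustration under stated assumptions: the file name and expected digest are placeholders, and a provider-published checksum is assumed to exist.

```python
# Sketch of a supply-chain integrity check for a reused (pre-trained) model.
# The artifact path and the expected digest below are placeholders.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected_digest = "<digest published by the model provider>"  # placeholder
artifact = Path("pretrained_model.bin")                       # placeholder

if sha256_of(artifact) != expected_digest:
    raise RuntimeError("model artifact does not match the published checksum")
```

Such a check does not remove the need for transparency about how the base model was trained, but it at least ensures that the artifact in use is the one the provider actually documented.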
In addition, as discussed in the previous section, transparency can contribute to better user empowerment in
assessing AI systems. A correct assessment of the application, suitable application scenarios and possible
problems and security risks can promote safe use by users.
4 Conclusions
Due to the black-box properties of many AI systems, data and information are processed in a way that is not transparent to users, leading to decisions that cannot be verified. A lack of knowledge about the AI system goes hand in hand with a lack of traceability and verifiability of system outputs. It is difficult to assess
whether system outputs are correct and appropriate. Similarly, questions about responsibility, liability or fairness cannot be answered if there is a lack of information about the system and its ecosystem. Ultimately, non-transparent AI systems can lead to a loss of trust and a rejection of the system. The integration of AI components into existing systems and the combination of different systems can also increase complexity and make it even more difficult to access relevant information. The problems caused by a lack of system insight and a lack of information about the system are manifold and represent a major challenge. Transparency addresses this problem and aims to make AI systems more comprehensible by increasing the accessibility of system information and to enable a valid assessment of the systems. For these reasons, transparency plays a crucial role for
all stakeholders of an AI system. The challenge is to serve all stakeholders with their individual and different
transparency requirements.
The overall project and future work on the transparency of AI systems are aimed at all BSI stakeholders. The
relevance of the topic for society as a user of such systems is reflected in the expected gains in traceability, better protection against misuse, more valid acceptance and trustworthiness processes, and more binding legal responsibility. The transparency measures are intended to contribute directly to the empowerment of end users
by increasing their trust and autonomy regarding the choice and use of AI systems. Overall, this empowerment
of end users aims to democratise the use of AI systems. In addition, the work in the field of transparency and the
derivation of concrete criteria and measures should contribute to the overarching goal of the trustworthy use of
AI systems. For companies involved in the development of AI systems, awareness of the topic and the adoption of corresponding measures in the development and operation of AI systems should be promoted. Guidelines and positions are to be made available as guidance for stakeholders from the business sector who want to use third-party AI systems in their organisations or implement them in their systems and products. These
guidelines are intended to make it easier for companies to identify suitable, secure and high-performance
systems. This work is also intended to provide guidance for public authorities wishing to use AI systems. Beyond their own use of AI, the safety-relevant findings on AI systems that emerge daily and must be addressed pose the challenge for public sector stakeholders and administrations of maintaining a technically qualified and adequately staffed workforce. This and future work in the field of transparency can be used to facilitate and accelerate permanent and adequate (continuing) training of staff. In addition, the establishment of transparency
criteria hoped for from this and future work can facilitate the development of meaningful and reliable quality seals by public authorities. In view of the expected further spread and widespread roll-out of AI systems in many areas of life, their relevance to society as a whole is steadily increasing.
In order to be able to make competent and valid assessments of these systems in the future, the establishment
of transparency criteria is indispensable. For providers and operators of certain AI systems - such as general-
purpose AI systems or emotion recognition systems - transparency obligations are already defined in the EU AI
Act (cf. Article 50 EU AI Act). These obligations are among the prerequisites for such systems to be marketed and used in
the European Union. Transparency criteria can strengthen the autonomy of the stakeholders of an AI system by
making informed decisions possible. Therefore, transparency can and should be considered from the outset
(transparency by design).
Bibliography
BSI, Federal Office for Information Security. 2021a. AI Cloud Service Compliance Criteria Catalogue (AIC4).
BSI, Federal Office for Information Security. 2021b. Safe, robust and comprehensible use of AI - problems, measures and needs for action.
ISO/IEC 22989:2022. Information technology - Artificial intelligence - Artificial intelligence concepts and terminology.
Liang, W., Rajani, N., Yang, X., Ozoani, E., Wu, E., Chen, Y., Smith, D. S., & Zou, J. 2024. What's documented in AI? Systematic Analysis of 32K AI Model Cards. http://arxiv.org/abs/2402.05160.
OECD. 2019. Recommendation of the Council on Artificial Intelligence.
Ribeiro, M. T., Singh, S., & Guestrin, C. 2016. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135-1144. https://doi.org/10.11