
Transparency of AI Systems

White paper
Document history

| Version | Date | Editor | Description |
|---------|------|--------|-------------|
| 1.0 | 01.07.2024 | Dr. Oliver Müller and Veronika Lazar | |
| 1.1 | 30.08.2024 | Dr. Oliver Müller and Veronika Lazar | Translation |

Table 1: Change history

Federal Office for Information Security


P.O. Box 20 03 63
53133 Bonn
[email protected]
Internet: https://www.bsi.bund.de
© Federal Office for Information Security 2024
Table of Contents

1 Introduction
2 Definition
2.1 Elements
2.1.1 AI system
2.1.2 Ecosystem
2.1.3 Information
2.1.4 Life cycle
2.1.5 Needs and objectives
2.1.6 Stakeholders
3 Discussion
3.1 Approach and procedure
3.2 The aim of transparency
3.3 Transparency requirements in the AI Regulation
3.4 Opportunities through transparency
3.5 Dangers of transparency
4 Conclusions
Bibliography

1 Introduction
In this white paper, we present a definition of transparency for information technology systems that have
integrated artificial intelligence (AI). The aim of this publication is to develop a common understanding of the
term transparency and to highlight the relevance of transparency for various stakeholders and the BSI.
Therefore, the paper is addressed to all stakeholders of AI systems and is intended to show, among other things,
that different stakeholders may also have different transparency requirements.

1.1 Motivation
AI has now established itself as a digital tool in both the private and professional sectors. Whether it's
determining personal calorie needs using a smartwatch, automatically forwarding calls to improve the customer
experience, or detecting suspicious activity on computer networks: AI is omnipresent, and the list of examples of
possible areas of application is constantly growing. This has been made possible by the amounts of data (big data) now available for training, testing and validating AI models, as well as by hardware resources that can provide the corresponding computing power. With these increased technical possibilities, the need for AI-based solutions - in particular to increase efficiency and productivity - has also grown. This
demand is currently met by an increasing number of AI start-ups. The result is a constantly growing number of
available AI systems, whose technologies are rapidly evolving and increasing in complexity.

On an abstract level, most of these systems are operating in a black box manner: only the inputs to and the
outputs of the system are visible from outside (see for example (Ribeiro, 2016)). How the system reaches the
output usually remains unclear and is often not comprehensible. Moreover, system outputs often lack explainability, which makes them difficult to verify without expert knowledge. The increasing complexity of AI systems and poor or missing information about the system make both a quick assessment and an evaluation of the system's trustworthiness difficult.
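
To make the black box notion concrete, the following minimal Python sketch (purely illustrative; the class and its internals are hypothetical and stand for no real system) shows an AI component that exposes nothing but its input/output boundary to stakeholders:

```python
# Illustrative sketch only: from the outside, many AI systems expose
# nothing but an input -> output mapping. All names are hypothetical.

class BlackBoxModel:
    """Stands in for a deployed AI system whose internals are hidden."""

    def __init__(self):
        # Internals (weights, architecture, training data) are private
        # and not observable by the caller.
        self._weights = [0.4, -1.2, 0.7]

    def predict(self, features: list[float]) -> str:
        # Only this input/output boundary is visible to stakeholders.
        score = sum(w * x for w, x in zip(self._weights, features))
        return "suspicious" if score > 0 else "benign"

model = BlackBoxModel()
print(model.predict([1.0, 0.2, 0.5]))  # output is visible, reasoning is not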

Due to the techniques used in the development of AI systems, additional information, such as information on
training data, also becomes relevant. For example, the origin and quality of the training data must be assessed
before AI systems are used in order to minimise the risk of poisoning attacks, in which attackers manipulate the
training data set used by a machine learning model.
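
As a hedged illustration of one possible provenance check (the manifest format, file names and digest are assumptions made for this sketch, not a prescribed BSI procedure), training data could be verified against trusted hashes before use, so that a tampered and potentially poisoned data set is detected:

```python
# Sketch under stated assumptions: verify training-data files against a
# trusted hash manifest before use, so manipulation is detected.

import hashlib
from pathlib import Path

TRUSTED_MANIFEST = {
    # file name -> SHA-256 digest published by the trusted data provider
    # (placeholder digest, for illustration only)
    "train.csv": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify_dataset(data_dir: str) -> bool:
    """Return True only if every file matches its trusted digest."""
    for name, expected in TRUSTED_MANIFEST.items():
        digest = hashlib.sha256(Path(data_dir, name).read_bytes()).hexdigest()
        if digest != expected:
            print(f"{name}: hash mismatch - possible manipulation")
            return False
    return True

if __name__ == "__main__":
    if not verify_dataset("data"):
        raise SystemExit("Refusing to train on unverified data.")
```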

In summary, these factors require the development and use of AI systems that enable appropriate traceability
and explainability. Both often go hand in hand with the related criterion of transparency (see Section 2).
Transparency is thematically embedded in the broad field of trustworthiness of AI systems. The different criteria cannot be clearly separated from one another, but each focuses on a different subject area. This overlap is shown in
Figure 1. This white paper deals with the topic of transparency of AI systems.

In the following, we define the concept of transparency in the context of AI systems and address the individual
elements of the definition. We then discuss our approach and procedure, make reference to the transparency
requirements in the EU AI Act of the European Parliament and of the Council and shed light on the opportunities
and risks of transparent AI systems.

Figure 1: Venn diagram to clarify the relationship between transparency, explainability and traceability in the
context of the trustworthiness of AI systems. The different areas overlap, but each topic has its own special focus.

2 Definition
Transparency of AI systems is the provision of information about the entire life cycle of an AI system and its
ecosystem. Transparency promotes accessibility to information that enables all stakeholders to assess the system with regard to their different needs and objectives.

2.1 Elements
The above definition is based on the presentation of the transparency concept in (OECD, 2019) and (BSI, 2021a).
It is compliant with the transparency requirements in the EU AI Act (see section 3.3 for details) and represents
the position of the BSI. In the following subsections, the individual elements of the definition are described in
more detail and their relationship is illustrated graphically in Figure 2.

2.1.1 AI system
The EU AI Act, which governs the regulation of artificial intelligence, defines an AI system as “a machine-based
system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after
deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate
outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual
environments” (cf. Article 3 EU AI Act). In its definition, the BSI explicitly formulates the hardware component
and defines AI systems as software and hardware systems that utilise artificial intelligence in order to behave
"rationally" in the physical or digital dimension. Based on their perception and analysis of their environment,
these systems act with a certain degree of autonomy in order to achieve certain goals (BSI, 2021b). The
technology integrated into these systems, known as AI, draws on different disciplines such as machine learning, inference and robotics; techniques used in AI systems include, for example, expert systems and neural networks. This is not an exhaustive list of the techniques used, but is intended to show the abundance of different approaches. An equally
wide range becomes clear when considering the different functionalities of AI systems, ranging from simple to
highly complex tasks. AI systems can perform tasks such as pattern recognition, classifications, forecasts,
recommendations, natural language processing or computer vision and can also be combined with each other in
a variety of ways. In addition, AI can be implemented in the systems in different ways depending on the
requirements and objectives of the application. On the one hand, it can be developed and used as a separate
application and represent the primary function of the system, as is the case, for example, in chatbot applications.
On the other hand, it can also be integrated into existing systems, for example in order to expand their
functionality and/or increase performance in background processes. Considering the way AI systems are
implemented, the degree of automation can also vary greatly. While some systems only use the output of AI as
recommendations and require humans as the final decision-making authority, other (sub-)systems autonomously
implement the decisions and classifications of AI without further human action. Overall, it can be summarised
that there is not one AI system, but rather a wealth of different techniques, functionalities and forms of
implementation.

2.1.2 Ecosystem
In this paper, the term ecosystem refers to the context in which an AI system is developed, deployed and
operated. The information regarding the ecosystem of an AI system goes beyond the actual AI system and
should, for example, also include details about the provider (e.g. location, contact details) or the development
process of the system. The term should also include the entire supply chain of the AI system. The decision to include information about the ecosystem of an AI system in the definition of transparency is based on the fact
that there is a (conditional) dependency between the actual AI system and its ecosystem. For example, if the AI
system is developed and operated outside the European Union in a third country, corresponding questions and
challenges arise with regard to the underlying level of IT security and data protection. This meta-information can support a well-founded assessment of the situation as well as an informed decision by
stakeholders.
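
Purely as an illustration of what such ecosystem information might look like in machine-readable form (the field names and values are assumptions for this sketch, not a prescribed BSI or EU AI Act schema), a transparency record could accompany the system itself:

```python
# Illustrative sketch: ecosystem information captured as a machine-readable
# record alongside the AI system. All field names and values are hypothetical.

import json

transparency_record = {
    "system": {
        "name": "example-classifier",      # hypothetical system
        "version": "1.1",
        "intended_purpose": "network anomaly detection",
    },
    "ecosystem": {
        "provider": {"name": "Example GmbH", "location": "DE",
                     "contact": "contact@example.com"},
        "development_site": "EU",          # relevant for IT security / data protection
        "supply_chain": ["base-model-x (third party)", "open dataset y"],
    },
}

print(json.dumps(transparency_record, indent=2))
```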

2.1.3 Information
Information is the basis for the knowledge stakeholders need to form an assessment of the AI system and its ecosystem. It must be disclosed and made accessible so that it is available for this purpose. In addition, information must be relevant and appropriate for gaining knowledge. Section 3.3 explains the
transparency requirements in the EU AI Act and sets out the minimum information to be disclosed by providers
and operators of certain AI systems.

2.1.4 Life cycle


According to (ISO/IEC 22989:2022), the life cycle of an AI system comprises various phases, which are briefly
described below in the context of the concept of transparency.

2.1.4.1 Planning and conception phase


It is advisable to consider transparency even during the planning of an AI system. In this way, time-consuming
rework in later phases of the life cycle can be avoided from the outset. At the same time, the involved
stakeholders are engaged from the very beginning. The planning phase ends with the existence of a concrete
plan for the AI system and its implementation.

2.1.4.2 Design, development and validation phase

Based on the plan, the AI system will be developed, implemented, tested and validated. If planned transparency
and response measures are considered, implemented and tested right from the start, this is referred to as
transparency by design. The response measures allow the AI system and/or the information situation to be
readjusted if transparency is not achieved to the desired extent.

2.1.4.3 Commissioning and application phase


Once the development/validation phase has been completed, the AI system is rolled out and transferred to
productive operation. During operation, all information relevant to stakeholders must be available, as well as
appropriate possibilities for iteration loops (see Section 2.1.4.4 in conjunction with Section 2.1.4.5).

2.1.4.4 Continuous evaluation phase


Since AI systems can be dynamic and the requirements and/or environment can constantly change, the
evaluation phase must seamlessly follow the start of the application phase. In the case of self-changing systems,
it runs parallel to the application phase in the sense of permanent monitoring. Stakeholder feedback is essential
for the evaluation, e.g. damages, emergencies or incidents. For this purpose, appropriate reaction and feedback
options must be available (cf. Sections 2.1.4.2 and 2.1.4.3).
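
A minimal sketch of such a feedback channel, assuming a simple append-only log as the reaction option (the field names and log format are illustrative assumptions, not a prescribed mechanism):

```python
# Hedged sketch: an append-only log as the channel stakeholders use to
# report damages, emergencies or incidents during the evaluation phase.

import json
from datetime import datetime, timezone

def report_incident(log_path: str, stakeholder: str,
                    category: str, description: str) -> None:
    """Append one stakeholder report so the evaluation can react to it."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "stakeholder": stakeholder,   # e.g. "user", "third party"
        "category": category,         # e.g. "damage", "emergency", "incident"
        "description": description,
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")

report_incident("feedback.jsonl", "user", "incident",
                "Implausible classification observed in production.")
```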


2.1.4.5 System updates


Depending on the findings from the evaluation phase (see Section 2.1.4.4), the planning phase (see Section
2.1.4.1) can now be re-entered if necessary. In addition to bug fixes and performance improvements, updates
often also include new functions. Existing functions may also be removed, modified, or transferred to other
modules. Here it must be ensured that existing transparency and response measures are not restricted in their
intended function or rendered unusable by these adaptations. New measures need to be provided for new
functions.

In the event of a retraining of the AI systems, further measures will become relevant to ensure transparency. The
measures then relate, for example, to any new training data sets used and are particularly relevant to the topics
of discrimination/bias and suitability.

2.1.4.6 Decommissioning
There are two options for decommissioning legacy systems: (i) shutting down the legacy system without
continuing it or (ii) migrating to a new system. In each case, it must be made transparent how data that is no
longer required or migrated will be handled and what changes/consequences the decommissioning will entail
for the stakeholders. In addition, the stakeholders must also have options for responding here, e.g. to assert their
rights as data subjects.

2.1.5 Needs and objectives


Needs and objectives are individual and can differ considerably between applications. The term is intended to reflect the fact that transparency is not about enabling access to one specific piece of information, but about providing information that enables the respective stakeholders to make an assessment. What is specifically assessed by a stakeholder is individual and contextual, and varies depending on the application scenario. The aim is to cover a wide range of desirable system information, enabling the system to be assessed in terms of the specific needs of stakeholders.

2.1.6 Stakeholders
The term "stakeholder" refers to all parties who are either indirectly (e.g. through impacts) or directly (e.g.
through application) affected by an AI system or who interact with the system (e.g. developers). These can be
individual persons or groups of people. A stakeholder does not have to play an ‘active’ role. An overview of
possible different stakeholders and their relation to AI systems is presented in Table 2. While consumers and
users usually only use an AI system, it is possible that experts, developers and companies/organisations provide
an AI system in addition. Indirectly affected persons/third parties do not provide an AI system, nor do they use
it. Nevertheless, they can be affected by the impact and thus become (passive) stakeholders. The presented list
of different stakeholders does not claim to be exhaustive and can be refined as desired. However, the chosen
representation is sufficient to show that there may be different interests with regard to an AI system, which can be reflected, among other things, in different requirements for the transparency of an AI system, such as the type or level of detail of the information provided. Therefore, the various stakeholders must be taken into account when defining the concept of transparency.


| Stakeholders | use | provide | examples of transparency requirements |
|--------------|-----|---------|----------------------------------------|
| consumers | + | - | server location in terms of data protection |
| users | + | - | instructions for use to avoid application errors |
| experts | +/- | +/- | functioning of the underlying models |
| developers | + | + | technical documentation for identification of interfaces |
| companies/organisations | + | + | licensing conditions to avoid criminal consequences |
| indirectly affected/third parties | - | - | contact person in case of damage |

Table 2: Exemplary presentation of possible stakeholders of AI systems and their different requirements for transparency due to different interests. Characters used: ‘+’ (applies), ‘-’ (does not apply).


3 Discussion
3.1 Approach and procedure
The existence of already published definitions of transparency raises the question of why a further definition is needed. The sheer variety of available definitions ultimately reflects the fact that transparency requirements differ depending on the stakeholders and the area of application. In order to provide a basis for further work of the BSI with a focus on broad stakeholder groups and generic areas of application, it was decided not to adopt an existing definition.
In addition, the speed at which technologies in the AI sector are developing is enormous. This harbours the risk
that definitions, once established, may lose their validity, especially if they are too specific. In order to keep pace
with technical progress and to avoid constant renewal and adaptation of the definition to the current state of the
art, the definition of transparency presented in this white paper is as technology-neutral and future-proof as
possible. On the one hand, it should be easy to understand, cover all relevant aspects of transparency and at the
same time be open enough to allow individual interpretation depending on the respective stakeholder and the AI
technology used. On the other hand, it should serve as a generic basis for future work of the BSI in this area.
Furthermore, a holistic approach was taken to the definition. This is illustrated in Figure 2: transparency includes
both the provision of information about the AI system itself and about its ecosystem, such as the supply chain of
the AI system or details about the provider. The weighting of the information provided is the responsibility of the
respective stakeholder.

Figure 2: Schematic representation of the holistic transparency approach using the individual elements from the
definition: in addition to information on the AI system itself that is relevant and appropriate for the stakeholders,
information on its ecosystem is also provided/disclosed. This promotes a valid assessment of the suitability and
appropriateness of the AI system by the stakeholders.


3.2 The aim of transparency


By promoting the transparency of AI systems, the aim is to strengthen the autonomy of stakeholders and enable
them to decide for themselves whether the use, modification or provision of an AI system is appropriate and
justifiable for them. It is not enough to simply describe the capabilities of the AI system. The limitations of the
system must also be analysed and made transparent. Only in this way can a holistic assessment be made by the
stakeholders, e.g. whether an AI system is suitable for a particular purpose or not.
In the area of digital consumer protection, this should help to ensure that consumers can recognise and use safe
and trustworthy AI systems despite increasing digitalisation. Companies and organisations should also be
enabled to develop and operate their own AI systems transparently. Transparent information from the
ecosystem of an AI system should also enable third parties affected by impacts to recognise how they can assert
their rights in the event of damage. The transparency of AI systems thus serves to empower stakeholders.

3.3 Transparency requirements in the AI Regulation


The AI Regulation is the world's first comprehensive legal framework for AI. It was adopted by the Council of the European Union (EU) on 21 May 2024 and regulates the use of AI in the EU. This white paper refers to the version current at the time of writing, dated 13 June 2024. The AI Regulation lists transparency as a key requirement
to ensure the ethical and responsible handling of data. Accordingly, AI systems that fall into certain risk levels
are subject to requirements regarding an appropriate level of transparency and sufficiently transparent operation
of the systems.
Among other things, the AI Regulation sets out harmonised transparency rules for certain AI systems (see AI
Regulation Article 1(2)(d)). Article 13 of the AI Regulation stipulates that high-risk AI systems (e.g. in the field of
biometrics or critical infrastructure (see AI Regulation Annex III)) must be transparent in such a way that
providers and operators can fulfil their relevant obligations, which are also laid down in the AI Regulation. To
this end the AI Regulation also specifies the minimum information that must be included in the operating
instructions. The AI Regulation also stipulates that providers and operators of AI systems that are intended for
direct interaction with natural persons (e.g. chatbot applications) must also inform the natural persons
concerned that the system is an AI system or that the system output is AI-generated (see AI Regulation Article
50). This disclosure and provision of information for various stakeholders is intended to create transparency with
regard to the respective AI system. Both this specific type of system and the mere identification of the use of AI
systems represent only a subset of what is understood by the definition of transparency presented in this white
paper.
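
The disclosure idea behind Article 50 can be illustrated with a short sketch; the wording of the notice and the function names are assumptions made for this example and do not reproduce the legal text:

```python
# Illustrative sketch: before a natural person interacts with a chatbot,
# the system identifies itself as AI. Wording and names are hypothetical.

def ai_disclosure_banner(language: str = "en") -> str:
    notices = {
        "en": "Notice: You are interacting with an AI system. "
              "Its answers are AI-generated.",
        "de": "Hinweis: Sie interagieren mit einem KI-System. "
              "Die Antworten sind KI-generiert.",
    }
    return notices.get(language, notices["en"])

def start_chat_session(language: str = "en") -> None:
    print(ai_disclosure_banner(language))  # shown before any model output
    # ... hand over to the actual chat loop ...

start_chat_session("en")
```
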
For the implementation of the AI Regulation and for the practical implementation of the transparency
obligations under Article 50, the Commission shall develop guidelines (see Article 96(1)(d)). The procedure for
penalties applicable to infringements of the AI Regulation is laid down in Article 99. Violations of the
transparency obligations for providers and operators laid down in Article 50 are explicitly mentioned again (see
Article 99(4)(g) of the EU AI Act), which underlines the relevance of the issue in the AI Regulation.
As part of the evaluation and review of the AI Regulation, the Commission assesses changes to the list of AI
systems requiring additional transparency measures every four years (in accordance with Article 50 of the EU AI
Act). In addition, "a participative methodology for the evaluation of risk levels based on the criteria outlined in
the relevant Articles and the inclusion of new systems" is to be established in this list (Article 112(11)(c) EU AI
Act).
Annex XII of the EU AI Act deals with "Transparency information referred to in Article 53(1), point (b) [for the
generation of] (...) technical documentation[s] for providers of general-purpose AI models to downstream
providers that integrate the model into their AI system". Paragraph 1 refers to the information on the model,
which must at least be included in the documentation. While paragraph 1 refers to the AI system itself,
paragraph 2 also goes beyond the actual AI system and requires, for example, information on components of the
development process of the model. At this point, too, the transparency obligations laid down in the AI
Regulation are in line with the definition of transparency presented in this white paper, as information about the ecosystem is included.
Apart from the specific transparency requirements, the purpose of the AI Regulation is set out in Article 1(1). In
addition to improving the functioning of the internal market, the AI Regulation aims to promote “the
introduction of human-centric and trustworthy” AI. At the same time, it aims to ensure a high level of protection
against the harmful effects of AI systems in the EU. The definition of transparency presented in this white paper
must not run counter to these objectives. The provision of information about the AI system and its ecosystem is
intended to help increase the trustworthiness of an AI system. In addition, transparency can facilitate the
interaction of different actors/stakeholders along the supply chain, which in turn can benefit the functioning of
the internal market. At the same time, transparency must not lead to AI systems being misused by attackers by
disclosing certain (e.g. security-relevant) information (more on this in Section 3.5). The EU AI Act also specifies
that incidents or malfunctions related to AI shall not result in an AI system endangering the health or safety of
persons or property. The EU AI Act also aims to promote “fundamental rights enshrined in the Charter of
Fundamental Rights, including democracy, the rule of law and environmental protection”. Transparency can help
stakeholders to assess the suitability of an AI system. This empowerment enables stakeholders to decide for
themselves whether or not an AI system can/should be used in relation to the above-mentioned points. By
disclosing responsibilities, transparency in the event of damage can help to contain the damage and prevent or
minimise possible consequences.

Transparency can also be a driver of innovation. Knowledge about the limitations of AI systems can lead to the development of applications/products that no longer have these limitations. For example, interaction with the first chatbots was initially only possible via text: inputs were made via a keyboard, and the chatbot responded on screen in text form. Meanwhile, chatbots are used in various areas (e.g. in telephone customer support in the insurance sector) in which interaction by voice input and voice output is possible.
In summary, transparency is taken into account in the EU AI Act and initial requirements are formulated. At the
same time, the concept of transparency in the EU AI Act is very broad. The definition of transparency presented
in this white paper does not contradict these transparency requirements laid down in the EU AI Act, but is
intended to provide a more effective formulation and definition of the term transparency.

3.4 Opportunities through transparency


The use of transparent AI systems can promote the traceability of decisions and the assessment of the
appropriateness of systems. Transparency can also help to protect against misuse by enabling potential risks and
undesirable effects to be recognised at an early stage. In order to react appropriately to problems, it is important
to know, for example, whether the output of an AI system is free from discrimination or whether it violates
licence conditions. In terms of consumer protection, transparency can also act as a support tool. Transparency
provides the basis for a correct assessment of the appropriateness of the system used. In order to be able to
make such an assessment at all, information about the system must be accessible. A valid assessment of the
adequacy of the AI system forms the basis for positive trust and acceptance processes. Initial publications show,
for example, higher download numbers for transparent AI models, which could be an indication of better
acceptance of these systems among developers (Liang, 2024). Lack of transparency complicates the valid
assessment of the adequacy of a system, and thus the assessment of its trustworthiness. The latter is a
prerequisite for establishing and maintaining a positive relationship of trust with the system and the related
output. In addition, transparency can enable users to exercise their rights more easily if transparency requirements are accompanied by clearer definitions of legal responsibilities and identification of those responsible for the use of AI systems. The listed aspects of traceability, abuse protection, acceptance and trustworthiness as well as legal
responsibility show the relevance of transparency in the use of AI systems. This relevance is also reflected in
regulatory and legal requirements (see Section 3.3).
Transparency can, on the one hand, contribute to the security of AI systems and, on the other hand, promote the
safe use of AI applications. In this way, transparency can enable the identification of possible problems and
vulnerabilities, make undesirable system behaviour visible and contribute to problem identification and the
prevention of misuse. In the context of IT security, transparency also provides the basis for the disclosure and assessment of risks associated with the use of the system. The identification of roles and responsibilities in the
event of damage and adverse events as part of transparency requirements can also help stakeholders to detect
faulty system behaviour, reduce response times and thus mitigate possible consequential damage.
If transparency is practised in the early phases of the life cycle of an AI system, inconsistencies can be avoided
within the development team from the outset, sources of error minimised and training phases shortened. AI
systems are both developed and increasingly used as development tools - e.g. in AI-supported programming for
the automatic generation of program code - for new systems. In the early development phases, it is also
important from a developer's point of view to know where the training/test/validation data come from, how
they are obtained and whether they are free of bias - e.g. to avoid discrimination. This information is important in
order to be able to prepare the data correctly before training/testing/validating (pre-processing). Current
development trends also show that pre-existing models are often reused, which makes the availability and accessibility of all security-critical information on such base models particularly relevant. In the absence of this information, there is a risk of carrying the security risks of the base models over into one's own products. Both systems are then interdependent, and any lack of transparency in the underlying system is transferred to the system built on top of it. This inheritance of security risks leads to an increased overall risk, which underlines once again how critical transparency is for security-relevant aspects of the use of AI systems.
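
As a simplified illustration of the bias question raised above (a real bias audit requires far more than label counts; the labels and data here are assumptions for this sketch), the class balance of training labels can be checked before training:

```python
# Hedged sketch: a quick class-balance check on training labels as one
# very first indicator of potential bias. Illustrative data only.

from collections import Counter

def label_balance(labels: list[str]) -> dict[str, float]:
    """Return the relative frequency of each label."""
    counts = Counter(labels)
    total = len(labels)
    return {label: n / total for label, n in counts.items()}

labels = ["approve", "approve", "reject", "approve", "reject", "approve"]
for label, share in label_balance(labels).items():
    print(f"{label}: {share:.0%}")  # flag strong imbalance before training
```
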
In addition, as discussed in the previous section, transparency can contribute to better user empowerment in
assessing AI systems. A correct assessment of the application, suitable application scenarios and possible
problems and security risks can promote safe use by users.

3.5 Dangers of transparency


So far, the positive aspects of transparency have mainly been presented. Increasing/improving the transparency
of AI systems can also have unintended negative effects. For example, the provision of information on the
functionality or architecture of an AI system can reveal new attack vectors that attackers can exploit to misuse or
compromise the system.
Information about limitations or excluded areas of application of an AI system could also be deliberately
exploited by attackers, e.g. to deliberately generate erroneous behaviour or destructive output.
Conversely, attackers can also abuse the trust that transparency is supposed to create in order to deliberately provide incorrect information. For example, applications that are in fact safety-critical can be presented as uncritical. In addition, non-transparent systems can be labelled as transparent. Such pseudo-transparency can be used to market one's own product and can lead consumers to a wrong assessment of the system if the transparency claim is not verified. Therefore, questions about the trustworthiness of the information disclosed/provided must also be answered in the future. An official transparency label and verifiable transparency criteria could provide a remedy here.
The transparency of AI systems is therefore a double-edged sword and should hence be used with caution. The
goals and problems are sometimes contradictory and cannot be solved simultaneously. Answering the key
questions “What information does a stakeholder need to make a decision?” and “What information is not
relevant?” can be helpful. Similar to the EU General Data Protection Regulation (GDPR), the principle of data minimisation is also recommended here and is applied separately to each individual use case: as much information as necessary, but no more than strictly required, should be disclosed. This "need-to-know" principle applies especially to safety-critical information. The goal should be an appropriate level of transparency that is sufficient for stakeholders while not unduly compromising aspects such as security.
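
A sketch of how this need-to-know principle could be applied in practice, assuming hypothetical stakeholder roles and information fields (neither is prescribed by the GDPR or the EU AI Act):

```python
# Illustrative sketch: each stakeholder group receives only the
# transparency information relevant to it, so security-critical details
# are not disclosed more widely than necessary. All names are hypothetical.

FULL_RECORD = {
    "intended_purpose": "telephone customer support",
    "instructions_for_use": "Do not enter personal data.",
    "model_architecture": "transformer, 12 layers",      # security-relevant
    "known_vulnerabilities": "prompt injection (open)",  # security-relevant
}

NEED_TO_KNOW = {
    "consumer": {"intended_purpose", "instructions_for_use"},
    "developer": {"intended_purpose", "instructions_for_use",
                  "model_architecture", "known_vulnerabilities"},
}

def disclose(role: str) -> dict:
    """Return only the fields the given stakeholder role needs."""
    allowed = NEED_TO_KNOW.get(role, set())
    return {k: v for k, v in FULL_RECORD.items() if k in allowed}

print(disclose("consumer"))  # no security-relevant internals disclosed
```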


4 Conclusions
Due to the black box properties of many AI systems, data and information are processed in a way that is not transparent to users, producing outputs that cannot be verified. A lack of knowledge about the AI system goes hand in hand with a lack of traceability and verifiability of system outputs. It is difficult to assess whether system outputs are correct and appropriate. Similarly, questions about responsibility, liability or fairness cannot be answered if there is a lack of information about the system and its ecosystem. Ultimately, non-transparent AI systems can lead to a loss of trust and a rejection of the system. The integration of AI components into existing systems and the combination of different systems can also increase complexity and make it even more difficult to access relevant information. The problems caused by a lack of system insight and a lack of information about the system are manifold and represent a major challenge. Transparency addresses this problem and aims to make AI systems more comprehensible by increasing the accessibility of system information and enabling a valid assessment of the systems. For these reasons, transparency plays a crucial role for
all stakeholders of an AI system. The challenge is to serve all stakeholders with their individual and different
transparency requirements.
The overall project and future work on the transparency of AI systems are aimed at all BSI stakeholders. The
relevance of the topic for society as a user of the systems is reflected in the expected higher traceability, better
protection against abuse, more valid acceptance and trustworthiness processes as well as a more binding legal
responsibility. The transparency measures are intended to contribute directly to the empowerment of end users
by increasing their trust and autonomy regarding the choice and use of AI systems. Overall, this empowerment
of end users aims to democratise the use of AI systems. In addition, the work in the field of transparency and the
derivation of concrete criteria and measures should contribute to the overarching goal of the trustworthy use of
AI systems. For companies involved in the development of AI systems, awareness of the topic and the adoption of corresponding measures in the development and operation of AI systems should be accelerated. Guidelines and
positions are to be made available as guidance for stakeholders from the economic environment who want to
use third-party AI systems in their organisations or implement them in their systems and products. These
guidelines are intended to make it easier for companies to identify suitable, secure and high-performance
systems. This work is also intended to provide guidance for public authorities wishing to use AI systems. In
addition to their own use, the security-relevant findings on AI systems that emerge daily and must be addressed pose the challenge for public sector stakeholders and administrations of ensuring a technically qualified and adequately staffed workforce. This and future work in the field of transparency can be used to facilitate and accelerate permanent and adequate (post-)training of staff. In addition, the establishment of transparency
criteria hoped for by this and future work can facilitate the development of meaningful and reliable quality seals
by public authorities. With regard to the expected further increasing prevalence and widespread roll-out of AI
systems in many areas of life, the relevance of AI systems to society as a whole is steadily increasing.
In order to be able to make competent and valid assessments of these systems in the future, the establishment
of transparency criteria is indispensable. For providers and operators of certain AI systems - such as general-
purpose AI systems or emotion recognition systems - transparency obligations are already defined in the EU AI
Act (cf. Article 50 EU AI Act). These are one of the prerequisites for these systems to be marketed and used in
the European Union. Transparency criteria can strengthen the autonomy of the stakeholders of an AI system by
making informed decisions possible. Therefore, transparency can and should be considered from the outset
(transparency by design).


Bibliography
BSI, Federal Office for Information Security. 2021a. AI Cloud Service Compliance Criteria Catalogue (AIC4).
BSI, Federal Office for Information Security. 2021b. Safe, robust and comprehensible use of AI - problems, measures and needs for action.
ISO/IEC 22989:2022. Information technology - Artificial intelligence - Artificial intelligence concepts and terminology.
Liang, W., Rajani, N., Yang, X., Ozoani, E., Wu, E., Chen, Y., Smith, D. S., & Zou, J. 2024. What’s documented in AI? Systematic Analysis of 32K AI Model Cards. http://arxiv.org/abs/2402.05160.
OECD. 2019. Recommendation of the Council on Artificial Intelligence.
Ribeiro, M. T., Singh, S., & Guestrin, C. 2016. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135-1144.
