
MEF SCHOOLS MODEL UNITED NATIONS 2025
"AI for Inclusive Growth: Navigating Opportunities, Equity, and Ethical Challenges in the Digital Era"

Committee: ECOSOC
Agenda Item: Endorsing ethical AI governance and emphasizing AI's direct role in driving sustainable development
Student Officer: Melis Tatoğlu
Position: Deputy Chair

Introduction

AI has become a transformative force in the 21st century, bringing changes to sectors such as health, education, and finance. Beyond improving people's quality of life, AI has the power to revolutionize industries and help resolve some of humanity's most urgent issues. It offers tools that can accelerate progress toward global goals such as the United Nations Sustainable Development Goals (SDGs), which span many sectors, from health to environmental protection. However, these opportunities are accompanied by serious ethical and societal concerns, among them bias, privacy, and the environmental impact of AI systems. Precisely because AI holds the potential to revolutionize industries, there is a growing need for frameworks that ensure its development and deployment are handled responsibly. Ethical AI governance seeks to ensure that the design and operation of AI systems respect fundamental rights and do not cause harm or violate privacy. Clear ethics are vital for building public confidence in systems that are becoming part of daily life, so that AI acts in the service of people. That is why many organizations, such as the OECD, have called for ethical governance of AI, elaborating principles that foster transparency and accountability.

While AI raises unease on the path to greater sustainability, the same technology carries massive potential to advance sustainable development while tackling the most pressing global issues: poverty, healthcare, education, climate change, energy optimization, and access to medical treatment. AI technologies can serve as powerful drivers of the UN SDGs in an unprecedented way. However, the full potential of AI for sustainability can be realized only if its ethical implications are taken seriously and there is a commitment that these technologies benefit all people. The balance between ethical responsibility and sustainable development is crucial for shaping a future in which AI contributes positively to both society and the environment.

Definition of Significant Terms

Artificial Intelligence (AI)


Artificial Intelligence refers to the field of computer science concerned with developing systems or machines capable of performing tasks that would require intelligence if done by humans. Its applications include machine learning from data, reasoning, problem-solving, natural language understanding, pattern recognition, and decision-making. AI systems emulate cognitive functions such as perception, language understanding, and logical inference, allowing them to adapt to new input and perform tasks in a fully or semi-autonomous manner.

Sustainable Development
Sustainable development is commonly defined as development that meets the needs of the present without compromising the ability of future generations to meet their own needs. The concept was articulated by the United Nations in 1987, in the Brundtland Report, and emphasizes a balanced approach to economic growth, environmental protection, and social equality. Sustainable development aims to promote equality while ensuring the responsible use of natural resources and addressing other global challenges, like climate change. It underpins the United Nations Sustainable Development Goals (SDGs), which provide a blueprint for achieving a sustainable future by 2030.

Explainable AI (XAI)
Explainable AI (XAI) refers to artificial intelligence systems that can provide understandable explanations of the processes, decisions, and predictions they make. The development of XAI therefore aims to make AI systems transparent and to help users understand how and why a system reached a particular decision. This is especially important in applications where accountability and ethical issues are crucial, such as health, finance, and law enforcement. XAI techniques mainly involve simplifying complex models and shedding light on how decisions are made, letting users question the outputs of AI. XAI is important for assuring that AI operates within the bounds of human values and expectations.
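As a brief, hedged illustration (not part of the original report), the sketch below shows one widely used XAI technique, permutation feature importance: each input feature of a trained model is shuffled in turn, and the resulting drop in accuracy indicates how much the model relies on that feature. The dataset and model here are placeholders chosen only to keep the example self-contained.

# A minimal sketch of permutation feature importance, one common
# explainability technique. Dataset and model are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much accuracy drops: a large drop
# means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")

Techniques like this do not make a complex model fully transparent, but they give users a concrete handle for questioning its outputs.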

Digital Divide
The digital divide refers to the gap between individuals, communities, or regions in their access to information and communication technologies. It can appear in several ways, including differences in access to the internet, to digital services, or to digital literacy skills. The digital divide has wide ramifications for education, economic opportunities, healthcare access, and social inclusion, because those who lack digital access may struggle to participate fully in an increasingly digitized world. Efforts to address the digital divide focus on improving infrastructure, affordability, digital skills training, and inclusive policies so that everyone has equitable access to technology.

Detailed Background of the Issue


The rapid advancement of Artificial Intelligence (AI) has sparked both excitement and concern globally, particularly regarding its ethical implications and its potential impact on sustainable development. As AI technologies continue to transform industries, societies, and economies, the need for ethical governance, policies, and frameworks for their implementation, so that AI contributes positively to global goals such as sustainable development, has become a critical issue.

The Need for Frameworks and Guidelines:


Ethical AI governance refers to the development of policies, regulations, and frameworks that ensure AI technologies are designed, deployed, and used in ways that align with societal well-being. The rise of AI has raised numerous concerns, including:

Bias and Discrimination


AI systems, particularly those driven by machine learning algorithms, may show biases based on race, gender, age, and other factors. This occurs because AI systems often learn from historical data, which may contain biased patterns.
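As a small, hedged sketch (using invented, illustrative data rather than anything from this report), one basic way to surface such bias is to compare a model's positive-prediction rate across demographic groups:

# Hypothetical model outputs and group labels, used only for illustration.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

def selection_rate(preds, grps, group):
    # Fraction of people in `group` who received a positive prediction.
    outcomes = [p for p, g in zip(preds, grps) if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate(predictions, groups, "A")
rate_b = selection_rate(predictions, groups, "B")
print(f"Group A rate: {rate_a:.2f}, Group B rate: {rate_b:.2f}")
# A large gap (the "demographic parity difference") can signal bias
# inherited from historical training data and warrants closer review.
print(f"Demographic parity difference: {abs(rate_a - rate_b):.2f}")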
Transparency and Accountability
AI-driven decisions, especially in high-stakes areas such as healthcare, finance, and law, can significantly affect individuals' lives. The lack of transparency and explainability in AI models, especially complex ones like deep learning, can make it difficult to understand how decisions are made, leading to a lack of accountability.
Privacy and Security
AI systems often require large datasets to function effectively, raising concerns about the
collection, storage, and use of personal data. This has implications for privacy and security,
particularly in sensitive sectors.
Autonomy and Control
As AI systems become more autonomous, there are concerns about losing human control
over critical decisions, especially in military, healthcare, and legal contexts.

In response to these challenges, organizations like the OECD and UNESCO, along with governments and their leaders, have worked to develop ethical guidelines and principles for AI governance. The goal is to ensure that AI development respects human rights, promotes fairness, and aligns with societal goals such as the SDGs.

OECD’s Approach to Ethical AI Governance


The OECD has been at the forefront of advocating for ethical AI governance. In May 2019, the
OECD adopted the OECD AI Principles, a set of guidelines aimed at promoting responsible AI
development and use. These principles are designed to ensure that AI technologies are used in
ways that respect human rights and contribute to societal well-being. The OECD emphasizes that
AI systems should be designed to serve human needs, ensuring that they respect human rights
and freedoms. This includes preventing discrimination, and ensuring that AI does not undermine
social equality. AI should contribute to inclusive growth and sustainable development by helping
to address global challenges, such as climate change, poverty, and inequality. The development
of AI should benefit all people, regardless of their social and economic status. AI systems should be transparent and explainable, meaning that individuals should be able to understand how decisions are made by them. Moreover, developers and users of AI systems must be accountable for the consequences of AI use, including addressing potential harm and risks. Additionally, AI systems should be secure, minimizing the risk of malfunction, misuse, or harm. This includes ensuring that systems are resilient to attacks and safe throughout their life cycle. The OECD's AI Principles have influenced global discussions on AI governance and have been adopted by a wide range of countries, including members of the G20 and the European Union. The principles are also referenced in other international frameworks, such as the UNESCO AI Ethics Framework, which further underscores the global effort to achieve ethical AI.

AI and Sustainable Development


The United Nations Sustainable Development Goals (SDGs), adopted in 2015, provide a blueprint for addressing global challenges like poverty, inequality, climate change, and health.

[Figure: the seventeen UN Sustainable Development Goals.1]

Over the last four years, a growing body of research has examined AI's environmental impact. Although this impact is not yet fully understood, the research shows that it can be heavy. AI relies on vast amounts of data, most of which is stored in data centers. Building data centers creates electronic waste, which often contains hazardous substances, and requires huge amounts of resources, including valuable elements whose extraction raises a further concern about environmentally responsible mining. The construction and operation of data centers also consume large quantities of water; with a water crisis already under way and millions of people lacking access to clean water, this makes the water footprint of AI a serious issue. Lastly, AI needs energy: building and powering data centers demands an immense amount of it, and as energy demand rises it becomes less likely that sustainable energy technologies will be widely used, since they may not be enough to cover the large demands of this age.

[Table: water and electricity used by data centers in 2023 (Li et al. 7).]

1 "Social Development for Sustainable Development | Division for Inclusive Social Development (DISD)." United Nations, social.desa.un.org/2030agenda-sdgs. Accessed 5 Jan. 2025.
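As a rough, hypothetical illustration of how the energy and water demands above are connected (the numbers here are assumed for the example, not taken from this report or from Li et al.), a data center's cooling-water footprint is often estimated from its electricity use and its water usage effectiveness (WUE), a metric expressed in litres of water per kWh:

# Assumed, illustrative values only; real data centers vary widely.
energy_kwh = 1_000_000          # hypothetical annual electricity use (kWh)
wue_litres_per_kwh = 1.8        # hypothetical water usage effectiveness (L/kWh)

water_litres = energy_kwh * wue_litres_per_kwh
print(f"Approximate cooling water consumed: {water_litres:,.0f} litres per year")

Even modest per-kWh water use therefore scales into millions of litres as energy demand grows, which is the core of the sustainability concern described above.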

Another concern is that AI could widen the digital divide. As new technologies and methods are adopted, wealthier states have clear advantages in implementing them. The environment is not the only objective of sustainable development; equality for all is a significant part of the SDGs. Unequal changes to educational systems and unequal access to information would therefore deepen the problems of the digital divide, which began long before AI was this integrated into our lives.

However, AI can also play a critical role in advancing the SDGs; its impact depends on how it is deployed and governed. Even though AI systems are not yet fully integrated into these fields, they have started to affect them in many ways. For example, education has advanced through personalized learning programs, tools that assist educators, and many other systems created with the help of AI. In healthcare, diagnostics and treatment planning have been reformed thanks to AI. Still, the potential of AI to drive sustainable development is not without challenges. Issues such as data privacy, inequality, access to AI technologies, and the digital divide need to be addressed to ensure that AI contributes to development rather than deepening existing disparities.

Other Global Efforts


Many Non-Governmental Organizations (NGOs) as well as intergovernmental and international AI advisory committees have been created to guide the use of AI. One example is the Center for AI Safety (CAIS), a nonprofit research organization aimed at reducing the societal-scale risks of AI. A governmental example is the National AI Advisory Committee (NAIAC), a federal advisory committee in the US tasked with providing the President and the National AI Initiative Office (NAIIO) with research on AI technology; its work ranges from ethics to education, so its reports cover a broad range of subjects. An international example is the UN AI Advisory Body, created to provide the UN with different perspectives on developing AI.
The OECD, G20, United Nations, and other international bodies are actively working to establish
global frameworks for AI governance that align with ethical principles and promote sustainable
development. For example, the G20 AI Principles, endorsed in 2019, build on the OECD’s work
and emphasize the importance of AI in addressing global challenges while ensuring that it is
developed and used responsibly.

Timeline of Key Events

Date: Description of Event

1956: The term "Artificial Intelligence" is coined at the Dartmouth Conference, marking the formal beginning of AI research.

2012: With the spread of AI, the number of data centers begins to climb sharply, eventually growing from about 500,000 to 8 million.

2016: Major advancements are made in AI technology, and AI makes its way into many apps and into daily life. The Guardian describes this as "the year AI came of age" ("2016: The Year").

2017: The UN establishes the AI for Good program.

2018: The European Union's General Data Protection Regulation (GDPR) introduces data protection and privacy requirements, indirectly influencing AI practices.

2019: The Organisation for Economic Co-operation and Development (OECD) adopts its AI Principles, emphasizing human-centered values, transparency, and accountability.

2021: The European Union proposes the AI Act, the first comprehensive regulatory framework for AI systems. The same year, UNESCO adopts the Recommendation on the Ethics of Artificial Intelligence, the first global standard on AI ethics.

2023: The United Nations establishes its AI Advisory Body to promote ethical AI development and address global challenges.

2024: Many frameworks and regulations, including the OECD AI Principles, are updated to meet the new standards of AI.

Major Countries and Organizations Involved

European Union:
The European Union has been a leading advocate for ethical AI governance and emphasizes the role of artificial intelligence in driving development through initiatives like the European AI Strategy and efforts such as the AI Act. The EU underscores the importance of aligning AI development with fundamental rights, democratic values, and environmental sustainability, and it actively promotes the use of AI to address global challenges, including climate change, healthcare, and resource efficiency. The EU AI Act is also the first AI regulation set by a major regulator.

United States:
The United States' engagement is carried out through national policies and international collaboration. As a signatory to the OECD AI Principles, the US promotes trustworthy AI by advocating for transparency, accountability, and inclusivity. Domestically, initiatives like the White House's Blueprint for an AI Bill of Rights highlight the importance of fairness, privacy, and civil rights in AI systems. The US also applies AI to sustainable development, addressing challenges in climate change, healthcare, and energy optimization through federal agencies and private-sector innovation. Globally, the US collaborates with organizations like the Global Partnership on AI (GPAI) and the G7 to establish ethical standards and encourage AI-driven solutions to global issues. While focusing on fostering innovation, the US balances its approach by addressing societal concerns and ensuring AI technologies align with democratic values and human rights. Many US-based NGOs and programs also work on AI research.

China:
China, as one of the tech giants, has participated in international frameworks like UNESCO's Recommendation on the Ethics of Artificial Intelligence and developed domestic guidelines such as the New Generation AI Ethics Code, which prioritizes human centricity, fairness, and security. China integrates AI into sustainable development initiatives, using the technology for smart cities, environmental monitoring, healthcare, and poverty alleviation in alignment with the UN Sustainable Development Goals. China seeks to influence AI governance while using its potential for societal and economic progress. At the same time, safety is a significant issue from China's perspective, and it has repeatedly voiced concerns about the safety risks that AI poses.

United Nations:
The United Nations (UN) has been actively involved in promoting ethical AI governance and tying it to sustainable development. Through actions such as UNESCO's Recommendation on the Ethics of Artificial Intelligence, the UN advocates for principles such as transparency and human rights. The AI for Good initiative, led by the International Telecommunication Union (ITU), demonstrates how AI can be used to advance the SDGs, with applications in fields such as healthcare, education, and climate action. The UN also acknowledges the importance of global cooperation to ensure all nations benefit from these actions, minimizing bias and inequality.

OECD:
In May 2019, the OECD adopted the OECD AI Principles, which provide a global framework for the responsible development and use of AI. These principles emphasize that AI
should respect human rights, fairness, and transparency while also ensuring that AI systems
contribute positively to society. By advocating for human-centered values, the OECD stresses
that AI should be developed and deployed in a way that promotes sustainable development. In
addition to the emphasis on sustainable development, the OECD also encourages the safety and
transparency of technology for all people, promoting equality all around the world.

Previous Attempts to Solve the Issue

The European Union General Data Protection Regulation


(May 25, 2018)
The General Data Protection Regulation (GDPR), enacted by the European Union in May 2018, is a comprehensive law designed to protect the personal data privacy of individuals within the EU and regulate how organizations worldwide process such data. Even though the regulation does not directly address the use of AI, it indirectly affects it, since AI systems often rely on vast amounts of data to function. Additionally, GDPR's emphasis on data minimization and purpose limitation reduces the risk of misuse or overreach. GDPR also addresses concerns about automated decision-making by granting individuals the right to contest decisions made solely by automated means, including AI, and to obtain human oversight.

The OECD AI Principles


(Adopted in 2019, Updated in 2024)
The OECD AI Principles, adopted in May 2019, provide a global framework for the responsible development and use of artificial intelligence. These principles emphasize that AI should promote sustainable development while respecting human-centered values such as fairness and human rights. Transparency and accountability are seen as key to ensuring that AI systems function responsibly, and security and safety are prioritized. Policymakers are encouraged to foster innovation through investment, education, and international collaboration, creating systems that support trustworthy AI. The OECD Principles matter because they are the first intergovernmental stand taken on AI, and they have influenced global AI governance, including the G20 AI Principles, setting a starting point for ethical AI development.

The European Union AI Act


(April 2021)
The European Union AI Act, proposed in April 2021, is a comprehensive regulatory framework designed to ensure that AI systems used within the EU are safe and ethical. It adopts a risk-based approach that categorizes AI systems into three levels: unacceptable risk, high risk, and unregulated. The unacceptable-risk group covers practices that are banned outright, such as social scoring. High-risk applications fall in strictly regulated areas like healthcare and law enforcement; an example is AI used as a CV-screening tool for job applicants, and such systems must meet specific legal requirements to ensure transparency and accountability. The unregulated group consists of applications not listed as high-risk, such as systems used in video games. The Act prohibits harmful AI practices and mandates transparency and accountability for AI systems, and it is the first regulation on AI set by a major regulator.

Alternative Solutions
Addressing the challenges of ethical AI governance, along with alignment with the Sustainable Development Goals, requires a multifaceted approach that considers the impacts AI may have in the future as new developments appear each day. One possible solution is the establishment of a global charter: a universally recognized framework setting out ethical principles for the development and use of AI. Even though the EU's AI Act has set a regulation, a regulatory framework recognized internationally, possibly created under the UN, would serve every party involved better. Encouraging Member States to adopt already existing regulations and frameworks could be the starting point for any further action to create new ones. New frameworks could be more region-specific in order to achieve more sustainable results, and they should be updated regularly as AI and technology evolve.

Additionally, evaluating an AI system's social and environmental risks before deployment could help prevent the harm new systems may cause. Providing regulations against which AI systems can be tested and overseen would allow developers to embed those requirements during development and have them checked later against standards approved by every member involved; independent certification processes for AI systems could be part of this. The development and widespread adoption of explainable AI (XAI) could also address concerns such as transparency and accountability. Setting standards for energy-efficient design in the building and use of AI, with the help of experts, could yield meaningful energy- and water-saving measures.

Raising public awareness of the ethical use of AI is also significant. Awareness of how AI can support sustainable development, and of how it should be used and developed to meet ethical standards, could be promoted under UN oversight, taking the different perspectives and situations of countries into account. For instance, in places where the digital divide has kept people from learning about new technologies, those technologies should first be introduced. AI's impact on the SDGs could be substantial if addressed properly: projects in which AI is used to advance progress toward the SDGs should be discussed and implemented. With these ideas and fruitful debate among the delegates, sustainable solutions to the issues AI presents can be reached.

Useful Links

https://www.theguardian.com/technology/2016/dec/28/2016-the-year-ai-came-of-age
https://www.weforum.org/stories/2016/08/2016-might-seem-like-the-year-of-ai-but-we-could-be-getting-ahead-of-ourselves/
https://gdpr-info.eu/
https://oecd.ai/en/ai-principles
https://artificialintelligenceact.eu/ai-act-explorer/
https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence
https://aiforgood.itu.int/
https://www.un.org/en/ai-advisory-body
https://www.un.org/counterterrorism/sites/www.un.org.counterterrorism/files/countering-terrorism-online-with-ai-uncct-unicri-report-web.pdf

Bibliography
"Artificial Intelligence (AI) Coined at Dartmouth." Dartmouth, home.dartmouth.edu/about/artificial-intelligence-ai-coined-dartmouth. Accessed 4 Jan. 2025.

"What Is GDPR, the EU's New Data Protection Law?" GDPR.eu, 29 Aug. 2024, gdpr.eu/what-is-gdpr/.

"2016: The Year AI Came of Age." The Guardian, Guardian News and Media, 28 Dec. 2016, www.theguardian.com/technology/2016/dec/28/2016-the-year-ai-came-of-age.

"AI Principles | OECD." OECD, www.oecd.org/en/topics/sub-issues/ai-principles.html. Accessed 4 Jan. 2025.

"The EU Artificial Intelligence Act." EU Artificial Intelligence Act, https://artificialintelligenceact.eu/. Accessed 4 Jan. 2025.

"AI Has an Environmental Problem. Here's What the World Can Do About That." UNEP, www.unep.org/news-and-stories/story/ai-has-environmental-problem-heres-what-world-can-do-about. Accessed 4 Jan. 2025.

"Artificial Intelligence." Encyclopædia Britannica, Encyclopædia Britannica, Inc., 31 Dec. 2024, www.britannica.com/technology/artificial-intelligence.

"European Approach to Artificial Intelligence." Shaping Europe's Digital Future, digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence. Accessed 4 Jan. 2025.

Sheehan, Matt. "China's Views on AI Safety Are Changing-Quickly." Carnegie Endowment for International Peace, carnegieendowment.org/research/2024/08/china-artificial-intelligence-ai-safety-regulation?lang=en. Accessed 4 Jan. 2025.

"Sustainable Development." Encyclopædia Britannica, Encyclopædia Britannica, Inc., 21 Nov. 2024, www.britannica.com/topic/sustainable-development.

"What Is Explainable AI (XAI)?" IBM, 19 Dec. 2024, www.ibm.com/think/topics/explainable-ai.

"Digital Divide." Encyclopædia Britannica, Encyclopædia Britannica, Inc., 26 Nov. 2024, www.britannica.com/topic/digital-divide.

"Artificial Intelligence (AI)." U.S. Department of State, www.state.gov/artificial-intelligence/. Accessed 4 Jan. 2025.

"Recommendation on the Ethics of Artificial Intelligence." Unesdoc.unesco.org, unesdoc.unesco.org/ark:/48223/pf0000381137. Accessed 4 Jan. 2025.

"AI for Good." AI for Good, 18 Dec. 2024, aiforgood.itu.int/. Accessed 4 Jan. 2025.

"AI Advisory Body." United Nations, www.un.org/en/ai-advisory-body. Accessed 4 Jan. 2025.

Marr, Bernard. "The 15 Biggest Risks of Artificial Intelligence." Forbes, Forbes Magazine, 20 Feb. 2024, www.forbes.com/sites/bernardmarr/2023/06/02/the-15-biggest-risks-of-artificial-intelligence/.

"Common Ethical Challenges in AI." Human Rights and Biomedicine, Council of Europe, 29 Sept. 2023, www.coe.int/en/web/human-rights-and-biomedicine/common-ethical-challenges-in-ai.

Bureau of Competition, et al. "Consumers Are Voicing Concerns about AI." Federal Trade Commission, 19 Jan. 2024, www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/10/consumers-are-voicing-concerns-about-ai.

Jasper, Paul. "Can AI Help Us Achieve the SDGs?" SDG Action, 9 July 2024, sdg-action.org/can-ai-help-us-achieve-the-sdgs/.

Li, Pengfei, et al. Making AI Less "Thirsty": Uncovering and Addressing the Secret Water Footprint of AI Models. University of California, Riverside, 29 Oct. 2023, pp. 1-16, arxiv.org/pdf/2304.03271.

"Center for AI Safety (CAIS)." Center for AI Safety (CAIS), www.safe.ai/. Accessed 5 Jan. 2025.

"NAIAC." Center for AI and Digital Policy, www.caidp.org/resources/naiac/. Accessed 5 Jan. 2025.

"Social Development for Sustainable Development | Division for Inclusive Social Development (DISD)." United Nations, social.desa.un.org/2030agenda-sdgs. Accessed 5 Jan. 2025.
