CCS345-Ethics-and-AI-Lecture-Notes
COURSE OBJECTIVES:
□ Study the morality and ethics in AI
UNIT I INTRODUCTION 6
Model Process for Addressing Ethical Concerns During System Design - Transparency of
Autonomous Systems-Data Privacy Process- Algorithmic Bias Considerations -
Ontological Standard for Ethically Driven Robotics and Automation Systems
30 PERIODS
UNIT-1 INTRODUCTION
Machine learning is the term used for AIs which are capable of learning or, in the case
of robots, adapting to their environment.
Ethics are moral principles that govern a person's behaviour or the conduct of an
activity.
Kant's PRINCIPLE:
'act as you would want all other people to act towards all other people'
AI ETHICS:
AI ethics is concerned with the question of how human developers, manufacturers and
operators should behave in order to minimise the ethical harms that can arise from AI
in society, either arising from poor (unethical) design, inappropriate application or
misuse.
IMPACT ON SOCIETY:
1. Labour Market: AI and robotics have been predicted to destroy jobs and create
irreversible damage to the labour market.
Robotics added an estimated 0.4 percentage points of annual GDP growth and labour
productivity for 17 countries between 1993 and 2007
• 48 percent believed that robots and digital agents would displace significant
numbers of both 'blue' and 'white' collar workers, with many expressing concern
that this would lead to vast increases in income inequality, large numbers of
unemployable people, and breakdowns in the social order (Smith and Anderson,
2014).
• However, the other half of the experts who responded to this survey (52%)
expected that technology would not displace more jobs than it created by 2025.
• For example, the share of workers in leisure and hospitality sectors could
increase if household incomes rose, enabling people to afford more meals out and
travel
• According to their analysis, telemarketers, watch repairers, cargo agents, tax
preparers, photographic process workers, new accounts clerks, library
technicians, and data-entry specialists have a 99 percent chance of having their
jobs computerised.
• At the other end of the spectrum, emergency management directors, mental
health social workers, health care social workers, surgeons, firefighter
supervisors and dieticians have less than a one percent chance of this.
2. Inequality
Revenues will be split across fewer people, increasing social inequalities.
Consequently, individuals who hold ownership in AI-driven companies are set to
benefit disproportionately.
Inequality: exploitation of workers:
Example: manually tagging objects in images in order to create training data sets for
machine learning systems (for example, training data sets for driverless car AIs)
One of the key ethical issues is that – given the price of the end-products – these
temporary workers are being inequitably reimbursed for work that is essential to the
functioning of the AI technologies.
The accumulation of technological, economic and political power in the hands of the top
five players – Google, Facebook, Microsoft, Apple and Amazon – affords them undue
influence in areas of society relevant to opinion-building in democracies: governments,
legislators, civil society, political parties, schools and education, journalism and
journalism education and — most importantly — science and research.
The privacy and dignity of AI users must be carefully considered when designing service,
care and companion robots, as these devices work in people's homes.
These voice activated devices are capable of learning the interests and behaviour of their
users, but concerns have been raised about the fact that they are always on and
listening in the background.
Human rights
If AI can be used to determine people's political beliefs, then individuals in our society
might become susceptible to manipulation.
Political strategists could use this information to identify which voters could be
influenced to favour a particular party. In India, sentiment analysis tools are increasingly
deployed to gauge the tone and nature of speech online, and are often trained to carry out
automated content removal.
IMPACT ON HUMAN PSYCHOLOGY
As AI machines interact with humans, the impact on real human relationships should be
studied.
1. Relationships:
In the future, robots are expected to serve humans in various social roles: nursing,
housekeeping, caring for children and the elderly, teaching, and more. People may start to
form emotional attachments to robots.
Danger: Manipulation
• for example, a hacker could take control of a personal robot and exploit its
unique relationship with its owner to trick the owner into purchasing products.
• While humans are largely prevented from doing this by feelings like
empathy and guilt, robots would have no concept of this.
EXPERIMENT: small groups of people worked with a humanoid robot to lay railroad
tracks in a virtual world.
Stock markets are well suited to automation, as they now operate almost entirely
electronically, generating huge volumes of data at high velocity.
The dynamism of markets means that a timely response to information is critical, and
hence slow-thinking humans cannot compete.
Algorithmic trading can generate profits at a speed and frequency that is impossible for
a human trader.
• AI can learn the technique of order-book spoofing, which involves placing orders
with no intention of ever executing them in order to manipulate honest participants in
the marketplace.
Social bots have also been shown to exploit markets by artificially inflating stocks
through fraudulent promotion, before selling their positions to unsuspecting parties at an
inflated (high) price.
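The spoofing behaviour described above can be sketched as a crude surveillance check: flag traders who place many orders but cancel nearly all of them. This is a minimal illustration with invented thresholds and data layout, not a real market-surveillance rule.

```python
# Minimal sketch (illustrative only): flag traders whose cancel-to-order
# ratio exceeds a threshold, a crude proxy for spoofing-like behaviour.
# The threshold and data layout are assumptions, not a real rule.

def flag_possible_spoofing(orders, cancel_ratio_threshold=0.9, min_orders=10):
    """orders: list of (trader_id, status), status is 'cancelled' or 'filled'."""
    counts = {}
    for trader, status in orders:
        placed, cancelled = counts.get(trader, (0, 0))
        counts[trader] = (placed + 1, cancelled + (status == "cancelled"))
    flagged = []
    for trader, (placed, cancelled) in counts.items():
        if placed >= min_orders and cancelled / placed >= cancel_ratio_threshold:
            flagged.append(trader)
    return flagged

orders = [("A", "cancelled")] * 19 + [("A", "filled")] + [("B", "filled")] * 12
print(flag_possible_spoofing(orders))  # ['A']
```

Real surveillance systems look at far richer signals (order sizes, timing, price levels), but the idea of detecting a suspicious pattern in order data is the same.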
IMPACT ON THE ENVIRONMENT AND THE PLANET
AI and robotics technologies require considerable computing power, which comes with
an energy cost.
1. Use of Natural Resources:
• The extraction of nickel, cobalt and graphite for use in lithium ion batteries –
commonly found in electrical cars and smartphones - has already damaged the
environment
• AI will likely increase this demand.
AI will also require large amounts of energy for manufacturing and training – for
example, it would take many hours to train a large-scale AI model to understand and
recognise human language such that it could be used for translation
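The energy cost of training can be made concrete with a back-of-envelope calculation; all figures below (GPU count, power draw, training hours, datacentre overhead) are invented for illustration.

```python
# Back-of-envelope sketch: energy used to train a model.
# All figures below are illustrative assumptions, not measurements.

def training_energy_kwh(gpus, power_draw_watts, hours, pue=1.5):
    """Energy in kWh, including datacentre overhead (PUE factor)."""
    return gpus * power_draw_watts * hours * pue / 1000

# e.g. 8 hypothetical GPUs drawing 300 W each for 240 hours:
print(training_energy_kwh(8, 300, 240))  # 864.0 kWh
```

Even this toy calculation shows how quickly hardware count and training time multiply into a significant energy bill.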
IMPACT OF AI ON TRUST:
1. Fairness
2. Transparency
3. Algorithm auditors
4. Accountability
2. Transparency
3. Algorithm auditors
ROLE: Provide simulated traffic scenarios to ensure that the vehicle did not increase
the risk to pedestrians or cyclists.
4. Accountability:
In the event of damages incurred, there must be a mechanism for redress so that victims
can be sufficiently compensated.
The Institute for Ethical AI & Machine Learning (United Kingdom): based on eight
principles for responsible machine learning (H R B T W P T S):
1. maintenance of human control,
2. redress for AI impact,
3. evaluation of bias,
4. transparency,
5. effect of AI automation on workers,
6. privacy, 7. trust, and 8. security.
The Future of Life Institute (United States): focus on safety: autonomous weapons arms race,
The Association for Computing Machinery (United States): the transparency, usability,
security, and accountability of AI in terms of research, development, and implementation.
The Foundation for Responsible Robotics (The Netherlands): responsible robotics, with
proactively taking actions (anticipating or foreseeing).
ETHICAL HARMS AND CONCERNS
1. Human rights and well-being : Is AI in the best interests of humanity and human well-
being?
3. Accountability and responsibility : Who is responsible for AI, and who will be held
accountable for its actions?
6. Social harm and social justice: How do we ensure that AI is inclusive, free of bias and
discrimination, and aligned with public morals and ethics?
7. Financial harm : How will we control for AI that negatively affects economic
opportunity and employment, and either takes jobs from human workers or decreases the
opportunity and quality of these jobs?
8. Control and the ethical use – or misuse – of AI: How might AI be used unethically,
and how can we protect against this? How do we ensure that AI remains under complete
human control, even as it develops and 'learns'?
10. Informed use: What must we do to ensure that the public is aware, educated, and
informed about their use of AI?
CASE STUDY: HEALTHCARE ROBOTS:
Artificial Intelligence and robotics are rapidly moving into the field of healthcare and will
increasingly play roles in 1. diagnosis and 2. clinical treatment.
As robots become more prevalent, the potential for future harm will increase.
1. SAFETY
Robots should not harm people, and they should be safe to work with.
This point is especially important in areas of healthcare that deal with ill people, the
elderly, and children.
Digital healthcare technologies offer the potential to improve accuracy of diagnosis and
treatments, but to thoroughly establish a technology's long-term safety and performance
investment in clinical trials is required.
2. USER UNDERSTANDING
EXAMPLE SCENARIO:
A machine learning algorithm erroneously classified a low-risk asthmatic patient as
high risk, and the patient was taken to the ICU.
3. DATA PROTECTION
EXAMPLE SCENARIO:
Data: personal health data of persons gathered by fitness trackers.
Danger: data might be sold to third parties such as insurance companies.
4. LEGAL RESPONSIBILITY
When issues occur, legal liability must be established. If equipment can be proven to be
faulty then the manufacturer is liable, but it is often tricky to establish what went wrong
during a procedure and whether anyone, medical personnel or machine, is to blame.
5. EQUALITY OF ACCESS
But people with less digital knowledge will not be able to use these advancements,
leading to inequality in medical treatment.
6. AUTONOMY:
Robots could be used to help elderly people live in their own homes for longer, giving
them greater freedom and autonomy.
Question:
If a patient asked a robot to throw them off the balcony, should the robot carry out that
command?
Hence the degree of autonomy for robots should be under control.
AI technology has the potential to transform warfare even more dangerously than nuclear
weapons, aircraft, computers and biotechnology did.
As automatic and autonomous systems have become more capable, militaries are trying
to adopt them. Widespread adoption of AI could lead to an arms race.
The Russian Military industrial committee has already approved an aggressive plan
whereby 30% of Russian combat power will consist of entirely remote-controlled and
autonomous robotic platforms by 2030.
2. Drone technologies
• For the price of a single high-end aircraft, a military could acquire one million
drones.
3. Robotic assassination
EXAMPLES:
• The Iron Dome system detects incoming rockets, predicts their trajectory using AI, and
then sends this information to a human soldier who decides whether to launch an
interceptor rocket.
• The SGR-A1 robot, built by Samsung, uses a low-light AI camera and pattern-recognition
software to detect intruders and then issues a verbal warning. If the intruder does not
surrender, the robot has a machine gun that can be fired remotely by a soldier the robot
has alerted, or by the robot itself if it is in fully automatic mode.
UNIT III AI STANDARDS AND REGULATION 6
IEEE Std 7000: The goal of this standard is to enable organizations to design systems
with explicit consideration of ethical values.
PURPOSE:
IEEE Std 7000(TM) does not give specific guidance on the design of algorithms to
apply ethical values such as fairness and privacy.
Projects conforming to IEEE Std 7000 balance management commitments for time
and budget constraints with the long-term values of social responsiveness and
accountability.
• Transparency can be defined as the extent to which the system discloses the
processes or parameters that relate to its functioning.
• Transparency can also be considered as the property that makes it possible to
discover how and why the system made a particular decision or acted the way it
did.
• The aim of P7001 is to provide a standard that sets out “measurable, testable
levels of transparency, so that autonomous systems can be objectively assessed”.
• An autonomous system is defined in P7001 as “a system that has the capacity to
make decisions itself, in response to some input data or stimulus, with a
varying degree of human intervention depending on the system’s level of
autonomy”.
• The intended users of P7001 are
a) specifiers,
b) designers,
c) manufacturers,
d) operators and maintainers of autonomous systems.
• Furthermore P7001 is generic; it is intended to apply to all autonomous systems
including robots (autonomous vehicles, assisted living robots, drones, robot
toys, etc.), as well as software-only AI systems, such as medical diagnosis AIs,
chatbots, loan recommendation systems, facial recognition systems, etc.
To ensure the transparency of autonomous systems to a range of stakeholders, the
IEEE P7001 standard addresses the following issues.
Data Privacy Process- IEEE STANDARD P7002
• This standard specifies how to manage privacy issues for systems or software
that collect personal data.
• It will do so by defining requirements that cover corporate data collection policies
and quality assurance.
• It also includes a use case and data model for organizations developing
applications involving personal information.
• The standard will help designers by providing ways to identify and measure
privacy controls in their systems utilizing privacy impact assessments.
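The data-minimisation idea behind such privacy controls can be sketched in code: keep only the fields the application actually needs, and replace direct identifiers with a pseudonymous token. The field names, allow-list and salt below are illustrative assumptions, not part of P7002.

```python
# Minimal sketch: pseudonymize direct identifiers in a record before
# storage, keeping only the fields the application actually needs.
# Field names and the salt are illustrative assumptions.
import hashlib

ALLOWED_FIELDS = {"age_band", "step_count"}   # data-minimisation allow-list
SALT = b"example-salt"                        # in practice: secret and rotated

def pseudonymize(record):
    out = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    token = hashlib.sha256(SALT + record["user_id"].encode()).hexdigest()[:16]
    out["user_token"] = token
    return out

rec = {"user_id": "alice", "age_band": "30-39", "step_count": 8500,
       "home_address": "12 Example St"}
print(pseudonymize(rec))
```

The stored record no longer contains the name or address, yet records from the same user can still be linked through the token, which is the usual trade-off pseudonymization makes.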
ALGORITHMIC BIAS CONSIDERATIONS : The IEEE P7003 standard
Any system that will produce different results for some people than for others is open
to challenges of being biased.
Algorithmic systems in this context refers to the combination of algorithms, data and
the output deployment process that together determine the outcomes that affect end
users.
Unjustified bias refers to differential treatment of individuals based on criteria for which
no operational justification is given.
Inappropriate bias refers to bias that is legally or morally unacceptable within the
social context where the system is used, e.g. algorithmic systems that produce outcomes
with differential impact strongly correlated with protected characteristics (such as race,
gender, etc).
ONTOLOGICAL STANDARD FOR ETHICALLY DRIVEN ROBOTICS AND
AUTOMATION SYSTEMS: THE IEEE P7007 STANDARD
Ontologies are formal representations of the concepts, relations, and constraints in a domain of
knowledge. They are widely used in artificial intelligence (AI) to provide a common vocabulary,
structure, and reasoning for various tasks and applications.
The ontological specification reports provide methods to assess AI systems and organizations in their
ethical performance regarding the key ethical principles of transparency, accountability, bias, and
privacy.
EXAMPLES OF ONTOLOGIES AND THEIR ETHICAL RISKS:
1. For example, an ontology for a criminal justice AI system might include concepts such as crime,
criminal, and victim. If the ontology is biased against certain groups of people, such as African
Americans, then the AI system will be more likely to recommend harsher punishments for
members of those groups.
2. For example, an ontology for a military AI system might include concepts such as weapon,
target, and enemy. If the ontology is biased against certain groups of people, then the AI system
will be more likely to identify members of those groups as enemies and recommend that they be
attacked.
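A minimal sketch of what an ontology looks like in code: concepts linked by "is-a" relations, with a simple subsumption query. The concepts are invented for illustration and are not the normative content of IEEE P7007.

```python
# Minimal sketch of an ontology as plain data: concepts, "is-a" links,
# and a simple subsumption query. The concept names are illustrative.

IS_A = {
    "surgical_robot": "service_robot",
    "companion_robot": "service_robot",
    "service_robot": "robot",
}

def is_a(concept, ancestor):
    """True if `concept` is (transitively) a kind of `ancestor`."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = IS_A.get(concept)
    return False

print(is_a("surgical_robot", "robot"))   # True
print(is_a("robot", "surgical_robot"))   # False
```

Real ontologies add axioms and constraints on top of such taxonomies, which is what lets tools detect the inconsistency, incompleteness, and redundancy mentioned below.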
IEEE P7007:
This standard establishes a set of ontologies with different abstraction levels that contain
concepts, definitions, axioms, and use cases that are appropriate to establish ethically
driven methodologies for the design of robots and automation (R&A) systems.
Purpose:
The purpose of the standard is to establish a set of definitions and their relationships to
enable the development of R&A in accordance with shared values and internationally
accepted ethical principles that facilitate trust in the creation and use of R&A.
BENEFITS:
The use of ontologies for representing knowledge in any domain has several benefits that
include the following:
b) Tools for analyzing concepts and their relationships in searching for inconsistency,
incompleteness, and redundancy
c) Standardization of the language used in communication among robots from different
manufacturers
UNIT-4 ROBO ETHICS
ROBOETHICS:
Ethics is the branch of philosophy which studies human conduct, the concepts of good
and evil.
Roboethics (also called machine ethics) deals with the code of conduct that robot
design engineers must implement in the artificial intelligence of a robot.
Isaac Asimov developed the Three Laws of Robotics, arguing that intelligent robots
should be programmed in a way that they obey the following three laws:
□ A robot may not injure a human being or, through inaction, allow a human being to
come to harm.
□ A robot must obey the orders given it by human beings except where such orders
would conflict with the First Law.
□ A robot must protect its own existence as long as such protection does not conflict
with the First or Second Law.
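The precedence among the laws can be illustrated as an ordered rule check, where a lower-numbered law always overrides a higher-numbered one; the action attributes below are invented for illustration.

```python
# Toy sketch: Asimov's Three Laws as an ordered rule check. A
# lower-numbered law always overrides a higher-numbered one.
# The action attributes are invented for illustration.

def permitted(action):
    """action: dict of boolean flags describing predicted consequences."""
    if action.get("harms_human"):           # First Law: never harm a human
        return False
    if action.get("is_human_order"):        # Second Law: obey (First Law passed)
        return True
    return not action.get("destroys_self")  # Third Law: self-preservation

print(permitted({"is_human_order": True, "harms_human": True}))   # False
print(permitted({"is_human_order": True, "harms_human": False}))  # True
```

The balcony question in the healthcare case study above is exactly this precedence at work: a human order that conflicts with the First Law must be refused.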
ETHICAL CONCERNS
1. Social cues: robots should deliver a service that is as human-like as possible.
However, a customer service robot should not hinder or replace human-to-
human interactions. It is important to guarantee this aspect when a company
wants to use robots in a service delivery context.
2. Trust and safety: The extent to which a robot is deemed safe and trustworthy is
important to the user’s intention to use the technology. Companies using a service
robot must always guarantee trust and safety.
3. Autonomy: Even though, in our case, this variable did not have an influence on
the user’s intention to use a robot, the idea of being able to restrict a robot’s
autonomy can be found in ethical charters. Therefore, we argue that a company
using a service robot should always be able to regulate a robot’s autonomy,
especially in cases when the consequences of the robot’s actions cannot be totally
controlled.
4. Privacy and data protection: Privacy and data protection play a big role in the
intention to use a robot. First, a company using a service robot should always
respect its customers’ right to privacy. As transparency (i.e., disclosure about what,
how and why data is collected) leads to a better user experience, companies (and
their robots) have to be transparent about the collection and use of their customers’
data. Secondly, companies using customer service robots should ensure that they
protect their customers’ data by encrypting it.
ETHICS:
1. Ethics is the branch of philosophy concerned with the evaluation of human conduct.
2. Ethics are universal.
3. Ethics applies to groups and organizations.
MORALITY:
1. Morality is the right or wrong of an action, a way of life.
2. Morality is often culture-specific.
3. Morality applies to individuals.
MORAL THEORIES
1. Utilitarianism is a theory of morality that advocates actions that foster happiness and
oppose actions that cause unhappiness. Utilitarianism promotes "the greatest amount of
good for the greatest number of people."
2. Contractualism is the theory based on a mutual contract between the designer and
the contract provider.
An action is morally wrong if it is contrary to the general system of moral rules upon
which there could be informed, unforced agreement.
3. Deontologism:
Deontological ethics is the ethical theory that the morality of an action should be based
on whether that action itself is right or wrong under a series of rules and principles,
rather than based on the consequences of the action.
Ex: 1. Do not kill 2. do not steal 3. do not lie 4. do not cheat etc
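The utilitarian rule above can be illustrated as picking the action with the greatest total utility summed over everyone affected; the utilities are invented numbers, and real moral weighing is of course not this simple.

```python
# Toy sketch of the utilitarian rule: choose the action with the
# greatest total utility over everyone affected. The utilities are
# invented numbers for illustration only.

def best_action(actions):
    """actions: dict mapping action name -> list of per-person utilities."""
    return max(actions, key=lambda a: sum(actions[a]))

actions = {
    "build_park": [2, 2, 2, 2],    # modest benefit to many (total 8)
    "build_mall": [5, 1, -1, -1],  # large benefit to a few (total 4)
}
print(best_action(actions))  # build_park
```

A deontological rule, by contrast, would not sum anything: it would first check each action against fixed duties ("do not kill", "do not lie") regardless of the totals.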
ETHICS IN SCIENCE AND TECHNOLOGY
Science ethics: In science, ethical principles, especially honesty and integrity, should
guide all stages of scientific practice—including data collection, peer review,
publishing, and replication of findings—to assure that scientific knowledge is unbiased
and trustworthy.
ETHICAL PRACTICES:
1. Scientific honesty
Scientists should not commit scientific fraud by, for example, destroying, or
misrepresenting data.
2. Carefulness
3. Intellectual freedom
Scientists should be free to pursue new ideas and criticize old ones and conduct
research on anything they find interesting.
4. Openness
Whenever possible, scientists should share data, results, methods, theories, equipment,
and so on; allow people to see their work; and be open to criticism.
5. Attribution of credit
Scientists should not plagiarise the work of other scientists. They should give credit
where credit is due but not where it is not due.
6. Public responsibility
Scientists should report research in the public media when the research has an
important and direct bearing on human happiness and when the research has been
sufficiently validated by scientific peers.
Technology ethics: Ethics in technology governs principles of right and wrong while
using technological advancements.
Ethical Issues in an ICT Society
COMPONENTS OF ICT:
• Communication
• Data
• Technologies
• Transactions
• Internet services
• Cloud services
• Hardware
• Software
To deal with ICT society it is important to find out the ethical issues.
Some of the major ethical issues faced by Information & communication Technology
(ICT) are:
1. Personal Privacy
Due to the distribution of the network on a large scale, data or information transfer
takes place in large amounts, which leads to hidden chances of disclosing information
and violating the privacy of any individual or group.
2. Access Right
Networks on the internet cannot be made fully secure from unauthorized access.
Generally, intrusion detection systems are used to determine whether a user is an
intruder or an appropriate user.
3. Harmful Actions
Harmful actions in computer ethics refer to damage to IT, such as loss of important
information, loss of property, or loss of ownership. Recovering from harmful actions
requires extra time and effort, for example to remove viruses from computer systems.
4. Copyright
Copyright law works as a very powerful legal tool in protecting computer software, both
before a security breach and after a security breach.
5. Liability
Software developers should be aware of the liability issue when making ethical decisions.
A software developer makes promises about the nature and quality of the product, which
are given as a warranty.
6. Piracy
Piracy is the creation of illegal copies of software. The software industry is prepared
to counter software piracy, and its efforts deal with an increasing number of actions
concerning the protection of software.
Harmonization of Principles
Harmonization is the act of making systems or laws the same or similar in different
companies, countries, etc. so that they can work together more easily.
Internationally recognized institutions such as the United Nations and the World Health
Organization (WHO) have identified general ethical principles that have been adopted by
most nations, cultures, and people of the world.
• nondiscrimination
• responsibility towards the biosphere
Privacy: What information about one's self or one's associations must a person reveal
to others?
Property: Who owns information? What are the just and fair prices for its exchange?
ROBOETHICS TAXONOMY
Roboethics Taxonomy refers to the classification of types of Robots available and the
ethics corresponding to their usage.
1. Humanoids: A humanoid robot is a robot resembling the human body in shape.
Artificial intelligence will be able to lead the robot to fulfill the missions required by the
end users.
To achieve this goal, over the past decades scientists have worked on AI techniques in
many fields, including:
The increasing autonomy of robots could give rise to unpredictable and unforeseeable
behaviors.
Hence the designers must anticipate the threats and foresee the dangers.
2. Industrial Robotics
5. Outdoor Robotics
Outdoor robots are intelligent machines that explore, develop, and secure our world.
Use of outdoor robots could lead to excessive exploitation of the planet, which can
become a threat to biodiversity and life on the planet.
6. Surgical Robotics
Surgical Robots are robots that allow doctors to perform many types of complex
surgeries with more precision, flexibility and control than is possible with
conventional techniques.
Typical applications are:
• Robotic telesurgical workstations
• Robotic systems for diagnosis (CT scan: computerized tomography scan)
Issues:
• High cost of robotic systems in the medical field could widen the digital divide
between developed and developing countries
UNIT-5
OPPORTUNITIES OF AI:
interest pattern .
b) Dynamic Pricing Structure: It’s a smart way of fixing price of the product
based on the demand data.
c) Fake Review Detection: AI algorithms are used to detect and delete Fake
Reviews.
2. AI in Education
Sending automated messages to students and parents regarding any vacation or
test results is done by Artificial Intelligence.
AI applications in Education.
a) Voice Assistant: With the help of AI algorithms, this feature can be used in
multiple ways to save time.
b) Gamification: This feature has enabled e-learning companies to design attractive
game models, so that kids can learn in a fun way. This will also ensure that they
are catching the concepts.
Artificial Intelligence is one of the major technologies that gives the robotics field
a boost in efficiency. AI enables robots to make decisions in real time and increases
productivity. For example, suppose there is a warehouse in which robots are used to
manage goods packages. The robots are only designed to deliver the task, but Artificial
Intelligence makes them able to analyze the vacant space and make the best decision in
real time. Let’s take a closer look at AI applications in Robotics.
• NLP: Natural Language Processing plays a vital role in robotics, interpreting
commands as a human being instructs. This uses AI algorithms & techniques
such as sentiment analysis, syntactic parsing, etc.
GPS technology uses Artificial Intelligence to find the best route and provide the best
available route to users for traveling. Research from MIT also suggests that AI is able
to provide accurate, timely, and real-time information about any specific location. It
helps users choose their type of lane and road, which increases safety. GPS and navigation use
the convolutional and graph neural network of Artificial Intelligence to provide these
suggestions. Let’s take a closer look at AI applications in GPS & Navigation.
• Voice Assistance: This feature allows users to interact with the AI using a hands-
free feature & which allows them to drive seamlessly while communicating
through the navigation system.
• Personalization (Intelligent Routing): The personalized system gets active based
on the user’s pattern & behavior of preferred routes. Irrespective of the time &
duration, the GPS will always provide suggestions based on multiple patterns &
analyses.
• Positioning & Planning: GPS & Navigation require enhanced AI support for
better positioning & planning to avoid unwanted traffic zones. To help with this,
AI-based techniques such as Kalman filtering and sensor fusion are used. Besides
this, AI also uses prediction methods to analyze the fastest & most efficient route
using real-time data.
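The routing idea above reduces to shortest-path search over a road graph whose edge weights include traffic penalties. A minimal sketch with an invented graph (real systems use far richer live data):

```python
# Sketch of the core idea behind AI-assisted routing: Dijkstra-style
# shortest path over a road graph whose edge weights (minutes) already
# include a traffic penalty. The graph and weights are invented.
import heapq

def shortest_route(graph, start, goal):
    """graph: {node: [(neighbour, minutes), ...]}. Returns total minutes."""
    pq, seen = [(0, start)], set()
    while pq:
        cost, node = heapq.heappop(pq)
        if node == goal:
            return cost
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + minutes, nxt))
    return None  # goal unreachable

roads = {
    "home": [("highway", 5), ("backroad", 9)],
    "highway": [("office", 20)],   # congested: heavy traffic penalty
    "backroad": [("office", 7)],
}
print(shortest_route(roads, "home", "office"))  # 16, via the backroad
```

Updating the edge weights from live traffic data and re-running the search is what makes the suggested route change as conditions change.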
5. Healthcare
Artificial Intelligence is widely used in the field of healthcare and medicine. The
various algorithms of Artificial Intelligence are used to build precise machines that are
able to detect minor diseases inside the human body. Also, Artificial Intelligence uses the
medical history and current situation of a particular human being to predict future
diseases. Artificial Intelligence is also used to find the current vacant beds in the hospitals
of a city that saves the time of patients who are in emergency conditions. Let’s take a
closer look at AI applications in Healthcare.
• Insights & Analysis: With the help of AI, large datasets that include clinical
data, research studies, and public health data are analyzed to identify trends
and patterns. This in turn aids surveillance and public health planning.
• Patient Monitoring: In case of any abnormal activity or alarming alerts during the
care of patients, an AI system is used for early intervention. Besides this, RPM
(Remote Patient Monitoring) has been growing significantly and is expected to
grow by USD 6 billion by 2025 for treating and monitoring patients.
• Surgical Assistance: Guided by AI algorithms, surgical assistance helps surgeons
take effective decisions based on the provided insights, ensuring that no further
risks arise during the procedure.
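Rule-based patient monitoring of the kind described above can be sketched as a range check over vital signs; the ranges and readings below are illustrative assumptions, not clinical thresholds.

```python
# Minimal sketch of rule-based patient monitoring: raise an alert when a
# vital sign leaves its normal range. The ranges and readings are
# illustrative assumptions, not clinical thresholds.

NORMAL_RANGES = {"heart_rate": (50, 110), "spo2": (92, 100)}

def check_vitals(readings):
    alerts = []
    for vital, value in readings.items():
        low, high = NORMAL_RANGES[vital]
        if not (low <= value <= high):
            alerts.append(f"{vital}={value} outside {low}-{high}")
    return alerts

print(check_vitals({"heart_rate": 128, "spo2": 97}))
# ['heart_rate=128 outside 50-110']
```

Production RPM systems replace the fixed ranges with learned, patient-specific baselines, but the alert-on-deviation structure is the same.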
6. Automobiles
• Emission Reduction: This feature detects and learns patterns from the given
inputs i.e. from the driving pattern of the user and based on this it strategizes to
perform efficient driving patterns by reducing emissions. This algorithm is
well capable of analyzing routes, traffic, car performance patterns, and so on.
7. Agriculture
Artificial Intelligence is also becoming a part of agriculture and farmers’ life. It is used
to detect various parameters such as the amount of water and moisture, amount of
deficient nutrients, etc in the soil. There is also a machine that uses AI to detect where
the weeds are growing, where the soil is infertile, etc. Let’s take a closer look at AI
applications in Agriculture.
• Stock Monitoring: For rigorous monitoring, and to ensure that crops are not
being affected by any disease, AI uses CNNs (convolutional neural networks) to
check live crop feeds and raises an alarm when any abnormality arises.
• Forecasting: With the help of AI, analyzing the weather forecast and crop growth
has become more convenient in the field of agriculture and the algorithms help
farmers to grow crops with effective business decisions.
CHALLENGES OF AI:
There are challenges with AI, and it is necessary to be vigilant (cautious) about these
issues to make sure that artificial intelligence is not doing more harm than good.
1. Biases
We need data to train our artificial intelligence algorithms, and we need to do everything
we can to eliminate bias in that data.
The ImageNet database, for example, has far more white faces than non-white faces.
When we train our AI algorithms to recognize facial features using a database that
doesn’t include the right balance of faces, the algorithm won’t work as well on non-white
faces, creating a built-in bias that can have a huge impact.
I believe it’s important that we eliminate as much bias as possible as we train our AI,
instead of shrugging our shoulders and assuming that we’re training our AI to accurately
reflect our society. That work begins with being aware of the potential for bias in our AI
solutions.
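The first step in auditing such dataset bias is simply measuring how balanced the training data is across groups before training on it; a minimal sketch with invented labels and counts:

```python
# Sketch of a first-step bias audit: measure how balanced a training
# set is across groups. The group labels and counts are invented.
from collections import Counter

def group_shares(labels):
    counts = Counter(labels)
    total = len(labels)
    return {g: counts[g] / total for g in counts}

labels = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50
print(group_shares(labels))
# {'group_a': 0.8, 'group_b': 0.15, 'group_c': 0.05}, heavily skewed
```

A skewed share does not fix anything by itself, but it makes the imbalance visible before the algorithm bakes it in.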
Control
As we use more and more artificial intelligence, we are asking machines to make
increasingly important decisions.
For example, right now, there is an international convention that dictates the use of
autonomous drones. If you have a drone that could potentially fire a rocket and kill
someone, there needs to be a human in the decision-making process before the missile
gets deployed. So far, we have gotten around some of the critical control problems of AI
with a patchwork of rules and regulations like this.
The problem is that AIs increasingly have to make split-second decisions. For example,
in high-frequency trading, over 90% of all financial trades are now driven by algorithms,
so there is no chance to put a human being in control of the decisions.
The same is true for autonomous cars. They need to react immediately if a child runs out
on the road, so it’s important that the AI is in control of the situation. This creates
interesting ethical challenges around AI and control.
Privacy
Privacy (and consent) for using data has long been an ethical dilemma of AI. We need
data to train AIs, but where does this data come from, and how do we use it? Sometimes
we assume that all the data comes from adults with full mental capacity who can
make choices for themselves about the use of their data, but that is not always the case.
For example, Barbie now has an AI-enabled doll that children can speak to. What does
this mean in terms of ethics? There is an algorithm that is collecting data from your
child’s conversations with this toy. Where is this data going, and how is it being used?
As we have seen a lot in the news recently, there are also many companies that collect
data and sell it to other companies. What are the rules around this kind of data collection,
and what legislation might need to be put in place to protect users’ private information?
4. Power Balance
Huge companies like Amazon, Facebook, and Google are using artificial intelligence to
squash their competitors and become virtually unstoppable in the marketplace. Countries
like China also have ambitious AI strategies that are supported by the government.
President Putin of Russia has said, “Whoever wins the race in AI will probably become
the ruler of the world.”
How do we make sure the monopolies we are generating distribute wealth equally, and
that a few countries do not race ahead of the rest of the world? Balancing
that power is a serious challenge in the world of AI.
5. Ownership
Who is responsible for some of the things that AIs are creating?
We can now use artificial intelligence to create text, bots, or even deepfake videos that
can be misleading. Who owns that material, and what do we do with this kind of fake
news if it spreads across the internet?
We also have AIs that can create art and music. When an AI writes a new piece of music,
who owns it? Who has the intellectual property rights for it, and should potentially get
paid for it?
6. Environmental Impact
Sometimes we don’t think about the environmental impact of AI. We assume that we
train an algorithm on data in the cloud and then use it to run recommendation engines
on our websites. However, the data centers that run our cloud infrastructure are
power-hungry.
Training a single AI model, for example, can create about 17 times more carbon
emissions than the average American produces in a year.
How can we use this energy for the highest good and use AI to solve some of the world’s
biggest and most pressing problems? If we are only using artificial intelligence because
we can, we might have to reconsider our choices.
7. Humanity
My final challenge is “How does AI make us feel as humans?” Artificial intelligence has
now gotten so fast, powerful, and efficient that it can leave humans feeling inferior. This
issue may challenge us to think about what it actually means to be human.
AI will also continue to automate more of our jobs. What will our contribution be, as
human beings? I don’t think artificial intelligence will ever replace all our jobs, but AI
will augment them. We need to get better at working alongside smart machines so we can
manage the transition with dignity and respect for people and technology.
X X
AI IN HEALTHCARE:
AI applications in healthcare have changed the medical field, including imaging
and electronic medical records (EMR), laboratory diagnosis, treatment, augmenting the
intelligence of physicians, new drug discovery, and preventive and precision
medicine.
• In healthcare, current laws are not enough to protect an individual’s health data.
• Clinical data collected by robots can be hacked into and used for malicious
purposes that compromise privacy and security.
As robots become more prevalent, the potential for future harm will increase.
1. SAFETY
Robots should not harm people, and they should be safe to work with.
This point is especially important in areas of healthcare that deal with ill people,
the elderly, and children.
Digital healthcare technologies offer the potential to improve the accuracy of diagnosis
and treatment, but thoroughly establishing a technology's long-term safety and
performance requires investment in clinical trials.
2. USER UNDERSTANDING
Clinicians and patients need to understand how an algorithm reaches its conclusions
so that errors can be caught and challenged.
EXAMPLE SCENARIO:
A machine learning algorithm erroneously classified a low-risk asthmatic patient as
high risk, and the patient was taken to the ICU.
3. DATA PROTECTION
EXAMPLE SCENARIO:
Data: Personal health data of a person gathered by fitness trackers.
Danger: The data could be sold to third parties, such as insurance companies.
4. LEGAL RESPONSIBILITY
When issues occur, legal liability must be established. If equipment can be proven to be
faulty, then the manufacturer is liable, but it is often tricky to establish what went wrong
during a procedure and whether anyone, medical personnel or machine, is to blame.
5. EQUALITY OF ACCESS
However, people with less digital knowledge will not be able to use these advancements,
which leads to inequality in medical treatment.
6. AUTONOMY:
Robots could be used to help elderly people live in their own homes for longer, giving
them greater freedom and autonomy.
Question:
If a patient asked a robot to throw them off the balcony, should the robot carry out that
command?
Hence the degree of autonomy given to robots should be kept under control.
--------------------------X----------------------------X------------------------------X----------------
AI IN MANUFACTURING:
Manufacturing in the near future may be fully automated. The manufacturing processes
enabled by artificially intelligent systems would be able to perform the required
processes, and also to inspect, improve, and quality-check products
without any human intervention.
1. Predictive Maintenance
2. Safer Working Conditions
Since even a minor mistake occurring on the assembly line can prove hazardous, a step
towards AI implies that less human assistance is needed to complete unsafe work.
As robots support people and take over unsafe tasks, the number of workplace
accidents will diminish. As a result, this will lead to safer working
conditions than before.
For example, instead of a human doing a crash test of a car, an AI would be a natural
option.
3. Human-Robot Collaboration
There are a large number of robots working in manufacturing plants everywhere
throughout the world. People are concerned that their occupations may be replaced by
robots.
Instead, individuals can be recruited for higher-level positions, such as programming and
management of business processes.
4. Quality 4.0
Industry 4.0 methods can deliver high-quality products by utilizing AI algorithms. If any
issue is found in the early stages, it can be dealt with right away.
5. Cost Optimization
Bringing AI into production lines requires an enormous capital investment, but
the ROI (return on investment) is high. As smart machines take over
day-to-day activities, organizations benefit from lower operating costs.
6. Digital Twins
AI and digital twins create a virtual representation that reproduces the physical
attributes of the plant, products, or machine components.
By utilizing cameras and sensors, the digital twin can maintain a live model of the
factory that helps to precisely foresee wear, movement, and
interactions between different devices.
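As a concrete illustration of the idea, here is a minimal digital-twin sketch in Python. The linear wear model, the `WEAR_LIMIT` constant, and the sensor readings are all invented for illustration; a real twin would be driven by live sensor feeds and a calibrated physics model.

```python
class MachineTwin:
    """Minimal digital twin: mirrors a machine's state from sensor readings
    and predicts how long it can run before maintenance is required."""

    WEAR_LIMIT = 100.0  # arbitrary wear units before service is needed

    def __init__(self):
        self.wear = 0.0

    def ingest(self, load, hours):
        """Update the twin from a sensor reading: load factor (0-1) over hours run."""
        self.wear += load * hours

    def hours_until_service(self, expected_load=0.5):
        """Forecast remaining running hours at an assumed average load."""
        remaining = max(self.WEAR_LIMIT - self.wear, 0.0)
        return remaining / expected_load

twin = MachineTwin()
twin.ingest(load=0.8, hours=50)   # heavy shift
twin.ingest(load=0.3, hours=100)  # light shift
print(twin.wear)                  # 70.0 wear units accumulated
print(twin.hours_until_service()) # 60.0 hours left at average load
```

The key point of the pattern is that the physical machine and its virtual counterpart stay in sync: every sensor reading updates the twin, and predictions (wear, remaining life) are read off the model rather than the machine itself.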
X X X
1. #AIforAll (India's national AI strategy).
3. Decoding Explainable AI
The Explainable AI (XAI) program aims to create a suite of machine learning techniques
that:
• Produce more explainable models
• Give the machine learning algorithms of tomorrow the built-in capability
to explain their logic
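One simple technique in this spirit is permutation feature importance: a model is explained by measuring how much its accuracy drops when a single input feature is scrambled. A minimal sketch (the toy model and data are invented for illustration; this is not a specific method of the XAI program):

```python
import random

def permutation_importance(model, X, y, feature_idx, trials=10):
    """Estimate how much accuracy drops when one feature is shuffled.
    A large drop means the model relies heavily on that feature."""
    def accuracy(rows):
        return sum(model(row) == label for row, label in zip(rows, y)) / len(y)

    base = accuracy(X)
    drops = []
    for _ in range(trials):
        col = [row[feature_idx] for row in X]
        random.shuffle(col)  # break the feature's link to the labels
        X_shuf = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                  for row, v in zip(X, col)]
        drops.append(base - accuracy(X_shuf))
    return sum(drops) / trials

random.seed(0)  # reproducible shuffles
# Toy model: predicts 1 when feature 0 exceeds 0.5; feature 1 is ignored.
model = lambda row: 1 if row[0] > 0.5 else 0
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]
print(permutation_importance(model, X, y, 0))  # positive: model relies on feature 0
print(permutation_importance(model, X, y, 1))  # 0.0: feature 1 is irrelevant
```

Explanations like this do not open the model's internals; they treat it as a black box and report which inputs its decisions actually depend on.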
INTERNATIONAL Strategies

1. The Institute for Ethical AI & Machine Learning (United Kingdom)
Based on eight principles for responsible machine learning:
1. maintenance of human control, 2. redress for AI impact, 3. evaluation of bias,
4. transparency, 5. effect of AI automation on workers, 6. privacy, 7. trust, and
8. security.

2. The Future of Life Institute (United States)
Focus on safety: the autonomous weapons arms race.

3. The Association for Computing Machinery (United States)
The transparency, usability, security, and accountability of AI in terms of research,
development, and implementation.

4. The Foundation for Responsible Robotics (The Netherlands)
Responsible robotics: proactively taking action (anticipating or foreseeing issues).