AI NOTES UNIT-IV

This document explores the ethical implications of artificial intelligence (AI) and management's awareness of its social impact. It discusses the integration of ethical values with economic value, highlighting the conflicts that arise as AI technology advances, particularly concerning labor markets and societal inequalities. It emphasizes the need for companies to prioritize social responsibility in AI development while addressing the complexities of ethical decision-making in a rapidly evolving digital landscape.

21AD1907 PERSPECTIVES AND APPROACHES UNIT-IV

Perspectives on Ethics of AI, Integrating ethical values and economic value, Automating origination, AI: a Binary approach, Machine learning values, Artificial Moral Agents.

1. Perspectives on Ethics of AI:


This research addressed management awareness of the ethical and moral aspects of artificial intelligence (AI). It is a general trend to speak about AI, and many start-ups and established companies are communicating about the development and implementation of AI solutions. Therefore, it is important to consider different perspectives besides the technology and data that are the key elements of AI systems. The way in which societies interact and organise themselves will change. Such transformations require diverse perspectives from society, and particularly from AI system developers, for shaping the humanity of the future. This research aimed to overcome this barrier by answering the question: what kind of awareness does the management of AI companies have about the social impact of its AI product or service? The central research question was divided into five sub-questions that were answered by a fundamental literature review and an empirical research study. This covered the management's understanding of the terms moral, ethics, and artificial intelligence; the internal company prioritisation of morals and ethics; and the stakeholders involved in AI product or service development. It analysed the known and used ethical AI guidelines and principles. Finally, the social responsibility of management regarding AI systems was analysed and compared.

1.1 Introduction
This research aimed to generate awareness of the ethical challenges of artificial intelligence (AI) systems and to analyse the management perspective on, and understanding of, ethics for its AI product or service. Ethics, based on the framework used, are not only about defining what is right and what is wrong. As digitalisation covers many different technologies and aspects, AI can be seen as one of them, and one that will change not only businesses but also humanity. In addition to new technologies and use cases, AI has a deep impact on society and social life and has the potential to seriously shape and change humanity. The increasing digitalisation at all levels [12] does not only lead to the improvement and optimisation of products and processes but also changes the way of internal and external collaboration. Companies need to increase flexibility and openness to innovate new business models with the intelligent usage of new technologies such as AI [2]. A high level of automation with AI systems can improve a company's performance but can also eliminate existing jobs and increase the psychological pressure on employees. Digitalisation is a challenge to employees who execute tasks that are easy to automate, as well as to middle and high management. Digital technologies have important economic and social aspects. Companies strive for product innovations and inventions with new technologies. Moreover, the long-term impacts of digital transformation and new technologies, such as AI, are not clear from the beginning. Digital products and services are often developed by computer scientists for technical use, focused on revenue and growth. Social components, considering the big picture of mankind and taking social responsibility into account, do not always have a high priority for management. Considering the considerable impact of AI systems on society, it is of high importance that companies actively prioritise their social responsibility and take action.

1.2 Ethics and morals


An AI system can be abused by somebody with a lack of morals. Western ethics are based on several attitude and responsibility frameworks, including teleology (e.g. utilitarianism, antiquity, and hedonism) and deontology (e.g. virtue ethics) [17]. Future AI systems will operate in a more integrated manner with humans and may have their own moral status, such as being their own moral entity or doing tasks by their own will [3]. Many ethical principles have social emotions, such as compassion and empathy, in common. The parameters are reward and punishment for guidance. If a human does something bad and also feels bad about it, an emotional punishment is generated by the brain. If a human disregards ethical principles, society may punish this through shaming by peers or a sentence in court. There is no common ethical consensus in today's world, but there are basic principles with broad agreement [19]. In the past, human societies had ethical principles focused on survival. In 2006, the concept of machine ethics proposed by Anderson and Anderson started discussions about ethical issues. Ethics are a complicated and complex concept that cannot be reduced to a single aspect.

1.3 AI ethics implementation


One of the most important factors to consider in AI algorithm training is human bias, such as gender bias or racial bias. As AI systems need plenty of data to train with accuracy, the datasets are chosen by humans first-hand. In this process, existing biases may be transferred to AI systems when they develop themselves in the future. Therefore, it is important to train algorithms without human biases [18]. The deep learning AI model GPT-3 from OpenAI makes decisions based on 175 billion parameters. Its researchers conducted an analysis of biases to better understand their model regarding fairness and bias. Their study showed that internet-trained models have an internet-scale bias [4]. The model tends to reflect stereotypes present in its training data. For example, academic and higher-paid jobs were associated with male persons; Christianity was associated with words such as "ignorant", "judgmental", and "execution"; and Islam was linked to "terrorism", "fasting", and "Allah" [4]. If AI systems gain their own sentience in the future, will they generate their own biases? There are three potential ways to educate AI systems in ethics (a toy sketch of the first two follows the list):
• Implicit ethical agents: constraining the machine's actions to prevent unethical outcomes.
• Explicit ethical agents: explicitly stating the allowed and the forbidden actions.
• Full ethical agents: machines that have consciousness, free will, and intention.
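
As a minimal Python sketch of the first two agent types, assuming made-up action names and rules (this is an illustration, not an established framework; a full ethical agent with consciousness and free will cannot be sketched in code at all):

# Toy illustration of implicit vs. explicit ethical agents.
# All action names and rules here are invented assumptions.

# Implicit ethical agent: unethical outcomes are prevented by construction;
# a harmful action simply does not exist in the machine's action space.
IMPLICIT_ACTION_SPACE = {"recommend_content", "summarise_document"}

# Explicit ethical agent: allowed and forbidden actions are quoted as rules
# and checked at decision time.
ALLOWED = {"recommend_content", "summarise_document"}
FORBIDDEN = {"share_private_data"}

def explicit_ethical_agent(proposed_action: str) -> str:
    """Return the action to execute, refusing or deferring where required."""
    if proposed_action in FORBIDDEN:
        return "refused"
    if proposed_action in ALLOWED:
        return proposed_action
    return "deferred_to_human"  # neither allowed nor forbidden: do not act

print(explicit_ethical_agent("share_private_data"))  # -> refused
print(explicit_ethical_agent("summarise_document"))  # -> summarise_document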

1.4 Guidelines and principles


AI systems offer considerable optimisation potential in various fields, such as transportation and logistics, or even in preventing diseases, and could radically reinvent society. Therefore, humans need to take care not to become dependent on AI systems and to keep the ability to make the final decision themselves. The risk is that AI systems disrupt humanity, as the intention to automate routines and make people's lives easier may lead to unplanned changes.

1.5 Management perspective


The findings of the empirical research have been summarised in the corresponding categories, and the results are presented in this section. Referring to the central research question and its sub-questions from the previous chapter, the discussion revolves around the management's consideration of ethical and moral aspects in their AI development. Finally, the motivation of the managers and their awareness of their influence on the social impact of their AI product or service is discussed. The first sub-question of the central research topic concerned the management's understanding of the definitions and terms used. Based on the outcome of the literature study, there were many different existing definitions, and this question covered the perspective of the management. A common definition and understanding would make it easier to generate ethical AI guidelines and certifications at both the national and the global level. Morals were defined in different ways but with a meaning similar to that in the existing literature. Sometimes it was difficult for the management to draw an exact distinction between morals and ethics. The term AI was often described through an example of one's own AI product or service, but in a manner similar to various literature definitions. Future AI ethics will deal with an increase in social and technical complexity and will require the perspectives of different professions. This may lead to a new, diverse AI ethics discipline that influences existing professions such as philosophy, psychology, law, software, and data.

1.6 Conclusion
This research showed how complex and still partly unanswered the topic of the ethics of AI from a management perspective is. Besides the technological complexity, it also affects other disciplines, professions, and nations that need to cooperate to shape and implement global frameworks and standards. Furthermore, digital technologies and trends, partly driven by AI, will have an immense impact on consumer behaviour, the economy, societies, and other sectors. Industry sectors, such as the retail market, will need to transform themselves into a successful combination of online and offline services. Growing unemployment rates and heavy job losses are frequently discussed topics, and a universal basic income (UBI) might be a solution in the future. Past experiences, such as the invention of the steam engine and of electricity, show that humanity and the economy have evolved and new ages have emerged. The development of AI might result in an increased number of software engineers and data scientists in the future. Digitalisation is a powerful transformation for global players and monopolists, giving them the opportunity to extend their market power. The aim of this research was to analyse the management perspective and awareness of ethics in AI. The results of the interviews revealed new perspectives and information. However, there are still many undefined and unregulated issues to solve.

2. Integrating Ethical Values and Economic Value


Economics and ethics both offer important perspectives on our society, but they do so from two different viewpoints: the central focus of economics is how the price system in our economy values resources; the central focus of ethics is the moral evaluation of actions in our society. The rise of Artificial Intelligence (AI) forces humanity to confront new areas in which ethical values and economic value conflict, raising the question of what direction of technological progress is ultimately desirable for society. One crucial area is the effect of AI and related forms of automation on labor markets, which may lead to substantial increases in inequality unless mitigating policy actions are taken or progress is actively steered in a direction that complements human labor. Additional areas of conflict arise when AI systems optimize narrow market value but disregard broader ethical values and thus impose externalities on society, for example when AI systems engage in bias and discrimination, hack the human brain, or increasingly reduce human autonomy. Market incentives to create ever more intelligent systems lead to the ultimate ethical question: whether we should aim to create AI systems that surpass humans in general intelligence, and how to ensure that humanity is not left behind.

2.1 Economics and Ethics – Two Conflicting Value Systems?

Economics and ethics both offer important perspectives on our society, but they do so from two different viewpoints: the central focus of economics is how the price system in our economy values resources; the central focus of ethics is the moral evaluation of actions in our society. Economic value and ethical values may at times look contradictory but are in fact complementary, as argued forcefully e.g. by Amartya Sen (1987). In a market economy, the system of market prices reflects how economic actors – humans in their roles as consumers, producers, workers, employers etc. – value economic resources. Market prices play a central role in guiding economic decisions, including in steering technological progress. Market prices offer some hints on what the individual members of society value. However, they are by no means a full representation of our values, missing out for example on anything that is not traded in the market, including externalities. Market prices thus need to be complemented by ethical values to guide decisions so as to make them desirable for society. Since the ethical values of different individuals differ, I will not argue from one specific set of ethical values in this article, but will instead draw only on those ethical values on which a vast majority of members of our society agree.

2.2 Professional Biases

Even if economists and ethicists agree on the general points discussed, individual members of either profession may still reasonably disagree on the extent to which it is desirable for other institutions, including government, to interfere in the described processes. This is a question of both political preferences and beliefs, for example beliefs in the effectiveness of such alternative institutions. If we compare economists to non-economists, including ethicists, economists probably tend to believe more strongly in the power of markets versus other institutions such as governments. Similarly, ethicists perhaps tend to believe more strongly in the relevance of careful ethical deliberation than non-ethicists, including economists. It is probably true of all scientific fields that people working in the field believe on average more strongly in the relevance of their subject of inquiry than people outside of the field. The reasons include both selection – people are more likely to specialize in a field that they believe is relevant – and cognitive biases that make researchers feel that what they know more about, and what they have invested more time in, is more important. However, the effectiveness of institutions is an empirical question, and both economists and ethicists can learn from evidence.

2.3 Conceptual Differences between Ethical Values and Market Value

Nonetheless, our systems of market prices and of ethical values differ in very significant conceptual ways: market prices are generally objective, single-dimensional, and unambiguous. They put a well-defined dollar value on anything that is traded in the market. One of the reasons is that markets were created by humans specifically for the purpose of efficiently exchanging resources. Each person's ethical values, by contrast, are subjective, multi-faceted, and at times implicit, making them more ambiguous and difficult to compare. One of the reasons for this is that the ultimate arbiters of our ethical values are neural networks: our ethical values have been encoded in the deep neural networks that constitute our brains by the processes of nature and nurture, i.e. by biological and cultural evolution, and by our experiences and decisions that have shaped our lives. It is famously difficult to capture in general rules how complex deep neural networks arrive at decisions, yet in describing our ethical values we need to do precisely that: we need to describe in general rules how our brains decide what is ethical. Combining the ethical values of different individuals to guide decisions for society as a whole adds yet another layer of complexity.

2.4 Why Economic Value All Too Often Prevails Over Ethical Values

If we care about integrating ethical values into economic decisions, it is concerning that economic forces frequently seem to prevail over ethical values in today's world, and it is important to understand why. Without providing an exhaustive list, let me describe several factors that tilt the playing field towards economic value. First, the conflict between market value and ethical values typically reflects the broader tradeoff between personal benefit and societal benefits. Humans are pro-social, but only up to a point: our pro-social instincts have evolved mainly to benefit the small tribe of people around us, not humanity at large. For example, people who hesitate to pollute their neighbor's backyard frequently have fewer hesitations about contributing to global warming that hurts humanity as a whole; they apply lower ethical standards to externalities that affect larger groups and instead listen more to market signals. As a result, the trade-offs between personal and societal benefits that humans have evolved to make instinctively may not be a good guide for ethical decisions that have broader societal repercussions. This is a significant problem in the context of new technologies that affect humanity as a whole.

2.5 Dealing with Discrepancies of Value

The vast majority of ethicists, of economists, and of society at large agree that the market should not win out when market values and ethical values conflict. Within economics, for example, an entire subfield called welfare economics describes policy tools that can be used to deal with situations in which the market does not value things the same way as society. Economists frequently use the term externalities for discrepancies between social values and market value. Classic examples of such externalities include pollution or congestion, where the market does not correctly value the cost to society of limited resources like nature or road space. Examples of positive externalities include spillovers from technological progress, where the market does not correctly internalize that one person's ideas and inventions also benefit others. If individuals behave in a purely self-interested fashion and do not account for the externalities that they create (as homo oeconomicus is postulated to do in most economic models), then there will be too many activities generating negative externalities and too few generating positive externalities. The problem would be resolved if individuals simply followed society's ethical values instead of the value assigned by the market.

2.6 Progress in AI Creating Externalities

This section moves beyond questions of income distribution and focuses on other dimensions in which market value does not adequately reflect the ethical values of our society. It is useful to distinguish two categories of such externalities arising from progress in AI: first, discrepancies in value that are newly introduced by AI; and second, existing market imperfections and externalities that are inherent in any economic system but are exacerbated by the economic disruptions generated by AI.

2.7 Novel Ethical Problems and Externalities Introduced by AI

The rise of artificial intelligence opens up many new areas in which conflicts between market value and ethical values arise, so that, along the way, new externalities are introduced. A number of the resulting ethical dilemmas are the subjects of individual chapters in the Oxford Handbook of Ethics of AI, for which this article was prepared (see Das et al., 2019). A common theme in many of these dilemmas is that the technological innovations involved look as if they create value in terms of economic profits, but they actually drain our broader societal values and do damage from an ethical perspective. In some instances, they even do more social harm than the private value they create. A tangible example (from the days before AI) would be a factory that produces a valuable output but pollutes so much that the social cost of the pollution exceeds the market value of its output.
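
To make the arithmetic of such an externality concrete (with purely illustrative numbers): if the factory's output sells for $10 million but its pollution imposes $12 million of health and environmental costs on others, the market records $10 million of value created, while society as a whole is left $2 million worse off.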

3. Automating Origination:
Automation is the application of machines to tasks once performed by human beings or, increasingly, to tasks that would otherwise be impossible. Although the term mechanization is often used to refer to the simple replacement of human labour by machines, automation generally implies the integration of machines into a self-governing system. Automation has revolutionized those areas in which it has been introduced, and there is scarcely an aspect of modern life that has been unaffected by it.

The term automation was coined in the automobile industry about 1946 to describe the increased use of automatic devices and controls in mechanized production lines. The origin of the word is attributed to D.S. Harder, an engineering manager at the Ford Motor Company at the time. The term is used widely in a manufacturing context, but it is also applied outside manufacturing in connection with a variety of systems in which there is a significant substitution of mechanical, electrical, or computerized action for human effort and intelligence.

In general usage, automation can be defined as a technology concerned with performing a process by means of programmed commands combined with automatic feedback control to ensure proper execution of the instructions. The resulting system is capable of operating without human intervention. The development of this technology has become increasingly dependent on the use of computers and computer-related technologies. Consequently, automated systems have become increasingly sophisticated and complex. Advanced systems represent a level of capability and performance that in many ways surpasses the ability of humans to accomplish the same activities.

Automation technology has matured to a point where a number of other technologies have developed from it and have achieved a recognition and status of their own. Robotics is one of these technologies; it is a specialized branch of automation in which the automated machine possesses certain anthropomorphic, or humanlike, characteristics. The most typical humanlike characteristic of a modern industrial robot is its powered mechanical arm. The robot's arm can be programmed to move through a sequence of motions to perform useful tasks, such as loading and unloading parts at a production machine or making a sequence of spot welds on the sheet-metal parts of an automobile body during assembly. As these examples suggest, industrial robots are typically used to replace human workers in factory operations.

This article covers the fundamentals of automation, including its historical development, principles and theory of operation, applications in manufacturing and in some of the services and industries important in daily life, and its impact on the individual as well as society in general. The article also reviews the development and technology of robotics as a significant topic within automation. For related topics, see computer science and information processing.

3.1 Historical development of automation

The technology of automation has evolved from the related field of mechanization, which had its beginnings in the Industrial Revolution. Mechanization refers to the replacement of human (or animal) power with mechanical power of some form. The driving force behind mechanization has been humankind's propensity to create tools and mechanical devices. Some of the important historical developments in mechanization and automation leading to modern automated systems are described here.

3.2 Early developments

The first tools made of stone represented prehistoric man's attempts to direct his own physical strength under the control of human intelligence. Thousands of years were undoubtedly required for the development of simple mechanical devices and machines such as the wheel, the lever, and the pulley, by which the power of human muscle could be magnified. The next extension was the development of powered machines that did not require human strength to operate. Examples of these machines include waterwheels, windmills, and simple steam-driven devices. More than 2,000 years ago the Chinese developed trip-hammers powered by flowing water and waterwheels. The early Greeks experimented with simple reaction motors powered by steam. The mechanical clock, representing a rather complex assembly with its own built-in power source (a weight), was developed about 1335 in Europe. Windmills, with mechanisms for automatically turning the sails, were developed during the Middle Ages in Europe and the Middle East. The steam engine represented a major advance in the development of powered machines and marked the beginning of the Industrial Revolution. During the two centuries since the introduction of the Watt steam engine, powered engines and machines have been devised that obtain their energy from steam, electricity, and chemical, mechanical, and nuclear sources.

Each new development in the history of powered machines has brought with it an increased requirement for control devices to harness the power of the machine. The earliest steam engines required a person to open and close the valves, first to admit steam into the piston chamber and then to exhaust it. Later a slide-valve mechanism was devised to accomplish these functions automatically. The only need of the human operator was then to regulate the amount of steam that controlled the engine's speed and power. This requirement for human attention in the operation of the steam engine was eliminated by the flying-ball governor. Invented by James Watt in England, this device consisted of a weighted ball on a hinged arm, mechanically coupled to the output shaft of the engine. As the rotational speed of the shaft increased, centrifugal force caused the weighted ball to be moved outward. This motion controlled a valve that reduced the steam being fed to the engine, thus slowing the engine. The flying-ball governor remains an elegant early example of a negative feedback control system, in which the increasing output of the system is used to decrease the activity of the system.

Negative feedback is widely used as a means of automatic control to achieve a constant operating level for a system. A common example of a feedback control system is the thermostat used in modern buildings to control room temperature. In this device, a decrease in room temperature causes an electrical switch to close, thus turning on the heating unit. As room temperature rises, the switch opens and the heat supply is turned off. The thermostat can be set to turn on the heating unit at any particular set point.

Another important development in the history of automation was the Jacquard loom, which demonstrated the concept of a programmable machine. About 1801 the French inventor Joseph-Marie Jacquard devised an automatic loom capable of producing complex patterns in textiles by controlling the motions of many shuttles of different coloured threads. The selection of the different patterns was determined by a program contained in steel cards in which holes were punched. These cards were the ancestors of the paper cards and tapes that control modern automatic machines. The concept of programming a machine was further developed later in the 19th century when Charles Babbage, an English mathematician, proposed a complex, mechanical "analytical engine" that could perform arithmetic and data processing. Although Babbage was never able to complete it, this device was the precursor of the modern digital computer. See computers.

3.3 Modern developments


A number of significant developments in various fields
have occurred during the 20th century: the digital computer,
improvements in data-storage technology and software to write
computer programs, advances in sensor technology, and the
derivation of a mathematical control theory. All these
developments have contributed to progress in automation
technology.

Development of the electronic digital computer (the ENIAC [Electronic Numerical Integrator and Computer] in 1946 and UNIVAC I [Universal Automatic Computer] in 1951) has permitted the control function in automation to become much more sophisticated and the associated calculations to be executed much faster than previously possible. The development of integrated circuits in the 1960s propelled a trend toward miniaturization in computer technology that has led to machines that are much smaller and less expensive than their predecessors yet are capable of performing calculations at much greater speeds. This trend is represented today by the microprocessor, a miniature multicircuited device capable of performing all the logic and arithmetic functions of a large digital computer.

Along with the advances in computer technology, there have been parallel improvements in program storage technology for containing the programming commands. Modern storage media include magnetic tapes and disks, magnetic bubble memories, optical data storage read by lasers, videodisks, and electron-beam-addressable memory systems. In addition, improvements have been made in the methods of programming computers (and other programmable machines). Modern programming languages are easier to use and are more powerful in their data-processing and logic capabilities.

Advances in sensor technology have provided a vast array of measuring devices that can be used as components in automatic feedback control systems. These devices include highly sensitive electromechanical probes, scanning laser beams, electrical field techniques, and machine vision. Some of these sensor systems require computer technology for their implementation. Machine vision, for example, requires the processing of enormous amounts of data that can be accomplished only by high-speed digital computers. This technology is proving to be a versatile sensory capability for various industrial tasks, such as part identification, quality inspection, and robot guidance.

Finally, there has evolved since World War II a highly advanced mathematical theory of control systems. The theory includes traditional negative feedback control, optimal control, adaptive control, and artificial intelligence. Traditional feedback control theory makes use of linear ordinary differential equations to analyze problems, as in Watt's flying-ball governor. Although most processes are more complex than the flying-ball governor, they still obey the same laws of physics that are described by differential equations. Optimal control theory and adaptive control theory are concerned with the problem of defining an appropriate index of performance for the process of interest and then operating it in such a manner as to optimize its performance. The difference between optimal and adaptive control is that the latter must be implemented under conditions of a continuously changing and unpredictable environment; it therefore requires sensor measurements of the environment to implement the control strategy.

Artificial intelligence is an advanced field of computer science in which the computer is programmed to exhibit characteristics commonly associated with human intelligence. These characteristics include the capacity for learning, understanding language, reasoning, solving problems, rendering expert diagnoses, and similar mental capabilities. Developments in artificial intelligence are expected to provide robots and other "intelligent" machines with the ability to communicate with humans and to accept very high-level instructions rather than the detailed step-by-step programming statements typically required of today's programmable machines. For example, a robot of the future endowed with artificial intelligence might be capable of accepting and executing the command "assemble the product." Present-day industrial robots must be provided with a detailed set of instructions specifying the locations of the product's components, the order in which they are to be assembled, and so forth.

3.4 Principles and Theory of Automation

The developments described above have provided the three basic building blocks of automation: (1) a source of power to perform some action, (2) feedback controls, and (3) machine programming. Almost without exception, an automated system will exhibit all these elements.

Power source

An automated system is designed to accomplish some useful action, and that action requires power. There are many sources of power available, but the most commonly used power in today's automated systems is electricity. Electrical power is the most versatile, because it can be readily generated from other sources (e.g., fossil fuel, hydroelectric, solar, and nuclear) and it can be readily converted into other types of power (e.g., mechanical, hydraulic, and pneumatic) to perform useful work. In addition, electrical energy can be stored in high-performance, long-life batteries.

The actions performed by automated systems are generally of two types: (1) processing and (2) transfer and positioning. In the first case, energy is applied to accomplish some processing operation on some entity. The process may involve the shaping of metal, the molding of plastic, the switching of electrical signals in a communication system, or the processing of data in a computerized information system. All these actions entail the use of energy to transform the entity (e.g., the metal, plastic, electrical signals, or data) from one state or condition into another, more valuable state or condition. The second type of action, transfer and positioning, is most readily seen in automated manufacturing systems designed to perform work on a product. In these cases, the product must generally be moved (transferred) from one location to another during the series of processing steps. At each processing location, accurate positioning of the product is generally required. In automated communications and information systems, the terms transfer and positioning refer to the movement of data (or electrical signals) among various processing units and the delivery of information to output terminals (printers, video display units, etc.) for interpretation and use by humans.

Feedback controls

[Figure 1: Feedback control system]

Feedback controls are widely used in modern automated systems. A feedback control system consists of five basic components: (1) input, (2) the process being controlled, (3) output, (4) sensing elements, and (5) controller and actuating devices. These five components are illustrated in Figure 1. The term closed-loop feedback control is often used to describe this kind of system.

The input to the system is the reference value, or set point, for
the system output. This represents the desired operating value
of the output. Using the previous example of
the heating system as an illustration, the input is the desired
temperature setting for a room. The process being controlled is
the heater (e.g., furnace). In other feedback systems, the
process might be a manufacturing operation, the rocket
engines on a space shuttle, the automobile engine in cruise
control, or any of a variety of other processes to which power is
applied. The output is the variable of the process that is being
measured and compared to the input; in the above example, it
is room temperature.

The sensing elements are the measuring devices used in the feedback loop to monitor the value of the output variable. In the heating system example, this function is normally accomplished using a bimetallic strip. This device consists of two metal strips joined along their lengths. The two metals possess different thermal expansion coefficients; thus, when the temperature of the strip is raised, it flexes in direct proportion to the temperature change. As such, the bimetallic strip is capable of measuring temperature. There are many different kinds of sensors used in feedback control systems for automation.

The purpose of the controller and actuating devices in the feedback system is to compare the measured output value with the reference input value and to reduce the difference between them. In general, the controller and actuator of the system are the mechanisms by which changes in the process are accomplished to influence the output variable. These mechanisms are usually designed specifically for the system and consist of devices such as motors, valves, solenoid switches, piston cylinders, gears, power screws, pulley systems, chain drives, and other mechanical and electrical components.
The switch connected to the bimetallic strip of the thermostat is the controller and actuating device for the heating system. When the output (room temperature) is below the set point, the switch turns on the heater. When the temperature exceeds the set point, the heat is turned off.
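
As a minimal sketch of this closed loop, the following Python simulation wires the five components together for the room-heating example; the temperatures, heater strength, and heat-loss rate are illustrative assumptions, not a physical model.

set_point = 21.0      # (1) input: desired room temperature in deg C
room_temp = 17.0      # (3) output: the process variable being measured

for minute in range(30):
    measured = room_temp                 # (4) sensing element reads the output
    heater_on = measured < set_point     # (5) controller compares and decides
    if heater_on:
        room_temp += 0.6                 # (2) process: the heater warms the room
    room_temp -= 0.2                     # heat continuously lost to the outside
    state = "on" if heater_on else "off"
    print(f"t={minute:2d} min  temp={room_temp:4.1f} C  heater={state}")

Because the controller acts against the deviation (heat is added only while the measured temperature is below the set point), the simulated room settles near 21 degrees instead of running away; this is exactly the negative feedback described above.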

4. AI: A Binary Approach

A binary approach to Artificial Intelligence (AI) examines the concept of AI through a dualistic lens, exploring clear opposites or choices in its design, implementation, and ethical considerations. This approach often highlights contrasting elements or decisions that shape AI systems, emphasizing trade-offs, binary categorizations, and ethical dilemmas. Here's an analysis of AI through a binary perspective:

4.1. Binary Approach: The Concept

A binary approach simplifies complex AI discussions into two opposing or complementary perspectives, such as:

• Human vs. Machine
• Ethical vs. Unethical
• Controlled vs. Autonomous
• Inclusive vs. Exclusive
• Transparent vs. Opaque

While reality often exists on a spectrum, a binary framework can help clarify choices and their consequences in AI design and deployment.

4.2. Binary Oppositions in AI Development

a. Rule-Based vs. Learning-Based AI

• Rule-Based AI: Operates using pre-defined rules, offering predictability and transparency.
• Learning-Based AI: Relies on machine learning to adapt and improve, potentially introducing unpredictability (a toy contrast between the two appears in the sketch after this list).

b. Centralized vs. Decentralized Systems

• Centralized AI: Data and control are managed by a single entity, which can optimize efficiency but raises privacy concerns.
• Decentralized AI: Distributed control promotes resilience and privacy but may sacrifice coordination.

c. Open Source vs. Proprietary AI

• Open Source: Enhances transparency and collaboration but may risk misuse.
• Proprietary AI: Protects intellectual property but can limit scrutiny and fairness.
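
The sketch promised under (a): a toy Python contrast between a rule-based filter and a learning-based one. The messages, rules, and scoring scheme are invented for illustration, not a production technique.

from collections import defaultdict

# Rule-based AI: behaviour fixed by hand-written rules (predictable, transparent).
def rule_based_spam_filter(message: str) -> bool:
    banned_phrases = ["free money", "act now", "winner"]
    return any(phrase in message.lower() for phrase in banned_phrases)

# Learning-based AI: behaviour induced from labelled examples (adaptive,
# but dependent on whatever the training data happened to contain).
def train_word_scores(examples):
    counts = defaultdict(lambda: [0, 0])  # word -> [spam count, ham count]
    for text, is_spam in examples:
        for word in set(text.lower().split()):
            counts[word][0 if is_spam else 1] += 1
    return {w: spam - ham for w, (spam, ham) in counts.items()}

def learned_spam_filter(message: str, scores) -> bool:
    return sum(scores.get(w, 0) for w in message.lower().split()) > 0

training_data = [
    ("free money inside act now", True),
    ("winner winner claim prize", True),
    ("meeting notes for monday", False),
    ("lunch at noon tomorrow", False),
]
scores = train_word_scores(training_data)
msg = "claim your free money now"
print("rule-based verdict:", rule_based_spam_filter(msg))      # follows fixed rules
print("learned verdict:   ", learned_spam_filter(msg, scores)) # depends on the data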

4.3. Binary Ethical Dilemmas

a. Autonomy vs. Control

• Should AI systems operate independently, or should humans retain strict oversight?
  o Example: Self-driving cars may need autonomy for quick decision-making but raise concerns about relinquishing control.

b. Fairness vs. Performance

• Should AI prioritize fairness even if it compromises performance?
  o Example: Adjusting algorithms to ensure equitable outcomes might reduce overall efficiency.

c. Innovation vs. Regulation

• Should AI development proceed unhindered, or should it be regulated to minimize risks?
  o Example: Rapid innovation in generative AI can lead to breakthroughs but also ethical challenges like misinformation.

4.4. Binary Impact of AI on Society

a. Opportunity vs. Threat

• Opportunity: AI can drive economic growth, improve healthcare, and enhance efficiency.
• Threat: It can also displace jobs, widen inequality, and perpetuate biases.

b. Empowerment vs. Exploitation

• Empowerment: AI can democratize access to knowledge and tools.
• Exploitation: It can also manipulate behavior or infringe on privacy.

c. Inclusivity vs. Exclusivity

• Inclusivity: AI designed to benefit all demographic groups.
• Exclusivity: Systems that favor specific groups can deepen societal divides.

4.5. Binary Approach in AI Design

a. Explainable AI vs. Black-Box Models

• Explainable AI: Prioritizes transparency and interpretability.
• Black-Box Models: Focus on performance at the expense of explainability.

b. Pre-emptive Ethics vs. Reactive Ethics

• Pre-emptive Ethics: Embedding ethical considerations from the start.
• Reactive Ethics: Addressing ethical concerns after problems arise.

c. General AI vs. Narrow AI

• General AI: Strives for human-like intelligence across tasks.
• Narrow AI: Focuses on excelling in specific, well-defined tasks.

4.6. Advantages of a Binary Approach

• Clarity: Simplifies complex issues, making them more accessible.
• Decision-Making: Highlights clear trade-offs, aiding strategic choices.
• Ethical Awareness: Encourages explicit consideration of opposing ethical principles.

4.7. Limitations of a Binary Approach

• Oversimplification: Ignores the nuances and spectrums that characterize real-world scenarios.
• False Dichotomies: May create artificial divisions where coexistence or compromise is possible.
• Context Dependency: Some decisions are highly context-specific and cannot be reduced to binary choices.

4.8. A Practical Example of Binary Choices in AI

Consider the development of facial recognition technology:

• Public Safety vs. Privacy: Use facial recognition to enhance security, but risk violating individual privacy.
• Regulation vs. Free Market: Regulate its use to prevent misuse, or allow market-driven innovation.

4.9. Reconciling the Binary Approach with Real-World Complexities

While a binary framework is useful for highlighting core tensions, combining it with a spectrum-based or multi-dimensional analysis provides a more holistic understanding:

• Use binary oppositions as starting points to identify key dilemmas.
• Expand the discussion to include intermediate solutions, compromises, or hybrid models.

5. Machine Programming
The programmed instructions determine the set of actions that
is to be accomplished automatically by the system. The
program specifies what the automated system should do and
how its various components must function in order to
accomplish the desired result. The content of the program
varies considerably from one system to the next. In relatively
simple systems, the program consists of a limited number of
well-defined actions that are performed continuously and
repeatedly in the proper sequence with no deviation from one
cycle to the next. In more complex systems, the number of
commands could be quite large, and the level of detail in each
command could be significantly greater. In
relatively sophisticated systems, the program provides for the
sequence of actions to be altered in response to variations in
raw materials or other operating conditions.

Program control and feedback control

Programming commands are related to feedback control in an automated system in that the program establishes the sequence of values for the inputs (set points) of the various feedback control loops that make up the automated system. A given programming command may specify the set point for a feedback loop, which in turn controls some action that the system is to accomplish. In effect, the purpose of the feedback loop is to verify that the programmed step has been carried out. For example, in a robot controller, the program might specify that the arm is to move to a designated position, and the feedback control system is used to verify that the move has been correctly made.
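
A hedged Python sketch of this relationship: the stored program supplies successive set points, and a feedback loop drives a highly simplified arm toward each one and verifies the move. The positions, gain, and tolerance are illustrative assumptions.

program = [10.0, 25.0, 40.0]  # programmed sequence of arm positions (set points)
position = 0.0                # arm position as reported by the sensing element
TOLERANCE = 0.1               # how close counts as "programmed step carried out"

for set_point in program:                          # the program sets each input
    while abs(set_point - position) > TOLERANCE:   # feedback loop verifies the move
        error = set_point - position               # compare output with set point
        position += 0.5 * error                    # actuator reduces the difference
    print(f"set point {set_point} reached (position {position:.2f})")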

5.1. Machine learning values

A value chain describes the sequence of activities through which companies add value to a product, from start to finish. The value chain for traditional industries is rather straightforward. For your local bakery around the corner, selling fresh buns is the last step of a long sequence of activities: procurement of raw materials, inbound logistics, storage, baking, sales, and maybe even distribution.

But what are the different value-adding activities of Machine Learning? Contrary to the beliefs of many, the actual algorithm programming comprises only a minor part of the ML value chain. There are other value-adding steps both before and after ML programming takes place.

The ML value chain consists of six major steps:

• Problem definition
• Data collection
• Data storage
• Data preparation
• Algorithm programming
• Application development

5.1.1 Problem definition


Machine Learning can be a valuable tool to solve a multitude of tasks. However, clearly understanding the problem, defining goals, and outlining a plan of action are not trivial.

Before you start thinking about how you will implement an ML solution, you need to clearly define the business objectives you want to reach with it. Set milestones along the way against which performance can be measured.

Make sure you understand the current solution and at which exact steps ML could provide a benefit. Think about the people involved. How will they interact with the new solution?

5.1.2 Data collection


Data collection is about gathering raw data. It is an important step, as Machine Learning usually requires huge amounts of data. This is especially true for Deep Learning, a subfield of Machine Learning. Normally, Deep Learning algorithms need thousands or even millions of data points to learn (read more about the differences between Deep Learning and Machine Learning).

Hence, data collection is a relevant value-adding activity.

5.1.3 Data storage


Once the data is collected, it needs a secure place for storage: data storage is the process of compiling raw data in data centers. Given the massive amount of data involved in Machine Learning, data storage is an integral part of the whole value chain.

In the early ML days, most companies used to store data on their own servers – not ideal, to put it mildly. With the rise of cloud technology, however, data can be stored and accessed at high speed and low cost, and some tools – like Levity – offer storage as an integral part of the product at no extra charge.

5.1.4 Data preparation


Still, masses of raw data are worth nothing on their own. Raw data is oftentimes inconsistent, incomplete, and unstructured (read more about the challenges of dealing with unstructured data). Most Machine Learning models are not able to work with these data flaws.

This is where data preparation comes into play. It describes all efforts that make data utilizable for ML algorithms. This could include data conversion, cleaning, enhancement, formatting, and labeling, as illustrated by the short sketch after the list below.

• Data conversion: the conversion of data from one format to another, most often to make it readable for a specific computer program.
• Data cleaning: correcting inaccurate or incomplete data as well as removing any irrelevant data.
• Data enhancement: adding information to data by matching it against an existing database, allowing desired missing data fields to be added (e.g. your company's customer data enhanced by information from a public business database).
• Data formatting: the organization of information according to specifications. Think, for instance, of ZIP codes in a spreadsheet column. Without data formatting, the AI might falsely interpret the ZIP codes as large numbers.
• Data labeling: tagging data with one or more labels, e.g. dog pictures with the label "dog". This step is crucial since (supervised) ML models need input and related output to learn. Many of today's ML applications are built upon data that is labeled by human labelers who regularly intervene to improve model performance. This concept is called Human in the Loop.
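
A small pandas sketch of the cleaning, formatting, and labeling steps above; the column names, toy records, and rules are illustrative assumptions.

import pandas as pd

raw = pd.DataFrame({
    "zip_code": [94103, 2139, None],   # stored as numbers, not strings
    "label": ["dog", "DOG ", None],    # inconsistent, partly missing tags
})

prepared = raw.dropna().copy()                                 # cleaning: drop incomplete rows
prepared["zip_code"] = (
    prepared["zip_code"].astype(int).astype(str).str.zfill(5)  # formatting: ZIP as 5-char text
)
prepared["label"] = prepared["label"].str.strip().str.lower()  # labeling: normalise tags
print(prepared)  # two tidy rows: 94103/dog and 02139/dog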

5.1.5 Algorithm programming


With prepared data at hand, software engineers can finally devote themselves to programming the algorithm. In Machine Learning, algorithms perform tasks without being explicitly programmed. While the ML code might be perceived as the step where the "magic" happens, it is only one of several activities in the ML value chain.

Once the algorithm is developed, the model needs to be trained on data. There are three broad categories of how algorithms can be trained: you might have heard of supervised learning, unsupervised learning, and reinforcement learning. But the ML value chain is not finished here.
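
As a minimal sketch of the supervised case, assuming scikit-learn is available; the toy features, labels, and choice of logistic regression are illustrative assumptions.

from sklearn.linear_model import LogisticRegression

X = [[0.2, 1.0], [0.4, 0.8], [3.1, 0.1], [2.8, 0.3]]  # prepared input features
y = [0, 0, 1, 1]                                      # labels from data preparation

model = LogisticRegression()
model.fit(X, y)                     # training: the model learns from labelled pairs
print(model.predict([[3.0, 0.2]]))  # -> [1]: the learned mapping applied to new data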

Application development

Just because the baker has taken freshly baked buns out of the oven does not mean he or she is finished. Similarly, the ML code itself is not the end of the value chain, no matter how good it is. What comes next is application development.

Application development is the process of turning the ML model into a commercially viable product. The code comes to life. In this step, software engineers and business people work hand in hand. Great raw data and high-quality code are worth nothing if there isn't a use case for them.

Who serves which part of the value chain?

From a management perspective, understanding what the ML value chain looks like is not enough. Business executives should also know who delivers the value at each step.

There are highly specialized companies, each serving a specific activity of the ML value chain. By focusing on their core competencies, these companies can provide best-in-class service in a particular area. Also, companies can configure their suppliers in a way that suits them best.

The ML value chain consists of specialized players and end-to-end solutions

To make it more practical: imagine you want to build up an initial training database of labeled animal images, i.e. all dog pictures are tagged as "dog" and cat pictures as "cat". In terms of the ML value chain, this would translate to data collection and data preparation. A set of images could be obtained by using a web scraping tool. Next, you could hire a data labeling service. This might be a company with access to a large pool of workers or a platform allowing you to do the work yourself.

The advantage is obvious: these companies excel at what they are doing. However, this comes with drawbacks, too. The process of collecting and labeling data from the example above already has two companies involved, and it is still only a small part of the value chain. This adds complexity and inefficiencies.

Some companies promise to solve these problems by offering an end-to-end solution. Visually, this translates to a vertical representation, as shown in the graphic above. In simple terms, those companies cover the whole ML value chain as a complete functional solution.

What does this look like in practice? We at Levity, for example, are an end-to-end solution.

• Data collection & storage: Ok, we don't collect your data ourselves. But: we use pre-trained models that are then tweaked according to your data. This concept is called Transfer Learning. It reduces data needs dramatically, from millions to hundreds. Also, we created a free Dataset Builder, so you can quickly build datasets with Google images.
• Data preparation: We help you prepare the data, mostly with classification. When dealing, for instance, with image classification, our software allows you to label your pictures with the corresponding classes. You can do so on our platform or with our Slack integration, which sends your employees images for labeling within the Slack environment.
• Algorithm training: This is where the magic happens. The good news: Levity provides a no-code solution. You can train your algorithms without a single line of code.
• Application development: We also help you apply what you have built: we are aware that this step ultimately drives the value. For instance, you can use Levity's ML models and integrations to automate processes and boost productivity. Your business might be unique, but many of your activities aren't. Our templates speed things up even more.

End-to-end solutions offer the benefit of speed and simplicity. You can easily build an application from beginning to end and see whether it drives value.

Many traditional value chains don't have end-to-end solutions that would enable such a procedure. As you hopefully understand at this point, Machine Learning doesn't have these limitations.

6. Artificial Moral Agents

Artificial Moral Agents (AMAs) is a field in computer science with the purpose of creating autonomous machines that can make moral decisions akin to how humans do. Researchers have proposed theoretical means of creating such machines, while philosophers have made arguments as to how these machines ought to behave, or whether they should even exist. Of the currently theorised AMAs, all research and design has been done with either no, or at most one, specified normative ethical theory as a basis. This is problematic because it narrows down the AMA's functional ability and versatility, which in turn causes moral outcomes that only a limited number of people agree with (thereby undermining an AMA's ability to be moral in a human sense). As a solution, we design a three-layer model for general normative ethical theories that can be used to serialise the ethical views of people and businesses for an AMA to use during reasoning. Four specific ethical norms (Kantianism, divine command theory, utilitarianism, and egoism) were modelled and evaluated as proof of concept for normative modelling. Furthermore, all models were serialised to XML/XSD as proof of support for computerisation.
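
To give a flavour of such serialisation, here is a hedged Python sketch that emits a norm as XML; the element and attribute names are our own illustrative assumptions, not the paper's actual XSD schema.

import xml.etree.ElementTree as ET

# Build a toy XML representation of one normative theory.
norm = ET.Element("NormativeTheory", name="utilitarianism", kind="consequentialist")
principle = ET.SubElement(norm, "Principle")
principle.text = "An action is right if it increases overall happiness."
ET.SubElement(norm, "Evaluation", basis="situational-outcomes")

print(ET.tostring(norm, encoding="unicode"))  # serialised form an AMA could load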

Four common moral theories summarised

We describe some popular ethical theories that are frequently mentioned in moral philosophy papers and that are important to comprehend in order to understand the related works on AMAs, as well as for the purposes of our model designs.

Utilitarianism

An action is right if, out of its alternatives, it is the one that increases overall happiness at the smallest emotional cost. The theory as originally proposed by Jeremy Bentham places human happiness at the centre of morality. There are, however, many critiques based on the vagueness of "happiness", to which utilitarians respond by redefining it as preference-satisfaction, that is, making people's desires reality. This theory essentially makes it an agent's imperative to increase happiness for all.
Egoism

An action is right if its ends are in the agent's self-interest; that is, it increases the agent's preference-satisfaction. This functions the same as utilitarianism but considers exclusively the agent's happiness and nobody else's. This may seem intuitively immoral, but proponents argue that if one takes an objective approach to what is in one's self-interest, one will not harm others (because of societal punishment) and will do good to humanity (for, e.g., societal praise).
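Under the same toy encoding, egoism changes only the aggregation: just the agent's own entry counts. Again, the names and numbers below are illustrative assumptions.

def egoist_choice(outcomes, agent):
    # outcomes: action name -> dict mapping person -> welfare change;
    # only the agent's own welfare is considered.
    return max(outcomes, key=lambda action: outcomes[action].get(agent, 0))

choice = egoist_choice({
    "tell the truth": {"me": -1, "others": +3},
    "stay silent":    {"me": +2, "others": -3},
}, agent="me")
print(choice)  # -> "stay silent": only the agent's own column matters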
Hedonism

An action is right if it increases pleasure and decreases pain. This theory takes a similar approach to utilitarianism, but does not attempt to assign morality to any kind of intuitive true fulfilment. Rather, it proposes that an agent focus all their energy on maximising somatic pleasure and chasing evolutionary dopamine rewards (e.g., eating, mating, and fighting). This is typically the sort of life most animals lead, but hedonists argue it can be fit for humans too.
Divine command theory

An action is right if God wills it so. Divine command theorists typically hold that God plays a three-fold role in ethics: epistemic (God provides knowledge of morality), ontological (God gives meaning to morality, or morality cannot be defined otherwise), and prudential (God motivates humans to be moral by offering eternal rewards to those who are and eternal punishment to those who are not). In this paper we use the Christian version of DCT, which is based on the Bible's ten commandments. There are some variations within Christianity which may affect what or who is relevant for moral consideration; we take an inclusive approach henceforth.
Kantianism

An action is right if it is universally willable and it respects the rational autonomy of others. Immanuel Kant proposes a very complex theory of morality that involves two main formulations of what he calls the categorical imperative (universal duties): the formulation of universal law and the formulation of humanity. The former says that one should imagine what a world would be like where an action is law, and then extrapolate from such a state to determine the action's morality (if the world is unlivable, the action is wrong). The latter says that one should always treat other people as ends in themselves, never as mere means to an end. Both formulations strive to cultivate respect for other people's rational autonomy (that is, never to inhibit another's ability to make a free decision). Many criticise Kantianism and other deontological theories for being consequentialism in disguise (how does one know whether a world is livable except by checking its consequences?), but the key difference is that where deontology uses consequences, it uses them to judge actions by their universal effects. That is, one builds a theoretical world where a certain condition applies to all agents and evaluates what that world would be like. This is why deontological theories implemented the way Kant would want are very computationally expensive. There are also deontological theories that judge actions by simple rules with no human justification; e.g., Christian DCT employs the ten commandments as moral principles, with justification limited to their being God's word. Consequentialism, on the other hand, focuses on the real-world outcomes of the specific action being evaluated. In summary, as far as consequences are concerned, consequentialism leverages situational outcomes and deontology leverages universalised outcomes. Sometimes an action's morality will be judged the same by both kinds of theories, but the justifications will always differ. There are many different ethical theories, but in this paper we consider only the two most popular of each type to show that the model is sufficiently well-defined: for consequentialist theories we use utilitarianism and egoism, and for deontological theories we use Kantianism and divine command theory.
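The situational-versus-universalised distinction can be made concrete with a toy sketch. The world model below (societal trust degrading with the number of liars) and its scoring are invented assumptions; only the shape of the two evaluations follows the discussion above.

def societal_trust(liars: int) -> float:
    # Toy world model: trust degrades as more agents lie.
    return 100.0 - 10.0 * liars

def consequentialist_permits_lying() -> bool:
    # Judge the single act by its situational outcome: one extra liar.
    return societal_trust(1) >= societal_trust(0)

def kantian_permits_lying(population: int = 1000) -> bool:
    # Judge by the universalised outcome: a world where lying is law.
    return societal_trust(population) > 0  # is that world livable?

print(consequentialist_permits_lying())  # False under this toy scoring
print(kantian_permits_lying())           # False: universal lying is unlivable

Here the two verdicts happen to coincide, but for different reasons: the consequentialist weighs one act's actual outcome, while the Kantian weighs the universalised world, which echoes the point that the justifications always differ.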
Some common moral dilemma cases

While there are several lists of interesting moral dilemma cases, we highlight three that are relevant for digital assistants or AMAs and that any software application should therefore ideally be able to handle.

Case 1: The trolley problem

In an applied ethics paper on abortion, Philippa Foot introduced an ethical dilemma thought experiment known as the trolley problem [9]. It has since gained a lot of traction and is notoriously controversial in moral philosophy. This problem is included as a use case to show usefulness in well-known moral dilemma scenarios that are not AMA-relevant by nature, but that an AMA can find itself in nonetheless. An adapted version of the trolley problem follows. A train's braking system fails, and it hurtles at full speed towards an unwitting group of five people standing on the tracks. The automatic train track management system detects this and has the opportunity to switch tracks ahead of the train's collision with the group. However, doing so will direct the runaway train onto a track which is currently undergoing maintenance by a worker, who would be in harm's way. Given that there is no time to warn any of the involved people, the AMA must choose between switching tracks and killing the worker (T1), or abstaining and letting the five people die (T2). The AMA is configured to reason on behalf of the train transportation company. Let us try to systematise this in the light of the four ethical theories that we concentrate on in this technical report, as a step toward assessing what properties may need to be stored for automating this in an AMA.
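As a first step toward that systematisation, the sketch below encodes the two options and queries a utilitarian and a divine-command verdict. The encoding, in particular whether switching counts as violating "thou shalt not kill", is an illustrative assumption and itself philosophically contested.

trolley = {
    "T1: switch tracks": {"deaths": 1, "violations": ["thou shalt not kill"]},
    "T2: abstain":       {"deaths": 5, "violations": []},
}

# Utilitarian verdict: minimise the total loss of welfare (deaths here).
utilitarian_verdict = min(trolley, key=lambda a: trolley[a]["deaths"])

# Divine-command verdict: any action breaking a commandment is forbidden.
dct_permitted = [a for a in trolley if not trolley[a]["violations"]]

print(utilitarian_verdict)  # -> "T1: switch tracks" (fewer deaths)
print(dct_permitted)        # -> ["T2: abstain"] under this encoding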
Frameworks for reasoning with ethics

Recently, Benzmüller et al. described a framework for moral reasoning through a logical proof assistant (a HOL reasoner). This framework works by taking multiple relevant Booleans as input and returning a Boolean with an explanation as a result. The example used in the paper involves a person, their data, and the GDPR. Essentially, given that person P is a European citizen and their data was processed unlawfully, the framework determines that a wrongdoing was committed and that the data ought to be erased. The example derives its result through internal rule-based deductions (such as under which conditions data should be erased). A related reasoning framework is described in [22], which is specifically law-oriented. It works by evaluating legal claims against a knowledge base of prior case facts and then reasoning to find discrepancies and thereby build legal arguments. In their current implementations, these two frameworks are not fully capable AMAs, because they are limited to contexts where easily evaluable rules are laid out beforehand. The frameworks would not, for instance, be able to decide whether a certain amount of harm outweighs the potential benefit of an outcome. To make these frameworks full AMAs, one would need to apply an ethical theory, and the most intuitive to implement would be deontology, because of its inherent compatibility as a duty-based system. A moral duty expressed as a computer-modelled rule is simpler to interpret than a utilitarian rule. Compare, e.g., "thou shalt not bear false witness" against "one ought to be truthful unless truthfulness will cause significant harm."
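That contrast in computational cost can be sketched directly. Below, the duty is a constant check, while the utilitarian rule cannot fire until the situation's consequences have been estimated; the harm threshold and function names are invented for illustration.

def duty_permits_lie() -> bool:
    # "Thou shalt not bear false witness": a constant, trivially checked rule.
    return False

def utilitarian_permits_lie(harm_from_truth: float, threshold: float = 10.0) -> bool:
    # "Truthful unless truthfulness causes significant harm": requires a
    # situational estimate of harm before the rule can be applied.
    return harm_from_truth > threshold

print(duty_permits_lie())             # False, always
print(utilitarian_permits_lie(25.0))  # True: the estimated harm is "significant"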
Modelling research

To the best of our knowledge, there are no proposals for modelling ethical theories in a general fashion, either in the field of computer science or in moral philosophy. There are, however, papers that strive to model highly philosophical content. Said compiled cross-disciplinary papers on topics related to modelling the human mind. Philosophers, psychologists, and cognitive scientists all contribute to the work and discuss topics like what the purpose of a model is, whether human thinking and intention can be simulated, and whether these mind components are compatible with the nature of a model. Said's work provides useful insights into the challenges of modelling non-trivial parts of the world, especially where there exists no consensus (like the nature of the mind and morality), but it does not deliver a fully fleshed-out or even approximate model that a computer can process. Similarly, the authors of [21] discuss multiple general methodologies for modelling entities in the context of computer science. Their discussion is highly theoretical and does not provide recommendations for how serialisation or visualisation should take place. Nonetheless, the work offers general modelling insights and does mention avenues by which one can model non-trivial subjects, like a neuron.
The three-layered framework and general ethical theory

We have identified three layers of genericity for normative ethical theories that need to be modelled: the notion of a theory in general, a base theory (e.g., deontology, utilitarianism), and a theory instance (i.e., a real person or entity's theory, including their personal nuances). Modelling the system as three different layers has the advantage of circumventing over-specificity. It is unclear to most people how much a person's theory can be altered before it no longer functions or serves the purpose of the base theory (following duties, maximising happiness, etc.). To address this issue we model a set of base theories and for each specify which components may be altered and to what extent. This allows users to pick out a theory whose purpose they agree with and alter it safely to fit their nuanced needs.
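One way to picture the three layers in code is as a small class hierarchy; the class and field names here are illustrative assumptions, not the paper's actual model.

class EthicalTheory:
    # Layer 1: the notion of a normative ethical theory in general.
    def __init__(self, instance_name: str):
        self.instance_name = instance_name  # identifies the layer-3 instance

class Utilitarianism(EthicalTheory):
    # Layer 2: a base theory fixes the purpose (maximise happiness) and
    # declares which components an instance may safely alter.
    alterable = {"happiness_measure"}

    def __init__(self, instance_name: str, happiness_measure: str):
        super().__init__(instance_name)
        self.happiness_measure = happiness_measure

# Layer 3: a concrete person's theory instance with their personal nuances.
p_theory = Utilitarianism("P's utilitarianism", "preference-satisfaction")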
Amoral components

Every ethical theory instance that a person can hold will be derived from some base theory, so all theory instances will be of some base theory. The amoral components in the theory are therefore named baseTheory and instanceName. To support the aforementioned advantages that come with the three-layer model, it is important for an instance to specify the model of every layer it is a part of (with the exception of the top layer). For example, a person P that follows utilitarianism will have their General Ethical Theory with baseTheory named "utilitarianism" and instanceName set to "P's utilitarianism".