
SUPPLEMENT

Dr. Sachin Gupta • Dr. Bhoomi Gupta

The CBSE, in its recently released Artificial Intelligence syllabus for 2025-26, has
announced certain changes in the Class X curriculum (Code 417). To apprise students
of the latest changes and provide them with the new content in printed form, we have
prepared this Supplement, which completes our book ‘Decoding Artificial Intelligence’
in all respects vis-à-vis the latest syllabus. We are offering this Supplement FREE OF COST
to all those using our book.
For a soft copy of this Supplement, write to us at [email protected].

Educational Publishers
Syllabus
ARTIFICIAL INTELLIGENCE (Code No. 417)
CLASS X (2025–26)
Total Marks: 100 (Theory 50 + Practical 50)

PART A: EMPLOYABILITY SKILLS (No. of hours: theory and practical | Max. marks)
Unit 1: Communication Skills–II | 10 hours | 2 marks
Unit 2: Self-Management Skills–II | 10 hours | 2 marks
Unit 3: ICT Skills–II | 10 hours | 2 marks
Unit 4: Entrepreneurial Skills–II | 10 hours | 2 marks
Unit 5: Green Skills–II | 10 hours | 2 marks
Total | 50 hours | 10 marks

PART B: SUBJECT-SPECIFIC SKILLS (Theory hours | Practical hours | Max. marks)
Unit 1: Revisiting AI Project Cycle & Ethical Frameworks for AI | 11 | 4 | 7
Unit 2: Advanced Concepts of Modeling in AI | 18 | 7 | 11
Unit 3: Evaluating Models | 21 | 4 | 10
Unit 4: Statistical Data | – | 28 | –
Unit 5: Computer Vision | 10 | 20 | 4
Unit 6: Natural Language Processing | 20 | 7 | 8
Unit 7: Advance Python | – | 10 | –
Total | 160 hours | 40 marks

PART C: PRACTICAL & PROJECT WORK (Max. marks)
Practical File with minimum 15 Programs | 15
Practical Examination (Unit 4: Statistical Data, Unit 5: Computer Vision, Unit 6: Natural Language Processing, Unit 7: Advance Python) | 15
Viva Voce | 5
Project Work / Field Visit / Student Portfolio (any one to be done) | 10
Viva Voce (related to project work) | 5
Total | 50

GRAND TOTAL | 210 hours | 100 marks

DETAILED CURRICULUM/TOPICS FOR CLASS X:


Part A: Employability Skills
S.No. UNITS DURATION IN HOURS
1. Unit 1: Communication Skills–II 10
2. Unit 2: Self-Management Skills–II 10
3. Unit 3: Information and Communication Technology Skills–II 10
4. Unit 4: Entrepreneurial Skills–II 10
5. Unit 5: Green Skills–II 10
TOTAL 50
Note: The detailed curriculum/topics to be covered under Part A: Employability Skills can be downloaded from the CBSE website.
Part B: (Subject-Specific Skills)
Unit 1: Revisiting AI Project Cycle & Ethical Frameworks for AI
Unit 2: Advanced Concepts of Modeling in AI
Unit 3: Evaluating Models
Unit 4: Statistical Data
Unit 5: Computer Vision
Unit 6: Natural Language Processing
Unit 7: Advance Python
UNIT 1: REVISITING AI PROJECT CYCLE & ETHICAL FRAMEWORKS FOR AI

Sub-unit: AI Project Cycle
Learning Outcomes: Understand the stages of the AI Project Cycle.
Session/Activity/Practical:
  Session: Revisiting AI Project Cycle

Sub-unit: Introduction to AI Domains
Learning Outcomes: Understand the concept of Artificial Intelligence (AI) domains and the illustrations of practical applications within each AI domain.
Session/Activity/Practical:
  Session: The three domains of AI and their applications

Sub-unit: Ethical Frameworks of AI
Learning Outcomes: Learn about the ethical frameworks for AI and their categories. Explore Bioethics, a popular framework that is used in the healthcare industry.
Session/Activity/Practical:
  Session: Frameworks, Ethical Framework and need of Ethical Frameworks for AI
  Activity: My Goodness (https://www.my-goodness.net/)
  Session: Types of Ethical Frameworks
  Session: Bioethics and a case study in bioethics
UNIT 2: ADVANCED CONCEPTS OF MODELLING IN AI

Sub-unit: Revisiting AI, ML, and DL
Learning Outcomes: Understand AI, ML and DL.
Session/Activity/Practical:
  Session: Differentiate between AI, ML, DL
  Session: Common terminologies used with data

Sub-unit: Modelling
Learning Outcomes:
  • Familiarize with supervised, unsupervised and reinforcement learning-based approaches
  • Understand subcategories of Supervised, Unsupervised and Deep-Learning models
Session/Activity/Practical:
  Session: Types of AI Models: Rule-based Approach, Learning-based Approach
  Session: Categories of Machine Learning-based models: Supervised Learning (https://teachablemachine.withgoogle.com/), Unsupervised Learning (https://experiments.withgoogle.com/ai/drum-machine/view/), Reinforcement Learning
  Session: Subcategories of Supervised Learning Model: Classification Model, Regression Model
  Session: Subcategories of Unsupervised Learning Model: Clustering, Association
  Session: Subcategories of Deep Learning: Artificial Neural Networks (ANN), Convolutional Neural Network (CNN)

Sub-unit: Artificial Neural Networks
Learning Outcomes:
  • Understand Neural Networks
  • Understand how AI makes a decision
Session/Activity/Practical:
  Session: What is a Neural Network?
  Session: How does AI make a Decision?
  Session: Human Neural Network—The Game
  Suggested Neural Network Activity: https://playground.tensorflow.org/
UNIT 3: EVALUATING MODELS

Sub-unit: Importance of Model Evaluation
Learning Outcomes: Understand the role of evaluation in the development and implementation of AI systems.
Session/Activity/Practical:
  Session: What is evaluation?
  Session: Need of model evaluation

Sub-unit: Splitting the training set data for Evaluation
Learning Outcomes: Understand the Train-test split method for evaluating the performance of a machine learning algorithm.
Session/Activity/Practical:
  Session: Train-test split

Sub-unit: Accuracy and Error
Learning Outcomes: Understand Accuracy and Error for effectively evaluating and improving AI models.
Session/Activity/Practical:
  Session: Accuracy
  Session: Error
  Activity: Find the accuracy of the AI model

Sub-unit: Evaluation metrics for classification
Learning Outcomes: Learn about the different types of evaluation techniques in AI, such as Accuracy, Precision, Recall and F1 Score, and their significance.
Session/Activity/Practical:
  Session: What is Classification?
  Session: Classification metrics
  Activity: Build the confusion matrix from scratch
  Activity: Calculate the accuracy of the classifier model
  Activity: Decide the appropriate metric to evaluate the AI model

Sub-unit: Ethical concerns around model evaluation
Learning Outcomes: Understand ethical concerns around model evaluation.
Session/Activity/Practical:
  Session: Bias, Transparency, Accuracy

UNIT 4: STATISTICAL DATA (TO BE ASSESSED THROUGH THEORY)

Sub-unit: Introduction & No-Code AI Tool
Learning Outcomes: Define the concept of Statistical Data and understand its applications in various fields. Define No-Code and Low-Code AI. Identify the differences between Code and No-Code AI concerning Statistical Data.
Session/Activity/Practical:
  Session: No-Code AI tool
  • Introduction to Data Science & its applications
  • Meaning of No-Code AI
  • No-Code and Low-Code
  • Some no-code tools
  Orange Data Mining Tool: https://orangedatamining.com/download/

Sub-unit: Statistical Data: Use Case Walkthrough
Learning Outcomes: Relate AI project stages to the stages of No-Code AI projects. Able to use the no-code tool Orange Data Mining. To perform data exploration, modeling and evaluation with Orange Data Mining.
Session/Activity/Practical:
  Session:
  • Important concepts in Statistics
  • Orange data mining
  • AI project cycle in Orange data mining (Palmer penguins case study)
  Activity: MS Excel for Statistical Analysis.
  Link: https://docs.google.com/spreadsheets/d/1f5G-JXyP7EV2fy1hax47YVaH5gyq8KZy/edit?usp=drive_link&ouid=109928090180926267402&rtpof=true&sd=true
  Case study using Orange data mining (Palmer Penguins).
  Link: https://drive.google.com/drive/u/0/folders/1fmcRVb-ilTyUhmUv4DWT1BFsaCoQ2BmF
UNIT 5: COMPUTER VISION (TO BE ASSESSED THROUGH THEORY)

Sub-unit: Introduction
Learning Outcomes: Define the concept of Computer Vision and understand its applications in various fields.
Session/Activity/Practical:
  Session: Introduction to Computer Vision
  Session: Applications of CV

Sub-unit: Concepts of Computer Vision
Learning Outcomes: Understand the basic concepts of image representation, feature extraction, object detection, and segmentation.
Session/Activity/Practical:
  Session: Understanding CV Concepts
  • Computer Vision Tasks
  • Basics of Images: Pixel, Resolution, Pixel value
  • Grayscale and RGB images
  Activities:
  • Game: Emoji Scavenger Hunt: https://emojiscavengerhunt.withgoogle.com/
  • RGB Calculator: https://www.w3schools.com/colors/colors_rgb.asp
  • Create your own pixel art: www.piskelapp.com
  • Create your own convolutions: http://setosa.io/ev/image-kernels/

UNIT 5: COMPUTER VISION (TO BE ASSESSED THROUGH PRACTICALS)

Sub-unit: No-Code AI Tools
Learning Outcomes: To demonstrate proficiency in using no-code AI tools for computer vision projects. To deploy models, fine-tune parameters, and interpret results. Skills acquired include data preprocessing, model selection, and project deployment.
Session/Activity/Practical:
  Introduction to Lobe: https://www.lobe.ai/
  Teachable Machine: https://teachablemachine.withgoogle.com/
  • Activity: Build a Smart Sorter
  Orange Data Mining Tool: https://orangedatamining.com/download/
  • Activity: Build a real-world Classification Model: Coral Bleaching (Use Case Walkthrough)
  • Link to the steps involved in project development and dataset: https://drive.google.com/drive/folders/1ppJ4d-8yOFJ2G2rHHpjNrK0ejdIAe5Q?usp=sharing

Sub-unit: Image Features & Convolution Operator
Learning Outcomes: Apply the convolution operator to process images and extract useful features.
Session/Activity/Practical:
  Session: Understanding Convolution operator
  Activity: Convolution Operator

Sub-unit: Convolution Neural Network
Learning Outcomes: Understand the basic architecture of a CNN and its applications in computer vision and image recognition.
Session/Activity/Practical:
  Session: Introduction to CNN
  Session: Understanding CNN
  • Kernel
  • Layers of CNN
  Activity: Testing CNN

UNIT 6: NATURAL LANGUAGE PROCESSING

Sub-unit: Introduction
Learning Outcomes: Comprehend the complexities of natural languages and elaborate on the need for NLP techniques for machines to understand various natural languages effectively.
Session/Activity/Practical:
  Session: Features of natural languages
  Session: Introduction to Natural Language Processing

Sub-unit: Applications of Natural Language Processing
Learning Outcomes: Explore the various applications of NLP in everyday life, such as voice assistants, auto-generated captions, language translation, sentiment analysis, text classification and keyword extraction.
Session/Activity/Practical:
  Session: Various real-life applications of NLP
  Activity: Keyword Extraction: https://cloud.google.com/natural-language

Sub-unit: Stages of Natural Language Processing (NLP)
Learning Outcomes: Understand concepts like lexicon, syntax, semantics, and logical analysis of input text.
Session/Activity/Practical:
  Session: Explore the various stages of NLP involved in understanding and processing human language

Sub-unit: Chatbots
Learning Outcomes: Understand the concept of a chatbot and the differences between smart bots and script bots.
Session/Activity/Practical:
  Activity: Play with chatbots
  • Elizabot: https://www.masswerk.at/elizabot/
  • Mitsuku: https://www.kuki.ai/
  • Cleverbot: https://www.cleverbot.com/
  • Singtel: https://www.singtel.com/personal/support
  Session: Script Bot vs Smart Bot

Sub-unit: Concepts of Natural Language Processing: Text Processing
Learning Outcomes: Learn about the Text Normalization technique used in NLP and the popular NLP model - Bag-of-Words.
Session/Activity/Practical:
  Session: Text Processing
  • Text Normalisation
  • Bag of Words
  Activity: Text processing
  • Data Processing
  • Bag of Words
  • TFIDF

UNIT 6: NATURAL LANGUAGE PROCESSING (TO BE ASSESSED THROUGH PRACTICALS)

Sub-unit: Natural Language Processing: Use Case Walkthrough
Learning Outcomes: Explore the sentiment analysis process using real-life datasets with the Orange Data Mining tool.
Session/Activity/Practical:
  Session: Examples of Code and No-Code NLP Tools
  Session: Applications of NLP: Introduction to Sentiment Analysis
  Hands-on: Case Walkthrough: steps involved in project development
  Link to steps and dataset: https://drive.google.com/drive/u/2/folders/1geFLXxV5890kfcakMfEg_KsH1LPcS_Iz

UNIT 7: ADVANCED PYTHON (TO BE ASSESSED THROUGH PRACTICALS)

Sub-unit: Recap
Learning Outcomes: Understand how to work with Jupyter Notebook, create virtual environments and install Python packages. Able to write basic Python programs using fundamental concepts such as variables, data types, operators, and control structures. Able to use Python built-in functions and libraries.
Session/Activity/Practical:
  Session: Jupyter Notebook
  Session: Introduction to Python
  Session: Python Basics

Part C: PRACTICAL & PROJECT WORK

Suggested Programs List:
• Write a program to add the elements of two lists.
• Write a program to calculate mean, median and mode using NumPy.
• Write a program to display a line chart from (2,5) to (9,10).
• Write a program to display a scatter chart for the following points: (2,5), (9,10), (8,3), (5,7), (6,18).
• Read a CSV file saved in your system and display 10 rows.
• Read a CSV file saved in your system and display its information.
• Write a program to read an image and display it using Python.
• Write a program to read an image and identify its shape using Python.
Important Links: Link to AI activities and steps to AI project development considering a real-life problem statement, along with the required dataset:
https://docs.google.com/spreadsheets/d/1ZQCTT8RM-l7QfeTzH0n-5wJLBAoiXu7TFM0Pcp31cX0/edit?usp=sharing

Project Work / Field Visit / Student Portfolio
* Relate it to the Sustainable Development Goals.
Suggested Projects / Field Visit / Portfolio (any one activity to be done):

Sample Projects (AI project development using):
1. Statistical Data for AI: Prediction of Palmer penguin species
2. Computer Vision: Early detection of coral bleaching
3. Natural Language Processing: Sentiment Analysis

Field Work (students’ participation in the following):
• AI for Youth Bootcamp
• AI Fests / Exhibitions
• Participation in any AI training sessions
• Virtual tours of companies using AI to get acquainted with real-life usage

Student Portfolio (to be continued from Class IX):
• Maintaining a record of all AI activities
• Hackathons
• Competitions (CBSE / Inter-school)
Note: The portfolio should contain a minimum of 5 activities.
CONTENTS
SUBJECT-SPECIFIC SKILLS
1. Revisiting AI Project Cycle & Ethical Frameworks for AI 1–9
Need for Ethical Frameworks ....................................................................................... 1
Bioethics: The Guiding Principles for Life and Technology ....................................................................................... 4

2. Advanced Concepts of Modelling in AI 10–21


Data Terminologies in Artificial Intelligence ....................................................................................... 10
Subcategories: Supervised Learning Model ....................................................................................... 11
Subcategories of Unsupervised Learning Model ....................................................................................... 13
Deep Learning: When Simple Learning isn’t Enough ....................................................................................... 15

3. Evaluating Models 22–28


Train-Test Split ....................................................................................... 22
Understanding Accuracy and Error ....................................................................................... 24
Classification Metrics ....................................................................................... 24

4. Statistical Data 29–48


AI for Everyone ....................................................................................... 29
Defining Statistical Data ....................................................................................... 29
Orange Data Mining Tool ....................................................................................... 33
No-Code AI – Orange Data Mining ....................................................................................... 35

5. Computer Vision 49–58


Overview of Computer Vision ....................................................................................... 49
Applications of Computer Vision ....................................................................................... 50
Classifying Dandelions vs Sunflowers using Orange Data Mining ....................................................................................... 52

6. Natural Language Processing 59–70


Features of Natural Languages ....................................................................................... 59
Applications of NLP in Everyday Life ....................................................................................... 60
No-Code NLP ....................................................................................... 64
Sentiment Analysis using Orange Data Mining ....................................................................................... 65

Answers to Objective Type Questions 71–72


1 Revisiting AI Project Cycle
& Ethical Frameworks for AI

Prerequisite: Basic Understanding of Ethical Concepts


(See Book Unit 2, ‘Ethical Frameworks of AI’, pages 131–136.)

NEED FOR ETHICAL FRAMEWORKS


An ethical framework is a structured set of guidelines or principles that help individuals
and organizations make morally responsible decisions. These frameworks provide a
foundation for assessing what is right or wrong, to help us ensure fairness, accountability
and transparency while making decisions in various fields, including business, healthcare
and technology.

Why Ethical Frameworks are needed in AI


With the rapid adoption of AI systems across the globe and their significant impact on
society, it has become essential to ensure that these systems are designed ethically and used
responsibly. It must also be ensured that the systems:
1. Prevent Bias and Discrimination: AI should make fair decisions and avoid reinforcing
social biases.
2. Ensure Transparency and Explainability: AI models should be understandable and
interpretable.
3. Protect Privacy and Data Security: User data must be handled responsibly and
securely.
4. Promote Accountability: AI developers and organizations must take responsibility for
AI decisions.
5. Enhance Human Well-being: AI should be designed to benefit society, not harm it.
Without ethical frameworks, AI can lead to unintended consequences like biased hiring
systems, misinformation spread or abuse of privilege. Ethical AI ensures trust, fairness and
safety in technology use.
The Bias inside Us
Let us explore how personal biases influence decision-making using an online activity available
on https://my-goodness.net/

To begin, play the MyGoodness game and complete the 10 giving decisions. Pay attention to
how you make choices, especially when some details are hidden. How did you choose whom to
give to?
1. Did you prefer giving to certain people, causes or locations?
2. Did hidden information affect your choices?
3. Were your decisions based on emotions, personal experiences, or assumptions?
AI systems learn from human data, which may include biases like favouring certain groups,
overlooking hidden factors or making assumptions based on incomplete information.
Just as you made decisions based on limited data, AI can also develop biases depending on how
it is trained.
Factors affecting human decision-making:
1. Personal and Emotional Factors: Our decisions are usually influenced by emotions,
past experiences and upbringing. People may favour choices that are connected with their
values, beliefs or personal experiences.
2. Perception of Need and Impact: Our choices are also governed by how urgent or
effective an option appears. We tend to prioritize actions that seem to have a direct or
visible impact.
3. Bias in Human vs Non-Human Considerations: Humans are most likely to prioritize
their own needs over those of animals or the environment. However, emotional
attachment or ethical beliefs can shift preferences.
4. Geographic and Demographic Biases: People are more likely to make decisions
that benefit those in familiar locations or social groups. Stereotypes and personal
identification can shape preferences and priorities.
5. Religious and Ethical Views: Faith and moral beliefs influence decision-making,
affecting judgments on fairness, responsibility and what is considered right or wrong.
6. Transparency and Trust: People prefer options that feel reliable and verifiable. Lack of
information or fear of deception can discourage certain choices.



[Figure: Factors influencing human decision-making: Transparency and Trust (preference for reliable and verifiable options); Personal and Emotional Factors (decisions influenced by emotions and personal experiences); Perception of Need and Impact (choices driven by urgency and perceived effectiveness); Religious and Ethical Views (guidance from faith and moral beliefs); Geographic and Demographic Biases (decisions shaped by familiarity and social groups); Bias in Considerations (prioritizing personal needs over others).]

Do it Yourself
Play the MyGoodness game again and see if your decisions change when you actively try
to reduce bias. Discuss how bias in AI could impact areas like hiring, loan approvals or
criminal justice.

Classification of Ethical Frameworks in AI


AI ethics can be broadly classified into sector-based and value-based frameworks. Both
approaches are important and provide different ways to address ethical concerns in AI
decision-making.
[Diagram: Ethical Frameworks in AI are divided into Sector-based Ethical Frameworks and Value-based Ethical Frameworks; Value-based Ethical Frameworks are further divided into Rights-based Ethics, Utility-based Ethics and Virtue-based Ethics.]

1. Sector-based Ethical Frameworks


These frameworks apply ethical principles to specific industries where AI is used and help
us tackle unique challenges in each field. For example—
• Bioethics: Ensures AI in healthcare respects patient privacy, fairness and autonomy.
• Business Ethics: Prevents bias and promotes transparency in hiring, lending and
customer interactions.
• Legal and Justice Ethics: Ensures fairness and accountability in AI-assisted law
enforcement and court decisions.
• Environmental Ethics: Examines AI’s impact on sustainability, climate change and
nature conservation.
2. Value-Based Ethical Frameworks
These frameworks focus on core moral values that guide AI decision-making across all
sectors. They reflect human values in AI-driven choices and are categorized as:
• Rights-based Ethics: Protects fundamental human rights such as privacy, dignity and
freedom. It ensures AI prioritizes human lives and treats individuals fairly.
• Utility-based Ethics: Aims to maximize overall good by evaluating AI decisions based
on their impact. It prioritizes solutions that benefit most people, even if trade-offs are
needed.
• Virtue-based Ethics: Focuses on choosing ethical decision-makers who uphold
honesty, compassion and integrity in AI governance. It ensures AI behaviour is
guided by moral values and not just rules.
While sector-based frameworks apply ethics to specific fields, value-based frameworks provide
universal moral principles. Using both ensures AI is fair, responsible, and guided by human
values. Let us now explore Bioethics to understand the importance of sector-based frameworks.

BIOETHICS: THE GUIDING PRINCIPLES FOR LIFE AND TECHNOLOGY


Ethics is the framework that helps us find answers to questions regarding right and wrong,
fairness, justice, responsibility and care in our personal and collective lives. It acts as a compass
to guide human behaviour in ways that uphold the dignity and well-being of individuals and
communities.
As new challenges and opportunities emerge with rapidly changing technological and scientific
advances that impact human lives, the importance of ethical thinking is becoming increasingly
significant.
Definition of Bioethics
Bioethics is the study of ethical issues and principles that
arise in biology, medicine and healthcare. This domain of
ethics examines how we should act when dealing with complex
questions about life, health and the human condition. As a
domain, bioethics is guided by four key principles: Autonomy,
Beneficence, Non-Maleficence and Justice.
You might be wondering ‘How does the world of biology, medicine, life and death connect with
the abstract world of AI?’ The answer lies in recognizing the fact that both AI and bioethics
involve and impact real human beings and ethical decision-making. AI is becoming increasingly
embedded in healthcare today and is impacting the way we define life and existence. It is
important for us to carefully understand where bioethics and AI ethics meet.
The Hippocratic Oath
Bioethical principles aren’t just theoretical ideas—they have a deep-
rooted significance in human history, experiences and values across
cultures. For example, consider the ancient Hippocratic Oath, written
in the 5th century BCE, in which physicians pledged to ‘do no harm’
(non-maleficence). This principle remains central to medical ethics
even thousands of years later. Similarly, many cultures emphasize



the importance of respecting individual autonomy. Modern healthcare reflects this value when
families are included in important health decisions, ensuring their voices are heard and respected.
By integrating such age-old principles with AI ethics, we can ensure that new technologies
serve humanity in ways that are responsible, compassionate and fair.

Principles of Bioethics
Respect for Persons/Autonomy: This principle recognizes that each
person has inherent value and dignity and is capable of making their own
decisions. As doctors, it is not enough to simply treat someone; you
must also honour their choices, allowing them to be active
participants in the decision-making process. In the context of
medicine, autonomy demands that doctors fully inform patients about
proposed procedures, obtain their consent and respect their refusal.

Beneficence (Doing good): This principle is a call for action and a moral
imperative to act in the best interests of others, seeking ways to help
them. Medical interventions, treatments and research should be driven
by a desire to bring maximum benefit and provide improved care to those
seeking help.
The development of vaccines for diseases like polio, smallpox and COVID-19 is
an example of the principle of beneficence, with the sole aim of
improving the well-being of millions. These treatments were successful
because the intention to help others reigned supreme.
Non-Maleficence (Avoiding harm): This bioethics principle
is the commitment to ‘do no harm’. Doctors, researchers and
healthcare providers must be cautious about potential risks, actively
avoiding unnecessary or unjustifiable harm to their patients.
Justice (Fairness): It is the ethical principle that reminds us to
treat everyone fairly, irrespective of social, economic or other
differences. Resources should be distributed equitably and access
to healthcare should be guaranteed for all. This principle requires
that healthcare be a right of every human and not a privilege.
For example, in the early days of kidney dialysis
and transplants, only rich patients got access
to treatment. We now recognize the fact that
fairness requires that resources like dialysis or organ
transplants be allocated on the basis of medical
need and not on social or economic standing. Most
countries have added the principles of justice and
fairness to their medical guidelines.
Bioethics and AI Ethics
While ethical guidelines are shaping life science, they are equally important for AI as it is
gradually becoming an important part of our lives and healthcare. Recent advances in AI are
merging biology and technology, making it essential to bring bioethics into AI ethics. Since
AI can influence medical decisions, it is essential that ethical principles of bioethics guide its
development and use.
The adoption of AI in healthcare introduces challenges that intersect both technological
and ethical considerations. While on the one hand, AI improves diagnosis, treatment and
personalized care, on the other, it raises concerns about data privacy, algorithmic bias and
equitable care for all patients.
• How do we ensure AI does not harm vulnerable populations?
• How much control over a patient’s care should be given to a machine?
• How do we protect human autonomy in this new era?
While AI in research speeds up drug discovery and medical advancements, it also raises
concerns about transparency and trust. When AI analyzes medical images or suggests
treatments, how do we validate its findings and ensure they are reliable? Responsible use of AI
is essential for faster and ethical scientific breakthroughs.
The use of AI in personal well-being tools, like health monitoring or mental health support,
poses questions about over-reliance on machines. Can AI ever replace human empathy and
compassion in patient care? How is sensitive data managed and what happens if a breach occurs?
Let us understand the joint application of bioethics and AI ethics with a hypothetical case study.

Case Study
SMART MEDICINE DISPENSER AND THE VILLAGE DOCTOR
Consider Asha Gram, a rural village in India. Like many other villages, Asha Gram faces challenges in
healthcare access. There is one primary health centre run by Dr Sharma, a dedicated doctor who works
long hours in the service of the people. Remote parts of the village face particular challenges, as delivering
medicines on time becomes difficult.
HealTech, a tech company, has developed a new ‘Smart Medicine Dispenser’. This is a small, AI-powered
device that is designed to automatically dispense the right medicine and dosage to patients, based on the
doctor’s prescription and the patient’s unique identification (through a fingerprint scan or Aadhaar card).
It is equipped with a screen that shows simple instructions while recording details of each dispensing.
This could be particularly helpful in rural areas with shortage of trained medical staff.
HealTech proposes a pilot program for Asha Gram—they will install multiple
smart dispensers at community centres and train local volunteers to assist
people in using them. Dr Sharma will initially prescribe medicines as
usual. However, eventually, the AI dispenser could also give suggestions
based on the data it collects, such as a patient’s prior health records
and symptom descriptions (entered by the local health volunteer or the
patient themselves). This can also help to track usage of medicines and
provide analytics to public health workers to identify outbreaks or gaps
in health service delivery.

Key Issues
• Limited Access: Asha Gram has limited access to healthcare professionals and medications and
solely relies on Dr Sharma.
• New Technology: Smart Medicine Dispenser offers a potential solution but also brings up ethical
questions.



• Patient Data: The dispenser collects patient-identifiable information and medicine usage
patterns, raising concerns regarding data privacy and security.
• AI Decision-Making: If and when AI starts suggesting medications, where does Dr Sharma’s
authority fit in?
• Equity of Access: Can the villagers access and use the AI dispenser, especially the elderly and the
illiterate? How does this differ from the people in urban settings?
• Data Security: How can the villagers be assured that their data is protected and not being sold to
private companies?

Bioethical Considerations
1. Autonomy: Does Smart Medicine Dispenser respect the autonomy of patients? Do the patients
have a right to choose if they want to use this technology or not? Who makes the ultimate
decision on their healthcare?
2. Beneficence: How can Smart Medicine Dispenser improve healthcare in Asha Gram? What are the
benefits for the patients and the community? Does using this device truly do any good?
3. Non-Maleficence: What are the risks associated with this technology? How might it potentially
harm patients? What safeguards need to be put in place?
4. Justice: Would using these dispensers be fair to everyone in Asha Gram? How might differences
in technology literacy, health awareness or accessibility create inequities in healthcare access?

AI Ethics Considerations
1. Data Privacy: What are the ethical concerns about collecting and storing patient data with Smart
Medicine Dispenser? How should this data be protected? Is consent being obtained fairly and in a
culturally appropriate manner?
2. Transparency: How does the AI system determine which medication to dispense? How transparent
is the decision-making process to Dr Sharma and the patients?
3. Bias: How can we ensure that the AI system is not biased against certain groups of patients? Is the
AI using data from other countries which may not be suitable for patients in India?
4. Accountability: Who is responsible if the AI system makes an error and causes harm to a patient—
HealTech, Dr Sharma or the village volunteer?
5. Solutions: What steps should Dr Sharma, the villagers and HealTech take to ensure that Smart
Medicine Dispenser is implemented ethically and effectively? How should the villagers’ concerns
be addressed? What happens when things go wrong?

Exercises
Objective Type Questions
I. Multiple Choice Questions (MCQs):
1. Which of the following is not a principle of bioethics?
(a) Autonomy (b) Justice
(c) Accountability (d) Beneficence
2. What is the primary focus of the principle of beneficence?
(a) Avoiding harm to patients (b) Acting in the best interest of others
(c) Ensuring fairness in treatment (d) Respecting patients’ autonomy
3. The Hippocratic Oath is most closely associated with which bioethical principle?
(a) Non-Maleficence (b) Autonomy
(c) Justice (d) Beneficence
4. What ethical concern arises when AI systems make independent decisions in healthcare?
(a) Data transparency (b) Patient autonomy
(c) Equitable resource allocation (d) All of these
5. Which of the following is a primary reason for needing ethical frameworks in AI?
(a) To increase profit margins in tech companies
(b) To ensure fairness, accountability and transparency
(c) To promote rapid adoption of AI systems
(d) To eliminate the need for human oversight

II. Fill in the blanks:


1. The principle of ..................................... ensures that healthcare resources are distributed fairly
and without bias based on social or economic status.
2. ..................................... is the bioethical principle that requires actively avoiding harm in medical
practice.
3. Ethical frameworks in AI are essential to prevent ..................................... and discrimination in
decision-making systems.

III. State whether the following statements are True or False:


1. The principle of autonomy emphasizes that decisions should always be made by doctors, without
patient input.
2. One of the key concerns of AI ethics in healthcare is the issue of data privacy.
3. Ethical frameworks in AI ensure that all AI systems operate without any errors.

IV. Assertion and Reasoning Based Questions:



Read the following questions based on Assertion (A) and Reasoning (R). Mark the correct choice as:
(i) Both A and R are true and R is the correct explanation for A.
(ii) Both A and R are true but R is not the correct explanation for A.
(iii) A is true but R is false.
(iv) A is false but R is true.
1. Assertion: The principle of Justice in bioethics requires equitable access to healthcare
for all.
Reasoning: Resources like organ transplants should only be available to wealthy patients.
2. Assertion: AI-based healthcare systems may enhance diagnosis and treatment.
Reasoning: AI systems always operate without biases or errors.
3. Assertion: Ethical frameworks are necessary to enhance human well-being in AI systems.
Reasoning: AI systems are naturally designed to prioritize human safety and fairness.

Subjective Type Questions


I. Unsolved Questions
1. Explain the principle of Autonomy in bioethics. How does it apply to decision-making in healthcare,
particularly in scenarios involving AI systems?



2. What are the ethical challenges posed by AI systems in handling data privacy and transparency?
Provide two examples to illustrate your answer.
3. Discuss the principle of Justice in bioethics and its application in AI-driven healthcare. How can the
implementation of AI systems ensure fairness and equity, especially in underprivileged communities?
Support your answer with examples.
4. Why is it important to ensure transparency and explainability in AI systems?
5. Explain how ethical frameworks in AI help in protecting privacy and data security. Provide examples
of potential risks without ethical considerations.
6. Discuss the importance of accountability in AI development. How can ethical frameworks ensure
responsible decision-making by AI developers and organizations?

II. Case-Based/HOTS Questions


1. A new AI-powered diagnostic tool has been introduced in a rural hospital to help doctors detect
diseases based on medical images. While the tool has significantly improved diagnostic accuracy,
there have been instances where it failed to identify rare conditions due to limitations in the training
data. Moreover, patients are concerned about how their medical data is being used and whether
their consent is being respected. The hospital administration is considering expanding its use but
wants to ensure that ethical principles are followed.
(a) What bioethical principles should the hospital prioritize before expanding the use of this AI
diagnostic tool?
(b) How can the hospital address concerns related to data bias and patient autonomy?



2 Advanced Concepts of
Modelling in AI

Prerequisite: Basic Understanding of AI, ML and DL


(See Book Unit 2, ‘AI Taxonomy’, pages 114-118; Unit 3,
‘Neural Networks and Deep Learning’, pages 172-176.)

DATA TERMINOLOGIES IN ARTIFICIAL INTELLIGENCE


AI models rely on data to learn patterns and make decisions. Some of the most commonly used terms
that you may come across while learning Artificial Intelligence, with examples from tabular
data, images and text, are as follows:
1. Dataset: A dataset is a structured collection of data that is used for training AI models.
Datasets may consist of multiple examples (rows) and attributes (columns) or multiple
images/videos or even a large corpus of text. For example, a dataset of customer
transactions with columns for age, product category and total expenditure (tabular) or
a dataset of cat and dog images used to train an image classifier (images) or a dataset of
movie reviews with labels like ‘Positive’ or ‘Negative’ for sentiment analysis (text).
2. Features: Features are input variables that describe an observation. They vary based
on the type of data. In a data science-based machine learning project like housing
price prediction model, features could include values in tabular data, e.g., square footage,
number of rooms and location.
Features in computer vision-based projects can be pixel values, color histograms or
detected edges in an image. AI models extract these automatically. In Natural Language
Processing, features include word frequency, sentence length, presence of certain
keywords or word embeddings that capture the meaning of text.
Feature Type Example
Tabular Age, Salary, Purchase History
Image Pixel Intensities, Color Histogram, Edge Detection
Text Word Count, Sentiment Score, Word Embeddings

3. Labels (Target Variable): Labels are the expected output that AI models try to
predict. They also vary depending on the application and input data, e.g., in a spam
detection system, labels would be ‘Spam’ and ‘Not Spam’ while in a cat vs dog classifier,
labels would be ‘Cat’ and ‘Dog’. In a sentiment analysis model, labels would be ‘Positive’,
‘Negative’ and ‘Neutral’.
Data Type Features (Input) Label (Output)
Tabular Email contains ‘Win’, includes link Spam / Not Spam
Image Pixels, Edges, Color Distribution Cat / Dog
Text Words, Punctuation, Sentiment Score Positive / Negative

4. Labelled and Unlabelled Data: In AI, data can be categorized as labelled or unlabelled,
depending on whether the outputs (labels) are provided. The choice of data type affects
how AI learns and makes predictions.
• Labelled data consists of input examples that are tagged with the correct output (label).
This type of data is used in supervised learning, where AI learns by mapping inputs
to known outputs. Some examples are as follows:
Data Type Features (Input) Label (Output)
Tabular Age, Symptoms Disease Name
Image Pixels, Colors Cat / Dog
Text Words, Length Positive / Negative

• Unlabelled data consists of input examples without predefined labels. AI models analyze
patterns and group similar data without knowing the correct answer. This is used in
unsupervised learning, where AI finds hidden structures in data. For example—
 Tabular Data: A dataset of customer transactions without predefined categories,
where AI groups similar spending behaviours.
 Image Data: A collection of photos without labels, where AI clusters similar-looking
images.
 Text Data: A set of news articles where AI automatically categorizes topics (e.g.,
sports, politics, entertainment) without predefined labels.
5. Training Data and Test Data: These are derived from a dataset by dividing it into
two parts. Usually, the training dataset is 70-80% of the main dataset while the testing
dataset is the remaining 20-30%. Training data is used to teach the AI model while test
data is used to evaluate how well the AI model performs on unseen data.
For example, in a handwriting recognition AI model, 80% of digit images are used for
training, while 20% are used for testing. Similarly, in a chatbot, historical chat logs
are used for training while new user messages serve as test data.
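As an illustration, here is a minimal sketch of an 80/20 train-test split on a small, made-up labelled dataset; it assumes the pandas and scikit-learn libraries are installed:

import pandas as pd
from sklearn.model_selection import train_test_split

data = pd.DataFrame({
    "age":    [25, 47, 35, 52, 23, 41, 38, 29, 60, 33],   # feature
    "salary": [30, 80, 50, 90, 28, 70, 65, 40, 95, 48],   # feature
    "bought": [0, 1, 0, 1, 0, 1, 1, 0, 1, 0],             # label (output)
})

X = data[["age", "salary"]]   # features (inputs)
y = data["bought"]            # label (output)

# 80% of the rows are kept for training, 20% are held back for testing
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
print(len(X_train), "training rows,", len(X_test), "test rows")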

SUBCATEGORIES: SUPERVISED LEARNING MODEL


We don’t know everything from birth, right? We learn eventually by seeing examples, noticing
patterns and figuring out how different things are related. Consider a very simple example:
learning to tell the difference between a cat and a dog; it is not by memorizing rules but by
seeing lots of cats and dogs and making connections. In this process, our parents, teachers,
books and friends help us by pointing to new objects or unseen phenomena and telling us what
we are looking at, i.e., they help ‘label’ our experience and ‘supervise’ our learning.
Supervised learning works in a similar way. We train computers to learn from examples so they
can make decisions or predictions when they see new data. In this type of learning, we have
two major model types—Classification and Regression.



Classification Models: Sorting Data into Categories
A classification model is designed to take input data and then place it in one of the several
predefined ‘categories’ or ‘classes’. The main goal is to sort data into distinct groups based on
what it has learned. For example, consider a computer program that can tell the difference
between pictures of apples and oranges.
• The Data: We give the program lots of images—some of apples while others of oranges.
We also label each image as ‘apple’ or ‘orange’.
• Pattern Recognition: The program looks at the images and notes down features like
color, shape, etc., that help tell them apart.

[Figure: Labelled images of apples and oranges (the labelled data) are used to train a machine learning algorithm, which produces a classification model.]

• Building the Model: The program then builds a model which helps it classify future images.
• Prediction: When we give it a new image, the model tries to classify it, saying “Is this an
apple or an orange?” It outputs a classification label or a category corresponding to
the new input based on its learning, as illustrated below:

[Figure: A new input image is given to the classification model, which outputs the label ‘Orange’.]

• Performance: The most important goal is to get a good classification accuracy, which
means that it can correctly label the images most of the time.
Many daily activities use classification models. For example, when you upload a video on a
platform, it is automatically categorized as ‘education’, ‘entertainment’, etc. Another example
is how your email puts new emails in different folders like inbox, spam, etc., or a customized
label that you may create.
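A minimal sketch of such a classification model is given below. For simplicity, each fruit is described by two made-up numeric features (weight and a colour score) instead of image pixels, and scikit-learn's decision tree classifier stands in for whichever algorithm a real project might use:

from sklearn.tree import DecisionTreeClassifier

# Each fruit: [weight in grams, colour score between 0 (orange) and 1 (red)]
features = [[150, 0.90], [170, 0.85], [160, 0.88],   # apples
            [140, 0.20], [130, 0.25], [145, 0.30]]   # oranges
labels = ["apple", "apple", "apple", "orange", "orange", "orange"]

model = DecisionTreeClassifier()
model.fit(features, labels)               # learn patterns from the labelled data

new_fruit = [[135, 0.28]]                 # a new, unseen fruit
print(model.predict(new_fruit))           # expected output: ['orange']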

Regression Models: Predicting Numerical Values


A regression model tries to find a mathematical connection between different input
variables and an output variable. Regression models predict numbers on a continuous scale, so
it could be any number and not just from a limited set.
The aim here is to predict a numerical value. For example, if we are trying to predict the price
of a house, consider the following:
• The Data: We gather information on many houses, e.g., size, location, number of rooms
and the prices they are sold for.
• Relationship Analysis: Regression model looks at this data to find any relationships.
For example, it may observe that bigger houses in upscale areas cost more.
[Figure: Labelled housing data (rooms, size, location and price) is used to train a machine learning (regression) algorithm, which produces a regression model.]

• Model Creation: The model creates an equation which best describes how house size
and location relates to its price.
• Price Prediction: When we have details of a new house, we put these values into the
equation and the model predicts its likely selling price.

[Figure: New house data (rooms, size, location) is given as input to the regression model, which outputs a predicted price.]

• Goal: The goal is to make predictions as close to the actual prices as possible. We
check the model’s performance by measuring how close the predictions are to the actual
values. In the process, we try to make sure the difference or the error is as small as
possible.
This is also how stock market prediction programs work. They analyze a lot of
historical data to predict the price of a stock in the future. Regression models are used in a number
of applications, including financial predictions, sales forecasting, weather forecasting,
medical research and even environmental protection.
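A minimal sketch of a regression model for the house-price example is given below; the sizes, room counts and prices are invented for illustration, and scikit-learn's linear regression is one possible choice of algorithm:

from sklearn.linear_model import LinearRegression

# Features: [size in square metres, number of rooms]; label: price (in lakhs)
X = [[50, 2], [80, 3], [120, 4], [65, 2], [150, 5], [95, 3]]
y = [40, 65, 100, 50, 130, 78]

model = LinearRegression()
model.fit(X, y)                           # find the best-fit relationship

new_house = [[100, 3]]
predicted_price = model.predict(new_house)[0]
print(round(predicted_price, 1), "lakhs (predicted)")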

SUBCATEGORIES OF UNSUPERVISED LEARNING MODEL


In the last section, we learned about supervised learning, where we teach computers using
labelled examples. What happens when we do not know anything about the data and there are
no labels? Let us explore unsupervised learning. Here, the computer learns patterns without
any specific labels or guidance.
Consider an analogy: Imagine you are given a big pile of photos, but no one tells you who
is there in the pictures or what they are about. You would probably still be able to group them
just by looking at them and analyzing what feature is common and categorizes them as a
group.
For example, in one case, the photographs may be colored or black-and-white, which will give
two neat categories. In another case, office photographs and vacation photographs may have
different grouping features. This is what unsupervised learning does—it tries to find patterns
and group data all by itself.



Let us look at two common types of unsupervised learning—Clustering and Association.

Clustering: Grouping Similar Data


A clustering model takes input data and automatically groups similar data points together into
clusters or groups. The aim here is to find natural groupings or structures in the data that may
not be immediately obvious without any guidance.


Illustrative Example: Think about a situation where you have a mix of different card games,
e.g., Uno, regular playing cards and Bingo cards, all jumbled together. How would we sort them
out? We want to group them based on their properties without knowing their names or game
types.
• The Data: We have a mixed set of cards, i.e., Uno cards, Bingo cards and regular playing
cards. For each card, we note down features like its color, whether it has a number or
symbol, its shape (square for Bingo, rectangular for others) and what type of game it is used
for. However, we don’t have labels telling us if a card is Uno, Bingo or a regular playing card.
• Finding Similarities: The clustering algorithm looks at the different features of each
card and tries to find the ones that are similar.
• Grouping: It groups together cards with similar features into a cluster. For example, one
cluster might have all the number cards from the regular playing cards, another cluster
may contain the brightly-colored Uno cards and yet another cluster may include square-
shaped Bingo cards.
• Results: In the end, we will have several clusters and the cards in each cluster will be
more similar to each other than cards in other clusters. We may have a cluster for Uno
cards, another cluster for regular playing cards and yet another cluster for Bingo cards.




• Without Labels: The model does this automatically, without anyone explicitly telling it
which type of card is which—it discovers the groupings on its own.
Varied Applications: Clustering is used for customer segmentation, image segmentation and
many other applications. Mobile phone companies use clustering to segment their customers
based on usage patterns. A social media company might use clustering to group together users
who have similar interests so they can provide recommendations.
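The sketch below shows clustering on an invented customer-usage dataset, in the spirit of the mobile phone example above, using scikit-learn's k-means algorithm (one common clustering method):

from sklearn.cluster import KMeans

# Each customer: [monthly call minutes, monthly data used in GB]
customers = [[300, 2], [280, 3], [50, 20], [60, 25],
             [310, 1], [40, 22], [290, 2], [55, 18]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
groups = kmeans.fit_predict(customers)    # no labels are given; groups are discovered
print(groups)                             # e.g. [0 0 1 1 0 1 0 1]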

Associations: Discovering Relationships between Data


Now, let us look at another type of unsupervised learning—association. This model is
designed to find interesting connections between data elements. Think about shopping in a
supermarket. If you buy bread, you might also buy butter or if you buy diapers, you might also
buy wet wipes for babies. Association models try to find these types of relationships.


Definition: An association model looks for rules or patterns that describe how different items
or events are connected. It tries to find which data tends to occur together.
Illustrative Example: Consider the items that people buy in a supermarket.
• The Data: We collect lots of data on customer transactions that indicate the items
people usually buy together.
• Finding Relationships: The association model analyzes this data to find out the items
that are frequently purchased together.
• Discovering Patterns: The model might discover rules like ‘if a customer buys milk,
they are also likely to buy cereal’ or ‘if a customer buys a pizza, they will also buy a soft
drink’.
• Generating Rules: The model creates rules about what occurs together frequently and
how strong this pattern is.
• Actionable Insights: These discovered relationships can be used to make decisions.
In practice, e-commerce companies use association models to provide product
recommendations to their customers based on past purchases. To increase its sales, a retail
store can also rearrange products based on what customers frequently buy together.
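The sketch below illustrates the idea with plain Python: it simply counts which pairs of items occur together in a few invented transactions. Real association-rule tools additionally compute measures such as support and confidence for each rule:

from itertools import combinations
from collections import Counter

transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "cereal"},
    {"bread", "butter", "jam"},
    {"milk", "cereal", "bread"},
]

pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# The most frequent pairs suggest rules such as "if bread, then butter"
print(pair_counts.most_common(3))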

DEEP LEARNING: WHEN SIMPLE LEARNING ISN’T ENOUGH


In previous sections, we discovered that computers can learn from data to make predictions or
find patterns using Machine Learning. We understand how supervised learning uses labelled
data and how unsupervised learning finds patterns on its own.
Sometimes, we need more powerful ways to learn. The tasks to be done or the patterns to
be learned may be too complex for basic Machine Learning, like in translating languages or
understanding handwritten text. This is where Deep Learning comes in.



Deep Learning: Mimicking Human Brain Through Layers of Neural Networks

Deep Learning uses more complex models and utilizes neural networks that have many layers.
This is inspired by how the human brain works. They can learn very complex patterns from
data and give us better results than simpler machine learning models.
We will discuss two popular types of deep learning models—Artificial Neural Networks (ANN)
and Convolutional Neural Networks (CNN).

Artificial Neural Networks (ANN): Inspired by the Brain


An ANN model is a type of deep learning network where artificial neurons are arranged in
layers, with connections between them. These connections are adjusted based on the input
data to learn patterns and make predictions.

[Figure: An artificial neural network with an input layer, two hidden layers and an output layer.]

Understanding ANN: Let us assume that we want to build a program that can understand
and translate a sentence from English to Hindi, without needing to manually write the rules of
translation.
The Challenge: Normally, for translation, we would need to have a lot of labelled data,
like sentences in English along with their Hindi translations, and have manual rules that
define how each word is translated. This can be difficult and time-consuming. This is where
supervised learning cannot perform well as it cannot generalize to unseen data in such a
complex environment or create the rules required to translate a language.
• ANN Solution: An ANN can learn to translate by looking at a large number of sentences
in both languages, without us having to tell it all the specific translation rules.
• Neurons and Connections: ANN consists of interconnected artificial neurons arranged
in layers that process words sequentially.
• Learning Process: ANN learns how to represent the meanings of words and how to
translate them correctly by adjusting the connections in the network.
• Making Translations: After learning, when given a new sentence in English, ANN uses
this knowledge to give the corresponding sentence in Hindi.
• Complex Relationships: ANN can automatically learn complicated rules of language
translation, which are difficult to define in traditional ways. This is why supervised
learning is not adequate for this case.



ANN are also used in things like language processing to understand the sentiment of a tweet or
for generating text in chatbots, which also requires complex relationships between data.

Perceptron: A Basic Unit of ANN


A perceptron is the simplest type of artificial neuron, inspired by the way biological neurons
process information. It takes multiple inputs, applies weights, sums them up, passes the result
through an activation function and then produces an output. A perceptron consists of the
following:
1. Inputs (x1, x2, …, xn): The features fed into the model.
2. Weights (w1, w2, …, wn): Each input has an associated weight that determines its
importance.
3. Bias (b): An additional parameter to adjust the decision boundary.
4. Summation Function: Computes the weighted sum of inputs.
   z = w1x1 + w2x2 +...+ wnxn + b
5. Activation Function: Applies a threshold to decide the output: y = f(z)
[Figure: A perceptron: input values x1, x2, x3 are multiplied by weights w1, w2, w3, added together by the summation function (Σ), passed through a step activation function, and produce the output.]

Simple Mathematical Example


Consider the choice between buying a coffee or a cold drink. The human decision process
for choosing between a coffee and a cold drink considers factors like the weather (hot or cold),
energy levels (tired or alert), time of day (morning or evening) and personal preference. If it is
a hot day and you prefer cold drinks, you will likely choose a cold beverage. However, if you are
tired in the morning, you may opt for a cup of coffee instead.
Let us represent it mathematically using a perceptron to decide whether to get a coffee (1) or
a cold drink (0) based on different factors.
Step 1: Define Inputs (Features) and Weights
We consider the following four factors affecting the decision:
• Temperature (x1): Higher values mean hotter weather.
• Tiredness (x2): Higher values mean feeling more tired.
• Time of Day (x3): Morning (1) or Evening (0).
• Preference (x4): Personal preference for coffee (1) or cold drink (0).
Given weights and bias:
• w1 = −0.7 (Lower temp → more likely coffee)
• w2 = 0.8 (More tired → more likely coffee)
• w3 = 0.5 (Morning → more likely coffee)
• w4 = 0.9 (Preference for coffee → stronger influence)
• Bias (b) = −0.5
Step 2: Example Inputs
A person experiences:
• Temperature = 30°C (hot) → x1 = 30
• Tiredness Level = 7 (moderate) → x2 = 7
• Time of Day = Morning → x3 = 1
• Preference = Cold Drink → x4 = 0
Step 3: Compute Weighted Sum
z = (−0.7×30) + (0.8×7) + (0.5×1) + (0.9×0) + (−0.5) = −15.4
Step 4: Apply Activation Function
Using a step function:
y = { 1, if z ≥ 0 (Choose Coffee) or 0, if z < 0 (Choose Cold Drink) }
Since z = −15.4 < 0, the perceptron outputs 0 (Cold Drink).
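The same calculation can be written as a short Python sketch, using the illustrative weights and inputs from the steps above:

# Perceptron for the coffee (1) vs cold drink (0) example above
weights = [-0.7, 0.8, 0.5, 0.9]     # w1, w2, w3, w4 (illustrative values)
bias = -0.5
inputs = [30, 7, 1, 0]              # temperature, tiredness, time of day, preference

# Summation function: weighted sum of inputs plus bias
z = sum(w * x for w, x in zip(weights, inputs)) + bias

# Step activation function: 1 (Coffee) if z >= 0, otherwise 0 (Cold Drink)
output = 1 if z >= 0 else 0

print("z =", z)                                          # approximately -15.4
print("Decision:", "Coffee" if output == 1 else "Cold Drink")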
Perceptron Decision-Making Process for Beverage Choice

[Figure: The input variables (Temperature, Tiredness Level, Time of Day, Preference) feed a weighted sum z, which passes through the step activation function to produce the decision output.]

Point to Remember
The weights and features used in this perceptron example are for demonstration purposes only. They
are simplified and may vary based on individual preferences, environmental factors and specific
contexts. In real-world AI models, weights are learned from data and feature importance depends
on the dataset and training process.

Convolutional Neural Networks (CNN): For Images and Patterns


CNNs are designed to work with images, videos and other visual data. A CNN model uses
special layers, called convolutional layers, to automatically extract features from image data,
making it very effective for image recognition and analysis.
[Figure: CNN architecture — the input passes through convolution and pooling layers (feature extraction) and then fully connected layers (classification) to produce the output.]



Understanding CNN: Consider a scenario where we are training a program to recognize
different types of handwritten text like letters and digits.
The Challenge: With normal supervised learning, you will need to tell every single detail
about the images for the program to understand. You will need to tell it the thickness of
each stroke, the curvature, etc. This is very difficult, time-consuming and prone to error.
It would be very hard to manually define all these features and then train a machine
learning model.
• CNN Solution: A CNN uses layers that can extract key features from the images by themselves, without requiring hand-crafted features. This means that you can just give it the dataset and the model will work out the important features itself, without explicit labelling of each pixel or manual definition of specific features.
• Feature Extraction: CNN automatically figures out the important features that help
recognize the handwritten text.
• Image Analysis: CNN automatically figures out what makes ‘a’ different from ‘b’ or
‘1’ different from ‘0’.
• Image Classification: When presented with a new handwritten image, CNN can classify
it as a specific letter or number.
• No Manual Features: Unlike other traditional methods, it doesn’t require manual
definition of every feature in the data.
CNNs can be used for things like diagnosing disease from medical imaging or for automatic
number plate recognition, which requires complex image analysis since manual feature
selection is difficult.
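For those curious about how such a model looks in code, here is a minimal sketch of a CNN for 28×28 handwritten-digit images, assuming the TensorFlow/Keras library is installed. It only shows the layer structure; it is not a complete training script.

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),                 # 28x28 grayscale images
    layers.Conv2D(16, (3, 3), activation="relu"),    # convolutional layer extracts features
    layers.MaxPooling2D((2, 2)),                     # pooling layer shrinks the feature maps
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                                # turn feature maps into one long vector
    layers.Dense(64, activation="relu"),             # fully connected layer
    layers.Dense(10, activation="softmax"),          # one output per digit (0-9)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# Training would then be a single call such as: model.fit(train_images, train_labels, epochs=5)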

Exercises
Objective Type Questions
I. Multiple Choice Questions (MCQs):
1. What is the primary goal of a classification model?
(a) To group similar data points together (b) To predict numerical values
(c) To sort data into predefined categories (d) To discover relationships between data
2. In supervised learning, which of the following is NOT required for training?
(a) Labelled data (b) Training examples
(c) Feedback from the algorithm (d) Unlabelled data
3. Which of the following statements about regression models is correct?
(a) They classify data into groups.
(b) They predict a category for input data.
(c) They predict continuous numerical values.
(d) They discover associations between variables.
4. Which of these is an example of clustering in unsupervised learning?
(a) Identifying spam emails
(b) Grouping customers based on purchasing behaviour
(c) Predicting housing prices based on size and location
(d) Translating English to Hindi



5. What does a Convolutional Neural Network (CNN) primarily deal with?
(a) Text Processing (b) Numerical Prediction
(c) Image and Visual Data Analysis (d) Data Clustering
6. Which type of learning does not require labelled datasets?
(a) Supervised Learning (b) Unsupervised Learning
(c) Deep Learning (d) Reinforcement Learning
7. Which of the following best describes a perceptron?
(a) A type of clustering algorithm
(b) A basic unit of Artificial Neural Networks (ANN)
(c) A tool for text processing in NLP
(d) A data preprocessing method
8. In the context of data terminologies in AI, which of the following is a target variable that an AI model
tries to predict?
(a) Feature (b) Label
(c) Dataset (d) Clustering

II. Fill in the blanks:


1. A regression model uses ..................................... to predict numerical outputs based on input features.
2. Clustering algorithms group data points into ..................................... based on their similarities.
3. In Deep Learning, artificial neural networks are inspired by the structure and functioning of
the ..................................... .
4. In supervised learning, ..................................... are the expected outputs used to train the model.
5. A perceptron applies weights to inputs and passes the weighted sum through an ..............................
function to produce an output.
6. In AI, ..................................... are the input variables used to describe observations, which the model
uses to learn patterns.

III. State whether the following statements are True or False:


1. Association models find patterns and relationships between data items that occur together.
2. Convolutional Neural Networks are used to classify textual data.
3. Supervised learning always requires labelled training data for its operation.
4. Labels are only required in unsupervised learning models.
5. Features are the inputs to a model that help in making predictions.

IV. Assertion and Reasoning Based Questions:



Read the following questions based on Assertion (A) and Reasoning (R). Mark the correct choice as:
(i) Both A and R are true and R is the correct explanation for A.
(ii) Both A and R are true but R is not the correct explanation for A.
(iii) A is true but R is false.
(iv) A is false but R is true.
1. Assertion: Deep learning models require large datasets for training.
Reasoning: Larger datasets improve the generalization of deep learning models.
2. Assertion: Classification models predict continuous numerical outputs.
Reasoning: Regression models are a subset of supervised learning for numerical predictions.
3. Assertion: Clustering algorithms can automatically discover hidden patterns in data.
Reasoning: Clustering requires predefined labels to group data points effectively.



4. Assertion: Convolutional layers in CNNs extract features from image data.
Reasoning: CNNs rely heavily on manual feature engineering.

Subjective Type Questions


I. Unsolved Questions
1. Explain the key differences between Supervised Learning and Unsupervised Learning, providing
suitable examples for each.
2. How do classification models and regression models differ in terms of their goals and outputs?
Provide one example for each.
3. Discuss the role of feature extraction in clustering algorithms. Why is it essential for effective
clustering?
4. Deep learning models like Artificial Neural Networks (ANN) are said to be inspired by the human
brain. Explain this analogy, focusing on the structure of ANN.
5. Describe the working of a regression model using the example of predicting housing prices. Explain
the steps involved from data collection to prediction.
6. What are Convolutional Neural Networks (CNNs)? Discuss their architecture and how they are used
for image classification. Provide a relevant example.
7. Unsupervised learning is often considered harder to evaluate as compared to supervised learning.
Discuss the challenges in evaluating clustering models and suggest possible solutions.
8. Explain the importance of labelled data in supervised learning. How does the quality of labelled
data impact the performance of machine learning models? Illustrate with example.
9. What is the purpose of an activation function in a perceptron? Explain with example.
10. Differentiate between features and labels in the context of AI models. Provide examples.

II. Case-Based/HOTS Questions


1. Customer Segmentation using Clustering
A company wants to segment its customers based on their purchasing behaviour.
(a) Explain how clustering algorithms can be used to group customers.
(b) What features might the company consider for clustering?
(c) How can the results of clustering help the company improve its business strategy?
2. Predicting Stock Prices with Regression
Suppose you are tasked with building a regression model to predict the future stock price of a
company.
(a) What data would you collect for training the model?
(b) Explain the steps you would follow to build the regression model.
(c) How would you evaluate the performance of your model?
3. Applications of Deep Learning in Healthcare
Deep Learning is promising in revolutionizing healthcare.
(a) Explain how deep learning models such as CNNs are used in medical imaging for disease
diagnosis.
(b) Discuss one specific challenge faced in deploying these models in real-world healthcare
scenarios.
(c) Propose a solution to overcome this challenge.



3 Evaluating Models

Prerequisite: Understand the role of evaluation in the development and implementation of AI systems
(See Book Unit 3, ‘Evaluation’, pages 177-178; Unit 8, ‘Evaluation’, pages 395-398.)

TRAIN-TEST SPLIT
Evaluation: How Do We Know If Our Model is Good?
We have already learned how to train machine learning models using data. But how do we know if the model is any good, i.e., whether it is making good predictions or classifications?
Consider an analogy: You are preparing for an exam. You need to study using the training
material available but you also need a way to test how well you have learned. Similarly, to see
how well our model is performing, we need to evaluate its performance. Evaluation is usually
done using a special method called train-test split. This helps us understand if our model is
actually learning and making good predictions.

Train-Test Split: Learning and Testing


Train-test split method is like dividing your study time and exam time. You learn from a lot
of examples and then you may test yourself on other ‘unseen’ examples to see how well you
have understood the concepts. In Machine Learning, we divide our data into two parts—the
training set and the testing set.
• Training Set: This is the portion of data that is used to train the machine learning
model. It is your study material and solved practice questions. The model learns patterns
and relationships from this data.
• Testing Set: This is the portion of data that is used to evaluate the trained model. It is
the exam. This data is kept separate and is not used during training. The model is given
this unseen data to see how well it generalizes and how accurately it makes predictions.
Example: Evaluating Classification of Cats and Dogs
Problem Statement: Build a machine learning model that can classify different types of
animals like cats and dogs.
The stepwise procedure for the same will be as follows:
1. Gathering Data: Collect many photos of cats and dogs.
2. Splitting Data: Split this data into two parts (Train-Test Split):
• Training Set (70%): Use 70% of the photos to train the model. This is where the model
learns the features of different animals.
• Testing Set (30%): Keep 30% of the photos aside and don’t use them for training. Use
these photos later to test how well the model can identify new animals it has never
seen before.
3. Training the Model: The model studies the features of animals in the training data and
learns what makes a cat a cat and a dog a dog.
4. Testing the Model: After training, give the testing data to the model. It will try to
classify each image.
5. Evaluation: Compare the predictions made by the model for the testing set with the
actual results from the test set. Use this to understand how well the model is doing.
Let us now understand the process visually. Consider the following illustration:
[Image 1: Complete Data (top row) → Complete Data with the split percentage marked (second row) → Split → Training Data and Testing Data]

1. Complete Data: Start with the entire collection of cat and dog images. This is
represented in the top row in Image 1, i.e., ‘Complete Data’.
2. Marked Split: Decide how to divide the data for training and testing. This split is shown
in the second row in Image 1, i.e., ‘Complete Data (With Split Percentage Marked)’, where
the blocks are now colored differently. You may decide to divide the data in 70:30 ratio,
i.e., 70% for training and 30% for testing. (The split ratio can be 80:20, 60:40, 75:25 or
any other ratio depending upon the problem and data. However, a 70:30 split is usually
considered acceptable.)
3. Split Data: Separate the data into two parts, as shown with arrows in the image, which
results in ‘Training Data’ and ‘Testing Data’. This is a random split, having photos of both
cats and dogs in the training dataset as well as in the testing dataset.



[Image 2: The ML model trains on the Training Data, predicts on the Testing Data, and the predicted values are compared with the actual test data values.]

4. Training: Train the model using the ‘Training Data’, as shown in Image 2, by feeding
it to the machine learning model. The model studies these images to learn how to
differentiate a cat from a dog based on features such as color, shape, size, etc.
5. Testing: Once the training is completed, use the ‘Testing Data’ to test the model’s
accuracy. Check how accurately the model classifies the images in the test data that it has
never seen during training.
6. Evaluation: The predicted classifications are then compared to the correct labels in the
testing set. If the model is accurate, the predicted labels should be the same as the correct
labels.
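In code-based workflows, the same split is usually done with a single library call. A minimal sketch with scikit-learn is shown below; the tiny lists X and y are placeholders standing in for real image features and labels.

from sklearn.model_selection import train_test_split

# X holds the features, y holds the labels ('cat' or 'dog'); these are placeholder values
X = [[0.2, 0.7], [0.9, 0.1], [0.4, 0.5], [0.8, 0.3], [0.1, 0.9], [0.6, 0.6]]
y = ["cat", "dog", "cat", "dog", "cat", "dog"]

# 70% of the data for training, 30% kept aside for testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=42)

print(len(X_train), "training examples,", len(X_test), "testing examples")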

UNDERSTANDING ACCURACY AND ERROR


After training and testing the model, you need to measure how well it is performing. Two
important metrics that are used are Accuracy and Error. These help us understand how well
the model is classifying cats and dogs.
• Accuracy: This tells us the percentage of predictions that the model made correctly. It
answers the question: How often is the model right?
• Error: This tells us the percentage of predictions that the model made incorrectly. It
answers the question: How often is the model wrong?
If the accuracy of the model is 90%, this means it correctly classified 90 out of the 100 images
and its error is 10%, i.e., it classified 10 out of the 100 images incorrectly.

CLASSIFICATION METRICS
To understand what is meant by correct or incorrect predictions, we need to define something
called the positive class. In the given example, let us choose to define ‘Cat’ as the positive
class and ‘Dog’ as the negative class. True Positive, True Negative, False Positive and False
Negative are defined with respect to this chosen positive class. Let us look at this with our
classification example:
• True Positive (TP): The number of times the model correctly predicts a cat’s image as
‘cat’, i.e., the image is of a cat.



• True Negative (TN): The number of times the model correctly predicts a dog’s image as
‘not a cat’, i.e., the image is of a dog.
• False Positive (FP): The number of times the model incorrectly predicts a dog’s image
as ‘cat’, i.e., the image is of a dog.
• False Negative (FN): The number of times the model incorrectly predicts a cat’s image as ‘not a cat’, i.e., the image is actually of a cat but is classified as a dog.
Calculating Accuracy: Suppose you used the testing set and the model classified a total
number of 100 images and it performed as follows:
• True Positive (TP): The number of cat images correctly identified as cats = 45
• True Negative (TN): The number of dog images correctly identified as dogs = 40
• False Positive (FP): The number of dog images incorrectly identified as cats = 5
• False Negative (FN): The number of cat images incorrectly identified as dogs = 10
To calculate Accuracy, we use the following formula:
Accuracy = Number of Correct Predictions / Total Number of Predictions
         = (TP + TN) / (TP + TN + FP + FN)
         = (45 + 40) / (45 + 40 + 5 + 10)
         = 85 / 100
         = 0.85 = 85%
This means that the model has correctly classified 85% of the images.
Calculating Error: Error is simply the percentage of incorrect predictions.
Error = Number of Incorrect Predictions / Total Number of Predictions
      = (FP + FN) / (TP + TN + FP + FN)
      = (5 + 10) / (45 + 40 + 5 + 10)
      = 15 / 100
      = 0.15 = 15%
This means that our model classifies 15% of the images incorrectly.
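The same calculations can be done in a few lines of Python, using the counts from the example above:

# Accuracy and Error from the counts used in the example above
TP, TN, FP, FN = 45, 40, 5, 10

total = TP + TN + FP + FN
accuracy = (TP + TN) / total     # correct predictions / all predictions
error = (FP + FN) / total        # incorrect predictions / all predictions

print("Accuracy:", accuracy)     # 0.85, i.e., 85%
print("Error:", error)           # 0.15, i.e., 15%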



Point to Remember
Accuracy and Error are interrelated. Their percentages add up to 100. So, if a model has an accuracy
of 90%, it will have an error of 10%.

Why are Accuracy and Error Important?


• Model Evaluation: These metrics help us understand how well the model performs
overall.
• Model Comparison: We can compare different models to see which one is more
accurate.
• Model Improvement: By identifying where the errors are, we can make decisions to
improve the model.

Ethical Concerns around Model Evaluation: Are our models fair?


While accuracy and error are important measures, they don’t tell us everything about the
models. We also need to think about the ethical implications and how they can affect real
people. Every machine learning model should follow some ethical principles like:
• Fairness: Machine learning models must be fair and not discriminate against anyone.
• Trust: We should be able to trust that the models are working correctly and making
responsible decisions.
• Accountability: We must be able to check, review and understand models and their
decisions.
Some ethical concerns may include the following:
1. Bias: This means that a model is unfairly prejudiced towards or against certain groups.
For example, assume that you trained the cats and dogs classification model only on
photos taken under bright sunlight. The model may work well when classifying cats and
dogs in similar lighting but might perform poorly when classifying images taken in poor
light. Similarly, a model trained only on images of Indian breeds of cats and dogs will perform poorly, and will be biased, if we ask it to classify breeds from another country.
Real-World Example: A model used by a bank to give out loans may be biased against
one gender, resulting in them being unfairly denied loans. Biased models are a problem
because they can reinforce existing inequalities in society and cause real harm.
Solution for Bias: Collect data that fairly represents different people and situations and
try to identify and remove bias in the models through other approaches.
2. Transparency: This refers to ensuring we understand how the model makes decisions
and how we can see what inputs lead to what outcomes. If we have a very accurate
but complex model, we may not know what features the model is using to identify the
images. Is it focusing on the color, the shape, size or something else?
Real-World Example: Consider a model that decides whether someone should be hired
for a job. If we don’t understand ‘why’ it is making its decision, we cannot know if it is
unfair or biased.



Without transparency, we cannot trust a model’s results and cannot catch errors and
biases in time.
Solution for Transparency: Design and use models that are easy to understand
and ensure that they do not make decisions based on irrelevant information, e.g., if the
model is a recruitment tool, make sure that decisions are not made based on a person’s
religion.
3. Accuracy: While accuracy is important, it is not the only thing that matters and it can be misleading. For example, a model could be very good at recognizing cat images but very bad at recognizing dog images. Although the overall accuracy may look good, it will still not be a fair or useful model.
Real-World Example: A model used for medical diagnosis could be accurate for
detecting most diseases but it can fail to detect a rare disease. While the model may look
very accurate, it can still cause serious harm to individuals with rare conditions.
High accuracy can hide unfair or unreliable behaviour from the models. It also doesn’t
convey why an error is occurring.
Solution for Accuracy: Instead of merely focusing on overall accuracy, evaluate the
model based on specific details to ensure fair performance across different scenarios.

Exercises
Objective Type Questions
I. Multiple Choice Questions (MCQs):
1. What is the purpose of a train-test split in machine learning?
(a) To collect data
(b) To evaluate model performance on unseen data
(c) To ensure 100% accuracy of the model
(d) To optimize training data usage
2. Which dataset is used to evaluate the model’s performance?
(a) Training dataset (b) Testing dataset
(c) Validation dataset (d) Full dataset
3. In train-test split method, what does the testing set represent?
(a) Data used for model training
(b) Data used for hyperparameter tuning
(c) Unseen data to check the model’s generalization ability
(d) Data used for increasing model accuracy
4. If a model has an accuracy of 90%, what is its error rate?
(a) 10% (b) 90%
(c) 80% (d) Cannot be determined
5. What does ‘True Negative’ represent in classification?
(a) Model incorrectly predicts the positive class
(b) Model correctly predicts the positive class
(c) Model correctly predicts the negative class
(d) Model incorrectly predicts the negative class



II. Fill in the blanks:
1. The dataset used to train a machine learning model is known as the ..................................... .
2. The formula to calculate accuracy is ..................................... .
3. ..................................... measures the percentage of incorrect predictions made by the model.

III. State whether the following statements are True or False:


1. The training set and testing set should overlap to ensure accuracy.
2. Accuracy and error are complementary metrics and the sum of their percentage equals 100.

IV. Assertion and Reasoning Based Questions:



Read the following questions based on Assertion (A) and Reasoning (R). Mark the correct choice as:
(i) Both A and R are true and R is the correct explanation for A.
(ii) Both A and R are true but R is not the correct explanation for A.
(iii) A is true but R is false.
(iv) A is false but R is true.
1. Assertion: Train-test split ensures the model is evaluated on unseen data.
Reasoning: Testing data is used to improve the model’s training performance.
2. Assertion: A higher accuracy percentage indicates better model performance.
Reasoning: Accuracy only considers correctly classified examples and ignores incorrect ones.

Subjective Type Questions


I. Unsolved Questions
1. Explain the importance of splitting data into training and testing sets in machine learning.
2. What is the relationship between Accuracy and Error in the context of model evaluation?
3. Derive the formula for accuracy using True Positives (TP), True Negatives (TN), False Positives (FP)
and False Negatives (FN). Provide an example to illustrate its application.
4. Discuss the implications of a high error rate in a machine learning model and suggest strategies to
reduce it.

II. Case-Based/HOTS Questions


1. A machine learning model is used to classify emails as ‘Spam’ or ‘Not Spam’. The model’s classification
results on the testing set are as follows:
• True Positives (TP): 150
• True Negatives (TN): 200
• False Positives (FP): 50
• False Negatives (FN): 30
(a) Calculate the accuracy and error rate of the model.
(b) Explain the significance of True Positives and False Positives in this scenario.
(c) Based on the results, what recommendations can you provide to improve the model’s
performance?



4 Statistical Data

(To be Assessed Through Practicals)

AI FOR EVERYONE
We have learned about the complexities of Machine Learning and Deep Learning but any
AI-based application needs data to understand the problem and to be able to analyze it. Once
we have gathered sufficient data, we can build AI models. Let us understand what statistical
data is and where it is used.

DEFINING STATISTICAL DATA


Statistical data refers to any type of information that is collected and organized in a way that can
be analyzed to find patterns and draw conclusions. It is the information that helps us understand
and make sense of the world. It can be as simple as the number of students in a classroom or
something more complex like monthly rainfall data, the types of items that are being bought in a
supermarket or the price of a house. Statistical data can be categorized as follows:
1. Numerical Data (Quantitative): Data that represents numbers and can be measured
or counted. It can be further divided into:
• Discrete Data: Countable values using whole numbers, e.g., number of students, cars or
items.
• Continuous Data: Measurable values that can take any value within a range, e.g., height,
weight, temperature.
2. Categorical Data (Qualitative): Data that represents categories or groups that are
often described with labels. Categorical data can be of two types:
• Nominal Data: Categories without any order, e.g., gender, color, city.
• Ordinal Data: Categories with a meaningful order but without precise intervals,
e.g., ratings like ‘poor’, ‘good’, ‘excellent’.
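A small pandas sketch is given below to show how these four kinds of data might appear in a dataset; the column names and values are made up purely for illustration.

import pandas as pd

# Illustrative dataset mixing the four kinds of statistical data
df = pd.DataFrame({
    "students_in_class": [32, 28, 35],          # numerical, discrete (countable)
    "height_cm": [151.5, 148.2, 160.0],         # numerical, continuous (measurable)
    "city": ["Mumbai", "Delhi", "Chennai"],     # categorical, nominal (no order)
    "rating": ["poor", "good", "excellent"],    # categorical, ordinal (ordered)
})

# Mark 'rating' as an ordered categorical so the order poor < good < excellent is preserved
df["rating"] = pd.Categorical(df["rating"], categories=["poor", "good", "excellent"], ordered=True)
print(df.dtypes)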

Applications of Statistical Data


Since statistical data is used for making informed decisions, it is used in almost every field.
Some applications of statistical data are as follows:
• Science: To analyze experimental results, test new medicines, analyze weather patterns
and understand how plants grow.
• Business: To understand sales trends, predict popular products, to understand customer
behaviour and how to price products.
• Economics: To track economic growth, measure unemployment rates and predict market
trends.
• Education: To evaluate student performance, assess the effectiveness of teaching
methods and decide how to assign students to different classes.
• Healthcare: To track disease outbreaks, measure the effectiveness of treatments,
manage hospital resources and determine treatment strategies.
• Sports: To track player performance, analyze game strategies and even predict the
outcome of games.
• Government: To plan cities, create infrastructure, manage populations and allocate
resources effectively.

Statistical Data and AI


Traditionally, Artificial Intelligence has been a stronghold of the people with exceptional
mathematical and coding skills. In code-based AI tools, every single step of the process, from
data preparation to model building, is written using programming languages like Python.
Although this approach allows maximum flexibility, it is also the most complex to manage
and requires expert programmers. The computer has to be instructed on how to perform
statistical analysis, what kind of data to consider, what type of models to choose, etc.
However, it is now possible to build AI-based solutions without writing long, complicated
programs using no-code and low-code AI tools. They make it easier for everyone to get started
with AI by analyzing statistical data.

No-Code and Low-Code AI


No-Code AI is a collection of tools which can help build AI models
without writing any code at all. These platforms provide visual
tools in which you can drag and drop components and build AI
models by connecting them to each other in a visual way. For
example, Orange Data Mining is a no-code AI tool that allows
the use of pre-built functions to perform data analysis and build
models using a simple drag-and-drop interface.
1. Statistical Analysis: In Orange, you can create visualizations such as histograms and
box plots without any programming.
2. Machine Learning: You can build machine learning models without needing to code.
You can also try different models and see which one is giving better performance.
Need for No-Code Tools
No-code tools simplify AI and data science by allowing users to build models without
programming knowledge. They enable businesses, educators, and non-technical users to utilize
AI for tasks like data analysis, automation, and predictions without writing complex code.
Advantages of No-Code Tools
1. No-code tools make AI and data science accessible to users without technical expertise,
allowing them to use AI without learning programming.
2. These tools significantly reduce the time required to build and deploy AI models.



3. With drag-and-drop interfaces and visual workflows, no-code tools make AI model
development relatively easier and user-friendly.
4. Businesses and individuals can save costs by reducing their dependence on AI developers
and data scientists, making AI implementation more affordable.
5. No-code platforms allow users to quickly test and iterate on AI models, making
experimentation easier without needing extensive technical knowledge.
Despite these advantages, no-code tools still cannot match code-based AI development, owing to the disadvantages mentioned below.
Disadvantages of No-Code Tools
1. Since users rely on predefined functions and templates, there is scope for limited
customization only. Users may not be able to fine-tune models or implement advanced
custom features.
2. No-code tools are often designed for small to medium-scale projects and may struggle
with handling large datasets or complex AI applications.
3. Users are dependent on the tool providers like Orange for updates, feature
enhancements and security, which can limit flexibility and control.
4. Unlike traditional coding approaches, where every aspect of an AI model can be
modified, no-code tools offer limited control over algorithmic decisions and performance
optimizations.
Examples of No Code AI Tools
1. Azure Machine Learning: A cloud-based AI platform by Microsoft that allows users to
build, train and deploy machine learning models without writing code.

2. Google Cloud AutoML: A suite of no-code machine learning tools by Google that
enables users to train AI models for tasks like image recognition, text classification and
translation without coding. It uses Google’s powerful AI infrastructure to automatically
optimize models based on provided data.



3. Orange Data Mining: An open-source, visual programming tool for data science and
machine learning that allows users to analyze data, build models and visualize insights
using a simple drag-and-drop interface. It is widely used in education and research for
exploratory data analysis and predictive modelling.
4. Lobe AI: [Now discontinued] A no-code AI tool by Microsoft designed for training
machine learning models, particularly for image classification tasks. Users can simply
upload labelled images and Lobe automatically trains a model that can be deployed in
various applications without writing any code.
5. Teachable Machine: A web-based tool by Google that lets users train AI models in
real-time using their webcam, microphone or images. It is designed for beginners and
educators, allowing them to create and experiment with AI models for classification tasks
such as recognizing objects, sounds or poses.
Low-Code AI tools, on the other hand, help you build AI models with very little coding. A
low-code tool will give you a similar visual interface as a no-code tool for ease of use. However,
it will allow you to add some custom coding for more advanced tasks, data analysis or to create
more complex models.
AI Tools vis-à-vis Statistical Data
The following comparison shows how the various AI tools differ from each other while working with statistical data.
High Code:
• Requires manual coding for statistical operations
• Code is required to create models
• Highest level of customization
• High programming knowledge required
No-Code AI:
• Provides visual tools for analysis
• Model creation with drag-and-drop tools
• Limited customization
• No programming knowledge required
Low-Code AI:
• Offers visual tools with coding for customization
• Model creation with visual tools and code options
• Moderate level of customization with custom coding
• Moderate programming knowledge required



No-code and Low-code AI tools are gaining importance not just because of the popularity of
Artificial Intelligence but also because of the following reasons:
• Accessibility: These tools make statistical data analysis and AI development accessible
to more people, even to those with limited coding skills.
• Speed: They enable faster model development and quicker insights into statistical data.
• Innovation: They promote innovation by allowing more people to create new ways of
analyzing data and build new AI applications.
Statistical data is a valuable resource that is used in many fields. No-code and Low-code AI
provide simpler and easier ways to analyze data.
Let us explore the Orange Data Mining tool in detail, with a simple project.

ORANGE DATA MINING TOOL


Orange is a free, open-source and user-friendly no-code AI tool that is designed for data
visualization, analysis and Machine Learning. It provides an intuitive drag-and-drop interface,
allowing even users without programming knowledge to design workflows and analyze data.
Orange is ideal for teaching, rapid prototyping and exploring datasets interactively. It supports
tasks like classification, regression, clustering and more through pre-built widgets.
Key Features
• Interactive data visualization (scatter plots, box plots and more)
• Machine learning algorithms for classification and regression
• Add-ons for text mining, bioinformatics and time series analysis
• Easy-to-understand workflows, making it suitable for beginners and educators

Installation Steps
Prerequisites: Ensure Python (3.6 or a newer version) is installed on your computer. You can
download Python from python.org
Install Orange: The software can be installed from command line or from an installer. You
can also visit the official website https://orangedatamining.com/download/



Follow the instructions in the setup wizard to complete
the installation. Upon installation, Orange presents the
user with a Welcome screen, as shown in the following
image. You may click Video Tutorials to learn more
about the tool.

To access the blank interface of Orange Data Mining tool, click New and the following screen
will appear.

Case Study
LOAN CLASSIFICATION
Let us discuss a case study where we will classify loan application status using loan data, containing
two files, one each for training data and testing data. The dataset, which can be downloaded from
https://bit.ly/loan-data, has been used here along with the hands-on example to learn more about Orange
Data Mining tool.
Understanding Use Case: Loan Classification
In this case study, we will build an AI model to classify loan applications. This is a common problem faced
by banks and financial institutions.
• Problem: Banks receive many loan applications every day. Making decisions on which loans to
approve and which to deny can be quite time-consuming.
• Solution: We will use historical data to train a model that can classify loan applications as ‘Approved’
or ‘Denied’, based on the information provided.
• Objective: The aim is to automate the decision-making process.
Steps for AI Project Cycle
We will consider AI project cycle steps for this project. Thus, the steps for our project will include the following:
1. Problem Definition: Classify loan applications as ‘Approved/Yes’ or ‘Denied/No’ based on the
features provided. We wish to automate the decision-making process using historical data to create
a classification model to predict whether a bank will approve a loan application or not. The banks
usually decide whether to give loan to an applicant or not based on some factors, which are provided
as features in the dataset.



2. Data Collection/Acquisition: Consider the following for this step:
Dataset provided: The folder Loan Data.zip contains historical data about loan approvals (both
Training and Testing Data).
Features: The dataset includes features such as:
Loan_ID: A unique loan ID; Gender: Either male or female; Married: Whether Married (Yes) or Not
Married (No); Dependents: Number of persons dependent on the client; Education: Applicant’s
Education (Graduate or Undergraduate); Self_Employed: Self-employed (Yes/No); Applicant Income:
Applicant’s income; Co-applicant Income: Co-applicant’s income; Loan Amount: Loan amount in
thousands; Loan_Amount_Term: Term of loan in months; Credit_History: Whether credit history
meets guidelines; Property_Area: Applicants living in Urban, Semi-Urban or Rural area; Loan_Status:
Loan approved (Y/N)*.
(*Important—The last feature is not available in testing data as this is our target variable.)
3. Verify data quality: Ensure no missing columns or corrupted entries exist; else either use impute
operation or remove rows with missing data.
4. Data Preparation (Data Cleaning and Preprocessing): Prepare data for training the model by
loading, exploring, cleaning and preprocessing the data.
5. Exploratory Data Analysis (EDA): Explore and visualize the data to understand its characteristics
and find any patterns or outliers.
6. Feature Engineering: Transform and prepare your data to make it suitable for the model, e.g., clean
and encode any categorical data.
7. Model Building: Select specific models to train and build your machine learning model.
8. Model Evaluation: To see how well the trained model performs, measure its accuracy and
performance using multiple models and observe which one performs best.

NO-CODE AI – ORANGE DATA MINING


Now, let us perform the following steps in Orange Data Mining using a no-code interface.
Before we begin, download the compressed folder available on https://bit.ly/loan-data and
unzip the contents to your computer. This zipped folder contains two files – train_loan.csv
and test_loan.csv. We will use the train_loan.csv file for training our model.

Data Acquisition
Step 1: Load Data: Start Orange and load train_loan.csv dataset using File widget, as
shown in the following screenshots. This will import the data into our tool. Consider the
following steps:
Step 1(a): Drag File widget and drop in the workflow area.



Step 1(b): Double-click it to open the file selection interface. Select the file by clicking on the
folder icon, which opens a menu.

Step 1(c): Select the downloaded train_loan.csv file from your computer.

Step 1(d): Check the data available in the train_loan.csv file by double-clicking the File
widget.



Step 1(e): Right-click the File widget and Rename it as TrainingData.

Step 1(f): Repeat the steps to import test_loan.csv on a File widget and rename it as
TestData.

Exploratory Data Analysis


Step 2: Explore Data—After loading the data, it is important to check the properties of data.
Double-click the TrainingData again and check the Info part of the imported file. You can
observe that 2% values are missing in 11 features.

Which features have missing values? Let us find out with the help of Feature
Statistics widget. Drag and drop the Feature Statistics widget on the
workflow canvas. Observe carefully that this widget has two surrounding
dotted lines on both left and right flanks as compared to only one
dotted line in the File Widget.
These dotted lines which flank a widget are the Input and Output Interfaces for the widgets,
which help them connect to other widgets.
Step 2(a): Connecting Widgets—Try connecting the right flank dotted line (output
interface) of TrainingData to the left flank dotted line (input interface) of Feature Statistics
by dragging a line from TrainingData to Feature Statistics. The result of joining the two is
shown below:



This joining operation automatically sends data from the TrainingData (File widget) to Feature
Statistics. Let us observe the properties of the 11 features available in our training data by
double-clicking the Feature Statistics widget. This widget shows the statistical distribution
properties of each feature.

The last column shows the number of missing values in each feature. We can see that Credit
History is missing in 50 out of the 614 records. We can either drop those rows for improving
data quality or use imputation to fill the missing values with average values in the feature.
Step 3: Imputation—Add the Impute widget available under
Transform tab and connect TrainingData to Impute. Data flows
automatically to the Impute widget upon connection.
Let us apply an imputation operation by double-clicking the
Impute widget. We can choose the default impute settings
under ‘Default Method’ where we have chosen ‘Average/Most
Frequent’ value for filling in the missing values.
In case we wish to apply a different imputation method to a specific feature, select the feature
in the list and choose the preferred method.
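For comparison with the no-code approach, the same ‘Average/Most Frequent’ idea takes only a few lines of pandas code. The column names below follow the dataset description given earlier; treat the exact spellings in the CSV file as assumptions.

import pandas as pd

# Assumes train_loan.csv has been downloaded to the working directory
df = pd.read_csv("train_loan.csv")

# Numeric feature: fill missing values with the column average
df["LoanAmount"] = df["LoanAmount"].fillna(df["LoanAmount"].mean())

# Feature with repeated values: fill missing entries with the most frequent value
df["Credit_History"] = df["Credit_History"].fillna(df["Credit_History"].mode()[0])

print(df.isna().sum())    # number of remaining missing values per column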



Do it Yourself
Connect another Feature Statistics widget to Impute and see that all missing values have
been handled.

Step 3(a): Selection of Target


In TrainingData, the columns are classified as Numeric as well as Categorical Features. In
supervised learning, we work with both features and labels, where the label is the output the model is expected to predict. For our Loan Classification model, Loan Status will serve as our label since
we are trying to predict whether or not a loan will be approved.

To achieve this, we must update the Feature Type of Loan Status from a Categorical Feature
to a Categorical Label. Let us use the Select Columns widget under the Transform tab
and connect the output of Impute widget to it.
To select Loan_Status as the target label, we can drag and drop the Loan_Status from
features to the Target window as illustrated below:

Step 4: Split Data


Having completed the exploratory data analysis, we may
now proceed to modelling. Let us split our training
data into two parts, one for training the model and
the other for evaluating its performance. This is
for the purpose of applying some Machine Learning
algorithm (or Learner) and checking its performance.



Insert the Data Sampler widget from under the Transform tab and connect the Select
Columns widget to Data Sampler.
This widget helps us in selecting different sampling strategies, including
Fixed Proportion, with random sampling of a fraction of data (like 80%)
using a slider OR fixed sample size with a chosen number of instances
from the training dataset OR cross-validation with a specified number of
subsets. We are choosing 80% data under Fixed Proportion of Data.
After positioning the sample slider to 80%, click on the Sample Data
button.
Let us observe the split data by adding a Data Info widget from under
Data tab and connecting Data Sampler to it.

But which part of the data split is this data info showing?
Let us find out by clicking on the ‘Link Label’ as highlighted below. You can highlight the link
label by a single click while a double click shows which part of the split data is populated in the
Data Info.
The data sampler interface has two output check
boxes and a line showing which part of the split
data is passed on to the Data Info widget. By
default, the data sample is passed to Data Info.
Let us inspect the remaining data by adding
another Data Info and connecting Data Sampler to
this widget.
Double-click the link label and edit it to remove the existing connection between the Data Sampler and this Data Info widget, and create a different connection from ‘Remaining Data’ to the Data Info widget. You can edit the connection by clicking on the connecting line and then drawing another line between the boxes you wish to connect.
Now you can observe both 80% data in the first Data Info widget and 20% remaining data
in the second Data Info widget by double-clicking to see the properties. The Link Labels are
automatically updated as well!



Step 5: Modelling
It is now time to give our data to a learning
model or a Learner to learn the patterns from
it. For this purpose, we will use a combination
of two widgets—a learner from the Model
tab and Test and Score widget from the
Evaluate tab as illustrated.
We shall add the Test and Score widget first and connect it to the
Data Sampler widget. You must ensure that Sample Data is
connected to the widget and not the remaining data. The flow
of data in widgets will appear as illustrated below.

The configuration of the link label between the Data Sampler and the Test and Score widget is also shown alongside.
After adding the Test and Score widget, connect your
chosen learner (Machine Learning algorithm) to Test and
Score widget from the Model tab as illustrated below:



In this case, we are applying a Logistic Regression Learner to the workflow.
Upon successful connection, the Test and Score widget will show the output of evaluation
when we double-click it.

We have used a Random Sampling Method with a Training Size of 80%. But we can also let the Test and Score widget evaluate the Learner by connecting the remaining data to it for testing its performance.
Let us make another connection from the data sampler to the Test and Score widget but this
time we will change the link label to use Remaining Data. Also, double-click Test and Score
widget and select the Radio Button—Test on Test Data.
All settings are illustrated below for clarity.



Check the model performance again by double-clicking Test and Score widget which may have
different results for the evaluation parameters after introducing evaluation data from data
sampler as well.

Let us apply another learner to check comparative performance of two Machine Learning
algorithms on the same training dataset.
We shall use Random Forest as the second learner and connect this learner to Test and Score,
keeping all other connections same as before.

Double-click Test and Score widget for a comparative analysis after applying both learners.



Check the difference between the performance of the two learners. The learners can also be
configured by double-clicking and changing parameter values, but we are using default values
without any change for both learners.
We can add further evaluation by connecting the output of Test and Score widget to a
Confusion Matrix widget and ROC Analysis widget as illustrated below:

The outputs of both Confusion Matrix and ROC Analysis can be obtained by double-clicking
the widgets and we have already learned how to understand the outputs in previous chapters.
Use the model with best results after testing all classification models in the same way.

Step 6: Prediction on Unseen Data:


To make predictions on new unseen data, we shall use the Predictions widget from Evaluate
tab, and connect TestData to this widget. Follow the steps to connect the learners to the
Predictions widget.



1. Connect Logistic Regression learner to the Predictions as illustrated below:


2. The connection line appears dashed or broken. This happens because we have not
connected the training data to our learner. Let us connect the data sampler to Logistic
Regression learner and test predictions again.
3. We can add the Tree learner to the Predictions widget and connect the Data Sampler to
Tree Learner in the same way. The complete flow is shown below.

In the end, double-click the Predictions widget, which will show the predicted loan approval status for each entry in the TestData for both learners, in two separate columns.



Congratulations! You have successfully applied machine learning for Loan Classification
prediction using a no-code tool and have applied the AI project lifecycle to a real-life problem.
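For contrast, the sketch below shows roughly the same workflow (load, fill missing values, encode categories, split 80:20, train two learners, score) written with pandas and scikit-learn. It is an illustration of what the widgets do behind the scenes, not the method Orange uses internally; column names follow the dataset description above and should be treated as assumptions.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# Load the training data (assumes train_loan.csv is in the working directory)
df = pd.read_csv("train_loan.csv").drop(columns=["Loan_ID"])

# Simple preparation: fill missing values and encode text categories as numbers
for col in df.columns:
    if df[col].dtype == "object":
        df[col] = df[col].fillna(df[col].mode()[0]).astype("category").cat.codes
    else:
        df[col] = df[col].fillna(df[col].mean())

X = df.drop(columns=["Loan_Status"])     # features
y = df["Loan_Status"]                    # target label

# 80:20 split, mirroring the Data Sampler widget
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

# Two learners, mirroring Logistic Regression and Random Forest in the workflow
for name, model in [("Logistic Regression", LogisticRegression(max_iter=1000)),
                    ("Random Forest", RandomForestClassifier(random_state=1))]:
    model.fit(X_train, y_train)
    print(name, "accuracy on the held-out 20%:", round(model.score(X_test, y_test), 3))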

Activity 1
PALMER PENGUINS
Objective: Analyze the Palmer Penguins dataset using Orange Data Mining to classify penguin
species based on physical measurements.
Steps to Engage:
1. Data Preparation:
• Download the Palmer Penguins dataset from https://bit.ly/4atlCN7
• Ensure the dataset is clean and organized for importing into Orange.
2. Load Data into Orange:
• Open Orange and create a new workflow.
• Drag and drop Import Data widget to load the Palmer Penguins dataset.
• Connect Data Table and Scatter Plot widgets to explore the dataset visually.
3. Feature Extraction and Classification:
• Use Image Embeddings widget to extract relevant features, if using images, or directly
proceed with Data Table, if using numerical features.
• Add a Classifier Learner (e.g., Logistic Regression or Random Forest) and connect it
to Test & Score widget.
4. Evaluation and Analysis:
• Evaluate the model using Test and Score widget.
• Observe metrics like accuracy, precision and recall.
• Connect Confusion Matrix widget to analyze misclassifications.
Evaluate the model’s performance and suggest improvements.
Discuss the practical implications of using no-code tools for data analysis.
Resource Name: Palmer Penguins Case Study in Orange Data Mining
Link to Dataset: https://bit.ly/4atlCN7



Exercises
Objective Type Questions
I. Multiple Choice Questions (MCQs):
1. Statistical data is defined as:
(a) Only quantitative data collected for experiments
(b) Organized information that can be analyzed for patterns
(c) Only data used in machine learning models
(d) Data collected randomly without categorization
2. Which of the following is an example of nominal data?
(a) Height of students (b) Ratings: Poor, Good, Excellent
(c) Cities: New York, Mumbai, Tokyo (d) Monthly rainfall
3. What is the main advantage of using no-code AI tools?
(a) They require no human intervention.
(b) They automate AI decision-making entirely.
(c) They allow users without programming knowledge to create models.
(d) They do not require data preprocessing.
4. Discrete data refers to:
(a) Continuous variables that can take any value
(b) Data representing categories without any specific order
(c) Countable values such as number of students
(d) Data without numerical representation
5. What is one primary application of statistical data in healthcare?
(a) Designing software for autonomous vehicles
(b) Tracking disease outbreaks and treatment effectiveness
(c) Developing user-friendly AI tools
(d) Planning urban infrastructure
6. Which statement is not true about low-code AI tools?
(a) They require basic programming knowledge for advanced tasks.
(b) They offer drag-and-drop visual tools for model building.
(c) They provide the highest level of customization among AI tools.
(d) They are easier to use as compared to code-based AI tools.

II. Fill in the blanks:


1. ..................................... data represents numerical values that can be counted or measured while
categorical data represents categories or labels.
2. Statistical analysis can be used in ..................................... to evaluate teaching methods and student
performance.
3. ..................................... is an example of a no-code AI tool that is used for data visualization and
analysis.
4. Machine learning algorithms in no-code AI tools typically require ..................................... inputs,
requiring categorical variables to be converted.



III. State whether the following statements are True or False:
1. Nominal data has a meaningful order while ordinal data does not have a meaningful order.
2. No-code AI tools are popular because they make statistical data analysis accessible to
non-programmers.

IV. Assertion and Reasoning Based Questions:



Read the following questions based on Assertion (A) and Reasoning (R). Mark the correct choice as:
(i) Both A and R are true and R is the correct explanation for A.
(ii) Both A and R are true but R is not the correct explanation for A.
(iii) A is true but R is false.
(iv) A is false but R is true.
1. Assertion: Low-code AI tools provide a visual interface for building AI models.
Reasoning: They eliminate the need for any programming knowledge or customization.
2. Assertion: Statistical data helps predict customer behaviour in business.
Reasoning: It involves collecting and analyzing organized data for identifying patterns and
trends.

Subjective Type Questions


I. Unsolved Questions
1. Define statistical data and explain its types using examples.
2. Discuss the differences between No-Code AI, Low-Code AI and Code-Based AI in terms of
customization and ease of use.
3. Explain the role of statistical data in healthcare and give two real-world examples of its application.
4. Explain the AI project lifecycle with a step-by-step example of a classification problem using
statistical data.
5. Discuss the advantages and limitations of using no-code AI tools like Orange Data Mining for
statistical data analysis.
6. Categorical data needs to be converted to numerical data for most machine learning algorithms.
Justify the given statement using examples and explain the methods used for such conversions

II. Case-Based/HOTS Questions


1. A supermarket chain wants to predict the sales of a newly introduced product based on historical
data. The dataset includes features such as product category, store location, pricing and previous
sales.
(a) Outline how you would use statistical data and AI tools to build a predictive model for this
problem.
(b) Include the steps involved in data preprocessing, feature engineering, model building and
evaluation.
2. A bank has been using statistical data to identify patterns of fraudulent transactions. Recently, they
transitioned from code-based AI to low-code AI for this task.
(a) Evaluate the possible benefits and challenges of this transition.
(b) Suggest how low-code AI tools can still ensure high accuracy and customization while being
user-friendly.



5 Computer Vision

Prerequisite: Basic Understanding of Computer Vision


(See Book Unit 6, ‘Computer Vision’; Unit 5, ‘Classification’ pages 295–299; Unit 8, ‘Evaluation Metrics’,
pages 398–410)
(To be Assessed Through Practicals)

OVERVIEW OF COMPUTER VISION


Computer Vision (CV) is a field of Artificial Intelligence (AI) that enables machines to see,
interpret and understand images or videos like humans. It uses AI algorithms to recognize
objects, detect patterns and extract meaningful information from visual data.
Example Applications:
• Face recognition (face ID, security cameras)
• Self-driving cars (detecting pedestrians, traffic signs)
• Medical imaging (identifying diseases in X-rays or MRIs)
Computer Vision and AI
Computer Vision is a subfield of AI that focuses on replicating the visual perception of human beings. AI
provides the learning techniques (such as Machine Learning and Deep Learning) that allow CV
models to recognize and analyze images intelligently.
• AI teaches computers to recognize patterns in data.
• Computer Vision applies AI to visual information (images/videos) for tasks like object
detection and classification.

Computer Vision thus sits within the broader field of Artificial Intelligence, drawing on Machine Learning and Deep Learning techniques.
Difference between Computer Vision and Image Processing
• Definition: Computer Vision (CV) is an AI-driven technology that allows computers to interpret images/videos and make decisions, whereas image processing refers to enhancing or modifying images using techniques like filtering, resizing and color adjustment.
• Purpose: CV helps in understanding visual content, e.g., object recognition and motion tracking, while image processing helps in improving image quality, e.g., noise removal, sharpening and color correction.
• Techniques: CV uses deep learning and neural networks for feature extraction and object detection, while image processing implements image filters, transformations, pixel manipulation and other simple techniques.
• Examples: CV is used in facial recognition, self-driving cars and medical diagnostics; image processing is used for image resizing, contrast enhancement and watermark removal.
CV is a subset of AI and strongly relies on AI and Machine Learning while image processing
does not necessarily use AI and is usually done using mathematical techniques.
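The difference can also be seen in code. The following is a minimal sketch of classic image-processing operations that involve no AI at all, only pixel-level transformations. It assumes the Pillow library is installed and that an image file named flower.jpg (a hypothetical file name) exists in the working directory.

# Minimal image-processing sketch (no AI involved): resize, grayscale and blur.
# Assumes the Pillow library is installed and a file named 'flower.jpg' exists.
from PIL import Image, ImageFilter

img = Image.open("flower.jpg")                             # load the image from disk
small = img.resize((224, 224))                             # resize to fixed dimensions
gray = small.convert("L")                                  # convert to grayscale
smooth = gray.filter(ImageFilter.GaussianBlur(radius=2))   # reduce noise by blurring
smooth.save("flower_processed.jpg")                        # save the modified image

A computer vision system would go a step further and pass such pixels to a trained model that decides what the image actually contains.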

APPLICATIONS OF COMPUTER VISION


• Face Filters (AR Filters – Snapchat, Instagram): Face filters use facial recognition
technology of Computer Vision to detect
important facial landmarks like eyes, nose and
mouth. Additionally, Augmented Reality (AR)
is then used to overlay objects such as dog
ears, makeup effects or animated masks on
the detected face.
These filters track facial movements in real time using Computer Vision and produce
interactive effects that adjust dynamically as the user moves.
• Google Search by Image (Reverse Image Search):
Google’s Reverse Image Search tool allows users to
upload an image to find visually similar images and
related information. It uses CV’s image recognition
technology to analyze the uploaded picture and match
it against a vast database.
This feature is useful for identifying landmarks,
finding the source of an image or locating shopping
links for products. For instance, if you upload a picture
of a sneaker, Google can find similar sneakers and direct you to online stores selling
them.
• Google Lens (Visual Search and Object Recognition): Google Lens is an AI-powered
image recognition tool that helps users identify objects, translate text and extract useful
information from images in real time.
With the Google Lens app, you can aim your smartphone camera at an object, text or
landmark, and the app uses computer vision to recognize what it is, providing relevant
details by matching it against its stored information. The app can also perform actions such
as scanning QR codes, translating text or identifying plants and animals.



No-Code Computer Vision Tools
No-code computer vision tools allow users to build and deploy AI models for image
recognition, object detection or segmentation tasks without writing any code. These platforms
provide easy-to-use interfaces, allowing users to upload images, train models and apply AI to
real-world tasks.

Examples of No-Code Computer Vision Tools


1. Teachable Machine (by Google): A simple web-based tool to train AI models for image
and pose recognition. This also works with sound recognition and classification.
2. Lobe (by Microsoft): A user-friendly desktop app for training AI models using drag-
and-drop features. (This application is no longer under development.)
3. Orange Data Mining Tool: A desktop application with easy drag-and-drop components
for computer vision tasks such as classification.
4. Segment Anything (by Meta): A powerful AI tool that can automatically segment and
identify objects in any image without manual labelling.



Do it Yourself
Visit Teachable Machine (https://teachablemachine.withgoogle.com/) and start a new
Image Project. Create two classes—Biodegradable Waste and Non-Biodegradable Waste.
Use your webcam to capture at least 20 images for each type of waste. Click Train Model
and wait for the AI to process the data.
Test the model by showing different waste products to the webcam and observe if it correctly
classifies them. Experiment with different lighting conditions and backgrounds to see how
they affect accuracy.

CLASSIFYING DANDELIONS vs SUNFLOWERS USING ORANGE DATA MINING


Let us use Orange Data Mining for classifying dandelion and sunflower images. If the Image
Analytics tab in Orange is not visible, you need to install Image Analytics Add-on. Follow these
steps:
Step 1: Install Image Analytics add-on.
• Open Orange3; go to Options → Add-ons.


• In the search bar, type Image Analytics. Select Image Analytics and click OK.
• After adding the extension, load Orange3 and create a New Workflow (File → New).


Step 2: Load image data.
• Drag Import Images widget onto the canvas.
• Click the widget and select the “Directory” to
upload dandelion and sunflower images from your
computer. (Do ensure that images in the folders
are correctly labelled.)



• Check if the images have been imported correctly by using Image Viewer widget
and connecting it to Import Images widget.


Step 3: Extract image features.
• Add Image Embedding widget. Connect Import Images to Image Embedding.
• Choose an Embedder (embedding model, e.g., SqueezeNet or InceptionV3).

Jargon Alert: EMBEDDING
Embedding refers to turning an image into numbers that a computer can understand.
When you look at a picture of a dandelion or sunflower, you see colors, shapes and textures.
But a machine doesn’t ‘see’ like we do—it needs numbers.
An embedding model (like SqueezeNet or InceptionV3) is a smart tool that looks at an image
and creates a list of numbers (features) that represent important details like shape, texture and
color patterns.
These numbers (embeddings) help the computer recognize and classify images correctly.
Instead of comparing raw pictures, the model compares these numerical features to decide if
an image is a dandelion or a sunflower.
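The idea of an embedding can also be sketched in Python. The snippet below is only an illustration, not Orange's internal code; it assumes PyTorch, torchvision and Pillow are installed and that an image file named dandelion.jpg (a hypothetical file) is present. A pretrained SqueezeNet turns the image into one long vector of numbers.

# Illustrative sketch: turning an image into an embedding (a vector of numbers)
# using a pretrained SqueezeNet from torchvision. Assumes torch, torchvision and
# Pillow are installed and 'dandelion.jpg' exists in the working directory.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),      # SqueezeNet expects 224x224 RGB input
    transforms.ToTensor(),              # convert pixels to numbers in a tensor
])                                      # (ImageNet normalization omitted for brevity)

model = models.squeezenet1_1(weights="DEFAULT")   # download pretrained weights
model.eval()

img = Image.open("dandelion.jpg").convert("RGB")
x = preprocess(img).unsqueeze(0)        # add a batch dimension: (1, 3, 224, 224)

with torch.no_grad():
    features = model.features(x)                  # convolutional feature maps
    embedding = torch.flatten(features, 1)        # one long vector of numbers

print(embedding.shape)   # e.g., torch.Size([1, 86528]): the image as numbers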



Step 4: Test and Score Widget
• Add Test and Score widget under Evaluate tab in the workflow. Connect Image
Embeddings → Test and Score.
• Test and Score also needs a Classifier Learner for Supervised Learning based on
class labels. Connect Logistic Regression → Test and Score.

• Double-click on Test and Score widget and choose Cross validation or Random
sampling to split data for training and testing.

The results of classification in terms of common classification metrics can be seen by clicking the Test and Score widget after adding the Learner. Check Precision, Recall, etc.




Step 5: Analyze performance with Confusion Matrix.
• Add Confusion Matrix widget. Connect Test and Score → Confusion Matrix.

Double-click Confusion Matrix widget to check results. The matrix will show how
well the model distinguishes between dandelions and sunflowers.

Step 6: Interpret results.


• Confusion Matrix: Displays misclassifications
• Precision and Recall: Assess performance per class
To observe misclassified images, connect another Image Viewer widget to
Confusion Matrix, select the misclassified images on Confusion Matrix and view in
the Image Viewer.



You may also compare another Learner by connecting an additional Learner to
Test and Score. Observe the comparative performance of the Learners. Which is a
better performing model?
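To understand where these metrics come from, they can also be computed directly in Python with the scikit-learn library. The sketch below uses made-up prediction lists for the two flower classes; it is only an illustration and not what Orange runs internally.

# Computing a confusion matrix, precision and recall with scikit-learn.
# The actual and predicted labels below are made-up examples.
from sklearn.metrics import confusion_matrix, precision_score, recall_score

actual    = ["dandelion", "dandelion", "sunflower", "sunflower", "dandelion", "sunflower"]
predicted = ["dandelion", "sunflower", "sunflower", "sunflower", "dandelion", "dandelion"]

labels = ["dandelion", "sunflower"]
cm = confusion_matrix(actual, predicted, labels=labels)
print(cm)                      # rows = actual class, columns = predicted class

# Treat 'dandelion' as the positive class for per-class metrics
p = precision_score(actual, predicted, pos_label="dandelion")
r = recall_score(actual, predicted, pos_label="dandelion")
print("Precision:", round(p, 2), "Recall:", round(r, 2))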



Do it Yourself
1. Compare Performance of Learners
You used two Learners (models) to classify images.
(a) Which learner performed better? Compare their Precision and Recall from Test and
Score widget.
(b) Why do you think one model performed better than the other? Consider factors
like overfitting, dataset size or feature extraction.
2. Interpret the Confusion Matrix
(a) Look at the Confusion Matrix results.
(i) How many dandelion images were misclassified as sunflowers?
(ii) How many sunflower images were correctly classified?
(b) What does this tell you about the model’s strengths and weaknesses?
3. Analyze the ROC Curve
(a) Look at the ROC Analysis results.
(i) What is the AUC (Area Under the Curve) value for each Learner?
(ii) A perfect model has an AUC of 1.0 while random guessing is 0.5. Where do
your models fall?
(b) Which learner has a better ROC curve?
(c) What does this mean in terms of model reliability?
4. Real-World Applications
(a) Where else could image classification like this be used? Think about applications
in medicine, security, self-driving cars, etc.
(b) How might errors in classification impact real-world decisions?

Activity 1
Learn how to plot ROC curves and obtain AUC values using Orange; the output should look like the image below.
Hint: Connect another widget from the Evaluate tab to Test and Score.
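If you later want to reproduce this analysis in code, a ROC curve and its AUC can be plotted with scikit-learn and matplotlib, as in the minimal sketch below; the labels and scores are invented for illustration.

# Plotting a ROC curve and computing AUC with scikit-learn and matplotlib.
# The true labels and predicted probabilities below are invented for illustration.
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, roc_auc_score

y_true  = [1, 0, 1, 1, 0, 0, 1, 0]                      # 1 = sunflower, 0 = dandelion
y_score = [0.9, 0.2, 0.7, 0.6, 0.65, 0.1, 0.8, 0.3]     # model's probability of 'sunflower'

fpr, tpr, _ = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)

plt.plot(fpr, tpr, label=f"Learner (AUC = {auc:.2f})")
plt.plot([0, 1], [0, 1], linestyle="--", label="Random guessing (AUC = 0.5)")
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("ROC Curve")
plt.legend()
plt.show()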



Exercises
Objective Type Questions
I. Multiple Choice Questions (MCQs):
1. Which of the following is a common task in Computer Vision?
(a) Image Classification (b) Speech Recognition
(c) Text Summarization (d) Sentiment Analysis
2. Which component is essential for feature extraction in computer vision?
(a) Text Parser (b) Image Embeddings
(c) Audio Decoder (d) Data Scraper
3. In Orange Data Mining, which widget is used to import images for analysis?
(a) Data Table (b) Import Images
(c) Image Viewer (d) Test & Score

Subjective Type Questions


I. Unsolved Questions
1. Explain the role of feature extraction in computer vision and how embedding models assist in this
process.
2. Discuss two real-world applications of computer vision and the challenges associated with
each.
3. Describe the workflow of building an image classification model using Orange Data Mining.



6 Natural Language Processing

Prerequisite: Basic Understanding of NLP


(See Book Unit 7, ‘Natural Language Processing’)
(To be Assessed Through Theory)

FEATURES OF NATURAL LANGUAGES


Natural languages, like English, Spanish and Swahili, have several basic features that make
them unique compared to artificial or programming languages, as explained below:
1. Arbitrariness: The relationship between words and their meanings is mostly arbitrary
(e.g., the word dog doesn’t look or sound like a dog). Different languages use different
words for the same concept (dog in English, chien in French, perro in Spanish).
2. Productivity (Creativity): Humans can create and understand new sentences they
have never heard before. For example, ‘The purple dinosaur is eating spaghetti on the
moon.’ (This is a new sentence but you still understand it.)
3. Discreteness: Language is made up of small, separate units (sounds, letters, words)
that we combine in different ways. For example, changing one sound in ‘cat’ to ‘bat’
completely changes the meaning.
4. Duality of Patterning: Language operates on two levels:
(a) Sounds (phonemes)—meaningless by themselves (e.g., ‘c’, ‘a’, ‘t’)
(b) Meaningful units (words, sentences)—combining sounds to create meaning
(e.g., ‘cat’)
5. Displacement: We can talk about things that are not present (past, future, imaginary).
For example, ‘Dinosaurs lived millions of years ago’ or ‘I will go to Paris next year.’
6. Cultural Transmission: Unlike animal communication (which is mostly instinctive),
humans learn language from their environment. A baby born to English-speaking
parents but raised in Japan will speak Japanese, not English.
7. Variability & Change: Natural languages evolve over time (new words appear,
meanings change). For example, ‘Cool’ once meant ‘cold’ but now it also means
‘awesome’.
(Figure: Unique features of natural languages: arbitrariness, productivity, discreteness, duality of patterning, displacement, cultural transmission, and variability & change.)
APPLICATIONS OF NLP IN EVERYDAY LIFE


Natural Language Processing (NLP) is everywhere. It helps machines understand, interpret,
and respond to human language. Some common real-world applications of NLP are
presented below:
1. Voice Assistants: NLP helps voice assistants such as Alexa, Siri, Google Assistant
understand spoken commands, process them and respond naturally. You may have used,
“Hey Siri, what’s the weather today?” → The assistant processes the request and provides
a response.
2. Auto-Generated Captions and Speech-to-Text: NLP converts spoken language
into written text using speech recognition. YouTube auto-captions videos for
accessibility, and Google Docs’ voice typing feature transcribes spoken words into text.
3. Language Translation: NLP-powered translation models like Google Translate analyze
grammar, sentence structure and context to provide accurate translations. For example,
translating ‘How are you?’ into French (‘Comment ça va?’) or Spanish (‘¿Cómo estás?’).
4. Sentiment Analysis: NLP analyzes text to detect emotions—whether a comment is
positive, negative or neutral. Companies analyze X (earlier Twitter) posts and reviews
to see if customers like or dislike a product.
5. Text Classification: NLP categorizes text into different groups; for example, news apps
organize articles into categories such as sports, politics or entertainment based on their
content. It also helps in spam detection in emails (a short code sketch of this idea is
given below).
6. Keyword Extraction: NLP identifies important words from text to improve search
and content organization. A common use case is Google extracting keywords from a
search query to find the best results.



7. Chatbots & Customer Support: NLP helps chatbots understand and respond to
customer queries.
8. Grammar & Writing Assistants: NLP detects grammar mistakes, suggests better word
choices and improves writing clarity. You must have noticed that Google Docs suggests
better word choices with Smart Compose. Other assistants may include Grammarly, MS
Editor, etc.
9. Auto-Summarization: NLP extracts key points from long documents to generate short
summaries. This helps news apps like Inshorts summarize articles in 60 words or less.
(Figure: Applications of NLP: voice assistants, auto-generated captions, language translation, sentiment analysis, text classification, keyword extraction, chatbots, grammar assistants and auto-summarization.)
Stages of Natural Language Processing (NLP)


Natural Language Processing (NLP) works through a combination of multiple steps that help
machines understand and process human language. The different stages used in processing a
natural language data include the following:
1. Lexical Analysis (Words & Vocabulary): This step breaks the text into individual
words (tokens) and checks their meanings. The process uses a Lexicon which can be
defined as a dictionary of words and their possible meanings. For example, consider the
sentence ‘The cat sat on the mat.’ Lexical analysis splits it into [‘The’, ‘cat’, ‘sat’, ‘on’,
‘the’, ‘mat’].
2. Syntax Analysis (Grammar & Structure): Syntax is the arrangement of words to form
meaningful sentences. After lexical analysis, NLP examines sentence structure to ensure
it follows grammatical rules or syntax. For example, ‘The cat sat on the mat.’ is correct as
per grammar syntax while ‘Sat cat the mat on’ is incorrect (wrong word order).
3. Semantic Analysis (Literal Meaning): The meaning of words in a sentence relies on
Semantics, which may be defined as the study of literal meaning of words in language
based on context. Consider for example, ‘The bank is near the river.’ (bank = riverbank) is
semantically different as compared to ‘I deposited money in the bank.’ (bank = financial
institution). NLP must understand context to choose the correct meaning of words.
4. Pragmatic Analysis (Real-World Meaning): It goes beyond literal meaning to
understand speaker intent and real-world context. This phase considers who is
speaking, the situation, and implied meanings. For example, in the sentence ‘Can you
pass the salt?’, the semantic meaning is asking about physical ability to pass salt but the
pragmatic meaning is a polite request for someone to pass the salt.
5. Logical Inference & Discourse Analysis (Reasoning & Context): It denotes a high-
level language understanding which connects sentences logically and understands longer
conversations.
(Figure: Stages of Natural Language Processing: lexical analysis, syntax analysis, semantic analysis, pragmatic analysis, and logical inference & discourse analysis.)

Consider an example: ‘Riya forgot her umbrella. She got wet on the way home.’ NLP infers
that it was raining, even though the sentence does not explicitly say so. This reasoning is
essential for making sense of the text in a human-like way.
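These stages can also be glimpsed in code. The minimal sketch below uses the NLTK library (assuming it is installed and the required data packages are downloaded): word tokenization corresponds to lexical analysis, and part-of-speech tagging is a first step towards syntax analysis.

# Lexical analysis (tokenization) and a first step of syntax analysis (POS tagging)
# using NLTK. Assumes NLTK is installed and the data packages below are available.
import nltk
nltk.download("punkt")                       # tokenizer data
nltk.download("averaged_perceptron_tagger")  # part-of-speech tagger data

sentence = "The cat sat on the mat."

tokens = nltk.word_tokenize(sentence)        # lexical analysis: split into tokens
print(tokens)        # ['The', 'cat', 'sat', 'on', 'the', 'mat', '.']

tags = nltk.pos_tag(tokens)                  # grammatical role of each token
print(tags)          # e.g., [('The', 'DT'), ('cat', 'NN'), ('sat', 'VBD'), ...]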

Exercises
Objective Type Questions
I. Multiple Choice Questions (MCQs):
1. Which feature of natural languages allows humans to create and understand new sentences they
have never heard before?
(a) Arbitrariness (b) Productivity (Creativity)
(c) Displacement (d) Cultural Transmission
2. In NLP, which stage is responsible for breaking text into individual words or tokens?
(a) Syntax Analysis (b) Lexical Analysis
(c) Semantic Analysis (d) Pragmatic Analysis
3. What is the primary function of sentiment analysis in NLP?
(a) Translating text between languages (b) Detecting grammatical errors
(c) Identifying emotions in text (d) Classifying news articles
4. Which of the following is NOT an application of NLP?
(a) Google Translate (b) Video Compression
(c) Auto-Summarization (d) Speech-to-Text

II. Fill in the blanks:


1. The ..................................... phase of NLP examines sentence structure to ensure it follows grammatical
rules.
2. The process of converting spoken language into written text is known as ..................................... .
3. NLP helps search engines extract ..................................... from user queries to return relevant
results.



III. State whether the following statements are True or False:
1. Natural languages remain static and do not evolve over time.
2. Pragmatic Analysis in NLP helps determine the real-world intent of a sentence.
3. In NLP, the Discreteness feature signifies that languages do not have separate units like
phonemes and words.

IV. Assertion and Reasoning Based Questions:



Read the following questions based on Assertion (A) and Reasoning (R). Mark the correct choice as:
(i) Both A and R are true and R is the correct explanation for A.
(ii) Both A and R are true but R is not the correct explanation for A.
(iii) A is true but R is false.
(iv) A is false but R is true.
1. Assertion: Language Translation tools like Google Translate rely on NLP.
Reasoning: NLP helps in understanding grammar, sentence structure, and context for
translation.
2. Assertion: Speech-to-text systems can perfectly transcribe all spoken words.
Reasoning: NLP-based speech recognition struggles with accents, background noise, and
homophones.

Subjective Type Questions


I. Unsolved Questions
1. Explain the role of Lexical Analysis in NLP with an example.
2. How does NLP help in spam detection in emails?
3. Describe the difference between Semantic Analysis and Pragmatic Analysis in NLP with suitable
examples.
4. Discuss any four real-world applications of NLP and explain how they work.
5. What is tokenization and why is it important in NLP?

II. Case-Based/HOTS Questions


1. A social media company wants to analyze public opinion about a new product launch by processing
user comments. They need to determine if feedback is positive, negative or neutral. Which NLP
technique should be used for this purpose? Explain how it works.
2. An AI-based voice assistant struggles to understand certain user requests accurately. Identify two
possible NLP challenges the assistant might face and suggest solutions to improve its performance.



(To be Assessed Through Practicals)

NO CODE NLP
Natural language processing has progressed very rapidly during the past five years, and its
widespread usage has resulted in the development of tools that can be used by anyone with
basic knowledge, without any programming requirements. Tools like Orange Data Mining
make NLP accessible to beginners through a no-code approach to problem-solving. A
comparative analysis of no-code tools and code-based NLP libraries is presented below.
No-Code NLP Tools vs Code-based NLP Libraries

• Ease of Use: No-code tools (e.g., Orange Data Mining) offer a drag-and-drop interface that does not
require programming knowledge; code-based libraries (e.g., spaCy, NLTK) require coding skills
(Python), setup and scripting.
• Flexibility: No-code tools offer limited customization, mainly pre-built components; code-based
libraries are highly customizable with deep control over NLP tasks.
• Speed & Performance: No-code tools are user-friendly but may be slower for large datasets;
code-based libraries give faster processing with optimized algorithms, especially in spaCy.
• Supported Tasks: No-code tools handle basic NLP tasks like tokenization, sentiment analysis and
word clouds; code-based libraries support advanced NLP tasks such as named entity recognition
(NER), dependency parsing and custom model training.
• Machine Learning Integration: No-code tools provide built-in models but limited fine-tuning;
code-based libraries allow full control over training custom ML models for NLP.
• Use Case Suitability: No-code tools are ideal for beginners, educators and quick exploratory
analysis; code-based libraries are preferred for research, industry applications and
production-level NLP systems.
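For contrast with the no-code route, the minimal sketch below shows how a code-based library such as spaCy performs an advanced task like named entity recognition. It assumes spaCy and its small English model en_core_web_sm are installed; the sentence is just an example.

# Named Entity Recognition (NER) with the code-based library spaCy.
# Assumes spaCy is installed and the 'en_core_web_sm' model has been downloaded.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Google Lens was announced by Google in California in 2017.")

for ent in doc.ents:
    print(ent.text, "->", ent.label_)   # e.g., Google -> ORG, 2017 -> DATE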
One popular application of NLP is sentiment analysis. Sentiment analysis is a Natural
Language Processing (NLP) technique used to determine the emotional tone behind a piece
of text. It classifies text as positive, negative or neutral, helping to analyze opinions and
emotions.
Some applications of Sentiment Analysis include:
• Social Media Monitoring: Analyzing tweets, comments or posts to understand public
sentiment about brands, products or events.
• Customer Feedback Analysis: Companies use it to assess product reviews and improve
services based on customer opinions.
• Stock Market Prediction: Investors analyze news articles and social media sentiment
to predict stock trends.
• Political Opinion Analysis: Used to understand public sentiment towards political
candidates or policies.



SENTIMENT ANALYSIS USING ORANGE DATA MINING
Let us use the Orange Data Mining Tool for a basic sentiment
analysis task on election tweets (now X posts). If you don’t see the
Text Mining tab in Orange, you need to install the Text add-on.
Follow these steps:
Step 1: Install Text Add-on. Open Orange3. Go to
Options → Add-ons.
• In the search bar, type Text. Select Text and click Install. After adding the extension,
let us load Orange and create a New Workflow.
• Open Orange3 again on your computer. Create a New Workflow (File → New).
Step 2: Load Text Data
In the sentiment analysis practical, we will generate a data file first by collecting
tweets or opinions of our friends/family on some recent event and storing them in
a csv format. You may also download the sample data file available at https://bit.ly/electweets
for the purpose of experimentation.
S.No. id Tweet (now X-post)
1. 1 Delhi elections prove that democracy is thriving! #DelhiElections
2. 2 Great leadership wins again in Delhi! #Victory
3. 3 The voter turnout was amazing this year! #DemocracyWins
4. 4 A new era of governance begins in Delhi! #Hope
5. 5 Delhi’s development is on the right track. #Progress

• Use the Data tab and add the CSV File Import widget to the workflow. Double-click
it and select the csv file to be uploaded from your computer.

• Let us load our data to a corpus by adding and connecting the Corpus widget from
the Text Mining tab to the CSV File Import widget.

We may add multiple data files as the corpus signifies a collection of documents which
shall be used for Natural Language Processing tasks.
• To view the contents of the corpus, add the Corpus Viewer widget from the
Text Mining tab to the workflow and connect it to the Corpus widget.

• You can inspect the corpus data by double-clicking the Corpus Viewer widget, which
displays the content of all the documents added to the corpus.

Step 3: Preprocess Text


We can now preprocess the data in corpus by connecting the Preprocess Text widget from the
Text Mining tab.

• Understanding the Preprocess Text


During preprocessing, the text is cleaned and unimportant parts of the text are
eliminated. The text is first transformed by applying some basic cleaning steps,
including:
1. Converting all tokens to lowercase for uniformity
2. Removing accents
3. Removing URLs from the text, if any



We then apply Tokenization and convert text to tokens by breaking it into words
or sentences, or splitting it using whitespaces. In our present example, we are using
a pre-trained tokenizer designed for tweets.
Following this step, we filter numbers, stop words and any unwanted elements using
regular expression filters. We may also use a lexicon to keep only pre-decided tokens
stored in a reference file.
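The same cleaning steps can be expressed in a few lines of plain Python. The sketch below is only illustrative: it uses the built-in re module, a tiny hand-made stop-word list and one sample post; it is not Orange's actual preprocessing code.

# Minimal text-preprocessing sketch: lowercase, remove URLs, tokenize on whitespace,
# filter numbers and stop words. The stop-word list here is a tiny sample.
import re

text = "Delhi elections prove that democracy is thriving! http://example.com #DelhiElections"
stop_words = {"that", "is", "the", "a", "an", "of", "in"}

text = text.lower()                          # 1. lowercase for uniformity
text = re.sub(r"https?://\S+", "", text)     # 2. remove URLs
tokens = text.split()                        # 3. tokenize on whitespace
tokens = [t for t in tokens
          if not t.isdigit() and t not in stop_words]   # 4. filter numbers and stop words

print(tokens)   # e.g., ['delhi', 'elections', 'prove', 'democracy', 'thriving!', '#delhielections']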

Step 4: Sentiment Analysis


Our corpus is now ready for Sentiment Analysis. Add the Sentiment Analysis widget to the
workflow from the Text Mining tab and connect it to the output of Preprocess Text widget.



The Sentiment Analysis widget offers to choose from
various sentiment analysis algorithms, depending on
the type of text we are using and the output that we are
looking for.
In our example, we are using the popular sentiment analysis
model VADER, which analyzes each sentence for positive,
negative and neutral sentiments in its different parts, and then
computes a compound sentiment score for the sentence based on
all three.

You may experiment with different methods like Liu Hu, SentiArt or Multilingual Sentiment to
observe how well they perform.
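The VADER model is also available in Python through the NLTK library, so the widget's output can be approximated in code. The sketch below assumes NLTK is installed and downloads the vader_lexicon data; the last post is invented for contrast.

# Sentiment analysis of sample posts with the VADER model from NLTK.
# Assumes NLTK is installed; the 'vader_lexicon' data is downloaded on first run.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")
analyzer = SentimentIntensityAnalyzer()

posts = [
    "Delhi elections prove that democracy is thriving!",
    "The voter turnout was amazing this year!",
    "Long queues and poor management at some booths.",
]

for post in posts:
    scores = analyzer.polarity_scores(post)   # pos, neg, neu and compound scores
    print(scores["compound"], post)           # compound ranges from -1 (negative) to +1 (positive)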
Also connect a Corpus Viewer and a Data Table widget to the output of sentiment analysis
and observe the output sentiments associated with each post, as illustrated below:



Finally, let us connect a WordCloud widget to the output of Sentiment Analysis and double-
click to observe the relative weightage of the words during our sentiment analysis process. A
word cloud is a visual representation of text data where words appear in different sizes based
on their frequency or importance. More frequent words are displayed in larger fonts, while less
common words appear smaller.
Word clouds are commonly used for text analysis, summarization and data visualization in
applications like social media analysis, feedback reviews and research papers. They help quickly
identify key themes and trends in large text datasets.
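Outside Orange, a similar visualization can be generated in Python with the wordcloud and matplotlib packages (assuming both are installed); the text below is a small invented sample.

# Generating a word cloud from text with the 'wordcloud' package and matplotlib.
# Assumes both packages are installed; the text below is a small invented sample.
import matplotlib.pyplot as plt
from wordcloud import WordCloud

text = ("delhi elections democracy thriving leadership victory "
        "voter turnout amazing governance hope development progress")

wc = WordCloud(width=600, height=400, background_color="white").generate(text)

plt.imshow(wc, interpolation="bilinear")   # display the word cloud image
plt.axis("off")                            # hide axes for a cleaner look
plt.show()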

The complete workflow is presented below for your reference.



Do it Yourself
SENTIMENT ANALYSIS ON MOVIE REVIEWS USING ORANGE DATA MINING
Students will collect movie reviews from a review website (or product reviews from an e-commerce
site), or use a pre-collected dataset (in CSV format). They will use Orange Data Mining to analyze
the sentiments expressed in the reviews. The activity will involve the following steps:
1. Install the Text Add-on in Orange.
2. Load the dataset using the CSV File Import widget.
3. Convert the dataset to a corpus using the Corpus widget.
4. Preprocess the text by removing unnecessary elements and tokenizing the words.
5. Perform Sentiment Analysis using the VADER model in the Sentiment Analysis widget.
6. Visualize the results using WordCloud and Data Table widgets.
7. Interpret the results to determine the overall sentiment trends (positive, negative,
neutral).
You may use the Case Walkthrough available at https://bit.ly/OrangeNLP for reference and
guidance.

Exercises
Objective Type Questions
I. Multiple Choice Questions (MCQs):
1. Which of the following tools can be used for No-Code NLP sentiment analysis?
(a) Jupyter Notebook (b) Orange Data Mining
(c) TensorFlow (d) Visual Studio Code
2. In No-Code NLP, which widget in Orange is used to visualize the frequency of words?
(a) Sentiment Analysis Widget (b) Corpus Viewer Widget
(c) WordCloud Widget (d) Preprocess Text Widget

II. State whether the following statements are True or False:


1. In No-Code NLP, tokenization involves splitting text into individual sentences only.
2. The VADER model in Orange’s Sentiment Analysis widget can analyze positive, negative and neutral
sentiments.

Subjective Type Questions


I. Unsolved Questions
1. Explain how WordCloud widget helps in analyzing text data in Orange.
2. Describe the steps involved in performing sentiment analysis using Orange Data Mining for a set of
tweets.
3. Compare and contrast manual NLP coding with No-Code NLP tools like Orange. Highlight the
advantages and limitations of each approach.



ANSWERS TO OBJECTIVE TYPE QUESTIONS
Chapter 1
I. Multiple Choice Questions (MCQs):
1. (c) 2. (b) 3. (a) 4. (d) 5. (b)
II. Fill in the blanks:
1. Justice 2. Non-maleficence 3. bias
III. True or False:
1. False 2. True 3. False
IV. Assertion and Reasoning Based Questions:
1. (iii) 2. (iii) 3. (iii)

Chapter 2
I. Multiple Choice Questions (MCQs):
1. (c) 2. (d) 3. (c) 4. (b) 5. (c)
6. (b) 7. (b) 8. (b)
II. Fill in the blanks:
1. mathematical equations 2. clusters 3. human brain
4. Labels 5. Activation 6. Features
III. True or False:
1. True 2. False 3. True 4. False 5. True
IV. Assertion and Reasoning Based Questions:
1. (i) 2. (iv) 3. (iii) 4. (iii)

Chapter 3
I. Multiple Choice Questions (MCQs):
1. (b) 2. (b) 3. (c) 4. (a) 5. (c)
II. Fill in the blanks:
1. Training set 2. (TP + TN) / (TP + TN + FP + FN)
3. Error
III. True or False:
1. False 2. True
IV. Assertion and Reasoning Based Questions:
1. (iii) 2. (iii)

Chapter 4
I. Multiple Choice Questions (MCQs):
1. (b) 2. (c) 3. (c) 4. (c) 5. (b)
6. (c)
II. Fill in the blanks:
1. Numerical 2. education
3. Orange Data Mining 4. numerical
III. True or False:
1. False 2. True
IV. Assertion and Reasoning Based Questions:
1. (iii) 2. (i)



Chapter 5
I. Multiple Choice Questions (MCQs):
1. (a) 2. (b) 3. (b)

Chapter 6
I. Multiple Choice Questions (MCQs):
1. (b) 2. (b) 3. (c) 4. (b)
II. Fill in the blanks:
1. Syntax Analysis 2. Speech-to-text 3. Keywords
III. True or False:
1. False 2. True 3. False
IV. Assertion and Reasoning Based Questions:
1. (i) 2. (iv)

Chapter 6 (Practicals)
I. Multiple Choice Questions (MCQs):
1. (b) 2. (c)
II. True or False:
1. False 2. True

