Artificial Intelligence (Unit - 2)
S. Ramya
PGT Teacher, Computer Science Dept
AI Applications and Methodologies
Chatbots
➢ A chatbot is one of the applications of AI that simulates conversation with
humans either through voice commands or through text chats, or both.
➢ We can define Chatbot as an AI application that can imitate a real
conversation with a user in their natural language.
➢ They enable communication via text or audio on websites, messaging
applications, mobile apps, or telephones.
➢ Some interesting applications of Chatbot are Customer services, E-
commerce, Sales and Marketing, Schools and Universities etc.
➢ Natural Language Processing is what allows chatbots to understand your
messages and respond appropriately.
➢ When you send a message with “Hello”, it is the NLP that lets the
chatbot know that you’ve posted a standard greeting, which in turn allows the
chatbot to leverage its AI capabilities to come up with a fitting response. In this
case, the chatbot will likely respond with a return greeting.
➢ Without Natural Language Processing, a chatbot can’t meaningfully
differentiate between the responses “Hello” and “Goodbye”. To a chatbot without
NLP, “Hello” and “Goodbye” will both be nothing more than text-based user
inputs.
➢ Natural Language Processing (NLP) helps provide context and
meaning to text-based user inputs so that AI can come up with the best
response.
➢ Please visit the link to get a first-hand experience of banking related conversations
with EVA : https://v1.hdfcbank.com/htdocs/common/eva/index.html
➢ You can also visit https://watson-assistant-demo.ng.bluemix.net/ for a completely
new experience on the IBM Web based text Chatbot
➢ Apollo Hospitals URL: https://covid.apollo247.com/
Types of Chatbots
Chatbots can broadly be divided into two types:
1. Rule-based Chatbot
2. Machine Learning (or AI) based Chatbot
source: https://dzone.com/articles/how-to-make-a-chatbot-with-artificial-intelligence
Natural Language Processing (NLP)
➢ Do you understand the technology behind chatbots? It is called NLP,
short for Natural Language Processing. Every time you ask Alexa,
Siri or Google Assistant a question, they use NLP to answer it. Now we are faced
with the question: what is Natural Language Processing (NLP)?
➢ Natural language means the language of humans. It refers to the
different forms in which humans communicate with each other – verbal,
written, or non-verbal expressions (sentiments such as sad, happy, etc.).
➢ The technology that enables machines (software) to understand
and process the natural language of humans is called Natural Language
Processing (NLP).
➢ Through NLP, a machine is able to read, write, speak and translate natural
language much like a human.
➢ Therefore, we can safely conclude that NLP essentially comprises natural
language understanding (human to machine) and natural language generation
(machine to human).
➢ NLP is a sub-area of Artificial Intelligence that deals with the capability of
software to process and analyze human language, both verbal and written.
***Example:
Email filters, Digital phone calls
Smart assistants, Spell check
Search results, Language translation
Social Media Monitoring.
➢ A camera/machine captures the image of a number plate; the image is transferred to
the neural network layer of the NLP application, which extracts the vehicle's number
from the image. However, correct extraction of the data also depends on the quality of the
image.
Quick Question?
➢ Question: Where and how is NLP used? Answer: NLP is natural language
processing and can be used in scenarios where static or predefined answers, options
and questions may not work. In fact, if you want to understand the intent and
context of the user, then it is advisable to use NLP.
➢ Let’s take the example of pizza ordering bot. When it comes to pre-listed pizza
topping options, you can consider using the rule-based bot, whereas in case you
want to understand the intent of the user where one person is saying “I am hungry”
and another person is saying “I am starving”, it would make more sense to use
NLP, in which case the bot can understand the emotion of the user and what he/she
is trying to convey.
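To make the contrast concrete, here is a minimal, illustrative Python sketch (not taken from any real chatbot product) comparing a rule-based reply with a tiny keyword-based intent matcher; the options, intents and phrases are made up for the example.

```python
# A toy comparison of a rule-based bot and a tiny intent-matching bot.
# All rules, intents and phrases below are invented for illustration only.

RULES = {
    "1": "You chose Margherita pizza.",
    "2": "You chose Farmhouse pizza.",
}

INTENTS = {
    # intent name -> keywords that hint at it
    "order_food": {"hungry", "starving", "eat", "order"},
    "greeting": {"hello", "hi", "hey"},
}

def rule_based_reply(user_input):
    """Only understands exact, pre-listed options."""
    return RULES.get(user_input.strip(), "Sorry, please choose option 1 or 2.")

def intent_based_reply(user_input):
    """Matches the user's words against keyword sets to guess the intent."""
    words = set(user_input.lower().split())
    if words & INTENTS["order_food"]:
        return "Sounds like you want food. Which pizza would you like?"
    if words & INTENTS["greeting"]:
        return "Hello! How can I help you today?"
    return "Sorry, I did not understand that."

print(rule_based_reply("I am starving"))    # fails: expects '1' or '2'
print(intent_based_reply("I am starving"))  # understood as a food request
print(intent_based_reply("I am hungry"))    # same intent, different wording
```

Real NLP chatbots go far beyond keyword matching, but the sketch shows why understanding intent matters when users phrase the same need in different ways.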
2. Summarization by NLP
NLP can not only read and understand paragraphs or a full article but
can also summarize the article into a shorter narrative without changing its meaning.
It can create an abstract of the entire article.
There are two ways in which summarization takes place – one in which key
phrases are extracted from the document and combined to form a summary
(extraction-based summarization) and the other in which the source document is
shortened (abstraction-based summarization).
https://techinsight.com.vn/language/en/text-summarization-in-machine-learning/
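As a rough illustration of extraction-based summarization, here is a minimal sketch that scores sentences by word frequency and keeps the top ones; the sample text and stop-word list are made up, and real summarizers use far more sophisticated models.

```python
# Minimal extraction-based summarization: score each sentence by the
# frequency of its words and keep the highest-scoring sentences.
import re
from collections import Counter

def summarize(text, num_sentences=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    stop = {"the", "a", "an", "is", "are", "of", "and", "to", "in", "it", "can"}
    freq = Counter(w for w in words if w not in stop)

    # Score each sentence by the total frequency of its (non-stop) words.
    scored = []
    for index, sentence in enumerate(sentences):
        sentence_words = re.findall(r"[a-z']+", sentence.lower())
        score = sum(freq[w] for w in sentence_words if w not in stop)
        scored.append((score, index, sentence))

    # Keep the top sentences, presented in their original order.
    top = sorted(sorted(scored, reverse=True)[:num_sentences], key=lambda t: t[1])
    return " ".join(sentence for _, _, sentence in top)

sample = ("NLP can read long articles. NLP can also summarize articles. "
          "A summary keeps the key phrases of the article. "
          "The weather was pleasant yesterday.")
print(summarize(sample, num_sentences=2))
```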
3. Information Extraction
➢ Information extraction is the technology of finding specific information in a
document or searching the document itself. It automatically extracts structured
information such as entities, relationships between entities, and attributes
describing entities from unstructured sources.
Example
➢ For example, a school’s Principal writes an email to all the teachers in his school -
➢ “I have decided to organize a teachers’ meet tomorrow. You all are requested to
please assemble in my office at 2.30 pm. I will share the agenda just before the
meeting.”
➢ NLP can extract the meaningful information for teachers:
➢ What: Meeting called by Principal
➢ When: Tomorrow at 2.30 pm
➢ Where: Principal’s office
➢ Agenda: Will be shared before the meeting
➢ Another very common example is Search Engines like Google retrieving results
using Information Extraction.
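As a toy version of the idea, the sketch below pulls the "when" and "where" out of the Principal's email using hand-written regular expressions; real systems use trained named-entity recognition models rather than patterns like these.

```python
# Toy information extraction from the principal's email using regular
# expressions written just for this example.
import re

email = ("I have decided to organize a teachers' meet tomorrow. "
         "You all are requested to please assemble in my office at 2.30 pm. "
         "I will share the agenda just before the meeting.")

when_day  = re.search(r"\b(today|tomorrow)\b", email, re.I)
when_time = re.search(r"\b\d{1,2}[.:]\d{2}\s*(?:am|pm)\b", email, re.I)
where     = re.search(r"\bin (my office|the \w+ room)\b", email, re.I)

print("What :", "Meeting called by the Principal")
print("When :", when_day.group(1) if when_day else "?",
      "at", when_time.group(0) if when_time else "?")
print("Where:", where.group(1) if where else "?")
```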
4. Sentiment Analyzer:
➢ It is used to quickly detect emotions in text data, emojis, etc.
➢ Sentiment analysis is contextual mining of text that identifies and extracts
subjective information in source material, helping a business understand the
social sentiment of its brand, product or service while monitoring online
conversations.
➢ Sentiment is the attitudes, opinions, and emotions of a person towards a
person, place, thing, or entire body of text in a document.
➢ Sentiment analysis studies the subjective information in an expression,
that is, the opinions, appraisals, emotions, or attitudes towards a topic, person
or entity. Expressions can be classified as positive, negative, or neutral.
➢ Sentiment analysis works by breaking a message down into topic chunks and
then assigning a sentiment score to each topic.
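A minimal sketch of that idea, using a tiny hand-made word list rather than a trained model, might look like this in Python; production sentiment analyzers use much larger lexicons or machine learning models.

```python
# Tiny lexicon-based sentiment scorer: count known positive and negative
# words and label the overall sentiment. The word lists are invented for
# illustration only.
POSITIVE = {"good", "great", "love", "happy", "excellent", "awesome"}
NEGATIVE = {"bad", "terrible", "hate", "sad", "poor", "awful"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this brand, the service is great"))  # positive
print(sentiment("The delivery was terrible and I am sad"))   # negative
print(sentiment("The parcel arrived on Monday"))             # neutral
```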
5. Speech Processing
➢ The ability of a computer to hear human speech and analyze and understand
the content is called speech processing. When we talk to our devices like Alexa or
Siri, they recognize what we are saying to them. For example:
➢ You: Alexa, what is the date today?
➢ Alexa: It is 18 March 2020.
➢ What happens when we speak to our device? The microphone of the device hears
our audio and plots a graph of the sound frequencies. Just as a light wave has a
standard frequency for each colour, so does sound. Every sound (phoneme) has a
unique frequency graph. This is how NLP recognizes each sound and
composes an individual's words and sentences.
Below table shows the step-wise process of how speech recognition works:
***Speech Recognition System:
Nowadays we have pocket assistants that can do a lot of tasks at just one
command. Alexa and Google Assistant are common examples of the voice assistants
that are a major part of our digital devices.
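For a feel of how speech-to-text can be driven from code, here is a small sketch using the third-party SpeechRecognition package; "question.wav" is a placeholder file name, and the free Google recognizer used here needs an internet connection.

```python
# Minimal speech-to-text sketch with the SpeechRecognition package
# (pip install SpeechRecognition). "question.wav" is a placeholder path.
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.AudioFile("question.wav") as source:    # placeholder audio file
    audio = recognizer.record(source)           # read the whole file

try:
    text = recognizer.recognize_google(audio)   # send audio for recognition
    print("You said:", text)
except sr.UnknownValueError:
    print("Sorry, the speech could not be understood.")
except sr.RequestError as error:
    print("Recognition service unavailable:", error)
```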
***6. Translation (Neural Machine Translation, NMT)
Based on the activity you just did, answer the following questions:
Q1: Can we say ‘computer vision’ acts as the eye of an AI machine / robot?
Q2: In the above activity, why did we have to supply many examples of fish before
the AI model actually started recognizing the fish?
Q4: Let me check your imagination – You have been tasked to design and develop a
prototype of a robot to clean your nearby river/ ponds. Can you use your imagination
and write 5 lines about features of this ‘future’ robot?
Computer Vision: How does computer see an image
Consider the image below:
Looking at the picture, the human eye can easily tell that a train/engine has crashed
through the station wall (https://en.wikipedia.org/wiki/Montparnasse_derailment). Do
you think the computer also views the image the same way as humans? No, the
computer sees an image as a 2-dimensional matrix (or a three-dimensional
array in the case of a colour image).
➢ The above image is a grayscale image, which means each value in the 2D matrix
represents the brightness of the pixels. The number in the matrix ranges between
0 and 255, where 0 represents black, 255 represents white, and the values
in between are shades of grey.
➢ For example, the above image has been represented by a 9x9 matrix. It shows 9
pixels horizontally and 9 pixels vertically, making it a total of 81 pixels (this is a
very low pixel count for an image captured by a modern camera, just treat this as an
example).
➢ In a grayscale image, each pixel represents the brightness or darkness at that point,
which means the grayscale image is composed of only one channel*. But a colour
image is a mix of the three primary colours (Red, Green and Blue), so a
colour image will have three channels*.
➢ Since colour images have three channels*, computers see the colour image as a
matrix of a 3-dimensional array. If we have to represent the above locomotive
image in colour, the 3D matrix will be 9x9x3. Each pixel in this colour image has
three numbers (ranging from 0 to 255) associated with it. These numbers represent
the intensity of red, green and blue colour in that particular pixel.
➢ * Channel refers to the number of colors in the digital image. For example, a
colored image has 3 channels – the red channel, the green channel and the blue
channel. An image is usually represented as height x width x channels, where channels
is 3 for a coloured image and 1 for a grayscale image.
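The channel idea is easy to see in code. The sketch below builds a 9x9 grayscale array and a 9x9x3 colour array filled with random values standing in for real pixel data.

```python
# How a computer "sees" images: a grayscale image is a 2-D array of
# brightness values (0 = black, 255 = white); a colour image is a 3-D
# array of shape height x width x 3 (one channel each for R, G and B).
import numpy as np

gray   = np.random.randint(0, 256, size=(9, 9), dtype=np.uint8)     # 9x9, 1 channel
colour = np.random.randint(0, 256, size=(9, 9, 3), dtype=np.uint8)  # 9x9x3

print("Grayscale shape:", gray.shape, "->", gray.size, "pixel values")
print("Colour shape   :", colour.shape, "->", colour.size, "numbers")
print("Top-left colour pixel (R, G, B):", colour[0, 0])
```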
Computer Vision: Primary Tasks
There are primarily four tasks that Computer vision accomplishes:
1. Semantic Segmentation (Image Classification)
2. Classification + Localization
3. Object Detection
4. Instance Segmentation
1. Semantic Segmentation
➢ Semantic Segmentation is also called the Image classification. Semantic
segmentation is a process in Computer Vision where an image is classified
depending on its visual content. Basically, a set of classes (objects to identify in
images) are defined and a model is trained to recognize them with the help of
labelled example photos. In simple terms it takes an image as an input and outputs a
class i.e. a cat, dog etc. or a probability of classes from which one has the highest
chance of being correct.
➢ For humans, this ability comes naturally and effortlessly, but for machines it is a
fairly complicated process. For example, the cat image shown below is of size
248x400x3 pixels (297,600 numbers).
(Source : http://cs231n.github.io/assets/classify.png)
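To see image classification in action, the sketch below runs a pretrained network over a photo. It assumes TensorFlow/Keras is installed; "cat.jpg" is a placeholder path and the ImageNet weights for MobileNetV2 are downloaded on first use.

```python
# Image classification with a pretrained MobileNetV2 (Keras).
# "cat.jpg" is a placeholder image path.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")

img = tf.keras.preprocessing.image.load_img("cat.jpg", target_size=(224, 224))
x = tf.keras.preprocessing.image.img_to_array(img)
x = tf.keras.applications.mobilenet_v2.preprocess_input(x[np.newaxis, ...])

predictions = model.predict(x)
top3 = tf.keras.applications.mobilenet_v2.decode_predictions(predictions, top=3)[0]
for _, label, probability in top3:
    print(f"{label}: {probability:.2%}")   # e.g. tabby, Egyptian_cat, ...
```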
2. Classification and Localization
Once the object is classified and labelled, the localization task is invoked,
which puts a bounding box around the object in the picture. The term
'localization' refers to where the object is in the image. Say we have a dog in an
image, the algorithm predicts the class and creates a bounding box around the object
in the image.
Source: https://medium.com/analytics-vidhya/image-classification-vs-object-detection-vs-
image-segmentation-f36db85fe81
3. Object Detection
When human beings see a video or an image, they immediately identify the objects
present in them. This intelligence can be duplicated using a computer. If we have multiple
objects in the image, the algorithm will identify all of them and localize (put a bounding
box around) each one of them. You will therefore, have multiple bounding boxes and labels
around the objects.
Source: https://pjreddie.com/darknet/yolov1/
4. Instance Segmentation
Instance segmentation is the technique of Computer Vision which helps in
distinctly identifying and outlining each object of interest appearing in an image.
This process helps to create a pixel-wise mask for each object in the
image and provides us a far more granular understanding of the object(s) in the
image. As you can see in the image below, objects belonging to the same class are
shown in multiple colours.
Source: https://towardsdatascience.com/detection-and-segmentation-through-
convnets-47aa42de27ea
Activity : Let’s have some fun now!
➢ Given a grayscale image, one simple way to find edges is to look at two neighboring pixels
and take the difference between their values. If it’s big, this means the colours are very
different, so it’s an edge.
➢ The grids below are filled with numbers that represent a grayscale image. See if you can
detect edges the way a computer would do it.
➢ Try it yourself!
Grid 1: If the values of two neighboring squares on the grid differ by more than 50, draw a
thick line between them.
Grid 2: If the values of two neighboring squares on the grid differ by more than 40, draw a
thick line between them.
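The same neighbour-difference rule from the grids can be written in a few lines of Python; the grid values below are made up, and the threshold of 50 matches Grid 1.

```python
# Edge detection the way the activity describes it: mark an edge wherever
# two neighbouring pixel values differ by more than a threshold.
import numpy as np

image = np.array([
    [200, 200, 200,  30,  30],
    [200, 200, 200,  30,  30],
    [200, 200, 200,  30,  30],
    [ 40,  40,  40,  40,  40],
])                                   # a made-up 4x5 grayscale grid

threshold = 50
horizontal_edges = np.abs(np.diff(image, axis=1)) > threshold  # left vs right neighbour
vertical_edges   = np.abs(np.diff(image, axis=0)) > threshold  # top vs bottom neighbour

print("Edges between horizontal neighbours:\n", horizontal_edges.astype(int))
print("Edges between vertical neighbours:\n", vertical_edges.astype(int))
```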
Weather Predictions
➢ Weather forecasting deals with gathering satellite data, identifying
patterns in the observations made, and then computing the results to get accurate
weather predictions. This is done in real time to help prevent disasters.
➢ Artificial Intelligence uses computer-generated mathematical programs
and computer vision technology to identify patterns so that relevant weather
predictions can be made. Scientists are now using AI for weather forecasting to
obtain refined and accurate results, fast!
➢ In the current model of weather forecasting, scientists gather satellite
data i.e. temperature, wind, humidity etc. and compare and analyze this data
against a mathematical model that is based on past weather patterns and
geography of the region.
➢ This is done in real time to prevent disasters. This model being primarily
human dependent (and the mathematical model cannot be adjusted real time) faces
many challenges in forecasting.
➢ This has resulted in scientists preferring AI for weather forecasting. One of
the key advantages of the AI based model is that it adjusts itself with the
dynamics of atmospheric changes.
***CNN (Convolutional Neural Network)
➢ A Convolutional Neural Network (CNN) is a type of artificial neural
network used in image recognition, processing, classification and
segmentation, and is specifically designed to process pixel data.
➢ CNNs have their "neurons" arranged more like those of the visual cortex
(in the occipital lobe), the area responsible for processing visual stimuli in humans and other animals.
➢ Convolutional Neural Network (CNN) allows researchers to generate
accurate rainfall predictions six hours ahead of when the precipitation occurs.
Application of CNN:
✓ Understanding Climate, Visual Search, Analyzing Documents.
✓ Advertising, Image Tagging, Character recognition.
✓ Predictive Analytics - Health Risk Assessment
✓ Historic and Environmental Collections.
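For reference, a CNN can be defined in just a few lines with Keras. The sketch below is a generic example for small 28x28 grayscale images; the layer sizes are illustrative defaults and are not taken from any particular weather or rainfall model.

```python
# A minimal Convolutional Neural Network in Keras (illustrative layer sizes).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),             # shrink the feature maps
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 output classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```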
The AI based commodity forecasting system can produce transformative results,
such as:
1. The accuracy level would be much higher than the classical forecasting model
2. AI can work on a broad range of data, due to which it can reveal new insights.
3. The AI model is not rigid like the classical model; therefore its forecasting is always
based on the most recent input.
Question 2: In the age of self-driving cars, do we need zebra crossings? After all, self-
driving cars can potentially make it safe to cross a road anywhere.
_____________________________________________________________________
Self-driving cars or autonomous cars, work on a combination of technologies.
Here is a brief introduction to them:
1. Computer Vision: Computer vision allows the car to see / sense the surrounding. Basically,
it uses:
‘Camera’ – It captures a picture of the surroundings, which is then passed to a deep learning
model to process the image. This helps the car know when the light is red or where there is a
zebra crossing, etc.
‘Radar’ – It is a detection system used to find out how far or close the other vehicles on the
road are.
‘Lidar’ – It is a sensing method with which the distance to a target is measured. The
lidar (which emits laser rays) is usually placed in a spinning unit on top of the car so that it
can spin around very fast, scanning the environment around it. Here you can see a Lidar
placed on top of the Google car.
2. Deep Learning:
This is the brain of the car which takes driving decisions on the information
gathered through various sources like computer visions etc.
Robotics:
Self-driving cars have a brain and vision, but the brain still needs to
connect with other parts of the car to control and navigate effectively. Robotics helps
transmit the driving decisions (made by deep learning) to the steering, brakes, throttle, etc.
Navigation:
Using GPS, stored maps etc. the car navigates busy roads and hurdles to
reach its destination.
***Price Forecast for Commodities:
➢ Forecasting Energy Commodity Prices Using Neural Networks.
➢ Neural networks, used as an advanced signal-processing tool, can
successfully model and forecast energy commodity prices, such as crude oil, coal,
natural gas, copper and electricity prices.
➢ A Commodity's price is determined primarily by the forces of supply and demand
for the commodity in the market. If the weather in a certain region is going to affect the
supply of a commodity, the price of that commodity will be affected directly.
Examples: corn, soybeans, and wheat.
➢ ChAI came together as a team with the goal of leveraging the latest AI techniques
to provide the most accurate forecasts over a range of commodities, including copper,
one of the oldest and most commonly used commodities. Its focus areas are:
❑ Prediction Accuracy.
❑ Confidence Bound Improvement.
❑ Explainability of our models.
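As a rough sketch of the forecasting idea (not ChAI's or anyone's actual model), the code below trains a small neural network on a synthetic price series, using the previous five days' prices to predict the next day.

```python
# Toy neural-network price forecast on a synthetic commodity series.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 500))     # made-up daily prices

window = 5                                          # use the last 5 days as features
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]                                 # next-day price to predict

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X[:-50], y[:-50])                         # train on all but the last 50 days

predicted = model.predict(X[-50:])
mae = np.mean(np.abs(predicted - y[-50:]))
print(f"Mean absolute error on the held-out days: {mae:.2f}")
```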
Google Duplex (Speech Recognition System)
What Is Google Duplex?
Google Duplex technology is designed to sound natural and to make the conversation
experience more comfortable. It is important to Google that users and businesses have a good
experience with the service, and transparency is a key part of that: the system needs to be clear
about the purpose of the call so that businesses understand the context.
What will I use Google Duplex for?
Google Duplex won’t say whatever you instruct it to — it only works for specific
kinds of over-the-phone requests. The two illustrations Google introduced at I/O involved
setting up a hair salon appointment and a reservation at a restaurant. Another illustration
is getting some information about business hours. In other words, it only works for those
scenarios because it’s been trained for them, and cannot speak freely in a general context.
How does Google Duplex work?
Duplex relies upon a machine learning model trained on real-world data, in this
case phone conversations. It combines the company's most recent breakthroughs
in speech recognition and text-to-speech synthesis with a range of contextual
details, including the purpose and history of the conversation.
To make Duplex sound natural, Google has even attempted to reproduce the imperfections
of ordinary human speech. Duplex inserts a short delay before issuing certain
responses and says “uh” and “well” as it artificially looks up data. The outcome is
remarkably accurate, with amazingly short latency.
https://www.youtube.com/watch?v=D5VN56jQMWM
Characteristics and Types of AI
There are many applications and tools backed by AI that have a direct impact
on our daily life. So, it is important for us to understand, in a broad sense, what
kinds of systems can be developed using AI.
The ideal characteristic of artificial intelligence is its ability to
rationalize and take actions that have the best chance of achieving a
specific goal.
Eliminate Dull Tasks
➢ We all have days where we have to do busy work: dull, boring tasks that may be
necessary but are neither important nor valuable. Fortunately, machine learning
and AI are beginning to address some of this through human-computer
interaction technologies.
➢ More robust use of this technology can be found in companies like Dialogflow,
a subsidiary of Google formerly known as Api.ai. Such companies build
conversational interfaces that use machine learning to understand and
meet customer needs.
➢ Virtual assistants like Alexa, Siri, Cortana, and Google Assistant perform basic
tasks, conversing with the user in natural language.
➢ Want to send 5,000 calendar invites, or book a flight from San Francisco to
Paris? Need a reliable method to answer basic customer questions online,
instantaneously? Solutions like those offered by Dialogflow cut through the
dull work that would otherwise require hours of human time.
➢ SunExpress, a Turkish airline, has become the first airline in
the world to allow users to book tickets using the Alexa voice assistant from
Amazon, according to Simple Flying.
***Focus Diffuse Problems
➢ The expanse of available data has gone beyond what human beings are capable
of synthesizing, making it a perfect job for machine learning and AI.
➢ For example, Elucify helps sales teams automatically update their contacts.
➢ By simply clicking a button, information is pulled from a multitude of public
and private data sources.
➢ Elucify takes all of that diffuse data, compares it, and makes changes
where necessary.
Distributed Data
➢ Modern cybersecurity drives a need for comparing terabytes of inside data with
a comparable amount of outside data.
➢ This has been a very difficult problem to solve, but machine learning and AI are
perfect tools for the job.
➢ Vectra Networks is a security company that uses AI to fight cyber-attacks.
➢ By comparing outside network data to the log inside the enterprise, Vectra
Networks can automate the process of detecting attacks.
➢ As the problems of cyber security change and increase, organizations like these
will be vital in addressing the problems of distributed data.
***Solve Dynamic Data
➢ This process not only reduces cost and improves efficiency for the
company, but creates much safer environments for the human
workers.
➢ This makes it possible to check real time inventory of the object and
display it to the user.
***Knowledge Based Recommender System:
➢ This type of recommender system attempts to suggest objects based
on inferences about a user’s needs and preferences.
➢ Knowledge based recommendation works on functional knowledge: they
have knowledge about how a particular item meets a particular user
need, and can therefore reason about the relationship between a
need and a possible recommendation.
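A minimal sketch of this idea is shown below: the catalogue, the "needs" each item satisfies, and the user's needs are all invented for illustration, but matching needs to items is the essence of knowledge-based recommendation.

```python
# Toy knowledge-based recommender: rank items by how many of the user's
# stated needs each item is known to satisfy.
CATALOGUE = {
    "ultrabook":     {"portable", "long battery", "office work"},
    "gaming laptop": {"fast graphics", "gaming", "video editing"},
    "tablet":        {"portable", "reading", "long battery"},
}

def recommend(user_needs):
    """Return items that meet at least one need, best matches first."""
    scored = [(len(needs & user_needs), item) for item, needs in CATALOGUE.items()]
    scored.sort(reverse=True)
    return [item for score, item in scored if score > 0]

print(recommend({"portable", "long battery"}))   # e.g. ['ultrabook', 'tablet']
```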
➢ Recent developments in cheap data storage (hard disks etc.), fast processors
(CPU, GPU or TPU) and sophisticated deep learning algorithms have made it
possible to extract huge value from data, and that has led to the rise of data-centric
AI systems.
➢ Such AI systems can predict what will happen next based on what they’ve
experienced so far, very efficiently. At times these systems have managed to
outperform humans.
➢ Data-driven AI systems are trained with large datasets before they make
predictions, forecasts or decisions.
➢ In our research, we are developing solutions for these kinds of issues and
implement and test them on the robots in our laboratory.
***Types of Autonomous Systems:
***Autonomous System
An autonomous system (AS) is a collection of connected Internet
Protocol (IP) routing prefixes under the control of one or more network
operators on behalf of a single administrative entity or domain that presents
a common, clearly defined routing policy to the internet.
An autonomous system number (ASN) is a unique number that's
available globally to identify an autonomous system and which enables that
system to exchange exterior routing information with other neighboring
autonomous systems. A unique ASN is allocated to each AS for use in
Border Gateway Protocol (BGP) routing.
➢ Fixed automation,
➢ Programmable automation, and
➢ Flexible automation.
***FIXED AUTOMATION
It is a system in which the sequence of processing (or assembly)
operations is fixed by the equipment configuration. The operations in
the sequence are usually simple.
It is the integration and coordination of many such operations into
one piece of equipment that makes the system complex.
--------------------------------------------------------------------------------------------------------
➢ The dream of making machines that think and act like humans, is not new.
We have long tried to create intelligent machines to ease our work.
➢ These machines perform at greater speed, have higher operational ability and
accuracy, and are also more capable of undertaking highly tedious and monotonous
jobs compared to humans.
➢ Humans do not always depend on pre-fed data as required for AI. Human
memory, its computing power, and the human body as an entity may seem
insignificant compared to the machine’s hardware and software infrastructure.
➢ But the depth and layers present in our brains are far more complex and
sophisticated, and machines still cannot match them, at least in the near future.
➢ These days, the AI machines we see are not true AI. These machines are
extremely good at performing specific types of jobs.
Think of Jarvis in “Iron Man” and you’ll get a sneak peek of Artificial
General Intelligence (AGI), which is nothing but human-like AI, although we still
have a long way to go in attaining it.
Non-Technical Explanation of Deep Learning
➢ The excitement behind artificial intelligence and deep learning is at its peak. At the
same time there is growing perception that these terms are meant for techies to
understand, which is a misconception.
➢ A courier delivery person (X) has to deliver a box to a destination, which is right
across the road from the courier company. In order to carry out the task, X will pick
up the box, cross the road and deliver the same to the concerned person.
➢ In this example the starting and the ending are connected with a straight line. This
is an example of a simple neural network.
➢ The life of a delivery person is not so simple and straightforward in reality. They
start from a particular location in the city and go to different locations across
the city. For instance, the delivery person could choose multiple
paths to fulfil their deliveries, as shown in the image below:
1. Option 1: ‘Address 1’ to ‘Destination 1’and then to the Final Destination’.
2. Option 2: ‘Address 2’ to ‘Destination 2’ and then to the ‘Final Destination’
3. Option 3: ‘Address 3’ to ‘Destination 3’ and then the ‘Final Destination’.
➢ All of the above (and more combinations) are valid paths that take the delivery man
from ‘start’ to ‘finish’.
However, some routes (or “paths”) are better than the rest. Let us assume that
all paths take the same time, but some are really bumpy while some are smooth.
Maybe the path chosen in ‘Option 3’ is bumpy and the delivery man has to burn a
lot more fuel (a loss) on the way, whereas ‘Option 2’ is perfectly smooth, so
the delivery man does not lose anything!
➢ The above picture is a representation, as a neural network, of the most efficient path to be
taken by the delivery man to reach the goal, i.e. to go from starting point ‘X’ to the ‘Final
Destination’ using the best possible (most fuel-efficient) route. This is a very
small neural network consisting of 3 neurons and 2 layers (address and destination).
➢ In reality neural networks are not as simple as the above discussed example. They may
look like the one shown below!
(Source - http://www.rsipvision.com/exploring-deep-learning/ )
➢ The term ‘deep’ in Deep Learning refers to the various layers you will find in a neural
network. This closely relates to how our brains work.
➢ The neural network shown above, is a network of 3 layers (hidden layer1, hidden layer 2
and hidden layer 3) with each layer having 9 neurons each.
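To connect the picture with code, here is a minimal Keras sketch of a network with three hidden layers of nine neurons each, like the one described above; the input and output sizes are arbitrary placeholders.

```python
# A small "deep" fully connected network: three hidden layers of nine
# neurons each. Input/output sizes are placeholders for illustration.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),            # 4 input features (placeholder)
    tf.keras.layers.Dense(9, activation="relu"),  # hidden layer 1
    tf.keras.layers.Dense(9, activation="relu"),  # hidden layer 2
    tf.keras.layers.Dense(9, activation="relu"),  # hidden layer 3
    tf.keras.layers.Dense(1),                     # single output
])

model.compile(optimizer="adam", loss="mse")
model.summary()   # lists the layers and the number of trainable weights
```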
***Non-Technical Explanation of Deep Learning:
➢ These four capabilities: classifying what we perceive, predicting
things in a sequence, reasoning, and planning are some of the key
aspects of intelligence.
➢ As children, we were shown just a few pictures of cats and dogs, and from that we
could easily spot a new picture of a cat. We learnt to classify the things we
perceive.
➢ Also, we learnt to predict what would happen next in a sequence of
events. For instance, in a peekaboo game when someone hid their face
with their hands and took their hands away we expected to see their face.
➢ As we aged, we learnt not to just go by what we see or perceive, but to
reason. For instance, even if we saw a cat with only three legs, we
would reason that it perhaps lost a leg in an unfortunate accident or was born
with a congenital deformity. We would not conclude that it is a new species
of cat.
➢ Lastly we learnt very early to start planning to get what we want from
our parents/guardians.
The Future with AI, and AI in Action
➢ Transportation: Although it could take a decade or more to perfect them,
autonomous cars will one day ferry us from place to place. The place where AI may
have the biggest impact in the near future is self-driving cars. Unlike humans, AI
drivers never look down at the radio, put on mascara or argue with their kids in
the backseat. Thanks to Google, autonomous cars are already here, but watch for
them to be universal by 2030. Driverless trains already rule the rails in European
cities, and Boeing is building an autonomous jetliner (pilots are still required to put
info into the system).
➢ Manufacturing: AI powered robots work alongside humans to perform a limited
range of tasks like assembly and stacking, and predictive analysis sensors keep
equipment running smoothly.
➢ Medicine: AI algorithms will enable doctors and hospitals to better analyze data
and customize their health care to the genes, environment and lifestyle of each
patient. From diagnosing brain tumors to deciding which cancer treatment will work
best for an individual, AI will drive the personalized medicine revolution.
➢ Education: Textbooks are digitized with the help of AI, early-stage virtual tutors
assist human instructors and facial analysis gauges the emotions of students to help
determine who’s struggling or bored and better tailor the experience to their
individual needs.
Entertainment:
➢ In the future, you could sit on the couch and order up a custom movie featuring
virtual actors of your choice. Meanwhile, film studios may have a future without
flops: Sophisticated predictive programs will analyze a film script’s storyline and
forecast its box office potential.
➢ The film industry has given us a fair share of artificial intelligence movies
with diverse character roles — big to small, anthropomorphic to robotic, and evil to
good. And over the years, this technology has advanced to make its way into the media
and entertainment industry. From video games to movies and more, AI technologies have
been playing a crucial role in improving efficiencies and contributing to growth therein.
➢ Immersive visual experience is also another popular application of Artificial
intelligence. Virtual Reality (VR) and Augmented Reality (AR) are all the rage
nowadays. Development of VR content for games, live events, reality shows and other
entertainment have been gaining immense popularity and attention.
Vital Tasks:
➢ AI will give assistants the tools to help older people stay independent and live in their
own homes longer. AI tools will keep nutritious food available, safely reach objects
on high shelves, and monitor movement in a senior’s home.
➢ AI tools could mow lawns, keep windows washed and even help with bathing and
hygiene. Many other jobs that are repetitive and physical are perfect for AI-based
tools. But AI-assisted work may be even more critical in dangerous fields like
mining, firefighting, clearing mines and handling radioactive materials.
Chatbots:
➢ AI-enabled chatbots have become a crucial part of sales and customer service.
Helping brands serve their customers and improve the entire support experience,
chatbots are bringing a new way for businesses to communicate with the world.
Earlier versions of chatbots were configured with a limited number of inputs and
weren’t able to process any queries outside of those parameters. As a result, the
communication experience was far less appealing and often frustrating for customers.
➢ However, artificial intelligence and its subset of technologies have transformed the
overall functionality and intelligence of chatbots. AI-Powered Machine Learning
(ML) Algorithms and Natural Language Processing (NLP) provided the ability
to Learn and Mimic Human Conversation. Deploying such advanced level of
processing makes chatbots extremely helpful in a wide variety of applications
ranging from Digital Commerce and Banking to Research, Sales and Brand
Building.
➢ Time-consuming and repetitive manual business tasks can be handled by AI-enabled
chatbots, thereby improving efficiency, reducing personnel costs, eliminating human
error and ensuring quality services.
Cybersecurity:
➢ There were about 707 million cybersecurity breaches in 2015, and 554 million
in the first half of 2016 alone. Companies are struggling to stay one step ahead of
hackers. USC experts say the self-learning and automation capabilities enabled
by AI can protect data more systematically and affordably, keeping people safer
from terrorism or even smaller-scale identity theft.
➢ AI-based tools look for patterns associated with malicious computer viruses
and programs before they can steal massive amounts of information or cause disaster.
Save Our Planet:
➢ Artificial Intelligence can help us create a better world by addressing complex
environmental issues such as climate change, pollution, water scarcity,
extinction of species, and more. Transforming the traditional sectors and
systems, AI can prevent the adverse effects that threaten the existence of
biodiversity.
➢ According to a report published by Intel in 2018, 74% of the 200
professionals in this field surveyed agreed that AI can help solve
environmental challenges.
➢ AI’s ability to gather and process vast amounts of unstructured data and
derive patterns from it proves beneficial in resolving many
environmental issues.
➢ An autonomous floating garbage truck powered by AI is another case in
point. This autonomous truck is designed to remove large amounts of plastic
pollution from oceans and safeguard sea life.
Customer Service:
➢ Google is working on an AI assistant that can place human-like calls
to make appointments at, say, your neighborhood hair salon. In addition to
words, the system understands context and nuance.
Predictive Analytics
➢ Do you know how some of the largest companies in the world make so much
money? They use many different strategies, but AI-powered predictive analytics is
one of the most important. Take a look at Netflix for example.
➢ The popular media service provider saves more than $1 billion every year thanks
to its powerful content recommendation system.
➢ Instead of giving irrelevant suggestions, Netflix analyzes users’ real interests to
provide them with highly personalized content suggestions.
➢ Amazon does pretty much the same thing – it utilizes AI to display tailored product
ads to each website visitor individually. The system understands users’ earlier
interactions with the company and hence designs perfect cross-sales offers.
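The sketch below shows the bare-bones idea behind such recommendations (not Netflix's or Amazon's actual system): describe each title with a few made-up attribute scores and suggest the titles most similar to what the user already liked.

```python
# Tiny content-based recommendation using cosine similarity between
# made-up attribute vectors: [action, comedy, romance, sci-fi].
import numpy as np

TITLES = {
    "Space Saga":    np.array([0.9, 0.1, 0.2, 0.95]),
    "Romcom Nights": np.array([0.1, 0.8, 0.9, 0.05]),
    "Laser Heist":   np.array([0.8, 0.3, 0.1, 0.70]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(liked_title, top_n=2):
    liked = TITLES[liked_title]
    scores = [(cosine(liked, vector), name)
              for name, vector in TITLES.items() if name != liked_title]
    return [name for _, name in sorted(scores, reverse=True)[:top_n]]

print(recommend("Space Saga"))   # titles closest to the user's taste
```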
Automotive Industry
➢ Tesla is not only a leader in the electric vehicle market but also a pioneer of AI
deployment in the automotive industry. The company is famous for its
autonomous car program that relies on AI to design independent vehicles that can
operate on their own.
➢ Their cars have already amassed more than a billion miles of autopilot data in the US, thus
helping the developers to improve the system and add a wide range of other
functions. That includes features such as ABS braking, smart airbags, lane
guidance, cruise control, and many more.
Healthcare
➢ The entire healthcare industry is getting better due to the influence of AI. Watson for
Oncology is a true gem in the crown as it combines years of cancer treatment studies
and practice to come up with individualized therapies for patients.
There are many other use-cases we should discuss, but let’s just mention a few:
➢ AI is used in biopharmaceutical research.
➢ AI is used for earlier cancer and blood disease detection.
➢ AI is used in radiology for scan analysis and diagnostics.
➢ AI is used for rare disease treatment.
Music Industry:
➢ If you are into music, you must know Pandora. The music streaming service is
known for its machine learning algorithms, which allow the provider to analyze
audio content and make incredibly accurate song suggestions.
➢ Namely, Pandora analyzes a database of music to look at over 450 attributes.
AI helps it to analyze everything from vocals and percussions to grungy guitars and
synthesizer echoes. Such accurate data analytics enables Pandora to create song
lists and recommendations that perfectly match listeners’ preferences.
➢ AI also helps music composers by suggesting appropriate tunes.
Creating Powerful Websites:
➢ Creating a website may sound difficult, but today there are many
website builders that use AI. It has never been easier to create a website
with sophisticated AI design tools.
➢ Most require little to no coding skill, offer nearly endless options of
well-designed templates, and include AI features to optimize your site seamlessly.
➢ Artificial intelligence has made web design easier than ever. However, you
still need to choose the right tools. With AI website builders, designing and
optimizing your site is no longer hard or expensive.
Listed below are some of the challenges posed by AI
1. Integrity of AI
➢ AI systems learn by analysing huge volumes of data. What are the consequences
of using biased data (via the training data set) in favour of a particular
class/section of customers or users?
➢ In 2016, the professional networking site LinkedIn was discovered to have a
gender bias in its system. When a search was made for the female name ‘Andrea’,
the platform would show recommendations/results of male users with the name
‘Andrew’ and its variations. However, the site did not show similar
recommendations/results for male names, i.e. a search/query for the name
‘Andrew’ did not result in a prompt asking the user if he/she meant to find
‘Andrea’. The company said this was due to a gender bias in their training data,
which they fixed later.
2. Technological Unemployment
➢ Due to heavy automation (with the advent of AI and robotics), some sets of
people will lose their jobs; these jobs will be taken over by intelligent machines.
There will be significant changes in the workforce and the market: some
high-skilled jobs will be created, but some roles and jobs will become
obsolete.
3. Disproportionate control over data
➢ Data is the fuel of AI; the more data you have, the more intelligent a machine you
can develop. Technology giants are investing heavily in AI and data
acquisition projects. This gives them an unfair advantage over their smaller
competitors.
4. Privacy
➢ In this digitally connected world, privacy will become next to impossible.
Numerous consumer products, from smart home appliances to computer
applications, have features that make them vulnerable to data exploitation by AI.
AI can be utilized to identify, track and monitor individuals across multiple
devices, whether they are at work, home, or at a public location. To complicate
things further, AI does not forget anything. Once AI knows you, it knows you
forever!
Cognitive Computing (Perception, Learning, Reasoning)
Adaptive:
Cognitive systems must be flexible enough to understand the changes in the
information. Also, the systems must be able to digest dynamic data in real-time and make
adjustments as the data and environment change.
Interactive:
Human Computer Interaction (HCI) is a critical component in cognitive systems. Users
must be able to interact with cognitive machines and define their needs as those needs change.
The technologies must also be able to interact with other processors, devices and cloud
platforms.
Contextual:
Cognitive systems must understand, identify and mine contextual data, such as
syntax, time, location, domain, requirements, a specific user’s profile, tasks or goals. They
may draw on multiple sources of information, including structured and unstructured data and
visual, auditory or sensor data.
Cognitive Computing and Artificial Intelligence
Applications of NLP
1. Question Answering
➢ Question Answering focuses on building systems that automatically answer
the questions asked by humans in a natural language.
2. Spam Detection
➢ Spam detection is used to detect unwanted e-mails getting to a user's inbox.
3. Sentiment Analysis
➢ Sentiment Analysis is also known as opinion mining. It is used on the web
to analyse the attitude, behavior, and emotional state of the sender.
➢ This application is implemented through a combination of NLP (Natural
Language Processing) and statistics, by assigning values to the text (positive,
negative, or neutral) and identifying the mood of the context (happy, sad, angry, etc.).
4. Machine Translation
➢ Machine translation is used to translate text or speech from one natural
language to another natural language.
5. Spelling Correction
➢ Microsoft Corporation provides software such as MS Word and
PowerPoint that performs spelling correction.
6. Speech Recognition
➢ Speech Recognition is used for converting spoken words into text.
➢ It is used in applications, such as mobile, home automation, video
recovery, dictating to Microsoft Word, voice biometrics, voice user interface, and
so on.
7. Chatbot
➢ Implementing a chatbot is one of the important applications of NLP. It is
used by many companies to provide chat services to their customers.
8. Information Extraction
➢ Information extraction is one of the most important applications of NLP.
➢ It is used for extracting structured information from unstructured or semi-
structured machine-readable documents.
9. Natural Language Understanding (NLU)
➢ It converts large sets of text into more formal representations, such as first-
order logic structures, that are easier for computer programs to manipulate
than raw natural language.
Phases of NLP
1. Lexical and Morphological Analysis
➢ The first phase of NLP is Lexical Analysis. This phase scans the source text
as a stream of characters and converts it into meaningful lexemes. It divides the whole
text into paragraphs, sentences, and words.
2. Syntactic Analysis (Parsing)
➢ Syntactic Analysis is used to check grammar, word arrangements, and shows
the relationship among the words.
➢ Example: “Agra goes to the USA.”
➢ In the real world, “Agra goes to the USA” does not make any sense, so this
sentence is rejected by the syntactic analyzer.
3. Semantic Analysis
➢ Semantic analysis is concerned with meaningful representation. It mainly
focuses on the literal meaning of words, phrases, and sentences.
4. Discourse Integration
➢ Discourse Integration depends upon the sentences that precede it and also
invokes the meaning of the sentences that follow it.
5. Pragmatic Analysis
➢ Pragmatic Analysis is the fifth and last phase of NLP. It helps you to discover the
intended effect by applying a set of rules that characterize cooperative dialogues.
➢ For Example: "Open the Door" is interpreted as a request instead of an order.
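The first phases can be tried out with the NLTK library; the sketch below performs lexical analysis (sentence and word tokenization) and part-of-speech tagging as a step toward syntactic analysis. It assumes NLTK is installed and downloads its tokenizer and tagger data on first run.

```python
# Lexical analysis and a step toward syntactic analysis with NLTK.
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

text = "Open the door. The teachers will meet tomorrow at 2.30 pm."

for sentence in nltk.sent_tokenize(text):     # lexical analysis: sentences
    words = nltk.word_tokenize(sentence)      # lexical analysis: words (lexemes)
    print(nltk.pos_tag(words))                # word classes for syntactic analysis
```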
NLP Libraries: