
ELECTROFITY WALL JOURNAL – ECE | UCE – AKNU | Volume 1 Issue 5

TINYML: EMBEDDED MACHINE LEARNING


Sriya Sarvani Vasamsetti 1
Regd. No 198297603058 IV B. Tech ECE
Uma Mahesh Tantapureddy 2
Regd. No 198297603052 IV B. Tech ECE
Department of Electronics and Communication
Engineering, Adikavi Nannaya University,
Rajamahendravaram

ABSTRACT

Machine learning has emerged as an essential component of the current digital world. Edge computing and the Internet of Things (IoT) give a fresh opportunity for applying machine learning techniques to resource-constrained embedded devices at the network's edge. To forecast a scenario, traditional machine learning demands vast amounts of resources. The TinyML concept of embedded machine learning aims to transfer the capability of such typically large systems to minimal clients. While executing such a transformation, several problems occur, such as maintaining the precision of learning approaches, delivering train-to-deploy capability in resource-constrained micro edge devices, optimizing processing capability and increasing stability. This article discusses the history, working, hardware, software, advantages, disadvantages and applications of TinyML.

Keywords: TinyML, Embedded Systems, Machine Learning, Internet of Things (IoT), Multitude, Resource-constrained.

I. INTRODUCTION

TinyML is a branch of Machine Learning and Embedded Systems that investigates the kinds of models that can be run on small, low-power devices such as microcontrollers. It provides model inference at the edge with low latency, low power consumption and limited bandwidth. While an average consumer CPU uses between 65 and 85 watts and a typical consumer GPU consumes between 200 and 500 watts, a typical microcontroller requires only milliwatts or microwatts, approximately a thousand times less energy. Because of their low power consumption, TinyML devices can operate offline on batteries for weeks, months and even years while executing machine learning applications at the edge. TinyML is currently in its early stages and requires suitable alignment to be compatible with current edge-IoT technologies. According to groundbreaking research, the TinyML technique is critical for developing smart IoT applications. However, some research questions have been uncovered that may impede TinyML's progress. We explore the history and the current scenario of TinyML in this article, and give a state-of-the-art evaluation of research aimed at adapting TinyML to its many applications. The following sections present the major contributions of this paper.

Ⅱ. HISTORY OF TINYML

Early innovators and "founders" of TinyML include Pete Warden (TensorFlow Lite Micro), Kwabena Agyeman (Arm Innovator), and Daniel Situnayake (Edge Impulse). As machine learning is increasingly applied in the Internet of Things (IoT) and small, wireless technologies, it is no surprise that TinyML developed so rapidly and received so much recognition and early acceptance. ABI Research expects that around 2.5 billion TinyML-enabled devices will be shipped by 2030. Silent Intelligence predicts that TinyML will "achieve more than $70 billion in economic value" in the next five years. TinyML is defined to have a power consumption of less than 1 mW, hence its hardware platforms must be in the area of embedded devices.

Ⅲ. HOW TINYML WORKS?

TinyML algorithms function much like regular machine learning models. Typically, the algorithms are trained as usual on a user's PC or in the cloud. The actual TinyML work occurs after training, in a process known as deep compression. The steps involved in this deep compression are:
1. Pruning
2. Quantization
3. Huffman Encoding

3.1 Pruning:

Pruning can help to reduce the size of the model's representation. In general, pruning aims to eliminate neurons that contribute little to the output. This is frequently associated with small neural weights, while larger weights are retained due to their greater importance during inference. To fine-tune the output, the network is then retrained on the pruned architecture.
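As a rough illustration of magnitude-based pruning, the sketch below zeroes out the smallest weights of a layer until a target sparsity is reached. It is a simplified NumPy example under our own assumptions (a single weight matrix and a fixed sparsity target), not the exact procedure used by any particular TinyML toolchain.

    import numpy as np

    def prune_weights(weights, sparsity=0.5):
        """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
        flat = np.abs(weights).flatten()
        # Threshold below which weights are considered unimportant
        threshold = np.percentile(flat, sparsity * 100)
        mask = np.abs(weights) >= threshold
        return weights * mask, mask

    # Example: prune half of the weights of a toy 4x4 layer
    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 4))
    w_pruned, mask = prune_weights(w, sparsity=0.5)
    print("remaining non-zero weights:", int(mask.sum()))

In practice the pruned network is then retrained (fine-tuned) for a few epochs so that the remaining weights can compensate for the removed ones, as described above.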


3.2 Quantization:

The model weights should preferably be kept as 8-bit integer values in order to run a model on a board such as the Arduino Uno (whereas most desktop computers and laptops use 32-bit or 64-bit floating-point representation). The storage required for the weights is decreased by a factor of four when the model is quantized from 32-bit to 8-bit values, and the accuracy loss is frequently negligible (commonly about 1-3%). Because of quantization error, some information may be lost during quantization. Quantization-aware (QA) training has been proposed as a solution to this: QA training restricts the network during training to values that will actually be representable on the quantized device.
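As a hedged sketch of what such a quantization step can look like in practice, the snippet below applies TensorFlow Lite's post-training integer quantization to a small Keras model. The model architecture and the representative dataset are placeholders of our own; the exact options depend on the TensorFlow version.

    import numpy as np
    import tensorflow as tf

    # Toy stand-in for a trained model (in practice this would be trained first)
    keras_model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(96, 96, 1)),
        tf.keras.layers.Conv2D(8, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])

    def representative_data_gen():
        # A few hundred typical input samples let the converter calibrate the
        # int8 ranges; random data is used here purely as a stand-in.
        for _ in range(100):
            yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_data_gen
    # Force full integer quantization so the model can run on 8-bit MCUs
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8

    tflite_model = converter.convert()
    with open("model_int8.tflite", "wb") as f:
        f.write(tflite_model)

The resulting .tflite file stores the weights as 8-bit integers, which is what gives the roughly four-fold reduction in size mentioned above.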
3.3 Huffman Encoding:

Encoding is an additional measure that is often used to further reduce the size of the model by storing the data in the most compact feasible manner. After the model has been quantized and encoded, it is converted to a format that can be read by a lightweight neural network interpreter, the most common of which are most likely TF Lite (about 500 KB in size) and TF Lite Micro (about 20 KB in size). The model is then compiled into C or C++ code (the languages most microcontrollers use, for optimal memory consumption) and executed by the on-device interpreter.
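The sketch below is a minimal, hedged illustration of the Huffman idea applied to quantized weight values: frequent values receive short bit codes, rare values longer ones. It is not the exact scheme used by any particular deployment toolchain.

    import heapq
    from collections import Counter

    def huffman_codes(symbols):
        """Build a Huffman code table (symbol -> bit string) for an iterable of symbols."""
        freq = Counter(symbols)
        # Each heap entry: (frequency, tie-breaker, [(symbol, partial_code), ...])
        heap = [(f, i, [(s, "")]) for i, (s, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        if len(heap) == 1:
            return {heap[0][2][0][0]: "0"}
        counter = len(heap)
        while len(heap) > 1:
            f1, _, left = heapq.heappop(heap)
            f2, _, right = heapq.heappop(heap)
            # Prefix one bit to every code in each merged subtree
            merged = [(s, "0" + c) for s, c in left] + [(s, "1" + c) for s, c in right]
            heapq.heappush(heap, (f1 + f2, counter, merged))
            counter += 1
        return dict(heap[0][2])

    # Example: the most frequent quantized weight value (0) gets the shortest code
    quantized_weights = [0, 0, 0, 0, 1, 1, 2, -1]
    print(huffman_codes(quantized_weights))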
Ⅳ. HARDWARE OF TINYML

Arduino Nano 33 BLE Sense Board:

It is a very tiny AI-enabled board with dimensions of 45 x 18 mm. It is a more powerful version of the Arduino Nano, using the Nordic Semiconductor nRF52840, a 32-bit Arm Cortex-M4 CPU operating at 64 MHz. It allows you to build larger programs (it has 1 MB of program memory, 32 times larger than the Uno) with a far higher number of variables than the Arduino Uno (the RAM is 128 times bigger). Bluetooth connectivity with NFC pairing and extremely low power consumption modes are two further notable characteristics of the main CPU. It includes the following integrated sensors:
1. 9-axis inertial sensor: ideal for wearable devices

2. temperature and humidity sensor: provides very precise readings of environmental conditions
3. barometric sensor: users could build a simple weather station
4. microphone: for real-time sound collection and analysis
5. gesture, proximity, light colour and light intensity sensor: determines the luminance of the room as well as whether somebody is approaching the board

TensorFlow Lite can be used to create machine learning models, which are then uploaded to the board through the Arduino IDE.

Ⅴ. SOFTWARE FOR TINYML

The TensorFlow Lite framework has been updated to run on embedded devices with only a few tens of kilobytes of RAM.

TensorFlow Lite:

TensorFlow Lite provides TensorFlow's lightweight solution for mobile and embedded devices. It enables the execution of machine-learned models on mobile devices with low latency, allowing you to use them for classification, analysis and other activities without requiring another round trip to a server. TensorFlow Lite differs from TensorFlow Mobile in the following ways: it is the most recent TensorFlow mobile version, and TensorFlow Lite applications often beat TensorFlow Mobile apps in terms of performance and binary file size.
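To make the deployment story above concrete, here is a hedged sketch of running an already-converted .tflite model with the TensorFlow Lite interpreter in Python. The file name and input shape are placeholders; on a microcontroller the equivalent C++ API of TensorFlow Lite Micro would be used instead.

    import numpy as np
    import tensorflow as tf

    # Load the quantized model produced in the quantization step (file name assumed)
    interpreter = tf.lite.Interpreter(model_path="model_int8.tflite")
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Build a dummy input matching the model's expected shape and dtype
    shape = input_details[0]["shape"]
    dtype = input_details[0]["dtype"]
    dummy_input = np.zeros(shape, dtype=dtype)

    interpreter.set_tensor(input_details[0]["index"], dummy_input)
    interpreter.invoke()
    prediction = interpreter.get_tensor(output_details[0]["index"])
    print("model output:", prediction)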

VI. ADVANTAGES OF TINYML

 Low power consumption
 Requires less bandwidth
 Energy efficiency
 Energy harvesting
 Data security
 System reliability
 Protection against cyber attacks
 Low latency
 Low cost

VII. DISADVANTAGES OF TINYML

 Memory constraints
 Inconsistent power usage
 Resource-constrained system
 Small interpreter system

Ⅷ. APPLICATIONS

TinyML is widely used in various types of industries and products, such as:
 Retail
 Health care
 Transportation
 Wellness
 Agriculture
 Fitness
 Aquatic conservation
 Google Assistant
 Alexa
 Photography
 Smart watches
 Automobiles

The main applications currently targeted by TinyML are:
1. Keyword Spotting: Keywords include "Hey Siri" and "Hey Google" (wake words). Such devices continually listen to audio input from a microphone and are programmed to respond exclusively to certain sound sequences that match the learnt phrases. These devices are simpler than automatic speech recognition (ASR) applications and use fewer resources as a result. Some products, such as Google smartphones, use a cascade design to offer secure speech verification.
2. Visual Wake Words: Visual wake words are an image-based equivalent of wake words. Consider this to be a binary categorization of a picture, indicating that an element is either present or absent. A smart lighting system, for example, may be programmed to come on when it senses the presence of a person and switch off when they depart. Similarly, wildlife photographers may use this to take images when a certain species is present, and security cameras could use it to take pictures when they identify the presence of a human.
IX. SUMMARY
In this paper we have seen that tiny machine learning is a rapidly expanding field of machine learning technologies and applications that includes hardware (dedicated integrated circuits), algorithms and software capable of performing on-device sensor data analytics at extremely low power, typically in the mW range and below, enabling a variety of always-on use cases and targeting battery-powered devices. Microcontrollers are everywhere, and they collect massive amounts of data with the aid of the sensors linked to them. The combination of TinyML with these microcontrollers opens up a world of possibilities for applications in IoT devices such as TVs, vehicles, coffee machines, watches and other gadgets, allowing them to have intelligence previously limited to PCs and smartphones.

REFERENCES

[1] Pete Warden and Daniel Situnayake, "TinyML", O'Reilly Media. https://www.oreilly.com/library/view/tinyml/9781492052036/
[2] Arun, "An Introduction to TinyML: Machine Learning Meets Embedded Systems," Towards Data Science, November 10, 2020. https://towardsdatascience.com/an-introduction-to-tinyml-4617f314aa79
[3] Jair Ribeiro, "What is TinyML, and why does it matter?", Towards Data Science, December 22, 2020. https://towardsdatascience.com/what-is-tinyml-and-why-does-it-matter-f5b164766876


CYBERBULLYING DETECTION USING


MACHINE LEARNING

Neeraja Simmalapudi1
Regd. No 198297603047 IV B.Tech
Prathyusha Yamana2
Regd. No 198297603061 IV B.Tech
Department of Electronics and Communication Engineering
Adikavi Nannaya University, Rajamahendravaram

ABSTRACT

Cyber bullying is the use of technology as a medium to bully someone. Although it has been an issue for many years, the recognition of its impact on young people has recently increased. Social networking sites provide a medium for bullies, and teens and young adults who use these sites are vulnerable to attacks. According to Willard (2004), there are eight types of cyberbullying, such as harassment, denigration and impersonation. It has been around two decades since social media sites came into the picture, but there have not been many effective measures to curb social bullying, and it has become one of the alarming issues of recent times. Through machine learning, we can detect the language patterns used by bullies and their victims and develop rules to automatically detect cyberbullying content.

Key Words: Cyberbullying, machine learning.

I. INTRODUCTION

Today, people all over the world utilize the internet as a tool for communication. Online tools such as social networking sites (SNSs) are the most popular socializing tools, especially for adolescents, as SNSs are tightly integrated into their daily practices and can be a medium for users to interact with each other without any limitation of time or distance. Nevertheless, SNSs can have negative consequences if users misuse them, and one of the common negative activities that occurs on SNSs is cyber bullying, which is the focus of this paper. Cyber bullying involves a person carrying out threatening acts, harassment and similar behaviour towards another person; it can be described as a group or an individual that exploits telecommunication to intimidate other people on communication networks. Most researchers in the cyber bullying field, however, work from a common definition of cyber bullying.

This definition describes cyber bullying as "willful and repeated harm inflicted through the medium of electronic text". Cyber bullying can take a few forms: flaming, harassment, denigration, impersonation, outing, boycott and cyber stalking. The most severe type of cyber bullying is flaming and the least severe is cyber stalking. Flaming occurs between two or more individuals who argue over some incident using rude, offensive and vulgar language within electronic messages. Flaming is the most severe type of cyber bullying because, when an online fight between internet users takes place, it can be difficult to identify the cyber bully and the victim at that time. Harassment is the repeated sending of harmful messages to a victim. Denigration is posting material about a victim that is untrue, rumour or cruel. Impersonation happens when a cyber bully disguises themselves as the target and posts bad information about that particular target with the intention of bullying them. Outing occurs when a cyber bully shares the victim's secrets or private information, which can embarrass the victim. Boycott is deliberately excluding a person from social interaction on social media. Willard mentioned that cyber stalking occurs when a cyber bully sends harmful messages repeatedly. Cyber stalking is of lower severity than the other categories, since the cyber bully (cyber stalker) can be detected directly once they send annoying messages to the victim.

II. DATA

This section contains all the aspects of data, from collection to preprocessing and feature extraction.

2.1 Data Collection

We have used Dataturks' Tweet Dataset for Cybertroll Detection, obtained from Kaggle, for reaching the final results. Because of the seriousness of the issue we aim to resolve, it was crucial to choose a dataset that was complete, reliable, relevant and to the point. While we considered many other datasets as well, many of them either had missing attributes, were too low in quality, or were found to have irrelevant data after manual inspection. Thus, after having tried many other open-sourced datasets, we settled on this one as it seemed in line with all the parameters required. Here is a detailed description of the dataset: it is a partially manually labelled dataset with 20001 instances in total. The dataset has two attributes, tweet and label (0 corresponds to No, while 1 corresponds to Yes).

2.2 Data Cleaning

The dataset was provided in JSON format. Since the fields of the dataset were relatively simple to interpret, the original set of fields in the annotation attribute was removed and replaced with the label values to simplify the next step. The number of instances for each class is given in Table 1.

TABLE 1: INSTANCES

                                  Test    Training
Total Instances                   4001    16000
Cyber-Bullying Instances          2429     9750
Non-Cyber-Bullying Instances      1572     6250

2.3 Data Pre-processing

The preprocessing steps were done as follows using the nltk library along with regex; a code sketch of these steps is given at the end of this section.

1) Word tokenization: a token is a single entity that is a building block of a sentence or paragraph. Word tokenization converts our text into a list of separate words.
2) Stop-word filtering is done using nltk.corpus.stopwords.words('english') to fetch a list of stop words in the English dictionary, after which they are removed. Stop words are words such as "the", "a", "an", "in", which are not significant and do not affect the meaning of the data to be interpreted.
3) To remove punctuation, we keep only the characters that are not punctuation, which can be checked using string.punctuation.
4) Stemming: stemming is a process of linguistic normalization which reduces words to their root word. We stem the tokens using nltk.stem.porter.PorterStemmer. For example, "connection", "connected" and "connecting" all get reduced to the common word "connect".
5) Digit removal: we also filtered out any numeric content, as it does not contribute to cyberbullying.
6) The next step was to extract features so that the text can be used with ML algorithms, for which we used the TF-IDF transform from Python's sklearn library. TF-IDF is a statistical measure that evaluates the relevance of a word, calculated by multiplying the number of times the word appears in a document by the inverse document frequency of the word. TF-IDF diminishes the weight (importance) of words that appear in many documents in common, considering them incapable of discerning between documents, rather than simply counting word frequencies as CountVectorizer does.

TABLE 2: TEST AND TRAINING INSTANCES

                                  Twitter
Total Instances                   20001
Cyber-Bullying Instances           7822
Non-Cyber-Bullying Instances      12179

The outcome matrix consists of each document (row) and each word (column), with the importance (weight) computed by tf * idf as the values of the matrix. If a word has a high tf-idf score in a document, it has occurred many times in that document and must be largely absent from the other documents, so it must be a signature word.
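The following is a hedged sketch of the preprocessing and feature-extraction pipeline described above (tokenization, stop-word, punctuation and digit removal, stemming, then TF-IDF). The sample tweets are placeholders of our own, not taken from the dataset.

    import string
    import nltk
    from nltk.corpus import stopwords
    from nltk.stem.porter import PorterStemmer
    from nltk.tokenize import word_tokenize
    from sklearn.feature_extraction.text import TfidfVectorizer

    nltk.download("punkt")       # tokenizer models (newer nltk versions may also need "punkt_tab")
    nltk.download("stopwords")

    stop_words = set(stopwords.words("english"))
    stemmer = PorterStemmer()

    def preprocess(tweet):
        tokens = word_tokenize(tweet.lower())                         # 1) tokenization
        tokens = [t for t in tokens if t not in stop_words]           # 2) stop-word removal
        tokens = [t for t in tokens if t not in string.punctuation]   # 3) punctuation removal
        tokens = [t for t in tokens if not t.isdigit()]               # 5) digit removal
        tokens = [stemmer.stem(t) for t in tokens]                    # 4) stemming
        return " ".join(tokens)

    tweets = ["You are so dumb, nobody likes you!!", "Great game last night 10/10"]
    cleaned = [preprocess(t) for t in tweets]

    # 6) TF-IDF feature extraction
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(cleaned)
    print(X.shape, list(vectorizer.get_feature_names_out())[:10])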

Attribute evaluation was done manually, as can be seen where we printed the top 25 words according to the calculated tf-idf score. Some of the top-ranked words for the dataset were: [hate, fuck, damn, suck, ass, that, lol, im, like, you, it, get, what, no, would, bitch].

2.4 Data Resampling

As the data was skewed, resampling had to be performed on the training data. First the data was split into training and test sets in an 80:20 ratio, and resampling was then performed on the training data.

• As we had ample data to work with, we used oversampling of the minority class. This means that if the majority class had 1,000 examples and the minority class had 100, this strategy would oversample the minority class so that it also has 1,000 examples.
• For oversampling, the RandomOverSampler function from the imblearn package is used for all the "not majority" classes, which in our case was only the single minority class.
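A minimal sketch of this split-then-oversample step; the feature matrix and labels below are synthetic stand-ins for the TF-IDF features and tweet labels, and the variable names are our own.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from imblearn.over_sampling import RandomOverSampler

    # Synthetic stand-in for the TF-IDF matrix and labels (imbalanced 3:1)
    rng = np.random.default_rng(42)
    X = rng.random((200, 50))
    y = np.array([0] * 150 + [1] * 50)

    # 80:20 train/test split, stratified so both classes appear in the test set
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42)

    # Oversample every class that is not the majority class (here, the single
    # minority "cyberbullying" class) so the training data becomes balanced.
    ros = RandomOverSampler(sampling_strategy="not majority", random_state=42)
    X_train_bal, y_train_bal = ros.fit_resample(X_train, y_train)
    print("before:", np.bincount(y_train), "after:", np.bincount(y_train_bal))

Oversampling is applied only to the training portion, so the test set keeps the original class distribution.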

III. DESCRIPTION OF ALGORITHMS

3.1 Logistic Regression

Regression analysis is a predictive modelling technique that analyzes the relation between the target (dependent) variable and the independent variables in a dataset. Regression analysis techniques are used when the target and independent variables show a linear or non-linear relationship with each other and the target variable contains continuous values. Regression analysis involves determining the best-fit line, which is a line that passes through the data points in such a way that the distance of the line from each data point is minimized. Logistic regression is one of the types of regression analysis technique, used when the dependent variable is discrete, for example 0 or 1, or true or false. This means the target variable can have only two values, and a sigmoid curve denotes the relation between the target variable and the independent variables by mapping any real value to a value between 0 and 1. We chose logistic regression as the size of our dataset was large and it had an almost equal occurrence of values in the target variable. Moreover, there was no correlation between the independent variables in the dataset. The classifier was implemented using the sklearn.linear_model package.
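A hedged sketch of how such a classifier can be trained and evaluated on the resampled features, continuing the variables from the resampling sketch above (the hyper-parameters shown are illustrative assumptions, not the exact values used in this work):

    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, classification_report

    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train_bal, y_train_bal)          # train on the balanced training set

    y_pred = clf.predict(X_test)               # evaluate on the untouched test set
    print("accuracy:", accuracy_score(y_test, y_pred))
    print(classification_report(y_test, y_pred))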

3.2 Decision Tree Classifier

A decision tree is constructed by asking a series of questions with respect to the dataset. Each time an answer is received, a follow-up question is asked until a conclusion about the class label of the record is reached. The series of questions and their possible answers can be organised in the form of a decision tree, which is a hierarchical structure consisting of nodes and directed edges. It has three types of nodes: root, internal and leaf nodes. In a decision tree, each leaf node is assigned a class label, while the other nodes contain test conditions that separate records with different characteristics. Using the decision tree algorithm, we start at the tree root and split the data on the feature that results in the largest information gain (IG), i.e. the reduction in uncertainty towards the final decision. In an iterative process, we can repeat this splitting procedure at each child node until the leaves are pure, meaning that the samples at each leaf node all belong to the same class. The classifier was implemented using the sklearn.tree package.

3.3 Random Forest

As its name implies, a Random Forest classifier consists of a large number of individual decision trees that operate as an ensemble. Each individual tree in the random forest produces a class prediction, and the class with the most votes becomes our model's prediction. The low correlation between the individual models is key, as together they can produce ensemble predictions that are more accurate than any of the individual predictions: the trees protect each other from their individual errors. The process of bagging is used to diversify the models, as each individual tree is allowed to randomly sample from the dataset with replacement. The classifier was implemented using the sklearn.ensemble package.
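A brief sketch of training these two tree-based classifiers on the same features, again continuing the variables from the sketches above (the tree depth and number of trees are illustrative assumptions, not the values used in this work):

    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score

    models = {
        "decision_tree": DecisionTreeClassifier(max_depth=10, random_state=42),
        "random_forest": RandomForestClassifier(n_estimators=100, random_state=42),
    }

    for name, model in models.items():
        model.fit(X_train_bal, y_train_bal)
        y_pred = model.predict(X_test)
        print(name, "accuracy:", accuracy_score(y_test, y_pred))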
IV. METHODOLOGY

Figure 4.1: Schematic diagram of detecting cyber bullying.

V. ADVANTAGES

 The cyberbullying detection process is automatic, the time taken for detection is low, and it works in a live environment.
 The latest machine learning models are used for training, so the trained models are accurate.

VI. FUTURE SCOPE

Cyberbullying can come in many forms. We can enhance the detection of cyberbullying by combining texts with videos and images, and we can provide language inputs to detect sarcastic comments. So, making the dataset a little more varied and including many more languages will always be a plus. We can also test the performance of other algorithms, such as the Perceptron, Logistic Regression and Support Vector Machine, and compare their efficiency and accuracy in the future.

VII. CONCLUSION

Cyberbullying is one of the most critical internet crimes, and research has demonstrated its critical impact on the victims. In this paper a novel idea is proposed in which any tweet remark is identified as a cyberbullying comment or not. A comparative study between various supervised algorithms, and additionally between various supervised ensemble methods, is carried out. This can help users by preventing them from becoming victims of the harsh consequences of cyberbullying. Since online bullying is a never-ending problem, the methodologies require constant upgrading and updating to the current situation. Finally, this can even prevent a potential crisis. This work is a foundational step toward developing software tools for social networks to monitor cyberbullying.

REFERENCES

[1] Dev Kathuria, Ishank Nijhawan, Prakhar Bhasin, Kritik Singh, "Cyber Bullying Detection: Identifying Hate Speech using Machine Learning". [Online]. Available: https://raw.githubusercontent.com/kirtiksingh/Cyberbullying-Detection-using-Machine-Learning/main/ProjectReport.pdf
[2] Kavya Shetty, Shravya, Sujatha Kharvi, Ritesh Kumar, "Detection Of Cyberbullying Using Machine Learning Technique". [Online]. Available: https://www.irjet.net/archives/V8/i7/IRJET-V8I7531.pdf
[3] Kelly Reynolds, April Kontostathis, Lynne Edwards, "Using Machine Learning to Detect Cyberbullying", Proceedings of the 10th International Conference on Machine Learning and Applications and Workshops, 2011, doi: 10.1109/ICMLA.2011.152.
[4] "Cyber Bullying Detection Using Machine Learning". [Online]. Available: https://1000projects.org/cyber-bullying-detection-using-machine-learning.html
[5] Arathi Unni, Ranimol K R, Linda Sebastian, Rajalakshmi S, Sissy Siby, "Detecting the Presence of Cyberbullying using Machine Learning", vol. 09, issue 13, 2021.
MACHINE LEARNING AND ITS APPLICATIONS

Sai Santhosh Prem Kumar Ponaji
Regd. No: 208297603046 III B.Tech
Department of Electronics and Communication Engineering, Adikavi Nannaya University,
Rajamahendravaram

ABSTRACT

Machine learning is a modern innovation that has enhanced many industrial and professional processes as well as our daily lives. It is a subset of artificial intelligence (AI) which focuses on using statistical techniques to build intelligent computer systems that learn from available databases. With machine learning, computer systems can take all the customer data and utilise it. It operates on what has been programmed while also adjusting to new conditions or changes: algorithms adapt to data, developing behaviours that were not programmed in advance.

Keywords: Artificial Intelligence, Machine Learning

I. INTRODUCTION

Machine learning is a field of artificial intelligence that deals with the design and development of algorithms that can learn from and make predictions on data. Machine learning is a subset of artificial intelligence.
How machine learning works

UC Berkeley breaks the learning system of a machine learning algorithm into three main parts; a small worked sketch of this loop follows the list.

• A Decision Process: In general, machine learning algorithms are used to make a prediction or classification. Based on some input data, which can be labelled or unlabelled, the algorithm produces an estimate about a pattern in the data.
• An Error Function: An error function evaluates the prediction of the model. If there are known examples, an error function can make a comparison to assess the accuracy of the model.
• A Model Optimization Process: If the model can fit better to the data points in the training set, then the weights are adjusted to reduce the discrepancy between the known examples and the model estimates. The algorithm repeats this "evaluate and optimize" process, updating the weights autonomously until a threshold of accuracy has been met.
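The sketch below is a hedged, minimal illustration of that three-part loop for a one-dimensional linear model: predict (decision process), measure the squared error (error function), then nudge the weights with gradient descent (model optimization). The learning rate and stopping threshold are arbitrary choices of our own.

    import numpy as np

    # Toy data: y is roughly 3*x + 2 with a little noise
    rng = np.random.default_rng(0)
    x = rng.random(100)
    y = 3.0 * x + 2.0 + 0.01 * rng.normal(size=100)

    w, b = 0.0, 0.0            # model parameters (weights)
    learning_rate = 0.1

    for step in range(5000):
        y_hat = w * x + b                      # decision process: make predictions
        error = y_hat - y
        loss = np.mean(error ** 2)             # error function: mean squared error
        if loss < 1e-3:                        # stop once accuracy is good enough
            break
        # model optimization: adjust weights against the gradient of the loss
        w -= learning_rate * np.mean(2 * error * x)
        b -= learning_rate * np.mean(2 * error)

    print(f"learned w={w:.2f}, b={b:.2f}, loss={loss:.4f}")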

II. MACHINE LEARNING METHODS

Machine learning models fall into three primary categories.

1. Supervised machine learning

Supervised learning, also known as supervised machine learning, is defined by its use of labeled datasets to train algorithms to classify data or predict outcomes accurately. As input data is fed into the model, the model adjusts its weights until it has been fitted appropriately. This occurs as part of the cross-validation process to ensure that the model avoids overfitting or underfitting. Supervised learning helps organizations solve a variety of real-world problems at scale, such as classifying spam into a separate folder from your inbox. Some methods used in supervised learning include neural networks, naïve Bayes, linear regression, logistic regression, random forest, and support vector machines (SVM).

2. Unsupervised machine learning

Unsupervised learning, also known as unsupervised machine learning, uses machine learning algorithms to analyze and cluster unlabeled datasets. These algorithms discover hidden patterns or data groupings without the need for human intervention. This method's ability to discover similarities and differences in information makes it ideal for exploratory data analysis, cross-selling strategies, customer segmentation, and image and pattern recognition. It is also used to reduce the number of features in a model through the process of dimensionality reduction. Principal component analysis (PCA) and singular value decomposition (SVD) are two common approaches for this. Other algorithms used in unsupervised learning include neural networks, k-means clustering, and probabilistic clustering methods.

3. Semi-supervised learning

Semi-supervised learning offers a happy medium between supervised and unsupervised learning. During training, it uses a smaller labeled data set to guide classification and feature extraction from a larger, unlabeled data set. Semi-supervised learning can solve the problem of not having enough labeled data for a supervised learning algorithm. It also helps if it is too costly to label enough data.
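To make the unsupervised category above concrete, here is a small hedged sketch that clusters unlabeled points with k-means and then reduces them to two dimensions with PCA; the synthetic data and the choice of three clusters are our own illustrative assumptions.

    from sklearn.datasets import make_blobs
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    # Unlabeled data: 300 points in 5 dimensions around 3 hidden centres
    X, _ = make_blobs(n_samples=300, n_features=5, centers=3, random_state=7)

    # k-means discovers the groupings without any labels
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=7)
    cluster_ids = kmeans.fit_predict(X)

    # PCA performs dimensionality reduction, e.g. for visualisation or lighter models
    X_2d = PCA(n_components=2).fit_transform(X)

    print("points per cluster:", [int((cluster_ids == k).sum()) for k in range(3)])
    print("reduced shape:", X_2d.shape)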

III. COMMON MACHINE LEARNING ALGORITHMS

A number of machine learning algorithms are commonly used. These include the following (a short code sketch for one of them, linear regression, is given after the list):

• Neural networks: Neural networks simulate the way the human brain works, with a huge number of linked processing nodes. Neural networks are good at recognizing patterns and play an important role in applications including natural language translation, image recognition, speech recognition, and image creation.
• Linear regression: This algorithm is used to predict numerical values, based on a linear relationship between different values. For example, the technique could be used to predict house prices based on historical data for the area.
• Logistic regression: This supervised learning algorithm makes predictions for categorical response variables, such as "yes/no" answers to questions. It can be used for applications such as classifying spam and quality control on a production line.
• Clustering: Using unsupervised learning, clustering algorithms can identify patterns in data so that it can be grouped. Computers can help data scientists by identifying differences between data items that humans have overlooked.
• Decision trees: Decision trees can be used both for predicting numerical values (regression) and for classifying data into categories. Decision trees use a branching sequence of linked decisions that can be represented with a tree diagram. One of the advantages of decision trees is that they are easy to validate and audit, unlike the black box of the neural network.
• Random forests: In a random forest, the machine learning algorithm predicts a value or category by combining the results from a number of decision trees.
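A hedged sketch of the linear regression case mentioned above, predicting a house price from its floor area; the numbers are invented purely for illustration.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Historical data for an area: floor area in square metres -> price
    area_sqm = np.array([[50], [70], [90], [120], [150]])
    price = np.array([110_000, 150_000, 185_000, 240_000, 300_000])

    model = LinearRegression()
    model.fit(area_sqm, price)

    # Predict the price of a 100 square-metre house
    print("predicted price:", int(model.predict([[100]])[0]))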


IV. CHALLENGES OF MACHINE LEARNING

As machine learning technology has developed, it has certainly made our lives easier. However, implementing machine learning in businesses has also raised a number of ethical concerns about AI technologies. Some of these include:
4.1 Technological singularity

While this topic garners a lot of public attention, many researchers are not concerned with the idea of AI surpassing human intelligence in the near future. Technological singularity is also referred to as strong AI or superintelligence. Philosopher Nick Bostrom defines superintelligence as "any intellect that vastly outperforms the best human brains in practically every field, including scientific creativity, general wisdom, and social skills." Despite the fact that superintelligence is not imminent in society, the idea of it raises some interesting questions as we consider the use of autonomous systems, like self-driving cars. It is unrealistic to think that a driverless car would never have an accident, but who is responsible and liable under those circumstances? Should we still develop autonomous vehicles, or do we limit this technology to semi-autonomous vehicles which help people drive safely? The jury is still out on this, but these are the types of ethical debates that are occurring as new, innovative AI technology develops.

4.2 AI impact on jobs

While a lot of public perception of artificial intelligence centers around job losses, this concern should probably be reframed. With every disruptive new technology, we see that the market demand for specific job roles shifts. For example, when we look at the automotive industry, many manufacturers, like GM, are shifting to focus on electric vehicle production to align with green initiatives. The energy industry isn't going away, but the source of energy is shifting from a fuel economy to an electric one. In a similar way, artificial intelligence will shift the demand for jobs to other areas. There will need to be individuals to help manage AI systems, and there will still need to be people to address more complex problems within the industries most likely to be affected by job demand shifts, such as customer service. The biggest challenge with artificial intelligence and its effect on the job market will be helping people to transition to new roles that are in demand.

4.3 Privacy

Privacy tends to be discussed in the context of data privacy, data protection, and data security. These concerns have allowed policymakers to make more strides in recent years. For example, in 2016, GDPR legislation was created to protect the personal data of people in the European Union and European Economic Area, giving individuals more control of their data. In the United States, individual states are developing policies, such as the California Consumer Privacy Act (CCPA), which was introduced in 2018 and requires businesses to inform consumers about the collection of their data. Legislation such as this has forced companies to rethink how they store and use personally identifiable information (PII). As a result, investments in security have become an increasing priority for businesses as they seek to eliminate any vulnerabilities and opportunities for surveillance, hacking, and cyberattacks.

V. Applications of Machine Learning in the real world

Machine learning is relevant in many fields and industries, and has the capability to grow over time. Here are six real-life examples of how machine learning is being used:

a. Image recognition
b. Speech recognition
c. Medical diagnosis
d. Statistical arbitrage
e. Predictive analytics
f. Extraction

5.1 Image recognition

Image recognition is a well-known and widespread example of machine learning in the real world. It can identify an object in a digital image, based on the intensity of the pixels in black-and-white or colour images.

Real-world examples of image recognition:

 Label an x-ray as cancerous or not
 Assign a name to a photographed face ("tagging" on social media)
 Recognise handwriting by segmenting a single letter into smaller images

Machine learning is also frequently used for facial recognition within an image. Using a database of people, the system can identify commonalities and match them to faces. This is often used in law enforcement.
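A hedged, minimal sketch of image recognition in this spirit, classifying the small handwritten-digit images that ship with scikit-learn from their raw pixel intensities (the model choice is an illustrative assumption):

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    digits = load_digits()              # 8x8 grey-scale images, flattened to 64 pixel intensities
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0)

    clf = LogisticRegression(max_iter=5000)
    clf.fit(X_train, y_train)           # learn digit classes from pixel intensities

    print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))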

5.2 Speech recognition

Machine learning can translate speech into text. Certain software applications can convert live voice and recorded speech into a text file. The speech can also be segmented by intensities on time-frequency bands.

Real-world examples of speech recognition:

 Voice search
 Voice dialling
 Appliance control

Some of the most common uses of speech recognition software are devices like Google Home or Amazon Alexa.

5.3 Medical diagnosis

Machine learning can help with the diagnosis of diseases. Many physicians use chatbots with speech recognition capabilities to discern patterns in symptoms.

Real-world examples of medical diagnosis:

 Assisting in formulating a diagnosis or recommending a treatment option
 Oncology and pathology use machine learning to recognise cancerous tissue
 Analysing bodily fluids

In the case of rare diseases, the joint use of facial recognition software and machine learning helps scan patient photos and identify phenotypes that correlate with rare genetic diseases.

5.4 Statistical arbitrage

Arbitrage is an automated trading strategy that is used in finance to manage a large volume of securities. The strategy uses a trading algorithm to analyse a set of securities using economic variables and correlations.

Real-world examples of statistical arbitrage:

 Algorithmic trading which analyses a market microstructure
 Analysing large data sets
 Identifying real-time arbitrage opportunities
 Machine learning optimises the arbitrage strategy to enhance results
5.5 Predictive analytics

Machine learning can classify available data into groups, which are then defined by rules set by analysts. When the classification is complete, the analysts can calculate the probability of a fault.

Real-world examples of predictive analytics:

 Predicting whether a transaction is fraudulent or legitimate
 Improving prediction systems to calculate the possibility of a fault

Predictive analytics is one of the most promising examples of machine learning; it is applicable to everything from product development to real estate pricing. Tools such as Salesforce's Personalization Builder use predictive analytics and modelling to understand each customer's preferences.

5.6 Extraction

Machine learning can extract structured information from unstructured data. Organisations amass huge volumes of data from customers, and a machine learning algorithm can automate the process of annotating datasets for predictive analytics tools. Feature extraction is a part of the dimensionality reduction process, in which an initial set of raw data is divided and reduced to more manageable groups so that it is easier to process. The most important characteristic of these large data sets is that they have a large number of variables. Typically, these processes are tedious, but machine learning can track and extract information to obtain billions of data samples.

VI. CONCLUSION

From this article we have learned that machine learning is a field of artificial intelligence that deals with the design and development of algorithms that can learn from and make predictions on data. The aim of machine learning is to automate analytical model building and enable computers to learn from data without being explicitly programmed for a particular model.
REFERENCES

[1] https://www.ibm.com/cloud/learn/machine-learning
[2] https://www.salesforce.com/eu/blog/2020/06/real-world-examples-of-machine-learning.html
