
POLYTECHNIC UNIVERSITY OF TIRANA

FACULTY OF INFORMATION TECHNOLOGY

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

ENGLISH COURSE
TOPIC: ARTIFICIAL INTELLIGENCE AND ROBOTICS
GROUP: D

WORKED BY: KEJSI CARA
ACCEPTED BY: MSC. VANINA KANINI
Artificial intelligence
Artificial intelligence (AI) is intelligence—perceiving, synthesizing, and inferring information—demonstrated by machines, as opposed to intelligence displayed by animals, including humans. Example tasks include speech recognition, computer vision, translation between natural languages, and other mappings of inputs to outputs.
AI applications include advanced web search engines (e.g., Google Search), recommendation
systems (used by YouTube, Amazon and Netflix), understanding human speech (such
as Siri and Alexa), self-driving cars (e.g., Waymo), automated decision-making and competing at
the highest level in strategic game systems (such as chess and Go). As machines become
increasingly capable, tasks considered to require "intelligence" are often removed from the
definition of AI, a phenomenon known as the AI effect. For instance, optical character
recognition is frequently excluded from things considered to be AI, having become a routine
technology.
Artificial intelligence was founded as an academic discipline in 1956, and in the years since has
experienced several waves of optimism, followed by disappointment and the loss of funding
(known as an "AI winter"), followed by new approaches, success and renewed funding. AI
research has tried and discarded many different approaches since its founding, including
simulating the brain, modeling human problem solving, formal logic, large databases of
knowledge and imitating animal behavior. In the first decades of the 21st century, highly
mathematical-statistical machine learning has dominated the field, and this technique has proved
highly successful, helping to solve many challenging problems throughout industry and
academia.
The various sub-fields of AI research are centered around particular goals and the use of
particular tools. The traditional goals of AI research include reasoning, knowledge
representation, planning, learning, natural language processing, perception, and the ability to
move and manipulate objects. General intelligence (the ability to solve an arbitrary problem) is
among the field's long-term goals. To solve these problems, AI researchers have adapted and
integrated a wide range of problem-solving techniques – including search and mathematical
optimization, formal logic, artificial neural networks, and methods based
on statistics, probability and economics. AI also draws upon computer
science, psychology, linguistics, philosophy, and many other fields.
The field was founded on the assumption that human intelligence "can be so precisely described
that a machine can be made to simulate it". This raised philosophical arguments about the mind
and the ethical consequences of creating artificial beings endowed with human-like intelligence;
these issues have previously been explored by myth, fiction and philosophy since
antiquity. Computer scientists and philosophers have since suggested that AI may become
an existential risk to humanity if its rational capacities are not steered towards beneficial goals.
History
Artificial beings with intelligence appeared as storytelling devices in antiquity, and have been
common in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. These characters
and their fates raised many of the same issues now discussed in the ethics of artificial
intelligence.
The study of mechanical or "formal" reasoning began with philosophers and mathematicians in
antiquity. The study of mathematical logic led directly to Alan Turing's theory of computation,
which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate
any conceivable act of mathematical deduction. This insight that digital computers can simulate
any process of formal reasoning is known as the Church–Turing thesis. This, along with
concurrent discoveries in neurobiology, information theory and cybernetics, led researchers to
consider the possibility of building an electronic brain. The first work that is now generally
recognized as AI was McCulloch and Pitts' 1943 formal design for Turing-complete "artificial
neurons".
By the 1950s, two visions for how to achieve machine intelligence emerged. One vision, known
as Symbolic AI or GOFAI, was to use computers to create a symbolic representation of the
world and systems that could reason about the world. Proponents included Allen Newell, Herbert
A. Simon, and Marvin Minsky. Closely associated with this approach was the "heuristic
search" approach, which likened intelligence to a problem of exploring a space of possibilities
for answers. The second vision, known as the connectionist approach, sought to achieve
intelligence through learning. Proponents of this approach, most prominently Frank Rosenblatt, sought to connect networks of perceptrons in ways inspired by the connections between neurons. James Manyika and others have compared the two approaches to the mind (Symbolic AI) and the brain (connectionist). Manyika argues that symbolic approaches dominated the push for artificial intelligence in this period, due in part to their connection to the intellectual traditions of Descartes, Boole, Gottlob Frege, Bertrand Russell, and others. Connectionist approaches based
on cybernetics or artificial neural networks were pushed to the background but have gained new
prominence in recent decades.
The field of AI research was born at a workshop at Dartmouth College in 1956. The attendees became the founders and leaders of AI research. They and their students produced programs that the press described as "astonishing": computers were learning checkers strategies, solving word
problems in algebra, proving logical theorems and speaking English. By the middle of the 1960s,
research in the U.S. was heavily funded by the Department of Defense and laboratories had been
established around the world.
Researchers in the 1960s and the 1970s were convinced that symbolic approaches would
eventually succeed in creating a machine with artificial general intelligence and considered this
the goal of their field. Herbert Simon predicted, "machines will be capable, within twenty years,
of doing any work a man can do". Marvin Minsky agreed, writing, "within a generation ... the
problem of creating 'artificial intelligence' will substantially be solved". They had failed to
recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response
to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more
productive projects, both the U.S. and British governments cut off exploratory research in AI.
The next few years would later be called an "AI winter", a period when obtaining funding for AI
projects was difficult.
In the early 1980s, AI research was revived by the commercial success of expert systems, a form
of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the
market for AI had reached over a billion dollars. At the same time, Japan's fifth generation
computer project inspired the U.S. and British governments to restore funding for academic
research. However, beginning with the collapse of the Lisp Machine market in 1987, AI once
again fell into disrepute, and a second, longer-lasting winter began.
Many researchers began to doubt that the symbolic approach would be able to imitate all the
processes of human cognition, especially perception, robotics, learning and pattern recognition.
A number of researchers began to look into "sub-symbolic" approaches to specific AI problems.
Robotics researchers, such as Rodney Brooks, rejected symbolic AI and focused on the basic
engineering problems that would allow robots to move, survive, and learn their
environment. Interest in neural networks and "connectionism" was revived by Geoffrey
Hinton, David Rumelhart and others in the middle of the 1980s. Soft computing tools were
developed in the 1980s, such as neural networks, fuzzy systems, Grey system
theory, evolutionary computation and many tools drawn from statistics or mathematical
optimization.
AI gradually restored its reputation in the late 1990s and early 21st century by finding specific
solutions to specific problems. The narrow focus allowed researchers to produce verifiable
results, exploit more mathematical methods, and collaborate with other fields (such
as statistics, economics and mathematics). By 2000, solutions developed by AI researchers were
being widely used, although in the 1990s they were rarely described as "artificial intelligence".
Faster computers, algorithmic improvements, and access to large amounts of data enabled
advances in machine learning and perception; data-hungry deep learning methods started to
dominate accuracy benchmarks around 2012. According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from "sporadic usage" in 2012 to more than 2,700 projects. He
attributes this to an increase in affordable neural networks, due to a rise in cloud computing
infrastructure and to an increase in research tools and datasets. In a 2017 survey, one in five
companies reported they had "incorporated AI in some offerings or processes". The amount of
research into AI (measured by total publications) increased by 50% in the years 2015–2019.
Numerous academic researchers became concerned that AI was no longer pursuing the original
goal of creating versatile, fully intelligent machines. Much of current research involves statistical AI, which is overwhelmingly used to solve specific problems; even highly successful techniques such as deep learning fall under this narrow focus. This concern has led to the subfield of artificial general intelligence (or "AGI"), which had several well-funded institutions by the 2010s.

Tools
Search and optimization
AI can solve many problems by intelligently searching through many possible
solutions. Reasoning can be reduced to performing a search. For example, logical proof can be
viewed as searching for a path that leads from premises to conclusions, where each step is the
application of an inference rule. Planning algorithms search through trees of goals and subgoals,
attempting to find a path to a target goal, a process called means-ends
analysis. Robotics algorithms for moving limbs and grasping objects use local
searches in configuration space.
Simple exhaustive searches are rarely sufficient for most real-world problems: the search
space (the number of places to search) quickly grows to astronomical numbers. The result is a
search that is too slow or never completes. The solution, for many problems, is to use "heuristics" or "rules of thumb" that prioritize choices in favor of those more likely to reach a goal, and to do so in a smaller number of steps. In some search methodologies, heuristics can also serve to eliminate some choices unlikely to lead to a goal (called "pruning the search tree"). Heuristics supply the program with a "best guess" for the path on which the solution lies. Heuristics narrow the search for solutions to a smaller set of candidates.
A very different kind of search came to prominence in the 1990s, based on the mathematical
theory of optimization. For many problems, it is possible to begin the search with some form of a
guess and then refine the guess incrementally until no more refinements can be made. These
algorithms can be visualized as blind hill climbing: we begin the search at a random point on the
landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top.
Other related optimization algorithms include random optimization, beam
search and metaheuristics like simulated annealing. Evolutionary computation uses a form of optimization search: such algorithms may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Classic evolutionary algorithms include genetic
algorithms, gene expression programming, and genetic programming. Alternatively, distributed
search processes can coordinate via swarm intelligence algorithms. Two popular swarm
algorithms used in search are particle swarm optimization and ant colony optimization.
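To make the optimization-style search above concrete, here is a minimal hill-climbing sketch in Python; the objective function, step size, and starting range are invented for illustration rather than taken from any particular system.

import random

def hill_climb(objective, start, step=0.1, max_iters=1000):
    """Greedy local search: repeatedly move to the better neighbor."""
    current = start
    for _ in range(max_iters):
        # Propose one neighbor on each side of the current guess.
        best = max(current - step, current + step, key=objective)
        if objective(best) <= objective(current):
            break  # No uphill move is available: a local maximum.
        current = best
    return current

# Toy objective with a single peak at x = 2 (an assumed example).
f = lambda x: -(x - 2) ** 2
print(hill_climb(f, start=random.uniform(-10.0, 10.0)))  # close to 2.0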

Logic
Logic is used for knowledge representation and problem-solving, but it can be applied to other
problems as well. For example, the satplan algorithm uses logic for planning and inductive logic
programming is a method for learning.
Several different forms of logic are used in AI research. Propositional logic involves truth
functions such as "or" and "not". First-order logic adds quantifiers and predicates and can
express facts about objects, their properties, and their relations with each other. Fuzzy
logic assigns a "degree of truth" (between 0 and 1) to vague statements such as "Alice is old" (or
rich, or tall, or hungry), that are too linguistically imprecise to be completely true or
false. Default logics, non-monotonic logics and circumscription are forms of logic designed to
help with default reasoning and the qualification problem. Several extensions of logic have been
designed to handle specific domains of knowledge, such as description logics; situation
calculus, event calculus and fluent calculus (for representing events and time); causal
calculus; belief calculus (belief revision); and modal logics. Logics to model contradictory or
inconsistent statements arising in multi-agent systems have also been designed, such
as paraconsistent logics.
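As a small illustration of how fuzzy logic assigns degrees of truth, the sketch below uses the common min/max (Zadeh) operators for "and", "or" and "not"; the degrees given to the example statements are made up.

def fuzzy_and(a, b):   # degree of truth of "A and B"
    return min(a, b)

def fuzzy_or(a, b):    # degree of truth of "A or B"
    return max(a, b)

def fuzzy_not(a):      # degree of truth of "not A"
    return 1.0 - a

# "Alice is old" is 0.7 true, "Alice is rich" is 0.2 true (assumed values).
old, rich = 0.7, 0.2
print(fuzzy_and(old, rich))             # 0.2
print(fuzzy_or(old, fuzzy_not(rich)))   # 0.8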
Probabilistic methods for uncertain reasoning
Many problems in AI (including in reasoning, planning, learning, perception, and robotics)
require the agent to operate with incomplete or uncertain information. AI researchers have
devised a number of tools to solve these problems using methods from probability theory and
economics. Bayesian networks are a very general tool that can be used for various problems,
including reasoning (using the Bayesian inference algorithm), learning (using the expectation-
maximization algorithm), planning (using decision networks) and perception (using dynamic
Bayesian networks). Probabilistic algorithms can also be used for filtering, prediction, smoothing
and finding explanations for streams of data, helping perception systems to analyze processes
that occur over time (e.g., hidden Markov models or Kalman filters).
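In its simplest setting, the Bayesian inference mentioned above reduces to Bayes' rule, P(H|E) = P(E|H) * P(H) / P(E). A minimal sketch with invented prior and likelihood values:

# Two-hypothesis Bayes' rule on made-up numbers.
p_h = 0.01             # prior probability that hypothesis H is true
p_e_given_h = 0.9      # probability of the evidence if H is true
p_e_given_not_h = 0.1  # probability of the evidence if H is false

# Total probability of the evidence, then the posterior P(H | E).
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
posterior = p_e_given_h * p_h / p_e
print(round(posterior, 3))  # 0.083: the evidence raises belief from 1% to ~8%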
A key concept from the science of economics is "utility", a measure of how valuable something
is to an intelligent agent. Precise mathematical tools have been developed that analyze how an
agent can make choices and plan, using decision theory, decision analysis, and information value
theory. These tools include models such as Markov decision processes, dynamic decision
networks, game theory and mechanism design.
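A toy illustration of decision-theoretic choice: pick the action with the highest expected utility. The actions, outcome probabilities, and utility values below are assumptions made purely for the example.

# Expected-utility maximization over two invented actions.
actions = {
    "take umbrella":  {"rain": 8, "sun": 5},    # utility of each outcome
    "leave umbrella": {"rain": 0, "sun": 10},
}
p_outcome = {"rain": 0.3, "sun": 0.7}

def expected_utility(utilities):
    return sum(p_outcome[o] * u for o, u in utilities.items())

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # "leave umbrella" given these numbers (EU 7.0 vs 5.9)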
Classifiers and statistical learning methods
The simplest AI applications can be divided into two types: classifiers ("if shiny then diamond")
and controllers ("if diamond then pick up"). Controllers do, however, also classify conditions
before inferring actions, and therefore classification forms a central part of many AI
systems. Classifiers are functions that use pattern matching to determine the closest match. They
can be tuned according to examples, making them very attractive for use in AI. These examples
are known as observations or patterns. In supervised learning, each pattern belongs to a certain
predefined class. A class is a decision that has to be made. All the observations combined with
their class labels are known as a data set. When a new observation is received, that observation is
classified based on previous experience.
A classifier can be trained in various ways; there are many statistical and machine
learning approaches. The decision tree is the simplest and most widely used symbolic machine
learning algorithm. K-nearest neighbor algorithm was the most widely used analogical AI until
the mid-1990s. Kernel methods such as the support vector machine (SVM) displaced k-nearest
neighbor in the 1990s. The naive Bayes classifier is reportedly the "most widely used learner" at
Google, due in part to its scalability. Neural networks are also used for classification.
Classifier performance depends greatly on the characteristics of the data to be classified, such as
the dataset size, distribution of samples across classes, dimensionality, and the level of noise.
Model-based classifiers perform well if the assumed model is an extremely good fit for the actual
data. Otherwise, if no matching model is available, and if accuracy (rather than speed or
scalability) is the sole concern, conventional wisdom is that discriminative classifiers (especially
SVM) tend to be more accurate than model-based classifiers such as "naive Bayes" on most
practical data sets.
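As a concrete instance of one classifier named above, here is a minimal k-nearest-neighbor sketch in Python; the training observations, their labels, and the query point are all invented.

from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Label a query point by majority vote of its k nearest training points."""
    # train is a list of ((x, y), label) pairs.
    by_distance = sorted(train, key=lambda pair: math.dist(pair[0], query))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Invented data set: two clusters labeled "dull" and "shiny".
data = [((1, 1), "dull"),  ((1, 2), "dull"),  ((2, 1), "dull"),
        ((8, 8), "shiny"), ((8, 9), "shiny"), ((9, 8), "shiny")]
print(knn_classify(data, (7.5, 8.5)))  # "shiny"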

Why is artificial intelligence important?
AI is important because it can give enterprises insights into their operations that they may not
have been aware of previously and because, in some cases, AI can perform tasks better than
humans. Particularly when it comes to repetitive, detail-oriented tasks like analyzing large
numbers of legal documents to ensure relevant fields are filled in properly, AI tools often
complete jobs quickly and with relatively few errors.

This has helped fuel an explosion in efficiency and opened the door to entirely new business
opportunities for some larger enterprises. Prior to the current wave of AI, it would have been
hard to imagine using computer software to connect riders to taxis, but today Uber has become
one of the largest companies in the world by doing just that. It utilizes sophisticated machine
learning algorithms to predict when people are likely to need rides in certain areas, which helps
proactively get drivers on the road before they're needed. As another example, Google has
become one of the largest players for a range of online services by using machine learning to
understand how people use their services and then improving them. In 2017, the company's
CEO, Sundar Pichai, pronounced that Google would operate as an "AI first" company.

Today's largest and most successful enterprises have used AI to improve their operations and gain an advantage over their competitors.

What are the advantages and disadvantages of artificial intelligence?
Artificial neural networks and deep learning artificial intelligence technologies are quickly
evolving, primarily because AI processes large amounts of data much faster and makes
predictions more accurately than humanly possible.

While the huge volume of data being created on a daily basis would bury a human researcher, AI
applications that use machine learning can take that data and quickly turn it into actionable
information. As of this writing, the primary disadvantage of using AI is that it is expensive to
process the large amounts of data that AI programming requires.

Advantages

 Good at detail-oriented jobs;
 Reduced time for data-heavy tasks;
 Delivers consistent results;
 AI-powered virtual agents are always available.

Disadvantages

 Expensive;
 Requires deep technical expertise;
 Limited supply of qualified workers to build AI tools;
 Only knows what it's been shown; and
 Lack of ability to generalize from one task to another.

Strong AI vs. weak AI


AI can be categorized as either weak or strong.

 Weak AI, also known as narrow AI, is an AI system that is designed and trained to complete
a specific task. Industrial robots and virtual personal assistants, such as Apple's Siri, use
weak AI.
 Strong AI, also known as artificial general intelligence (AGI), describes programming that
can replicate the cognitive abilities of the human brain. When presented with an unfamiliar
task, a strong AI system can use fuzzy logic to apply knowledge from one domain to another
and find a solution autonomously. In theory, a strong AI program should be able to pass both
a Turing Test and the Chinese room test.

What are the 4 types of artificial intelligence?
Arend Hintze, an assistant professor of integrative biology and computer science and engineering
at Michigan State University, explained in a 2016 article that AI can be categorized into four
types, beginning with the task-specific intelligent systems in wide use today and progressing to
sentient systems, which do not yet exist. The categories are as follows:

 Type 1: Reactive machines. These AI systems have no memory and are task specific. An
example is Deep Blue, the IBM chess program that beat Garry Kasparov in the 1990s. Deep
Blue can identify pieces on the chessboard and make predictions, but because it has no
memory, it cannot use past experiences to inform future ones.
 Type 2: Limited memory. These AI systems have memory, so they can use past experiences
to inform future decisions. Some of the decision-making functions in self-driving cars are
designed this way.
 Type 3: Theory of mind. Theory of mind is a psychology term. When applied to AI, it
means that the system would have the social intelligence to understand emotions. This type
of AI will be able to infer human intentions and predict behavior, a necessary skill for AI
systems to become integral members of human teams.
 Type 4: Self-awareness. In this category, AI systems have a sense of self, which gives them
consciousness. Machines with self-awareness understand their own current state. This type of
AI does not yet exist.

What are examples of AI technology and how is it used today?
AI is incorporated into a variety of different types of technology. Here are six examples:

 Automation. When paired with AI technologies, automation tools can expand the volume
and types of tasks performed. An example is robotic process automation (RPA), a type of
software that automates repetitive, rules-based data processing tasks traditionally done by
humans. When combined with machine learning and emerging AI tools, RPA can automate
bigger portions of enterprise jobs, enabling RPA's tactical bots to pass along intelligence
from AI and respond to process changes.
 Machine learning. This is the science of getting a computer to act without being explicitly programmed. Deep learning is a subset of machine learning that, in very simple terms, can be thought of as the automation of predictive analytics. There are three types of machine learning algorithms (a minimal supervised-learning sketch follows this list):
o Supervised learning. Data sets are labeled so that patterns can be detected and used to
label new data sets.
o Unsupervised learning. Data sets aren't labeled and are sorted according to similarities
or differences.
o Reinforcement learning. Data sets aren't labeled but, after performing an action or
several actions, the AI system is given feedback.
 Machine vision. This technology gives a machine the ability to see. Machine vision captures
and analyzes visual information using a camera, analog-to-digital conversion and digital
signal processing. It is often compared to human eyesight, but machine vision isn't bound by
biology and can be programmed to see through walls, for example. It is used in a range of
applications from signature identification to medical image analysis. Computer vision, which
is focused on machine-based image processing, is often conflated with machine vision.
 Natural language processing (NLP). This is the processing of human language by a
computer program. One of the older and best-known examples of NLP is spam detection,
which looks at the subject line and text of an email and decides if it's junk. Current
approaches to NLP are based on machine learning. NLP tasks include text translation,
sentiment analysis and speech recognition.
 Robotics. This field of engineering focuses on the design and manufacturing of robots.
Robots are often used to perform tasks that are difficult for humans to perform or perform
consistently. For example, robots are used in assembly lines for car production or by NASA
to move large objects in space. Researchers are also using machine learning to build robots
that can interact in social settings.
 Self-driving cars. Autonomous vehicles use a combination of computer vision, image
recognition and deep learning to build automated skill at piloting a vehicle while staying in a
given lane and avoiding unexpected obstructions, such as pedestrians.
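As mentioned in the machine learning entry above, here is a minimal supervised-learning sketch: fitting a line to labeled data by ordinary least squares, so that labels can be predicted for new inputs. The data points are invented.

# Labeled training data (invented), roughly following y = 2x.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

# Closed-form least-squares fit of y = slope * x + intercept.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

print(round(slope, 2), round(intercept, 2))  # ~1.99 and ~0.09
print(slope * 6 + intercept)                 # predicted label for unseen x = 6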

What are the applications of AI?


Artificial intelligence has made its way into a wide variety of markets. Here are nine examples.
AI in healthcare. The biggest bets are on improving patient outcomes and reducing costs.
Companies are applying machine learning to make better and faster diagnoses than humans. One
of the best-known healthcare technologies is IBM Watson. It understands natural language and
can respond to questions asked of it. The system mines patient data and other available data
sources to form a hypothesis, which it then presents with a confidence scoring schema. Other AI
applications include using online virtual health assistants and chatbots to help patients and
healthcare customers find medical information, schedule appointments, understand the billing
process and complete other administrative processes. An array of AI technologies is also being
used to predict, fight and understand pandemics such as COVID-19.

AI in business. Machine learning algorithms are being integrated into analytics and customer
relationship management (CRM) platforms to uncover information on how to better serve
customers. Chatbots have been incorporated into websites to provide immediate service to
customers. Automation of job positions has also become a talking point among academics and IT
analysts.

AI in education. AI can automate grading, giving educators more time. It can assess students
and adapt to their needs, helping them work at their own pace. AI tutors can provide additional
support to students, ensuring they stay on track. And it could change where and how students
learn, perhaps even replacing some teachers.

AI in finance. AI in personal finance applications, such as Intuit Mint or TurboTax, is disrupting financial institutions. Applications such as these collect personal data and provide financial
advice. Other programs, such as IBM Watson, have been applied to the process of buying a
home. Today, artificial intelligence software performs much of the trading on Wall Street.
AI in law. The discovery process -- sifting through documents -- in law is often overwhelming
for humans. Using AI to help automate the legal industry's labor-intensive processes is saving
time and improving client service. Law firms are using machine learning to describe data and
predict outcomes, computer vision to classify and extract information from documents and
natural language processing to interpret requests for information.

AI in manufacturing. Manufacturing has been at the forefront of incorporating robots into the workflow. For example, industrial robots that were at one time programmed to perform single tasks and were separated from human workers increasingly function as cobots: smaller, multitasking robots that collaborate with humans and take on responsibility for more parts of the job in warehouses, factory floors and other workspaces.

AI in banking. Banks are successfully employing chatbots to make their customers aware of
services and offerings and to handle transactions that don't require human intervention. AI virtual
assistants are being used to improve and cut the costs of compliance with banking regulations.
Banking organizations are also using AI to improve their decision-making for loans, and to set
credit limits and identify investment opportunities.

AI in transportation. In addition to AI's fundamental role in operating autonomous vehicles, AI
technologies are used in transportation to manage traffic, predict flight delays, and make ocean
shipping safer and more efficient.

Security. AI and machine learning are at the top of the buzzword list security vendors use today
to differentiate their offerings. Those terms also represent truly viable technologies.
Organizations use machine learning in security information and event management (SIEM)
software and related areas to detect anomalies and identify suspicious activities that indicate
threats. By analyzing data and using logic to identify similarities to known malicious code, AI
can provide alerts to new and emerging attacks much sooner than human employees and
previous technology iterations. The maturing technology is playing a big role in helping
organizations fight off cyber attacks.

Robotics
Robotics is an interdisciplinary branch of computer science and engineering. Robotics involves
the design, construction, operation, and use of robots. The goal of robotics is to design machines
that can help and assist humans. Robotics integrates fields of mechanical engineering, electrical
engineering, information engineering, mechatronics, electronics, bioengineering, computer
engineering, control engineering, software engineering, mathematics, etc.
Robotics develops machines that can substitute for humans and replicate human actions. Robots
can be used in many situations for many purposes, but today many are used in dangerous
environments (including inspection of radioactive materials, bomb detection and deactivation),
manufacturing processes, or where humans cannot survive (e.g., in space, underwater, in high
heat, and clean up and containment of hazardous materials and radiation). Robots can take any
form, but some are made to resemble humans in appearance. This is claimed to help in the acceptance of robots for certain replicative behaviors that are usually performed by people.
Such robots attempt to replicate walking, lifting, speech, cognition, or any other human activity.
Many of today's robots are inspired by nature, contributing to the field of bio-inspired robotics.
Certain robots require user input to operate, while other robots function autonomously. The
concept of creating robots that can operate autonomously dates back to classical times, but
research into the functionality and potential uses of robots did not grow substantially until the
20th century. Throughout history, it has been frequently assumed by various scholars, inventors,
engineers, and technicians that robots will one day be able to mimic human behavior and manage
tasks in a human-like fashion. Today, robotics is a rapidly growing field, as technological
advances continue; researching, designing, and building new robots serve various practical
purposes, whether domestically, commercially, or militarily. Many robots are built to do jobs
that are hazardous to people, such as defusing bombs, finding survivors in unstable ruins, and
exploring mines and shipwrecks. Robotics is also used
in STEM (science, technology, engineering, and mathematics) as a teaching aid.

Etymology
The word robotics was derived from the word robot, which was introduced to the public
by Czech writer Karel Čapek in his play R.U.R. (Rossum's Universal Robots), which was
published in 1920. The word robot comes from the Slavic word robota, which means work/job.
The play begins in a factory that makes artificial people called robots, creatures who can be
mistaken for humans – very similar to the modern ideas of androids. Karel Čapek himself did not
coin the word. He wrote a short letter in reference to an etymology in the Oxford English
Dictionary in which he named his brother Josef Čapek as its actual originator.
According to the Oxford English Dictionary, the word robotics was first used in print by Isaac
Asimov, in his science fiction short story "Liar!", published in May 1941 in Astounding Science
Fiction. Asimov was unaware that he was coining the term; since the science and technology of
electrical devices is electronics, he assumed robotics already referred to the science and
technology of robots. In some of Asimov's other works, he states that the first use of the
word robotics was in his short story Runaround (Astounding Science Fiction, March
1942), where he introduced his concept of The Three Laws of Robotics. However, the original
publication of "Liar!" predates that of "Runaround" by ten months, so the former is generally
cited as the word's origin.

History
In 1948, Norbert Wiener formulated the principles of cybernetics, the basis of practical robotics.
Fully autonomous robots only appeared in the second half of the 20th century. The first digitally
operated and programmable robot, the Unimate, was installed in 1961 to lift hot pieces of metal
from a die casting machine and stack them. Commercial and industrial robots are widespread
today and used to perform jobs more cheaply, more accurately, and more reliably than humans.
They are also employed in some jobs which are too dirty, dangerous, or dull to be suitable for
humans. Robots are widely used in manufacturing, assembly, packing and packaging, mining,
transport, earth and space exploration, surgery, weaponry, laboratory research, safety, and
the mass production of consumer and industrial goods.

Robotic aspects
There are many types of robots; they are used in many different environments and for many
different uses. Although being very diverse in application and form, they all share three basic
similarities when it comes to their construction:

1. Robots all have some kind of mechanical construction, a frame, form or shape designed
to achieve a particular task. For example, a robot designed to travel across heavy dirt or
mud might use caterpillar tracks. The mechanical aspect is mostly the creator's solution
to completing the assigned task and dealing with the physics of the environment around
it. Form follows function.

2. Robots have electrical components that power and control the machinery. For example,
the robot with caterpillar tracks would need some kind of power to move the tracker
treads. That power comes in the form of electricity, which will have to travel through a
wire and originate from a battery, a basic electrical circuit. Even petrol-powered machines that get their power mainly from petrol still require an electric current to start the combustion process, which is why most petrol-powered machines, like cars, have batteries. The electrical aspect of robots is used for movement (through motors), sensing (where electrical signals are used to measure things like heat, sound, position, and energy status), and operation (robots need some level of electrical energy supplied to their motors and sensors in order to activate and perform basic operations).


3. All robots contain some level of computer programming code. A program is how a robot
decides when or how to do something. In the caterpillar track example, a robot that needs
to move across a muddy road may have the correct mechanical construction and receive
the correct amount of power from its battery, but would not go anywhere without a
program telling it to move. Programs are the core essence of a robot: it could have excellent mechanical and electrical construction, but if its program is poorly constructed, its performance will be very poor (or it may not perform at all). There are three different
types of robotic programs: remote control, artificial intelligence, and hybrid. A robot
with remote control programming has a preexisting set of commands that it will only
perform if and when it receives a signal from a control source, typically a human being
with remote control. It is perhaps more appropriate to view devices controlled primarily
by human commands as falling in the discipline of automation rather than robotics.
Robots that use artificial intelligence interact with their environment on their own
without a control source, and can determine reactions to objects and problems they
encounter using their preexisting programming. A hybrid is a form of programming that incorporates both AI and RC functions.
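To make the programming discussion concrete, here is a minimal, hypothetical sense-decide-act loop of the kind a simple autonomous robot program might run. The sensor function, threshold, and commands are stand-ins invented for this sketch, not a real robot API.

import random

def read_distance_sensor():
    # Stand-in for real hardware: a simulated obstacle distance in meters.
    return random.uniform(0.0, 2.0)

def decide(distance):
    # Simple reactive rule: turn away if an obstacle is close, else go forward.
    return "turn" if distance < 0.5 else "forward"

def act(command):
    # A real robot would drive its motors here; we just log the command.
    print("motor command:", command)

for _ in range(5):  # each pass is one control cycle
    act(decide(read_distance_sensor()))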

Power source

At present, mostly lead–acid batteries are used as a power source. Many different types of batteries can be used as a power source for robots. They range from lead–acid batteries, which are safe and have relatively long shelf lives but are rather heavy, to silver–cadmium batteries, which are much smaller in volume but currently much more expensive. Designing a
battery-powered robot needs to take into account factors such as safety, cycle lifetime,
and weight. Generators, often some type of internal combustion engine, can also be used.
However, such designs are often mechanically complex and need fuel, require heat dissipation,
and are relatively heavy. A tether connecting the robot to a power supply would remove the
power supply from the robot entirely. This has the advantage of saving weight and space by
moving all power generation and storage components elsewhere. However, this design does
come with the drawback of constantly having a cable connected to the robot, which can be
difficult to manage. Potential power sources could be:

 pneumatic (compressed gases)
 solar power (using the sun's energy and converting it into electrical power)
 hydraulics (liquids)
 flywheel energy storage
 organic garbage (through anaerobic digestion)
 nuclear
Actuation

Actuators are the "muscles" of a robot, the parts which convert stored energy into movement. By
far the most popular actuators are electric motors that rotate a wheel or gear, and linear actuators
that control industrial robots in factories. There are some recent advances in alternative types of
actuators, powered by electricity, chemicals, or compressed air.
Electric motors
The vast majority of robots use electric motors, often brushed and brushless DC motors in
portable robots or AC motors in industrial robots and CNC machines. These motors are often
preferred in systems with lighter loads, and where the predominant form of motion is rotational.
Linear actuators
Various types of linear actuators move in and out instead of spinning, and often have quicker direction changes, particularly when very large forces are needed, such as with industrial robotics. They are typically powered by compressed and oxidized air (pneumatic actuators) or oil (hydraulic actuators). Linear actuators can also be powered by electricity, in which case they usually consist of a motor and a leadscrew. Another common type is a mechanical linear actuator that is turned by hand, such as a rack and pinion on a car.
Series elastic actuators
Series elastic actuation (SEA) relies on the idea of introducing intentional elasticity between the
motor actuator and the load for robust force control. Due to the resultant lower reflected inertia,
series elastic actuation improves safety when a robot interacts with the environment (e.g.,
humans or workpieces) or during collisions. Furthermore, it also provides energy efficiency and
shock absorption (mechanical filtering) while reducing excessive wear on the transmission and
other mechanical components. This approach has successfully been employed in various robots,
particularly advanced manufacturing robots and walking humanoid robots.
The controller design of a series elastic actuator is most often performed within
the passivity framework as it ensures the safety of interaction with unstructured environments.
Despite its remarkable stability and robustness, this framework suffers from the stringent limitations imposed on the controller, which may trade off performance. The reader is referred to the following survey, which summarizes the common controller architectures for SEA along with the corresponding sufficient passivity conditions. One recent study has derived the necessary and sufficient passivity conditions for one of the most common impedance control architectures, namely velocity-sourced SEA. This work is of particular importance as it derives the non-conservative passivity bounds in an SEA scheme for the first time, which allows a larger selection of control gains.
Air muscles
Pneumatic artificial muscles, also known as air muscles, are special tubes that expand (typically up to 42%) when air is forced inside them. They are used in some robot applications.
Wire muscles
Muscle wire, also known as shape memory alloy, Nitinol® or Flexinol® wire, is a material that contracts (by under 5%) when electricity is applied. It has been used for some small robot applications.
Electroactive polymers
EAPs or EPAMs are plastic materials that can contract substantially (up to 380% activation strain) when electricity is applied, and have been used in the facial muscles and arms of humanoid robots, and to enable new robots to float, fly, swim or walk.
Piezo motors
Recent alternatives to DC motors are piezo motors or ultrasonic motors. These work on a
fundamentally different principle, whereby tiny piezoceramic elements, vibrating many
thousands of times per second, cause linear or rotary motion. There are different mechanisms of
operation; one type uses the vibration of the piezo elements to step the motor in a circle or a
straight line. Another type uses the piezo elements to cause a nut to vibrate or to drive a screw.
The advantages of these motors are nanometer resolution, speed, and available force for their size. These motors are already available commercially and are being used on some robots.
Elastic nanotubes
Elastic nanotubes are a promising artificial muscle technology in early-stage experimental
development. The absence of defects in carbon nanotubes enables these filaments to deform
elastically by several percent, with energy storage levels of perhaps 10 J/cm3 for metal
nanotubes. Human biceps could be replaced with an 8 mm diameter wire of this material. Such
compact "muscle" might allow future robots to outrun and outjump humans.
Sensing
Sensors allow robots to receive information about a certain measurement of the environment, or
internal components. This is essential for robots to perform their tasks, and act upon any changes
in the environment to calculate the appropriate response. They are used for various forms of measurements, to give the robots warnings about safety or malfunctions, and to provide real-time information about the tasks they are performing.
Touch
Current robotic and prosthetic hands receive far less tactile information than the human hand.
Recent research has developed a tactile sensor array that mimics the mechanical properties and
touch receptors of human fingertips. The sensor array is constructed as a rigid core surrounded
by conductive fluid contained by an elastomeric skin. Electrodes are mounted on the surface of
the rigid core and are connected to an impedance-measuring device within the core. When the
artificial skin touches an object the fluid path around the electrodes is deformed, producing
impedance changes that map the forces received from the object. The researchers expect that an
important function of such artificial fingertips will be adjusting the robotic grip on held objects.
Scientists from several European countries and Israel developed a prosthetic hand in 2009, called
SmartHand, which functions like a real one—allowing patients to write with it, type on
a keyboard, play piano and perform other fine movements. The prosthesis has sensors which
enable the patient to sense real feeling in its fingertips.
Vision
Computer vision is the science and technology of machines that see. As a scientific discipline,
computer vision is concerned with the theory behind artificial systems that extract information
from images. The image data can take many forms, such as video sequences and views from
cameras.
In most practical computer vision applications, the computers are pre-programmed to solve a
particular task, but methods based on learning are now becoming increasingly common.
Computer vision systems rely on image sensors that detect electromagnetic radiation which is
typically in the form of either visible light or infra-red light. The sensors are designed
using solid-state physics. The process by which light propagates and reflects off surfaces is
explained using optics. Sophisticated image sensors even require quantum mechanics to provide
a complete understanding of the image formation process. Robots can also be equipped with
multiple vision sensors to be better able to compute the sense of depth in the environment. Like
human eyes, robots' "eyes" must also be able to focus on a particular area of interest, and also
adjust to variations in light intensities.
There is a subfield within computer vision where artificial systems are designed to mimic the processing and behavior of biological systems, at different levels of complexity. Also, some of the learning-based methods developed within computer vision have a background in biology.
Other
Other common forms of sensing in robotics use lidar, radar, and sonar. Lidar measures distance
to a target by illuminating the target with laser light and measuring the reflected light with a
sensor. Radar uses radio waves to determine the range, angle, or velocity of objects. Sonar uses
sound propagation to navigate, communicate with or detect objects on or under the surface of the
water.
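Lidar and sonar both estimate range from the round-trip ("time of flight") of a reflected pulse: distance = wave speed x round-trip time / 2. A minimal sketch with invented echo times:

# Range from a round-trip echo measurement, as in lidar and sonar.
SPEED_OF_LIGHT = 299_792_458.0    # m/s, for a lidar pulse
SPEED_OF_SOUND_WATER = 1_500.0    # m/s, rough figure for sonar underwater

def range_from_echo(round_trip_seconds, wave_speed):
    # The pulse travels out and back, so halve the total path length.
    return wave_speed * round_trip_seconds / 2

print(range_from_echo(6.7e-7, SPEED_OF_LIGHT))     # ~100 m lidar target
print(range_from_echo(0.4, SPEED_OF_SOUND_WATER))  # 300 m sonar target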
Environmental interaction and navigation
Though a significant percentage of robots in commission today are either human controlled or
operate in a static environment, there is an increasing interest in robots that can operate
autonomously in a dynamic environment. These robots require some combination of navigation
hardware and software in order to traverse their environment. In particular, unforeseen events
(e.g. people and other obstacles that are not stationary) can cause problems or collisions. Some
highly advanced robots such as ASIMO and Meinü robot have particularly good robot navigation
hardware and software. Also, self-controlled cars, Ernst Dickmanns' driverless car, and the entries in the DARPA Grand Challenge are capable of sensing the environment well and subsequently making navigational decisions based on this information; this extends even to swarms of autonomous robots. Most of these robots employ a GPS navigation device with waypoints, along
with radar, sometimes combined with other sensory data such as lidar, video cameras,
and inertial guidance systems for better navigation between waypoints.
Human-robot interaction

Kismet can produce a range of facial expressions.


The state of the art in sensory intelligence for robots will have to progress through several orders
of magnitude if we want the robots working in our homes to go beyond vacuum-cleaning the
floors. If robots are to work effectively in homes and other non-industrial environments, the way
they are instructed to perform their jobs, and especially how they will be told to stop will be of
critical importance. The people who interact with them may have little or no training in robotics,
and so any interface will need to be extremely intuitive. Science fiction authors also typically
assume that robots will eventually be capable of communicating with humans
through speech, gestures, and facial expressions, rather than a command-line interface. Although
speech would be the most natural way for the human to communicate, it is unnatural for the
robot. It will probably be a long time before robots interact as naturally as the fictional C-3PO or Data of Star Trek: The Next Generation. Even though the current state of robotics cannot meet the
standards of these robots from science-fiction, robotic media characters (e.g., Wall-E, R2-D2)
can elicit audience sympathies that increase people's willingness to accept actual robots in the
future. Acceptance of social robots is also likely to increase if people can meet a social robot
under appropriate conditions. Studies have shown that interacting with a robot by looking at,
touching, or even imagining interacting with the robot can reduce negative feelings that some
people have about robots before interacting with them. However, if pre-existing negative
sentiments are especially strong, interacting with a robot can increase those negative feelings
towards robots.
Summary
Artificial intelligence (AI) is intelligence—perceiving, synthesizing, and inferring information—demonstrated by machines, as opposed to intelligence displayed by animals, including humans. Example tasks include speech recognition, computer vision, translation between natural languages, and other mappings of inputs to outputs.
AI applications include search engines, recommendation systems, and recognition systems such as speech and face recognition. Artificial intelligence was founded as a field in the 1950s; at first people were very optimistic about it, but early hopes ended in failure. After repeated attempts, the field began to have success and kept improving. AI was modeled heavily on the human nervous system when it was created, and still is today. Mathematics, statistics and algebra helped greatly in creating the artificial intelligence of today, by solving many academic and industrial problems.
The field was founded on the assumption that human intelligence "can be so precisely described
that a machine can be made to simulate it". This raised philosophical arguments about the mind
and the ethical consequences of creating artificial beings endowed with human-like intelligence;
these issues have previously been explored by myth, fiction and philosophy since
antiquity. Computer scientists and philosophers have since suggested that AI may become
an existential risk to humanity if its rational capacities are not steered towards beneficial goals.
The study of mechanical or "formal" reasoning began with philosophers and mathematicians in
antiquity. The study of mathematical logic led directly to Alan Turing's theory of computation,
which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate
any conceivable act of mathematical deduction.
AI can solve many problems by intelligently searching through many possible
solutions. Reasoning can be reduced to performing a search. For example, logical proof can be
viewed as searching for a path that leads from premises to conclusions, where each step is the
application of an inference rule. Planning algorithms search through trees of goals and subgoals,
attempting to find a path to a target goal, a process called means-ends
analysis. Robotics algorithms for moving limbs and grasping objects use local
searches in configuration space.
Logic is used for knowledge representation and problem-solving, but it can be applied to other
problems as well. For example, the satplan algorithm uses logic for planning and inductive logic
programming is a method for learning.
AI is important because it can do some tasks better and faster than any human, delivering more output in less time. It can also complete many different tasks with few errors. This has helped fuel an explosion in efficiency and opened the door to
entirely new business opportunities for some larger enterprises.
 Weak AI, also known as narrow AI, is an AI system that is designed and trained to complete
a specific task. Industrial robots and virtual personal assistants, such as Apple's Siri, use
weak AI.
 Strong AI, also known as artificial general intelligence (AGI), describes programming
that can replicate the cognitive abilities of the human brain. When presented with an
unfamiliar task, a strong AI system can use fuzzy logic to apply knowledge from one
domain to another and find a solution autonomously.
There are four types of artificial intelligence: reactive machines, limited memory, theory of mind, and self-awareness.
AI is being used in many areas such as healthcare, business, education, transportation, manufacturing, security, etc. Some types of surgery are very dangerous, or even impossible, when done by a surgeon's hand alone; thanks to improvements in artificial intelligence, such surgeries can now be performed safely. Hospitals use databases in which the information about each patient is recorded. Machine learning algorithms can help businesses with statistics and with creating a better environment for their customers. AI can automatically grade students' homework and exams. There are now machines that can help build great things and do jobs that no human can physically do.
Robotics is an interdisciplinary branch of computer science and engineering. Robotics involves
the design, construction, operation, and use of robots. The goal of robotics is to design machines
that can help and assist humans. Robotics integrates fields of mechanical engineering, electrical
engineering, information engineering, mechatronics, electronics, bioengineering, computer
engineering, control engineering, software engineering, mathematics, etc.
Robots all have some kind of mechanical construction, a frame, form or shape designed to
achieve a particular task. Robots have electrical components that power and control the
machinery. All robots contain some level of computer programming code. A program is how a
robot decides when or how to do something. At present, mostly lead–acid batteries are used as a power source. Many different types of batteries can be used as a power source for robots. They range from lead–acid batteries, which are safe and have relatively long shelf lives but rather heavy, to silver–cadmium batteries. Potential power sources could be:

 pneumatic (compressed gases)
 solar power (using the sun's energy and converting it into electrical power)
 hydraulics (liquids)
 flywheel energy storage
 organic garbage (through anaerobic digestion)
 nuclear

Sensors allow robots to receive information about a certain measurement of the environment, or
internal components. This is essential for robots to perform their tasks, and act upon any changes
in the environment to calculate the appropriate response. They are used for various forms of measurements, to give the robots warnings about safety or malfunctions, and to provide real-time information about the tasks they are performing.
The state of the art in sensory intelligence for robots will have to progress through several orders
of magnitude if we want the robots working in our homes to go beyond vacuum-cleaning the
floors. If robots are to work effectively in homes and other non-industrial environments, the way
they are instructed to perform their jobs, and especially how they will be told to stop will be of
critical importance.
Dictionary
 Artificial intelligence (Inteligjenca artificiale) – a computer, robot, or other programmed mechanical device having this humanlike capacity.
 Speech recognition (njohja e te folures) – computer software that allows a computer to understand spoken words.
 Computer vision (vizioni i kompjuterit) – a field of artificial intelligence (AI) enabling computers to derive information from images, videos and other inputs.
 Applications (aplikacione) – computer programs that are designed for a particular purpose.
 Web search engine (makine e kerkimit ne internet) – a software program that helps people find the information they are looking for online using keywords or phrases.
 Recommendation system (sistem sugjerimi) – a subclass of information filtering system that provides suggestions for the items most pertinent to a particular user.
 Self-driving cars (makina qe ecin vete) – cars in which human drivers are never required to take control to safely operate the vehicle.
 Database (database) – a large amount of information stored in a computer system in such a way that it can be easily looked at or changed.
 Language processing (procesimi i gjuhes) – a function that appears to be sensitive to different sorts of information, some linguistic, some not, and that helps the computer understand human language and vice versa.
 Mathematical deduction (perfundime matematikore) – drawing a conclusion from something known or assumed.
 Cybernetics (kibernetika) – the scientific study of how information is communicated in machines and electronic devices, comparing this with how information is communicated in the brain and nervous system.
 Theory of computation (teoria e llogaritjes) – in theoretical computer science and mathematics, the branch that deals with which problems can be solved on a model of computation using an algorithm, and how efficiently or to what degree they can be solved.
 Robotics (robotika) – the science of making and using robots (machines controlled by computers that are used to perform jobs automatically).
 Pattern recognition (njohja e modeleve) – the automated recognition of patterns and regularities in data.
 Neural network (rrjet nervor) – a computer system modelled on the human brain and nervous system.
 Fuzzy systems (sistem fuzzy) – structures based on fuzzy techniques oriented towards information processing, used where classical set theory and binary logic are impossible or difficult to apply.
 Grey system (sistemi grei) – a system in which part of the information is known and part of the information is unknown.
 Accuracy benchmark (standardi i saktesise) – a standard or point of reference that people can use to measure something else accurately.
 Inference rule (rregulli i konkluzionit) – in the philosophy of logic, a logical form consisting of a function which takes premises, analyzes their syntax, and returns a conclusion.
 Algorithm (algoritem) – a set of mathematical instructions or rules that, especially if given to a computer, will help to calculate an answer to a problem.
 Configuration space (hapesire konfigurimi) – the parameters that define the configuration of a system are called generalized coordinates, and the space defined by these coordinates is called the configuration space of the physical system.
 Satplan algorithm (algoritmi satplan) – a method for automated planning that converts the planning problem instance into an instance of the Boolean satisfiability problem, which is then solved using a method for establishing satisfiability such as the DPLL algorithm or WalkSAT.
 Bayesian network – a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph.
 Kernel methods (metoda kernel) – in machine learning, a class of algorithms for pattern analysis, whose best-known member is the support-vector machine (SVM).
 Dynamic decision network – a network that defines which random variables are dependent on each other.
 Fuzzy logic (logjika fuzzy) – a form of many-valued logic in which the truth value of variables may be any real number between 0 and 1.
 Domain (domain) – a distinct subset of the internet with addresses sharing a common suffix or under the control of a particular organization or individual.
