
AI, an engineering approach

Abstract.
In this paper the author applies basic principles of computer science and formulates definitions
of basic notions. Without definitions science is impossible: if notions are not defined, anything
can be claimed and nothing can be doubted or denied. All definitions are provisional. The author
proposes definitions and conditions necessary for achieving artificial general intelligence (AGI).
He maintains that the last hurdle to achieving AGI in robots is that they do not create external
world models (EWM) autonomously, and he proposes the conditions necessary for this.

Current situation.
The common requirements for the creation of AI are known [11], [13], but their realization in
terms of emergence conditions has not been considered.
Human-like intelligence is referred to as strong AI. General intelligence, or strong AI, has not
been achieved and remains a long-term goal of AI research [6], [7]. Lyle N. Long calls this
approach domain-independent [2] (http://arc.aiaa.org/doi/full/10.2514/1.I010191).

In conferences and in hundreds of papers there are elaborate discussions of consciousness and
AI, but several problems remain.
1. Intelligence and its constituents (consciousness, EWM, thinking, learning) are emergent
processes. This means that the only way to achieve intelligence and consciousness in artificial
machines is to define and implement the conditions for their emergence.
2. Often the basic notions and the conditions for the emergence of consciousness are not defined.
Many authors do not define what they are writing about. It follows that anyone can claim
anything, and it is impossible to doubt or deny it.
3. Contemporary robots are preprogrammed machines with fixed behaviors. When confronted
with new and unknown situations, these robots do not behave adequately.
The main issue in AI is the emergence of EWM. The promising systems are those with input
sensors, actuators, and a body distinct from the environment, in which the creation of an
unrestricted number of EWM can be induced, e.g., [4], [25]. The main AI emergence conditions
are listed in [11], but symbolic and sub-symbolic processing is proposed there instead of EWM
creation.
4. A possible way toward EWM creation is described in [25]:
For the robot to learn how to stand and twist its body, for example, it first performs a
series of simulations in order to train a high-level deep-learning network how to perform
the task—something the researchers compare to an “imaginary process.” This provides
overall guidance for the robot, while a second deep-learning network is trained to carry
out the task while responding to the dynamics of the robot’s joints and the complexity of
the real environment.
5. A step in the direction of domain-independent RL is deep learning; promising results have
been achieved with the PR2 robot [4]. But the creation and replication of EWM is still missing
[8]. The creation and replication of EWM is mentioned in [1]: …our body and brain, from the
cellular level upwards, have already built a model of the world that we can apply almost
instantly to a wide array of challenges…
6. The authors in [3] discuss preprogrammed, model-based reinforcement learning (RL); a
minimal code sketch of this idea is given after this list. They write:
With model-free approach, these works could not leverage the knowledge about
underlying system, which is essential and plentiful in software engineering, to enhance
their learning. In this paper, we introduce the advantages of model-based RL. By utilizing
engineering knowledge, system maintains a model of interaction with its environment and
predicts the consequence of its action, to improve and guarantee system performance. We
also discuss the engineering issues and propose a procedure to adopt model-based RL to
build a self-adaptive software and bring policy evolution closer to real-world
applications.
…Model-based reinforcement learning provides a stable performance by maintaining an
explicit model of operating environment to predict consequence of actions before they are
taken. Therefore, encoding engineering knowledge about underlying system into a prior
model strongly benefits the learning progress.
7. The authors in [8] have come close to the creation of EWM with their optimization principle.
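
As referenced in point 6, the following Python sketch is a minimal, illustrative rendering of the
model-based RL idea, not the implementation from [3]; the names LearnedModel, choose_action
and state_value are assumptions introduced here. The agent maintains an explicit, learned model
of transitions and rewards and consults it to predict the consequence of an action before taking it:

    import random
    from collections import defaultdict

    class LearnedModel:
        """Explicit model of the operating environment: observed transition counts and
        running-mean rewards, so consequences can be predicted before acting."""
        def __init__(self):
            self.transitions = defaultdict(lambda: defaultdict(int))  # (state, action) -> {next_state: count}
            self.rewards = defaultdict(float)                          # (state, action) -> mean reward
            self.visits = defaultdict(int)

        def update(self, state, action, next_state, reward):
            self.transitions[(state, action)][next_state] += 1
            self.visits[(state, action)] += 1
            n = self.visits[(state, action)]
            self.rewards[(state, action)] += (reward - self.rewards[(state, action)]) / n

        def predict(self, state, action):
            """Most likely next state and expected reward; next state is None if never tried."""
            seen = self.transitions[(state, action)]
            if not seen:
                return None, 0.0
            return max(seen, key=seen.get), self.rewards[(state, action)]

    def choose_action(model, state, actions, state_value, epsilon=0.1):
        """Pick the action whose predicted consequence looks best, with some exploration."""
        if random.random() < epsilon:
            return random.choice(actions)
        def score(action):
            next_state, reward = model.predict(state, action)
            return reward + state_value.get(next_state, 0.0)
        return max(actions, key=score)

Encoding prior engineering knowledge, as [3] suggests, would amount to initializing the
transition and reward tables before learning starts.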
Definitions of basic notions.
1. Intelligence is an information processing system's (IPS) ability to adapt its behavior to a
changing environment, using pre-programmed information (genetically inherited or obtained
from the environment), and to optimize its behavior by creating and using models of the
environment and predictions about the environment's reactions [17].
2. Artificial intelligence is the simulation of intelligence in machines.
3. EWM are preprogrammed or learned collaboration algorithms between the IPS and the
environment. Activating an EWM enables the IPS to predict external world (EW) events. When
these predictions are correct, we say that the IPS understands the EW (a code sketch of this
definition follows the list).
4. Artificial general intelligence (AGI) is human-like intelligence in which the IPS achieves its
goals by creating an unrestricted number of EWM and predicting EW events.
5. Remembering is the activation of memories.
6. Thinking is the activation of event streams from the past or an imagined future and the
application of the rules of logic and the laws of nature (to the degree they are known to the
system) to the algorithm steps, without executing the corresponding actions. This allows the IPS
to predict EW reactions, to plan, and to choose its own behavior.
7. Learning is the improvement of existing EWM or the creation of new ones by acquiring new
knowledge or modifying existing knowledge (behaviors, skills, values, preferences).
8. Understanding of language is acquired by connecting words and other symbols of language
to the EWM and to one's own sensory experience.
9. Consciousness is the model of self.
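
As a way to make definition 3 concrete, the following Python sketch is a hypothetical rendering;
the names WorldModel, Percept and understands are assumptions, not the author's notation. An
EWM is anything that maps a situation and an intended action to a predicted EW reaction, and
understanding is the case where the prediction matches what is then observed:

    from dataclasses import dataclass
    from typing import Protocol

    @dataclass(frozen=True)
    class Percept:
        """A bundle of sensor readings describing the current EW situation."""
        sensors: tuple

    class WorldModel(Protocol):
        """An EWM in the sense of definition 3: a predictive collaboration algorithm
        between the IPS and its environment."""
        def predict(self, situation: Percept, action: str) -> Percept: ...
        def update(self, situation: Percept, action: str, outcome: Percept) -> None: ...

    def understands(model: WorldModel, situation: Percept, action: str,
                    observed: Percept) -> bool:
        """Definition 3: the IPS understands the EW when its predictions are correct."""
        return model.predict(situation, action) == observed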

Intelligence and EWM emergence conditions.


1. The ability and tendency to memorize EW event strings.
2. A neuron-column, layer-like structure, which provides the ability to filter out the principal,
basic features in all input patterns [5], [9], [24]. This is called generalization, and it is another
issue where the prospects are ‘daunting’ [15].
3. The ability and tendency to generate, evaluate, and improve EWM. For this, random search is
necessary. For complicated EWM, thinking is necessary, that is, the ability to create and use the
rules of logic and the laws of nature (gained from previous experience).
4. A hierarchical reward and punishment system, which stimulates the transition from the multi-
coordinate world of output moves to the four-coordinate world of output actions. After some
experience the system creates values: the basic principles saying what is good and what has to
be avoided.
5. The ability to create a model of self, which receives EW signals and executes the internal
program's decisions. This is called consciousness. All sensory streams are integrated and a map
of the EW is created, in which the receiving and acting subject plays the main role. Lyle N.
Long calls this unity: all sensor modalities melded into one experience [12].
6. Sensor signal processing, actuator control, evaluation of the system's own actions, and
learning by means of random moves and sensory feedback. The initial criteria for better moves
must be pre-programmed (a minimal sketch follows this list).
7. For achieving general AI, the ability to learn, to understand spoken and written language, and
to speak is necessary.
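
As referenced in condition 6, the Python sketch below is a minimal, hypothetical illustration;
random_move, score_move and learn_moves are names introduced here, and simulate stands for
whatever executes a candidate move and reports sensory feedback. The system generates random
moves, evaluates each against a pre-programmed criterion, and keeps the best ones for reuse:

    import random

    N_ACTUATORS = 50      # the text later assumes a robot with roughly 50 actuators
    N_TRIALS = 10_000

    def random_move():
        """One candidate move: a random target position for every actuator."""
        return [random.uniform(-1.0, 1.0) for _ in range(N_ACTUATORS)]

    def score_move(move, simulate):
        """Pre-programmed criterion for 'better' moves: forward progress reported by the
        sensory feedback, heavily penalized if the body falls or collides."""
        progress, fell = simulate(move)
        return progress - (100.0 if fell else 0.0)

    def learn_moves(simulate, keep=10):
        """Random search: try many moves, remember the highest-scoring ones for reuse."""
        scored = [(score_move(m, simulate), m) for m in (random_move() for _ in range(N_TRIALS))]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [m for _, m in scored[:keep]]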

Emergence.
All atoms and molecules of the physical world, all chemical reactions, human-made products,
inventions, and all living beings are systems with emergent properties. If we have good models
of the systems and processes, we can explain and predict the emergent properties from the
properties of the parts and the known physical laws. I will not consider here the claims that
emergence cannot be generated or understood. Human intelligence, consciousness, and thinking
are complex emergent processes, for which we can define and create the conditions of
emergence.
Many emergence conditions in contemporary AI systems are preprogrammed or embedded
structurally, e.g., a large number of connections between neurons, which allows the activation
of similar memories, and a layered neural network structure, which allows generalization:
creating basic features and abstract notions from incoming pictures.
Learning and random search are stimulated by a reward and punishment system. In living
beings these features are inherited genetically; in artificial systems they must be
preprogrammed.
There are no ‘easy’ or ‘hard’ problems of consciousness. Consciousness emerges in complicated
multi-level systems for which we do not have exact models; therefore we cannot predict the
exact properties of consciousness and cannot reduce them to the properties of neurons. There are
many complexity levels between the basic elements (neurons) and the final emergent property:
input sensors, neurons, neuron columns, output actuators, and processes such as memory
reading and writing, generalization, thinking, reward systems, and EWM generation and
development. But we can formulate the emergence conditions, and when they are fulfilled,
consciousness will emerge. The first results have already been obtained [4].

The first teaching-programming of the robot will be like raising a human infant [12], [15].
Randomly generated actuator moves will create thousands of event streams (with the sensory
signals recorded for each action) in the robot's memory.
If, after many trials and actuator moves, the robot stops hitting obstacles, starts grabbing and
moving objects, shows elements of collaboration with the EW (definite reactions to external
visual, audio, or touch signals), or, as in [4], learns to stand upright on its own feet or imitates
learned sounds, this means that the robot has created maps and models of the EW.

How do we teach the robot to adapt to the environment and to optimize its own body moves? In
the way all animals and humans do: connect input sensor signals to the current situation and
processes, remember them, and reuse them the next time a similar situation occurs (a sketch of
this remember-and-reuse step follows).
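
A minimal, hypothetical Python sketch of this remember-and-reuse step; EventMemory, remember
and recall are names introduced here, not the author's. Each experienced situation is stored
together with the action taken and how well it turned out, and in a new situation the nearest
remembered situation with a good outcome supplies the action:

    import math

    class EventMemory:
        """Stores (sensor vector, action, outcome) triples and recalls the action used in
        the most similar past situation."""
        def __init__(self):
            self.episodes = []            # list of (sensors, action, outcome_score)

        def remember(self, sensors, action, outcome_score):
            self.episodes.append((list(sensors), action, outcome_score))

        def recall(self, sensors):
            """Return the action from the nearest 'like situation' with a good outcome."""
            good = [(past, action) for past, action, score in self.episodes if score > 0]
            if not good:
                return None               # nothing useful remembered yet
            nearest = min(good, key=lambda pair: math.dist(pair[0], sensors))
            return nearest[1]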
Strong AI will arrive only when we manage to make our robots create the models of the external
world by themselves. In animals and humans, through everyday use, the EWM are applied partly
or completely unconsciously. This means that the essential features are retained while the
concrete details are discarded. For example, all animals unconsciously know and use the Earth's
gravitational force, know that the objects of the external world have hard surfaces, though some
are soft or liquid and some are hot or cold, and adjust their behavior accordingly.

How to create the EWM?


The first environment all living beings are confronted with after hatching or birth is their own
body. Living beings are genetically prepared to adapt their body to an unknown and changing
environment. In a relatively short time, after a series of random moves, they start genetically
predetermined actions, improve and fine-tune them by learning, and soon afterwards begin to
generate new EWM. Movement is a fundamental characteristic of living systems [8]. The human
body has about 600 muscles; a comparable robot might have about 50 actuators. It is impossible
to solve the equations of motion even for simple moves; it takes “minutes or hours of
computation for seconds of motion” [8]. These ‘human’ methods have to be abandoned, and
natural, emergent optimization must be introduced instead. This means that all moves are to be
optimized via supervised learning and thereafter used automatically.
For AI devices, the first stimuli, proclivities, and steps must be preprogrammed. For a neural
processing system, the necessary body moves and targets are analogous to a four-dimensional
maze (one time and three space coordinates) with a smaller or larger number of steps and often
more than one path to one or more solutions. The first successful actions are stored, and their
structure, the basic algorithm, is copied and used for the creation of the next models (a
maze-search sketch follows).
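
To make the maze analogy concrete, the following Python sketch is hypothetical (a
three-dimensional grid stepped through in time, not the author's formulation): random trials
search for an action sequence that reaches a target, and the first successful sequence is stored as
the basic algorithm to be copied for the next models.

    import random

    MOVES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

    def random_episode(start, target, max_steps=200):
        """One trial: a random walk through the three space coordinates, one move per
        time step. Returns the move sequence if the target is reached, otherwise None."""
        position, sequence = list(start), []
        for _ in range(max_steps):
            move = random.choice(MOVES)
            position = [p + d for p, d in zip(position, move)]
            sequence.append(move)
            if tuple(position) == tuple(target):
                return sequence
        return None

    def find_basic_algorithm(start, target, trials=100_000):
        """Random search over many trials; the first successful action sequence is stored
        and its structure can be copied when building the next models."""
        for _ in range(trials):
            sequence = random_episode(start, target)
            if sequence is not None:
                return sequence
        return None

    stored_algorithm = find_basic_algorithm(start=(0, 0, 0), target=(3, 2, 1))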

Discussion.
Consciousness and EWM are emergent properties, and they emerge only in complex systems.
This is the price we have to pay for AI and for the creation of consciousness. The complexity of
a system can be measured by the number of its parts and emergent properties.
It is not possible to pre-program EWM for all the situations of life. Therefore robots must create
most of their EWM by themselves [11]:

Rule-based systems and cognitive architectures require humans to program the rules,
and this process is not scalable to billions of rules. The machines will need to rely on
hybrid systems, learning, and emergent behavior; and they will need to be carefully
taught and trained by teams of engineers and scientists. Humans will not be capable of
completely specifying and programming the entire system; learning and emergent
behavior will be a stringent requirement for development of the system.
Intelligence and its constituents are gradual features [23], [12], which can be more or less
pronounced, developed, and recognizable. The task of creating consciousness is challenging:
consciousness is an emergent property, and the first conscious robot will be somewhat
surprising; some authors even say: “It will be as astounding and frightening to humans as the
discovery of life on other planets” [28].

Conclusions.
The main principle remains: the emergence conditions, tendencies, and proclivities must be
pre-programmed, but most of the EWM must be created by the system itself. Contemporary
top-down programming must create only the basic conditions and tendencies.
After the IPS exhibits the simplest EWM for mechanical moves, it must be taught like a human
or animal infant [11], [12].
Random search, learning, and emergent behavior are the main processes of intelligent systems.

Sources.
1. Ben Medlock, The body is the missing link for truly intelligent machines,
https://aeon.co/ideas/the-body-is-the-missing-link-for-truly-intelligent-machines

2. Scott D. Hanford, Lyle N. Long, Development of a Mobile Robot System Based on the Soar
Cognitive Architecture, Journal of Aerospace Information Systems, Vol. 11, No. 10, October,
pp. 714-725. http://arc.aiaa.org/doi/abs/10.2514/1.I010191
3. Han Nguyen Ho, Eunseok Lee, Model-based Reinforcement Learning Approach for
Planning in Self-Adaptive Software System, Proceedings of the 9th International
Conference on Ubiquitous Information Management and Communication, Article No. 103,
http://dl.acm.org/citation.cfm?id=2701191
4. Sarah Yang, New ‘deep learning’ technique enables robot mastery of skills via trial
and error http://news.berkeley.edu/2015/05/21/deep-learning-robot-masters-skills-via-
trial-and-error/.
5.
6.
7.
8. Jean-Paul Laumond, Nicolas Mansard, Jean Bernard Lasserre, Optimization as Motion
Selection Principle in Robot Action, Communications of the ACM, ACM, 2015, 58 (5),
pp. 64-74.
https://hal.archives-ouvertes.fr/hal-01376752/file/CACM-optimization-principle2.pdf
9.
10.
11. Lyle N. Long, and Troy D. Kelley, The Requirements and Possibilities of Creating
Conscious Systems, http://www.personal.psu.edu/lnl/papers/aiaa20091949.pdf
12.
13. Lyle N. Long, Troy D. Kelley, and Michael J. Wenger, The Prospects for Creating
Conscious Machines, http://www.personal.psu.edu/lnl/papers/conscious2008.pdf
14.
15. Troy D. Kelley and Lyle N. Long, Deep Blue Cannot Play Checkers: The Need for
Generalized Intelligence for Mobile Robots,
https://www.hindawi.com/journals/jr/2010/523757/

16.
17. Jeff Hawkins: How brain science will change computing.

18.
19.
20.
21. Jeff Hawkins, On Intelligence.
22.
23. Imants Vilks, When Will Consciousness Emerge? Bulletin of Electrical Engineering
and Informatics, Vol. 2, No. 1, March 2013.
24. http://www.hindawi.com/journals/jr/2010/523757/
25. Robot Toddler Learns to Stand by “Imagining” How to Do It.
http://www.technologyreview.com/news/542921/robot-toddler-learns-to-stand-by-
imagining-how-to-do-it/
26. Jeff Hawkins, Subutai Ahmad, Why Neurons Have Thousands of Synapses, A Theory
of Sequence Memory in Neocortex, arXiv:1511.00083v2
27. Yuwei Cui, Chetan Surpur, Subutai Ahmad, and Jeff Hawkins, Continuous Online
Sequence Learning with an Unsupervised Neural Network Model, arXiv:1512.05463v1.
