Advances in Intelligent Systems and Computing 1228

Kohei Arai
Supriya Kapoor
Rahul Bhatia
Editors

Intelligent Computing
Proceedings of the 2020 Computing Conference, Volume 1

Advances in Intelligent Systems and Computing
Volume 1228
Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences,
Warsaw, Poland
Advisory Editors
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
Rafael Bello Perez, Faculty of Mathematics, Physics and Computing,
Universidad Central de Las Villas, Santa Clara, Cuba
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
Hani Hagras, School of Computer Science and Electronic Engineering,
University of Essex, Colchester, UK
László T. Kóczy, Department of Automation, Széchenyi István University,
Gyor, Hungary
Vladik Kreinovich, Department of Computer Science, University of Texas
at El Paso, El Paso, TX, USA
Chin-Teng Lin, Department of Electrical Engineering, National Chiao
Tung University, Hsinchu, Taiwan
Jie Lu, Faculty of Engineering and Information Technology,
University of Technology Sydney, Sydney, NSW, Australia
Patricia Melin, Graduate Program of Computer Science, Tijuana Institute
of Technology, Tijuana, Mexico
Nadia Nedjah, Department of Electronics Engineering, University of Rio de Janeiro,
Rio de Janeiro, Brazil
Ngoc Thanh Nguyen, Faculty of Computer Science and Management,
Wrocław University of Technology, Wrocław, Poland
Jun Wang, Department of Mechanical and Automation Engineering,
The Chinese University of Hong Kong, Shatin, Hong Kong
The series “Advances in Intelligent Systems and Computing” contains publications on theory, applications, and design methods of Intelligent Systems and Intelligent Computing. Virtually all disciplines such as engineering, natural sciences, computer and information science, ICT, economics, business, e-commerce, environment, healthcare, life science are covered. The list of topics spans all the areas of modern intelligent systems and computing such as: computational intelligence, soft computing including neural networks, fuzzy systems, evolutionary computing and the fusion of these paradigms, social intelligence, ambient intelligence, computational neuroscience, artificial life, virtual worlds and society, cognitive science and systems, perception and vision, DNA and immune based systems, self-organizing and adaptive systems, e-Learning and teaching, human-centered and human-centric computing, recommender systems, intelligent control, robotics and mechatronics including human-machine teaming, knowledge-based paradigms, learning paradigms, machine ethics, intelligent data analysis, knowledge management, intelligent agents, intelligent decision making and support, intelligent network security, trust management, interactive entertainment, Web intelligence and multimedia.
The publications within “Advances in Intelligent Systems and Computing” are primarily proceedings of important conferences, symposia and congresses. They cover significant recent developments in the field, both of a foundational and applicable character. An important characteristic feature of the series is the short publication time and world-wide distribution. This permits a rapid and broad dissemination of research results.

** Indexing: The books of this series are submitted to ISI Proceedings, EI-Compendex, DBLP, SCOPUS, Google Scholar and SpringerLink **
Editors

Kohei Arai
Saga University
Saga, Japan

Supriya Kapoor
The Science and Information (SAI) Organization
Bradford, West Yorkshire, UK

Rahul Bhatia
The Science and Information (SAI) Organization
Bradford, West Yorkshire, UK

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.
Editor’s Preface
We hope that all the participants and the interested readers benefit scientifically
from this book and find it stimulating in the process. We are pleased to present the
proceedings of this conference as its published record.
We hope to see you in 2021 at our next Computing Conference, with the same amplitude, focus, and determination.
Kohei Arai
Demonstrating Advanced Machine Learning and Neuromorphic Computing

M. Barnell et al.
Received and approved for public release by the Air Force Research Laboratory (AFRL) on 11 June 2019, case number 88ABW-2019-2928. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors, and do not necessarily reflect the views of AFRL or its contractors. This work was partially funded under AFRL's Neuromorphic Compute Architectures and Processing contract that started in September 2018 and continues until June 2020.
1 Background
Background and insight into the technical applicability of this research are discussed in Sect. 1. Section 2 provides an overview of the hardware. Section 3 provides detail on our technical approach. Section 4 provides a concise summary of our results. Section 5 addresses areas of future research, and conclusions are discussed in Sect. 6.
We are currently in a period of high interest and a rapid pace of research and development in machine learning (ML) and artificial intelligence (AI). In part, this progress is enabled by increased investment by government, industry, and academia.

Currently, ML algorithms, techniques, and methods are improving at an accelerated pace, with methods for recognizing objects and patterns outpacing human performance. The community's interest is supported by the number of applications that can use existing and emerging ML hardware and software technologies. These applications are supported by the availability of large quantities of data, connectivity of information, and new high-performance computing architectures. Such applications are now prevalent in many everyday devices. For example, data from low-cost optical cameras and radars provide automobiles the data needed to assist humans. These driver assistants can identify road signs, pedestrians, and lane lines, while also controlling vehicle speed and direction. Other devices include smart thermostats, autonomous home floor cleaners, robots that deliver towels to hotel guests, systems that track and improve athletic performance, and devices that help medical professionals diagnose disease. These applications, and the availability of data collected on increasingly smaller devices, are driving the need for and interest in low-power neuromorphic chip architectures.
The wide applicability of information processing technologies has increased competition and interest in computing hardware and software that can operate within the memory, power, and cost constraints of the real world. This includes continued research into computing systems that are structured like the human brain, a line of research spanning several decades and pioneered in part by Carver Mead [1]. Current examples of the more advanced neuromorphic chips include SpiNNaker, Loihi, BrainScaleS-1, NeuroGrid/Braindrop, DYNAP, ODIN, and TrueNorth [2]. These systems improve upon traditional computing architectures, such as the von Neumann architecture, in which physical memory and logic are separated. In neuromorphic systems, the colocalization of memory and computation, as well as reduced-precision computing, increases energy efficiency and yields a product that uses much less power than traditional compute architectures.
IBM’s TrueNorth Neurosynaptic System represents an implementation of these
newly available specialized neuromorphic computing architectures [3]. The TrueNorth
NS1e, an evaluation board with a single TrueNorth chip, has the following technical
specifications: 1 million individually programmable neurons, 256 million individually
programmable synapses, and 4,096 parallel and distributed cores. Additionally, this chip uses approximately 200 mW of total power, resulting in a 20 mW/cm² power density [4–6].
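As a rough sanity check on these figures, the per-core breakdown works out to 256 neurons and a 256 × 256 synapse crossbar per core (a sketch; the exact powers of two are our assumption, but the totals are consistent with the per-board numbers in Table 1 below):

```python
# Per-core breakdown implied by the published NS1e figures
# (nominal "1 million neurons / 256 million synapses").
cores = 4096
neurons_per_core = 256                 # one neurosynaptic core drives 256 neurons
synapses_per_core = 256 * 256          # a 256 x 256 crossbar per core

print(cores * neurons_per_core)        # 1,048,576  (~1 million neurons per chip)
print(cores * synapses_per_core)       # 268,435,456 (256 * 2**20, quoted as 256 million)
```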
The latest iteration of the TrueNorth Neurosynaptic System includes the NS16e, a single board containing a tiled set of 16 TrueNorth processors assembled in a four-by-four grid. This state-of-the-art 16-chip computing architecture yields a 16-million-neuron processor, capable of implementing large, multi-processor models or parallelizing smaller models, which can then process 16 times the data.
To demonstrate the processing capabilities of the TrueNorth, we developed multiple
classifiers. These classifiers were trained using optical satellite imagery from the United
States Geological Survey (USGS) [7]. Each image chip in the overall image was labeled
by identifying the existence or non-existence of a vehicle in the chip. The chips were
not centered and could include only a segment of a vehicle [8]. Figure 1 shows the raw imagery on the left and the processed imagery on the right. In this analysis, a single TrueNorth chip was able to process one thousand 32 × 32 pixel chips per second.
Fig. 1. Electro-optical (EO) image processing using a two-class, 14-layer neural network to detect car/no car in a scene using IBM's neuromorphic compute architecture, TrueNorth (using one chip)
We extended previous work through the use of new networks and new placements of those networks on the TrueNorth chip. Additionally, results were captured and analyses were completed to assess the performance of these new network models. The overall accuracy of the best model was 97.6%. Additional performance measures are provided at the bottom of Fig. 1.
2 Hardware Overview
Fig. 2. AFRL's BlueRaven system: rack mounted; four NS16e boards with an aggregate of 64 million neurons and 16 billion synapses; enables parallelization research and design; used to process 5000 × 5000 pixels of information every 3 seconds
AFRL's Blue Raven system (Fig. 2) is used to train and test networks before they are deployed to the neuromorphic hardware, as well as for data pre-processing. Table 1 details Blue Raven's specifications.
Table 1. Blue Raven specifications

Specification            Description
Form Factor              2U Server + 2U NS16e Sled
NS16e                    4× IBM NS16e PCIe Cards
Neurosynaptic Cores      262,144
Programmable Neurons     67,108,864
Programmable Synapses    17,179,869,184
PCIe NS16e Interface     4× PCIe Gen 2
Ethernet – Server        1× 1 Gbit
Ethernet – NS16e         1× 1 Gbit per NS16e
Training GPUs            2× NVIDIA Tesla P100
Volatile Memory          256 GB
CPUs                     2× 10-Core E5-2630
3 Approach
The NS16e processing approach includes the use of deep convolutional Spiking Neural
Networks (SNN) to perform classification inferencing of the input imagery. The deep
networks were designed and trained using IBM’s Energy-efficient Deep Neuromorphic
Networks (EEDN) framework [4].
The classifiers were purposely designed so that their neurosynaptic resource utilization stayed within the constraints of the TrueNorth architecture; specifically, they stayed within a single TrueNorth chip's limits of 1 million neurons and 256 million synapses.
The benefit of this technical approach is that it immediately allowed us to populate
an NS16e board with up to sixteen parallel image classifier networks, eight to process
optical imagery and eight to process radar imagery. Specifically, the processing chain
is composed of a collection of 8 duplicates of the same EEDN network trained on a
classification task for each chosen dataset.
This overhead optical imagery includes all 3 color channels (red, green and blue).
The scene analyzed included 5000 × 5000 pixels at 1-foot resolution. From this larger
scene, image chips were extracted. Each image chip from the scene was 32 × 32 pixels.
There was no overlap between samples, thereby sampling the input scene with a receptive
field of 32 × 32 pixels and a stride of 32 pixels. This resulted in 24,336 (156 × 156) sub-regions.
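A minimal sketch of this chipping step (Python/NumPy; the function name and zero-filled stand-in scene are ours, not the paper's) is shown below; it also cross-checks the throughput figures quoted earlier:

```python
import numpy as np

def chip_scene(scene, chip=32):
    """Split an H x W x 3 scene into non-overlapping chip x chip tiles
    (receptive field = stride = 32 pixels, as described above)."""
    h, w = scene.shape[:2]
    rows, cols = h // chip, w // chip                 # 5000 // 32 = 156
    tiles = (scene[:rows * chip, :cols * chip]
             .reshape(rows, chip, cols, chip, -1)
             .swapaxes(1, 2)
             .reshape(rows * cols, chip, chip, -1))
    return tiles

scene = np.zeros((5000, 5000, 3), dtype=np.uint8)     # stand-in for the USGS scene
chips = chip_scene(scene)
print(len(chips))                                     # 24336 = 156 * 156

# Throughput cross-check: one TrueNorth chip classifies ~1,000 chips/s,
# so eight parallel copies cover the scene in roughly the ~3 s quoted in Fig. 2.
print(len(chips) / (8 * 1000))                        # ~3.04 s
```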
The USGS EO data was used to successfully build TrueNorth-based classifiers that
contained up to six object and terrain classes (e.g., vehicle, asphalt, structure, water,
foliage, and grass). For this multi-processor neurosynaptic hardware demonstration, a
subset of the classes was utilized to construct a binary classifier, which detected the
presence or absence of a vehicle within the image chip.
The data set was divided into training and test/validation sets. The training set contained 60% of the chips (14,602 image chips). The remaining 40% of the chips (9,734) were used for test/validation. The multi-processor demonstration construct and corresponding imagery are shown in Fig. 3.
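The 60/40 split can be reproduced as follows (a sketch; the seed and shuffling procedure are our assumptions, only the 60/40 ratio and chip count come from the text):

```python
import numpy as np

n_chips = 24336                              # total 32 x 32 chips from the scene
rng = np.random.default_rng(0)               # fixed seed for reproducibility (assumed)
idx = rng.permutation(n_chips)

n_train = round(0.6 * n_chips)               # 60% for training
train_idx, test_idx = idx[:n_train], idx[n_train:]
print(len(train_idx), len(test_idx))         # 14602 9734
```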
The content of each chip was defined during data curation/labeling. Each label fell into one of two categories: no vehicle or vehicle. Additionally, the chips were not chosen with the targets of interest centered in the image chip. Because of this, many of the image chips contained only portions of a vehicle; e.g., a chip may contain an entire vehicle, fractions of a vehicle, or even fractions of multiple vehicles.
The process of classifying the existence of a vehicle in the image starts with object detection. Recognizing that a chip may contain only a portion of a vehicle, an approach was developed to help ensure detection of the vehicles of interest. This approach created multiple 32 × 32 × 3 image centroids. These centroids were varied in both the X and Y dimensions to increase the probability of capturing more of the target in the image being analyzed.
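One way to realize this centroid jittering (a sketch; the ±8 pixel offsets and function are our illustration, not the authors' implementation):

```python
import numpy as np

def jittered_chips(scene, cx, cy, chip=32, offsets=(-8, 0, 8)):
    """Extract several 32 x 32 crops around a nominal centroid (cx, cy),
    shifted in X and Y so that a partially visible vehicle is more likely
    to be fully captured in at least one crop."""
    half = chip // 2
    crops = []
    for dy in offsets:
        for dx in offsets:
            # clamp so the crop stays inside the scene
            x = min(max(cx + dx, half), scene.shape[1] - half)
            y = min(max(cy + dy, half), scene.shape[0] - half)
            crops.append(scene[y - half:y + half, x - half:x + half])
    return crops

scene = np.zeros((5000, 5000, 3), dtype=np.uint8)    # stand-in for the USGS scene
print(len(jittered_chips(scene, cx=1000, cy=2000)))  # 9 jittered 32 x 32 x 3 crops
```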
A block diagram showing the processing flow from USGS imagery to USGS imagery with predicted labels is shown in Fig. 4. This includes the NS16e implementation with 8 parallel classifier networks, one per TrueNorth chip on half of the board.
Fig. 4. Processing flow: USGS imagery is chipped and fanned out to eight parallel classifier models, one per TrueNorth chip on the NS16e, producing predicted labels
The copies of the EO network were placed in two full columns of the NS16e, or
eight TrueNorth processors in a 4 × 2 configuration, with one network copy on each
processor. As a note, the remainder of the board was leveraged to study processing with
additional radar imagery data.
The NS16e card's power usage during inferencing is shown in Table 2. The total power utilization of the board was less than 14 W. The runtime analyses included the measurement of peripheral circuits and input/output (I/O) on the board.
Table 2. NS16e board power during inference

Board                                 Voltage (V, nominal)   Current (A, measured)   Power (W, computed)
Interposer (including MMP)            +12                    0.528                   6.336
16-chip board (including TN chips)    +12                    0.622                   7.462
Total                                                        1.150                   13.798
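The computed-power column is simply the nominal voltage times the measured current; a quick check (values copied from Table 2; small differences from the reported numbers come from rounding of the measured currents):

```python
# Power check for Table 2: computed power = nominal voltage x measured current.
rails = {
    "Interposer (including MMP)": (12.0, 0.528),
    "16-chip board (including TN chips)": (12.0, 0.622),
}
total_current = sum(i for _, i in rails.values())
total_power = sum(v * i for v, i in rails.values())
print(total_current)   # 1.150 A
print(total_power)     # ~13.8 W (13.798 W reported)
```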
Table 3 details the power utilization of the TrueNorth chips without the board's peripheral power use. The contribution from the TrueNorth chips accounted for approximately 5 W of the total 15 W.
Table 4 provides detail on the power utilization without loading on the system (idle).
4 Results
In Fig. 5, we see an example of predictions (yellow boxes) overlaid with ground truth (green tiles). Over the entirety of our full-scene image, we report a classification accuracy of 84.29%, or 3,165 of 3,755 vehicles found. Our misclassification rate, meaning the number of false positives or false negatives, is 35.39%. Of that, 15.71% of targets are false negatives, i.e., target misses. This can be tuned by changing the chipping algorithm used, with a trade-off in the inference speed of a tile.
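The per-vehicle figures follow directly from the counts quoted above (a short check; the 35.39% misclassification rate additionally includes false positives, which are not broken out here):

```python
# Reproducing the headline detection numbers quoted above.
vehicles_total = 3755
vehicles_found = 3165

recall = vehicles_found / vehicles_total
print(f"{recall:.2%}")                    # 84.29% of vehicles found

false_negative_rate = 1 - recall
print(f"{false_negative_rate:.2%}")       # 15.71% target misses
```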
Table 4. Board power with no load on the system (idle)

Board                                 Voltage (V, nominal)   Current (A, measured)   Power (W, computed)
Interposer (including MMP)            +12                    0.518                   6.216
16-chip board (including TN chips)    +12                    0.605                   7.265
Total                                                        1.123                   13.481

Table 3. TrueNorth power only

Component               Voltage (V, measured)   Current (A, measured)   Power (W, computed)
TrueNorth Core VDD      0.978                   4.64                    4.547
TrueNorth I/O Drivers   1.816                   0.03                    0.051
TrueNorth I/O Pads      0.998                   0.00                    0.001
Total                                                                   4.599
5 Future Research
Neuromorphic research and development continue at companies such as Intel and IBM, contributing to the community's interest in these low-power processors. As an example, the SpiNNaker system consists of many ARM cores and is highly flexible, since neurons are implemented in software, albeit at a somewhat higher energy cost (each core consumes ~1 W) [10, 11].
As new SNN architectures continue to be developed, new algorithms and applications continue to surface. This includes technologies such as bio-inspired vision systems [12]. Additionally, Intel's Loihi neuromorphic processor [13] is a new SNN neuromorphic architecture that enables a new set of capabilities on ultra-low-power hardware. Loihi also provides the opportunity for online learning, which makes the chip more flexible, as it supports various learning paradigms, such as supervised, unsupervised, and reinforcement learning, along with on-chip configurability. Additional research into these systems, data exploitation techniques, and methods will continue to enable new low-power and low-cost processing capabilities with consumer interest and applicability.
6 Conclusions
The need for advanced processing algorithms and methods that operate on low-power computing hardware continues to grow at an outstanding pace. This research has enabled the demonstration of advanced image exploitation on the newly developed NS16e neuromorphic hardware, i.e., a board with sixteen neurosynaptic chips. Together, those chips never exceeded 5 W of power utilization, and the neuromorphic board never exceeded 15 W.
References
1. Mead, C.: Neuromorphic electronic systems. Proc. IEEE 78(10), 1629–1636 (1990)
2. Rajendran, B., Sebastian, A., Schmuker, M., Srinivasa, N., Eleftheriou, E.: Low-power neuromorphic hardware for signal processing applications (2019). https://arxiv.org/abs/1901.03690
3. Barnell, M., Raymond, C., Capraro, C., Isereau, D., Cicotta, C., Stokes, N.: High-performance computing (HPC) and machine learning demonstrated in flight using Agile Condor®. In: IEEE High Performance Extreme Computing Conference (HPEC), Waltham, MA (2018)
4. Esser, S.K., Merolla, P., Arthur, J.V., Cassidy, A.S., Appuswamy, R., Andreopoulos, A., et al.: CNNs for energy-efficient neuromorphic computing. In: Proceedings of the National Academy of Sciences, p. 201604850, September 2016. https://doi.org/10.1073/pnas.1604850113
5. Service, R.F.: The brain chip. Science 345(6197), 614–615 (2014)
6. Cassidy, A.S., Merolla, P., Arthur, J.V., Esser, S.K., Jackson, B., Alvarez-Icaza, R., Datta, P., Sawada, J., Wong, T.M., Feldman, V., Amir, A., Rubin, D.B.-D., Akopyan, F., McQuinn, E., Risk, W.P., Modha, D.S.: Cognitive computing building block: a versatile and efficient digital neuron model for neurosynaptic cores. In: The 2013 International Joint Conference on Neural Networks (IJCNN), pp. 1–10, 4–9 August 2013
7. U.S. Geological Survey: Landsat Data Access (2016). http://landsat.usgs.gov/Landsat_Search_and_Download.php
8. Raymond, C., Barnell, M., Capraro, C., Cote, E., Isereau, D.: Utilizing high-performance embedded computing, Agile Condor®, for intelligent processing: an artificial intelligence platform for remotely piloted aircraft. In: 2017 IEEE Intelligent Systems Conference, London, UK (2017)
9. Modha, D.S., Ananthanarayanan, R., Esser, S.K., Ndirango, A., et al.: Cognitive computing. Commun. ACM 54(8), 62–71 (2011)
10. Furber, S.B., Galluppi, F., Temple, S., Plana, L.A.: The SpiNNaker project. Proc. IEEE 102(5), 652–665 (2014)
11. Schuman, C.D., Potok, T.E., Patton, R.M., Birdwell, J.D., Dean, M.E., Rose, G.S., Plank, J.S.: A survey of neuromorphic computing and neural networks in hardware. CoRR abs/1705.06963 (2017)
12. Dong, S., Zhu, L., Xu, D., Tian, Y., Huang, T.: An efficient coding method for spike camera using inter-spike intervals. In: IEEE DCC, March 2019
13. Tang, G., Shah, A., Michmizos, K.P.: Spiking neural network on neuromorphic hardware for energy-efficient unidimensional SLAM. CoRR abs/1903.02504 (2019)
Energy Efficient Resource Utilization: Architecture for Enterprise Network Towards Reliability with SleepAlert

D. Ali et al.
1 Introduction
Efficient utilization of energy is one of the biggest challenges around the globe, and the gap between demand and supply is always on the rise. For high-performance computing, a reliable, scalable, and cost-effective energy solution that satisfies power requirements while minimizing environmental pollution will have a high impact. The biggest challenge in enterprise networks is how to manage power consumption: data centers consume huge amounts of energy in order to ensure the availability of data when it is accessed remotely.

A major problem nowadays is that energy is scarce, which is why renewable energy, i.e., producing energy from wind, water, sunlight, geothermal sources, and bio-energy, is a hot research topic. It is equally important how efficiently this limited energy is used. The proposed solution is useful for small enterprise networks, where saving energy is a big challenge besides reliability. The main contributions of this work are:

1. A simple architecture to cope with the challenge of single point of failure and ensure service as long as the last node is available in the network.
2. A low-cost solution to save a considerable amount of energy while making sure that reliability is not compromised.
2 Literature Review
The traditional approach to designing a computer network is to ensure the accessibility of each machine connected to the network at any cost. Energy consumption is not taken into account when designing such networks [6]. Generally, a network design is highly redundant for fault tolerance, and all network equipment stays powered on even if unused or only lightly used. Many solutions [7–16] have been proposed to use energy efficiently.

The Green Product Initiative [12] introduced the use of energy-efficient devices based on the dynamic voltage and frequency scaling (DVFS) technique, but this solution is not appropriate for jobs with short deadlines, because DVFS-enabled devices take more time than the normal execution time of a job. Hardware-based solutions [12, 17] require particular hardware, such as GumStix [17]; when the host sleeps, these low-powered devices become active. Sometimes a particular hardware device is required for each machine, besides operating system alterations on both the application and the host. Apple has come up with a sleep proxy that is compatible only with Apple-designed hardware and is appropriate for home networks only [15].
The sleep proxy architecture [13] is a common software-based solution to reduce energy consumption. The major flaw of this technique is that if the sleep proxy server goes down, there is no other way to access any machine. Using a designated machine as the sleep proxy server is also not a good approach, as the danger of a single point of failure always lurks. A different approach is the SOCKS-based approach, which includes awareness of the power state of a machine, so that a Network Connectivity Proxy (NCP) could enable substantial energy savings by letting idle machines enter a low-power sleep state while still ensuring their presence in the network. This is just a design and prototype, not yet implemented. Furthermore, there is always a significant difference between simulation-based tests of a sleep proxy architecture and real-time testing in enterprise networks [14]. Software-based solutions to energy-efficient computing are normally based on a sleep approach; for example, Wake-on-LAN (WOL) [18] is a standard method used to make a sleeping machine in a network available when required by sending it a magic packet.
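For reference, a minimal sketch of constructing and sending a standard WOL magic packet (Python; the MAC address and UDP port shown are illustrative):

```python
import socket

def send_magic_packet(mac, broadcast="255.255.255.255", port=9):
    """Wake a sleeping machine with a standard WOL magic packet:
    6 bytes of 0xFF followed by the target MAC address repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

# Example (hypothetical MAC address):
send_magic_packet("00:11:22:33:44:55")
```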
Cloud and fog computing are common trends introduced in the recent decade. Given the rate at which information and communication technology (ICT) and computing systems are growing, it is necessary to take into account the future energy-consumption challenges these technologies create [19]. Cloud computing involves large data sets spread across different locations. Many companies with huge data centers, such as Google, Amazon, Yahoo, and Microsoft, currently follow the cloud architecture to handle such large data sets. The progression toward the cloud architecture results in an increase in data centers, which leads to a huge increase in energy consumption. A survey shows that energy consumption by data centers in the US was between 1.7% and 2.2% of the country's entire power consumption in 2010 [20]. To control the huge energy-consumption issues in cloud servers, the concepts of green cloud computing [21] and Pico servers [22], i.e., power-aware computing, were introduced. Common methods to achieve energy savings in a green cloud computing data center are adaptive link rate, implementing virtual network methodology, putting less-utilized servers to sleep, green/power-aware routing, and server load/network traffic consolidation [23].
Efficient resource allocation is a new direction for coping with the energy challenge. Research shows different methods of resource allocation; a common one is allocation based on priority [24]. This technique requires an intelligent system to predict the behavior of a network based on different machine-learning concepts [25]. When implementing virtual machine based solutions for energy efficiency, priority-allocation and resource-sharing algorithms were designed to allocate resources so as to save maximum energy. The major flaw in this technique is the excessive load placed on the network by constant reallocation of resources [26]. An intelligent approach to minimizing energy consumption is to predict the network workload using different models; the workload is divided into classes, and each class is then evaluated against several available models [27]. Most available software-based solutions are not tested in real-time environments; others are complex and have many shortcomings when tested in real-time environments. Ensuring reliability while saving a sufficient amount of energy is a big challenge for such applications, and single point of failure is one of the major concerns in this research domain.
Many solutions have been proposed, but some are complex to implement while others require a lot of infrastructure. Most companies in developing countries have major budgetary constraints that prevent them from setting up such a large infrastructure. For small enterprise environments, even if they establish such large and complex network solutions, the cost to operate, maintain, and keep these solutions serviceable is too much to afford; the solution can end up costing more than the benefit gained from the energy savings. Such organizations normally need a short, simple, and low-cost energy-saving solution that still offers high reliability and good performance. The solution that benefits these kinds of organizations, especially in underdeveloped countries, is simply "Sleep Alert": it suits those that are afraid to upgrade the whole network, or for which the cost of change is much larger than the annual revenue of the organization, and that need a simple and cost-effective solution.
3 System Architecture
The concept of a sleep proxy is much appreciated, but as time went on researchers identified major problems that cause deadlocks in the network and affect the availability of a sleeping machine to respond to a remote request. Some issues with current sleep proxies that we discuss in this research are as follows:

• To make the environment green, we cannot risk leaving the network dependent on just one proxy.
• With a single proxy, if it goes down for some reason, the states of all sleeping machines are lost and they are unable to return to wake mode when required.
• A proxy is a dedicated machine that has to maintain the state of the sleeping machines; this is an extra overhead, as it consumes extra energy.
• Deciding when to go into sleep mode.
• Some sleep approaches shut the system down and restart it when required; this consumes a lot of energy at start-up and also takes much longer than resuming from a sleep state.
Sleep Status. When there is no activity on a machine for a specific amount of time, i.e., the machine is in the idle state, it will transition into the sleep state and, through the status indicator, send a message about its status (sleep) to RMp. There is no need to keep track of which machine is acting as RMp: an ordinary machine broadcasts its status message in the network, and only RMp saves the status of that particular machine, while the other machines discard the message.
Wake Status. When a machine is in the sleep state and there is network traffic for it (a remote user accessing the machine), RMp sends a WOL message to that machine. If the sleeping machine acknowledges this WOL message, RMp updates the status of that machine (sleep to wake) in its database. If there is no response at all from the machine after three WOL messages have been sent, RMp considers that the machine is no longer in the network.
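A sketch of this RMp-side retry logic (the helper functions and two-second timeout are hypothetical placeholders of ours, not part of the paper's design):

```python
import time

def send_wol(mac: str) -> None:
    """Send a WOL magic packet (see the earlier magic-packet sketch)."""
    ...

def got_ack(ip: str, timeout: float) -> bool:
    """Hypothetical helper: returns True if the woken machine acknowledged."""
    ...

def wake_machine(status_db: dict, mac: str, ip: str, retries: int = 3) -> bool:
    """RMp-side wake-up logic as described above: send up to three WOL
    messages; on acknowledgement mark the machine awake (sleep -> wake),
    otherwise treat it as having left the network."""
    for _ in range(retries):
        send_wol(mac)
        if got_ack(ip, timeout=2.0):
            status_db[ip] = "wake"
            return True
        time.sleep(2.0)
    status_db.pop(ip, None)        # no response after three tries: machine is gone
    return False
```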
Message Packet. Messages that are forwarded and received have the following format. The protocol defined for the identification of a message packet is presented below and shown in Fig. 2. A message contains the following five tokens, separated by [-]:

• Message Type: a code representing the task required from the requested PC.
• Requested IP: the IP of the targeted machine that should respond to the user request.
• Source IP: the IP of the sending machine.
• Destination IP: the IP of the receiving machine.
• Termination: a termination character, which in our case is '$'.
Message Type. Message types are predefined codes that help determine the type of request. The message codes are shown in Table 1.
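A minimal sketch of building and parsing this five-token format (the message-type code '01' and IP addresses are hypothetical examples; the actual codes are those listed in Table 1):

```python
# Build and parse the five-token message described above:
# tokens separated by '-', terminated by '$'.

def build_message(msg_type, requested_ip, source_ip, dest_ip):
    return f"{msg_type}-{requested_ip}-{source_ip}-{dest_ip}-$"

def parse_message(packet):
    msg_type, requested_ip, source_ip, dest_ip, term = packet.split("-")
    assert term == "$", "malformed packet: missing termination character"
    return {"type": msg_type, "requested": requested_ip,
            "source": source_ip, "destination": dest_ip}

pkt = build_message("01", "192.168.1.20", "192.168.1.5", "192.168.1.1")
print(pkt)                  # 01-192.168.1.20-192.168.1.5-192.168.1.1-$
print(parse_message(pkt))
```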