
Data Acquiring, Organising, Processing and Analytics

Data-acquiring and data-storage functions for IoT/M2M device data and messages
Data Generation
Data generates at the devices, which later on transfers to the Internet through a gateway.

Data generates as follows:


• Passive devices data: Data generates at the device or system, following the result of interactions. A
passive device does not have its own power source. An external source helps such a device to generate
and send data. Examples are an RFID or an ATM debit card. The device may or may not have an
associated microcontroller, memory and transceiver. A contactless card is an example of the former and
a label or barcode is the example of the latter.
• Active devices data: Data generates at the device or system, following the result of interactions. An
active device has its own power source. Examples are an active RFID, streetlight sensor or wireless sensor
node. An active device also has an associated microcontroller, memory and transceiver.
• Event data: A device can generate data on an event only once. For example, detection of traffic or of
dark ambient conditions signals the event. The event of darkness communicates a need for lighting
up a group of streetlights. A system consisting of security cameras can generate data on an event of
a security breach or on detection of an intrusion.
• Device real-time data: An ATM generates data and communicates it to the server instantaneously
through the Internet. This initiates and enables Online Transactions Processing (OLTP) in real time.
• Event-driven device data: Device data can generate on an event only once. Examples are: (i) a device
receives a command from a controller or monitor, and then performs action(s) using an actuator; when the
action completes, the device sends an acknowledgement; (ii) when an application seeks the status
of a device, the device communicates the status.

Data Acquisition

Data acquisition means acquiring data from IoT or M2M devices. The data communicates after the
interactions with a data acquisition system (application). The application interacts and communicates with a
number of devices for acquiring the needed data. The devices send data on demand or at programmed
intervals. Data of devices communicate using the network, transport and security layers.
An application can configure the devices for the data when the devices have configuration capability. For
example, the system can configure devices to send data at defined periodic intervals. Each device configuration
controls the frequency of data generation. For example, the system can configure an umbrella device to acquire
weather data from an Internet weather service once each working day in a week.
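The configuration idea above can be sketched as follows. This is a minimal illustration, not a real device API: the `UmbrellaDevice` class, its fields and the returned data are all hypothetical.

```python
class UmbrellaDevice:
    """Hypothetical configurable IoT device (illustrative only)."""

    def __init__(self, device_id):
        self.device_id = device_id
        self.interval_s = 3600  # default data-generation interval (seconds)

    def configure(self, interval_s):
        # The acquisition application sets the frequency of data generation.
        self.interval_s = interval_s

    def acquire(self):
        # Placeholder for fetching data from an Internet weather service.
        return {"device_id": self.device_id, "weather": "clear"}

device = UmbrellaDevice("umbrella-01")
device.configure(interval_s=24 * 3600)  # once each working day
print(device.interval_s)  # 86400
```

The application, not the device, decides the interval; each configured device then generates data at its own programmed rate.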

Data Validation

Data acquired from the devices does not mean that the data are correct, meaningful or consistent. Data
consistency means the data are within the expected range, follow the expected pattern, and are not corrupted
during transmission. Therefore, data needs validation checks. Data validation software does the validation
checks on the acquired data. Validation software applies logic, rules and semantic annotations. The applications
or services depend on valid data. Only then can the analytics, predictions, prescriptions, diagnoses and
decisions be acceptable.
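A validation layer of the kind described above can be sketched as below. The rules (type check, expected range, presence of a source device) are illustrative examples of validation logic, not a standard rule set.

```python
def validate_reading(reading, expected_range=(-40.0, 85.0)):
    """Apply simple validation rules to one acquired sensor reading.
    Returns True only when the reading is meaningful and consistent."""
    value = reading.get("value")
    if not isinstance(value, (int, float)):
        return False                      # not meaningful data
    low, high = expected_range
    if not (low <= value <= high):
        return False                      # outside the expected range
    if reading.get("device_id") is None:
        return False                      # inconsistent: no source device
    return True

print(validate_reading({"device_id": "t1", "value": 21.5}))   # True
print(validate_reading({"device_id": "t1", "value": 999.0}))  # False
```

Only readings that pass such checks should flow into analytics, predictions and decisions.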

Data Categorisation for Storage

Services, business processes and business intelligence use data. Valid, useful and relevant data can be
categorised into three categories for storage: data alone; data as well as results of processing; only the results
of data analytics.
Following are three cases for storage:
1. Data which needs to be repeatedly processed, referenced or audited in future, and therefore, data alone
needs to be stored.
2. Data which needs processing only once, and the results are used at a later time using the analytics, and
both the data and results of processing and analytics are stored. Advantages of this case are quick
visualisation and report generation without reprocessing. Also, the data is available for reference or
auditing in future.
3. Online, real-time or streaming data need to be processed and only the results of this processing and analysis
need storage.
4. Data from a large number of devices and sources categorises into a fourth category called Big data. Data
is stored in databases at a server or in a data warehouse or on a cloud as Big data.

Assembly Software for the Events

A device can generate events. Each event can be assigned an ID. A logic value sets or resets for an event
state. Logic 1 refers to an event generated but not yet acted upon. Logic 0 refers to an event generated and
acted upon, or not yet generated. A software component in applications can assemble the events (logic value,
event ID and device ID) and can also add a date-time stamp. Events from IoTs and logic-flows assemble
using software.
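The event-assembly component described above can be sketched as follows; the event and device names are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

EVENT_GENERATED = 1   # logic 1: generated but not yet acted upon
EVENT_IDLE = 0        # logic 0: acted upon, or not yet generated

@dataclass
class Event:
    event_id: str
    device_id: str
    logic: int = EVENT_GENERATED
    # The assembly component adds the date-time stamp automatically.
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

events = []

def assemble(event_id, device_id):
    """Assemble (logic value, event ID, device ID) plus a date-time stamp."""
    ev = Event(event_id, device_id)
    events.append(ev)
    return ev

def act_upon(ev):
    ev.logic = EVENT_IDLE  # reset the logic value after the event is handled

ev = assemble("dark-ambient", "streetlight-07")
act_upon(ev)
print(ev.logic)  # 0
```

An application can scan the assembled list for events still at logic 1 to find pending actions.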

Data Store
A data store is a data repository of a set of objects which integrate into the store.
Features of a data store are:
• Objects in a data store are modelled using classes which are defined by the database schemas.
• A data store is a general concept. It includes data repositories such as a database, relational database, flat
file, spreadsheet, mail server, web server, directory services and VMware.
• A data store may be distributed over multiple nodes. Apache Cassandra is an example of a distributed data
store.
• A data store may consist of multiple schemas or may consist of data in only one schema. An example of
a one-schema data store is a relational database.

Data Centre Management

A data centre is meant for data storage, data security and protection. A data centre is a facility which has
multiple banks of computers, servers, large memory systems, high-speed network and Internet connectivity.
The centre provides data security and protection using advanced tools, full data backups along with data
recovery, redundant data communication connections and full system power as well as electricity supply
backups.
Server Management
Server management means managing services, setup and maintenance of systems of all types associated
with the server.
Spatial Storage

Consider goods with RFID tags. When goods move from one place to another, the IDs of the goods as well as
their locations are needed in tracking or inventory control applications. Spatial storage is storage as a spatial
database which is optimised to store and later on receive queries from the applications.
ORGANISING THE DATA

Data can be organised in a number of ways, for example, objects, files, data store, database, relational
database and object oriented database.
Databases
Required data values are organised as database(s) so that select values can be retrieved later.

Database
One popular method of organising data is a database, which is a collection of data. This collection is
organised into tables. A table provides a systematic way for access, management and update.
Relational Database
A relational database is a collection of data in multiple tables which relate to each other through special
fields, called keys (primary key, foreign key and unique key).
Object Oriented Database (OODB) is a collection of objects, which saves the objects in object oriented
design.
Database Management System
A Database Management System (DBMS) is a software system which contains a set of programs specially
designed for creation and management of data stored in a database. Database transactions can be performed
on a database or relational database.
Atomicity, Data Consistency, Data Isolation and Durability (ACID) Rules
The database transactions must maintain atomicity, data consistency, data isolation and durability during
transactions.
Atomicity means a transaction must complete in full, treating it as indivisible.
Consistency means that data after the transactions should remain consistent. For example, the sum of chocolates
sent should equal the sums of sold and unsold chocolates for each flavour after the transactions on the
database.
Isolation means transactions between tables are isolated from each other.
Durability means that after completion of transactions, the previous transaction cannot be recalled. Only a new
transaction can affect any change.
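Atomicity and the chocolate-count consistency rule above can be demonstrated with Python's built-in sqlite3 module; the table and the simulated mid-transaction failure are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chocolates (flavour TEXT, sold INTEGER, unsold INTEGER)")
conn.execute("INSERT INTO chocolates VALUES ('milk', 0, 100)")
conn.commit()

try:
    with conn:  # the with-block is one atomic transaction
        conn.execute("UPDATE chocolates SET sold = sold + 10, unsold = unsold - 10 "
                     "WHERE flavour = 'milk'")
        raise RuntimeError("simulated failure mid-transaction")
except RuntimeError:
    pass  # the whole transaction rolled back: nothing partial persists

sold, unsold = conn.execute(
    "SELECT sold, unsold FROM chocolates WHERE flavour = 'milk'").fetchone()
print(sold, unsold)  # 0 100 -- consistent: sold + unsold still equals 100
```

Because the failing transaction was indivisible, the database never shows a state where chocolates were deducted from `unsold` without being added to `sold`.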

Distributed Database
A Distributed Database (DDB) is a collection of logically interrelated databases over a computer network.
Distributed DBMS means a software system that manages a distributed database. A distributed DB system has
the ability to access remote sites and transmit queries. The features of a distributed database system are:
• DDB is a collection of databases which are logically related to each other.
• Cooperation exists between the databases in a transparent manner. Transparent means that each user
within the system may access all of the data within all of the databases as if they were a single database.
• DDB should be 'location independent', which means the user is unaware of where the data is located,
and it is possible to move the data from one physical location to another without affecting the user.
Consistency, Availability and Partition-Tolerance Theorem
The Consistency, Availability and Partition-Tolerance Theorem (CAP theorem) is a theorem for distributed
computing systems. The theorem states that it is impossible for a distributed computer system to
simultaneously provide all three of the Consistency, Availability and Partition tolerance (CAP) guarantees. This
is due to the fact that a network failure can occur during communication among the distributed computing
nodes. Partitioning of a network therefore needs to be tolerated. Hence, at all times there will be either
consistency or availability.
Consistency means 'Every read receives the most recent write or an error'. When a message or data is
sought, the network generally issues a notification of time-out or read error. During an interval of a network
failure, the notification may not reach the requesting node(s).
Availability means 'Every request receives a response, without guarantee that it contains the most recent
version of the information'. Due to the interval of network failure, it may happen that the most recent version
of the message or data requested may not be available.
Partition tolerance means 'The system continues to operate despite an arbitrary number of messages
being dropped by the network between the nodes'. The system continues to work even if a partition causes
communication interruption between nodes. During the interval of a network failure, the network will have
two separate sets of networked nodes. Since failure can always occur, the partitioning needs to be
tolerated.

Query Processing
Query means an application seeking a specific data set from a database.
Query processing means using a process and getting the results of the query made from a database. The
process should use a correct as well as efficient execution strategy. Five steps in processing are:
1. Parsing and translation: This step translates the query into an internal form, into a relational
algebraic expression, and then a parser checks the syntax and verifies the relations.
2. Decomposition: Complete the query process into micro-operations using the analysis (for the number of
micro-operations required for the operations), conjunctive and disjunctive normalisation and semantic
analysis.
3. Optimisation, which means optimising the cost of processing. The cost means the number of micro-
operations generated in processing.
4. Evaluation plan: A query-execution engine (software) takes a query-evaluation plan and executes that
plan.
5. Returning the results of the query.
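The five steps above can be made concrete with a toy example. The "plan" here is a hypothetical list of micro-operations for a query like `SELECT temp FROM readings WHERE temp > 20`; a real engine's internal representation is far richer.

```python
# In-memory "table" standing in for a database relation
readings = [{"temp": 18}, {"temp": 22}, {"temp": 25}]

# Steps 1-2 (parsing/translation and decomposition): the query becomes a
# relational-algebra expression, project(select(readings, temp > 20), temp),
# decomposed into micro-operations:
query_plan = [
    ("select", lambda row: row["temp"] > 20),   # selection micro-operation
    ("project", lambda row: row["temp"]),       # projection micro-operation
]

# Step 3 (optimisation, trivial here): selection is placed before projection
# so fewer rows reach the projection micro-operation.

# Step 4 (evaluation): the query-execution engine executes the plan.
def evaluate(plan, table):
    _, predicate = plan[0]
    _, projector = plan[1]
    return [projector(row) for row in table if predicate(row)]

# Step 5: returning the results of the query.
print(evaluate(query_plan, readings))  # [22, 25]
```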

Distributed Query Processing

Query processing operations in distributed databases run on the same system or on networked systems.
SQL
SQL stands for Structured Query Language. It is a language for viewing or changing (update, insert,
append or delete) databases. It is a language for data querying, updating, inserting, appending and deleting
the databases. It is a language for data access control, schema creation and modifications. It is also a
language for managing the RDBMS.
SQL features are as follows:
• Create Schema: a structure that contains descriptions of objects created by a user (base tables, views,
constraints). The user can describe and define the data for a database.
• Create Catalog: consists of a set of schemas that constitute the description of the database.
• Use Data Definition Language (DDL) for the commands that depict a database, including creating,
altering and dropping tables and establishing constraints.
• Use Data Manipulation Language (DML) for commands that maintain and query a database. The user
can manipulate (INSERT, UPDATE or SELECT) the data and access data in relational database
management systems.
• Use Data Control Language (DCL) for commands that control a database, including administering
privileges and committing data. The user can set (grant, add or revoke) permissions on tables,
procedures and views.
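The DDL and DML command families above can be tried directly with Python's built-in sqlite3 module; the table and values are illustrative. SQLite has no user management, so the DCL statement is shown only as text.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# DDL: depict the database -- create a table with constraints
conn.execute("""CREATE TABLE devices (
    device_id TEXT PRIMARY KEY,
    location  TEXT NOT NULL)""")

# DML: maintain and query the database
conn.execute("INSERT INTO devices VALUES ('rfid-01', 'warehouse-A')")
conn.execute("UPDATE devices SET location = 'warehouse-B' "
             "WHERE device_id = 'rfid-01'")
rows = conn.execute("SELECT device_id, location FROM devices").fetchall()
print(rows)  # [('rfid-01', 'warehouse-B')]

# DCL: in a multi-user RDBMS, access would be controlled with commands like:
dcl_example = "GRANT SELECT ON devices TO analyst;"
```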

NOSQL
NOSQL stands for No-SQL or Not Only SQL, and does not integrate with applications that are based on
SQL. NOSQL is used in cloud data stores. NOSQL may consist of a class of non-relational data storage
systems, flexible data models and multiple schemas.
Extract, Transform and Load

Extract, Transform and Load (ETL) is a system which enables the usage of databases, especially the
ones stored at a data warehouse. Extract means obtaining data from homogeneous or heterogeneous data
sources. Transform means transforming and storing the data in an appropriate structure or format. Load
means loading the structured data into the final target database, data store or data warehouse. All three
phases can execute in parallel. ETL system usages are for integrating data from multiple applications
(systems) hosted separately.
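The three ETL phases can be sketched as below. The two in-memory "sources" (CSV-like lines and JSON-like records) and the list acting as the warehouse are stand-ins for real heterogeneous systems.

```python
# Extract: obtain data from heterogeneous sources
source_a = ["t1,21.5", "t2,19.0"]                 # CSV-like lines, Celsius
source_b = [{"id": "t3", "temp_f": 68.0}]         # JSON-like records, Fahrenheit

def extract():
    for line in source_a:
        dev, val = line.split(",")
        yield {"id": dev, "temp_c": float(val)}
    for rec in source_b:
        # Part of the transformation (unit conversion) can begin at extraction
        yield {"id": rec["id"], "temp_c": (rec["temp_f"] - 32) * 5 / 9}

# Transform: put every record into one appropriate structure/format
def transform(records):
    return [{"id": r["id"], "temp_c": round(r["temp_c"], 1)} for r in records]

# Load: write the structured data into the target data store
warehouse = []
def load(records):
    warehouse.extend(records)

load(transform(extract()))
print(warehouse)
# [{'id': 't1', 'temp_c': 21.5}, {'id': 't2', 'temp_c': 19.0}, {'id': 't3', 'temp_c': 20.0}]
```

In a production pipeline the three functions would run as parallel stages, with transform consuming extract's stream while load writes completed batches.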

Relational Time Series Service

Time series data means an array of numbers indexed with time (date-time or a range of date-time). Time
series data can be considered as time-stamped data. It means the data carries along with it the date and time
information about the data values. IoT devices, such as temperature sensors, wireless sensor network
nodes, energy meters, RFID tags, ATMs and ACVMs, generate time-stamped or time series data.
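Time series data of this kind can be represented as (timestamp, value) pairs; the readings and the date-time range query below are illustrative.

```python
from datetime import datetime, timezone

# Each temperature reading carries its date-time information with its value
series = [
    (datetime(2024, 5, 1, 10, 0, tzinfo=timezone.utc), 21.4),
    (datetime(2024, 5, 1, 10, 5, tzinfo=timezone.utc), 21.9),
    (datetime(2024, 5, 1, 10, 10, tzinfo=timezone.utc), 22.3),
]

# Query by a range of date-time: select the values inside the interval
start = datetime(2024, 5, 1, 10, 4, tzinfo=timezone.utc)
end = datetime(2024, 5, 1, 10, 11, tzinfo=timezone.utc)
window = [value for (t, value) in series if start <= t <= end]
print(window)  # [21.9, 22.3]
```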

TRANSACTION PROCESSING ON STORED DATA

Online Transactions and Processing
OLTP means processing as soon as data or events generate in real time. OLTP is used when the requirements
are availability, speed, concurrency and recoverability in databases for real-time data or events. An example
is OLTP in the application and network domain in an Internet of ATMs (ATMs of a bank) connected to a bank
server.
Batch Transactions Processing
Batch transactions processing means the execution of a series of transactions without user interactions.
When one set of transactions finishes, the results are stored and the next batch is taken up. A good example is
credit card transactions where the final results at the end of the month are used.

Interactive Transactions Processing

Interactive transactions processing means the transactions which involve a continual exchange of information
between the computer and a user, for example, user interactions during e-shopping and e-banking. The
processing is just the opposite of batch processing.

Real-time Transactions Processing

Real-time transaction processing means that transactions process at the same time as the data arrives
from the data sources and data store. An example is ATM machine transactions.

Event Stream Processing and Complex Event Processing

Event processing is a method of tracking and analysing streams of information about things and deriving a
conclusion from them.
Event Stream Processing (ESP) is a set of technologies: event processing languages, Complex Event
Processing (CEP), event visualisation, event databases and event-driven middleware. ESP:
• Processes tasks on receiving streams of event data
• Identifies the meaningful patterns from the streams
• Detects relationships between multiple events
• Correlates the events data
• Detects event hierarchies
• Detects aspects such as timing, causality and subscription membership
• Builds and manages the event-driven information systems.
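Pattern detection over a stream can be sketched with the streetlight example from earlier in the chapter. The rule used here (three consecutive "dark" readings trigger a lighting event) is a hypothetical pattern, chosen only to illustrate ESP-style matching.

```python
from collections import deque

def detect_dark_pattern(stream, window=3):
    """Identify a meaningful pattern in an event stream: `window`
    consecutive dark-ambient events trigger one lighting action."""
    recent = deque(maxlen=window)
    triggered = []
    for event in stream:
        recent.append(event)
        if len(recent) == window and all(e["ambient"] == "dark" for e in recent):
            triggered.append({"action": "lights_on", "group": event["group"]})
            recent.clear()  # the pattern was acted upon; start afresh
    return triggered

stream = [
    {"group": "st-5", "ambient": "light"},
    {"group": "st-5", "ambient": "dark"},
    {"group": "st-5", "ambient": "dark"},
    {"group": "st-5", "ambient": "dark"},
]
print(detect_dark_pattern(stream))
# [{'action': 'lights_on', 'group': 'st-5'}]
```

Real ESP/CEP engines express such rules declaratively and correlate events across many streams, but the core idea — matching patterns over a sliding window of events — is the same.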

Complex Event Processing

CEP means aggregating, processing and analysing massive streams of data. CEP has many applications. A
CEP application is used for capturing a combination of data and timing conditions, and efficiently recognises
the corresponding events over data streams.
Examples:
• IoT event processing applications
• Stocks algorithmic-based trading
• Location based services

BUSINESS PROCESSES

A business process (BP) consists of a series of activities which serves a particular specific result. The BP is a
representation or process matrix or flowchart of a sequence of activities with interleaving decision points.
Business Intelligence
Business intelligence is a process which enables a business service to extract new facts and knowledge and
then undertake better decisions. The new facts and knowledge follow from the earlier results of data
processing, aggregation and then analysing those results.

Distributed Business Process

Distribution of processes reduces the complexity and communication costs, and enables faster responses and a
smaller processing load at the central system.
A Distributed Business Process System (DBPS) is a collection of logically interrelated business processes in
an enterprise network. DBPS means a software system that manages the distributed BPs.
DBPS features are:
DBPS is a collection of logically related BPs, like a DDBS. DBPS exists as cooperation between the BPs in
a transparent manner. Transparent means that each user within the system may access all of the process
decisions within all of the processes as if they were a single business process.
DBPS should possess 'location independence', which means the enterprise BI is unaware of where the BPs
are located. It is possible to move the results of analytics and knowledge from one physical location to
another without affecting the user.
ANALYTICS
The Internet of Things can use analytics: new facts are found, and those facts enable taking decisions for
new option(s) to maximise the profits from the machines. Analytics requires the data to be available and
accessible. It uses arithmetic and statistical methods, data mining and advanced methods, such as machine
learning, to find new parameters and information which add value to the data. Analytics enables building
models based on selection of the right data. Later the models are tested and used for services and processes.
Analytics Phases
Analytics has three phases before deriving new facts and providing business intelligence. These are:
1. Descriptive analytics enables deriving the additional value from visualisations and reports.
2. Predictive analytics is advanced analytics which enables extraction of new facts and knowledge, and
then predicts or forecasts.
3. Prescriptive analytics enables derivation of the additional value and undertaking better decisions for new
option(s) to maximise the profits.
Descriptive Analytics
Descriptive analytics means finding the aggregates, frequencies of occurrences and mean values. Descriptive
analytics enables the following:
• Actions, such as Online Analytical Processing (OLAP) for the analytics
• Reporting or generating spreadsheets
• Visualisations or dashboard displays of the analysed results
• Creation of indicators, called key performance indicators.
Descriptive Analytics Methods
• Spreadsheet-based reports and data visualisations: Results of descriptive analysis can be presented in a
spreadsheet format before creating the data visuals for the user. A spreadsheet enables user visualisation
of what-if scenarios.
• Descriptive statistics-based reports and data visualisations: Descriptive analysis can also use
descriptive statistics. Statistical analysis means finding peaks, minima, variance, probabilities and
statistical parameters.
• Data mining and machine learning methods in analytics: Data mining analysis means the use of algorithms
which extract hidden or unknown information or patterns from large amounts of data. Machine learning
means modelling of the specific tasks.
• Online analytical processing (OLAP) in analytics: OLAP enables viewing of analysed data up to the
desired granularity. OLAP enables obtaining summarised information and automated reports from large-
volume databases.
OLAP is an interactive system to show different summaries of multidimensional data by interactively
selecting the attributes in a multidimensional data cube.
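The descriptive measures named above (aggregates, frequencies, mean, peak, minima, variance) can be computed with Python's standard statistics module; the hourly ACVM sales figures are illustrative data.

```python
import statistics

# Hourly chocolate sales from an ACVM (illustrative data)
sales = [12, 15, 9, 22, 18, 15, 30, 11]

report = {
    "count": len(sales),                      # aggregate
    "mean": statistics.mean(sales),           # mean value
    "peak": max(sales),                       # peak
    "minimum": min(sales),                    # minima
    "variance": statistics.pvariance(sales),  # variance
    "frequency_of_15": sales.count(15),       # frequency of occurrence
}
print(report["mean"], report["peak"], report["minimum"])  # 16.5 30 9
```

Such a report would feed the spreadsheet, dashboard or KPI layer described above.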
Advanced Analytics: Predictive Analytics
Predictive anaJytics answer the question "What will happen?" Predictive analytics is advanced analytics. The
user interprets the outputs from advanced analytics using descriptive analytics methods, such as data
visualisation. For example, output predictions are visualised along with the yearly sales growth of past
five years and predicts next two years sales.

Predictive analytics uses algorithms, such as regression analysis, correlation, optimisation, and multivariate
statistics, and techniques sud1 as modeling, simulation, machine learning, and neural networks.
Prescriptive Analytics

This final phase suggests actions for deriving benefits from predictions, and shows the implications of the
decision options, the optimal solutions, new resource allocation strategies or risk mitigation strategies.
Prescriptive analytics suggests the best course of actions in the given state or set of inputs and rules.
Event Analytics
Event analytics use event data for events tracking and event reporting. Event analytics generate event
reports using event metrics (event counts, events acted upon, events pending action, rate of new events
generation) in each category of events.
An event has the following components:
• Category: an event of a chocolate purchase in an ACVM, for example, belongs to one category, and an
event of reaching a predefined threshold of sales for a specific chocolate flavour belongs to another category
• Action: sending a message from the ACVM on completing the predefined sales is the action taken on the
event
• Label (optional)
• Value (optional): on the event, messaging the number of chocolates of that flavour sold or remaining.

In-memory Data Processing and Analytics

An in-memory option of row or column formats can be selected in certain databases.
In-memory and On-store Row Format Option (Few Rows and Many Columns)
Consider transactions of the type of ATM transactions or sales order transactions. Each row has a separate
record. A row format can be optimised for OLTP operations. The operations access only a few rows and need
quick access to the columns. A row format, allowing row data, will be brought into the CPU with a single
memory reference. Data for each record is together, in-memory and on-store.
In-memory and On-store Column Format Option (Few Columns and More Rows)
Analytics run faster on column format, with more rows and few columns. A columnar format allows for much
faster data retrieval when only a few columns in a table are selected, because all the data for a column is kept
together in-memory in the column format option. A single memory access will load many column values into
the CPU.
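The row-versus-column trade-off can be illustrated with two in-memory layouts of the same table; the ATM transaction records are illustrative data.

```python
# Row format: each record's fields are stored together -- good for OLTP,
# since fetching one full transaction touches one contiguous record.
rows = [
    {"txn_id": 1, "atm": "A", "amount": 500},
    {"txn_id": 2, "atm": "B", "amount": 250},
    {"txn_id": 3, "atm": "A", "amount": 900},
]

# Column format: all values of one column are stored together -- good for
# analytics, since aggregating one column never touches the other columns.
columns = {
    "txn_id": [1, 2, 3],
    "atm": ["A", "B", "A"],
    "amount": [500, 250, 900],
}

# OLTP-style access: one whole record from the row store
print(rows[1])                  # {'txn_id': 2, 'atm': 'B', 'amount': 250}

# Analytic access: aggregate one column from the column store
print(sum(columns["amount"]))   # 1650
```

A real column store also gains from compression and vectorised CPU loads over each contiguous column, which the dictionaries here only hint at.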

Real-time Analytics Management

Real-time analytics management means ensuring faster OLTP as well as OLAP. Real-time analytics works
both as direct querying using an OLTP database and in a data warehouse, with OLAP on queried results.

ANALYTICS USING BIG DATA IN IOT/M2M

Big data means an extreme amount of data. It refers to a massive collection of both structured and
unstructured data that is very difficult to process with traditional techniques. Big data also means data of high
volume, variety, velocity and veracity (4Vs).
Volume means data received from a number of sources of data, including data sets with sizes beyond the
ability of commonly used software tools to acquire, manage and process within a tolerable elapsed time.
Variety means structured as well as unstructured data in different formats. Big data is multi-structured
data, compared to an RDBMS which maintains more structured data.
Velocity means data received at higher rates due to the use of a number of sources of data.
Veracity means variation in data quality for analytics.
Big data also refers simply to the use of predictive analytics or other certain advanced methods which
extract value from data.
Big Data Analytics

Big data is multi-structured data while an RDBMS maintains more structured data. The open-source software
Hadoop and MapReduce enable storage and analysis of massive amounts of data.

Hadoop File System (HDFS), Mahout (a library of machine learning algorithms) and HiveQL (an SQL-like
scripting language) are used for Big data analytics in the Hadoop ecosystem.

MapReduce is a programming model. Large data sets process on a cluster of nodes using MapReduce. The
same node runs the algorithm using the data sets at HDFS, and processing is at that node itself.
Hadoop is an open-source framework. The framework stores and processes big data. Clusters of
computing nodes process that data using simple programming models. Processing takes place in a
distributed environment. Hadoop accesses data in a sequential manner and performs batch processing.
HBase is a database for big data. Data access is random access.
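The MapReduce programming model can be sketched in a few lines. This single-process Python sketch only mimics the model; the two "splits" stand in for data blocks that, in Hadoop, would live on different HDFS nodes and be processed locally.

```python
from collections import defaultdict

# Map: each node emits (key, 1) pairs from its local split of the data.
def map_phase(records):
    return [(r["device"], 1) for r in records]

# Shuffle: group the intermediate pairs by key.
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce: combine each key's values into a final result.
def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

# Two "nodes", each processing its own split of the transaction data set
split_1 = [{"device": "atm-1"}, {"device": "atm-2"}]
split_2 = [{"device": "atm-1"}, {"device": "atm-1"}]
pairs = map_phase(split_1) + map_phase(split_2)
print(reduce_phase(shuffle(pairs)))  # {'atm-1': 3, 'atm-2': 1}
```

In a real cluster the map tasks run in parallel where the data resides, and the framework performs the shuffle across the network before the reduce tasks run.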

Data Analytics Architecture

Analytics architecture consists of the following layers:

• Data sources layer
• Data storage and processing layer
• Data access and query processing layer
• Data services, reporting and advanced analytics layer
Figure 5.4 shows an overview of a reference model for analytics architecture. Figure 5.4 also shows on the
right-hand side the layers in the reference model.

[Figure 5.4 diagram: the reference-model layers from external data sources and data acquiring at the bottom, through event stream and complex event processing and the data store, then in-memory or on-store database processing with MapReduce and SQL/NoSQL query processing with OLTP and ETL, up to data services, reporting, data visualisation and predictive/prescriptive analytics at the top.]

Figure 5.4 Analytics Architecture Reference Model

Berkeley Data Analytics Stack (BDAS)

The Berkeley Data Analytics Stack (BDAS) consists of data processing, data management and resource
management layers.
Data processing combines batch, streaming and interactive computations.
The resource management software component provides for sharing the infrastructure across the frameworks.
Figure 5.5 shows an overview of the BDAS architecture, which is a reference model for analytics architecture.
Figure 5.5 also shows on the right-hand side the file system, library of machine learning algorithms and SQL-
like scripting language software for Big data analytics in the Hadoop ecosystem.

[Figure 5.5 diagram: the BDAS layers alongside the corresponding Hadoop ecosystem components, including HDFS, the Mahout machine learning library and HiveQL.]

Figure 5.5 Berkeley data analytics stack architecture
KNOWLEDGE ACQUIRING, MANAGING AND STORING PROCESSES
Three processes for knowledge are the acquiring process, managing process and storing process.

IoT data sources continuously generate data, which the applications or processes acquire, organise
and integrate or enrich using analytics. Knowledge discovery tools provide the knowledge at a particular
point of time as more and more data is processed and analysed. Knowledge is an important asset of an
enterprise.
Knowledge Management
Knowledge management (KM) is managing knowledge when new knowledge is regularly acquired,
processed and stored. Knowledge management also provisions for replacing the earlier gathered knowledge
and managing the life cycle of stored knowledge. A KM tool has processes for discovering, using, sharing,
replacing with new, creating and managing the knowledge database and information of the enterprise.

Knowledge-Management Reference Architecture

Figure 5.6(a) shows a reference architecture for knowledge management. Figure 5.6(b) shows
correspondences with the ITU-T reference model's four layers and the OSI model layers.
The lowest layer has sublayers for devices data and streaming data sources which provide input for analytics
and knowledge. Databases, Business Support Systems (BSSs) and Operational Support Systems (OSSs) data
can also be additional inputs.
The next higher layer has data adaptation and enrichment sublayers. The adaptation and enrichment sublayers
adapt the data from the lowest layer into appropriate forms, such as database, structured data and unstructured
data, so that it can be used for analytics and processing.
The next higher layer has processing and analytics sublayers. These sublayers are input to information access
tools and knowledge discovery tools.
The highest layer has knowledge acquiring, managing, storing and knowledge life-cycle management
sublayers. Knowledge acquires from the use of information access tools and knowledge discovery tools.
[Figure 5.6 diagram: from bottom to top, the IoT/M2M devices, OSS/BSS, streaming data and databases sublayers; the data adaptation and enrichment layer; the processing and analytics sublayers with information access and knowledge discovery tools; and the knowledge acquiring, managing, storing and life-cycle management layer, with correspondences to the ITU-T reference model and OSI layers on the right.]

Figure 5.6 (a) A reference architecture for knowledge management (left-hand side) and (b) correspondences in terms of the ITU-T reference model and OSI layers for IoT/M2M (middle and right-hand side)