Unit 1 - BD - Introduction To Big Data

Big Data (CS 30017)

Kalinga Institute of Industrial Technology


Deemed to be University
Bhubaneswar-751024

School of Computer Engineering

Strictly for internal circulation (within KIIT) and reference only. Not for outside circulation without permission

3 Credit Lecture Note


Motivating Quotes
2

 “The world is one big data platform.” - Andrew McAfee, co-director of the
MIT Initiative on the Digital Economy, and the associate director of the
Center for Digital Business at the MIT Sloan School of Management.
 “Errors using inadequate data are much less than those using no data at
all.” - Charles Babbage, inventor and mathematician.
 “The most valuable commodity I know of is information.” - Gordon
Gekko, fictional character in the 1987 film Wall Street and its 2010
sequel Wall Street: Money Never Sleeps, played by Michael Douglas.
 “Big data will replace the need for 80% of all doctors” - Vinod Khosla,
Indian-born American engineer and businessman.
 “Thanks to big data, machines can now be programmed to do the next thing right. But only humans can do the next right thing.” - Dov Seidman,
American author, attorney, columnist and businessman
School of Computer Engineering
Motivating Quotes cont’d
3

 “With data collection, ‘the sooner the better’ is always the best answer.” -
Marissa Mayer, former president and CEO of Yahoo!
 “Data is a precious thing and will last longer than the systems
themselves.” - Tim Berners-Lee, inventor of the World Wide Web.
 “Numbers have an important story to tell. They rely on you to give them
a voice.” - Stephen Few, Information Technology innovator, teacher, and
consultant.
 “When we have all data online it will be great for humanity. It is a
prerequisite to solving many problems that humankind faces” - Vinod
Khosla, Indian-born American engineer and businessman.
 “Thanks to big data, machines can now be programmed to do the next thing right. But only humans can do the next right thing.” - Robert Cailliau,
Belgian informatics engineer and computer scientist who, together with
Tim Berners-Lee, developed the World Wide Web.
School of Computer Engineering
Importance of the Course
4

 Big Data is indeed a revolution in the field of Information Technology.
 The use of big data by companies is increasing every year, and their primary focus is on customers. The field is flourishing particularly in healthcare, energy, and business-to-consumer (B2C) applications.
 Many organizations are actively looking for the right talent to analyze vast amounts of data.
 The following four perspectives lead to the importance of big data analytics:
 Data Science
 Business
 Real-time Usability
 Job Market

Further study: https://www.whizlabs.com/blog/big-data-analytics-importance/

School of Computer Engineering


Why Learn Big Data?
5

To answer why you should learn Big Data, let’s start with what industry leaders say about it:
 Gartner – Big Data is the new Oil.
 IDC – Its market will be growing 7 times faster than the overall IT market.
 IBM – It is not just a technology – it’s a business strategy for capitalizing on
information resources.
 IBM – Big Data is the biggest buzz word because technology makes it
possible to analyze all the available data.
 McKinsey – There will be a shortage of 1,500,000 Big Data professionals by the mid-2030s.
Industries today are searching for new and better ways to maintain their position and be prepared for the future. According to experts, Big Data analytics provides leaders with a path to capture insights and ideas to stay ahead of the tough competition.
School of Computer Engineering
Course Objective
6

 To understand the concept and principles of big data.


 To explore the big data stacks and the technologies associated
with it.
 To evaluate the different NoSQL databases and frameworks required to handle big data.
 To formulate the concepts, principles and techniques focusing on the applications to industry and real-world experience.
 To contextually integrate and correlate large amounts of
information to gain faster insights for real time scenarios.

School of Computer Engineering


Course Outcome
7

1. Understand the concept of data management, evolution and


building blocks of big data
2. Analyse various big data technology foundations
3. Apply map reduce paradigm to solve data intensive problems
4. Analyse big data frameworks like Hadoop and NoSQL to
efficiently store and process big data to generate analytics
5. Present appropriate solutions to big data analytics problems
6. Interpret and present data findings effectively and visually to any audience

School of Computer Engineering


Course Contents
8

Sr # | Major and Detailed Coverage Area | Hrs

1 | Overview of Big Data: Importance of Data, Characteristics of Data, Analysis of unstructured data, Introduction to Big Data, Challenges of conventional systems, Data analytics, Evolution of analytic scalability, Big Data Analytics, Key Big Data terminologies, Big Data analytics lifecycle, Cloud Computing and Big Data. | 6

2 | Big Data Technology Foundations: Exploring the Big Data Stack, Data Sources Layer, Ingestion Layer, Storage Layer, Physical Infrastructure Layer, Platform Management Layer, Security Layer, Monitoring Layer, Analytics Engine, Visualization Layer, Big Data Applications, Virtualization. | 5

3 | Streaming: Introduction to Streams Concepts – Stream data model and architecture – Stream Computing, Sampling data in a stream – Filtering streams, Counting distinct elements in a stream. | 5

School of Computer Engineering


Course Contents continue…
9

Sr # | Major and Detailed Coverage Area | Hrs

4 | Hadoop Ecosystem: Introduction to Hadoop, Hadoop Ecosystem, Hadoop Distributed File System, MapReduce, YARN, Hive, Pig and PigLatin, Jaql, Zookeeper, HBase, Cassandra, Oozie, Lucene, Avro, Mahout. | 10

5 | Storing Data in Big Data context: Data Models, RDBMS and Hadoop, Non-Relational Database, Introduction to NoSQL, Types of NoSQL, Polyglot Persistence, Sharding. | 6

6 | Frameworks and Visualization: Distributed and Parallel Computing for Big Data, Big Data Visualizations – Visual data analysis techniques, interaction techniques, applications. | 6

School of Computer Engineering


Books
10

Textbook
 Big Data, Black Book, DT Editorial Services, Dreamtech Press, 2016
Reference Books
 Seema Acharya, Subhashini Chellappan (Infosys Limited), Big Data and Analytics, Wiley India Private Limited, 1st Edition, 2015
 EMC Education Services (Editor), Discovering, Analyzing, Visualizing and Presenting Data, Wiley, 2014
 Stephan Kudyba, Thomas H. Davenport, Big Data, Mining, and Analytics: Components of Strategic Decision Making, CRC Press, Taylor & Francis Group, 2014
 Norman Matloff, The Art of R Programming, No Starch Press, Inc., 2011
 Judith Hurwitz et al., Big Data For Dummies, Wiley, 2013
 Glenn J. Myatt, Making Sense of Data, John Wiley & Sons, 2007
 Pete Warden, Big Data Glossary, O’Reilly, 2011

School of Computer Engineering


Evaluation
11

Grading:
 Internal assessment – 30 marks
 1 group critical thinking (class test) = 5 X 1 = 5 marks
 2 group assignments = 5 X 2 = 10 marks
 1 individual class note = 5 X 1 = 5 marks
 1 group quiz = 5 X 1 = 5 marks
 1 individual class participation = 5 X 1 = 5 marks

 Mid-Term exam - 20 marks

 End-Term exam - 50 marks

School of Computer Engineering
Data
12

 A representation of information, knowledge, facts, concepts or instructions


which are being prepared or have been prepared in a formalized manner.
 Data is either intended to be processed, is being processed, or has been
processed.
 It can be in any form stored internally in a computer system or computer
network or in a person’s mind.
 Since the mid-1900s, people have used the word data to mean computer
information that is transmitted or stored.
 Data is the plural of datum (a Latin word meaning something given), a single
piece of information. In practice, however, people use data as both the
singular and plural form of the word.
 It must be interpreted, by a human or machine to derive meaning.
 It is present in homogeneous as well as heterogeneous sources.
 The need of the hour is to understand, manage, process, and analyze data to draw valuable insights.
Data → Information → Knowledge → Actionable Insights
School of Computer Engineering
Importance of Data
13

 The ability to analyze and act on data is increasingly important to


businesses. It might be part of a study helping to cure a disease, boost a company’s revenue, understand and interpret market trends, study customer behavior, and make financial decisions.
 The pace of change requires companies to be able to react quickly to
changing demands from customers and environmental conditions. Although
prompt action may be required, decisions are increasingly complex as
companies compete in a global marketplace
 Managers may need to understand high volumes of data before they can
make the necessary decisions
 Relevant data creates strong strategies - Opinions can turn into great
hypotheses, and those hypotheses are just the first step in creating a strong
strategy. It can look something like this: “Based on X, I believe Y, which will
result in Z”
 Relevant data strengthens internal teams
 Relevant data quantifies the purpose of the work
School of Computer Engineering
Characteristics of Data
14

 Composition: deals with the structure of the data, i.e. the source, the granularity, the type, and the nature (static or real-time streaming).
 Condition: deals with the state of the data, i.e. its usability for analysis; does it require cleaning for further enhancement and enrichment?
 Context: deals with “where it has been generated”, “why was this generated”, “how sensitive is this”, “what are the associated events” and so on.

School of Computer Engineering


Human vs. Machine Readable data
15

 Human-readable refers to information that only humans can interpret and study,
such as an image or the meaning of a block of text. If it requires a person to
interpret it, that information is human-readable.
 Machine-readable refers to information that computer programs can process. A
program is a set of instructions for manipulating data. Such data can be
automatically read and processed by a computer, such as CSV, JSON, XML, etc.
Non-digital material (for example, printed or hand-written documents) is by its non-digital nature not machine-readable. But even digital material need not be machine-readable. For example, consider a PDF document containing tables of data. It is definitely digital but not machine-readable, because a computer would struggle to access the tabular information, even though it is very human-readable. The equivalent tables in a format such as a spreadsheet would be machine-readable. As another example, scans (photographs) of text are not machine-readable (but are human-readable!), whereas the equivalent text in a format such as a simple ASCII text file is machine-readable and processable.
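To make the distinction concrete, here is a minimal illustrative Python sketch (the data and file name mentioned in the comments are hypothetical) showing how machine-readable formats such as CSV and JSON can be parsed directly by a program, whereas a scanned image of the same table could not be used without OCR.

```python
import csv
import json
import io

# Machine-readable: a CSV table can be parsed directly into records.
csv_text = "name,age\nAsha,21\nRavi,23\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))
print(rows)  # [{'name': 'Asha', 'age': '21'}, {'name': 'Ravi', 'age': '23'}]

# Machine-readable: the same data as JSON.
json_text = '[{"name": "Asha", "age": 21}, {"name": "Ravi", "age": 23}]'
records = json.loads(json_text)
print(records[0]["name"])  # Asha

# Human-readable only: a scanned image of the same table (hypothetical file)
# would need OCR before a program could access the values.
# with open("scanned_table.png", "rb") as f: ...  # raw bytes, not structured data
```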

School of Computer Engineering


Classification of Digital Data
16

Digital data is classified into the following categories:


 Structured data
 Semi-structured data
 Unstructured data

Approximate percentage distribution of digital data

School of Computer Engineering


Structured Data
17

 It is defined as the data that has a defined repeating pattern and this pattern
makes it easier for any program to sort, read, and process the data.
 This data is in an organized form (e.g., in rows and columns) and can be easily used by a computer program.
 Relationships exist between entities of data.
 Structured data:
 Organize data in a pre-defined format
 Is stored in a tabular form
 Is the data that resides in fixed fields within a record or file
 Is formatted data that has entities and their attributes mapped
 Is used to query and report against predetermined data types
 Sources: relational databases, multidimensional databases, legacy databases, and flat files (a small sketch of structured data in a relational table follows below).
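As an illustration of structured data, the following minimal Python sketch (the table and column names are invented for this example) stores records in a relational table with a fixed, pre-defined schema and queries them with SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
conn.executemany(
    "INSERT INTO customer (id, name, city) VALUES (?, ?, ?)",
    [(1, "Asha", "Bhubaneswar"), (2, "Ravi", "Cuttack")],
)

# Because every row follows the same pre-defined schema, querying is straightforward.
for row in conn.execute("SELECT name FROM customer WHERE city = ?", ("Bhubaneswar",)):
    print(row)  # ('Asha',)
```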
School of Computer Engineering
Ease with Structured Data
18

 Insert/Update/Delete: DML operations provide the required ease with data input, storage, access, processing, analysis, etc.
 Security: Encryption and tokenization solutions warrant the security of information throughout its life cycle. Organizations are able to retain control and maintain compliance adherence by ensuring that only authorized users are able to decrypt and view sensitive information.
 Indexing: Indexing speeds up the data retrieval operation at the cost of additional writes and storage space, but the benefits gained in search operations are worth the additional writes and storage space.
 Scalability: The storage and processing capabilities of the traditional DBMS can easily be scaled up by increasing the horsepower of the database server.
 Transaction Processing: RDBMS has support for the ACID properties of transactions to ensure accuracy, completeness, and data integrity.

School of Computer Engineering


Semi-structured Data
19

 Semi-structured data, also known as having a schema-less or self-describing structure, refers to a form which does not conform to a data model as in a relational database but has some structure.
 In other words, the data cannot be stored consistently in the rows and columns of a database.
 However, it is not in a form which can be used easily by a computer program.
 Examples: emails, XML, markup languages like HTML, etc. Metadata for this data is available but is not sufficient.
 Sources: XML, JSON, other markup languages, and web data in the form of cookies.

School of Computer Engineering


XML, JSON, BSON format
20

Source (XML & JSON): http://sqllearnergroups.blogspot.com/2014/03/how-to-get-json-format-through-sql.html


Source (JSON & BSON): http://www.expert-php.fr/mongodb-bson/
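The original slide shows the same record rendered in XML, JSON, and BSON (see the source links above). As a rough stand-in, the following Python sketch (the field names are invented) parses one record expressed in both XML and JSON using only the standard library; BSON, the binary JSON encoding used by MongoDB, needs a third-party library and is only mentioned in a comment.

```python
import json
import xml.etree.ElementTree as ET

xml_text = "<employee><id>101</id><name>Asha</name><dept>CSE</dept></employee>"
json_text = '{"employee": {"id": 101, "name": "Asha", "dept": "CSE"}}'

# XML: structure is carried by nested tags.
root = ET.fromstring(xml_text)
print(root.find("name").text)  # Asha

# JSON: structure is carried by nested key/value pairs.
doc = json.loads(json_text)
print(doc["employee"]["name"])  # Asha

# BSON is the binary form of JSON used by MongoDB; with the third-party
# pymongo/bson package one could encode the same document in binary form.
```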

School of Computer Engineering


Characteristics of Semi-structured Data
21

 Inconsistent structure
 Self-describing (label/value pairs)
 Other schema information is blended with data values
 Data objects may have different attributes not known beforehand

School of Computer Engineering


Unstructured Data
22

 Unstructured data is a set of data that might or might not have any logical or
repeating patterns and is not recognized in a pre-defined manner.
 About 80 percent of enterprise data consists of unstructured content.
 Unstructured data:
 Typically consists of metadata i.e. additional information related to data.
 Comprises inconsistent data such as data obtained from files, social media websites, satellites, etc.
 Consists of data in different formats such as e-mails, text, audio, video, or
images.
 Sources: body of emails; chats and text messages; text both internal and external to the organization; mobile data; social media data; images, audio, and video.
School of Computer Engineering
Challenges associated with Unstructured data
23

Working with unstructured data poses certain challenges, which are as follows:
 Identifying the unstructured data that can be processed
 Sorting, organizing, and arranging unstructured data in different sets and formats
 Combining and linking unstructured data in a more structured format to derive
any logical conclusions out of the available information
 Costing in terms of storage space and human resources needed to deal with the exponential growth of unstructured data
Data Analysis of Unstructured Data
The complexity of unstructured data lies within the language that created it. Human
language is quite different from the language used by machines, which prefer
structured information. Unstructured data analysis refers to the process of analyzing data objects that do not follow a predefined data model and/or are unorganized. It is the analysis of any data that is stored over time within an organizational data repository without any intent for its orchestration, pattern, or categorization.

School of Computer Engineering


Dealing with Unstructured data
24

Dealing with unstructured data involves:
 Data Mining (DM)
 Natural Language Processing (NLP)
 Text Analytics (TA)
 Noisy Text Analytics

Note: Refer to Appendix for further details.
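As a tiny taste of text analytics on unstructured data, the following illustrative Python sketch (the sample review text is invented) tokenizes free text and counts the most common words, a first step toward spotting themes in customer feedback.

```python
import re
from collections import Counter

review = ("The delivery was late and the packaging was damaged, "
          "but the support team resolved the issue quickly.")

# Lowercase, keep only alphabetic tokens, drop a few common stop words.
stop_words = {"the", "and", "was", "but", "a", "an", "of"}
tokens = [t for t in re.findall(r"[a-z]+", review.lower()) if t not in stop_words]

print(Counter(tokens).most_common(3))
# e.g. [('delivery', 1), ('late', 1), ('packaging', 1)]
```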

School of Computer Engineering


Home work
25

 In reference to the site: https://www.geeksforgeeks.org/difference-between-


structured-semi-structured-and-unstructured-data/, draw a table outlining
the differences between structured, semi-structured and unstructured data
in relation to the properties like technology, transaction management, etc.,
 Categorize the following data as structured, semi-structured and
unstructured:
 Newspaper
 Cricket match score
 HTML page
 Patient records in a hospital
 Is natural language unstructured data?
 Are receipts and invoices structured or unstructured or semi-structured?
 Why is unstructured text data important in decision making?
 What does AI have to do with unstructured data?

School of Computer Engineering


Definition of Big Data
26

Big Data is high-volume, high-velocity, and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making.
Source: Gartner IT Glossary
The key elements of this definition are:
 High-volume, high-velocity, high-variety
 Cost-effective, innovative forms of information processing
 Enhanced insight and decision making

School of Computer Engineering


What is Big Data?
27

Think of following:

 Every second, there are around 822 tweets on Twitter


 Every minute, nearly 510K comments are posted, 293K statuses are updated, and 136K photos are uploaded on Facebook
 Every hour, Walmart, a global discount departmental store chain, handles more
than 1 million customer transactions.
 Every day, consumers make around 11.5 million payments by using PayPal.
In the digital world, data is increasing rapidly because of the ever-increasing use of the internet, sensors, and heavy machines. The sheer volume, variety, velocity, and veracity of such data is signified by the term ‘Big Data’.

Structured Data + Semi-structured Data + Unstructured Data → Big Data

School of Computer Engineering


Challenges of Conventional Systems
28

The main challenge for traditional computing systems is to manage ‘Big Data’ because of the immense speed and volume at which it is generated. Some of the challenges are:
 The traditional approach cannot work on unstructured data efficiently
 The traditional approach is built on top of the relational data model; relationships between the subjects of interest are created inside the system and the analysis is done based on them. This approach is not adequate for big data
 The traditional approach is batch-oriented and needs to wait for nightly ETL (extract, transform and load) and transformation jobs to complete before the required insight is obtained
 Traditional data management, warehousing, and analysis systems fail to analyze this type of data. Due to its complexity, big data is processed with parallelism. Parallelism in a traditional system is achieved through costly hardware like MPP (Massively Parallel Processing) systems
 Inadequate support of aggregated summaries of data

School of Computer Engineering


Challenges of Conventional Systems cont’d
29
Other challenges can be categorized as:
 Data Challenges:
 Volume, velocity, veracity, variety
 Data discovery and comprehensiveness
 Scalability

 Process challenges
 Capturing Data
 Aligning data from different sources
 Transforming data into suitable form for data analysis
 Modeling data (mathematical, simulation)
 Management Challenges:
 Security
 Privacy
 Governance
 Ethical issues
School of Computer Engineering
Elements of Big Data
30
In most big data circles, these are called the four V’s: volume, variety, velocity, and veracity.
(One might consider a fifth V, value.)
Volume - refers to the incredible amounts of data generated each second from social media,
cell phones, cars, credit cards, M2M sensors, photographs, video, etc. The vast amounts of
data have become so large, in fact, that they can no longer be stored and analyzed using traditional database technology. Distributed systems are therefore used, where parts of the data are stored in different locations and brought together by software.
Variety - defined as the different types of data that digital systems now use. Data today looks
very different than data from the past. New and innovative big data technology is now
allowing structured and unstructured data to be harvested, stored, and used
simultaneously.
Velocity - refers to the speed at which vast amounts of data are being generated, collected
and analyzed. Every second of every day data is increasing. Not only must it be analyzed,
but the speed of transmission, and access to the data must also remain instantaneous to
allow for real-time access. Big data technology allows us to analyze the data while it is being
generated, without ever putting it into databases.
Veracity - is the quality or trustworthiness of the data. Just how accurate is all this data?
For example, think about all the Twitter posts with hash tags, abbreviations, typos, etc., and
the reliability and accuracy of all that content.
School of Computer Engineering
Elements of Big Data cont’d
31
Value - refers to the ability to transform a tsunami of data into business value. Having endless
amounts of data is one thing, but unless it can be turned into value it is useless.

Refer to Appendix
for data volumes

School of Computer Engineering


Why Big Data?
32
More data for analysis results in greater analytical accuracy and greater confidence in the decisions based on the analytical findings. This entails a greater positive impact in terms of enhancing operational efficiencies, reducing cost and time, innovating on new products and services, and optimizing existing services.

More data → More accurate analysis → Greater confidence in decision making → Greater operational efficiencies, cost reduction, time reduction, new product development, optimized offerings, etc.

School of Computer Engineering


Data Analytics
33
Data analytics is the process of extracting useful information by analysing
different types of data sets. It is used to discover hidden patterns, outliers, unearth trends, unknown correlations, and other useful information for the benefit of faster decision making.
There are 4 types of analytics:

School of Computer Engineering


Analytics Approach – What is the data telling?
34

Approach Explanation
Descriptive What’s happening in my business?
• Comprehensive, accurate and historical data
• Effective Visualisation
Diagnostic Why is it happening?
• Ability to drill-down to the root-cause
• Ability to isolate all confounding information
Predictive What’s likely to happen?
• Decisions are automated using algorithms and technology
• Historical patterns are being used to predict specific outcomes using
algorithms
Prescriptive What do I need to do?
• Recommended actions and strategies based on champion/challenger
strategy outcomes
• Applying advanced analytical algorithm to make specific
recommendations
School of Computer Engineering
Mapping of Big Data’s Vs to Analytics Focus
35

Historical data can be quite large. There might be a need to process huge amounts of data many times a day as it gets updated continuously; therefore, volume is mapped to history. Variety is pervasive: input data, insights, and decisions can span a variety of forms, hence it is mapped to all three. High-velocity data might have to be processed to help real-time decision making, and velocity plays across descriptive, predictive, and prescriptive analytics when they deal with present data. Predictive and prescriptive analytics create data about the future. That data is uncertain by nature and its veracity is in doubt; therefore, veracity is mapped to predictive and prescriptive analytics when they deal with the future.
School of Computer Engineering
Evolution of Analytics Scalability
36
It goes without saying that the world of big data requires new levels of scalability. As the
amount of data organizations process continues to increase, the same old methods for
handling data just won’t work anymore. Organizations that don’t update their
technologies to provide a higher level of scalability will quite simply choke on big data.
Luckily, there are multiple technologies available that address different aspects of the
process of taming big data and making use of it in analytic processes.
Traditional Analytics Architecture

Database 1, Database 2, Database 3, ..., Database n → Extract → Analytic Server
The heavy processing occurs in the analytic environment. This may even be a PC.
School of Computer Engineering
Evolution of Analytics Scalability cont’d
37

Modern In-Database Analytics Architecture

Refer to Appendix for further details on EDW.
Database 1, Database 2, Database 3, ..., Database n → Consolidate → Enterprise Data Warehouse (EDW) ← Submit Request ← Analytic Server
In an in-database environment, the processing stays in the database where the data has been consolidated. The user’s machine just submits the request; it doesn’t do the heavy lifting.

School of Computer Engineering


Evolution of Analytics Scalability cont’d
38

MPP Database Analytics Architecture


Massively parallel processing (MPP) database systems is the most mature, proven, and
widely deployed mechanism for storing and analyzing large amounts of data. An MPP
database spreads data out into independent pieces managed by independent storage
and central processing unit (CPU) resources. Conceptually, it is like having pieces of
data loaded onto multiple network connected personal computers around a house.
The data in an MPP system gets split across a variety of disks managed by a variety of
CPUs spread across a number of servers.

Instead of a single overloaded database server, an MPP database breaks the data into independent chunks, each with its own disk and CPU, spread across multiple lightly loaded servers.

School of Computer Engineering


MPP Database Example
39

A one-terabyte table is split into ten 100-gigabyte chunks. A traditional database will query the one-terabyte table one row at a time; an MPP database instead runs 10 simultaneous 100-gigabyte queries.

MPP database is based on the principle of SHARE THE WORK!


An MPP database spreads data out across multiple sets of CPU and disk space. Think
logically about dozens or hundreds of personal computers each holding a small piece of a
large set of data. This allows much faster query execution, since many independent
smaller queries are running simultaneously instead of just one big query
If more processing power and more speed are required, just bolt on additional capacity in
the form of additional processing units
MPP systems build in redundancy to make recovery easy and have resource
management tools to manage the CPU and disk space
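To illustrate the "share the work" principle in miniature, here is an illustrative Python sketch (the data, chunk count, and worker count are made up) that splits a dataset into independent chunks, processes them in parallel worker processes, and combines the partial results, loosely mimicking how an MPP system splits one query across nodes.

```python
from multiprocessing import Pool

def count_matches(chunk):
    """Partial work done independently on one chunk (one 'node')."""
    return sum(1 for value in chunk if value % 7 == 0)

if __name__ == "__main__":
    data = list(range(1_000_000))                      # the full "table"
    n_chunks = 10
    size = len(data) // n_chunks
    chunks = [data[i * size:(i + 1) * size] for i in range(n_chunks)]

    with Pool(processes=4) as pool:                    # independent workers
        partial_results = pool.map(count_matches, chunks)

    print(sum(partial_results))                        # combined final answer
```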

School of Computer Engineering


MPP Database Example cont’d
40

An MPP system breaks the job into pieces and allows the different sets of CPU and disk to run the process concurrently, turning a single-threaded process into a parallel process.
School of Computer Engineering
Analysis vs. Reporting
41
Reporting - The process of organizing data into informational summaries in
order to monitor how different areas of a business are performing.
Analysis: The process of exploring data and reports in order to extract
meaningful insights, which can be used to better understand and improve
business performance.
Difference b/w Reporting and Analysis:
 Reporting translates raw data into information. Analysis transforms data
and information into insights.
 Reporting helps companies to monitor their online business and be alerted
to when data falls outside of expected ranges. Good reporting should raise
questions about the business from its end users. The goal of analysis is to
answer questions by interpreting the data at a deeper level and providing
actionable recommendations.
 In summary, reporting shows you what is happening while analysis focuses
on explaining why it is happening and what you can do about it.

School of Computer Engineering


Big Data Analytics
42
Big data analytics is the process of extracting useful information by analysing different
types of big data sets. It is used to discover hidden patterns, outliers, unearth trends, unknown correlations, and other useful information for the benefit of faster decision making.
Big Data Application in different Industries

School of Computer Engineering


What is Big Data Analytics ?
43

Big Data Analytics is:
 Moving code to data for greater speed and efficiency
 Richer, deeper insights into customers, partners and the business
 Better, faster decisions in real-time
 Working with datasets whose volume and variety are beyond the storage and processing capacity of a typical DB
 Competitive advantage
 IT’s collaboration with business users and data scientists
 Technology-enabled analytics
 Time-sensitive decisions made in near real time by processing real-time data

School of Computer Engineering


What is Big Data Analytics isn’t?
44

Big Data Analytics isn’t:
 Only about volume
 Just about technology
 Meant to replace RDBMS
 A “one-size-fits-all” traditional RDBMS built on shared disk and memory
 Only used by huge online companies
 Meant to replace the data warehouse

School of Computer Engineering


Challenges that prevent business from
capitalizing on Big Data
45

1. Obtaining executive sponsorships for investments in big data and its related
activities such as training etc.
2. Getting the business units to share information across organizational silos.
3. Finding the right skills that can manage large amounts of structured, semi-
structured, and unstructured data and create insights from it.
4. Determining the approach to scale rapidly and elastically. In other words,
the need to address the storage and processing of large volume, velocity and
variety of big data.
5. Deciding whether to use structured or unstructured, internal or external
data to make business decisions.
6. Determining what to do with the insights created from big data.
7. Choosing the optimal way to report findings and analysis of big data for the
presentations to make the most sense.

School of Computer Engineering


Top challenges facing Big Data
46

1. Scale: Storage is one major concern that needs to be addressed to handle


the need for scaling rapidly and elastically. The need of the hour is storage that can best withstand the onslaught of the large volume, velocity, and variety of big data. Should one scale vertically or horizontally?
2. Security: Most of the NoSQL (Not only SQL) big data platforms have poor
security mechanism (lack of proper authentication and authorization
mechanisms) when it comes to safeguarding big data.
3. Schema: Rigid schemas have no place. The need of the hour is dynamic schemas; static (pre-defined) schemas are passé.
4. Data Quality: How to maintain data quality – data accuracy, completeness,
timeliness etc. Is the appropriate metadata in place?
5. Partition Tolerance: How to build partition-tolerant systems that can take
care of both hardware and software failures?
6. Continuous availability: The question is how to provide 24/7 support
because almost all RDBMS and NoSQL big data platforms have a certain
amount of downtime built in.
School of Computer Engineering
Kind of Technologies to help meet the
challenges posed by Big Data
47

1. Cheap and abundant storage


2. Faster processors to help with quicker processing of
big data
3. Affordable open-source, distributed big data
platforms
4. Parallel processing, clustering, visualisation, large
grid environments, high connectivity, and high
throughputs rather than low latency
5. Cloud computing and other flexible resource
allocation agreements

School of Computer Engineering


Key terminologies used in Big Data
48

In-Memory Analytics: Data access from non-volatile storage such as hard disk
is a slow process. The more the data is required to be fetched from hard disk or
secondary storage, the slower the process gets. The problem can be addressed
using in-memory analytics. All the relevant data is stored in RAM or primary
storage thus eliminating the need to access the data from hard disk. The
advantage is faster access, rapid deployment, better insights and minimal IT
involvement. In-memory analytics makes everything instantly available due to the lower cost of RAM and flash memory, and data can be stored and processed at lightning speed.
In-Database Processing: Also called In-Database analytics. It works by
fusing data warehouses with analytical systems. Typically the data from various
enterprise Online Transaction Processing (OLTP) systems after cleaning up (de-
duplication, scrubbing etc.) through the process of ETL is stored in the
Enterprise Data Warehouse or data marts. The huge datasets are then exported
to analytical programs for complex and extensive computations.
Note: Refer to Appendix for further details on OLTP and ETL.
School of Computer Engineering
Key terminologies used in Big Data cont’d
49

Symmetric Multiprocessor System (SMP): In SMP, there is a single common


main memory that is shared by two or more identical processors. The
processors have full access to all I/O devices and are controlled by a single
operating system instance. Each processor has its own high-speed memory,
called cache memory, and the processors are connected using a system bus.

Source: https://en.wikipedia.org/wiki/Symmetric_multiprocessing
School of Computer Engineering
Key terminologies used in Big Data cont’d
50

Parallel Systems: A parallel database system is a tightly coupled system. The


processors co-operate for query processing. The user is unaware of the
parallelism since he/she has no access to a specific processor of the system.

User User User

Front end computer

P1 P2 P3

Back end parallel system

School of Computer Engineering


Key terminologies used in Big Data cont’d
51

Distributed Systems: Known to be loosely coupled and are composed of


individual machines. Each of the machines can run its own individual application and serve its own respective users. The data is usually distributed across
several machines, thereby necessitating quite a number of machines to be
accessed to answer a user query.
User User

P2
User User User User

P1 P3

Network

School of Computer Engineering


Distributed vs. Parallel Computing
52

Parallel Computing vs. Distributed Computing:
 Parallel: shared memory system. Distributed: distributed memory system.
 Parallel: multiple processors share a single bus and memory unit. Distributed: autonomous computer nodes connected via a network.
 Parallel: communication between processors is of the order of Tbps. Distributed: communication between nodes is of the order of Gbps.
 Parallel: limited scalability. Distributed: better scalability and cheaper.
 Distributed computing in a local network is called cluster computing; distributed computing in a wide-area network is called grid computing.

School of Computer Engineering


Key terminologies used in Big Data cont’d
53

SM SD

In a shared memory (SM)


architecture, a common central
memory is shared by multiple
processors. In a shared disk (SD)
architecture, multiple processors
share a common collection of
disks while having their own
private memory.

School of Computer Engineering


Key terminologies used in Big Data cont’d
54

In a shared nothing (SN) architecture, neither memory nor disk is shared among
multiple processors.
Advantages:
 Fault Isolation: provides the benefit of isolating fault. A fault in a single
machine or node is contained and confined to that node exclusively and
exposed only through messages.
 Scalability: If the disk is a shared resource, synchronization will have to
maintain a consistent shared state and it means that different nodes will
have to take turns to access the critical data. This imposes a limit on how
many nodes can be added to the distributed shared disk system, this
compromising on scalability.

School of Computer Engineering


Key terminologies used in Big Data cont’d
55

CAP Theorem: In the past, when we wanted to store more data or increase our
processing power, the common option was to scale vertically (get more
powerful machines) or further optimize the existing code base. However, with
the advances in parallel processing and distributed systems, it is more common
to expand horizontally, or have more machines to do the same task in parallel.
However, in order to effectively pick the tool of choice like Spark, Hadoop, Kafka,
Zookeeper and Storm in Apache project, a basic idea of CAP Theorem is
necessary. The CAP theorem is also called Brewer’s Theorem. It states that a
distributed computing environment can only have 2 of the 3: Consistency,
Availability and Partition Tolerance – one must be sacrificed.
 Consistency implies that every read fetches the last write
 Availability implies that reads and write always succeed. In other words,
each non-failing node will return a response in a reasonable amount of time
 Partition Tolerance implies that the system will continue to function when
network partition occurs

School of Computer Engineering


CAP Theorem cont’d
56

The CAP theorem categorizes systems into three


categories:
CP (Consistent and Partition Tolerant) - a
system that is consistent and partition tolerant
but never available. CP is referring to a category
of systems where availability is sacrificed only in
the case of a network partition.
CA (Consistent and Available) - CA systems are
consistent and available systems in the absence
of any network partition. Often a single node's
DB servers are categorized as CA systems. Single
node DB servers do not need to deal with
partition tolerance and are thus considered CA
systems.
AP (Available and Partition Tolerant) - These are systems that are available and partition tolerant but cannot guarantee consistency.
(Source: Towards Data Science)

School of Computer Engineering


CAP Theorem Proof
57

Let's consider a very simple distributed system. Our system is composed of two servers, S1 and S2. Both of these servers keep track of the same variable, v, whose value is initially v0. S1 and S2 can communicate with each other and can also communicate with an external client.
Assume for contradiction that the system is consistent, available, and partition tolerant.
The first thing we do is partition our system, so that S1 and S2 can no longer communicate.
Next, the client requests that v1 be written to S1. Since the system is available, S1 must respond. Since the network is partitioned, however, S1 cannot replicate its data to S2. This phase of execution is called α1: S1 now holds v1 while S2 still holds v0.

School of Computer Engineering


CAP Theorem Proof cont’d

Next, the client issues a read request to S2. Again, since the system is available, S2 must respond, and since the network is partitioned, S2 cannot update its value from S1. It returns v0. This phase of execution is called α2.
S2 returns v0 to the client after the client had already written v1 to S1. This is inconsistent.
We assumed a consistent, available, partition-tolerant system existed, but we just showed that there exists an execution for any such system in which the system acts inconsistently. Thus, no such system exists.
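To make the argument tangible, here is an illustrative Python sketch (the class and variable names are invented) that simulates the two-server system above: during a partition the write to S1 cannot replicate, so a subsequent read from S2 returns the stale value, which is exactly the inconsistency the proof describes.

```python
class Server:
    def __init__(self, name, value="v0"):
        self.name = name
        self.value = value

class TinyCluster:
    """Two replicas that answer every request (always 'available')."""
    def __init__(self):
        self.s1, self.s2 = Server("S1"), Server("S2")
        self.partitioned = False

    def write(self, value):
        self.s1.value = value                 # S1 accepts the write
        if not self.partitioned:
            self.s2.value = value             # replication only works without a partition

    def read_from_s2(self):
        return self.s2.value                  # S2 answers with whatever it has

cluster = TinyCluster()
cluster.partitioned = True                    # network partition occurs
cluster.write("v1")                           # phase α1: client writes v1 to S1
print(cluster.read_from_s2())                 # phase α2: prints 'v0' -> inconsistent read
```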

School of Computer Engineering


Big Data Analytics Lifecycle
59

 Big Data analysis differs from traditional data analysis primarily due to the
volume, velocity and variety characteristics of the data being processed.
 To address the distinct requirements for performing analysis on Big Data,
a step-by-step methodology is needed to organize the activities and tasks
involved with acquiring, processing, analyzing and repurposing data.
 From a Big Data adoption and planning perspective, it is important that in
addition to the lifecycle, consideration be made for issues of training,
education, tooling and staffing of a data analytics team.
 The Big Data analytics lifecycle can be divided into the following nine
stages namely –
1. Business Case Evaluation
2. Data Identification
3. Data Acquisition & Filtering
4. Data Extraction
5. Data Validation & Cleansing
6. Data Aggregation & Representation
7. Data Analysis
8. Data Visualization
9. Utilization of Analysis Results

School of Computer Engineering


Big Data Analytics Lifecycle cont’d
60

Stage 1: Business Case Evaluation → Stage 2: Data Identification → Stage 3: Data Acquisition & Filtering → Stage 4: Data Extraction → Stage 5: Data Validation & Cleansing → Stage 6: Data Aggregation & Representation → Stage 7: Data Analysis → Stage 8: Data Visualization → Stage 9: Utilization of Analysis Results

School of Computer Engineering


1. Business Case Evaluation
61

 Before any Big Data project can be started, it needs to be clear


what the business objectives and results of the data analysis
should be.
 This initial phase focuses on understanding the project
objectives and requirements from a business perspective, and
then converting this knowledge into a data mining problem
definition.
 A preliminary plan is designed to achieve the objectives. A
decision model, especially one built using the Decision Model
and Notation standard can be used.
 Once an overall business problem is defined, the problem is
converted into an analytical problem.

School of Computer Engineering


2. Data Identification
62

 The Data Identification stage determines the origin of data.


Before data can be analysed, it is important to know what the
sources of the data will be.
 Especially if data is procured from external suppliers, it is
necessary to clearly identify what the original source of the
data is and how reliable (frequently referred to as the veracity
of the data) the dataset is.
 The second stage of the Big Data Lifecycle is very important,
because if the input data is unreliable, the output data will
also definitely be unreliable.
 Identifying a wider variety of data sources may increase the
probability of finding hidden patterns and correlations.

School of Computer Engineering


3. Data Acquisition and Filtering
63

 The Data Acquisition and Filtering Phase builds upon the


previous stage of the Big Data Lifecycle.
 In this stage, the data is gathered from different sources, both
from within the company and outside of the company.
 After the acquisition, a first step of filtering is conducted to filter
out corrupt data.
 Additionally, data that is not necessary for the analysis will be
filtered out as well.
 The filtering step will be applied on each data source individually, i.e., before the data is aggregated into the data warehouse.
 In many cases, especially where external, unstructured data is
concerned, some or most of the acquired data may be irrelevant
(noise) and can be discarded as part of the filtering process.

School of Computer Engineering


3. Data Acquisition and Filtering cont’d
64

 Data classified as “corrupt” can


include records with missing or
nonsensical values or invalid
data types. Data that is filtered
out for one analysis may possibly
be valuable for a different type of
analysis.
 Metadata can be added via
automation to data from both
internal and external data
sources to improve the
classification and querying.
 Examples of appended metadata
include dataset size and
structure, source information,
date and time of creation or
collection and language-specific
information.
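As a small illustration of this stage, the following Python sketch (the record layout, field names, and source name are invented) drops records with missing or nonsensical values from an acquired batch and then appends dataset-level metadata to aid later classification and querying.

```python
from datetime import datetime, timezone

raw_records = [
    {"user_id": "u1", "age": 34, "country": "IN"},
    {"user_id": None, "age": 29, "country": "IN"},    # missing key field -> corrupt
    {"user_id": "u3", "age": -5, "country": "US"},    # nonsensical value -> corrupt
]

def is_valid(rec):
    """Keep only records with a user id, a plausible age, and a non-empty country."""
    return bool(rec["user_id"]) and 0 <= rec["age"] <= 120 and bool(rec["country"])

clean = [r for r in raw_records if is_valid(r)]

# Append metadata (source, time of collection, size) describing the filtered batch.
batch = {
    "source": "crm_export",                             # hypothetical source name
    "collected_at": datetime.now(timezone.utc).isoformat(),
    "record_count": len(clean),
    "records": clean,
}
print(batch["record_count"], "records kept out of", len(raw_records))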
School of Computer Engineering
4. Data Extraction
65

 Some of the data identified in the two previous stages may be


incompatible with the Big Data tool that will perform the actual
analysis.
 In order to deal with this problem, the Data Extraction stage is
dedicated to extracting different data formats from data sets (e.g.
the data source) and transforming these into a format the Big
Data tool is able to process and analyse.
 The complexity of the transformation and the extent to which it is necessary to transform data are greatly dependent on the Big Data tool that has been selected.
 The Data Extraction lifecycle stage is dedicated to extracting
disparate data and transforming it into a format that the
underlying Big Data solution can use for the purpose of the data
analysis.
School of Computer Engineering
4. Data Extraction cont’d
66

 (A). Illustrates the


extraction of (A)
comments and a user
ID embedded within
an XML document
without the need for
further
transformation.
 (B). Demonstrates (B)
the extraction of the
latitude and
longitude
coordinates of a user
from a single JSON
field.
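The figures referenced above are not reproduced here; as a rough stand-in for case (B), this Python sketch (the field names and "lat,long" layout are invented) extracts latitude and longitude coordinates embedded in a single JSON field.

```python
import json

# A single field holds the coordinates as "lat,long" (hypothetical layout).
event = '{"user": "u42", "location": "20.2961,85.8245"}'

doc = json.loads(event)
lat_str, lon_str = doc["location"].split(",")
latitude, longitude = float(lat_str), float(lon_str)

print(latitude, longitude)   # 20.2961 85.8245
```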

School of Computer Engineering


5. Data Validation and Cleansing
67

 Data that is invalid leads to invalid results. In order to ensure


only the appropriate data is analysed, the Data Validation and
Cleansing stage of the Big Data Lifecycle is required.
 During this stage, data is validated against a set of
predetermined conditions and rules in order to ensure the data
is not corrupt.
 An example of a validation rule would be to exclude all persons
that are older than 100 years old, since it is very unlikely that
data about these persons would be correct due to physical
constraints.
 The Data Validation and Cleansing stage is dedicated to
establishing often complex validation rules and removing any
known invalid data.
School of Computer Engineering
5. Data Validation and Cleansing cont’d
68

 For example, as illustrated in below figure, the first value in Dataset B is


validated against its corresponding value in Dataset A.
 The second value in Dataset B is not validated against its corresponding
value in Dataset A. If a value is missing, it is inserted from Dataset A.

 Data validation can be used to examine interconnected datasets in order to


fill in missing valid data.
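A minimal Python sketch of that idea (the datasets and keys are invented): values in Dataset B are checked against Dataset A, and missing values in B are filled in from A.

```python
dataset_a = {"p1": 72, "p2": 65, "p3": 80}         # trusted reference values
dataset_b = {"p1": 72, "p2": 68, "p3": None}       # incoming values to validate

validated = {}
for key, value in dataset_b.items():
    reference = dataset_a[key]
    if value is None:
        validated[key] = reference                 # fill missing value from Dataset A
    elif value == reference:
        validated[key] = value                     # validated against Dataset A
    else:
        validated[key] = value                     # keep, but flag the mismatch for review
        print(f"warning: {key} differs from reference ({value} vs {reference})")

print(validated)   # {'p1': 72, 'p2': 68, 'p3': 80}
```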

School of Computer Engineering


6. Data Aggregation and Representation
69

 Data may be spread across multiple datasets, requiring that


datasets be joined together to conduct the actual analysis.
 In order to ensure only the correct data will be analysed in the
next stage, it might be necessary to integrate multiple datasets.
 The Data Aggregation and Representation stage is dedicated to
integrate multiple datasets to arrive at a unified view.
 Additionally, data aggregation will greatly speed up the analysis process, because the Big Data tool will not be required to join different tables from different datasets at analysis time.
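As a toy example of aggregation into a unified view, this Python sketch (the datasets and fields are invented) joins customer records from two datasets on a shared id so that the analysis stage can work against a single, unified structure.

```python
customers = [
    {"customer_id": 1, "name": "Asha"},
    {"customer_id": 2, "name": "Ravi"},
]
orders = [
    {"customer_id": 1, "amount": 250.0},
    {"customer_id": 1, "amount": 120.5},
    {"customer_id": 2, "amount": 75.0},
]

# Build a unified view: one record per customer with their total order amount.
totals = {}
for order in orders:
    totals[order["customer_id"]] = totals.get(order["customer_id"], 0) + order["amount"]

unified = [{**c, "total_spent": totals.get(c["customer_id"], 0)} for c in customers]
print(unified)
# [{'customer_id': 1, 'name': 'Asha', 'total_spent': 370.5},
#  {'customer_id': 2, 'name': 'Ravi', 'total_spent': 75.0}]
```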

School of Computer Engineering


7. Data Analysis
70

 The Data Analysis stage of the Big Data Lifecycle stage is dedicated to
carrying out the actual analysis task.
 It runs the code or algorithm that makes the calculations that will lead to
the actual result.
 Data Analysis can be simple or really complex, depending on the required
analysis type.
 In this stage the ‘actual value’ of the Big Data project will be generated. If all
previous stages have been executed carefully, the results will be factual and
correct.
 Depending on the type of analytic result required, this stage can be as
simple as querying a dataset to compute an aggregation for comparison.
 On the other hand, it can be as challenging as combining data mining and
complex statistical analysis techniques to discover patterns and anomalies
or to generate a statistical or mathematical model to depict relationships
between variables.

School of Computer Engineering


7. Data Analysis cont’d
71

 Data analysis can be classified as confirmatory analysis or exploratory


analysis, the latter of which is linked to data mining, as shown below

 Confirmatory data analysis is a deductive approach where the cause of the


phenomenon being investigated is proposed beforehand. The proposed
cause or assumption is called a hypothesis.
 Exploratory data analysis is an inductive approach that is closely associated
with data mining. No hypothesis or predetermined assumptions are
generated. Instead, the data is explored through analysis to develop an
understanding of the cause of the phenomenon.
School of Computer Engineering
8. Data Visualization
72

 The ability to analyze massive amounts of data and find useful insights
carries little value if the only ones that can interpret the results are the
analysts.
 The data visualization stage, is dedicated to using data visualization
techniques and tools to graphically communicate the analysis results for
effective interpretation by business users.
 Business users need to be able to understand the results to obtain value
from the analysis and subsequently have the ability to provide feedback.
 The results of completing the data visualization stage provide users with
the ability to perform visual analysis, allowing for the discovery of answers
to questions that users have not yet even formulated.
 The same results may be presented in a number of different ways, which
can influence the interpretation of the results. Consequently, it is important
to use the most suitable visualization technique by keeping the business
domain in context.
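Purely as an illustration (the numbers are invented and the third-party matplotlib library is assumed to be installed), the following Python sketch renders one analysis result, complaint counts per product category, as a simple bar chart that business users can interpret at a glance.

```python
import matplotlib.pyplot as plt

categories = ["Electronics", "Clothing", "Grocery", "Furniture"]
complaints = [120, 45, 30, 60]                      # hypothetical analysis output

plt.bar(categories, complaints, color="steelblue")
plt.title("Complaints per product category")
plt.ylabel("Number of complaints")
plt.tight_layout()
plt.savefig("complaints_per_category.png")          # chart file to share with business users
```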

School of Computer Engineering


8. Data Visualization cont’d
73

School of Computer Engineering


9. Utilization of Analysis Results
74

 After the data analysis has been performed and the results have been
presented, the final step of the Big Data Lifecycle is to use the results
in practice.
 The utilization of Analysis results is dedicated to determining how
and where the processed data can be further utilized to leverage the
result of the Big Data Project.
 Depending on the nature of the analysis problems being addressed,
it is possible for the analysis results to produce “models” that
encapsulate new insights and understandings about the nature of
the patterns and relationships that exist within the data that was
analyzed.
 A model may look like a mathematical equation or a set of rules.
Models can be used to improve business process logic and
application system logic, and they can form the basis of a new system
or software program.
School of Computer Engineering
Home Assignments
75

For the below problem statements, identify the tasks/activities to be performed


in each stage of the big data analytics life cycle.
 Stock Market: A small stock trading organization, wants to build a Stock
Performance System. You have been tasked to create a solution to predict
good and bad stocks based on their history. You also have to build a
customized product to handle complex queries such as calculating the
covariance between the stocks for each month.
 Health Care: A mobile health organization captures patient’s physical
activities, by attaching various sensors on different body parts. These
sensors measure the motion of diverse body parts like acceleration, the rate
of turn, magnetic field orientation, etc. You have to build a system for
effectively deriving information about the motion of different body parts like
chest, ankle, etc.

School of Computer Engineering


Home Assignments cont…
76

 Social Media: A social media marketing company which wants to expand its
business. They want to find the websites which have a low rank web page. You have
been tasked to find the low-rated links based on the user comments, likes etc.
 Retail: A retail company wants to enhance their customer experience by analysing
the customer reviews for different products. So that, they can inform the
corresponding vendors and manufacturers about the product defects and
shortcomings. You have been tasked to analyse the complaints filed under each
product & the total number of complaints filed based on the geography, type of
product, etc. You also have to figure out the complaints which have no timely
response.
 Tourism: A new company in the travel domain wants to start their business
efficiently, i.e. high profit for low TCO. They want to analyse & find the most frequent
& popular tourism destinations for their business. You have been tasked to analyse
top tourism destinations that people frequently travel & top locations from where
most of the tourism trips start. They also want you to analyze & find the destinations
with costly tourism packages.

School of Computer Engineering


Big Data And Cloud Computing
77

 Cloud computing is the use of computing resources (hardware and software)


that are delivered as a service over a network (typically the Internet). It’s a
virtualization framework.
 It is like a resource on demand, whether it be storage, computing, etc. Cloud follows a pay-per-usage model, and one needs to pay only for the amount of resource used.
 Cloud plays an important role within the big data world, by providing
horizontally expandable and optimized infrastructure that supports
practical implementation of big data.
 In cloud computing, all variety/volume of data is gathered in data centers
and then distributed to the end-users. Further, automatic backups and
recovery of data is also ensured for business continuity, all such resources
are available in the cloud.

School of Computer Engineering


Cloud Services
78

Cloud services are categorized as below:


 Infrastructure as a service (IaaS): The complete infrastructure is provided to the consumer. Maintenance-related tasks are done by the cloud provider, and the consumer can use the infrastructure as per the requirement. It can be used as both public and private. Examples are virtual machines, load balancers, and network-attached storage.
 Platform as a service (PaaS): Here the cloud provides object storage, queuing, databases, runtimes, etc., all of which can be obtained directly from the cloud provider. It is the consumer’s responsibility to configure and use them; the provider gives the consumer the resources, but connectivity to the database and other similar activities are the consumer’s responsibility. Examples are Windows Azure and Google App Engine.
 Software as a service (SaaS): The consumer uses applications running on the cloud. All infrastructure setup is the responsibility of the service provider. Examples are Dropbox, Google Drive, etc.
School of Computer Engineering
Cloud for Big Data - IaaS in cloud
79

 Using a cloud provider’s infrastructure for big data


services, gives access to almost limitless storage and
compute power.
 IaaS can be utilized by enterprise customers to create cost-
effective and easily scalable IT solutions where cloud
providers bear the complexities and expenses of managing
the underlying hardware.
 If the scale of a business customer’s operations fluctuates,
or they are looking to expand, they can tap into the cloud
resource as and when they need it rather than purchase,
install and integrate hardware themselves.

School of Computer Engineering


Cloud for Big Data – PAAS in cloud
80

 PaaS vendors incorporate big data technologies such as


Hadoop and MapReduce into PaaS offerings, which
eliminates the need to deal with the complexities of managing individual software and hardware elements.
 For example, web developers can use individual PaaS
environments at every stage of development, testing and
ultimately hosting their websites.
 However, businesses that are developing their own
internal software can also utilize PaaS , particularly to
create distinct ring-fenced development and testing
environments.

School of Computer Engineering


Cloud for Big Data – SaaS in cloud
81

 Many organizations feel the need to analyze the customer’s


voice, especially on social media. SaaS vendors provide the
platform for the analysis as well as the social media data.
 Office software is the best example of businesses utilizing SaaS.
Tasks related to accounting, sales, invoicing, and planning can
all be performed through SaaS. Businesses may wish to use one
piece of software that performs all of these tasks or several that
each performs different tasks.
 The software can be subscribed through the Internet and then
accessed online via any computer in the office using a username
and password. If needed, they can switch to software that
fulfills their requirements in a better manner.

School of Computer Engineering


82

School of Computer Engineering


Appendix
83

 Data Mining: Data mining is the process of looking for hidden, valid, and
potentially useful patterns in huge data sets. Data Mining is all about
discovering unsuspected/previously unknown relationships amongst the
data. It is a multi-disciplinary skill that uses machine learning, statistics,
AI and database technology.
 Natural Language Processing (NLP): NLP gives the machines the ability
to read, understand and derive meaning from human languages.
 Text Analytics (TA): TA is the process of extracting meaning out of text.
For example, this can be analyzing text written by customers in a
customer survey, with the focus on finding common themes and trends.
The idea is to be able to examine the customer feedback to inform the
business on taking strategic action, in order to improve customer
experience.
 Noisy text analytics: It is a process of information extraction whose goal
is to automatically extract structured or semi-structured information from
noisy unstructured text data.
School of Computer Engineering
Appendix cont…
84

Example of Data Volumes


Unit | Value | Example
Kilobytes (KB) | 1,000 bytes | a paragraph of a text document
Megabytes (MB) | 1,000 Kilobytes | a small novel
Gigabytes (GB) | 1,000 Megabytes | Beethoven’s 5th Symphony
Terabytes (TB) | 1,000 Gigabytes | all the X-rays in a large hospital
Petabytes (PB) | 1,000 Terabytes | half the contents of all US academic research libraries
Exabytes (EB) | 1,000 Petabytes | about one fifth of the words people have ever spoken
Zettabytes (ZB) | 1,000 Exabytes | as much information as there are grains of sand on all the world’s beaches
Yottabytes (YB) | 1,000 Zettabytes | as much information as there are atoms in 7,000 human bodies

School of Computer Engineering


Appendix cont…
85

 Enterprise Data Warehouse: An enterprise data warehouse (EDW) is a


database, or collection of databases, that centralizes a business's
information from multiple sources and applications, and makes it
available for analytics and use across the organization. EDWs can be
housed in an on-premise server or in the cloud. The data stored in this
type of digital warehouse can be one of a business’s most valuable assets,
as it represents much of what is known about the business, its employees,
its customers, and more.
 Online Transactional Processing (OLTP): It is a category of data
processing that is focused on transaction-oriented tasks. OLTP typically
involves inserting, updating, and/or deleting small amounts of data in a
database. OLTP mainly deals with large numbers of transactions by a large
number of users.

School of Computer Engineering


Appendix cont…
86

ETL: ETL is short for extract, transform, load, three database functions that are
combined into one tool to pull data out of one database and place it into another
database.
 Extract is the process of reading data from a database. In this stage, the data is
collected, often from multiple and different types of sources.
 Transform is the process of converting the extracted data from its previous form
into the form it needs to be in so that it can be placed into another database.
Transformation occurs by using rules or lookup tables or by combining the data
with other data.
 Load is the process of writing the data into the target database.
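A compact, illustrative ETL sketch in Python (the inline CSV, table name, and transformation rules are all hypothetical): it extracts rows from a CSV source, transforms them by normalizing names and skipping rows with non-numeric amounts, and loads the result into a SQLite target database.

```python
import csv
import io
import sqlite3

# Extract: read rows from a source (an inline CSV string stands in for a real file).
source_csv = "id,name,amount\n1, asha ,250\n2,RAVI,abc\n3,Meera,75\n"
rows = list(csv.DictReader(io.StringIO(source_csv)))

# Transform: clean names and drop rows whose amount is not numeric.
transformed = []
for row in rows:
    try:
        amount = float(row["amount"])
    except ValueError:
        continue                                  # skip corrupt row
    transformed.append((int(row["id"]), row["name"].strip().title(), amount))

# Load: write the cleaned rows into the target database.
target = sqlite3.connect(":memory:")
target.execute("CREATE TABLE sales (id INTEGER, name TEXT, amount REAL)")
target.executemany("INSERT INTO sales VALUES (?, ?, ?)", transformed)
print(target.execute("SELECT * FROM sales").fetchall())
# [(1, 'Asha', 250.0), (3, 'Meera', 75.0)]
```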

School of Computer Engineering


Practice Questions
87

1. You are planning the marketing strategy for a new product in your
company. Identify and list some limitations of structured data
related to this work.
2. In what ways does analyzing Big Data help organizations prevent
fraud?
3. Discuss the techniques of parallel computing.
4. Discuss the features of cloud computing that can be used to handle
Big Data.
5. Discuss similarities and differences between ELT and ETL.
6. It is impossible for a web service to provide following three
guarantees at the same time i.e., consistency, availability and
partition-tolerance. Justify it with suitable explanation.
7. Hotel Booking: are we double-booking the same room? Justify this
statement with CAP theorem.

School of Computer Engineering


Practice Questions cont…
88

8. With the emergence of new technologies, new academic trends are introduced into the educational system, which results in large amounts of unregulated data. It is also a challenge for students to prefer those academic courses which are helpful in their industrial training and increase their career prospects. Another challenge is to convert the unregulated data into structured and meaningful information. Develop a tool that will be helpful in decision making for students to determine courses chosen for industrial training. Derive preferable courses for pursuing training for students based on course combinations.
9. You have to analyze the Aadhaar card data set against different research queries, for example, the total number of Aadhaar cards approved by state, rejected by state, the total number of Aadhaar card applicants by gender, and the total number of Aadhaar card applicants by age type, with visual depiction. How do Big Data and Cloud Computing interlink in such a case?

School of Computer Engineering


Practice Questions cont…
89

10. Consider an online bookstore OLTP model with the entities and attributes
as follows.
Publisher (PUBLISHER_ID, NAME)
Subject (SUBJECT_ID, NAME)
Author (AUTHOR_ID, NAME)
Publication (PUBLICATION_ID, SUBJECT_ID (FK), AUTHOR_ID (FK), TITLE)
Edition (PUBLISHER_ID (FK), PUBLICATION_ID (FK), PRINT_DATE, PAGES, PRICE, FORMAT)
Review (REVIEW_ID, PUBLICATION_ID, (FK), REVIEW_DATE, TEXT)
Draw the equivalent OLAP conceptual, logical, and physical data model.

School of Computer Engineering
