Big Data

Steve Watt – Emerging Technologies @ HP

1 – Someday Soon (Flickr)
2 – timsnell (Flickr)
Agenda

Hardware → Software → Data

• Big Data
• Situational Applications

3
Situational Applications

4 – eaghra (Flickr)
Web 2.0 Era Topic Map

[Topic-map diagram: Inexpensive Storage drives the Data Explosion (Produce/Process); Social Platforms and Publishing Platforms (LAMP) feed Web 2.0; the Enterprise brings SOA; Web 2.0 + SOA yield Mashups, which lead to Situational Applications]

5
6
Big Data

7 – blmiers2 (Flickr)
The data just keeps growing…

1024 GIGABYTES = 1 TERABYTE
1024 TERABYTES = 1 PETABYTE
1024 PETABYTES = 1 EXABYTE

1 PETABYTE = 13.3 years of HD video
20 PETABYTES = amount of data processed by Google daily
5 EXABYTES = all words ever spoken by humanity
Mobile – App Economy for Devices: an app for this, an app for that; set-top boxes, tablets, etc.; multiple sensors in your pocket

Sensor Web – an instrumented and monitored world; real-time data

The Fractured Web – Facebook, Twitter, LinkedIn, Google, Netflix, New York Times, eBay, Pandora, PayPal

Service Economy – a service for this, a service for that

Opportunity – the Web 2.0 data exhaust of historical and real-time data

Web as a Platform:
Web 2.0 – Connecting People (API Foundation)
Web 1.0 – Connecting Machines (Infrastructure)

9
Data Deluge! But filter patterns can help…

10 – Kakadu (Flickr)
Filtering With Search

11

Filtering Socially

Awesome

12

Filtering Visually

13
But filter patterns force you down a pre-processed path

– M.V. Jantzen (Flickr)
What if you could ask your own questions?

     15
– wowwzers (Flickr)
And go from discovering Something about Everything…

– MrB-MMX (Flickr)
To discovering Everything about Something?

17
How do we do this?

Let's examine a few techniques for
Gathering,
    Storing,
        Processing &
            Delivering Data @ Scale

18
Gathering Data

Data Marketplaces




 19
20
21
Gathering Data

Apache Nutch
(Web Crawler)




 22
Storing, Reading and Processing – Apache Hadoop

• Cluster technology with a single master that scales out to multiple slaves
• It consists of two runtimes:
    • The Hadoop Distributed File System (HDFS)
    • Map/Reduce
• As data is copied onto HDFS, it is split into blocks and replicated to other machines to provide redundancy
• A self-contained job (workload) is written in Map/Reduce and submitted to the Hadoop master, which in turn distributes the job to each slave in the cluster
• Jobs run on data that is on the local disks of the machines they are sent to, ensuring data locality
• Node (slave) failures are handled automatically by Hadoop, which may execute or re-execute a job on any node in the cluster

Want to know more? “Hadoop – The Definitive Guide (2nd Edition)”

23
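The Map/Reduce contract described above can be sketched in plain Python (a toy simulation, not Hadoop's actual Java API): the framework sorts mapper output by key and hands each key's values to the reducer, which is what the `sorted`/`groupby` step mimics here.

```python
from itertools import groupby

def map_words(line):
    """Map phase: emit a (word, 1) pair for every word in one input line."""
    return [(word.lower(), 1) for word in line.split()]

def reduce_counts(pairs):
    """Reduce phase: group pairs by word (after sorting) and sum the counts."""
    return {word: sum(count for _, count in group)
            for word, group in groupby(sorted(pairs), key=lambda p: p[0])}

lines = ["Big Data keeps growing", "Big Data at scale"]
pairs = [pair for line in lines for pair in map_words(line)]
counts = reduce_counts(pairs)
print(counts)
```

In a real cluster the sort-and-group happens in Hadoop's shuffle phase between the two functions, and each phase runs in parallel across the slaves.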
Delivering Data @ Scale

• Structured data
• Low latency & random access
• Column stores (Apache HBase or Apache Cassandra)
    • faster seeks
    • better compression
    • simpler scale-out
    • de-normalized – data is written as it is intended to be queried

Want to know more? “HBase – The Definitive Guide” & “Cassandra High Performance”

24
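The "written as it is intended to be queried" point can be illustrated with a toy key-value model. This is not the HBase or Cassandra API, and the composite "sector#year" row key is a hypothetical design; it just shows why a de-normalized layout turns a query into a single seek.

```python
# Toy model of a de-normalized, query-first layout: rows are keyed by
# how they will be read, so a key lookup replaces a join.
table = {}

def put(row_key, column, value):
    table.setdefault(row_key, {})[column] = value

def get_row(row_key):
    return table.get(row_key, {})

# Writes are shaped like the query we intend to run later:
put("biotech#2011", "cambridge", 1_300_000_000)
put("biotech#2011", "san_diego", 1_100_000_000)

# "All 2011 biotech investments" is now one row read, no join:
print(get_row("biotech#2011"))
```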
Storing, Processing & Delivering: Hadoop + NoSQL

[Pipeline diagram:]

Gather:
• Web data – Nutch crawl, then copy onto HDFS
• Log files – Flume connector
• Relational data (JDBC), e.g. MySQL – Sqoop connector

Read/Transform (Apache Hadoop on HDFS):
• Clean and filter data
• Transform and enrich data
• Often multiple Hadoop jobs

Low-latency query/serve:
• NoSQL connector/API loads results into a NoSQL repository
• The application queries and serves from the repository

25
Some things to keep in mind…

26 – Kanaka Menehune (Flickr)
Some things to keep in mind…

• Processing arbitrary types of data (unstructured, semi-structured, structured) requires normalizing data with many different kinds of readers. Hadoop is really great at this!
• However, readers won't really help you process truly unstructured data such as prose. For that you're going to have to get handy with Natural Language Processing. But this is really hard. Consider using parsing services & APIs like Open Calais.

Want to know more? “Programming Pig” (O’REILLY)

27
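A minimal sketch of the "many kinds of readers" idea: two readers that normalize CSV and JSON-lines input into one common tuple shape, so downstream jobs see a single format. The field names and sample rows here are hypothetical, not from the talk's dataset.

```python
import csv
import io
import json

def read_csv(text):
    """Reader for comma-separated records -> (company, city, amount)."""
    for row in csv.reader(io.StringIO(text)):
        yield (row[0], row[1], int(row[2]))

def read_json_lines(text):
    """Reader for one-JSON-object-per-line records -> same tuple shape."""
    for line in text.splitlines():
        rec = json.loads(line)
        yield (rec["company"], rec["city"], int(rec["amount"]))

csv_data = "Acme,Austin,1000000"
json_data = '{"company": "Beta", "city": "Boston", "amount": 2000000}'
records = list(read_csv(csv_data)) + list(read_json_lines(json_data))
print(records)
```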
Open Calais (Gnosis)




28
Statistical real-time decision making

• Capture historical information
• Use machine learning to build decision-making models (such as classification, clustering & recommendation)
• Mesh real-time events (such as sensor data) against the models to make automated decisions

Want to know more? “Mahout in Action”

29
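The three steps above can be sketched end to end. This is a toy stand-in for Mahout, not Mahout itself: the "model" is just one mean per label, and the sensor readings and labels are made up.

```python
def train(history):
    """Build a model from historical (value, label) pairs: one mean per label."""
    sums, counts = {}, {}
    for value, label in history:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(model, value):
    """Score a real-time event: pick the label whose mean is closest."""
    return min(model, key=lambda label: abs(model[label] - value))

# Step 1: captured historical information (hypothetical sensor data).
history = [(20.1, "normal"), (21.0, "normal"), (85.0, "overheat"), (90.2, "overheat")]
# Step 2: build the decision-making model.
model = train(history)
# Step 3: mesh a live reading against the model for an automated decision.
print(classify(model, 88.0))
```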
30
– Pascal Terjan (Flickr)
31
32
Using Apache Nutch

Identify optimal seed URLs for a seed list and crawl to a depth of 2.

For example:

http://www.crunchbase.com/companies?c=a&q=private_held
http://www.crunchbase.com/companies?c=b&q=private_held
http://www.crunchbase.com/companies?c=c&q=private_held
http://www.crunchbase.com/companies?c=d&q=private_held
...

Crawl data is stored in sequence files in the segments directory on HDFS.

33
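Generating the full a–z seed list is a one-liner; a sketch (the URL pattern is the one on the slide, and where you write the file for Nutch is up to your setup):

```python
import string

# One seed URL per starting letter, matching the slide's pattern.
BASE = "http://www.crunchbase.com/companies?c={letter}&q=private_held"

def seed_urls():
    return [BASE.format(letter=letter) for letter in string.ascii_lowercase]

for url in seed_urls()[:4]:
    print(url)
```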
34
Making the data STRUCTURED

• Retrieving HTML
• Prelim filtering on URL
• Company POJO, then tab-delimited ('\t') output

35
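The three steps above can be sketched like this. The talk's actual implementation used a Java POJO inside a Hadoop job; the regexes, URL filter, and sample page below are hypothetical stand-ins.

```python
import re

def extract(url, html):
    """Turn one crawled (url, html) record into a tab-delimited line."""
    # Step 2: preliminary filter on the URL -- keep only company pages.
    if "/companies" not in url:
        return None
    # Step 3: pull fields out of the retrieved HTML (step 1) and emit
    # them tab-delimited, as the slide's "/t out" describes.
    name = re.search(r"<h1>(.*?)</h1>", html)
    city = re.search(r'class="city">(.*?)<', html)
    if not (name and city):
        return None
    return f"{name.group(1)}\t{city.group(1)}"

record = extract("http://www.crunchbase.com/companies/acme",
                 '<h1>Acme</h1> <span class="city">Austin</span>')
print(record)
```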
Aargh! My viz tool requires zip codes to plot geospatially!

36
Apache Pig Script to Join on City to get Zip Code and Write the Results to Vertica

ZipCodes = LOAD 'demo/zipcodes.txt' USING PigStorage('\t') AS (State:chararray, City:chararray, ZipCode:int);

CrunchBase = LOAD 'demo/crunchbase.txt' USING PigStorage('\t') AS (Company:chararray, City:chararray, State:chararray, Sector:chararray, Round:chararray, Month:int, Year:int, Investor:chararray, Amount:int);

CrunchBaseZip = JOIN CrunchBase BY (City, State), ZipCodes BY (City, State);

STORE CrunchBaseZip INTO '{CrunchBaseZip(Company varchar(40), City varchar(40), State varchar(40), Sector varchar(40), Round varchar(40), Month int, Year int, Investor varchar(40), Amount int)}' USING com.vertica.pig.VerticaStorer('VerticaServer','OSCON','5433','dbadmin','');
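For readers without a Pig/Vertica environment handy, the heart of the script, the join on (City, State), can be sketched in plain Python. The sample rows are hypothetical; Pig runs the same idea as a distributed Hadoop job.

```python
# Hypothetical sample rows matching the script's two schemas.
zipcodes = [("TX", "Austin", 78701), ("MA", "Boston", 2108)]          # (State, City, ZipCode)
crunchbase = [("Acme", "Austin", "TX", "Software", 5000000)]          # (Company, City, State, ...)

# Index zip codes by the join key (City, State).
zip_index = {(city, state): zipcode for state, city, zipcode in zipcodes}

# Inner join: attach the zip code to each CrunchBase row with a match.
joined = [row + (zip_index[(row[1], row[2])],)
          for row in crunchbase
          if (row[1], row[2]) in zip_index]
print(joined)
```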
Total Tech Investments By Year
Investment Funding By Sector
Total Investments By Zip Code for all Sectors

• $7.3 Billion in San Francisco
• $2.9 Billion in Mountain View
• $1.7 Billion in Austin
• $1.2 Billion in Boston

40
Total Investments By Zip Code for Consumer Web

• $1.7 Billion in San Francisco
• $1.2 Billion in Chicago
• $600 Million in Seattle

41
Total Investments By Zip Code for BioTech

• $1.3 Billion in Cambridge
• $1.1 Billion in San Diego
• $528 Million in Dallas

42
Questions?

Steve Watt
swatt@hp.com
@wattsteve
stevewatt.blogspot.com

43



Editor's Notes

  • #3: As hardware becomes increasingly commoditized, the margin and differentiation move to software; as software becomes increasingly commoditized, the margin and differentiation move to data. 2000 – cloud as an IT sourcing alternative (virtualization extends into cloud). Explosion of unstructured data. Mobile. "Let's create a context in which to think…" Focused on three major tipping points in the evolution of the technology. Mention that this is a very web-centric view, contrasted with Barry Devlin's enterprise view. Assumes networking falls under Hardware and cloud sits at the intersection of Software and Data. Why should you care? Tipping point 1: Situational Applications. Tipping point 2: Big Data. Tipping point 3: Reasoning.
  • #5: Web 2.0 (information explosion; many channels now turn consumers into producers (Shirky); tipping point: web standards allow rapid application development; advent of situational applications; folksonomies; social). SOA (functionality exposed through open interfaces and open standards; great strides in modularity and re-use whilst reducing the complexity of system integration; but you still need to be a developer to create applications from these service interfaces – WSDL and SOAP are way too complex! Enter mashups…). Mashups (place a façade on the service and you have the final step in the evolution of services and service-based applications; now anyone can build applications, i.e. non-programmers – we've taken the entire SOA library and exposed it to non-programmers. What do I mean? Check out this YouTunes app…). The first example where we saw arbitrary data/content re-purposed in ways the original authors never intended – e.g. Craigslist/Gumtree homes for sale scraped, placed on a Google map, and mashed up with crime statistics. The whole is greater than the sum of its parts -> new kinds of information! BUT there are limits on how much arbitrary data can be scraped and turned into information: usually no pre-processing, and only what can be rendered on a single page. Demo.
  • #6: http://www.housingmaps.com/
  • #7: "Every 2 days we create as much data as we did from the dawn of humanity until 2003." We've hit the petabyte and exabyte age. What does that mean? Let's look (next slide).
  • #8: Mention enterprise growth over time, mobile/sensor data, Web 2.0 data exhaust, social networks. Advances in analytics – keep your data around for deeper business insights and to avoid enterprise amnesia.
  • #9: How about we summarize a few of the key trends in the Web as we know it today… This diagram shows some of the main trends of what Web 3.0 is about. Netflix accounts for 29.7% of US traffic. Mention the Web 2.0 Summit "points of control". Having more data leads to better context, which leads to deeper understanding, insight, or new discoveries. Refer to Reid Hoffman's views on what Web 3.0 is.
  • #11: Pre-processed, though – not flexible; you can't ask specific questions that have not been pre-processed.
  • #12: Mention folksonomies in Web 2.0 with searching Delicious bookmarks. Mention the Chilean earthquake crisis video – using Twitter to do crisis mapping.
  • #13: Talk about visualizations and infographics – manual and a lot of work.
  • #14: They are only part of the solution and don't allow you to ask your own questions.
  • #16: This is the real promise of Big Data.
  • #17: These are not all the problems around Big Data; these are the bigger problems around deriving new information out of web data. There are other issues as well, like inconsistency, skew, etc.
  • #18: Give a Nutch example.
  • #19: Specifically call out the color-coding reasoning for Map/Reduce and HDFS as a single distributed service.
  • #20: Give examples of how one might use Open Calais or entity-extraction libraries.