Exploring EMC Isilon scale-out storage solutions

Hadoop’s Rise
in Life Sciences
By John Russell, Contributing Editor, Bio•IT World




Produced by Cambridge Healthtech Media Group
By now the ‘Big Data’ challenge is familiar to the entire life sciences
community. Modern high-throughput experimental technologies generate
vast data sets that can only be tackled with high performance computing
(HPC). Genomics, of course, is the leading example. At the end of 2011,
global annual sequencing capacity was estimated at 13 quadrillion
bases and growing rapidly1. It’s worth noting that a single base pair typically
represents about 100 bytes of data (raw, analyzed, and interpreted).

The need to manage and analyze these massive data sets, not just in life
sciences but throughout all of science and industry, has spurred many new
approaches to HPC infrastructure and led to many important IT advances,
particularly in distributed computing. While there isn’t a single right
answer, one approach – the Hadoop storage and compute framework – is
emerging as a compelling contender for use in life sciences to cope with the
deluge of data.
Created in 2004 by Doug Cutting (who famously named it after his son’s
stuffed elephant) and elevated to a top-level Apache Foundation project
in 2008, Hadoop is intended to run large-scale distributed data analysis
on commodity clusters. Cutting was initially inspired by a paper2 from
Google Labs describing Google’s MapReduce framework and the
distributed infrastructure beneath it. (For a detailed perspective see
Ronald Taylor’s An overview of the Hadoop/MapReduce/HBase framework
and its current applications in bioinformatics.3)

Broadly, Hadoop uses a file system (the Hadoop Distributed File System,
HDFS) and framework software (MapReduce) to break extremely large
data sets into chunks, to distribute/store (Map) those chunks to nodes in
a cluster, and to gather (Reduce) results following computation. Hadoop’s
distinguishing feature is that it automatically stores the chunks of data on
the same nodes on which they will be processed. This strategy of co-locating
data and processing power (proximity computing) significantly accelerates
performance: in April 2008 a Hadoop program, running on a 910-node
cluster, broke a world record, sorting a terabyte of data in less than
3.5 minutes.4
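To make the Map and Reduce steps concrete, here is a minimal sketch against the stock Apache Hadoop Java API applied to a toy life sciences task: counting k-mers (length-k substrings) across a large set of sequence reads. The assumption that input arrives as one read per line, and the choice K = 8, are illustrative rather than drawn from this paper; a driver that packages these classes as a “job” is sketched after the discussion of the programming model below.

    import java.io.IOException;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // A toy MapReduce job: count every k-mer (length-K substring) across a
    // large set of sequence reads. Hadoop runs one mapper per HDFS chunk,
    // on a node that already holds that chunk (proximity computing).
    public class KmerCount {
      static final int K = 8;  // illustrative k-mer length

      // Map: input key = byte offset in the file, input value = one read.
      // Output: (k-mer, 1) pairs, which the shuffle groups by k-mer.
      public static class KmerMapper
          extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text kmer = new Text();

        @Override
        protected void map(LongWritable offset, Text read, Context ctx)
            throws IOException, InterruptedException {
          String seq = read.toString().trim();
          for (int i = 0; i + K <= seq.length(); i++) {
            kmer.set(seq.substring(i, i + K));
            ctx.write(kmer, ONE);
          }
        }
      }

      // Reduce: all counts for a given k-mer arrive together; sum them.
      public static class SumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text kmer, Iterable<IntWritable> counts,
                              Context ctx)
            throws IOException, InterruptedException {
          int total = 0;
          for (IntWritable c : counts) total += c.get();
          ctx.write(kmer, new IntWritable(total));
        }
      }
    }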




1	 “DNA Sequencing Caught in Deluge of Data”, New York Times, Nov. 30, 2011, http://www.nytimes.com/2011/12/01/business/dna-sequencing-caught-in-deluge-of-data.html?_r=1&ref=science

2	 Jeffrey Dean and Sanjay Ghemawat, “MapReduce: Simplified Data Processing on Large Clusters”, OSDI’04: Sixth Symposium on Operating System Design and Implementation, San Francisco, CA, December 2004, http://research.google.com/archive/mapreduce.html

3	 Ronald Taylor, “An overview of the Hadoop/MapReduce/HBase framework and its current applications in bioinformatics”, http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3040523/

4	 “Hadoop Wins Terabyte Sort Benchmark”, Apr. 2008, http://sortbenchmark.org/YahooHadoop.pdf, last accessed Dec. 2011



Part of the improved performance stems from MapReduce’s key:value
programming model, which speeds up and scales up parallelized “job”
execution better than many alternatives, such as the GridEngine
architecture for high performance computing (HPC). (One of the earliest
use-cases of the Sun GridEngine5 was the BLAST DNA sequence
comparison search.) The MapReduce layer is a batch query processor with
a dynamic data schema and linear scaling for unstructured or semi-
structured data. Its data is not “normalized” (decomposed into smaller
structured relationships). Higher-level interpreted programming languages
like Ruby and Python, as well as a compiled language like C++, therefore
provide relatively easy ways to represent a program as MapReduce “jobs”.
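Continuing the hypothetical k-mer example from above, the short driver below sketches how a program is represented as a MapReduce “job” through the stock Apache Hadoop 2.x Java API: the key:value types are declared explicitly, and input and output are HDFS paths. Interpreted languages such as Python or Ruby typically reach the same machinery through Hadoop Streaming, which pipes the key:value pairs through an external script’s standard input and output. This is a sketch, not code from the paper.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Driver: declares the shape of the MapReduce "job" and submits it.
    public class KmerCountDriver {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "kmer-count");
        job.setJarByClass(KmerCountDriver.class);
        job.setMapperClass(KmerCount.KmerMapper.class);    // Map step
        job.setCombinerClass(KmerCount.SumReducer.class);  // local pre-aggregation
        job.setReducerClass(KmerCount.SumReducer.class);   // Reduce step
        job.setOutputKeyClass(Text.class);                 // key type:   k-mer
        job.setOutputValueClass(IntWritable.class);        // value type: count
        FileInputFormat.addInputPath(job, new Path(args[0]));    // reads in HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // results to HDFS
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }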

Standard Hadoop interfaces are available via Java, C, FUSE and WebDAV.
The Hadoop R (statistical language) interface, RHIPE, is also popular in the
life sciences community.

It turns out that Hadoop – a fault-tolerant, share-nothing architecture
in which tasks must have no dependence on each other – is an
excellent choice for many life sciences applications. This is largely
because so much life sciences data is semi-structured or unstructured
file-based data, which is ideally suited for ‘embarrassingly parallel’
computation. Moreover, the use of commodity hardware (e.g., a Linux
cluster) keeps costs down, and little or no hardware modification is
required6.

Not surprisingly, life sciences organizations were among Hadoop’s
earliest adopters. The first large-scale MapReduce project was
initiated by the Broad Institute (in 2008) and resulted in the
comprehensive Genome Analysis Toolkit (GATK)7. The Hadoop-based
Crossbow project from Johns Hopkins University came soon after8.




5	 Altschul SF, et al, “Basic local alignment search tool”, J Mol Biol 215 (3): 403–410, October 1990.

6	 An overview of the Hadoop/MapReduce/HBase framework and its current applications in bioinformatics, http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3040523/

7	 McKenna A, et al, “The Genome Analysis Toolkit: A MapReduce framework for analyzing next-generation DNA sequencing data”, Genome Research, 20:1297–1303, July 2010.

8	 http://bowtie-bio.sourceforge.net/crossbow/index.shtml


Here are a few current Hadoop-based bioinformatics applications9:
   •	 Crossbow. Whole genome resequencing analysis; SNP
       genotyping from short reads.

   •	 Contrail. De novo assembly from short sequencing reads.

   •	 Myrna. Ultrafast short read alignment and differential gene
       expression from large RNA-seq data sets.

   •	 PeakRanger. Cloud-enabled peak caller for ChIP-seq data.

   •	 Quake. Quality-aware detection and sequencing error
       correction tool.

   •	 BlastReduce. High-performance short read mapping.

   •	 CloudBLAST. Hadoop implementation of NCBI’s BLAST.

   •	 MrsRF. Algorithm for analyzing large evolutionary trees.
(For a more detailed example of Hadoop in operation, see the sidebar,
Genomics Example: Calling SNPs with Crossbow.)


   Genomics Example: Calling SNPs with Crossbow
   Next-generation sequencers (NGS) like the Illumina HiSeq can produce on the
   order of 200 billion base pairs (200 Gbp) in a single one-week run at 60x human
   genome coverage, which means that each base is present in an average of
   60 reads. The larger the coverage, the more statistically significant the result.
   NGS reads are much shorter than traditional “Sanger” sequencing reads, so this
   data requires specialized software algorithms called “short read aligners”.
   Crossbow is a combination of several algorithms that provide short read
   alignment and SNP calling, two common tasks in NGS analysis. Figure 1
   outlines the steps necessary to process genome data to look for SNPs. The
   Map-Sort-Reduce process is ideally suited to a Hadoop framework. The cluster
   as shown is a traditional N-node Hadoop cluster; all of the Hadoop features,
   like HDFS, program management and fault tolerance, are available.
   The Map step is the short read alignment algorithm, called Bowtie (named
   after the Burrows-Wheeler Transform, BWT). Multiple instances of Bowtie
   run in parallel in Hadoop. The input tuples (ordered lists of elements) are the
   sequence reads and the output tuples are the alignments of the short reads.
   The Sort step apportions the alignments according to a primary key (the
   genome partition) and sorts them on a secondary key (the offset within
   that partition). The data here are the sorted alignments.
   The Reduce step calls SNPs for each reference genome partition. Many
   parallel instances of the algorithm SOAPsnp (Short Oligonucleotide Analysis
   Package for SNP) run in the cluster. Input tuples are the sorted alignments for a
   partition and the output tuples are SNP calls. Results are stored via HDFS, and
   then archived in SOAPsnp format.
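The sidebar’s Map-Sort-Reduce flow can be sketched against the same Hadoop Java API. The sketch below is illustrative only: Aligner, Alignment and SoapSnp are hypothetical wrappers standing in for the real Bowtie and SOAPsnp executables (which Crossbow drives as external programs), and the in-memory sort inside the reducer stands in for the secondary-sort (partitioner/comparator) machinery that a production job would use.

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    public class SnpPipeline {
      // Map: align one read and key the alignment by the reference-genome
      // partition it falls in; the offset within the partition rides along
      // in the value as the (logical) secondary key.
      public static class AlignMapper
          extends Mapper<LongWritable, Text, IntWritable, Text> {
        @Override
        protected void map(LongWritable off, Text read, Context ctx)
            throws IOException, InterruptedException {
          Alignment a = Aligner.align(read.toString()); // hypothetical Bowtie wrapper
          ctx.write(new IntWritable(a.partition),       // primary key: partition
                    new Text(a.offset + "\t" + a.record));
        }
      }

      // Reduce: one invocation per genome partition; sort the alignments by
      // offset, then let a SOAPsnp wrapper emit SNP calls for the partition.
      public static class SnpReducer
          extends Reducer<IntWritable, Text, IntWritable, Text> {
        @Override
        protected void reduce(IntWritable partition, Iterable<Text> alignments,
                              Context ctx)
            throws IOException, InterruptedException {
          List<String> sorted = new ArrayList<>();
          for (Text t : alignments) sorted.add(t.toString());
          sorted.sort((x, y) -> Long.compare(offsetOf(x), offsetOf(y)));
          for (String snp : SoapSnp.call(partition.get(), sorted)) // hypothetical
            ctx.write(partition, new Text(snp));
        }
        private static long offsetOf(String rec) {
          return Long.parseLong(rec.substring(0, rec.indexOf('\t')));
        }
      }
    }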



9	 “Got Hadoop?”, Genome Technology, Sept. 2011, http://www.genomeweb.com/informatics/got-hadoop


After several years of steady development in academic environments,
Hadoop is now poised for rapid commercialization and broader
uptake in biopharma and healthcare. Early adoption has been
strongest among next generation sequencing (NGS) centers, where
NGS workflows can generate 2 terabytes (TB) of data per run per
week per sequencer – and that’s not including the raw images. For these
organizations, the need for scale-out storage that integrates with
HPC is a line-item requirement.
EMC® Isilon®, long a leader in scale-out NAS storage solutions,
understands these challenges and has provided scale-out storage
for the workflows of nearly all the DNA sequencer instrument
manufacturers in the market today, at more than 150 customers.
Since 2008, the EMC Isilon OneFS® storage platform has grown to an
overall installed base of more than 65 petabytes (PB). Recently, EMC
introduced the industry’s first scale-out NAS system with native
Hadoop support (via HDFS).

The EMC Isilon OneFS file system now provides connectivity to the
Hadoop Distributed File System (HDFS) just like any other shared
file system protocol: NFS, CIFS or SMB10. This allows the storage to
be co-located with its compute nodes, which use the standard
higher-level Java application programming interface (API) to build
MapReduce “jobs”. EMC has gone one step further by combining its
OneFS-based NAS solution with EMC Greenplum® HD, a powerful
analytics platform, to create a Hadoop appliance. Together, the two
offerings relieve users of the burden of cobbling together various open
source Hadoop components, which sometimes proves problematic.
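As a sketch of what HDFS-as-a-protocol means for a client, the snippet below points a stock Hadoop Java client at an Isilon cluster and lists a directory. The SmartConnect hostname “isilon-sc”, the port and the path are hypothetical; “fs.defaultFS” is the Hadoop 2.x name of the setting (older releases call it “fs.default.name”); none of this is taken from the EMC white paper.

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Client-side sketch: because OneFS speaks the HDFS wire protocol, a
    // stock Hadoop client is simply pointed at the Isilon cluster; the
    // compute cluster needs no NameNode or DataNodes of its own for storage.
    public class IsilonHdfsClient {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://isilon-sc:8020");  // hypothetical host

        FileSystem fs = FileSystem.get(URI.create("hdfs://isilon-sc:8020"), conf);
        for (FileStatus s : fs.listStatus(new Path("/sequencing/runs"))) {
          System.out.println(s.getPath() + "\t" + s.getLen());  // name, bytes
        }
        fs.close();
      }
    }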

“Hadoop meets all the tenets of Jim Gray’s Laws of Data
Engineering11, which have not changed in 15 years,” says Sanjay
Joshi, CTO, Life Sciences, EMC Isilon Storage Division. Those tenets
include: scientific computing is very data intensive, with no real
limits; the solution is a scale-out architecture with distributed data
access; and bring the computation to the data, rather than the data
to the computation.




10	 Hadoop on EMC Isilon Scale Out NAS: EMC White Paper, Part Number h10528
11	 From Jim Gray, “Scalable Computing”, presentation at Nortel: Microsoft Research, April 1999


“Isilon built the industry’s first scale-out storage architecture. Now
with its native and enterprise-ready HDFS protocol via OneFS and
Greenplum HD, EMC brings simplicity to Big Data in Science,”
says Joshi.

EMC Isilon OneFS combines the three layers of traditional storage
architectures—the file system, volume manager, and RAID—into
one unified software layer, creating a single intelligent distributed
file system that runs on one storage cluster. Important advantages of
OneFS for Hadoop are:

   •	 Scalable: Linear scaling with increasing capacity – from 18 TB
      to 16 PB in a single file system and a single global namespace.
      Scale out as needs grow, independent of the compute layer.
   •	 Predictable: Dynamic content balancing is performed as
      nodes are added, upgraded or capacity changes. Because the
      process is simple, it requires no added management time.
   •	 Available: OneFS protects your data from power loss, node
      or disk failures, loss of quorum and storage rebuild by
      distributing data, metadata and parity across all nodes. It
      also eliminates the single point of failure of a Hadoop
      NameNode. OneFS is therefore “self-healing”.
   •	 Efficient: Compared to the average 50% efficiency of
      traditional RAID systems, OneFS provides over 80%
      efficiency, independent of CPU compute or cache. This
      efficiency is achieved by tiering the process into three node
      types, as shown in the figure alongside, and by the pools
      within those node types. The efficiency extends to replacing
      the 3x copies that Hadoop normally requires with >80%
      efficient 1x storage via EMC Isilon’s HDFS protocol.
   •	 Enterprise-ready: Administration of the storage clusters is
      via an intuitive Web-based UI. Connectivity to your process
      is through standard file protocols: CIFS, SMB, NFS, FTP/
      HTTP, iSCSI and HDFS. Standardized authentication and
      access control is available at scale: AD, LDAP and NIS.

   [Figure: Storage tiers based on performance reside in one global
   namespace, connected via a dedicated backend network.]




CONCLUSION
What began as an internal project at Google in 2004 has now
matured into a scalable framework for two computing paradigms
that are particularly suited for the life sciences: parallelization and
distribution. Indeed, the post-processing streaming data patterns for
text strings, clustering and sorting – the core process patterns in the
life sciences – are ideal workflows for Hadoop.

Case in point: the Crossbow example cited earlier aligned Illumina
NGS reads for SNP calling over a ‘35x’ coverage of the human genome in
under 3 hours using a 40-node Hadoop cluster – an order of magnitude
better than traditional HPC technology for parallel processes.

The EMC Isilon OneFS distributed file system handles the Hadoop
Distributed File System, HDFS, just like any other shared file system,
and provides a shield for the single point of failure in Hadoop: the
NameNode. The Hybrid Cloud model (a source data mirror) with
Hadoop as a Service (HaaS) is the current state of the art. For more
information visit EMC Isilon at http://www.emc.com/isilon.




  Summary of Hadoop Attributes:
  Overview
  •	Write Once Read Many times (WORM)
  •	Co-locates data with compute, uses higher level architecture with Java API
  •	HDFS is a distributed file system that runs on large clusters
  Advantages
  •	Uses MapReduce framework – a batch query processor, scales linearly
  •	EMC Isilon OneFS implements HDFS and eliminates the single point of failure, the “name node”
  •	Standard programming language development: Java, Ruby, Python, C++ create MapReduce jobs. FUSE and
    WebDAV interfaces provide architectural flexibility
  Challenges
  •	HDFS block size is 128 MB (can be increased), so large numbers of small files (<8 KB) reduce its
    performance: use Hadoop Archive (HAR; see the sketch after this list)
  •	Data coherency and latency remain issues for large-scale implementations
  •	Not suited for low-latency, “in process” use-cases like real-time, spectral or video analysis
  •	Data transfer between genome sequencing data sources and Hadoop clusters in the Cloud remains an issue;
    the current business model is to mirror the data between source and Cloud and then use a Hadoop as a
    Service model on the mirrored data
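  One common mitigation for the small-file problem is to pack the files into a Hadoop Archive and read
  them back through the har:// file system scheme. The sketch below assumes a stock Hadoop client; the
  archive name and paths are illustrative, not from this paper.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Reading small files back out of a Hadoop Archive (HAR). The archive
    // is created beforehand with the standard CLI tool, e.g.:
    //   hadoop archive -archiveName reads.har -p /seq/small_files /seq/archived
    // (illustrative paths). To MapReduce jobs the HAR looks like a
    // read-only file system, so one archive replaces thousands of tiny files.
    public class HarReader {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path har = new Path("har:///seq/archived/reads.har");
        FileSystem fs = har.getFileSystem(conf);
        for (FileStatus s : fs.listStatus(har)) {
          System.out.println(s.getPath());
        }
      }
    }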



