This talk covers using native Hadoop storage policies and storage types to archive and tier data effectively on your existing Hadoop infrastructure. Key focus areas:

1. Why use heterogeneous storage (tiering)?
2. Identifying key metrics for successful archiving of Hadoop data
3. Automation requirements at scale
4. Current limitations and gotchas

A successful archiving strategy gives Hadoop users better performance, lower hardware costs, and lower software costs. This session covers the techniques and tools available to unlock this powerful capability in native Hadoop.

Speakers:

Peter Kisich works with multiple large-scale Hadoop customers, successfully tiering and optimizing their Hadoop infrastructure. He co-founded FactorData to bring enterprise storage features and control to open Hadoop environments. Previously, Mr. Kisich served as a global subject matter expert in Big Data and cloud computing at IBM, including speaking at several global conferences and events.
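For context on the native capability the talk refers to, a minimal sketch of HDFS heterogeneous-storage administration is below. It assumes a running cluster where DataNode volumes have been tagged with storage types (e.g. [DISK], [ARCHIVE]) in dfs.datanode.data.dir; the path /data/events is a hypothetical example.

```shell
# List the storage policies the cluster supports
# (e.g. HOT, WARM, COLD, ALL_SSD, ONE_SSD, LAZY_PERSIST)
hdfs storagepolicies -listPolicies

# Mark an aging dataset COLD so its replicas belong on ARCHIVE storage
hdfs storagepolicies -setStoragePolicy -path /data/events -policy COLD

# Verify the policy took effect
hdfs storagepolicies -getStoragePolicy -path /data/events

# Setting a policy only affects new writes; run the mover to migrate
# existing block replicas to the storage types the policy requires
hdfs mover -p /data/events
```

Note that the policy change itself is metadata-only; actual data movement happens either on subsequent writes or when the mover is run, which is why automation at scale (focus area 3) matters.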