This presentation provides an overview of Hadoop, including:

- A brief history of data and the rise of big data from various sources.
- An introduction to Hadoop, an open-source framework for the distributed storage and processing of large datasets across clusters of computers.
- Descriptions of Hadoop's key components, HDFS for storage and MapReduce for processing, and how they work together in the Hadoop architecture.
- An explanation of how Hadoop can be installed and configured in standalone, pseudo-distributed, and fully distributed modes.
- Examples of major companies, such as Amazon, Facebook, and Yahoo, that use Hadoop to handle their large-scale data and analytics needs; Hadoop's design itself grew out of Google's published MapReduce and GFS papers.
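To make the MapReduce component concrete, here is a minimal word-count sketch in plain Java. It mimics the map, shuffle, and reduce phases in a single process; the method names and data shapes are illustrative only and are not Hadoop's actual API (a real Hadoop job would extend `Mapper` and `Reducer` and run on a cluster).

```java
import java.util.*;
import java.util.stream.*;

// Illustrative word count in the MapReduce style, without Hadoop dependencies.
public class WordCountSketch {
    // Map phase: emit a (word, 1) pair for each word in a line of input.
    static List<Map.Entry<String, Integer>> map(String line) {
        return Arrays.stream(line.toLowerCase().split("\\s+"))
                .filter(w -> !w.isEmpty())
                .map(w -> Map.entry(w, 1))
                .collect(Collectors.toList());
    }

    // Shuffle phase: group the emitted values by key (the word).
    static Map<String, List<Integer>> shuffle(List<Map.Entry<String, Integer>> pairs) {
        return pairs.stream().collect(Collectors.groupingBy(
                Map.Entry::getKey,
                Collectors.mapping(Map.Entry::getValue, Collectors.toList())));
    }

    // Reduce phase: sum the grouped counts for each word.
    static Map<String, Integer> reduce(Map<String, List<Integer>> grouped) {
        Map<String, Integer> counts = new TreeMap<>();
        grouped.forEach((word, ones) ->
                counts.put(word, ones.stream().mapToInt(Integer::intValue).sum()));
        return counts;
    }

    public static void main(String[] args) {
        List<String> lines = List.of("big data big ideas", "data flows");
        List<Map.Entry<String, Integer>> emitted = lines.stream()
                .flatMap(l -> map(l).stream())
                .collect(Collectors.toList());
        System.out.println(reduce(shuffle(emitted)));
        // prints {big=2, data=2, flows=1, ideas=1}
    }
}
```

In a real cluster the map tasks run in parallel near the HDFS blocks that hold the input, and the shuffle moves intermediate pairs across the network to the reducers; this sketch only shows the data flow, not the distribution.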
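As a taste of what pseudo-distributed configuration looks like, the snippet below shows the kind of settings placed in Hadoop's `core-site.xml` and `hdfs-site.xml`: pointing the default filesystem at a local NameNode and dropping the replication factor to 1, since all daemons share one machine. The port number is a commonly used value, not a requirement.

```xml
<!-- core-site.xml: use a local HDFS NameNode as the default filesystem -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

<!-- hdfs-site.xml: single machine, so keep only one copy of each block -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```

Standalone mode needs no such configuration (everything runs in one JVM against the local filesystem), while fully distributed mode points these settings at dedicated master hosts instead of localhost.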