The document describes the candidate's experience and skills in big data technologies, including Hadoop, Hive, Pig, Spark, Sqoop, Flume, and HBase. The candidate has over 1.5 years of experience learning and working with these technologies. He has installed and configured Hadoop clusters across different versions and has used the MapR distribution. He has in-depth knowledge of Hadoop architecture and its frameworks, and has performed a range of tasks in Hadoop environments, including configuring Hive, writing Pig scripts, using Sqoop and Flume, and writing Spark programs.