This document discusses Project Amaterasu, an open-source tool for simplifying the deployment of big data applications. Amaterasu uses Apache Mesos to deploy Spark jobs and other frameworks across clusters. Pipelines are described declaratively: workflows, actions, and environments are defined in YAML and JSON files. A workflow is a series of actions, such as Spark jobs; actions are written in Scala and interact with the cluster through Amaterasu's runtime context; environments hold configuration for different clusters, so the same pipeline can run unchanged in, say, development and production. By treating data pipelines as versioned, deployable artifacts, Amaterasu aims to bring continuous integration and continuous deployment, and with them better collaboration and testing, to big data teams.
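As a purely illustrative sketch of the idea (the field names below are assumptions for this example, not Amaterasu's documented schema), a YAML workflow file in this style might name a pipeline and list its actions in order, each action pointing at a Scala source file and the framework that runs it:

```yaml
# Hypothetical workflow definition -- the keys here are illustrative
# stand-ins, not Amaterasu's actual configuration schema.
job-name: nightly-etl
flow:
  - name: ingest           # first action: a Spark job written in Scala
    runner:
      group: spark
      type: scala
    file: Ingest.scala
  - name: aggregate        # runs after ingest, consuming its output
    runner:
      group: spark
      type: scala
    file: Aggregate.scala
```

A matching environment file (for example, one per cluster) would then carry cluster-specific settings such as the Mesos master address and Spark configuration, which is what lets the workflow above be promoted from a test cluster to production without edits.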