Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. By using these frameworks and related open-source
projects, such as Apache Hive and Apache Pig, you can process data for analytics
purposes and business intelligence workloads. Additionally, you can use Amazon EMR
to transform and move large amounts of data into and out of other AWS data stores
and databases, such as Amazon Simple Storage Service (Amazon S3) and Amazon
DynamoDB.
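To make this concrete, the following is a minimal boto3 sketch of launching an EMR cluster with Hadoop, Spark, and Hive installed, plus a single Spark step that reads from and writes to Amazon S3. The bucket name, script path, instance types, and IAM role names are illustrative placeholders, not values from the text.

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="example-analytics-cluster",
    ReleaseLabel="emr-6.15.0",
    # Frameworks to install on the cluster.
    Applications=[{"Name": "Hadoop"}, {"Name": "Spark"}, {"Name": "Hive"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        # Terminate the cluster once all steps finish.
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    Steps=[
        {
            # A Spark step that reads input from Amazon S3 and writes results back.
            "Name": "spark-etl",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": [
                    "spark-submit",
                    "s3://example-bucket/jobs/etl.py",  # placeholder script
                    "--output", "s3://example-bucket/results/",
                ],
            },
        }
    ],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print("Cluster ID:", response["JobFlowId"])
```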
Amazon Elastic Transcoder lets you convert media files that you have stored in
Amazon S3 into media files in the formats required by consumer playback devices.
For example, you can convert large, high-quality digital media files into formats that
users can play back on mobile devices, tablets, web browsers, and connected
televisions.
Jobs do the work of transcoding. Each job converts one file into up to 30
formats. For example, if you want to convert a media file into six different formats, you
can create files in all six formats by creating a single job.
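As a rough illustration, a single job with several outputs might be created through boto3 as follows. The pipeline ID (which refers to an existing pipeline, described next), the S3 keys, and the preset IDs are placeholders.

```python
import boto3

transcoder = boto3.client("elastictranscoder", region_name="us-east-1")

# One job, several outputs: each output pairs an S3 key with a preset.
job = transcoder.create_job(
    PipelineId="1111111111111-abcde1",          # placeholder pipeline ID
    Input={"Key": "inputs/lecture.mov"},        # source file in the input bucket
    Outputs=[
        {"Key": "outputs/lecture-1080p.mp4", "PresetId": "1351620000001-000001"},
        {"Key": "outputs/lecture-720p.mp4",  "PresetId": "1351620000001-000010"},
        {"Key": "outputs/lecture-web.mp4",   "PresetId": "1351620000001-100070"},
    ],
)
print("Job ID:", job["Job"]["Id"])
```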
Pipelines are queues that manage your transcoding jobs. When you create a
job, you specify which pipeline you want to add the job to.
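A minimal sketch of creating a pipeline with boto3 follows; the bucket names and the IAM role ARN are assumed placeholders.

```python
import boto3

transcoder = boto3.client("elastictranscoder", region_name="us-east-1")

pipeline = transcoder.create_pipeline(
    Name="example-video-pipeline",
    InputBucket="example-input-bucket",     # where source files live
    OutputBucket="example-output-bucket",   # where transcoded files go
    Role="arn:aws:iam::123456789012:role/Elastic_Transcoder_Default_Role",
)
pipeline_id = pipeline["Pipeline"]["Id"]
# Jobs are then queued on this pipeline by passing pipeline_id to create_job.
```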
If you configure a job to transcode into more than one format, Elastic Transcoder
creates the files for each format in the order in which you specify the formats in the job.
A pipeline can process more than one job simultaneously, and jobs don't necessarily
complete in the order in which you create them.
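Because jobs can finish out of order, code that needs a finished output typically polls the job's status rather than relying on creation order. A brief sketch using boto3's read_job:

```python
import time
import boto3

transcoder = boto3.client("elastictranscoder", region_name="us-east-1")

def wait_for_job(job_id, delay=10):
    """Poll a job until it reaches a terminal state, since jobs in a
    pipeline may finish in a different order than they were created."""
    while True:
        status = transcoder.read_job(Id=job_id)["Job"]["Status"]
        if status in ("Complete", "Canceled", "Error"):
            return status
        time.sleep(delay)
```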
Presets are templates that contain most of the settings for transcoding media
files from one format to another. Elastic Transcoder includes some default presets for
common formats, and you can also create your own presets for formats that aren't
covered by the defaults.
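As a quick illustration, the presets available to an account (the defaults plus any custom ones) can be listed with boto3:

```python
import boto3

transcoder = boto3.client("elastictranscoder", region_name="us-east-1")

# Page through all available presets and print their basic attributes.
paginator = transcoder.get_paginator("list_presets")
for page in paginator.paginate():
    for preset in page["Presets"]:
        print(preset["Id"], preset["Name"], preset["Container"])
```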