
Amazon EMR release 4.4.0

Application versions

The following applications are supported in this release: Ganglia, HCatalog, Hadoop, Hive, Hue, Mahout, Oozie-Sandbox, Pig, Presto-Sandbox, Spark, Sqoop-Sandbox, and Zeppelin-Sandbox.

The table below lists the application versions available in this release of Amazon EMR and the application versions in the preceding three Amazon EMR releases (when applicable).

For a comprehensive history of application versions for each release of Amazon EMR, see the application version tables in the Amazon EMR Release Guide.

Application version information

Application | emr-4.4.0 | emr-4.3.0 | emr-4.2.0 | emr-4.1.0
Amazon SDK for Java | tracked
Python | Not tracked | Not tracked | Not tracked | Not tracked
Scala | Not tracked | Not tracked | Not tracked | Not tracked
AmazonCloudWatchAgent | - | - | - | -
Delta | - | - | - | -
Flink | - | - | - | -
Ganglia | 3.7.2 | 3.7.2 | 3.7.2 | -
HBase | - | - | - | -
HCatalog | 1.0.0 | - | - | -
Hudi | - | - | - | -
Iceberg | - | - | - | -
JupyterEnterpriseGateway | - | - | - | -
JupyterHub | - | - | - | -
Livy | - | - | - | -
MXNet | - | - | - | -
Oozie | - | - | - | -
Phoenix | - | - | - | -
Presto | - | - | - | -
Sqoop | - | - | - | -
Sqoop-Sandbox | 1.4.6 | - | - | -
TensorFlow | - | - | - | -
Tez | - | - | - | -
Trino (PrestoSQL) | - | - | - | -
Zeppelin | - | - | - | -
ZooKeeper | - | - | - | -
ZooKeeper-Sandbox | - | - | - | -

Release notes

The following release notes include information for the Amazon EMR 4.4.0 release.

Release date: March 14, 2016

  • Added HCatalog 1.0.0

  • Added Sqoop-Sandbox 1.4.6

  • Upgraded to Presto 0.136

  • Upgraded to Zeppelin 0.5.6

  • Upgraded to Mahout 0.11.1

  • Enabled Spark dynamicResourceAllocation by default.

  • Added a table of all configuration classifications for the release. For more information, see the Configuration Classifications table in Configuring Applications.
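As a sketch of overriding the new Spark default through the configuration classification mechanism described above, a configurations JSON like the following could be supplied when creating a cluster. The property name is standard Spark configuration (spark.dynamicAllocation.enabled), not taken from this page:

```json
[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.dynamicAllocation.enabled": "false"
    }
  }
]
```

Supplying this at cluster creation restores the previous behavior of statically allocated executors.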

Known issues resolved from previous releases
  • Fixed an issue where the maximizeResourceAllocation setting would not reserve enough memory for YARN ApplicationMaster daemons.

  • Fixed an issue encountered with custom DNS configurations: if any entries in resolv.conf preceded the provided custom entries, the custom entries were not resolvable. This affected clusters in a VPC, where the default VPC name server is inserted as the top entry in resolv.conf.

  • Fixed an issue where the default Python moved to version 2.7 and boto was not installed for that version.

  • Fixed an issue where YARN containers and Spark applications would each generate a unique Ganglia round robin database (rrd) file, which resulted in the first disk attached to the instance filling up. Because of this fix, YARN container-level and Spark application-level metrics have been disabled.

  • Fixed an issue in the log pusher where it would delete all empty log folders. As a result, the Hive CLI could not write logs, because the log pusher was removing the empty user folder under /var/log/hive.

  • Fixed an issue affecting Hive imports, which affected partitioning and resulted in an error during import.

  • Fixed an issue where EMRFS and s3-dist-cp did not properly handle bucket names that contain periods.

  • Changed a behavior in EMRFS so that in versioning-enabled buckets the _$folder$ marker file is not continuously created, which may contribute to improved performance for versioning-enabled buckets.

  • Changed the behavior in EMRFS such that it does not use instruction files except for cases where client-side encryption is enabled. If you want to delete instruction files while using client-side encryption, you can set the emrfs-site.xml property, fs.s3.cse.cryptoStorageMode.deleteInstructionFiles.enabled, to true.

  • Changed YARN log aggregation to retain logs at the aggregation destination for two days. The default destination is your cluster HDFS storage. If you want to change this duration, change the value of yarn.log-aggregation.retain-seconds using the yarn-site configuration classification when you create your cluster. As always, you can save your application logs to Amazon S3 using the log-uri parameter when you create your cluster.
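The two configurable behaviors above can be sketched in a single configurations JSON. The property names come from the notes above; the value 172800 is two days expressed in seconds, and the emrfs-site setting only takes effect when client-side encryption is enabled:

```json
[
  {
    "Classification": "emrfs-site",
    "Properties": {
      "fs.s3.cse.cryptoStorageMode.deleteInstructionFiles.enabled": "true"
    }
  },
  {
    "Classification": "yarn-site",
    "Properties": {
      "yarn.log-aggregation.retain-seconds": "172800"
    }
  }
]
```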


Component versions

The components that Amazon EMR installs with this release are listed below. Some are installed as part of big-data application packages. Others are unique to Amazon EMR and installed for system processes and features. These typically start with emr or aws. Big-data application packages in the most recent Amazon EMR release are usually the latest version found in the community. We make community releases available in Amazon EMR as quickly as possible.

Some components in Amazon EMR differ from community versions. These components have a version label in the form CommunityVersion-amzn-EmrVersion. The EmrVersion starts at 0. For example, if an open-source community component named myapp-component with version 2.2 has been modified three times for inclusion in different Amazon EMR releases, its release version is listed as 2.2-amzn-2.

Component | Version | Description
emr-ddb | 3.0.0 | Amazon DynamoDB connector for Hadoop ecosystem applications.
emr-goodies | 2.0.0 | Extra convenience libraries for the Hadoop ecosystem.
emr-kinesis | 3.1.0 | Amazon Kinesis connector for Hadoop ecosystem applications.
emr-s3-dist-cp | 2.2.0 | Distributed copy application optimized for Amazon S3.
emrfs | 2.4.0 | Amazon S3 connector for Hadoop ecosystem applications.
ganglia-monitor | 3.7.2 | Embedded Ganglia agent for Hadoop ecosystem applications along with the Ganglia monitoring agent.
ganglia-metadata-collector | 3.7.2 | Ganglia metadata collector for aggregating metrics from Ganglia monitoring agents.
ganglia-web | 3.7.1 | Web application for viewing metrics collected by the Ganglia metadata collector.
hadoop-client | 2.7.1-amzn-1 | Hadoop command-line clients such as 'hdfs', 'hadoop', or 'yarn'.
hadoop-hdfs-datanode | 2.7.1-amzn-1 | HDFS node-level service for storing blocks.
hadoop-hdfs-library | 2.7.1-amzn-1 | HDFS command-line client and library.
hadoop-hdfs-namenode | 2.7.1-amzn-1 | HDFS service for tracking file names and block locations.
hadoop-httpfs-server | 2.7.1-amzn-1 | HTTP endpoint for HDFS operations.
hadoop-kms-server | 2.7.1-amzn-1 | Cryptographic key management server based on Hadoop's KeyProvider API.
hadoop-mapred | 2.7.1-amzn-1 | MapReduce execution engine libraries for running a MapReduce application.
hadoop-yarn-nodemanager | 2.7.1-amzn-1 | YARN service for managing containers on an individual node.
hadoop-yarn-resourcemanager | 2.7.1-amzn-1 | YARN service for allocating and managing cluster resources and distributed applications.
hcatalog-client | 1.0.0-amzn-3 | The 'hcat' command line client for manipulating hcatalog-server.
hcatalog-server | 1.0.0-amzn-3 | Service providing HCatalog, a table and storage management layer for distributed applications.
hcatalog-webhcat-server | 1.0.0-amzn-3 | HTTP endpoint providing a REST interface to HCatalog.
hive-client | 1.0.0-amzn-3 | Hive command line client.
hive-metastore-server | 1.0.0-amzn-3 | Service for accessing the Hive metastore, a semantic repository storing metadata for SQL on Hadoop operations.
hive-server | 1.0.0-amzn-3 | Service for accepting Hive queries as web requests.
hue-server | 3.7.1-amzn-5 | Web application for analyzing data using Hadoop ecosystem applications.
mahout-client | 0.11.1 | Library for machine learning.
mysql-server | 5.5 | MySQL database server.
oozie-client | 4.2.0 | Oozie command-line client.
oozie-server | 4.2.0 | Service for accepting Oozie workflow requests.
presto-coordinator | 0.136 | Service for accepting queries and managing query execution among presto-workers.
presto-worker | 0.136 | Service for executing pieces of a query.
pig-client | 0.14.0-amzn-0 | Pig command-line client.
spark-client | 1.6.0 | Spark command-line clients.
spark-history-server | 1.6.0 | Web UI for viewing logged events for the lifetime of a completed Spark application.
spark-on-yarn | 1.6.0 | In-memory execution engine for YARN.
spark-yarn-slave | 1.6.0 | Apache Spark libraries needed by YARN slaves.
sqoop-client | 1.4.6 | Apache Sqoop command-line client.
webserver | 2.4 | Apache HTTP server.
zeppelin-server | 0.5.6-incubating | Web-based notebook that enables interactive data analytics.

Configuration classifications

Configuration classifications allow you to customize applications. These often correspond to a configuration XML file for the application, such as hive-site.xml. For more information, see Configure applications.
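For example, a configurations file might override a single Hive property through the hive-site classification. The property shown is a standard Hive setting, chosen here purely for illustration:

```json
[
  {
    "Classification": "hive-site",
    "Properties": {
      "hive.exec.dynamic.partition.mode": "nonstrict"
    }
  }
]
```

A file like this can be passed at cluster creation, for example with `aws emr create-cluster ... --configurations file://./myConfig.json`.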

emr-4.4.0 classifications
Classifications | Description
capacity-scheduler | Change values in Hadoop's capacity-scheduler.xml file.
core-site | Change values in Hadoop's core-site.xml file.
emrfs-site | Change EMRFS settings.
hadoop-env | Change values in the Hadoop environment for all Hadoop components.
hadoop-log4j | Change values in Hadoop's log4j.properties file.
hdfs-encryption-zones | Configure HDFS encryption zones.
hdfs-site | Change values in HDFS's hdfs-site.xml.
hcatalog-env | Change values in HCatalog's environment.
hcatalog-server-jndi | Change values in HCatalog's jndi.properties.
hcatalog-server-proto-hive-site | Change values in HCatalog's proto-hive-site.xml.
hcatalog-webhcat-env | Change values in HCatalog WebHCat's environment.
hcatalog-webhcat-log4j | Change values in HCatalog WebHCat's log4j.properties.
hcatalog-webhcat-site | Change values in HCatalog WebHCat's webhcat-site.xml file.
hive-env | Change values in the Hive environment.
hive-exec-log4j | Change values in Hive's hive-exec-log4j.properties file.
hive-log4j | Change values in Hive's hive-log4j.properties file.
hive-site | Change values in Hive's hive-site.xml file.
hue-ini | Change values in Hue's ini file.
httpfs-env | Change values in the HTTPFS environment.
httpfs-site | Change values in Hadoop's httpfs-site.xml file.
hadoop-kms-acls | Change values in Hadoop's kms-acls.xml file.
hadoop-kms-env | Change values in the Hadoop KMS environment.
hadoop-kms-log4j | Change values in Hadoop's kms-log4j.properties file.
hadoop-kms-site | Change values in Hadoop's kms-site.xml file.
mapred-env | Change values in the MapReduce application's environment.
mapred-site | Change values in the MapReduce application's mapred-site.xml file.
oozie-env | Change values in Oozie's environment.
oozie-log4j | Change values in Oozie's oozie-log4j.properties file.
oozie-site | Change values in Oozie's oozie-site.xml file.
pig-properties | Change values in Pig's pig.properties file.
pig-log4j | Change values in Pig's log4j.properties file.
presto-log | Change values in Presto's log.properties file.
presto-config | Change values in Presto's config.properties file.
presto-connector-hive | Change values in Presto's hive.properties file.
spark | Amazon EMR-curated settings for Apache Spark.
spark-defaults | Change values in Spark's spark-defaults.conf file.
spark-env | Change values in the Spark environment.
spark-log4j | Change values in Spark's log4j.properties file.
spark-metrics | Change values in Spark's metrics.properties file.
sqoop-env | Change values in Sqoop's environment.
sqoop-oraoop-site | Change values in Sqoop OraOop's oraoop-site.xml file.
sqoop-site | Change values in Sqoop's sqoop-site.xml file.
yarn-env | Change values in the YARN environment.
yarn-site | Change values in YARN's yarn-site.xml file.
zeppelin-env | Change values in the Zeppelin environment.