
Amazon EMR release 4.6.0

Application versions

The following applications are supported in this release: Ganglia, HBase, HCatalog, Hadoop, Hive, Hue, Mahout, Oozie-Sandbox, Pig, Presto-Sandbox, Spark, Sqoop-Sandbox, Zeppelin-Sandbox, and ZooKeeper-Sandbox.

The table below lists the application versions available in this release of Amazon EMR and the application versions in the preceding three Amazon EMR releases (when applicable).

For a comprehensive history of application versions for each release of Amazon EMR, see the release-specific application version topics in the Amazon EMR Release Guide.

Application version information
                          emr-4.6.0    emr-4.5.0    emr-4.4.0    emr-4.3.0
Amazon SDK for Java
Python                    Not tracked  Not tracked  Not tracked  Not tracked
Scala                     Not tracked  Not tracked  Not tracked  Not tracked
AmazonCloudWatchAgent     -            -            -            -
Delta                     -            -            -            -
Flink                     -            -            -            -
HBase                     1.2.0        -            -            -
HCatalog                  1.0.0        1.0.0        1.0.0        1.0.0
Hudi                      -            -            -            -
Iceberg                   -            -            -            -
JupyterEnterpriseGateway  -            -            -            -
JupyterHub                -            -            -            -
Livy                      -            -            -            -
MXNet                     -            -            -            -
Oozie                     -            -            -            -
Phoenix                   -            -            -            -
Presto                    -            -            -            -
Sqoop                     -            -            -            -
Sqoop-Sandbox             1.4.6        1.4.6        1.4.6        -
TensorFlow                -            -            -            -
Tez                       -            -            -            -
Trino (PrestoSQL)         -            -            -            -
Zeppelin                  -            -            -            -
ZooKeeper                 -            -            -            -
ZooKeeper-Sandbox         3.4.8        -            -            -

Release notes

The following release notes include information for the Amazon EMR 4.6.0 release.

  • Added HBase 1.2.0

  • Added ZooKeeper-Sandbox 3.4.8

  • Upgraded to Presto-Sandbox 0.143

  • Amazon EMR releases are now based on Amazon Linux 2016.03.0. For more information, see the Amazon Linux AMI 2016.03 release notes.

  • Issue Affecting Throughput Optimized HDD (st1) EBS Volume Types

    An issue in Linux kernel versions 4.2 and above significantly affects performance on Throughput Optimized HDD (st1) EBS volumes for Amazon EMR. This release (emr-4.6.0) uses kernel version 4.4.5 and is therefore affected. We recommend that you do not use emr-4.6.0 if you want to use st1 EBS volumes. You can use emr-4.5.0 or earlier Amazon EMR releases with st1 without impact. A fix will be provided in a future release.

  • Python Defaults

    Python 3.4 is now installed by default, but Python 2.7 remains the system default. You can configure Python 3.4 as the system default using a bootstrap action. Alternatively, you can use the configuration API to set the PYSPARK_PYTHON export to /usr/bin/python3.4 in the spark-env classification, which affects only the Python version used by PySpark.
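    The spark-env approach described above can be written as a configuration classification. A minimal sketch, using the classification and variable named in this note (the nested export classification is the convention for environment variables):

    ```json
    [
      {
        "Classification": "spark-env",
        "Configurations": [
          {
            "Classification": "export",
            "Properties": {
              "PYSPARK_PYTHON": "/usr/bin/python3.4"
            }
          }
        ]
      }
    ]
    ```

    Supplying this JSON when creating the cluster affects only the Python version that PySpark uses; it does not change the system default.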

  • Java 8

    Except for Presto, OpenJDK 1.7 is the default JDK used for all applications. However, both OpenJDK 1.7 and 1.8 are installed. For information about how to set JAVA_HOME for applications, see Configuring Applications to Use Java 8.
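    As a sketch of the JAVA_HOME approach referenced above, the hadoop-env classification can export Java 8 for Hadoop components (the path /usr/lib/jvm/java-1.8.0 is the documented Java 8 location for this AMI generation; verify it on your cluster):

    ```json
    [
      {
        "Classification": "hadoop-env",
        "Configurations": [
          {
            "Classification": "export",
            "Properties": {
              "JAVA_HOME": "/usr/lib/jvm/java-1.8.0"
            }
          }
        ]
      }
    ]
    ```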

Known issues resolved from previous releases
  • Fixed an issue where application provisioning would sometimes randomly fail due to a generated password.

  • Previously, mysqld was installed on all nodes. Now, it is only installed on the master instance and only if the chosen application includes mysql-server as a component. Currently, the following applications include the mysql-server component: HCatalog, Hive, Hue, Presto-Sandbox, and Sqoop-Sandbox.

  • Changed yarn.scheduler.maximum-allocation-vcores from the default of 32 to 80. This fixes an issue introduced in emr-4.4.0 that mainly occurred when using Spark with the maximizeResourceAllocation option on a cluster whose core instance type has YARN vcores set higher than 32. The affected instance types were c4.8xlarge, cc2.8xlarge, hs1.8xlarge, i2.8xlarge, m2.4xlarge, r3.8xlarge, d2.8xlarge, and m4.10xlarge.
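    If the new default of 80 does not suit your workload, the yarn-site classification can override it. A sketch (the value 64 is illustrative):

    ```json
    [
      {
        "Classification": "yarn-site",
        "Properties": {
          "yarn.scheduler.maximum-allocation-vcores": "64"
        }
      }
    ]
    ```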

  • s3-dist-cp now uses EMRFS for all Amazon S3 destinations and no longer stages data in a temporary HDFS directory.

  • Fixed an issue with exception handling for client-side encryption multipart uploads.

  • Added an option to allow users to change the Amazon S3 storage class. By default this setting is STANDARD. The emrfs-site configuration classification setting is fs.s3.storageClass and the possible values are STANDARD, STANDARD_IA, and REDUCED_REDUNDANCY. For more information about storage classes, see Storage Classes in the Amazon Simple Storage Service User Guide.
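    For example, to select the STANDARD_IA class described above, the emrfs-site classification carries the setting (a sketch using the classification and key named in this note):

    ```json
    [
      {
        "Classification": "emrfs-site",
        "Properties": {
          "fs.s3.storageClass": "STANDARD_IA"
        }
      }
    ]
    ```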

Component versions

The components that Amazon EMR installs with this release are listed below. Some are installed as part of big-data application packages. Others are unique to Amazon EMR and installed for system processes and features. These typically start with emr or aws. Big-data application packages in the most recent Amazon EMR release are usually the latest version found in the community. We make community releases available in Amazon EMR as quickly as possible.

Some components in Amazon EMR differ from community versions. These components have a version label in the form CommunityVersion-amzn-EmrVersion. The EmrVersion starts at 0. For example, if an open-source community component named myapp-component with version 2.2 has been modified three times for inclusion in different Amazon EMR releases, its release version is listed as 2.2-amzn-2.

Component                    Version           Description
emr-ddb                      3.0.0             Amazon DynamoDB connector for Hadoop ecosystem applications.
emr-goodies                  2.0.0             Extra convenience libraries for the Hadoop ecosystem.
emr-kinesis                  3.1.0             Amazon Kinesis connector for Hadoop ecosystem applications.
emr-s3-dist-cp               2.3.0             Distributed copy application optimized for Amazon S3.
emrfs                        2.6.0             Amazon S3 connector for Hadoop ecosystem applications.
ganglia-monitor              3.7.2             Embedded Ganglia agent for Hadoop ecosystem applications along with the Ganglia monitoring agent.
ganglia-metadata-collector   3.7.2             Ganglia metadata collector for aggregating metrics from Ganglia monitoring agents.
ganglia-web                  3.7.1             Web application for viewing metrics collected by the Ganglia metadata collector.
hadoop-client                2.7.2-amzn-1      Hadoop command-line clients such as 'hdfs', 'hadoop', or 'yarn'.
hadoop-hdfs-datanode         2.7.2-amzn-1      HDFS node-level service for storing blocks.
hadoop-hdfs-library          2.7.2-amzn-1      HDFS command-line client and library.
hadoop-hdfs-namenode         2.7.2-amzn-1      HDFS service for tracking file names and block locations.
hadoop-httpfs-server         2.7.2-amzn-1      HTTP endpoint for HDFS operations.
hadoop-kms-server            2.7.2-amzn-1      Cryptographic key management server based on Hadoop's KeyProvider API.
hadoop-mapred                2.7.2-amzn-1      MapReduce execution engine libraries for running a MapReduce application.
hadoop-yarn-nodemanager      2.7.2-amzn-1      YARN service for managing containers on an individual node.
hadoop-yarn-resourcemanager  2.7.2-amzn-1      YARN service for allocating and managing cluster resources and distributed applications.
hbase-hmaster                1.2.0             Service for an HBase cluster responsible for coordination of Regions and execution of administrative commands.
hbase-region-server          1.2.0             Service for serving one or more HBase regions.
hbase-client                 1.2.0             HBase command-line client.
hbase-rest-server            1.2.0             Service providing a RESTful HTTP endpoint for HBase.
hbase-thrift-server          1.2.0             Service providing a Thrift endpoint to HBase.
hcatalog-client              1.0.0-amzn-4      The 'hcat' command-line client for manipulating hcatalog-server.
hcatalog-server              1.0.0-amzn-4      Service providing HCatalog, a table and storage management layer for distributed applications.
hcatalog-webhcat-server      1.0.0-amzn-4      HTTP endpoint providing a REST interface to HCatalog.
hive-client                  1.0.0-amzn-4      Hive command-line client.
hive-metastore-server        1.0.0-amzn-4      Service for accessing the Hive metastore, a semantic repository storing metadata for SQL on Hadoop operations.
hive-server                  1.0.0-amzn-4      Service for accepting Hive queries as web requests.
hue-server                   3.7.1-amzn-6      Web application for analyzing data using Hadoop ecosystem applications.
mahout-client                0.11.1            Library for machine learning.
mysql-server                 5.5               MySQL database server.
oozie-client                 4.2.0             Oozie command-line client.
oozie-server                 4.2.0             Service for accepting Oozie workflow requests.
presto-coordinator           0.143             Service for accepting queries and managing query execution among presto-workers.
presto-worker                0.143             Service for executing pieces of a query.
pig-client                   0.14.0-amzn-0     Pig command-line client.
spark-client                 1.6.1             Spark command-line clients.
spark-history-server         1.6.1             Web UI for viewing logged events for the lifetime of a completed Spark application.
spark-on-yarn                1.6.1             In-memory execution engine for YARN.
spark-yarn-slave             1.6.1             Apache Spark libraries needed by YARN slaves.
sqoop-client                 1.4.6             Apache Sqoop command-line client.
webserver                    2.4               Apache HTTP server.
zeppelin-server              0.5.6-incubating  Web-based notebook that enables interactive data analytics.
zookeeper-server             3.4.8             Centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.
zookeeper-client             3.4.8             ZooKeeper command-line client.

Configuration classifications

Configuration classifications allow you to customize applications. These often correspond to a configuration XML file for the application, such as hive-site.xml. For more information, see Configure applications.
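A classification is typically supplied as JSON when you create a cluster, for example through the --configurations option of the AWS CLI. A minimal sketch (the hive.exec.parallel property and its value are illustrative):

```json
[
  {
    "Classification": "hive-site",
    "Properties": {
      "hive.exec.parallel": "true"
    }
  }
]
```

The outer object names the classification from the table below; the Properties map becomes settings in the corresponding configuration file, such as hive-site.xml.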

emr-4.6.0 classifications

Classifications                   Description
capacity-scheduler                Change values in Hadoop's capacity-scheduler.xml file.
core-site                         Change values in Hadoop's core-site.xml file.
emrfs-site                        Change EMRFS settings.
hadoop-env                        Change values in the Hadoop environment for all Hadoop components.
hadoop-log4j                      Change values in Hadoop's log4j.properties file.
hbase-env                         Change values in HBase's environment.
hbase-log4j                       Change values in HBase's hbase-log4j.properties file.
hbase-metrics                     Change values in HBase's hadoop-metrics2-hbase.properties file.
hbase-policy                      Change values in HBase's hbase-policy.xml file.
hbase-site                        Change values in HBase's hbase-site.xml file.
hdfs-encryption-zones             Configure HDFS encryption zones.
hdfs-site                         Change values in HDFS's hdfs-site.xml.
hcatalog-env                      Change values in HCatalog's environment.
hcatalog-server-jndi              Change values in HCatalog's jndi.properties.
hcatalog-server-proto-hive-site   Change values in HCatalog's proto-hive-site.xml.
hcatalog-webhcat-env              Change values in HCatalog WebHCat's environment.
hcatalog-webhcat-log4j            Change values in HCatalog WebHCat's log4j2.properties.
hcatalog-webhcat-site             Change values in HCatalog WebHCat's webhcat-site.xml file.
hive-env                          Change values in the Hive environment.
hive-exec-log4j                   Change values in Hive's hive-exec-log4j.properties file.
hive-log4j                        Change values in Hive's hive-log4j.properties file.
hive-site                         Change values in Hive's hive-site.xml file.
hue-ini                           Change values in Hue's ini file.
httpfs-env                        Change values in the HTTPFS environment.
httpfs-site                       Change values in Hadoop's httpfs-site.xml file.
hadoop-kms-acls                   Change values in Hadoop's kms-acls.xml file.
hadoop-kms-env                    Change values in the Hadoop KMS environment.
hadoop-kms-log4j                  Change values in Hadoop's kms-log4j.properties file.
hadoop-kms-site                   Change values in Hadoop's kms-site.xml file.
mapred-env                        Change values in the MapReduce application's environment.
mapred-site                       Change values in the MapReduce application's mapred-site.xml file.
oozie-env                         Change values in Oozie's environment.
oozie-log4j                       Change values in Oozie's oozie-log4j.properties file.
oozie-site                        Change values in Oozie's oozie-site.xml file.
pig-properties                    Change values in Pig's pig.properties file.
pig-log4j                         Change values in Pig's log4j.properties file.
presto-log                        Change values in Presto's log.properties file.
presto-config                     Change values in Presto's config.properties file.
presto-connector-hive             Change values in Presto's hive.properties file.
spark                             Amazon EMR-curated settings for Apache Spark.
spark-defaults                    Change values in Spark's spark-defaults.conf file.
spark-env                         Change values in the Spark environment.
spark-log4j                       Change values in Spark's log4j.properties file.
spark-metrics                     Change values in Spark's metrics.properties file.
sqoop-env                         Change values in Sqoop's environment.
sqoop-oraoop-site                 Change values in Sqoop OraOop's oraoop-site.xml file.
sqoop-site                        Change values in Sqoop's sqoop-site.xml file.
yarn-env                          Change values in the YARN environment.
yarn-site                         Change values in YARN's yarn-site.xml file.
zeppelin-env                      Change values in the Zeppelin environment.
zoo-cfg                           Change values in ZooKeeper's zoo.cfg file.
zookeeper-log4j                   Change values in ZooKeeper's log4j.properties file.