
Amazon EMR release 6.0.0

6.0.0 application versions

The following applications are supported in this release: Ganglia, HBase, HCatalog, Hadoop, Hive, Hudi, Hue, JupyterHub, Livy, MXNet, Oozie, Phoenix, Presto, Spark, TensorFlow, Tez, Zeppelin, and ZooKeeper.

The table below lists the application versions available in this release of Amazon EMR and the application versions in the preceding three Amazon EMR releases (when applicable).

For a comprehensive history of application versions for each release of Amazon EMR, see the application version history topics for each Amazon EMR release series.

Application version information
Application | emr-6.1.1 | emr-6.1.0 | emr-6.0.1 | emr-6.0.0
Amazon SDK for Java | 1.11.828 | 1.11.828 | 1.11.711 | 1.11.711
Python | 2.7, 3.7 | 2.7, 3.7 | 2.7, 3.7 | 2.7, 3.7
AmazonCloudWatchAgent | - | - | - | -
Delta | - | - | - | -
Flink | 1.11.0 | 1.11.0 | - | -
Iceberg | - | - | - | -
JupyterEnterpriseGateway | - | - | - | -
Mahout | - | - | - | -
Pig | 0.17.0 | 0.17.0 | - | -
Sqoop | 1.4.7 | 1.4.7 | - | -
Trino (PrestoSQL) | 338 | 338 | - | -

6.0.0 release notes

The following release notes include information for Amazon EMR release 6.0.0.

Initial release date: March 10, 2020

Supported applications
  • Amazon SDK for Java version 1.11.711

  • Ganglia version 3.7.2

  • Hadoop version 3.2.1

  • HBase version 2.2.3

  • HCatalog version 3.1.2

  • Hive version 3.1.2

  • Hudi version 0.5.0-incubating

  • Hue version 4.4.0

  • JupyterHub version 1.0.0

  • Livy version 0.6.0

  • MXNet version 1.5.1

  • Oozie version 5.1.0

  • Phoenix version 5.0.0

  • Presto version 0.230

  • Spark version 2.4.4

  • TensorFlow version 1.14.0

  • Zeppelin version 0.9.0-SNAPSHOT

  • Zookeeper version 3.4.14

  • Connectors and drivers: DynamoDB Connector 4.14.0


Flink, Sqoop, Pig, and Mahout are not available in Amazon EMR version 6.0.0.

New features
  • YARN Docker Runtime Support - YARN applications, such as Spark jobs, can now run in the context of a Docker container. This allows you to easily define dependencies in a Docker image without the need to install custom libraries on your Amazon EMR cluster. For more information, see Configure Docker Integration and Run Spark applications with Docker using Amazon EMR 6.0.0.
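    Before YARN will launch containers from a registry, that registry must be trusted at the YARN layer. A cluster configuration along the following lines is typically paired with submit-time spark.executorEnv.YARN_CONTAINER_RUNTIME_TYPE and YARN_CONTAINER_RUNTIME_DOCKER_IMAGE properties described in the linked topics; the registry list and account ID below are placeholders, not values from this page:

    ```json
    [
      {
        "Classification": "container-executor",
        "Configurations": [
          {
            "Classification": "docker",
            "Properties": {
              "docker.trusted.registries": "local,centos,123456789012.dkr.ecr.us-east-1.amazonaws.com",
              "docker.privileged-containers.registries": "local,centos,123456789012.dkr.ecr.us-east-1.amazonaws.com"
            }
          }
        ]
      }
    ]
    ```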

  • Hive LLAP Support - Hive now supports the LLAP execution mode for improved query performance. For more information, see Using Hive LLAP.
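    As an illustration, LLAP is switched on through the hive configuration classification; this minimal sketch follows the property name used in the Amazon EMR Hive LLAP documentation:

    ```json
    [
      {
        "Classification": "hive",
        "Properties": {
          "hive.llap.enabled": "true"
        }
      }
    ]
    ```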

Changes, enhancements, and resolved issues
  • This release fixes issues with Amazon EMR scaling when it fails to scale a cluster up or down successfully or causes application failures.

  • Fixed an issue where scaling requests failed for a large, highly utilized cluster when Amazon EMR on-cluster daemons were running health checking activities, such as gathering YARN node state and HDFS node state. This was happening because on-cluster daemons were not able to communicate the health status data of a node to internal Amazon EMR components.

  • Improved EMR on-cluster daemons to correctly track the node states when IP addresses are reused to improve reliability during scaling operations.

  • SPARK-29683. Fixed an issue where job failures occurred during cluster scale-down as Spark was assuming all available nodes were deny-listed.

  • YARN-9011. Fixed an issue where job failures occurred due to a race condition in YARN decommissioning when the cluster tried to scale up or down.

  • Fixed issue with step or job failures during cluster scaling by ensuring that the node states are always consistent between the Amazon EMR on-cluster daemons and YARN/HDFS.

  • Fixed an issue where cluster operations such as scale down and step submission failed for Amazon EMR clusters enabled with Kerberos authentication. This was because the Amazon EMR on-cluster daemon did not renew the Kerberos ticket, which is required to securely communicate with HDFS/YARN running on the primary node.

  • Newer Amazon EMR releases fix the issue with a lower "Max open files" limit on older AL2 in Amazon EMR. Amazon EMR releases 5.30.1, 5.30.2, 5.31.1, 5.32.1, 6.0.1, 6.1.1, 6.2.1, 5.33.0, 6.3.0 and later now include a permanent fix with a higher "Max open files" setting.

  • Amazon Linux

    • Amazon Linux 2 is the operating system for the EMR 6.x release series.

    • systemd is used for service management instead of upstart, which was used in Amazon Linux 1.

  • Java Development Kit (JDK)

    • Corretto JDK 8 is the default JDK for the EMR 6.x release series.

  • Scala

    • Scala 2.12 is used with Apache Spark and Apache Livy.

  • Python 3

    • Python 3 is now the default version of Python in EMR.

  • YARN node labels

    • Beginning with the Amazon EMR 6.x release series, the YARN node labels feature is disabled by default. Application master processes can run on both core and task nodes by default. You can enable the YARN node labels feature by configuring the following properties: yarn.node-labels.enabled and yarn.node-labels.am.default-node-label-expression. For more information, see Understanding Primary, Core, and Task Nodes.
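
      A minimal sketch of re-enabling node labels through the yarn-site classification; the label value CORE is an assumption for illustration (earlier releases used it to pin application masters to core nodes):

      ```json
      [
        {
          "Classification": "yarn-site",
          "Properties": {
            "yarn.node-labels.enabled": "true",
            "yarn.node-labels.am.default-node-label-expression": "CORE"
          }
        }
      ]
      ```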

Known issues
  • Lower "Max open files" limit on older AL2 [fixed in newer releases]. Amazon EMR releases emr-5.30.x, emr-5.31.0, emr-5.32.0, emr-6.0.0, emr-6.1.0, and emr-6.2.0 are based on older versions of Amazon Linux 2 (AL2), which have a lower ulimit setting for "Max open files" when Amazon EMR clusters are created with the default AMI. Amazon EMR releases 5.30.1, 5.30.2, 5.31.1, 5.32.1, 6.0.1, 6.1.1, 6.2.1, 5.33.0, 6.3.0 and later include a permanent fix with a higher "Max open files" setting. Releases with the lower open file limit cause a "Too many open files" error when submitting a Spark job. In the impacted releases, the Amazon EMR default AMI has a default ulimit setting of 4096 for "Max open files," which is lower than the 65536 file limit in the latest Amazon Linux 2 AMI. The lower ulimit setting for "Max open files" causes Spark job failures when the Spark driver and executor try to open more than 4096 files. To fix the issue, Amazon EMR has a bootstrap action (BA) script that adjusts the ulimit setting at cluster creation.

    If you are using an older Amazon EMR version that doesn't have the permanent fix for this issue, the following workaround lets you to explicitly set the instance-controller ulimit to a maximum of 65536 files.

    Explicitly set a ulimit from the command line
    1. Edit /etc/systemd/system/instance-controller.service to add the following parameters to the Service section.

      LimitNOFILE=65536

      LimitNPROC=65536

    2. Restart InstanceController

      $ sudo systemctl daemon-reload

      $ sudo systemctl restart instance-controller

    Set a ulimit using bootstrap action (BA)

    You can also use a bootstrap action (BA) script to configure the instance-controller ulimit to 65536 files at cluster creation.

    #!/bin/bash
    for user in hadoop spark hive; do
      sudo tee /etc/security/limits.d/$user.conf << EOF
    $user - nofile 65536
    $user - nproc 65536
    EOF
    done
    for proc in instancecontroller logpusher; do
      sudo mkdir -p /etc/systemd/system/$proc.service.d/
      sudo tee /etc/systemd/system/$proc.service.d/override.conf << EOF
    [Service]
    LimitNOFILE=65536
    LimitNPROC=65536
    EOF
      pid=$(pgrep -f aws157.$proc.Main)
      sudo prlimit --pid $pid --nofile=65535:65535 --nproc=65535:65535
    done
    sudo systemctl daemon-reload
  • Spark interactive shell, including PySpark, SparkR, and spark-shell, does not support using Docker with additional libraries.

  • To use Python 3 with Amazon EMR version 6.0.0, you must add PATH to yarn.nodemanager.env-whitelist.
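
    As a sketch, PATH can be appended through the yarn-site classification. The other entries shown below are the stock Hadoop defaults and may differ on your cluster, so treat this value as illustrative:

    ```json
    [
      {
        "Classification": "yarn-site",
        "Properties": {
          "yarn.nodemanager.env-whitelist": "JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_HOME,PATH,LANG,TZ"
        }
      }
    ]
    ```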

  • The Live Long and Process (LLAP) functionality is not supported when you use the Amazon Glue Data Catalog as the metastore for Hive.

  • When using Amazon EMR 6.0.0 with Spark and Docker integration, you need to configure the instances in your cluster with the same instance type and the same amount of EBS volumes to avoid failure when submitting a Spark job with Docker runtime.

  • In Amazon EMR 6.0.0, HBase on Amazon S3 storage mode is impacted by the HBASE-24286 issue: the HBase master cannot initialize when the cluster is created using existing S3 data.

  • Known issue in clusters with multiple primary nodes and Kerberos authentication

    If you run clusters with multiple primary nodes and Kerberos authentication in Amazon EMR releases 5.20.0 and later, you may encounter problems with cluster operations such as scale-down or step submission after the cluster has been running for some time. The time period depends on the Kerberos ticket validity period that you defined. The scale-down problem impacts both automatic scale-down and explicit scale-down requests that you submit. Additional cluster operations can also be impacted.


    As a workaround:

    1. SSH as hadoop user to the lead primary node of the EMR cluster with multiple primary nodes.

    2. Run the following command to renew the Kerberos ticket for the hadoop user.

      kinit -kt <keytab_file> <principal>

      Typically, the keytab file is located at /etc/hadoop.keytab and the principal is in the form of hadoop/<hostname>@<REALM>.


    This workaround is effective for the period the Kerberos ticket is valid. This duration is 10 hours by default, but can be configured in your Kerberos settings. You must re-run the above command once the Kerberos ticket expires.

6.0.0 component versions

The components that Amazon EMR installs with this release are listed below. Some are installed as part of big-data application packages. Others are unique to Amazon EMR and installed for system processes and features. These typically start with emr or aws. Big-data application packages in the most recent Amazon EMR release are usually the latest version found in the community. We make community releases available in Amazon EMR as quickly as possible.

Some components in Amazon EMR differ from community versions. These components have a version label in the form CommunityVersion-amzn-EmrVersion. The EmrVersion starts at 0. For example, if an open-source community component named myapp-component with version 2.2 has been modified three times for inclusion in different Amazon EMR releases, its release version is listed as 2.2-amzn-2.
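
The version-label convention can be illustrated with a short parser; parse_emr_version is a hypothetical helper for illustration, not part of any Amazon EMR tooling:

```python
import re

def parse_emr_version(label: str):
    """Split a component version label such as '2.2-amzn-2' into the
    community version and the EMR patch number. Labels without an
    '-amzn-' suffix are unmodified community releases (patch is None)."""
    m = re.fullmatch(r"(?P<community>.+?)-amzn-(?P<emr>\d+)", label)
    if m:
        return m.group("community"), int(m.group("emr"))
    return label, None

print(parse_emr_version("3.2.1-amzn-0"))  # ('3.2.1', 0)
print(parse_emr_version("2.2.3"))         # ('2.2.3', None)
```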

Component | Version | Description
aws-sagemaker-spark-sdk | 1.2.6 | Amazon SageMaker Spark SDK
emr-ddb | 4.14.0 | Amazon DynamoDB connector for Hadoop ecosystem applications.
emr-goodies | 3.0.0 | Extra convenience libraries for the Hadoop ecosystem.
emr-kinesis | 3.5.0 | Amazon Kinesis connector for Hadoop ecosystem applications.
emr-s3-dist-cp | 2.14.0 | Distributed copy application optimized for Amazon S3.
emr-s3-select | 1.5.0 | EMR S3Select connector
emrfs | 2.39.0 | Amazon S3 connector for Hadoop ecosystem applications.
ganglia-monitor | 3.7.2 | Embedded Ganglia agent for Hadoop ecosystem applications along with the Ganglia monitoring agent.
ganglia-metadata-collector | 3.7.2 | Ganglia metadata collector for aggregating metrics from Ganglia monitoring agents.
ganglia-web | 3.7.1 | Web application for viewing metrics collected by the Ganglia metadata collector.
hadoop-client | 3.2.1-amzn-0 | Hadoop command-line clients such as 'hdfs', 'hadoop', or 'yarn'.
hadoop-hdfs-datanode | 3.2.1-amzn-0 | HDFS node-level service for storing blocks.
hadoop-hdfs-library | 3.2.1-amzn-0 | HDFS command-line client and library
hadoop-hdfs-namenode | 3.2.1-amzn-0 | HDFS service for tracking file names and block locations.
hadoop-hdfs-journalnode | 3.2.1-amzn-0 | HDFS service for managing the Hadoop filesystem journal on HA clusters.
hadoop-httpfs-server | 3.2.1-amzn-0 | HTTP endpoint for HDFS operations.
hadoop-kms-server | 3.2.1-amzn-0 | Cryptographic key management server based on Hadoop's KeyProvider API.
hadoop-mapred | 3.2.1-amzn-0 | MapReduce execution engine libraries for running a MapReduce application.
hadoop-yarn-nodemanager | 3.2.1-amzn-0 | YARN service for managing containers on an individual node.
hadoop-yarn-resourcemanager | 3.2.1-amzn-0 | YARN service for allocating and managing cluster resources and distributed applications.
hadoop-yarn-timeline-server | 3.2.1-amzn-0 | Service for retrieving current and historical information for YARN applications.
hbase-hmaster | 2.2.3 | Service for an HBase cluster responsible for coordination of Regions and execution of administrative commands.
hbase-region-server | 2.2.3 | Service for serving one or more HBase regions.
hbase-client | 2.2.3 | HBase command-line client.
hbase-rest-server | 2.2.3 | Service providing a RESTful HTTP endpoint for HBase.
hbase-thrift-server | 2.2.3 | Service providing a Thrift endpoint to HBase.
hcatalog-client | 3.1.2-amzn-0 | The 'hcat' command line client for manipulating hcatalog-server.
hcatalog-server | 3.1.2-amzn-0 | Service providing HCatalog, a table and storage management layer for distributed applications.
hcatalog-webhcat-server | 3.1.2-amzn-0 | HTTP endpoint providing a REST interface to HCatalog.
hive-client | 3.1.2-amzn-0 | Hive command line client.
hive-hbase | 3.1.2-amzn-0 | Hive-hbase client.
hive-metastore-server | 3.1.2-amzn-0 | Service for accessing the Hive metastore, a semantic repository storing metadata for SQL on Hadoop operations.
hive-server2 | 3.1.2-amzn-0 | Service for accepting Hive queries as web requests.
hudi | 0.5.0-incubating-amzn-1 | Incremental processing framework to power data pipelines at low latency and high efficiency.
hudi-presto | 0.5.0-incubating-amzn-1 | Bundle library for running Presto with Hudi.
hue-server | 4.4.0 | Web application for analyzing data using Hadoop ecosystem applications
jupyterhub | 1.0.0 | Multi-user server for Jupyter notebooks
livy-server | 0.6.0-incubating | REST interface for interacting with Apache Spark
nginx | 1.12.1 | nginx [engine x] is an HTTP and reverse proxy server
mxnet | 1.5.1 | A flexible, scalable, and efficient library for deep learning.
mariadb-server | 5.5.64+ | MariaDB database server.
nvidia-cuda | 9.2.88 | Nvidia drivers and Cuda toolkit
oozie-client | 5.1.0 | Oozie command-line client.
oozie-server | 5.1.0 | Service for accepting Oozie workflow requests.
opencv | 3.4.0 | Open Source Computer Vision Library.
phoenix-library | 5.0.0-HBase-2.0 | The Phoenix libraries for server and client
phoenix-query-server | 5.0.0-HBase-2.0 | A lightweight server providing JDBC access as well as Protocol Buffers and JSON format access to the Avatica API
presto-coordinator | 0.230 | Service for accepting queries and managing query execution among presto-workers.
presto-worker | 0.230 | Service for executing pieces of a query.
presto-client | 0.230 | Presto command-line client which is installed on an HA cluster's stand-by masters where Presto server is not started.
r | 3.4.3 | The R Project for Statistical Computing
spark-client | 2.4.4 | Spark command-line clients.
spark-history-server | 2.4.4 | Web UI for viewing logged events for the lifetime of a completed Spark application.
spark-on-yarn | 2.4.4 | In-memory execution engine for YARN.
spark-yarn-slave | 2.4.4 | Apache Spark libraries needed by YARN slaves.
tensorflow | 1.14.0 | TensorFlow open source software library for high performance numerical computation.
tez-on-yarn | 0.9.2 | The Tez YARN application and libraries.
webserver | 2.4.41+ | Apache HTTP server.
zeppelin-server | 0.9.0-SNAPSHOT | Web-based notebook that enables interactive data analytics.
zookeeper-server | 3.4.14 | Centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.
zookeeper-client | 3.4.14 | ZooKeeper command line client.

6.0.0 configuration classifications

Configuration classifications allow you to customize applications. These often correspond to a configuration XML file for the application, such as hive-site.xml. For more information, see Configure applications.
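
Classifications are supplied as JSON when you create a cluster. The property shown below is the standard example from the Configure applications topic and is illustrative only:

```json
[
  {
    "Classification": "core-site",
    "Properties": {
      "hadoop.security.groups.cache.secs": "250"
    }
  }
]
```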

emr-6.0.0 classifications
Classification | Description
capacity-scheduler | Change values in Hadoop's capacity-scheduler.xml file.
container-executor | Change values in Hadoop YARN's container-executor.cfg file.
docker-conf | Change values in Hadoop YARN's file.
core-site | Change values in Hadoop's core-site.xml file.
emrfs-site | Change EMRFS settings.
hadoop-env | Change values in the Hadoop environment for all Hadoop components.
hadoop-log4j | Change values in Hadoop's log4j.properties file.
hadoop-ssl-server | Change Hadoop SSL server configuration.
hadoop-ssl-client | Change Hadoop SSL client configuration.
hbase | Amazon EMR-curated settings for Apache HBase.
hbase-env | Change values in HBase's environment.
hbase-log4j | Change values in HBase's hbase-log4j.properties file.
hbase-metrics | Change values in HBase's hadoop-metrics2-hbase.properties file.
hbase-policy | Change values in HBase's hbase-policy.xml file.
hbase-site | Change values in HBase's hbase-site.xml file.
hdfs-encryption-zones | Configure HDFS encryption zones.
hdfs-env | Change values in the HDFS environment.
hdfs-site | Change values in HDFS's hdfs-site.xml.
hcatalog-env | Change values in HCatalog's environment.
hcatalog-server-jndi | Change values in HCatalog's jndi.properties.
hcatalog-server-proto-hive-site | Change values in HCatalog's proto-hive-site.xml.
hcatalog-webhcat-env | Change values in HCatalog WebHCat's environment.
hcatalog-webhcat-log4j | Change values in HCatalog WebHCat's log4j2.properties.
hcatalog-webhcat-site | Change values in HCatalog WebHCat's webhcat-site.xml file.
hive | Amazon EMR-curated settings for Apache Hive.
hive-beeline-log4j2 | Change values in Hive's beeline-log4j2.properties file.
hive-parquet-logging | Change values in Hive's parquet-logging.properties file.
hive-env | Change values in the Hive environment.
hive-exec-log4j2 | Change values in Hive's hive-exec-log4j2.properties file.
hive-llap-daemon-log4j2 | Change values in Hive's llap-daemon-log4j2.properties file.
hive-log4j2 | Change values in Hive's hive-log4j2.properties file.
hive-site | Change values in Hive's hive-site.xml file
hiveserver2-site | Change values in Hive Server2's hiveserver2-site.xml file
hue-ini | Change values in Hue's ini file
httpfs-env | Change values in the HTTPFS environment.
httpfs-site | Change values in Hadoop's httpfs-site.xml file.
hadoop-kms-acls | Change values in Hadoop's kms-acls.xml file.
hadoop-kms-env | Change values in the Hadoop KMS environment.
hadoop-kms-log4j | Change values in Hadoop's kms-log4j.properties file.
hadoop-kms-site | Change values in Hadoop's kms-site.xml file.
jupyter-notebook-conf | Change values in Jupyter Notebook's jupyter_notebook_config.py file.
jupyter-hub-conf | Change values in JupyterHub's jupyterhub_config.py file.
jupyter-s3-conf | Configure Jupyter Notebook S3 persistence.
jupyter-sparkmagic-conf | Change values in Sparkmagic's config.json file.
livy-conf | Change values in Livy's livy.conf file.
livy-env | Change values in the Livy environment.
livy-log4j | Change Livy log4j.properties settings.
mapred-env | Change values in the MapReduce application's environment.
mapred-site | Change values in the MapReduce application's mapred-site.xml file.
oozie-env | Change values in Oozie's environment.
oozie-log4j | Change values in Oozie's oozie-log4j.properties file.
oozie-site | Change values in Oozie's oozie-site.xml file.
phoenix-hbase-metrics | Change values in Phoenix's hadoop-metrics2-hbase.properties file.
phoenix-hbase-site | Change values in Phoenix's hbase-site.xml file.
phoenix-log4j | Change values in Phoenix's log4j.properties file.
phoenix-metrics | Change values in Phoenix's hadoop-metrics2-phoenix.properties file.
presto-log | Change values in Presto's log.properties file.
presto-config | Change values in Presto's config.properties file.
presto-password-authenticator | Change values in Presto's password-authenticator.properties file.
presto-env | Change values in Presto's presto-env.sh file.
presto-node | Change values in Presto's node.properties file.
presto-connector-blackhole | Change values in Presto's blackhole.properties file.
presto-connector-cassandra | Change values in Presto's cassandra.properties file.
presto-connector-hive | Change values in Presto's hive.properties file.
presto-connector-jmx | Change values in Presto's jmx.properties file.
presto-connector-kafka | Change values in Presto's kafka.properties file.
presto-connector-localfile | Change values in Presto's localfile.properties file.
presto-connector-memory | Change values in Presto's memory.properties file.
presto-connector-mongodb | Change values in Presto's mongodb.properties file.
presto-connector-mysql | Change values in Presto's mysql.properties file.
presto-connector-postgresql | Change values in Presto's postgresql.properties file.
presto-connector-raptor | Change values in Presto's raptor.properties file.
presto-connector-redis | Change values in Presto's redis.properties file.
presto-connector-redshift | Change values in Presto's redshift.properties file.
presto-connector-tpch | Change values in Presto's tpch.properties file.
presto-connector-tpcds | Change values in Presto's tpcds.properties file.
ranger-kms-dbks-site | Change values in dbks-site.xml file of Ranger KMS.
ranger-kms-site | Change values in ranger-kms-site.xml file of Ranger KMS.
ranger-kms-env | Change values in the Ranger KMS environment.
ranger-kms-log4j | Change values in kms-log4j.properties file of Ranger KMS.
ranger-kms-db-ca | Change values for CA file on S3 for MySQL SSL connection with Ranger KMS.
recordserver-env | Change values in the EMR RecordServer environment.
recordserver-conf | Change values in EMR RecordServer's server.properties file.
recordserver-log4j | Change values in EMR RecordServer's log4j.properties file.
spark | Amazon EMR-curated settings for Apache Spark.
spark-defaults | Change values in Spark's spark-defaults.conf file.
spark-env | Change values in the Spark environment.
spark-hive-site | Change values in Spark's hive-site.xml file
spark-log4j | Change values in Spark's log4j.properties file.
spark-metrics | Change values in Spark's metrics.properties file.
tez-site | Change values in Tez's tez-site.xml file.
yarn-env | Change values in the YARN environment.
yarn-site | Change values in YARN's yarn-site.xml file.
zeppelin-env | Change values in the Zeppelin environment.
zoo-cfg | Change values in ZooKeeper's zoo.cfg file.
zookeeper-log4j | Change values in ZooKeeper's log4j.properties file.