
Amazon EMR release 6.9.0

6.9.0 application versions

The following applications are supported in this release: Delta, Flink, Ganglia, HBase, HCatalog, Hadoop, Hive, Hudi, Hue, Iceberg, JupyterEnterpriseGateway, JupyterHub, Livy, MXNet, Oozie, Phoenix, Pig, Presto, Spark, Sqoop, TensorFlow, Tez, Trino, Zeppelin, and ZooKeeper.

The table below lists the application versions available in this release of Amazon EMR and the application versions in the preceding three Amazon EMR releases (when applicable).

For a comprehensive history of application versions for each release of Amazon EMR, see the application version history topics in the Amazon EMR Release Guide.

Application version information
Application               emr-6.9.0        emr-6.8.1        emr-6.8.0        emr-6.7.0
Amazon SDK for Java       1.12.170         1.12.170         1.12.170         1.12.170
Python                    2.7, 3.7         2.7, 3.7         2.7, 3.7         2.7, 3.7
Scala                     2.12.15          2.12.15          2.12.15          2.12.15
AmazonCloudWatchAgent     -                -                -                -
Delta                     2.1.0            -                -                -
Flink                     1.15.2           1.15.1           1.15.1           1.14.2
Ganglia                   3.7.2            3.7.2            3.7.2            3.7.2
HBase                     2.4.13           2.4.12           2.4.12           2.4.4
HCatalog                  3.1.3            3.1.3            3.1.3            3.1.3
Hadoop                    3.3.3            3.2.1            3.2.1            3.2.1
Hive                      3.1.3            3.1.3            3.1.3            3.1.3
Hudi                      0.12.1-amzn-0    0.11.1-amzn-0    0.11.1-amzn-0    0.11.0-amzn-0
Hue                       4.10.0           4.10.0           4.10.0           4.10.0
Iceberg                   0.14.1-amzn-0    0.14.0-amzn-0    0.14.0-amzn-0    0.13.1-amzn-0
JupyterEnterpriseGateway  2.6.0            2.1.0            2.1.0            2.1.0
JupyterHub                1.4.1            1.4.1            1.4.1            1.4.1
Livy                      0.7.1            0.7.1            0.7.1            0.7.1
MXNet                     1.9.1            1.9.1            1.9.1            1.8.0
Mahout                    -                -                -                -
Oozie                     5.2.1            5.2.1            5.2.1            5.2.1
Phoenix                   5.1.2            5.1.2            5.1.2            5.1.2
Pig                       0.17.0           0.17.0           0.17.0           0.17.0
Presto                    0.276            0.273            0.273            0.272
Spark                     3.3.0            3.3.0            3.3.0            3.2.1
Sqoop                     1.4.7            1.4.7            1.4.7            1.4.7
TensorFlow                2.10.0           2.9.1            2.9.1            2.4.1
Tez                       0.10.2           0.9.2            0.9.2            0.9.2
Trino (PrestoSQL)         398              388              388              378
Zeppelin                  0.10.1           0.10.1           0.10.1           0.10.0
ZooKeeper                 3.5.10           3.5.10           3.5.10           3.5.7

6.9.0 release notes

The following release notes include information for Amazon EMR release 6.9.0. Changes are relative to Amazon EMR release 6.8.0. For information on the release timeline, see the change log.

New Features
  • Amazon EMR release 6.9.0 supports Apache Spark RAPIDS 22.08.0, Apache Hudi 0.12.1, Apache Iceberg 0.14.1, Trino 398, and Tez 0.10.2.

  • Amazon EMR release 6.9.0 includes a new open-source application, Delta Lake 2.1.0.

  • The Amazon Redshift integration for Apache Spark is included in Amazon EMR releases 6.9.0 and later. Previously an open-source tool, the native integration is a Spark connector that you can use to build Apache Spark applications that read from and write to data in Amazon Redshift and Amazon Redshift Serverless. For more information, see Using Amazon Redshift integration for Apache Spark with Amazon EMR.

  • Amazon EMR release 6.9.0 adds support for archiving logs to Amazon S3 during cluster scale-down. Previously, you could only archive log files to Amazon S3 during cluster termination. The new capability ensures that log files generated on the cluster persist on Amazon S3 even after the node is terminated. For more information, see Configure cluster logging and debugging.
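    Launch-time logging is configured with the --log-uri option on create-cluster; the following is a minimal sketch in which the bucket name and instance settings are placeholders.

      aws emr create-cluster \
        --name "my-cluster" \
        --release-label emr-6.9.0 \
        --applications Name=Spark \
        --instance-type m5.xlarge --instance-count 3 \
        --use-default-roles \
        --log-uri s3://DOC-EXAMPLE-BUCKET/emr-logs/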

  • To support long running queries, Trino now includes a fault-tolerant execution mechanism. Fault-tolerant execution mitigates query failures by retrying failed queries or their component tasks. For more information, see Fault-tolerant execution in Trino.
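    As a minimal sketch, you can enable this at cluster launch through the trino-config and trino-exchange-manager classifications listed later on this page. The property names (retry-policy, exchange-manager.name, exchange.base-directories) follow open-source Trino, and the Amazon S3 path is a placeholder.

      aws emr create-cluster \
        --release-label emr-6.9.0 \
        --applications Name=Trino \
        --instance-type m5.xlarge --instance-count 3 \
        --use-default-roles \
        --configurations '[
          {"Classification": "trino-config",
           "Properties": {"retry-policy": "TASK"}},
          {"Classification": "trino-exchange-manager",
           "Properties": {"exchange-manager.name": "filesystem",
                          "exchange.base-directories": "s3://DOC-EXAMPLE-BUCKET/trino-exchange/"}}
        ]'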

  • You can use Apache Flink on Amazon EMR for unified BATCH and STREAM processing of Apache Hive tables or metadata of any Flink table source, such as Iceberg, Kinesis, or Kafka. You can specify the Amazon Glue Data Catalog as the metastore for Flink using the Amazon Web Services Management Console, Amazon CLI, or Amazon EMR API. For more information, see Configuring Flink in Amazon EMR.

  • You can now specify Amazon Identity and Access Management (IAM) runtime roles and Amazon Lake Formation-based access control for Apache Spark, Apache Hive, and Presto queries on Amazon EMR on EC2 clusters with Amazon SageMaker Studio. For more information, see Configure runtime roles for Amazon EMR steps.
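    As a sketch, a step submitted with a runtime role looks like the following; the cluster ID, role ARN, and step arguments are placeholders.

      aws emr add-steps \
        --cluster-id j-XXXXXXXXXXXXX \
        --execution-role-arn arn:aws:iam::111122223333:role/my-emr-runtime-role \
        --steps 'Type=Spark,Name=SparkPi,ActionOnFailure=CONTINUE,Args=[--class,org.apache.spark.examples.SparkPi,/usr/lib/spark/examples/jars/spark-examples.jar,10]'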

Known Issues
  • For Amazon EMR release 6.9.0, Trino does not work on clusters enabled for Apache Ranger. If you need to use Trino with Ranger, contact Amazon Web Services Support.

  • If you use the Amazon Redshift integration for Apache Spark and have a time, timetz, timestamp, or timestamptz with microsecond precision in Parquet format, the connector rounds the time values to the nearest millisecond value. As a workaround, use the text unload format by setting the unload_s3_format parameter.

  • When you use Spark with Hive partition location formatting to read data in Amazon S3, and you run Spark on Amazon EMR releases 5.30.0 to 5.36.0, and 6.2.0 to 6.9.0, you might encounter an issue that prevents your cluster from reading data correctly. This can happen if your partitions have all of the following characteristics:

    • Two or more partitions are scanned from the same table.

    • At least one partition directory path is a prefix of at least one other partition directory path, for example, s3://bucket/table/p=a is a prefix of s3://bucket/table/p=a b.

    • The first character that follows the prefix in the other partition directory has a UTF-8 value that’s less than the / character (U+002F). For example, the space character (U+0020) that occurs between a and b in s3://bucket/table/p=a b falls into this category. Note that there are 14 other non-control characters: !"#$%&'()*+,-. For more information, see UTF-8 encoding table and Unicode characters.

    As a workaround to this issue, set the spark.sql.sources.fastS3PartitionDiscovery.enabled configuration to false in the spark-defaults classification.
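    For example, a cluster launched with the workaround already applied might look like the following sketch (the instance settings are placeholders).

      aws emr create-cluster \
        --release-label emr-6.9.0 \
        --applications Name=Spark \
        --instance-type m5.xlarge --instance-count 3 \
        --use-default-roles \
        --configurations '[
          {"Classification": "spark-defaults",
           "Properties": {"spark.sql.sources.fastS3PartitionDiscovery.enabled": "false"}}
        ]'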

  • Connections to Amazon EMR clusters from Amazon SageMaker Studio may intermittently fail with a 403 Forbidden response code. This error happens when setup of the IAM role on the cluster takes longer than 60 seconds. As a workaround, you can install an Amazon EMR patch to enable retries and increase the timeout to a minimum of 300 seconds. Use the following steps to apply the bootstrap action when you launch your cluster.

    1. Download the bootstrap script and RPM files from the following Amazon S3 URIs.

      s3://emr-data-access-control-us-east-1/customer-bootstrap-actions/gcsc/replace-rpms.sh
      s3://emr-data-access-control-us-east-1/customer-bootstrap-actions/gcsc/emr-secret-agent-1.18.0-SNAPSHOT20221121212949.noarch.rpm
    2. Upload the files from the previous step to an Amazon S3 bucket that you own. The bucket must be in the same Amazon Web Services Region where you plan to launch the cluster.

    3. Include the following bootstrap action when you launch your EMR cluster. Replace bootstrap_URI and RPM_URI with the corresponding URIs from Amazon S3.

      --bootstrap-actions "Path=bootstrap_URI,Args=[RPM_URI]"
  • With Amazon EMR releases 5.36.0 and 6.6.0 through 6.9.0, SecretAgent and RecordServer service components may experience log data loss due to an incorrect file name pattern configuration in Log4j2 properties. The incorrect configuration causes the components to generate only one log file per day. When log rotation occurs, it overwrites the existing file instead of generating a new log file as expected. As a workaround, use a bootstrap action to generate log files each hour and append an auto-incrementing integer to the file name to handle the rotation.

    For Amazon EMR 6.6.0 through 6.9.0 releases, use the following bootstrap action when you launch a cluster.

    --bootstrap-actions "Path=s3://emr-data-access-control-us-east-1/customer-bootstrap-actions/log-rotation-emr-6x/replace-puppet.sh,Args=[]"

    For Amazon EMR 5.36.0, use the following bootstrap action when you launch a cluster.

    --bootstrap-actions "Path=s3://emr-data-access-control-us-east-1/customer-bootstrap-actions/log-rotation-emr-5x/replace-puppet.sh,Args=[]"
  • Apache Flink provides Native S3 FileSystem and Hadoop FileSystem Connectors, which let applications create a FileSink and write the data into Amazon S3. This FileSink fails with one of the following two exceptions.

    java.lang.UnsupportedOperationException: Recoverable writers on Hadoop are only supported for HDFS
    Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.io.retry.RetryPolicies.retryOtherThanRemoteAndSaslException(Lorg/apache/hadoop/io/retry/RetryPolicy;Ljava/util/Map;)Lorg/apache/hadoop/io/retry/RetryPolicy; at org.apache.hadoop.yarn.client.RMProxy.createRetryPolicy(RMProxy.java:302) ~[hadoop-yarn-common-3.3.3-amzn-0.jar:?]

    As a workaround, you can install an Amazon EMR patch, which fixes the above issue in Flink. To apply the bootstrap action when you launch your cluster, complete the following steps.

    1. Download the flink-rpm to your Amazon S3 bucket. Your RPM path is s3://DOC-EXAMPLE-BUCKET/rpms/flink/.

    2. Download the bootstrap script and RPM files from Amazon S3 using the following URI. Replace regionName with the Amazon Web Services Region where you plan to launch the cluster.

      s3://emr-data-access-control-regionName/customer-bootstrap-actions/gcsc/replace-rpms.sh
    3. Include the following bootstrap action when you launch your EMR cluster. Replace bootstrap_URI and RPM_URI with the corresponding URIs from Amazon S3.

      --bootstrap-actions "Path=bootstrap_URI,Args=[RPM_URI]"
  • Hadoop 3.3.3 introduced a change in YARN (YARN-9608) that keeps nodes where containers ran in a decommissioning state until the application completes. This change ensures that local data such as shuffle data doesn't get lost, and you don't need to re-run the job. In Amazon EMR 6.8.0 and 6.9.0, this approach might also lead to underutilization of resources on clusters with or without managed scaling enabled.

    With Amazon EMR 6.10.0, there's a workaround for this issue: set the value of yarn.resourcemanager.decommissioning-nodes-watcher.wait-for-applications to false in yarn-site.xml. In Amazon EMR releases 6.11.0 and higher, as well as 6.8.1, 6.9.1, and 6.10.1, the config is set to false by default to resolve this issue.
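    For example, on Amazon EMR 6.10.0 you can apply that setting at launch with a yarn-site classification like the following sketch (the instance settings are placeholders).

      aws emr create-cluster \
        --release-label emr-6.10.0 \
        --applications Name=Spark \
        --instance-type m5.xlarge --instance-count 3 \
        --use-default-roles \
        --configurations '[
          {"Classification": "yarn-site",
           "Properties": {"yarn.resourcemanager.decommissioning-nodes-watcher.wait-for-applications": "false"}}
        ]'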

Changes, Enhancements, and Resolved Issues
  • For Amazon EMR release 6.9.0 and later, all components installed by Amazon EMR that use Log4j libraries use Log4j version 2.17.1 or later.

  • When you use the DynamoDB connector with Spark on Amazon EMR versions 6.6.0, 6.7.0, and 6.8.0, all reads from your table return an empty result, even though the input split references non-empty data. Amazon EMR release 6.9.0 fixes this issue.

  • Amazon EMR 6.9.0 adds limited support for Lake Formation-based access control with Apache Hudi when reading data using Spark SQL. The support is for SELECT queries using Spark SQL and is limited to column-level access control. For more information, see Hudi and Lake Formation.
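    Once the cluster is set up for Lake Formation as described in Hudi and Lake Formation, a permitted read is an ordinary Spark SQL SELECT; in this sketch the database, table, and column names are placeholders.

      spark-sql -e "SELECT visible_col_1, visible_col_2 FROM my_database.my_hudi_table LIMIT 10"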

  • When you use Amazon EMR 6.9.0 to create a Hadoop cluster with Node Labels enabled, the YARN metrics API returns aggregated information across all partitions, instead of the default partition. For more information, see YARN-11414.
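    If you consume these metrics programmatically, you can inspect the aggregated values that the ResourceManager REST API returns on the primary node; the endpoint and port below are the YARN defaults.

      curl -s http://localhost:8088/ws/v1/cluster/metrics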

  • With Amazon EMR release 6.9.0, we've updated Trino to version 398, which uses Java 17. The previous supported version of Trino for Amazon EMR 6.8.0 was Trino 388 running on Java 11. For more information about this change, see Trino updates to Java 17 on the Trino blog.

  • This release fixes a timing sequence mismatch issue between Apache BigTop and the Amazon EMR on EC2 cluster startup sequence. This timing sequence mismatch occurs when a system attempts to perform two or more operations at the same time instead of doing them in the proper sequence. As a result, certain cluster configurations experienced instance startup timeouts and slower cluster startup times.

  • When you launch a cluster with the latest patch release of Amazon EMR 5.36 or higher, 6.6 or higher, or 7.0 or higher, Amazon EMR uses the latest Amazon Linux 2023 or Amazon Linux 2 release for the default Amazon EMR AMI. For more information, see Using the default Amazon Linux AMI for Amazon EMR.

    Note

    This release no longer gets automatic AMI updates since it has been succeeded by one or more patch releases. The patch release is denoted by the number after the second decimal point (6.8.1). To see if you're using the latest patch release, check the available releases in the Release Guide, check the Amazon EMR release dropdown when you create a cluster in the console, or use the ListReleaseLabels API or list-release-labels CLI action. To get updates about new releases, subscribe to the RSS feed on the What's new? page.
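    For example, the following CLI call lists the 6.9 release labels that are available in your Region.

      aws emr list-release-labels --filters Prefix=emr-6.9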

    OsReleaseLabel (Amazon Linux version) Amazon Linux kernel version Available date Supported Regions
    2.0.20230808.0 4.14.320 August 24, 2023 US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Europe (Stockholm), Europe (Milan), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Osaka), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Jakarta), Asia Pacific (Melbourne), Africa (Cape Town), South America (São Paulo), Middle East (Bahrain), Canada (Central), Israel (Tel Aviv)
    2.0.20230727.0 4.14.320 August 14, 2023 US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Europe (Stockholm), Europe (Milan), Europe (Spain), Europe (Frankfurt), Europe (Zurich), Europe (Ireland), Europe (London), Europe (Paris), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Hyderabad), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Osaka), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Jakarta), Asia Pacific (Melbourne), Africa (Cape Town), South America (São Paulo), Middle East (Bahrain), Middle East (UAE), Canada (Central), Israel (Tel Aviv)
    2.0.20230719.0 4.14.320 August 2, 2023 US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Europe (Stockholm), Europe (Milan), Europe (Spain), Europe (Frankfurt), Europe (Zurich), Europe (Ireland), Europe (London), Europe (Paris), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Hyderabad), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Osaka), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Jakarta), Asia Pacific (Melbourne), Africa (Cape Town), South America (São Paulo), Middle East (Bahrain), Middle East (UAE), Canada (Central), Israel (Tel Aviv)
    2.0.20230628.0 4.14.318 July 12, 2023 US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Europe (Stockholm), Europe (Ireland), Europe (London), Europe (Paris), Europe (Frankfurt), Europe (Milan), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Jakarta), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Osaka), Asia Pacific (Singapore), Asia Pacific (Sydney), Africa (Cape Town), South America (São Paulo), Middle East (Bahrain)
    2.0.20230612.0 4.14.314 June 23, 2023 US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Europe (Stockholm), Europe (Ireland), Europe (London), Europe (Paris), Europe (Frankfurt), Europe (Milan), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Jakarta), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Osaka), Asia Pacific (Singapore), Asia Pacific (Sydney), Africa (Cape Town), South America (São Paulo), Middle East (Bahrain)
    2.0.20230504.1 4.14.313 May 16, 2023 US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Europe (Stockholm), Europe (Ireland), Europe (London), Europe (Paris), Europe (Frankfurt), Europe (Milan), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Jakarta), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Osaka), Asia Pacific (Singapore), Asia Pacific (Sydney), Africa (Cape Town), South America (São Paulo), Middle East (Bahrain)
    2.0.20230418.0 4.14.311 May 3, 2023 US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Europe (Stockholm), Europe (Ireland), Europe (London), Europe (Paris), Europe (Frankfurt), Europe (Milan), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Jakarta), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Osaka), Asia Pacific (Singapore), Asia Pacific (Sydney), Africa (Cape Town), South America (São Paulo), Middle East (Bahrain)
    2.0.20230404.1 4.14.311 April 18, 2023 US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Europe (Stockholm), Europe (Ireland), Europe (London), Europe (Paris), Europe (Frankfurt), Europe (Milan), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Jakarta), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Osaka), Asia Pacific (Singapore), Asia Pacific (Sydney), Africa (Cape Town), South America (São Paulo), Middle East (Bahrain)
    2.0.20230404.0 4.14.311 April 10, 2023 US East (N. Virginia), Europe (Paris)
    2.0.20230320.0 4.14.309 March 30, 2023 US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Europe (Stockholm), Europe (Ireland), Europe (London), Europe (Paris), Europe (Frankfurt), Europe (Milan), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Jakarta), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Osaka), Asia Pacific (Singapore), Asia Pacific (Sydney), Africa (Cape Town), South America (São Paulo), Middle East (Bahrain)
    2.0.20230307.0 4.14.305 March 15, 2023 US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Europe (Stockholm), Europe (Ireland), Europe (London), Europe (Paris), Europe (Frankfurt), Europe (Milan), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Jakarta), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Osaka), Asia Pacific (Singapore), Asia Pacific (Sydney), Africa (Cape Town), South America (São Paulo), Middle East (Bahrain)
    2.0.20230207.0 4.14.304 February 22, 2023 US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Europe (Stockholm), Europe (Ireland), Europe (London), Europe (Paris), Europe (Frankfurt), Europe (Milan), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Jakarta), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Osaka), Asia Pacific (Singapore), Asia Pacific (Sydney), Africa (Cape Town), South America (São Paulo), Middle East (Bahrain)
    2.0.20221210.1 4.14.301 January 12, 2023 US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Europe (Stockholm), Europe (Ireland), Europe (London), Europe (Paris), Europe (Frankfurt), Europe (Milan), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Jakarta), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Osaka), Asia Pacific (Singapore), Asia Pacific (Sydney), Africa (Cape Town), South America (São Paulo), Middle East (Bahrain)
    2.0.20221103.3 4.14.296 December 5, 2022 US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Europe (Stockholm), Europe (Ireland), Europe (London), Europe (Paris), Europe (Frankfurt), Europe (Milan), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Jakarta), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Osaka), Asia Pacific (Singapore), Asia Pacific (Sydney), Africa (Cape Town), South America (São Paulo), Middle East (Bahrain)

6.9.0 component versions

The components that Amazon EMR installs with this release are listed below. Some are installed as part of big-data application packages. Others are unique to Amazon EMR and installed for system processes and features. These typically start with emr or aws. Big-data application packages in the most recent Amazon EMR release are usually the latest version found in the community. We make community releases available in Amazon EMR as quickly as possible.

Some components in Amazon EMR differ from community versions. These components have a version label in the form CommunityVersion-amzn-EmrVersion. The EmrVersion starts at 0. For example, if an open-source community component named myapp-component with version 2.2 has been modified three times for inclusion in different Amazon EMR releases, its release version is listed as 2.2-amzn-2.

Component                     Version            Description
aws-sagemaker-spark-sdk       1.4.2              Amazon SageMaker Spark SDK
delta                         2.1.0              Delta Lake is an open table format for huge analytic datasets
emr-ddb                       4.16.0             Amazon DynamoDB connector for Hadoop ecosystem applications.
emr-goodies                   3.3.0              Extra convenience libraries for the Hadoop ecosystem.
emr-kinesis                   3.6.0              Amazon Kinesis connector for Hadoop ecosystem applications.
emr-notebook-env              1.7.0              Conda env for EMR notebooks, which includes Jupyter Enterprise Gateway
emr-s3-dist-cp                2.23.0             Distributed copy application optimized for Amazon S3.
emr-s3-select                 2.2.0              EMR S3Select connector
emrfs                         2.54.0             Amazon S3 connector for Hadoop ecosystem applications.
flink-client                  1.15.2             Apache Flink command line client scripts and applications.
flink-jobmanager-config       1.15.2             Managing resources on EMR nodes for Apache Flink JobManager.
ganglia-monitor               3.7.2              Embedded Ganglia agent for Hadoop ecosystem applications along with the Ganglia monitoring agent.
ganglia-metadata-collector    3.7.2              Ganglia metadata collector for aggregating metrics from Ganglia monitoring agents.
ganglia-web                   3.7.1              Web application for viewing metrics collected by the Ganglia metadata collector.
hadoop-client                 3.3.3-amzn-1       Hadoop command-line clients such as 'hdfs', 'hadoop', or 'yarn'.
hadoop-hdfs-datanode          3.3.3-amzn-1       HDFS node-level service for storing blocks.
hadoop-hdfs-library           3.3.3-amzn-1       HDFS command-line client and library
hadoop-hdfs-namenode          3.3.3-amzn-1       HDFS service for tracking file names and block locations.
hadoop-hdfs-journalnode       3.3.3-amzn-1       HDFS service for managing the Hadoop filesystem journal on HA clusters.
hadoop-httpfs-server          3.3.3-amzn-1       HTTP endpoint for HDFS operations.
hadoop-kms-server             3.3.3-amzn-1       Cryptographic key management server based on Hadoop's KeyProvider API.
hadoop-mapred                 3.3.3-amzn-1       MapReduce execution engine libraries for running a MapReduce application.
hadoop-yarn-nodemanager       3.3.3-amzn-1       YARN service for managing containers on an individual node.
hadoop-yarn-resourcemanager   3.3.3-amzn-1       YARN service for allocating and managing cluster resources and distributed applications.
hadoop-yarn-timeline-server   3.3.3-amzn-1       Service for retrieving current and historical information for YARN applications.
hbase-hmaster                 2.4.13-amzn-0      Service for an HBase cluster responsible for coordination of Regions and execution of administrative commands.
hbase-region-server           2.4.13-amzn-0      Service for serving one or more HBase regions.
hbase-client                  2.4.13-amzn-0      HBase command-line client.
hbase-rest-server             2.4.13-amzn-0      Service providing a RESTful HTTP endpoint for HBase.
hbase-thrift-server           2.4.13-amzn-0      Service providing a Thrift endpoint to HBase.
hbase-operator-tools          2.4.13-amzn-0      Repair tool for Apache HBase clusters.
hcatalog-client               3.1.3-amzn-2       The 'hcat' command line client for manipulating hcatalog-server.
hcatalog-server               3.1.3-amzn-2       Service providing HCatalog, a table and storage management layer for distributed applications.
hcatalog-webhcat-server       3.1.3-amzn-2       HTTP endpoint providing a REST interface to HCatalog.
hive-client                   3.1.3-amzn-2       Hive command line client.
hive-hbase                    3.1.3-amzn-2       Hive-hbase client.
hive-metastore-server         3.1.3-amzn-2       Service for accessing the Hive metastore, a semantic repository storing metadata for SQL on Hadoop operations.
hive-server2                  3.1.3-amzn-2       Service for accepting Hive queries as web requests.
hudi                          0.12.1-amzn-0      Incremental processing framework to power data pipelines at low latency and high efficiency.
hudi-presto                   0.12.1-amzn-0      Bundle library for running Presto with Hudi.
hudi-trino                    0.12.1-amzn-0      Bundle library for running Trino with Hudi.
hudi-spark                    0.12.1-amzn-0      Bundle library for running Spark with Hudi.
hue-server                    4.10.0             Web application for analyzing data using Hadoop ecosystem applications
iceberg                       0.14.1-amzn-0      Apache Iceberg is an open table format for huge analytic datasets
jupyterhub                    1.4.1              Multi-user server for Jupyter notebooks
livy-server                   0.7.1-incubating   REST interface for interacting with Apache Spark
nginx                         1.12.1             nginx [engine x] is an HTTP and reverse proxy server
mxnet                         1.9.1              A flexible, scalable, and efficient library for deep learning.
mariadb-server                5.5.68+            MariaDB database server.
nvidia-cuda                   11.7.0             Nvidia drivers and CUDA toolkit
oozie-client                  5.2.1              Oozie command-line client.
oozie-server                  5.2.1              Service for accepting Oozie workflow requests.
opencv                        4.5.0              Open Source Computer Vision Library.
phoenix-library               5.1.2              The Phoenix libraries for server and client
phoenix-connectors            6.0.0-SNAPSHOT     Apache Phoenix connectors for Spark-3
phoenix-query-server          6.0.0              A lightweight server providing JDBC access as well as Protocol Buffers and JSON format access to the Avatica API
presto-coordinator            0.276-amzn-0       Service for accepting queries and managing query execution among presto-workers.
presto-worker                 0.276-amzn-0       Service for executing pieces of a query.
presto-client                 0.276-amzn-0       Presto command-line client which is installed on an HA cluster's stand-by masters where Presto server is not started.
trino-coordinator             398-amzn-0         Service for accepting queries and managing query execution among trino-workers.
trino-worker                  398-amzn-0         Service for executing pieces of a query.
trino-client                  398-amzn-0         Trino command-line client which is installed on an HA cluster's stand-by masters where Trino server is not started.
pig-client                    0.17.0             Pig command-line client.
r                             4.0.2              The R Project for Statistical Computing
ranger-kms-server             2.0.0              Apache Ranger Key Management System
spark-client                  3.3.0-amzn-1       Spark command-line clients.
spark-history-server         3.3.0-amzn-1       Web UI for viewing logged events for the lifetime of a completed Spark application.
spark-on-yarn                 3.3.0-amzn-1       In-memory execution engine for YARN.
spark-yarn-slave              3.3.0-amzn-1       Apache Spark libraries needed by YARN slaves.
spark-rapids                  22.08.0-amzn-0     Nvidia Spark RAPIDS plugin that accelerates Apache Spark with GPUs.
sqoop-client                  1.4.7              Apache Sqoop command-line client.
tensorflow                    2.10.0             TensorFlow open source software library for high performance numerical computation.
tez-on-yarn                   0.10.2-amzn-0      The Tez YARN application and libraries.
webserver                     2.4.41+            Apache HTTP server.
zeppelin-server               0.10.1             Web-based notebook that enables interactive data analytics.
zookeeper-server              3.5.10             Centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.
zookeeper-client              3.5.10             ZooKeeper command line client.

6.9.0 configuration classifications

Configuration classifications allow you to customize applications. These often correspond to a configuration XML file for the application, such as hive-site.xml. For more information, see Configure applications.

Reconfiguration actions occur when you specify a configuration for instance groups in a running cluster. Amazon EMR only initiates reconfiguration actions for the classifications that you modify. For more information, see Reconfigure an instance group in a running cluster.
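For example, the following command reconfigures a single classification on a running instance group; the cluster ID, instance group ID, and property value are placeholders.

  aws emr modify-instance-groups \
    --cluster-id j-XXXXXXXXXXXXX \
    --instance-groups '[
      {"InstanceGroupId": "ig-XXXXXXXXXXXXX",
       "Configurations": [
         {"Classification": "yarn-site",
          "Properties": {"yarn.nodemanager.resource.memory-mb": "12288"}}
       ]}
    ]'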

emr-6.9.0 classifications
Classifications Description Reconfiguration Actions

capacity-scheduler

Change values in Hadoop's capacity-scheduler.xml file.

Restarts the ResourceManager service.

container-executor

Change values in Hadoop YARN's container-executor.cfg file.

Not available.

container-log4j

Change values in Hadoop YARN's container-log4j.properties file.

Not available.

core-site

Change values in Hadoop's core-site.xml file.

Restarts the Hadoop HDFS services Namenode, SecondaryNamenode, Datanode, ZKFC, and Journalnode. Restarts the Hadoop YARN services ResourceManager, NodeManager, ProxyServer, and TimelineServer. Additionally restarts Hadoop KMS, Ranger KMS, HiveServer2, Hive MetaStore, Hadoop Httpfs, and MapReduce-HistoryServer.

docker-conf

Change docker related settings.

Not available.

emrfs-site

Change EMRFS settings.

Restarts the Hadoop HDFS services Namenode, SecondaryNamenode, Datanode, ZKFC, and Journalnode. Restarts the Hadoop YARN services ResourceManager, NodeManager, ProxyServer, and TimelineServer. Additionally restarts HBaseRegionserver, HBaseMaster, HBaseThrift, HBaseRest, HiveServer2, Hive MetaStore, Hadoop Httpfs, and MapReduce-HistoryServer.

flink-conf

Change flink-conf.yaml settings.

Restarts Flink history server.

flink-log4j

Change Flink log4j.properties settings.

Restarts Flink history server.

flink-log4j-session

Change Flink log4j-session.properties settings for Kubernetes/Yarn session.

Restarts Flink history server.

flink-log4j-cli

Change Flink log4j-cli.properties settings.

Restarts Flink history server.

hadoop-env

Change values in the Hadoop environment for all Hadoop components.

Restarts the Hadoop HDFS services Namenode, SecondaryNamenode, Datanode, ZKFC, and Journalnode. Restarts the Hadoop YARN services ResourceManager, NodeManager, ProxyServer, and TimelineServer. Additionally restarts PhoenixQueryserver, HiveServer2, Hive MetaStore, and MapReduce-HistoryServer.

hadoop-log4j

Change values in Hadoop's log4j.properties file.

Restarts the Hadoop HDFS services SecondaryNamenode, Datanode, and Journalnode. Restarts the Hadoop YARN services ResourceManager, NodeManager, ProxyServer, and TimelineServer. Additionally restarts Hadoop KMS, Hadoop Httpfs, and MapReduce-HistoryServer.

hadoop-ssl-server

Change hadoop ssl server configuration

Not available.

hadoop-ssl-client

Change hadoop ssl client configuration

Not available.

hbase

Amazon EMR-curated settings for Apache HBase.

Custom EMR specific property. Sets emrfs-site and hbase-site configs. See those for their associated restarts.

hbase-env

Change values in HBase's environment.

Restarts the HBase services RegionServer, HBaseMaster, ThriftServer, RestServer.

hbase-log4j

Change values in HBase's hbase-log4j.properties file.

Restarts the HBase services RegionServer, HBaseMaster, ThriftServer, RestServer.

hbase-metrics

Change values in HBase's hadoop-metrics2-hbase.properties file.

Restarts the HBase services RegionServer, HBaseMaster, ThriftServer, RestServer.

hbase-policy

Change values in HBase's hbase-policy.xml file.

Not available.

hbase-site

Change values in HBase's hbase-site.xml file.

Restarts the HBase services RegionServer, HBaseMaster, ThriftServer, RestServer. Additionally restarts Phoenix QueryServer.

hdfs-encryption-zones

Configure HDFS encryption zones.

This classification should not be reconfigured.

hdfs-env

Change values in the HDFS environment.

Restarts Hadoop HDFS services Namenode, Datanode, and ZKFC.

hdfs-site

Change values in HDFS's hdfs-site.xml.

Restarts the Hadoop HDFS services Namenode, SecondaryNamenode, Datanode, ZKFC, and Journalnode. Additionally restarts Hadoop Httpfs.

hcatalog-env

Change values in HCatalog's environment.

Restarts Hive HCatalog Server.

hcatalog-server-jndi

Change values in HCatalog's jndi.properties.

Restarts Hive HCatalog Server.

hcatalog-server-proto-hive-site

Change values in HCatalog's proto-hive-site.xml.

Restarts Hive HCatalog Server.

hcatalog-webhcat-env

Change values in HCatalog WebHCat's environment.

Restarts Hive WebHCat server.

hcatalog-webhcat-log4j2

Change values in HCatalog WebHCat's log4j2.properties.

Restarts Hive WebHCat server.

hcatalog-webhcat-site

Change values in HCatalog WebHCat's webhcat-site.xml file.

Restarts Hive WebHCat server.

hive

Amazon EMR-curated settings for Apache Hive.

Sets configurations to launch Hive LLAP service.

hive-beeline-log4j2

Change values in Hive's beeline-log4j2.properties file.

Not available.

hive-parquet-logging

Change values in Hive's parquet-logging.properties file.

Not available.

hive-env

Change values in the Hive environment.

Restarts HiveServer2, HiveMetastore, and Hive HCatalog-Server. Runs Hive schemaTool CLI commands to verify hive-metastore.

hive-exec-log4j2

Change values in Hive's hive-exec-log4j2.properties file.

Not available.

hive-llap-daemon-log4j2

Change values in Hive's llap-daemon-log4j2.properties file.

Not available.

hive-log4j2

Change values in Hive's hive-log4j2.properties file.

Not available.

hive-site

Change values in Hive's hive-site.xml file

Restarts HiveServer2, HiveMetastore, and Hive HCatalog-Server. Runs Hive schemaTool CLI commands to verify hive-metastore. Also restarts Oozie and Zeppelin.

hiveserver2-site

Change values in Hive Server2's hiveserver2-site.xml file

Not available.

hue-ini

Change values in Hue's ini file

Restarts Hue. Also activates Hue config override CLI commands to pick up new configurations.

httpfs-env

Change values in the HTTPFS environment.

Restarts Hadoop Httpfs service.

httpfs-site

Change values in Hadoop's httpfs-site.xml file.

Restarts Hadoop Httpfs service.

hadoop-kms-acls

Change values in Hadoop's kms-acls.xml file.

Not available.

hadoop-kms-env

Change values in the Hadoop KMS environment.

Restarts Hadoop-KMS service.

hadoop-kms-log4j

Change values in Hadoop's kms-log4j.properties file.

Not available.

hadoop-kms-site

Change values in Hadoop's kms-site.xml file.

Restarts Hadoop-KMS and Ranger-KMS service.

hudi-env

Change values in the Hudi environment.

Not available.

hudi-defaults

Change values in Hudi's hudi-defaults.conf file.

Not available.

iceberg-defaults

Change values in Iceberg's iceberg-defaults.conf file.

Not available.

delta-defaults

Change values in Delta's delta-defaults.conf file.

Not available.

jupyter-notebook-conf

Change values in Jupyter Notebook's jupyter_notebook_config.py file.

Not available.

jupyter-hub-conf

Change values in JupyterHubs's jupyterhub_config.py file.

Not available.

jupyter-s3-conf

Configure Jupyter Notebook S3 persistence.

Not available.

jupyter-sparkmagic-conf

Change values in Sparkmagic's config.json file.

Not available.

livy-conf

Change values in Livy's livy.conf file.

Restarts Livy Server.

livy-env

Change values in the Livy environment.

Restarts Livy Server.

livy-log4j2

Change Livy log4j2.properties settings.

Restarts Livy Server.

mapred-env

Change values in the MapReduce application's environment.

Restarts Hadoop MapReduce-HistoryServer.

mapred-site

Change values in the MapReduce application's mapred-site.xml file.

Restarts Hadoop MapReduce-HistoryServer.

oozie-env

Change values in Oozie's environment.

Restarts Oozie.

oozie-log4j

Change values in Oozie's oozie-log4j.properties file.

Restarts Oozie.

oozie-site

Change values in Oozie's oozie-site.xml file.

Restarts Oozie.

phoenix-hbase-metrics

Change values in Phoenix's hadoop-metrics2-hbase.properties file.

Not available.

phoenix-hbase-site

Change values in Phoenix's hbase-site.xml file.

Not available.

phoenix-log4j

Change values in Phoenix's log4j.properties file.

Restarts Phoenix-QueryServer.

phoenix-metrics

Change values in Phoenix's hadoop-metrics2-phoenix.properties file.

Not available.

pig-env

Change values in the Pig environment.

Not available.

pig-properties

Change values in Pig's pig.properties file.

Restarts Oozie.

pig-log4j

Change values in Pig's log4j.properties file.

Not available.

presto-log

Change values in Presto's log.properties file.

Restarts Presto-Server (for PrestoDB)

presto-config

Change values in Presto's config.properties file.

Restarts Presto-Server (for PrestoDB)

presto-password-authenticator

Change values in Presto's password-authenticator.properties file.

Not available.

presto-env

Change values in Presto's presto-env.sh file.

Restarts Presto-Server (for PrestoDB)

presto-node

Change values in Presto's node.properties file.

Not available.

presto-connector-blackhole

Change values in Presto's blackhole.properties file.

Not available.

presto-connector-cassandra

Change values in Presto's cassandra.properties file.

Not available.

presto-connector-hive

Change values in Presto's hive.properties file.

Restarts Presto-Server (for PrestoDB)

presto-connector-jmx

Change values in Presto's jmx.properties file.

Not available.

presto-connector-kafka

Change values in Presto's kafka.properties file.

Not available.

presto-connector-lakeformation

Change values in Presto's lakeformation.properties file.

Restarts Presto-Server (for PrestoDB)

presto-connector-localfile

Change values in Presto's localfile.properties file.

Not available.

presto-connector-memory

Change values in Presto's memory.properties file.

Not available.

presto-connector-mongodb

Change values in Presto's mongodb.properties file.

Not available.

presto-connector-mysql

Change values in Presto's mysql.properties file.

Not available.

presto-connector-postgresql

Change values in Presto's postgresql.properties file.

Not available.

presto-connector-raptor

Change values in Presto's raptor.properties file.

Not available.

presto-connector-redis

Change values in Presto's redis.properties file.

Not available.

presto-connector-redshift

Change values in Presto's redshift.properties file.

Not available.

presto-connector-tpch

Change values in Presto's tpch.properties file.

Not available.

presto-connector-tpcds

Change values in Presto's tpcds.properties file.

Not available.

trino-log

Change values in Trino's log.properties file.

Restarts Trino-Server (for Trino)

trino-config

Change values in Trino's config.properties file.

Restarts Trino-Server (for Trino)

trino-password-authenticator

Change values in Trino's password-authenticator.properties file.

Restarts Trino-Server (for Trino)

trino-env

Change values in Trino's trino-env.sh file.

Restarts Trino-Server (for Trino)

trino-node

Change values in Trino's node.properties file.

Not available.

trino-connector-blackhole

Change values in Trino's blackhole.properties file.

Not available.

trino-connector-cassandra

Change values in Trino's cassandra.properties file.

Not available.

trino-connector-delta

Change values in Trino's delta.properties file.

Restarts Trino-Server (for Trino)

trino-connector-hive

Change values in Trino's hive.properties file.

Restarts Trino-Server (for Trino)

trino-exchange-manager

Change values in Trino's exchange-manager.properties file.

Restarts Trino-Server (for Trino)

trino-connector-iceberg

Change values in Trino's iceberg.properties file.

Restarts Trino-Server (for Trino)

trino-connector-jmx

Change values in Trino's jmx.properties file.

Not available.

trino-connector-kafka

Change values in Trino's kafka.properties file.

Not available.

trino-connector-localfile

Change values in Trino's localfile.properties file.

Not available.

trino-connector-memory

Change values in Trino's memory.properties file.

Not available.

trino-connector-mongodb

Change values in Trino's mongodb.properties file.

Not available.

trino-connector-mysql

Change values in Trino's mysql.properties file.

Not available.

trino-connector-postgresql

Change values in Trino's postgresql.properties file.

Not available.

trino-connector-raptor

Change values in Trino's raptor.properties file.

Not available.

trino-connector-redis

Change values in Trino's redis.properties file.

Not available.

trino-connector-redshift

Change values in Trino's redshift.properties file.

Not available.

trino-connector-tpch

Change values in Trino's tpch.properties file.

Not available.

trino-connector-tpcds

Change values in Trino's tpcds.properties file.

Not available.

ranger-kms-dbks-site

Change values in dbks-site.xml file of Ranger KMS.

Restarts Ranger KMS Server.

ranger-kms-site

Change values in ranger-kms-site.xml file of Ranger KMS.

Restarts Ranger KMS Server.

ranger-kms-env

Change values in the Ranger KMS environment.

Restarts Ranger KMS Server.

ranger-kms-log4j

Change values in kms-log4j.properties file of Ranger KMS.

Not available.

ranger-kms-db-ca

Change values for CA file on S3 for MySQL SSL connection with Ranger KMS.

Not available.

spark

Amazon EMR-curated settings for Apache Spark.

This property modifies spark-defaults. See actions there.

spark-defaults

Change values in Spark's spark-defaults.conf file.

Restarts Spark history server and Spark thrift server.

spark-env

Change values in the Spark environment.

Restarts Spark history server and Spark thrift server.

spark-hive-site

Change values in Spark's hive-site.xml file

Not available.

spark-log4j2

Change values in Spark's log4j2.properties file.

Restarts Spark history server and Spark thrift server.

spark-metrics

Change values in Spark's metrics.properties file.

Restarts Spark history server and Spark thrift server.

sqoop-env

Change values in Sqoop's environment.

Not available.

sqoop-oraoop-site

Change values in Sqoop OraOop's oraoop-site.xml file.

Not available.

sqoop-site

Change values in Sqoop's sqoop-site.xml file.

Not available.

tez-site

Change values in Tez's tez-site.xml file.

Restarts Oozie and HiveServer2.

yarn-env

Change values in the YARN environment.

Restarts the Hadoop YARN services ResourceManager, NodeManager, ProxyServer, and TimelineServer. Additionally restarts MapReduce-HistoryServer.

yarn-site

Change values in YARN's yarn-site.xml file.

Restarts the Hadoop YARN services ResourceManager, NodeManager, ProxyServer, and TimelineServer. Additionally restarts Livy Server and MapReduce-HistoryServer.

zeppelin-env

Change values in the Zeppelin environment.

Restarts Zeppelin.

zeppelin-site

Change configuration settings in zeppelin-site.xml.

Restarts Zeppelin.

zookeeper-config

Change values in ZooKeeper's zoo.cfg file.

Restarts Zookeeper server.

zookeeper-log4j

Change values in ZooKeeper's log4j.properties file.

Restarts Zookeeper server.

6.9.0 change log

Change log for 6.9.0 release and release notes
Date Event Description
2023-08-30 Update release notes Added fix for timing sequence mismatch issue
2023-08-21 Update release notes Added a known issue with Hadoop 3.3.3.
2023-07-26 Update New OS release labels 2.0.20230612.0 and 2.0.20230628.0.
2022-12-13 Release notes updated Added feature and known issue for runtime roles with SageMaker
2022-11-29 Release notes and documentation updated Added feature for Amazon Redshift integration for Apache Spark
2022-11-23 Release notes updated Removed Log4j entry
2022-11-18 Deployment complete Amazon EMR 6.9 fully deployed to all supported Regions
2022-11-18 Docs publication Amazon EMR 6.9 release notes first published
2022-11-14 Initial release Amazon EMR 6.9 deployed to limited commercial Regions