
Supported applications and features

This topic provides information about the Hadoop high-availability features of HDFS NameNode and YARN ResourceManager in an Amazon EMR cluster, and how the high-availability features work with open source applications and other Amazon EMR features.

High-availability HDFS

An Amazon EMR cluster with multiple primary nodes enables the HDFS NameNode high availability feature in Hadoop. For more information, see HDFS high availability.

In an Amazon EMR cluster, two or more separate nodes are configured as NameNodes. One NameNode is in an active state and the others are in a standby state. If the node with the active NameNode fails, Amazon EMR starts an automatic HDFS failover process. A node with a standby NameNode becomes active and takes over all client operations in the cluster. Amazon EMR replaces the failed node with a new one, which then rejoins as a standby.

Note

In Amazon EMR versions 5.23.0 up to and including 5.30.1, only two of the three primary nodes run HDFS NameNode.

If you need to find out which NameNode is active, you can use SSH to connect to any primary node in the cluster and run the following command:

hdfs haadmin -getAllServiceState

The output lists the nodes where NameNode is installed and their status. For example:

ip-##-#-#-##1.ec2.internal:8020 active
ip-##-#-#-##2.ec2.internal:8020 standby
ip-##-#-#-##3.ec2.internal:8020 standby

High-availability YARN ResourceManager

An Amazon EMR cluster with multiple primary nodes enables the YARN ResourceManager high availability feature in Hadoop. For more information, see ResourceManager high availability.

In an Amazon EMR cluster with multiple primary nodes, YARN ResourceManager runs on all three primary nodes. One ResourceManager is in active state, and the other two are in standby state. If the primary node with active ResourceManager fails, Amazon EMR starts an automatic failover process. A primary node with a standby ResourceManager takes over all operations. Amazon EMR replaces the failed primary node with a new one, which then rejoins the ResourceManager quorum as a standby.

You can connect to http://master-public-dns-name:8088/cluster on any primary node, which automatically redirects you to the active ResourceManager. To find out which ResourceManager is active, use SSH to connect to any primary node in the cluster. Then run the following command to get a list of the three primary nodes and their status:

yarn rmadmin -getAllServiceState
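The output lists the three primary nodes and their states, similar to the following example. The hostnames and the admin port shown are illustrative:

ip-##-#-#-##1.ec2.internal:8033 active
ip-##-#-#-##2.ec2.internal:8033 standby
ip-##-#-#-##3.ec2.internal:8033 standby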

Supported applications in an Amazon EMR Cluster with multiple primary nodes

You can install and run the following applications on an Amazon EMR cluster with multiple primary nodes. For each application, the primary node failover process varies.

Each entry below lists the application, its availability during primary node failover, and notes about how failover affects it.
Flink

Availability not affected by primary node failover

Flink jobs on Amazon EMR run as YARN applications. Flink's JobManagers run as YARN's ApplicationMasters on core nodes. The JobManager is not affected by the primary node failover process.

If you use Amazon EMR version 5.27.0 or earlier, the JobManager is a single point of failure. When the JobManager fails, it loses all job states and does not resume the running jobs. You can enable JobManager high availability by configuring the application attempt count and checkpointing, and by enabling ZooKeeper as state storage for Flink, as sketched in the example at the end of this entry. For more information, see Configuring Flink on an Amazon EMR Cluster with multiple primary nodes.

Beginning with Amazon EMR version 5.28.0, no manual configuration is needed to enable JobManager high availability.
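For Amazon EMR 5.27.0 and earlier, the configuration involves settings along the following lines. This is a minimal sketch, not a definitive setup: the property values, instance types, key pair, subnet, and ZooKeeper quorum are placeholders or assumptions, so follow Configuring Flink on an Amazon EMR Cluster with multiple primary nodes for the supported values.

# Sketch only: property values and placeholders below are assumptions.
aws emr create-cluster \
  --name "flink-ha-sketch" \
  --release-label emr-5.27.0 \
  --applications Name=Flink Name=ZooKeeper \
  --instance-groups InstanceGroupType=MASTER,InstanceCount=3,InstanceType=m5.xlarge \
                    InstanceGroupType=CORE,InstanceCount=3,InstanceType=m5.xlarge \
  --ec2-attributes KeyName=MyKeyPair,SubnetId=subnet-XXXXXXXX \
  --use-default-roles \
  --configurations '[
    {"Classification": "yarn-site",
     "Properties": {"yarn.resourcemanager.am.max-attempts": "10"}},
    {"Classification": "flink-conf",
     "Properties": {
       "high-availability": "zookeeper",
       "high-availability.storageDir": "hdfs:///flink/recovery",
       "high-availability.zookeeper.quorum": "<zookeeper-quorum>",
       "yarn.application-attempts": "10"}}]'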

Ganglia

Availability not affected by primary node failover

Ganglia is available on all primary nodes, so Ganglia can continue to run during the primary node failover process.

Hadoop

High availability

HDFS NameNode and YARN ResourceManager automatically fail over to the standby node when the active primary node fails.

HBase

High availability

HBase automatically fails over to the standby node when the active primary node fails.

If you are connecting to HBase through a REST or Thrift server, you must switch to a different primary node when the active primary node fails.

HCatalog

Availability not affected by primary node failover

HCatalog is built upon Hive metastore, which exists outside of the cluster. HCatalog remains available during the primary node failover process.

JupyterHub

High availability

JupyterHub is installed on all three primary instances. We highly recommend that you configure notebook persistence to prevent notebook loss if a primary node fails. For more information, see Configuring persistence for notebooks in Amazon S3.

Livy

High availability

Livy is installed on all three primary nodes. When the active primary node fails, you lose access to the current Livy session and need to create a new Livy session on a different primary node or on the new replacement node.

Mahout

Availability not affected by primary node failover

Since Mahout has no daemon, it is not affected by the primary node failover process.

MXNet

Availability not affected by primary node failover

Since MXNet has no daemon, it is not affected by the primary node failover process.

Phoenix

High availability

Phoenix's QueryServer runs on only one of the three primary nodes. Phoenix on all three primary nodes is configured to connect to the Phoenix QueryServer. You can find the private IP address of the Phoenix QueryServer in the /etc/phoenix/conf/phoenix-env.sh file.

Pig

Availability not affected by primary node failover

Since Pig has no daemon, it is not affected by the primary node failover process.

Spark

High availability

All Spark applications run in YARN containers, so they react to primary node failover in the same way as the high-availability YARN features described above.

Sqoop

High availability

By default, sqoop-job and sqoop-metastore store data (job descriptions) on the local disk of the primary node that runs the command. If you want to save metastore data in an external database, see the Apache Sqoop documentation.

Tez

High availability

Since Tez containers run on YARN, Tez behaves the same way as YARN during the primary node failover process.

TensorFlow

Availability not affected by primary node failover

Since TensorFlow has no daemon, it is not affected by the primary node failover process.

Zeppelin

High availability

Zeppelin is installed on all three primary nodes. Zeppelin stores notes and interpreter configurations in HDFS by default to prevent data loss. Interpreter sessions are completely isolated across the three primary instances, and session data is lost when a primary node fails. We recommend that you do not modify the same note concurrently on different primary instances.

ZooKeeper

High availability

ZooKeeper is the foundation of the HDFS automatic failover feature. ZooKeeper provides a highly available service for maintaining coordination data, notifying clients of changes in that data, and monitoring clients for failures. For more information, see HDFS automatic failover.

To run the following applications in an Amazon EMR cluster with multiple primary nodes, you must configure an external database. The external database exists outside the cluster and makes data persistent during the primary node failover process. For the following applications, the service components will automatically recover during the primary node failover process, but active jobs may fail and need to be retried.

Each entry below lists the application, its availability during primary node failover, and notes about how failover affects it.
Hive

High availability for service components only

An external metastore for Hive is required, and it must be a MySQL metastore; PostgreSQL is not supported for clusters with multiple primary nodes. For more information, see Configuring an external metastore for Hive.
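As a hedged illustration, the external metastore can be supplied through the hive-site configuration classification when you create the cluster. The endpoint, database name, credentials, and the other option values in the following sketch are placeholders or assumptions:

# Sketch only: endpoint, database, credentials, and instance settings are placeholders.
aws emr create-cluster \
  --release-label emr-5.30.1 \
  --applications Name=Hadoop Name=Hive \
  --instance-groups InstanceGroupType=MASTER,InstanceCount=3,InstanceType=m5.xlarge \
                    InstanceGroupType=CORE,InstanceCount=3,InstanceType=m5.xlarge \
  --ec2-attributes KeyName=MyKeyPair,SubnetId=subnet-XXXXXXXX \
  --use-default-roles \
  --configurations '[
    {"Classification": "hive-site",
     "Properties": {
       "javax.jdo.option.ConnectionURL": "jdbc:mysql://<rds-endpoint>:3306/hive?createDatabaseIfNotExist=true",
       "javax.jdo.option.ConnectionDriverName": "org.mariadb.jdbc.Driver",
       "javax.jdo.option.ConnectionUserName": "<username>",
       "javax.jdo.option.ConnectionPassword": "<password>"}}]'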

Hue

High availability for service components only

An external database for Hue is required. For more information, see Using Hue with a remote database in Amazon RDS.

Oozie

High availability for service components only

An external database for Oozie is required. For more information, see Using Oozie with a remote database in Amazon RDS.

Oozie-server and oozie-client are installed on all three primary nodes. The oozie-clients are configured to connect to the correct oozie-server by default.

PrestoDB or PrestoSQL/Trino

High availability for service components only

An external Hive metastore for PrestoDB (PrestoSQL on Amazon EMR 6.1.0-6.3.0 or Trino on Amazon EMR 6.4.0 and later) is required. You can use Presto with the Amazon Glue Data Catalog or use an external MySQL database for Hive.

The Presto CLI is installed on all three primary nodes so you can use it to access the Presto Coordinator from any of the primary nodes. The Presto Coordinator is installed on only one primary node. You can find the DNS name of the primary node where the Presto Coordinator is installed by calling the Amazon EMR describe-cluster API and reading the returned value of the MasterPublicDnsName field in the response.
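For example, a command along the following lines returns that value; the cluster ID is a placeholder:

aws emr describe-cluster --cluster-id j-XXXXXXXXXXXXX \
  --query Cluster.MasterPublicDnsName --output text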

Note

When a primary node fails, your Java Database Connectivity (JDBC) or Open Database Connectivity (ODBC) connection to that node terminates. You can connect to any of the remaining primary nodes to continue your work, because the Hive metastore daemon runs on all primary nodes. Or you can wait for the failed primary node to be replaced.

How Amazon EMR features work in a cluster with multiple primary nodes

Connecting to primary nodes using SSH

You can connect to any of the three primary nodes in an Amazon EMR cluster using SSH in the same way you connect to a single primary node. For more information, see Connect to the primary node using SSH.

If a primary node fails, your SSH connection to that primary node ends. To continue your work, you can connect to one of the other two primary nodes. Alternatively, you can access the new primary node after Amazon EMR replaces the failed one with a new one.

Note

The private IP address for the replacement primary node remains the same as the previous one. The public IP address for the replacement primary node may change. You can retrieve the new IP addresses in the console or by using the describe-cluster command in the Amazon CLI.

NameNode only runs on two of the primary nodes. However, you can run hdfs CLI commands and jobs that access HDFS on all three primary nodes.
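To retrieve the current addresses of the primary node instances with the Amazon CLI, a sketch like the following can help. The cluster ID is a placeholder, and the sketch assumes a cluster that uses instance groups:

aws emr list-instances --cluster-id j-XXXXXXXXXXXXX \
  --instance-group-types MASTER \
  --query 'Instances[].[Ec2InstanceId,PrivateIpAddress,PublicIpAddress]' \
  --output table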

Working with steps in an Amazon EMR Cluster with multiple primary nodes

You can submit steps to an Amazon EMR cluster with multiple primary nodes in the same way you work with steps in a cluster with a single primary node. For more information, see Submit work to a cluster.

The following are considerations for working with steps in an Amazon EMR cluster with multiple primary nodes:

  • If a primary node fails, the steps that are running on that primary node are marked as FAILED, and any data written locally is lost. However, the FAILED status might not reflect the real state of the steps.

  • If a running step has started a YARN application when the primary node fails, the step can continue and succeed due to the automatic failover of the primary node.

  • We recommend that you check the status of steps by referring to the job output. For example, MapReduce jobs use a _SUCCESS file to indicate that the job completed successfully.

  • We recommend that you set the ActionOnFailure parameter to CONTINUE or CANCEL_AND_WAIT instead of TERMINATE_JOB_FLOW or TERMINATE_CLUSTER, as shown in the example that follows this list.
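For example, the following Amazon CLI sketch submits a Spark step with ActionOnFailure set to CONTINUE. The cluster ID is a placeholder and the example application and path are only illustrative:

aws emr add-steps --cluster-id j-XXXXXXXXXXXXX \
  --steps 'Type=Spark,Name=SparkPiExample,ActionOnFailure=CONTINUE,Args=[--class,org.apache.spark.examples.SparkPi,/usr/lib/spark/examples/jars/spark-examples.jar,10]'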

Automatic termination protection

Amazon EMR automatically enables termination protection for all clusters with multiple primary nodes, and overrides any step execution settings that you supply when you create the cluster. You can disable termination protection after the cluster has been launched. See Configuring termination protection for running clusters. To shut down a cluster with multiple primary nodes, you must first modify the cluster attributes to disable termination protection. For instructions, see Terminate an Amazon EMR Cluster with multiple primary nodes.
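For example, with the Amazon CLI you can first disable termination protection and then terminate the cluster; the cluster ID is a placeholder:

aws emr modify-cluster-attributes --cluster-id j-XXXXXXXXXXXXX --no-termination-protected
aws emr terminate-clusters --cluster-ids j-XXXXXXXXXXXXX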

For more information about termination protection, see Using termination protection.

Unsupported features in an Amazon EMR Cluster with multiple primary nodes

The following Amazon EMR features are currently not available in an Amazon EMR cluster with multiple primary nodes:

  • EMR Notebooks

  • One-click access to persistent Spark history server

  • Persistent application user interfaces

  • One-click access to persistent application user interfaces is currently not available for Amazon EMR clusters with multiple primary nodes or for Amazon EMR clusters integrated with Amazon Lake Formation.

Note

To use Kerberos authentication in your cluster, you must configure an external KDC.

Beginning with Amazon EMR version 5.27.0, you can configure HDFS Transparent encryption on an Amazon EMR cluster with multiple primary nodes. For more information, see Transparent encryption in HDFS on Amazon EMR.
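As a brief, hedged illustration of working with transparent encryption after it is configured, you might create an encryption key and an encryption zone along the following lines. The key name and path are placeholders, and the supported setup is described in Transparent encryption in HDFS on Amazon EMR.

# Sketch only: assumes a KMS key provider is already configured for transparent encryption; names are placeholders.
hadoop key create my-emr-key
hdfs dfs -mkdir /secure-zone
hdfs crypto -createZone -keyName my-emr-key -path /secure-zone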