
Uploading data into Amazon S3 Express One Zone with Amazon EMR on EKS

With Amazon EMR releases 7.2.0 and higher, you can use Amazon EMR on EKS with the Amazon S3 Express One Zone storage class for improved performance when you run jobs and workloads. S3 Express One Zone is a high-performance, single-zone Amazon S3 storage class that delivers consistent, single-digit millisecond data access for latency-sensitive applications. At the time of its release, S3 Express One Zone delivered the lowest latency and highest performance of any object storage class in Amazon S3.

Prerequisites

Before you can use S3 Express One Zone with Amazon EMR on EKS, you must have the following prerequisites:

Getting started with S3 Express One Zone

Follow these steps to get started with S3 Express One Zone.

  1. Add the CreateSession permission to your job execution role. When S3 Express One Zone initially performs an action like GET, LIST, or PUT on an S3 object, the storage class calls CreateSession on your behalf. The following is an example of how to grant the CreateSession permission; a sketch of attaching this policy to the role programmatically follows this list.

    { "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Resource": "arn:aws:s3express:<AWS_REGION>:<ACCOUNT_ID>:bucket/DOC-EXAMPLE-BUCKET", "Action": [ "s3express:CreateSession" ] } ] }
  2. You must use the Apache Hadoop S3A connector to access S3 Express One Zone buckets, so change your Amazon S3 URIs to use the s3a scheme. If your URIs use the s3 or s3n schemes instead, you can change the file system implementation that those schemes use, as shown in the following configurations and in the StartJobRun sketch after this list.

    To change the s3 scheme, specify the following cluster configurations:

    [ { "Classification": "core-site", "Properties": { "fs.s3.impl": "org.apache.hadoop.fs.s3a.S3AFileSystem", "fs.AbstractFileSystem.s3.impl": "org.apache.hadoop.fs.s3a.S3A" } } ]

    To change the s3n scheme, specify the following cluster configurations:

    [ { "Classification": "core-site", "Properties": { "fs.s3n.impl": "org.apache.hadoop.fs.s3a.S3AFileSystem", "fs.AbstractFileSystem.s3n.impl": "org.apache.hadoop.fs.s3a.S3A" } } ]
  3. In your spark-submit configuration, use the web identity credential provider, as in the following property. The sketch after this list shows the same setting in a StartJobRun request.

    "spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.WebIdentityTokenCredentialsProvider"