Using custom images with EMR Serverless

Use a custom Python version

You can build a custom image to use a different version of Python. To use Python version 3.10 for Spark jobs, for example, use a Dockerfile like the following:

FROM public.ecr.aws/emr-serverless/spark/emr-6.9.0:latest

USER root

# install python 3
RUN yum install -y gcc openssl-devel bzip2-devel libffi-devel tar gzip wget make
RUN wget https://www.python.org/ftp/python/3.10.0/Python-3.10.0.tgz && \
    tar xzf Python-3.10.0.tgz && cd Python-3.10.0 && \
    ./configure --enable-optimizations && \
    make altinstall

# EMRS will run the image as hadoop
USER hadoop:hadoop
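
After you build the image, push it to a private Amazon ECR repository so that EMR Serverless can pull it. The following commands are a minimal sketch of that step; the repository name, account ID, and Region are placeholders that you replace with your own values.

# Build the image locally; the tag name is a placeholder
docker build -t emr-serverless-custom-python:latest .

# Authenticate Docker to your private Amazon ECR registry (replace the account ID and Region)
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag and push the image to your repository
docker tag emr-serverless-custom-python:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/emr-serverless-custom-python:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/emr-serverless-custom-python:latest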

Before you submit the Spark job, set the Spark properties to point to the custom Python installation, as follows.

--conf spark.emr-serverless.driverEnv.PYSPARK_DRIVER_PYTHON=/usr/local/bin/python3.10
--conf spark.emr-serverless.driverEnv.PYSPARK_PYTHON=/usr/local/bin/python3.10
--conf spark.executorEnv.PYSPARK_PYTHON=/usr/local/bin/python3.10
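
As an example of where these properties fit, the following start-job-run call passes them in sparkSubmitParameters. This is a sketch only; the application ID, execution role ARN, and the S3 location of the entry point script are placeholders for your own values.

aws emr-serverless start-job-run \
    --application-id <application-id> \
    --execution-role-arn <execution-role-arn> \
    --job-driver '{
        "sparkSubmit": {
            "entryPoint": "s3://amzn-s3-demo-bucket/scripts/my-job.py",
            "sparkSubmitParameters": "--conf spark.emr-serverless.driverEnv.PYSPARK_DRIVER_PYTHON=/usr/local/bin/python3.10 --conf spark.emr-serverless.driverEnv.PYSPARK_PYTHON=/usr/local/bin/python3.10 --conf spark.executorEnv.PYSPARK_PYTHON=/usr/local/bin/python3.10"
        }
    }'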

Use a custom Java version

The following example demonstrates how to build a custom image to use Java 11 for your Spark jobs.

FROM public.ecr.aws/emr-serverless/spark/emr-6.9.0:latest

USER root

# install JDK 11
RUN sudo amazon-linux-extras install java-openjdk11

# EMRS will run the image as hadoop
USER hadoop:hadoop

Before you submit the Spark job, set Spark properties to use Java 11, as follows.

--conf spark.executorEnv.JAVA_HOME=/usr/lib/jvm/java-11-openjdk-11.0.16.0.8-1.amzn2.0.1.x86_64
--conf spark.emr-serverless.driverEnv.JAVA_HOME=/usr/lib/jvm/java-11-openjdk-11.0.16.0.8-1.amzn2.0.1.x86_64
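
The exact JDK directory name can change as the base image is patched, so it's worth confirming the JAVA_HOME path that actually exists in your image before you submit the job. The following local check is one way to do that; the image URI is a placeholder for the image you built and pushed.

# List the JDK installations in the image to confirm the JAVA_HOME path
docker run --rm --entrypoint ls 123456789012.dkr.ecr.us-east-1.amazonaws.com/emr-serverless-custom-java:latest /usr/lib/jvm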

Build a data science image

The following example shows how to include common data science Python packages, such as pandas and NumPy.

FROM public.ecr.aws/emr-serverless/spark/emr-6.9.0:latest

USER root

# python packages
RUN pip3 install boto3 pandas numpy
RUN pip3 install -U scikit-learn==0.23.2 scipy
RUN pip3 install sk-dist
RUN pip3 install xgboost

# EMR Serverless will run the image as hadoop
USER hadoop:hadoop
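
Before you push this image, you can confirm that the packages import cleanly against the Python interpreter in the image. The following local check is a sketch; the image tag is a placeholder for whatever you built from the Dockerfile above.

# Verify that the data science packages installed correctly
docker run --rm --entrypoint python3 emr-serverless-data-science:latest \
    -c "import boto3, pandas, numpy, sklearn, xgboost; print(pandas.__version__)"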

Process geospatial data with Apache Sedona

The following example shows how to build an image that includes Apache Sedona for geospatial processing.

FROM public.ecr.aws/emr-serverless/spark/emr-6.9.0:latest

USER root

RUN yum install -y wget
RUN wget https://repo1.maven.org/maven2/org/apache/sedona/sedona-core-3.0_2.12/1.3.0-incubating/sedona-core-3.0_2.12-1.3.0-incubating.jar -P /usr/lib/spark/jars/
RUN pip3 install apache-sedona

# EMRS will run the image as hadoop
USER hadoop:hadoop
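
As with the other images, you can check locally that the Sedona jar landed in the Spark jars directory and that the Python bindings import (the apache-sedona package installs as the sedona module) before you push the image. The image tag below is a placeholder.

# Check that the Sedona jar and the Python package are both present in the image
docker run --rm --entrypoint /bin/sh emr-serverless-sedona:latest \
    -c "ls /usr/lib/spark/jars/ | grep sedona && python3 -c 'import sedona'"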