
Testing Lambda container images locally

You can use the Lambda runtime interface emulator to locally test a container image function before uploading it to Amazon Elastic Container Registry (Amazon ECR) and deploying it to Lambda. The emulator is a proxy for the Lambda runtime API. It's a lightweight web server that converts HTTP requests into JSON events to pass to the Lambda function in the container image.

The Amazon base images and OS-only base images include the runtime interface emulator. If you use an alternative base image, such as an Alpine Linux or Debian image, you can build the emulator into your image or install it on your local machine.

The runtime interface emulator is available on the Amazon GitHub repository. There are separate packages for the x86-64 and arm64 architectures.

Guidelines for using the runtime interface emulator

Note the following guidelines when using the runtime interface emulator:

  • The runtime interface emulator does not emulate Lambda security and authentication configurations, or Lambda orchestration.

  • Lambda provides an emulator for each of the instruction set architectures.

  • The emulator does not support Amazon X-Ray tracing or other Lambda integrations.

Environment variables

The runtime interface emulator supports a subset of environment variables for the Lambda function in the locally running image.

If your function uses security credentials, you can configure the credentials by setting the following environment variables:

  • AWS_ACCESS_KEY_ID

  • AWS_SECRET_ACCESS_KEY

  • AWS_SESSION_TOKEN

  • AWS_DEFAULT_REGION

To set the function timeout, configure AWS_LAMBDA_FUNCTION_TIMEOUT. Enter the maximum number of seconds that you want to allow the function to run.

The emulator does not populate the following Lambda environment variables. However, you can set them to match the values that you expect when the function runs in the Lambda service:

  • AWS_LAMBDA_FUNCTION_VERSION

  • AWS_LAMBDA_FUNCTION_NAME

  • AWS_LAMBDA_FUNCTION_MEMORY_SIZE
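
You can set any of these environment variables when you start the container. The following is a minimal sketch using placeholder values and the docker-image:test image name from the procedures that follow; it passes credentials, a function timeout, and a memory size to the local container:

    docker run --platform linux/amd64 -p 9000:8080 \
      -e AWS_ACCESS_KEY_ID=your-access-key-id \
      -e AWS_SECRET_ACCESS_KEY=your-secret-access-key \
      -e AWS_SESSION_TOKEN=your-session-token \
      -e AWS_DEFAULT_REGION=your-region \
      -e AWS_LAMBDA_FUNCTION_TIMEOUT=30 \
      -e AWS_LAMBDA_FUNCTION_MEMORY_SIZE=512 \
      docker-image:test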

Testing images built from Amazon base images

The Amazon base images for Lambda include the runtime interface emulator. After building your Docker image, follow these steps to test it locally.

  1. Start the Docker image with the docker run command. In this example, docker-image is the image name and test is the tag.

    docker run --platform linux/amd64 -p 9000:8080 docker-image:test

    This command runs the image as a container and creates a local endpoint at localhost:9000/2015-03-31/functions/function/invocations.

    Note

    If you built the Docker image for the ARM64 instruction set architecture, be sure to use the --platform linux/arm64 option instead of --platform linux/amd64.

  2. From a new terminal window, post an event to the local endpoint.

    Linux/macOS

    In Linux and macOS, run the following curl command:

    curl "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'

    This command invokes the function with an empty event and returns a response. If you're using your own function code rather than the sample function code, you might want to invoke the function with a JSON payload. Example:

    curl "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{"payload":"hello world!"}'
    PowerShell

    In PowerShell, run the following Invoke-WebRequest command:

    Invoke-WebRequest -Uri "http://localhost:9000/2015-03-31/functions/function/invocations" -Method Post -Body '{}' -ContentType "application/json"

    This command invokes the function with an empty event and returns a response. If you're using your own function code rather than the sample function code, you might want to invoke the function with a JSON payload. Example:

    Invoke-WebRequest -Uri "http://localhost:9000/2015-03-31/functions/function/invocations" -Method Post -Body '{"payload":"hello world!"}' -ContentType "application/json"
  3. Get the container ID.

    docker ps
  4. Use the docker kill command to stop the container. In this command, replace 3766c4ab331c with the container ID from the previous step.

    docker kill 3766c4ab331c
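
If you prefer to stop the container without looking up its ID first, you can combine the last two steps. This sketch assumes the container was started from the docker-image:test image used in step 1:

    # Stops every running container started from docker-image:test
    docker kill $(docker ps -q --filter ancestor=docker-image:test)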

Testing images built from alternative base images

If you use an alternative base image, such as an Alpine Linux or Debian image, you can build the emulator into your image or install it on your local machine.

Building the runtime interface emulator into an image

To build the emulator into your image
  1. Create a script and save it in your project directory. Set execution permissions for the script file.

    The script checks for the presence of the AWS_LAMBDA_RUNTIME_API environment variable, which indicates the presence of the runtime API. If the runtime API is present, the script runs the runtime interface client. Otherwise, the script runs the runtime interface emulator.

    Choose your language to see an example script:

    Node.js

    In the following example, /usr/local/bin/npx aws-lambda-ric is the npx command to start the Node.js runtime interface client.

    Example entry_script.sh
    #!/bin/sh
    if [ -z "${AWS_LAMBDA_RUNTIME_API}" ]; then
      exec /usr/local/bin/aws-lambda-rie /usr/local/bin/npx aws-lambda-ric $@
    else
      exec /usr/local/bin/npx aws-lambda-ric $@
    fi
    Note

    If you're using Windows, make sure to save the script with LF line endings. If the script uses CRLF, you'll get an error like this when you try to run the Docker image:

    exec /entry_script.sh: no such file or directory
    Python

    In the following example, /usr/local/bin/python -m awslambdaric is the Python interpreter command to run the Python runtime interface client as a script.

    Example entry_script.sh
    #!/bin/sh
    if [ -z "${AWS_LAMBDA_RUNTIME_API}" ]; then
      exec /usr/local/bin/aws-lambda-rie /usr/local/bin/python -m awslambdaric $@
    else
      exec /usr/local/bin/python -m awslambdaric $@
    fi
    Note

    If you're using Windows, make sure to save the script with LF line endings. If the script uses CRLF, you'll get an error like this when you try to run the Docker image:

    exec /entry_script.sh: no such file or directory
    Java

    In the following example, /usr/bin/java -cp './*' com.amazonaws.services.lambda.runtime.api.client.AWSLambda runs the Java runtime interface client with the classpath set to './*'.

    Example entry_script.sh
    #!/bin/sh
    if [ -z "${AWS_LAMBDA_RUNTIME_API}" ]; then
      exec /usr/local/bin/aws-lambda-rie /usr/bin/java -cp './*' com.amazonaws.services.lambda.runtime.api.client.AWSLambda $@
    else
      exec /usr/bin/java -cp './*' com.amazonaws.services.lambda.runtime.api.client.AWSLambda $@
    fi
    Note

    If you're using Windows, make sure to save the script with LF line endings. If the script uses CRLF, you'll get an error like this when you try to run the Docker image:

    exec /entry_script.sh: no such file or directory
    Go

    In the following example, /main is the binary that is compiled during the Docker build.

    Example entry_script.sh
    #!/bin/sh
    if [ -z "${AWS_LAMBDA_RUNTIME_API}" ]; then
      exec /usr/local/bin/aws-lambda-rie /main $@
    else
      exec /main $@
    fi
    Note

    If you're using Windows, make sure to save the script with LF line endings. If the script uses CRLF, you'll get an error like this when you try to run the Docker image:

    exec /entry_script.sh: no such file or directory
    Ruby

    In the following example, aws_lambda_ric is the Ruby runtime interface client.

    Example entry_script.sh
    #!/bin/sh
    if [ -z "${AWS_LAMBDA_RUNTIME_API}" ]; then
      exec /usr/local/bin/aws-lambda-rie aws_lambda_ric $@
    else
      exec aws_lambda_ric $@
    fi
    Note

    If you're using Windows, make sure to save the script with LF line endings. If the script uses CRLF, you'll get an error like this when you try to run the Docker image:

    exec /entry_script.sh: no such file or directory
  2. Download the runtime interface emulator for your target architecture from GitHub into your project directory. Lambda provides an emulator for each of the instruction set architectures.

    Linux/macOS
    curl -Lo aws-lambda-rie https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie \
      && chmod +x aws-lambda-rie

    To install the arm64 emulator, replace the GitHub repository URL in the previous command with the following:

    https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie-arm64
    PowerShell
    Invoke-WebRequest -Uri https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie -OutFile aws-lambda-rie

    To install the arm64 emulator, replace the Uri with the following:

    https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie-arm64
  3. Add the following lines to your Dockerfile. The ENTRYPOINT includes the script that you created in step 1 and your function handler.

    Example lines to add to Dockerfile

    In the following example, replace lambda_function.handler with your function handler.

    COPY ./entry_script.sh /entry_script.sh
    RUN chmod +x /entry_script.sh
    ADD aws-lambda-rie /usr/local/bin/aws-lambda-rie
    ENTRYPOINT [ "/entry_script.sh", "lambda_function.handler" ]
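
    When you run the container locally, the AWS_LAMBDA_RUNTIME_API variable is not set, so this ENTRYPOINT starts the emulator and passes it the runtime interface client command and your handler as arguments. For example, with the Python script shown earlier and the lambda_function.handler placeholder, the container effectively runs the following:

    /usr/local/bin/aws-lambda-rie /usr/local/bin/python -m awslambdaric lambda_function.handler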
  4. Build the Docker image with the docker build command. The following example names the image docker-image and gives it the test tag.

    docker build --platform linux/amd64 -t docker-image:test .
    Note

    The command specifies the --platform linux/amd64 option to ensure that your container is compatible with the Lambda execution environment regardless of the architecture of your build machine. If you intend to create a Lambda function using the ARM64 instruction set architecture, be sure to change the command to use the --platform linux/arm64 option instead.

  5. Start the Docker image with the docker run command. In this example, docker-image is the image name and test is the tag.

    docker run --platform linux/amd64 -p 9000:8080 docker-image:test

    This command runs the image as a container and creates a local endpoint at localhost:9000/2015-03-31/functions/function/invocations.

    Note

    If you built the Docker image for the ARM64 instruction set architecture, be sure to use the --platform linux/arm64 option instead of --platform linux/amd64.

  6. From a new terminal window, post an event to the local endpoint.

    Linux/macOS

    In Linux and macOS, run the following curl command:

    curl "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'

    This command invokes the function with an empty event and returns a response. If you're using your own function code rather than the sample function code, you might want to invoke the function with a JSON payload. Example:

    curl "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{"payload":"hello world!"}'
    PowerShell

    In PowerShell, run the following Invoke-WebRequest command:

    Invoke-WebRequest -Uri "http://localhost:9000/2015-03-31/functions/function/invocations" -Method Post -Body '{}' -ContentType "application/json"

    This command invokes the function with an empty event and returns a response. If you're using your own function code rather than the sample function code, you might want to invoke the function with a JSON payload. Example:

    Invoke-WebRequest -Uri "http://localhost:9000/2015-03-31/functions/function/invocations" -Method Post -Body '{"payload":"hello world!"}' -ContentType "application/json"
  7. Get the container ID.

    docker ps
  8. Use the docker kill command to stop the container. In this command, replace 3766c4ab331c with the container ID from the previous step.

    docker kill 3766c4ab331c

Install the runtime interface emulator locally

To install the runtime interface emulator on your local machine, download the package for your preferred architecture from GitHub. Then, use the docker run command to start the container image, and set the --entrypoint option to the emulator. For more information, see the instructions for your preferred language.
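
The exact command depends on the runtime interface client in your image. As a sketch, assuming you downloaded the emulator to ~/.aws-lambda-rie on your local machine, built an x86-64 image tagged docker-image:test that uses the Python runtime interface client, and your handler is lambda_function.handler, the command might look like the following:

    # Mount the locally installed emulator into the container and use it as the entrypoint.
    # The remaining arguments are the runtime interface client command and the handler.
    docker run --platform linux/amd64 -v ~/.aws-lambda-rie:/aws-lambda -p 9000:8080 \
      --entrypoint /aws-lambda/aws-lambda-rie \
      docker-image:test \
      /usr/local/bin/python -m awslambdaric lambda_function.handler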