Configuring the Amazon MWAA environment class - Amazon Managed Workflows for Apache Airflow
Services or capabilities described in Amazon Web Services documentation might vary by Region. To see the differences applicable to the China Regions, see Getting Started with Amazon Web Services in China (PDF).

Configuring the Amazon MWAA environment class

The environment class you choose for your Amazon MWAA environment determines the size of the Amazon-managed Amazon Fargate containers where the Celery Executor runs, and the Amazon-managed Amazon Aurora PostgreSQL metadata database where the Apache Airflow scheduler creates task instances. This page describes each Amazon MWAA environment class, and the steps to update the environment class on the Amazon MWAA console.

Environment capabilities

The following section lists the default number of concurrent Apache Airflow tasks, the random-access memory (RAM), and the virtual central processing units (vCPUs) for each environment class. The concurrent task counts listed assume that task concurrency does not exceed the Apache Airflow worker capacity in the environment.

In the following list, DAG capacity refers to DAG definitions, not executions, and assumes that your DAGs are dynamically generated from a single Python file and written following Apache Airflow best practices.

Task executions depend on how many tasks are scheduled simultaneously, and on the size and number of workers as detailed in this topic. They also assume that the number of DAG runs set to start at the same time does not exceed the default max_dagruns_per_loop_to_schedule.

mw1.small

  • Up to 50 DAG capacity

  • 5 concurrent tasks (by default)

  • 1 vCPU

  • 2 GB RAM

mw1.medium

  • Up to 200 DAG capacity

  • 10 concurrent tasks (by default)

  • 2 vCPUs

  • 4 GB RAM

mw1.large

  • Up to 1000 DAG capacity

  • 20 concurrent tasks (by default)

  • 4 vCPUs

  • 8 GB RAM

mw1.xlarge

  • Up to 2000 DAG capacity

  • 40 concurrent tasks (by default)

  • 8 vCPUs

  • 24 GB RAM

mw1.2xlarge

  • Up to 4000 DAG capacity

  • 80 concurrent tasks (by default)

  • 16 vCPUs

  • 48 GB RAM
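As a rough illustration of how worker count and per-worker concurrency combine, the sketch below multiplies the two to estimate an environment's total concurrent task capacity. The per-worker defaults mirror the "concurrent tasks (by default)" values listed above; the function name and worker counts are hypothetical, not an Amazon MWAA API.

```python
# Default concurrent tasks per worker for each environment class,
# taken from the capability list above.
DEFAULT_TASKS_PER_WORKER = {
    "mw1.small": 5,
    "mw1.medium": 10,
    "mw1.large": 20,
    "mw1.xlarge": 40,
    "mw1.2xlarge": 80,
}

def concurrent_task_capacity(environment_class: str, worker_count: int) -> int:
    """Estimate total concurrent tasks: workers x default tasks per worker."""
    return worker_count * DEFAULT_TASKS_PER_WORKER[environment_class]

# An mw1.small environment scaled out to 10 workers:
print(concurrent_task_capacity("mw1.small", 10))  # 50
```

Actual throughput also depends on scheduling settings such as max_dagruns_per_loop_to_schedule, as noted above.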

You can use the celery.worker_autoscale configuration option to increase the number of tasks per worker. For more information, see the Example high performance use case.
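One way to apply such an override is with the boto3 update_environment call, passing the option in AirflowConfigurationOptions. The sketch below is a minimal example under those assumptions; the environment name and autoscale values are placeholders, and celery.worker_autoscale takes a "max,min" pair of tasks per worker.

```python
def worker_autoscale_option(max_tasks: int, min_tasks: int) -> dict:
    """Build the configuration-options entry for celery.worker_autoscale."""
    if min_tasks > max_tasks:
        raise ValueError("min_tasks cannot exceed max_tasks")
    return {"celery.worker_autoscale": f"{max_tasks},{min_tasks}"}

def update_worker_autoscale(name: str, max_tasks: int, min_tasks: int) -> None:
    """Apply the override to an existing Amazon MWAA environment."""
    import boto3  # imported here so the helper above works without the SDK

    mwaa = boto3.client("mwaa")
    mwaa.update_environment(
        Name=name,
        AirflowConfigurationOptions=worker_autoscale_option(max_tasks, min_tasks),
    )

# Placeholder environment name; allow up to 10 and at least 5 tasks per worker:
# update_worker_autoscale("MyAirflowEnvironment", 10, 5)
```

Note that update_environment replaces the full set of configuration options, so include any other overrides you want to keep in the same call.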

Apache Airflow Schedulers

The following section describes the Apache Airflow scheduler options available on Amazon MWAA, and how the number of schedulers affects the number of triggerers.

In Apache Airflow, a triggerer manages tasks that are deferred until certain conditions, specified using a trigger, have been met. In Amazon MWAA, the triggerer runs alongside the scheduler on the same Fargate task. Increasing the scheduler count correspondingly increases the number of available triggerers, optimizing how the environment manages deferred tasks and ensuring that they are promptly scheduled to run when their conditions are satisfied.

Apache Airflow v2
  • v2 - Accepts values between 2 and 5. Defaults to 2.
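The scheduler count can likewise be changed with the boto3 update_environment call, which accepts a Schedulers parameter. The sketch below validates the v2 range described above before applying it; the environment name is a placeholder.

```python
def validated_scheduler_count(count: int) -> int:
    """Reject values outside the 2-5 range accepted for Apache Airflow v2."""
    if not 2 <= count <= 5:
        raise ValueError("Apache Airflow v2 environments accept 2 to 5 schedulers")
    return count

def update_scheduler_count(name: str, count: int) -> None:
    """Set the scheduler (and therefore triggerer) count for an environment."""
    import boto3  # imported here so the validator above works without the SDK

    mwaa = boto3.client("mwaa")
    mwaa.update_environment(Name=name, Schedulers=validated_scheduler_count(count))

# Placeholder environment name; run 3 schedulers (and 3 triggerers):
# update_scheduler_count("MyAirflowEnvironment", 3)
```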