Services or capabilities described in Amazon Web Services documentation might vary by Region. To see the differences applicable to the China Regions, see Getting Started with Amazon Web Services in China (PDF).

Implementing workload management

You can configure Amazon Redshift WLM to run with either automatic WLM or manual WLM.

Automatic WLM

To maximize system throughput and use resources effectively, you can let automatic WLM manage how resources are divided among concurrent queries. With automatic WLM, Amazon Redshift determines how many queries run concurrently and how much memory is allocated to each dispatched query. Use automatic WLM when you want Amazon Redshift to make these decisions for you. For more information, see Implementing automatic WLM.

Working with concurrency scaling and automatic WLM, you can support virtually unlimited concurrent users and concurrent queries, with consistently fast query performance. For more information, see Working with concurrency scaling.

Note

In most cases, we recommend that you use automatic WLM. If you're using manual WLM and want to migrate to automatic WLM, see Migrating from manual WLM to automatic WLM.

With automatic WLM, you can define query priorities for workloads in a queue. For more information about query priority, see Query priority.
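Queue priorities are set in the cluster's WLM configuration. A minimal sketch of what an automatic WLM configuration with priorities might look like in the `wlm_json_configuration` parameter is shown below; the user group name is hypothetical, and the exact set of fields accepted may vary:

```json
[
  {
    "user_group": ["etl_users"],
    "query_group": [],
    "priority": "high",
    "queue_type": "auto",
    "auto_wlm": true
  },
  {
    "priority": "normal",
    "queue_type": "auto",
    "auto_wlm": true
  }
]
```

Here, queries from members of the hypothetical `etl_users` group would run at high priority, and everything else at normal priority.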

Manual WLM

You might have multiple sessions or users running queries at the same time. Some queries might consume cluster resources for long periods and affect the performance of others. Manual WLM can help manage this for specialized use cases. Use manual WLM when you want more control over concurrency.

You can manage system performance by modifying your WLM configuration to create separate queues for long-running queries and short-running queries. At runtime, you can route queries to these queues according to user groups or query groups.
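At runtime, a session can label its queries with a query group so that WLM routes them to the matching queue. A sketch, assuming a queue whose configuration lists the query group `short` (the label and table name are hypothetical):

```sql
-- Route subsequent queries in this session to the queue
-- configured with the query group 'short'.
SET query_group TO 'short';

SELECT count(*) FROM sales;

-- Return to default queue routing for this session.
RESET query_group;
```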

You can set up rules to route queries to particular queues based on the user running the query or labels that you specify. You can also configure the amount of memory allocated to each queue, so that large queries run in queues with more memory than other queues. You can also configure a query monitoring rule (QMR) to limit long-running queries. For more information, see Implementing manual WLM.
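Putting these pieces together, a manual WLM configuration in the `wlm_json_configuration` parameter might look like the following sketch. The group names, slot counts, memory percentages, and the QMR threshold are illustrative assumptions, not recommendations:

```json
[
  {
    "query_group": ["short"],
    "user_group": ["dashboard_users"],
    "query_concurrency": 5,
    "memory_percent_to_use": 30,
    "rules": [
      {
        "rule_name": "abort_long_queries",
        "predicate": [
          {
            "metric_name": "query_execution_time",
            "operator": ">",
            "value": 120
          }
        ],
        "action": "abort"
      }
    ]
  },
  {
    "query_group": ["long"],
    "query_concurrency": 3,
    "memory_percent_to_use": 70
  }
]
```

The first queue handles short queries with more concurrency and less memory per slot; the second gives long-running queries fewer slots but a larger share of memory, and the QMR on the first queue aborts any query that runs longer than 120 seconds there.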

Note

We recommend configuring your manual WLM query queues with a total of 15 or fewer query slots. For more information, see Concurrency level.

For a manual WLM configuration, the maximum number of slots you can allocate to a queue is 50. However, this doesn't mean that in an automatic WLM configuration, an Amazon Redshift cluster always runs 50 queries concurrently. This can change based on the memory needs or other types of resource allocation on the cluster.
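With manual WLM, a session can also claim more than one slot in its queue for a memory-intensive query, up to the queue's configured slot count. A sketch, assuming the target queue has at least three slots (the table name is hypothetical):

```sql
-- Temporarily claim 3 slots' worth of this queue's memory
-- for the next query in this session.
SET wlm_query_slot_count TO 3;

SELECT large_column, count(*) FROM big_fact_table GROUP BY large_column;

-- Release the extra slots.
SET wlm_query_slot_count TO 1;
```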