
Implementing workload management

You can use workload management (WLM) to define multiple query queues and to route queries to the appropriate queues at runtime.

In some cases, you might have multiple sessions or users running queries at the same time. In these cases, some queries might consume cluster resources for long periods of time and affect the performance of other queries. For example, suppose that one group of users submits occasional complex, long-running queries that select and sort rows from several large tables. Another group frequently submits short queries that select only a few rows from one or two tables and run in a few seconds. In this situation, the short-running queries might have to wait in a queue for a long-running query to complete. WLM helps manage this situation.

You can configure Amazon Redshift WLM to run with either automatic WLM or manual WLM.

Automatic WLM

To maximize system throughput and use resources effectively, you can enable Amazon Redshift to manage how resources are divided to run concurrent queries with automatic WLM. Automatic WLM manages the resources required to run queries. Amazon Redshift determines how many queries run concurrently and how much memory is allocated to each dispatched query. You can enable automatic WLM using the Amazon Redshift console by choosing Switch WLM mode and then choosing Auto WLM. With this choice, up to eight queues are used to manage queries, and the Memory and Concurrency on main fields are both set to Auto. You can specify a priority that reflects the business priority of the workload or users that map to each queue. The default priority of queries is set to Normal. For information about how to change the priority of queries in a queue, see Query priority. For more information, see Implementing automatic WLM.
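Outside the console, automatic WLM is configured through the `wlm_json_configuration` cluster parameter. The sketch below assumes a workload where an ETL user group gets higher priority; the group name is illustrative:

```json
[
  {
    "user_group": ["etl_users"],
    "priority": "high",
    "auto_wlm": true
  },
  {
    "priority": "normal",
    "auto_wlm": true
  }
]
```

Queries from members of `etl_users` route to the first queue and run at high priority; everything else falls through to the normal-priority queue. Memory and concurrency are not specified because Amazon Redshift manages both under automatic WLM.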

At runtime, you can route queries to these queues according to user groups or query groups. You can also configure a query monitoring rule (QMR) to limit long-running queries.
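Routing by query group works by labeling the session before running queries. A minimal example (the label `report` is illustrative and must match a query group defined in your WLM configuration):

```sql
-- Label this session so subsequent queries route to the matching queue
SET query_group TO 'report';

-- Queries here run in the queue whose query group list contains 'report'

-- Clear the label when done
RESET query_group;
```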

Working with concurrency scaling and automatic WLM, you can support virtually unlimited concurrent users and concurrent queries, with consistently fast query performance. For more information, see Working with concurrency scaling.

Note

We recommend that you create a parameter group and choose automatic WLM to manage your query resources. For details about how to migrate from manual WLM to automatic WLM, see Migrating from manual WLM to automatic WLM.
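As a sketch of that recommendation using the AWS CLI, you might create a parameter group and switch it to automatic WLM as follows (the group name and description are illustrative):

```shell
# Create a custom parameter group
aws redshift create-cluster-parameter-group \
  --parameter-group-name my-wlm-group \
  --parameter-group-family redshift-1.0 \
  --description "Parameter group using automatic WLM"

# Set the WLM configuration to automatic WLM
aws redshift modify-cluster-parameter-group \
  --parameter-group-name my-wlm-group \
  --parameters '[{"ParameterName":"wlm_json_configuration","ParameterValue":"[{\"auto_wlm\":true}]"}]'
```

You would then associate the parameter group with your cluster and reboot for the WLM change to take effect.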

Manual WLM

Alternatively, you can manage system performance and your users' experience by modifying your WLM configuration to create separate queues for the long-running queries and the short-running queries. At runtime, you can route queries to these queues according to user groups or query groups. You can enable this manual configuration using the Amazon Redshift console by switching to Manual WLM. With this choice, you specify the queues used to manage queries and set the Memory and Concurrency on main values for each queue. With a manual configuration, you can configure up to eight query queues and set the number of queries that can run in each of those queues concurrently.

You can set up rules to route queries to particular queues based on the user running the query or labels that you specify. You can also configure the amount of memory allocated to each queue, so that large queries run in queues with more memory than other queues. You can also configure a query monitoring rule (QMR) to limit long-running queries. For more information, see Implementing manual WLM.
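A QMR is defined inside a queue in the WLM configuration. As a sketch, the following rule aborts any query in its queue that runs longer than 120 seconds (the rule name and threshold are illustrative):

```json
{
  "query_concurrency": 5,
  "rules": [
    {
      "rule_name": "abort_long_running",
      "predicate": [
        {
          "metric_name": "query_execution_time",
          "operator": ">",
          "value": 120
        }
      ],
      "action": "abort"
    }
  ]
}
```

Other actions, such as logging the query or hopping it to another queue, let you handle rule violations less drastically than aborting.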

Note

We recommend configuring your manual WLM query queues with a total of 15 or fewer query slots. For more information, see Concurrency level.

WLM queuing limitations

For a manual WLM configuration, the maximum number of slots you can allocate to a queue is 50. However, this doesn't mean that an Amazon Redshift cluster with an automatic WLM configuration always runs 50 queries concurrently. The actual concurrency can change based on memory needs or other types of resource allocation on the cluster.

Use cases for Auto WLM and Manual WLM

Use Auto WLM when you want Amazon Redshift to manage how resources are divided to run concurrent queries. Using Auto WLM often results in a higher throughput than Manual WLM. With Auto WLM, you can define query priorities for workloads in a queue. For more information about query priority, see Query priority.

Use Manual WLM when you want more control over concurrency.