SQL Server Resource Governor and PostgreSQL Dedicated Amazon Aurora Clusters or Aurora Read-Replicas - SQL Server to Aurora PostgreSQL Migration Playbook

SQL Server Resource Governor and PostgreSQL Dedicated Amazon Aurora Clusters or Aurora Read-Replicas

Feature compatibility: Three star feature compatibility

Key differences: Distribute load, applications, or users across multiple instances.

SQL Server Usage

SQL Server Resource Governor provides the capability to control and manage resource consumption. Administrators can specify and enforce workload limits on CPU, physical I/O, and memory. Resource configurations are dynamic, and you can change them in real time.

In SQL Server 2019, the configurable value for the REQUEST_MAX_MEMORY_GRANT_PERCENT option of CREATE WORKLOAD GROUP and ALTER WORKLOAD GROUP changed from an integer to a float data type to allow more granular control of memory limits. For more information, see ALTER WORKLOAD GROUP (Transact-SQL) and CREATE WORKLOAD GROUP (Transact-SQL) in the SQL Server documentation.
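For example, a fractional memory grant percentage can now be set on a workload group; ReportingWorkloadGroup here is a placeholder group name:

```sql
-- SQL Server 2019+: fractional values are accepted for the memory grant cap.
ALTER WORKLOAD GROUP ReportingWorkloadGroup
    WITH (REQUEST_MAX_MEMORY_GRANT_PERCENT = 12.5);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```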

Use Cases

The following list identifies typical Resource Governor use cases:

  • Minimize performance bottlenecks and inconsistencies to better support Service Level Agreements (SLA) for multiple workloads and users.

  • Protect against runaway queries that consume a large amount of resources or explicitly throttle I/O intensive operations. For example, consistency checks with DBCC that may bottleneck the I/O subsystem and negatively impact concurrent workloads.

  • Allow tracking and control for resource-based pricing scenarios to improve predictability of user charges.


The three basic concepts in Resource Governor are Resource Pools, Workload Groups, and Classification.

  • Resource Pools represent physical resources. Two built-in resource pools, internal and default, are created when SQL Server is installed. You can create custom user-defined resource pools for specific workload types.

  • Workload Groups are logical containers for session requests with similar characteristics. Workload Groups allow aggregate resource monitoring of multiple sessions. Resource limit policies are defined for a Workload Group. Each Workload Group belongs to a Resource Pool.

  • Classification is a process that inspects incoming connections and assigns them to a specific Workload Group based on common attributes. Classification is implemented with user-defined functions. For more information, see User-Defined Functions.


Enable the Resource Governor.

ALTER RESOURCE GOVERNOR RECONFIGURE;

Create a Resource Pool.

CREATE RESOURCE POOL ReportingWorkloadPool;

Create a Workload Group in that pool.

CREATE WORKLOAD GROUP ReportingWorkloadGroup
    USING ReportingWorkloadPool;

Create a classifier function.

CREATE FUNCTION dbo.WorkloadClassifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    RETURN (CASE
        WHEN HOST_NAME() = 'ReportServer'
            THEN 'ReportingWorkloadGroup'
        ELSE 'Default'
    END);
END;

Register the classifier function and apply the new configuration.

ALTER RESOURCE GOVERNOR
    WITH (CLASSIFIER_FUNCTION = dbo.WorkloadClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
For more information, see Resource Governor in the SQL Server documentation.

PostgreSQL Usage

PostgreSQL doesn’t have built-in resource management capabilities equivalent to SQL Server’s Resource Governor. However, the elasticity and flexibility of the cloud make workarounds practical, and these capabilities are often less critical than they are for monolithic on-premises databases.

SQL Server’s Resource Governor primarily exists because, traditionally, SQL Server instances were installed on very powerful monolithic servers that powered multiple applications simultaneously. This model made the most sense when SQL Server licensing was per-CPU and instances were deployed on physical hardware; in those scenarios, consolidating as many workloads as possible onto fewer servers reduced cost. With cloud databases, maximizing the usage of each individual server is often less important, and you can take a different approach.

You can deploy individual Amazon Aurora clusters of varying sizes, each dedicated to a specific application or workload. You can use additional read-only Amazon Aurora Replicas to offload reporting workloads from the primary (writer) instance.

Deploying separate, dedicated database clusters in this way isolates the sessions and applications connected to each workload from one another.

Each Amazon Aurora instance (primary or replica) can scale independently in terms of CPU and memory resources using different instance types. Because you can instantly deploy multiple Amazon Aurora Instances and much less overhead is associated with the deployment and management of Amazon Aurora instances when compared to physical servers, separating different workloads to different instance classes could be a suitable solution for controlling resource management.

For more information, see Amazon EC2 Instance Types.

In addition, each Amazon Aurora instance can also be directly accessed from your applications using its own endpoint. This capability is especially useful if you have multiple Amazon Aurora read-replicas for a given cluster and you want to use different Amazon Aurora replicas to segment your workload.

You can also adjust the resources and some parameters of Amazon Aurora read replicas within the same cluster instead of creating an additional cluster; however, the replicas can serve read operations only.
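When segmenting workloads across replica endpoints, a session can confirm that it is connected to a read-only replica before starting its work; a minimal check:

```sql
-- Returns true on a read replica, false on the writer instance.
SELECT pg_is_in_recovery();

-- Shows whether new transactions in this session default to read-only.
SHOW transaction_read_only;
```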


Follow these steps to create an Amazon Aurora cluster.

  1. In the Amazon console, choose RDS.

  2. Choose Databases, and then choose Create database.

  3. Follow the wizard. Your new cluster appears in the Databases section.

Suppose that you were using a single SQL Server instance for multiple separate applications and used SQL Server Resource Governor to enforce workload separation, allocating a specific share of server resources to each application. With Amazon Aurora, you might instead create a separate cluster for each individual application.

Follow these steps to add additional replica instances to an existing Amazon Aurora cluster:

  1. In the Amazon console, choose RDS.

  2. Choose the Amazon Aurora cluster that you want to scale out by adding a read replica.

  3. For Instance actions, choose Create Aurora Replica.

  4. Select the instance class depending on the amount of compute resources your application requires.

  5. Choose Create Aurora Replica.

Dedicated Aurora PostgreSQL Instances

The following comparison shows, feature by feature, how Amazon Aurora instances can provide similar capabilities.

Set the maximum CPU usage for a resource group.

Create a dedicated Amazon Aurora instance for a specific application.

Limit the degree of parallelism for specific queries.

SET max_parallel_workers_per_gather TO x;

Set the PostgreSQL max_parallel_workers_per_gather parameter as part of your application’s database connection setup.

Disable parallel query entirely for the current session.

SET max_parallel_workers_per_gather TO 0;


SET max_parallel_workers TO x; -- for the whole system (since PostgreSQL 10)
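Rather than issuing SET from every connection, the limit can be attached to a database role so that all of that role’s sessions inherit it at login; reporting_app is a hypothetical role name:

```sql
-- Sessions logging in as reporting_app start with the lower limit applied.
ALTER ROLE reporting_app SET max_parallel_workers_per_gather = 2;
```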

Limit the number of active sessions.

Manually detect the number of connections that are open from a specific application and restrict connectivity either with database procedures or within the application DAL itself.

SELECT pid FROM pg_stat_activity
    WHERE usename IN (SELECT usename FROM pg_stat_activity
        WHERE state = 'active' GROUP BY usename HAVING count(*) > 10)
    AND state = 'active' ORDER BY query_start;
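PostgreSQL can also enforce a connection cap declaratively, without polling pg_stat_activity; app_user, reporting, and the limits shown are placeholder values:

```sql
-- Cap concurrent connections for a single role.
ALTER ROLE app_user CONNECTION LIMIT 10;

-- Cap concurrent connections to an entire database.
ALTER DATABASE reporting CONNECTION LIMIT 50;
```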

Restrict maximum runtime of queries.

Manually terminate sessions that exceed the required threshold. You can detect the length of running queries using SQL commands and restrict max run duration using either database procedures or within the application DAL itself.

SELECT pg_terminate_backend(pid)
    FROM pg_stat_activity
    WHERE now()-pg_stat_activity.query_start > interval '5 minutes';
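As an alternative to terminating sessions after the fact, the statement_timeout parameter makes PostgreSQL cancel any statement that exceeds the threshold on its own; app_user is a placeholder role name:

```sql
-- Cancel statements running longer than 5 minutes for this role.
ALTER ROLE app_user SET statement_timeout = '5min';

-- Or apply it only to the current session.
SET statement_timeout = '5min';
```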

Limit the maximum idle time for sessions.

Manually terminate sessions that exceed the required threshold. You can detect how long sessions have been idle using SQL queries and restrict the maximum idle time using either database procedures or the application DAL itself.

SELECT pg_terminate_backend(pid)
    FROM pg_stat_activity
    WHERE datname = 'regress' AND pid <> pg_backend_pid()
    AND state = 'idle' AND state_change < current_timestamp - INTERVAL '5' MINUTE;
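On PostgreSQL 14 and later (verify support in your Aurora PostgreSQL version), the idle_session_timeout parameter terminates idle sessions automatically, without a monitoring query:

```sql
-- Terminate any session that stays idle for more than 5 minutes.
SET idle_session_timeout = '5min';
```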

Limit the time that an idle session holding open locks can block other sessions.

Manually terminate sessions that exceed the required threshold. You can detect the length of blocking idle sessions using SQL queries and restrict max run duration using either database procedures or within the application DAL itself.

SELECT pg_terminate_backend(blocking_locks.pid)
    FROM pg_catalog.pg_locks AS blocked_locks
    JOIN pg_catalog.pg_stat_activity AS blocked_activity ON blocked_activity.pid = blocked_locks.pid
    JOIN pg_catalog.pg_locks AS blocking_locks ON blocking_locks.locktype = blocked_locks.locktype
    AND blocking_locks.DATABASE IS NOT DISTINCT FROM blocked_locks.DATABASE
    AND blocking_locks.relation IS NOT DISTINCT FROM blocked_locks.relation
    AND blocking_locks.page IS NOT DISTINCT FROM blocked_locks.page
    AND blocking_locks.tuple IS NOT DISTINCT FROM blocked_locks.tuple
    AND blocking_locks.virtualxid IS NOT DISTINCT FROM blocked_locks.virtualxid
    AND blocking_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid
    AND blocking_locks.classid IS NOT DISTINCT FROM blocked_locks.classid
    AND blocking_locks.objid IS NOT DISTINCT FROM blocked_locks.objid
    AND blocking_locks.objsubid IS NOT DISTINCT FROM blocked_locks.objsubid
    AND blocking_locks.pid != blocked_locks.pid
    JOIN pg_catalog.pg_stat_activity AS blocking_activity
    ON blocking_activity.pid = blocking_locks.pid
    WHERE NOT blocked_locks.granted and blocked_activity.state_change < current_timestamp - INTERVAL '5' minute;
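For the common case where the blocker is a session idle inside an open transaction, PostgreSQL 9.6 and later can terminate it automatically through the idle_in_transaction_session_timeout parameter:

```sql
-- End any session idle within an open transaction for more than 5 minutes.
SET idle_in_transaction_session_timeout = '5min';
```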

For more information, see Resource Consumption in the PostgreSQL documentation.