Using Amazon Aurora Auto Scaling with Aurora replicas
To meet your connectivity and workload requirements, Aurora Auto Scaling dynamically adjusts the number of Aurora Replicas provisioned for an Aurora DB cluster. Aurora Auto Scaling is available for both Aurora MySQL and Aurora PostgreSQL. Aurora Auto Scaling enables your Aurora DB cluster to handle sudden increases in connectivity or workload. When the connectivity or workload decreases, Aurora Auto Scaling removes unnecessary Aurora Replicas so that you don't pay for unused provisioned DB instances.
You define and apply a scaling policy to an Aurora DB cluster. The scaling policy defines the minimum and maximum number of Aurora Replicas that Aurora Auto Scaling can manage. Based on the policy, Aurora Auto Scaling adjusts the number of Aurora Replicas up or down in response to actual workloads, determined by using Amazon CloudWatch metrics and target values.
You can use the Amazon Web Services Management Console to apply a scaling policy based on a predefined metric. Alternatively, you can use either the Amazon CLI or Aurora Auto Scaling API to apply a scaling policy based on a predefined or custom metric.
Before you begin
Before you can use Aurora Auto Scaling with an Aurora DB cluster, you must first create an Aurora DB cluster with a primary DB instance. For more information about creating an Aurora DB cluster, see Creating an Amazon Aurora DB cluster.
Aurora Auto Scaling only scales a DB cluster if the DB cluster is in the available state.
When Aurora Auto Scaling adds a new Aurora Replica, the new Aurora Replica is the same DB instance class as the one used by the primary instance. For more information about DB instance classes, see Aurora DB instance classes. Also, the promotion tier for new Aurora Replicas is set to the last priority, which is 15 by default. This means that during a failover, a replica with a better priority, such as one created manually, would be promoted first. For more information, see Fault tolerance for an Aurora DB cluster.
Aurora Auto Scaling only removes Aurora Replicas that it created.
To benefit from Aurora Auto Scaling, your applications must support connections to new Aurora Replicas. To do so, we recommend using the Aurora reader endpoint. For Aurora MySQL you can use a driver such as the Amazon JDBC Driver for MySQL. For more information, see Connecting to an Amazon Aurora DB cluster.
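For example, a read-only session opened through the cluster reader endpoint is balanced across the available Aurora Replicas, including replicas that Aurora Auto Scaling adds. The following is a minimal sketch using the mysql command-line client; the endpoint, user name, and database name are placeholders, not values from this guide.

# Connect through the cluster reader endpoint (placeholder values).
mysql -h mydbcluster.cluster-ro-123456789012.us-east-2.rds.amazonaws.com \
    -P 3306 -u admin -p mydatabase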
Aurora global databases currently don't support Aurora Auto Scaling for secondary DB clusters.
Aurora Auto Scaling policies
Aurora Auto Scaling uses a scaling policy to adjust the number of Aurora Replicas in an Aurora DB cluster. Aurora Auto Scaling has the following components:
- A service-linked role
- A target metric
- Minimum and maximum capacity
- A cooldown period
Service linked role
Aurora Auto Scaling uses the
AWSServiceRoleForApplicationAutoScaling_RDSCluster
service-linked
role. For more information, see Service-linked roles for Application Auto Scaling in the
Application Auto Scaling User Guide.
Target metric
In a target-tracking scaling policy configuration, you specify a predefined or custom metric and a target value for that metric. Aurora Auto Scaling creates and manages CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and target value. The scaling policy adds or removes Aurora Replicas as required to keep the metric at, or close to, the specified target value. In addition to keeping the metric close to the target value, a target-tracking scaling policy also adjusts to fluctuations in the metric due to a changing workload. Such a policy also minimizes rapid fluctuations in the number of available Aurora Replicas for your DB cluster.
For example, take a scaling policy that uses the predefined average CPU utilization metric. Such a policy can keep CPU utilization at, or close to, a specified percentage of utilization, such as 40 percent.
For each Aurora DB cluster, you can create only one Auto Scaling policy for each target metric.
Minimum and maximum capacity
You can specify the maximum number of Aurora Replicas to be managed by Application Auto Scaling. This value must be set to 0–15, and must be equal to or greater than the value specified for the minimum number of Aurora Replicas.
You can also specify the minimum number of Aurora Replicas to be managed by Application Auto Scaling. This value must be set to 0–15, and must be equal to or less than the value specified for the maximum number of Aurora Replicas.
The minimum and maximum capacity are set for an Aurora DB cluster. The specified values apply to all of the policies associated with that Aurora DB cluster.
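Because the minimum and maximum capacity belong to the registered scalable target rather than to an individual policy, you can change them later by registering the cluster again with new values (the register-scalable-target command is described later in this topic). A minimal sketch, using a hypothetical cluster name:

# Update the capacity range on an already registered cluster by re-registering it
# with new --min-capacity and --max-capacity values (cluster name is hypothetical).
aws application-autoscaling register-scalable-target \
    --service-namespace rds \
    --resource-id cluster:myscalablecluster \
    --scalable-dimension rds:cluster:ReadReplicaCount \
    --min-capacity 2 \
    --max-capacity 10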
Cooldown period
You can tune the responsiveness of a target-tracking scaling policy by adding cooldown periods that affect scaling your Aurora DB cluster in and out. A cooldown period blocks subsequent scale-in or scale-out requests until the period expires. These blocks slow the deletions of Aurora Replicas in your Aurora DB cluster for scale-in requests, and the creation of Aurora Replicas for scale-out requests.
You can specify the following cooldown periods:
- A scale-in activity reduces the number of Aurora Replicas in your Aurora DB cluster. A scale-in cooldown period specifies the amount of time, in seconds, after a scale-in activity completes before another scale-in activity can start.
- A scale-out activity increases the number of Aurora Replicas in your Aurora DB cluster. A scale-out cooldown period specifies the amount of time, in seconds, after a scale-out activity completes before another scale-out activity can start.
Note: A scale-out cooldown period is ignored if a subsequent scale-out request is for a larger number of Aurora Replicas than the first request.
If you don't set the scale-in or scale-out cooldown period, the default for each is 300 seconds.
Enable or disable scale-in activities
You can enable or disable scale-in activities for a policy. Enabling scale-in activities allows the scaling policy to delete Aurora Replicas. When scale-in activities are enabled, the scale-in cooldown period in the scaling policy applies to scale-in activities. Disabling scale-in activities prevents the scaling policy from deleting Aurora Replicas.
Scale-out activities are always enabled so that the scaling policy can create Aurora Replicas as needed.
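To see which scale-in and scale-out activities Aurora Auto Scaling has performed for a cluster, you can query Application Auto Scaling with the Amazon CLI. A minimal sketch, assuming a cluster named myscalablecluster that is registered as described later in this topic:

# List recent scale-in and scale-out activities for the cluster.
aws application-autoscaling describe-scaling-activities \
    --service-namespace rds \
    --resource-id cluster:myscalablecluster \
    --scalable-dimension rds:cluster:ReadReplicaCount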
Adding a scaling policy to an Aurora DB cluster
You can add a scaling policy using the Amazon Web Services Management Console, the Amazon CLI, or the Application Auto Scaling API.
For an example that adds a scaling policy using Amazon CloudFormation, see Declaring a scaling policy for an Aurora DB cluster in the Amazon CloudFormation User Guide.
You can add a scaling policy to an Aurora DB cluster by using the Amazon Web Services Management Console.
To add an auto scaling policy to an Aurora DB cluster
1. Sign in to the Amazon Web Services Management Console and open the Amazon RDS console at https://console.amazonaws.cn/rds/.
2. In the navigation pane, choose Databases.
3. Choose the Aurora DB cluster that you want to add a policy for.
4. Choose the Logs & events tab.
5. In the Auto scaling policies section, choose Add. The Add Auto Scaling policy dialog box appears.
6. For Policy Name, type the policy name.
7. For the target metric, choose one of the following:
   - Average CPU utilization of Aurora Replicas to create a policy based on the average CPU utilization.
   - Average connections of Aurora Replicas to create a policy based on the average number of connections to Aurora Replicas.
8. For the target value, type one of the following:
   - If you chose Average CPU utilization of Aurora Replicas in the previous step, type the percentage of CPU utilization that you want to maintain on Aurora Replicas.
   - If you chose Average connections of Aurora Replicas in the previous step, type the number of connections that you want to maintain.
   Aurora Replicas are added or removed to keep the metric close to the specified value.
9. (Optional) Expand Additional Configuration to create a scale-in or scale-out cooldown period.
10. For Minimum capacity, type the minimum number of Aurora Replicas that the Aurora Auto Scaling policy is required to maintain.
11. For Maximum capacity, type the maximum number of Aurora Replicas that the Aurora Auto Scaling policy is required to maintain.
12. Choose Add policy.
For example, you might create an Auto Scaling policy based on an average CPU utilization of 40 percent, with a minimum of 5 Aurora Replicas and a maximum of 15 Aurora Replicas.
Similarly, you might create an auto scaling policy based on an average number of connections of 100, with a minimum of two Aurora Replicas and a maximum of eight Aurora Replicas.
You can apply a scaling policy based on either a predefined or custom metric. To do so, you can use the Amazon CLI or the Application Auto Scaling API. The first step is to register your Aurora DB cluster with Application Auto Scaling.
Registering an Aurora DB cluster
Before you can use Aurora Auto Scaling with an Aurora DB cluster, you register your Aurora DB cluster with Application Auto Scaling.
You do so to define the scaling dimension and limits to be applied to that cluster. Application Auto Scaling dynamically scales the Aurora DB cluster along the rds:cluster:ReadReplicaCount scalable dimension, which represents the number of Aurora Replicas.
To register your Aurora DB cluster, you can use either the Amazon CLI or the Application Auto Scaling API.
Amazon CLI
To register your Aurora DB cluster, use the register-scalable-target Amazon CLI command with the following parameters:
- --service-namespace – Set this value to rds.
- --resource-id – The resource identifier for the Aurora DB cluster. For this parameter, the resource type is cluster and the unique identifier is the name of the Aurora DB cluster, for example cluster:myscalablecluster.
- --scalable-dimension – Set this value to rds:cluster:ReadReplicaCount.
- --min-capacity – The minimum number of reader DB instances to be managed by Application Auto Scaling. For information about the relationship between --min-capacity, --max-capacity, and the number of DB instances in your cluster, see Minimum and maximum capacity.
- --max-capacity – The maximum number of reader DB instances to be managed by Application Auto Scaling. For information about the relationship between --min-capacity, --max-capacity, and the number of DB instances in your cluster, see Minimum and maximum capacity.
In the following example, you register an Aurora DB cluster named myscalablecluster. The registration indicates that the DB cluster should be dynamically scaled to have from one to eight Aurora Replicas.
For Linux, macOS, or Unix:
aws application-autoscaling register-scalable-target \
    --service-namespace rds \
    --resource-id cluster:myscalablecluster \
    --scalable-dimension rds:cluster:ReadReplicaCount \
    --min-capacity 1 \
    --max-capacity 8
For Windows:
aws application-autoscaling register-scalable-target ^
    --service-namespace rds ^
    --resource-id cluster:myscalablecluster ^
    --scalable-dimension rds:cluster:ReadReplicaCount ^
    --min-capacity 1 ^
    --max-capacity 8
Application Auto Scaling API
To register your Aurora DB cluster with Application Auto Scaling, use the RegisterScalableTarget Application Auto Scaling API operation with the following parameters:
- ServiceNamespace – Set this value to rds.
- ResourceId – The resource identifier for the Aurora DB cluster. For this parameter, the resource type is cluster and the unique identifier is the name of the Aurora DB cluster, for example cluster:myscalablecluster.
- ScalableDimension – Set this value to rds:cluster:ReadReplicaCount.
- MinCapacity – The minimum number of reader DB instances to be managed by Application Auto Scaling. For information about the relationship between MinCapacity, MaxCapacity, and the number of DB instances in your cluster, see Minimum and maximum capacity.
- MaxCapacity – The maximum number of reader DB instances to be managed by Application Auto Scaling. For information about the relationship between MinCapacity, MaxCapacity, and the number of DB instances in your cluster, see Minimum and maximum capacity.
In the following example, you register an Aurora DB cluster named myscalablecluster with the Application Auto Scaling API. This registration indicates that the DB cluster should be dynamically scaled to have from one to eight Aurora Replicas.
POST / HTTP/1.1
Host: autoscaling.us-east-2.amazonaws.com
Accept-Encoding: identity
Content-Length: 219
X-Amz-Target: AnyScaleFrontendService.RegisterScalableTarget
X-Amz-Date: 20160506T182145Z
User-Agent: aws-cli/1.10.23 Python/2.7.11 Darwin/15.4.0 botocore/1.4.8
Content-Type: application/x-amz-json-1.1
Authorization: AUTHPARAMS

{
    "ServiceNamespace": "rds",
    "ResourceId": "cluster:myscalablecluster",
    "ScalableDimension": "rds:cluster:ReadReplicaCount",
    "MinCapacity": 1,
    "MaxCapacity": 8
}
Defining a scaling policy for an Aurora DB cluster
A target-tracking scaling policy configuration is represented by a JSON block in which you define the metrics and target values. You can save a scaling policy configuration as a JSON block in a text file. You use that text file when invoking the Amazon CLI or the Application Auto Scaling API. For more information about policy configuration syntax, see TargetTrackingScalingPolicyConfiguration in the Application Auto Scaling API Reference.
The following options are available for defining a target-tracking scaling policy configuration.
Using a predefined metric
By using predefined metrics, you can quickly define a target-tracking scaling policy for an Aurora DB cluster that works well with both target tracking and dynamic scaling in Aurora Auto Scaling.
Currently, Aurora supports the following predefined metrics in Aurora Auto Scaling:
- RDSReaderAverageCPUUtilization – The average value of the CPUUtilization metric in CloudWatch across all Aurora Replicas in the Aurora DB cluster.
- RDSReaderAverageDatabaseConnections – The average value of the DatabaseConnections metric in CloudWatch across all Aurora Replicas in the Aurora DB cluster.
For more information about the CPUUtilization and DatabaseConnections metrics, see Amazon CloudWatch metrics for Amazon Aurora.
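If you want to inspect the underlying CloudWatch data before choosing a target value, you can query the metric directly. The following sketch retrieves the average CPUUtilization across the reader instances of a cluster over a one-hour window; the cluster name and time range are placeholders, and the dimensions match those used in the custom metric example later in this topic.

# Inspect average CPUUtilization across the cluster's reader instances
# (cluster name and time range are placeholders).
aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS \
    --metric-name CPUUtilization \
    --dimensions Name=DBClusterIdentifier,Value=myscalablecluster Name=Role,Value=READER \
    --statistics Average \
    --period 300 \
    --start-time 2024-01-01T00:00:00Z \
    --end-time 2024-01-01T01:00:00Z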
To use a predefined metric in your scaling policy, you create a target-tracking configuration for your scaling policy. This configuration must include a PredefinedMetricSpecification for the predefined metric and a TargetValue for the target value of that metric.
Example
The following example describes a typical policy configuration for target-tracking scaling for an Aurora DB cluster. In this configuration, the RDSReaderAverageCPUUtilization predefined metric is used to adjust the Aurora DB cluster based on an average CPU utilization of 40 percent across all Aurora Replicas.
{ "TargetValue": 40.0, "PredefinedMetricSpecification": { "PredefinedMetricType": "RDSReaderAverageCPUUtilization" } }
Using a custom metric
By using custom metrics, you can define a target-tracking scaling policy that meets your custom requirements. You can define a custom metric based on any Aurora metric that changes in proportion to scaling.
Not all Aurora metrics work for target tracking. The metric must be a valid utilization metric and describe how busy an instance is. The value of the metric must increase or decrease in proportion to the number of Aurora Replicas in the Aurora DB cluster. This proportional increase or decrease is necessary to use the metric data to proportionally scale out or in the number of Aurora Replicas.
Example
The following example describes a target-tracking configuration for a scaling policy. In this configuration, a custom metric adjusts an Aurora DB cluster based on an average CPU utilization of 50 percent across all Aurora Replicas in an Aurora DB cluster named my-db-cluster.
{ "TargetValue": 50, "CustomizedMetricSpecification": { "MetricName": "CPUUtilization", "Namespace": "AWS/RDS", "Dimensions": [ {"Name": "DBClusterIdentifier","Value": "my-db-cluster"}, {"Name": "Role","Value": "READER"} ], "Statistic": "Average", "Unit": "Percent" } }
Using cooldown periods
You can specify a value, in seconds, for ScaleOutCooldown to add a cooldown period for scaling out your Aurora DB cluster. Similarly, you can add a value, in seconds, for ScaleInCooldown to add a cooldown period for scaling in your Aurora DB cluster. For more information about ScaleInCooldown and ScaleOutCooldown, see TargetTrackingScalingPolicyConfiguration in the Application Auto Scaling API Reference.
The following example describes a target-tracking configuration for a scaling policy. In this configuration, the RDSReaderAverageCPUUtilization predefined metric is used to adjust an Aurora DB cluster based on an average CPU utilization of 40 percent across all Aurora Replicas in that Aurora DB cluster. The configuration provides a scale-in cooldown period of 10 minutes and a scale-out cooldown period of 5 minutes.
{ "TargetValue": 40.0, "PredefinedMetricSpecification": { "PredefinedMetricType": "RDSReaderAverageCPUUtilization" }, "ScaleInCooldown": 600, "ScaleOutCooldown": 300 }
Disabling scale-in activity
You can prevent the target-tracking scaling policy configuration from scaling in your Aurora DB cluster by disabling scale-in activity. Disabling scale-in activity prevents the scaling policy from deleting Aurora Replicas, while still allowing the scaling policy to create them as needed.
You can specify a Boolean value for DisableScaleIn to enable or disable scale-in activity for your Aurora DB cluster. For more information about DisableScaleIn, see TargetTrackingScalingPolicyConfiguration in the Application Auto Scaling API Reference.
The following example describes a target-tracking configuration for a scaling policy. In this configuration, the RDSReaderAverageCPUUtilization predefined metric adjusts an Aurora DB cluster based on an average CPU utilization of 40 percent across all Aurora Replicas in that Aurora DB cluster. The configuration disables scale-in activity for the scaling policy.
{ "TargetValue": 40.0, "PredefinedMetricSpecification": { "PredefinedMetricType": "RDSReaderAverageCPUUtilization" }, "DisableScaleIn": true }
Applying a scaling policy to an Aurora DB cluster
After registering your Aurora DB cluster with Application Auto Scaling and defining a scaling policy, you apply the scaling policy to the registered Aurora DB cluster. To apply a scaling policy to an Aurora DB cluster, you can use the Amazon CLI or the Application Auto Scaling API.
To apply a scaling policy to your Aurora DB cluster, use the put-scaling-policy Amazon CLI command with the following parameters:
- --policy-name – The name of the scaling policy.
- --policy-type – Set this value to TargetTrackingScaling.
- --resource-id – The resource identifier for the Aurora DB cluster. For this parameter, the resource type is cluster and the unique identifier is the name of the Aurora DB cluster, for example cluster:myscalablecluster.
- --service-namespace – Set this value to rds.
- --scalable-dimension – Set this value to rds:cluster:ReadReplicaCount.
- --target-tracking-scaling-policy-configuration – The target-tracking scaling policy configuration to use for the Aurora DB cluster.
In the following example, you apply a target-tracking scaling policy named myscalablepolicy to an Aurora DB cluster named myscalablecluster with Application Auto Scaling. To do so, you use a policy configuration saved in a file named config.json.
For Linux, macOS, or Unix:
aws application-autoscaling put-scaling-policy \
    --policy-name myscalablepolicy \
    --policy-type TargetTrackingScaling \
    --resource-id cluster:myscalablecluster \
    --service-namespace rds \
    --scalable-dimension rds:cluster:ReadReplicaCount \
    --target-tracking-scaling-policy-configuration file://config.json
For Windows:
aws application-autoscaling put-scaling-policy ^
    --policy-name myscalablepolicy ^
    --policy-type TargetTrackingScaling ^
    --resource-id cluster:myscalablecluster ^
    --service-namespace rds ^
    --scalable-dimension rds:cluster:ReadReplicaCount ^
    --target-tracking-scaling-policy-configuration file://config.json
To apply a scaling policy to your Aurora DB cluster with the Application Auto Scaling API, use the PutScalingPolicy Application Auto Scaling API operation with the following parameters:
- PolicyName – The name of the scaling policy.
- ServiceNamespace – Set this value to rds.
- ResourceId – The resource identifier for the Aurora DB cluster. For this parameter, the resource type is cluster and the unique identifier is the name of the Aurora DB cluster, for example cluster:myscalablecluster.
- ScalableDimension – Set this value to rds:cluster:ReadReplicaCount.
- PolicyType – Set this value to TargetTrackingScaling.
- TargetTrackingScalingPolicyConfiguration – The target-tracking scaling policy configuration to use for the Aurora DB cluster.
In the following example, you apply a target-tracking scaling policy named myscalablepolicy to an Aurora DB cluster named myscalablecluster with the Application Auto Scaling API. You use a policy configuration based on the RDSReaderAverageCPUUtilization predefined metric.
POST / HTTP/1.1
Host: autoscaling.us-east-2.amazonaws.com
Accept-Encoding: identity
Content-Length: 219
X-Amz-Target: AnyScaleFrontendService.PutScalingPolicy
X-Amz-Date: 20160506T182145Z
User-Agent: aws-cli/1.10.23 Python/2.7.11 Darwin/15.4.0 botocore/1.4.8
Content-Type: application/x-amz-json-1.1
Authorization: AUTHPARAMS

{
    "PolicyName": "myscalablepolicy",
    "ServiceNamespace": "rds",
    "ResourceId": "cluster:myscalablecluster",
    "ScalableDimension": "rds:cluster:ReadReplicaCount",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 40.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        }
    }
}
Editing a scaling policy
You can edit a scaling policy using the Amazon Web Services Management Console, the Amazon CLI, or the Application Auto Scaling API.
You can edit a scaling policy by using the Amazon Web Services Management Console.
To edit an auto scaling policy for an Aurora DB cluster
1. Sign in to the Amazon Web Services Management Console and open the Amazon RDS console at https://console.amazonaws.cn/rds/.
2. In the navigation pane, choose Databases.
3. Choose the Aurora DB cluster whose auto scaling policy you want to edit.
4. Choose the Logs & events tab.
5. In the Auto scaling policies section, choose the auto scaling policy, and then choose Edit.
6. Make changes to the policy.
7. Choose Save.
You can use the Amazon CLI or the Application Auto Scaling API to edit a scaling policy in the same way that you apply a scaling policy:
- When using the Amazon CLI, specify the name of the policy you want to edit in the --policy-name parameter. Specify new values for the parameters you want to change.
- When using the Application Auto Scaling API, specify the name of the policy you want to edit in the PolicyName parameter. Specify new values for the parameters you want to change.
For more information, see Applying a scaling policy to an Aurora DB cluster.
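For example, to change the target value of the policy created earlier from the Amazon CLI, you might edit config.json and then run put-scaling-policy again with the same policy name. The following sketch reuses the names from the earlier examples:

# Re-apply the policy with an updated configuration; the existing policy
# named myscalablepolicy is overwritten with the new settings.
aws application-autoscaling put-scaling-policy \
    --policy-name myscalablepolicy \
    --policy-type TargetTrackingScaling \
    --resource-id cluster:myscalablecluster \
    --service-namespace rds \
    --scalable-dimension rds:cluster:ReadReplicaCount \
    --target-tracking-scaling-policy-configuration file://config.json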
Deleting a scaling policy
You can delete a scaling policy using the Amazon Web Services Management Console, the Amazon CLI, or the Application Auto Scaling API.
You can delete a scaling policy by using the Amazon Web Services Management Console.
To delete an auto scaling policy for an Aurora DB cluster
1. Sign in to the Amazon Web Services Management Console and open the Amazon RDS console at https://console.amazonaws.cn/rds/.
2. In the navigation pane, choose Databases.
3. Choose the Aurora DB cluster whose auto scaling policy you want to delete.
4. Choose the Logs & events tab.
5. In the Auto scaling policies section, choose the auto scaling policy, and then choose Delete.
To delete a scaling policy from your Aurora DB cluster, use the delete-scaling-policy Amazon CLI command with the following parameters:
- --policy-name – The name of the scaling policy.
- --resource-id – The resource identifier for the Aurora DB cluster. For this parameter, the resource type is cluster and the unique identifier is the name of the Aurora DB cluster, for example cluster:myscalablecluster.
- --service-namespace – Set this value to rds.
- --scalable-dimension – Set this value to rds:cluster:ReadReplicaCount.
In the following example, you delete a target-tracking scaling policy named myscalablepolicy from an Aurora DB cluster named myscalablecluster.
For Linux, macOS, or Unix:
aws application-autoscaling delete-scaling-policy \
    --policy-name myscalablepolicy \
    --resource-id cluster:myscalablecluster \
    --service-namespace rds \
    --scalable-dimension rds:cluster:ReadReplicaCount
For Windows:
aws application-autoscaling delete-scaling-policy ^
    --policy-name myscalablepolicy ^
    --resource-id cluster:myscalablecluster ^
    --service-namespace rds ^
    --scalable-dimension rds:cluster:ReadReplicaCount
To delete a scaling policy from your Aurora DB cluster, use the DeleteScalingPolicy Application Auto Scaling API operation with the following parameters:
- PolicyName – The name of the scaling policy.
- ServiceNamespace – Set this value to rds.
- ResourceId – The resource identifier for the Aurora DB cluster. For this parameter, the resource type is cluster and the unique identifier is the name of the Aurora DB cluster, for example cluster:myscalablecluster.
- ScalableDimension – Set this value to rds:cluster:ReadReplicaCount.
In the following example, you delete a target-tracking scaling policy named myscalablepolicy from an Aurora DB cluster named myscalablecluster with the Application Auto Scaling API.
POST / HTTP/1.1
Host: autoscaling.us-east-2.amazonaws.com
Accept-Encoding: identity
Content-Length: 219
X-Amz-Target: AnyScaleFrontendService.DeleteScalingPolicy
X-Amz-Date: 20160506T182145Z
User-Agent: aws-cli/1.10.23 Python/2.7.11 Darwin/15.4.0 botocore/1.4.8
Content-Type: application/x-amz-json-1.1
Authorization: AUTHPARAMS

{
    "PolicyName": "myscalablepolicy",
    "ServiceNamespace": "rds",
    "ResourceId": "cluster:myscalablecluster",
    "ScalableDimension": "rds:cluster:ReadReplicaCount"
}
DB instance IDs and tagging
When a replica is added by Aurora Auto Scaling, its DB instance ID is prefixed by application-autoscaling-, for example, application-autoscaling-61aabbcc-4e2f-4c65-b620-ab7421abc123.
The following tag is automatically added to the DB instance. You can view it on the Tags tab of the DB instance detail page.
| Tag | Value |
|---|---|
| application-autoscaling:resourceId | cluster:mynewcluster-cluster |
For more information on Amazon RDS resource tags, see Tagging Amazon RDS resources.
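To view this tag from the Amazon CLI rather than the console, you can call list-tags-for-resource with the ARN of the auto-scaled DB instance. The ARN in the following sketch is a placeholder:

# View the tags on an auto-scaled replica (the instance ARN is a placeholder).
aws rds list-tags-for-resource \
    --resource-name arn:aws:rds:us-east-2:123456789012:db:application-autoscaling-61aabbcc-4e2f-4c65-b620-ab7421abc123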