Optimize Amazon ECS capacity and availability
Application availability is crucial for providing an error-free experience and for minimizing application latency. Availability depends on having resources with enough capacity to meet demand. AWS provides several mechanisms to manage availability. For applications hosted on Amazon ECS, these include autoscaling and Availability Zones (AZs). Autoscaling manages the number of tasks or instances based on metrics you define, while Availability Zones allow you to host your application in isolated but geographically close locations.
As with task sizes, capacity and availability present certain trade-offs you must consider. Ideally, capacity would be perfectly aligned with demand. There would always be just enough capacity to serve requests and process jobs to meet Service Level Objectives (SLOs), including low latency and error rates. Capacity would never be too high, leading to excessive cost; nor would it ever be too low, leading to high latency and error rates.
Autoscaling is a latent process. First, real-time metrics must be delivered to CloudWatch. Then, they need to be aggregated for analysis, which can take up to several minutes depending on the granularity of the metric. CloudWatch compares the metrics against alarm thresholds to identify a shortage or excess of resources. To prevent instability, configure alarms to require that the threshold be breached for a few minutes before the alarm fires. It also takes time to provision new tasks and to terminate tasks that are no longer needed.
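For example, here is a minimal sketch of a CloudWatch alarm that only fires after the threshold has been breached for three consecutive minutes. The alarm, cluster, and service names are hypothetical placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Require three consecutive one-minute datapoints above the threshold
# before the alarm fires, so a brief spike doesn't trigger a scale-out.
cloudwatch.put_metric_alarm(
    AlarmName="ecs-service-cpu-high",          # hypothetical name
    Namespace="AWS/ECS",
    MetricName="CPUUtilization",
    Dimensions=[
        {"Name": "ClusterName", "Value": "my-cluster"},
        {"Name": "ServiceName", "Value": "my-service"},
    ],
    Statistic="Average",
    Period=60,                # one-minute aggregation
    EvaluationPeriods=3,      # examine the last three periods
    DatapointsToAlarm=3,      # all three must breach the threshold
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
)
```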
Because of these delays, it's important that you maintain some headroom by over-provisioning. Doing this can help accommodate short-term bursts in demand. This also helps your application service additional requests without reaching saturation. As a good practice, set your scaling target between 60% and 80% utilization. This helps your application better handle bursts of extra demand while additional capacity is still being provisioned.
Another reason we recommend over-provisioning is so that you can quickly respond to Availability Zone failures. We recommend that production workloads be served from multiple Availability Zones. If an Availability Zone fails, the tasks running in the remaining Availability Zones can then still serve the demand. If your application runs in two Availability Zones, double your normal task count so that you can provide immediate capacity during a failure. If your application runs in three Availability Zones, we recommend that you run 1.5 times your normal task count. That is, run three tasks for every two that are needed for ordinary serving.
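This arithmetic generalizes: to retain full capacity after losing one of N Availability Zones, run N/(N-1) times your normal task count. The following is a small sketch of that calculation, generalizing the two cases above:

```python
import math

def overprovisioned_task_count(baseline_tasks: int, az_count: int) -> int:
    """Tasks to run so the remaining AZs can absorb one AZ failure."""
    if az_count < 2:
        raise ValueError("Use at least two Availability Zones for production.")
    # Losing one AZ removes 1/az_count of capacity, so scale the
    # baseline by az_count / (az_count - 1) to compensate.
    return math.ceil(baseline_tasks * az_count / (az_count - 1))

print(overprovisioned_task_count(10, 2))  # 20: double the task count
print(overprovisioned_task_count(10, 3))  # 15: 1.5 times the task count
```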
Maximizing scaling speed
Autoscaling is a reactive process that takes time to take effect. However, there are some ways to help minimize the time that's needed to scale out.
Minimize image size. Larger images take longer to download from an image repository and unpack. Therefore, keeping image sizes smaller reduces the amount of time that's needed for a container to start. To reduce the image size, you can follow these specific recommendations:
- If you can build a static binary or use Golang, build your image FROM scratch and include only your binary application in the resulting image.
- Use minimized base images from upstream distro vendors, such as Amazon Linux or Ubuntu.
- Don't include any build artifacts in your final image. Using multi-stage builds can help with this.
- Compact RUN stages wherever possible. Each RUN stage creates a new image layer, leading to an additional round trip to download the layer. A single RUN stage that has multiple commands joined by && has fewer layers than one with multiple RUN stages.
- If you want to include data, such as ML inference data, in your final image, include only the data that's needed to start up and begin serving traffic. If you can fetch data on demand from Amazon S3 or other storage without impacting service, then store your data in those places instead (see the sketch after this list).
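For the last recommendation, here is a minimal sketch of fetching inference data from Amazon S3 at container startup instead of baking it into the image, assuming boto3; the bucket, key, and local path are hypothetical placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Download model data at startup rather than shipping it in the image,
# keeping the image small and fast to pull.
s3.download_file(
    Bucket="my-model-artifacts",                  # hypothetical bucket
    Key="models/recommender/v42/weights.bin",     # hypothetical key
    Filename="/opt/app/weights.bin",              # hypothetical path
)
```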
Keep your images close. The higher the network latency, the longer it takes to download the image. Host your images in a repository in the same Region that your workload is in. Amazon ECR is a high-performance image repository that's available in every Region that Amazon ECS is available in. Avoid traversing the internet or a VPN link to download container images. Hosting your images in the same Region improves overall reliability, because it mitigates the risk of network connectivity issues and availability issues in a different Region. You can also implement Amazon ECR cross-Region replication to help with this.
Reduce load balancer health check thresholds. Load balancers perform health checks before sending traffic to your application. With the default health check configuration for a target group, it can take 90 seconds or longer before a new task starts receiving requests. During this time, the load balancer checks the task's health status while the existing tasks absorb the load. Lowering the health check interval and threshold count lets your application accept traffic sooner and reduces the load on other tasks.
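As a sketch, the following tightens an existing target group's health checks using boto3. The ARN is a hypothetical placeholder, and the right interval and threshold depend on your application:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Two successful checks, 10 seconds apart, can let a healthy task start
# receiving traffic in roughly 20 seconds instead of 90 or more.
elbv2.modify_target_group(
    TargetGroupArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "targetgroup/my-tg/0123456789abcdef"  # hypothetical ARN
    ),
    HealthCheckIntervalSeconds=10,
    HealthyThresholdCount=2,
)
```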
Consider cold-start performance. Some applications use runtimes, such as Java, that perform Just-In-Time (JIT) compilation. The compilation process, at least as the application starts, can slow application performance. A workaround is to rewrite the latency-critical parts of your workload in languages that don't impose a cold-start performance penalty.
Use step scaling, not target-tracking scaling policies. There are several Application Auto Scaling options for tasks. Target tracking is the easiest mode to use. With it, all you need to do is set a target value for a metric, such as average CPU utilization. Then, the auto scaler automatically manages the number of tasks that are needed to attain that value. With step scaling, you can react to changes in demand more quickly, because you define the specific thresholds for your scaling metrics and how many tasks to add or remove when those thresholds are crossed. More importantly, you can react very quickly to changes in demand by minimizing the amount of time a threshold alarm is in breach. For more information, see Service Auto Scaling.
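As a sketch, here's a step scaling policy for an ECS service defined with boto3. The cluster and service names and the step adjustments are illustrative, the scalable target is assumed to be registered already, and the returned policy ARN would be attached to a CloudWatch alarm separately:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Add capacity in steps as the alarm breach grows.
autoscaling.put_scaling_policy(
    PolicyName="scale-out-on-cpu",
    ServiceNamespace="ecs",
    ResourceId="service/my-cluster/my-service",   # hypothetical names
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="StepScaling",
    StepScalingPolicyConfiguration={
        "AdjustmentType": "ChangeInCapacity",
        "Cooldown": 60,
        "StepAdjustments": [
            # Threshold breached by up to 10: add 2 tasks.
            {
                "MetricIntervalLowerBound": 0,
                "MetricIntervalUpperBound": 10,
                "ScalingAdjustment": 2,
            },
            # Breached by more than 10: add 4 tasks.
            {"MetricIntervalLowerBound": 10, "ScalingAdjustment": 4},
        ],
    },
)
```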
If you use Amazon EC2 instances to provide cluster capacity, consider the following recommendations:
Use larger Amazon EC2 instances and faster Amazon EBS volumes. You can improve image download and preparation speeds by using a larger Amazon EC2 instance and a faster Amazon EBS volume. Within a given instance family, the network and Amazon EBS maximum throughput increases as the instance size increases (for example, from m5.xlarge to m5.2xlarge). Additionally, you can customize Amazon EBS volumes to increase their throughput and IOPS. For example, if you use gp2 volumes, use larger volumes that offer more baseline throughput. If you use gp3 volumes, specify throughput and IOPS when you create the volume.
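For example, a gp3 volume with provisioned throughput and IOPS might be created like this with boto3; the Availability Zone, size, and performance numbers are illustrative:

```python
import boto3

ec2 = boto3.client("ec2")

# gp3 decouples performance from size: throughput and IOPS are set
# directly at creation time rather than scaling with volume size.
ec2.create_volume(
    AvailabilityZone="us-east-1a",   # illustrative AZ
    Size=100,                        # GiB
    VolumeType="gp3",
    Iops=6000,                       # above the 3,000 IOPS gp3 baseline
    Throughput=500,                  # MiB/s, above the 125 MiB/s baseline
)
```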
Use bridge network mode for tasks running on Amazon EC2 instances. Tasks that use bridge network mode on Amazon EC2 start faster than tasks that use the awsvpc network mode. When awsvpc network mode is used, Amazon ECS attaches an elastic network interface (ENI) to the instance before launching the task. This introduces additional latency. There are several tradeoffs for using bridge networking, though. These tasks don't get their own security group, and there are some implications for load balancing. For more information, see Load balancer target groups in the Elastic Load Balancing User Guide.
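A minimal sketch of selecting bridge mode in a task definition registered with boto3 follows; the family name, image, and port mapping are hypothetical:

```python
import boto3

ecs = boto3.client("ecs")

# bridge mode skips the per-task ENI attachment that awsvpc requires.
ecs.register_task_definition(
    family="my-web-app",                       # hypothetical family
    networkMode="bridge",
    requiresCompatibilities=["EC2"],
    containerDefinitions=[
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest",
            "memory": 512,
            "essential": True,
            # Host port 0 lets Docker pick an ephemeral port, which the
            # load balancer discovers through dynamic port mapping.
            "portMappings": [{"containerPort": 8080, "hostPort": 0}],
        }
    ],
)
```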
Handling demand shocks
Some applications experience sudden large shocks in demand. This happens for a variety of reasons: a news event, a big sale, a media event, or some other event that goes viral and causes traffic to increase quickly and significantly. If unplanned for, this can cause demand to quickly outstrip available resources.
The best way to handle demand shocks is to anticipate them and plan accordingly. Because autoscaling can take time, we recommend that you scale out your application before the demand shock begins. For the best results, have a business plan that involves tight collaboration between teams that use a shared calendar. The team that's planning the event should work closely with the team in charge of the application in advance. This gives that team enough time to have a clear scheduling plan. They can schedule capacity to scale out before the event and to scale in after the event. For more information, see Scheduled scaling in the Application Auto Scaling User Guide.
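As a sketch, scheduled actions can raise capacity before the event and lower it afterward. The names, times, and capacities here are hypothetical:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Shared identifiers for the ECS service being scaled (hypothetical names).
common = {
    "ServiceNamespace": "ecs",
    "ResourceId": "service/my-cluster/my-service",
    "ScalableDimension": "ecs:service:DesiredCount",
}

# Scale out before the event starts.
autoscaling.put_scheduled_action(
    ScheduledActionName="pre-event-scale-out",
    Schedule="at(2025-11-28T08:00:00)",
    ScalableTargetAction={"MinCapacity": 100, "MaxCapacity": 200},
    **common,
)

# Scale back in after the event ends.
autoscaling.put_scheduled_action(
    ScheduledActionName="post-event-scale-in",
    Schedule="at(2025-11-29T02:00:00)",
    ScalableTargetAction={"MinCapacity": 10, "MaxCapacity": 50},
    **common,
)
```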
If you have an Enterprise Support plan, be sure to also work with your Technical Account Manager (TAM). Your TAM can verify your service quotas and ensure that any necessary quotas are raised before the event begins. This way, you don't accidentally hit any service quotas. They can also help you by pre-warming services such as load balancers to make sure your event goes smoothly.
Handling unscheduled demand shocks is a more difficult problem. Unscheduled shocks, if large enough in amplitude, can quickly cause demand to outstrip capacity. They can also outpace the ability of autoscaling to react. The best way to prepare for unscheduled shocks is to over-provision resources. You must have enough resources to handle the maximum anticipated traffic demand at any time.
Maintaining maximum capacity in anticipation of unscheduled demand shocks can be costly. To mitigate the cost impact, find a leading indicator metric or event that predicts a large demand shock is imminent. If the metric or event reliably provides significant advance notice, begin the scale-out process immediately when the event occurs or when the metric crosses the specific threshold that you set.
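One way to act on a leading indicator, sketched below, is to raise the service's minimum capacity as soon as the indicator fires, rather than waiting for utilization alarms to catch up. The handler, names, and capacity values are hypothetical placeholders:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

def on_leading_indicator() -> None:
    """Hypothetical handler invoked when the leading indicator fires."""
    # Raising MinCapacity forces an immediate scale-out, while normal
    # autoscaling policies keep managing capacity above this floor.
    autoscaling.register_scalable_target(
        ServiceNamespace="ecs",
        ResourceId="service/my-cluster/my-service",   # hypothetical names
        ScalableDimension="ecs:service:DesiredCount",
        MinCapacity=50,
        MaxCapacity=200,
    )
```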
If your application is prone to sudden unscheduled demand shocks, consider adding a high-performance mode to your application that sacrifices non-critical functionality but retains crucial functionality for a customer. For example, assume that your application can switch from generating expensive customized responses to serving a static response page. In this scenario, you can increase throughput significantly without scaling the application at all.
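A minimal sketch of such a mode follows, using an environment-variable feature flag, with Flask standing in for your web framework. The flag, route, and static payload are hypothetical:

```python
import os
from flask import Flask

app = Flask(__name__)

# Hypothetical flag that an operator or automation flips during a shock.
HIGH_PERFORMANCE_MODE = os.environ.get("HIGH_PERFORMANCE_MODE") == "1"

# Precomputed at startup; serving it costs almost nothing per request.
STATIC_RESPONSE = '{"recommendations": ["bestseller-1", "bestseller-2"]}'
JSON_HEADERS = {"Content-Type": "application/json"}

@app.route("/recommendations")
def recommendations():
    if HIGH_PERFORMANCE_MODE:
        # Crucial functionality is retained, but the expensive
        # personalization work is skipped entirely.
        return STATIC_RESPONSE, 200, JSON_HEADERS
    return expensive_personalized_response()

def expensive_personalized_response():
    # Normal, costly path (stubbed for this sketch).
    return '{"recommendations": ["personalized-item"]}', 200, JSON_HEADERS
```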
You can consider breaking apart monolithic services to better deal with demand shocks. If your application is a monolithic service that's expensive to run and slow to scale, you might be able to extract or rewrite performance-critical pieces and run them as separate services. These new services then can be scaled independently from less-critical components. Having the flexibility to scale out performance-critical functionality separately from other parts of your application can both reduce the time it takes to add capacity and help conserve costs.