Compute optimized instances

Note

For detailed instance type specifications, see the Amazon EC2 Instance Types Guide. For pricing information, see Amazon EC2 Instance Types.

Compute optimized instances are ideal for compute-bound applications that benefit from high-performance processors.

C5 and C5n instances

These instances are well suited for the following:

  • Batch processing workloads

  • Media transcoding

  • High-performance web servers

  • High-performance computing (HPC)

  • Scientific modeling

  • Dedicated gaming servers and ad serving engines

  • Machine learning inference and other compute-intensive applications

Bare metal instances, such as c5.metal, provide your applications with direct access to physical resources of the host server, such as processors and memory.

C6g, C6gd, and C6gn instances

These instances are powered by Amazon Graviton2 processors and are ideal for running advanced, compute-intensive workloads, such as the following:

  • High-performance computing (HPC)

  • Batch processing

  • Ad serving

  • Video encoding

  • Gaming servers

  • Scientific modeling

  • Distributed analytics

  • CPU-based machine learning inference

Bare metal instances, such as c6g.metal, provide your applications with direct access to physical resources of the host server, such as processors and memory.

C6i and C6id instances

These instances are ideal for running advanced, compute-intensive workloads, such as the following:

  • High-performance computing (HPC)

  • Batch processing

  • Ad serving

  • Video encoding

  • Distributed analytics

  • Highly scalable multiplayer gaming

C6in instances

These instances are well suited for compute-intensive workloads such as the following:

  • Distributed computing applications

  • Network virtual appliances

  • Data analytics

  • High-performance computing (HPC)

  • CPU-based AI/ML

For more information, see Amazon EC2 C6i instances.

C7a instances

These instances are powered by 4th generation AMD EPYC processors and are ideal for running advanced, compute-intensive workloads, such as the following:

  • High-performance computing (HPC)

  • Batch processing

  • Ad serving

  • Video encoding

  • Gaming servers

  • Scientific modeling

  • Distributed analytics

For more information, see Amazon EC2 C7a instances.

C7g and C7gd instances

These instances are powered by Amazon Graviton3 processors and are ideal for running advanced, compute-intensive workloads, such as the following:

  • High-performance computing (HPC)

  • Batch processing

  • Ad serving

  • Video encoding

  • Gaming servers

  • Scientific modeling

  • Distributed analytics

For more information, see Amazon EC2 C7g instances.

C7gn instances

Featuring the new Amazon Nitro Cards, C7gn instances deliver the highest network bandwidth and the best packet-processing performance among Graviton-based Amazon EC2 instances. C7gn instances offer up to 200 Gbps of network bandwidth and up to 50 percent higher packet-processing performance than previous-generation C6gn instances. C7gn instances are ideal for network-intensive workloads, including the following:

  • Network virtual appliance workloads

  • Data-intensive workloads, such as data analytics

  • CPU-based artificial intelligence and machine learning (AI/ML) inference workloads

For more information, see Amazon EC2 C7gn instances.

C7i instances

C7i instances are ideal for running compute-intensive workloads, such as batch processing, machine learning, high-end gaming, ad serving, and video encoding.

For more information, see Amazon EC2 C7i instances.

Hardware specifications

The following is a summary of the hardware specifications for compute optimized instances. A virtual central processing unit (vCPU) represents a portion of the physical CPU assigned to a virtual machine (VM). For x86 instances, there are two vCPUs per core. For Graviton instances, there is one vCPU per core.

Instance type Default vCPUs Memory (GiB)
c1.medium 2 1.70
c1.xlarge 8 7.00
c3.large 2 3.75
c3.xlarge 4 7.50
c3.2xlarge 8 15.00
c3.4xlarge 16 30.00
c3.8xlarge 32 60.00
c4.large 2 3.75
c4.xlarge 4 7.50
c4.2xlarge 8 15.00
c4.4xlarge 16 30.00
c4.8xlarge 36 60.00
c5.large 2 4.00
c5.xlarge 4 8.00
c5.2xlarge 8 16.00
c5.4xlarge 16 32.00
c5.9xlarge 36 72.00
c5.12xlarge 48 96.00
c5.18xlarge 72 144.00
c5.24xlarge 96 192.00
c5.metal 96 192.00
c5a.large 2 4.00
c5a.xlarge 4 8.00
c5a.2xlarge 8 16.00
c5a.4xlarge 16 32.00
c5a.8xlarge 32 64.00
c5a.12xlarge 48 96.00
c5a.16xlarge 64 128.00
c5a.24xlarge 96 192.00
c5ad.large 2 4.00
c5ad.xlarge 4 8.00
c5ad.2xlarge 8 16.00
c5ad.4xlarge 16 32.00
c5ad.8xlarge 32 64.00
c5ad.12xlarge 48 96.00
c5ad.16xlarge 64 128.00
c5ad.24xlarge 96 192.00
c5d.large 2 4.00
c5d.xlarge 4 8.00
c5d.2xlarge 8 16.00
c5d.4xlarge 16 32.00
c5d.9xlarge 36 72.00
c5d.12xlarge 48 96.00
c5d.18xlarge 72 144.00
c5d.24xlarge 96 192.00
c5d.metal 96 192.00
c5n.large 2 5.25
c5n.xlarge 4 10.50
c5n.2xlarge 8 21.00
c5n.4xlarge 16 42.00
c5n.9xlarge 36 96.00
c5n.18xlarge 72 192.00
c5n.metal 72 192.00
c6a.large 2 4.00
c6a.xlarge 4 8.00
c6a.2xlarge 8 16.00
c6a.4xlarge 16 32.00
c6a.8xlarge 32 64.00
c6a.12xlarge 48 96.00
c6a.16xlarge 64 128.00
c6a.24xlarge 96 192.00
c6a.32xlarge 128 256.00
c6a.48xlarge 192 384.00
c6a.metal 192 384.00
c6g.medium 1 2.00
c6g.large 2 4.00
c6g.xlarge 4 8.00
c6g.2xlarge 8 16.00
c6g.4xlarge 16 32.00
c6g.8xlarge 32 64.00
c6g.12xlarge 48 96.00
c6g.16xlarge 64 128.00
c6g.metal 64 128.00
c6gd.medium 1 2.00
c6gd.large 2 4.00
c6gd.xlarge 4 8.00
c6gd.2xlarge 8 16.00
c6gd.4xlarge 16 32.00
c6gd.8xlarge 32 64.00
c6gd.12xlarge 48 96.00
c6gd.16xlarge 64 128.00
c6gd.metal 64 128.00
c6gn.medium 1 2.00
c6gn.large 2 4.00
c6gn.xlarge 4 8.00
c6gn.2xlarge 8 16.00
c6gn.4xlarge 16 32.00
c6gn.8xlarge 32 64.00
c6gn.12xlarge 48 96.00
c6gn.16xlarge 64 128.00
c6i.large 2 4.00
c6i.xlarge 4 8.00
c6i.2xlarge 8 16.00
c6i.4xlarge 16 32.00
c6i.8xlarge 32 64.00
c6i.12xlarge 48 96.00
c6i.16xlarge 64 128.00
c6i.24xlarge 96 192.00
c6i.32xlarge 128 256.00
c6i.metal 128 256.00
c6id.large 2 4.00
c6id.xlarge 4 8.00
c6id.2xlarge 8 16.00
c6id.4xlarge 16 32.00
c6id.8xlarge 32 64.00
c6id.12xlarge 48 96.00
c6id.16xlarge 64 128.00
c6id.24xlarge 96 192.00
c6id.32xlarge 128 256.00
c6id.metal 128 256.00
c6in.large 2 4.00
c6in.xlarge 4 8.00
c6in.2xlarge 8 16.00
c6in.4xlarge 16 32.00
c6in.8xlarge 32 64.00
c6in.12xlarge 48 96.00
c6in.16xlarge 64 128.00
c6in.24xlarge 96 192.00
c6in.32xlarge 128 256.00
c6in.metal 128 256.00
c7a.medium 1 2.00
c7a.large 2 4.00
c7a.xlarge 4 8.00
c7a.2xlarge 8 16.00
c7a.4xlarge 16 32.00
c7a.8xlarge 32 64.00
c7a.12xlarge 48 96.00
c7a.16xlarge 64 128.00
c7a.24xlarge 96 192.00
c7a.32xlarge 128 256.00
c7a.48xlarge 192 384.00
c7a.metal-48xl 192 384.00
c7g.medium 1 2.00
c7g.large 2 4.00
c7g.xlarge 4 8.00
c7g.2xlarge 8 16.00
c7g.4xlarge 16 32.00
c7g.8xlarge 32 64.00
c7g.12xlarge 48 96.00
c7g.16xlarge 64 128.00
c7g.metal 64 128.00
c7gd.medium 1 2.00
c7gd.large 2 4.00
c7gd.xlarge 4 8.00
c7gd.2xlarge 8 16.00
c7gd.4xlarge 16 32.00
c7gd.8xlarge 32 64.00
c7gd.12xlarge 48 96.00
c7gd.16xlarge 64 128.00
c7gd.metal 64 128.00
c7gn.medium 1 2.00
c7gn.large 2 4.00
c7gn.xlarge 4 8.00
c7gn.2xlarge 8 16.00
c7gn.4xlarge 16 32.00
c7gn.8xlarge 32 64.00
c7gn.12xlarge 48 96.00
c7gn.16xlarge 64 128.00
c7i.large 2 4.00
c7i.xlarge 4 8.00
c7i.2xlarge 8 16.00
c7i.4xlarge 16 32.00
c7i.8xlarge 32 64.00
c7i.12xlarge 48 96.00
c7i.16xlarge 64 128.00
c7i.24xlarge 96 192.00
c7i.48xlarge 192 384.00
c7i.metal-24xl 96 192.00
c7i.metal-48xl 192 384.00
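
The vCPU and memory values in the preceding table can also be retrieved programmatically with the EC2 DescribeInstanceTypes API. The following Python (boto3) sketch is illustrative only; it assumes that boto3 is installed and that credentials and a default Region are already configured.

    import boto3

    # Look up vCPU, core, and memory figures for a few compute optimized
    # instance types with DescribeInstanceTypes.
    ec2 = boto3.client("ec2")

    response = ec2.describe_instance_types(
        InstanceTypes=["c5.large", "c6g.xlarge", "c7i.48xlarge"]
    )

    for itype in response["InstanceTypes"]:
        name = itype["InstanceType"]
        vcpu = itype["VCpuInfo"]
        mem_gib = itype["MemoryInfo"]["SizeInMiB"] / 1024
        print(
            f"{name}: {vcpu['DefaultVCpus']} vCPUs "
            f"({vcpu['DefaultCores']} cores x "
            f"{vcpu['DefaultThreadsPerCore']} threads), {mem_gib:.2f} GiB"
        )

For Graviton instance types, DefaultThreadsPerCore is 1, which reflects the one vCPU per core noted above.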

The compute optimized instances use the following processors.

Amazon Graviton processors
  • Amazon Graviton2: C6g, C6gd, C6gn

  • Amazon Graviton3: C7g, C7gd

  • Amazon Graviton3E: C7gn

AMD processors
  • 2nd generation AMD EPYC processors (AMD EPYC 7R32): C5a, C5ad

  • 3rd generation AMD EPYC processors (AMD EPYC 7R13): C6a

  • 4th generation AMD EPYC processors (AMD EPYC 9R14): C7a

Intel processors
  • Intel Xeon processors (Haswell E5-2666 v3): C4

  • Intel Xeon Scalable processors (Skylake 8124): C5n

  • Intel Xeon Scalable processors (Skylake 8124M or Cascade Lake 8223CL): Smaller C5 and C5d

  • 2nd generation Intel Xeon Scalable processors (Cascade Lake 8275CL): Larger C5 and C5d

  • 3rd generation Intel Xeon Scalable processors (Ice Lake 8375C): C6i, C6id

  • 4th generation Intel Xeon Scalable processors (Sapphire Rapids 8488C): C7i

For detailed instance type specifications, see the Amazon EC2 Instance Types Guide. For pricing information, see Amazon EC2 Instance Types.

Instance performance

EBS-optimized instances enable you to get consistently high performance for your EBS volumes by eliminating contention between Amazon EBS I/O and other network traffic from your instance. Some compute optimized instances are EBS-optimized by default at no additional cost. For more information, see Amazon EBS–optimized instances.
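
As a minimal sketch of how to work with this setting, the following Python (boto3) example checks whether an instance type is EBS-optimized by default and, for types where optimization is supported but not the default, enables it on a stopped instance. The instance ID is a placeholder.

    import boto3

    ec2 = boto3.client("ec2")

    # 'default' means always EBS-optimized at no extra cost; 'supported'
    # means it can be enabled; 'unsupported' means it cannot.
    info = ec2.describe_instance_types(InstanceTypes=["c5.large"])
    support = info["InstanceTypes"][0]["EbsInfo"]["EbsOptimizedSupport"]
    print(f"c5.large EBS optimization: {support}")

    if support == "supported":
        # The instance must be stopped before changing this attribute.
        ec2.modify_instance_attribute(
            InstanceId="i-0123456789abcdef0",  # placeholder
            EbsOptimized={"Value": True},
        )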

Some compute optimized instance types provide the ability to control processor C-states and P-states on Linux. C-states control the sleep levels that a core can enter when it is inactive, while P-states control the desired performance (in CPU frequency) from a core. For more information, see Processor state control for your EC2 instance.
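
On instance types that expose processor state control, the available C-states and the active cpufreq (P-state) governor can be inspected through sysfs. The following Python sketch assumes a Linux instance; the paths may be absent on instance types that do not support this feature.

    from pathlib import Path

    cpu0 = Path("/sys/devices/system/cpu/cpu0")

    # C-states: idle states the core can enter, with their exit latencies.
    for state in sorted((cpu0 / "cpuidle").glob("state*")):
        name = (state / "name").read_text().strip()
        latency = (state / "latency").read_text().strip()
        print(f"{state.name}: {name} (exit latency {latency} us)")

    # P-states: the cpufreq governor that controls the requested frequency.
    governor = (cpu0 / "cpufreq" / "scaling_governor").read_text().strip()
    print(f"cpufreq governor: {governor}")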

Network performance

You can enable enhanced networking on supported instance types to provide lower latencies, lower network jitter, and higher packet-per-second (PPS) performance. Most applications do not consistently need a high level of network performance, but can benefit from access to increased bandwidth when they send or receive data. For more information, see Enhanced networking on Linux.
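
As an illustrative sketch, the following Python (boto3) example checks whether an instance type supports ENA and EFA, and whether ENA is enabled on a specific instance. The instance ID is a placeholder.

    import boto3

    ec2 = boto3.client("ec2")

    # Enhanced networking capabilities of the instance type itself.
    resp = ec2.describe_instance_types(InstanceTypes=["c6gn.16xlarge"])
    net = resp["InstanceTypes"][0]["NetworkInfo"]
    print("ENA support:", net["EnaSupport"])          # 'supported' or 'required'
    print("EFA supported:", net.get("EfaSupported"))  # True or False
    print("Network performance:", net["NetworkPerformance"])

    # Whether ENA is enabled on a specific instance (placeholder instance ID).
    attr = ec2.describe_instance_attribute(
        InstanceId="i-0123456789abcdef0", Attribute="enaSupport"
    )
    print("ENA enabled on instance:", attr["EnaSupport"].get("Value"))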

The following is a summary of network performance for compute optimized instances that support enhanced networking.

Note

Instance types that show network performance of "Up to" a specified bandwidth have a baseline bandwidth and can use a network I/O credit mechanism to burst beyond their baseline bandwidth on a best effort basis. For more information, see instance network bandwidth.

Instance type Network performance Enhanced networking features
c1.medium Moderate Not supported
c1.xlarge High Not supported
c3.large Moderate Intel 82599 VF
c3.xlarge Moderate Intel 82599 VF
c3.2xlarge High Intel 82599 VF
c3.4xlarge High Intel 82599 VF
c3.8xlarge 10 Gigabit Intel 82599 VF
c4.large Moderate Intel 82599 VF
c4.xlarge High Intel 82599 VF
c4.2xlarge High Intel 82599 VF
c4.4xlarge High Intel 82599 VF
c4.8xlarge 10 Gigabit Intel 82599 VF
c5.large Up to 10 Gigabit ENA
c5.xlarge Up to 10 Gigabit ENA
c5.2xlarge Up to 10 Gigabit ENA
c5.4xlarge Up to 10 Gigabit ENA
c5.9xlarge 12 Gigabit ENA
c5.12xlarge 12 Gigabit ENA
c5.18xlarge 25 Gigabit ENA
c5.24xlarge 25 Gigabit ENA
c5.metal 25 Gigabit ENA
c5a.large Up to 10 Gigabit ENA
c5a.xlarge Up to 10 Gigabit ENA
c5a.2xlarge Up to 10 Gigabit ENA
c5a.4xlarge Up to 10 Gigabit ENA
c5a.8xlarge 10 Gigabit ENA
c5a.12xlarge 12 Gigabit ENA
c5a.16xlarge 20 Gigabit ENA
c5a.24xlarge 20 Gigabit ENA
c5ad.large Up to 10 Gigabit ENA
c5ad.xlarge Up to 10 Gigabit ENA
c5ad.2xlarge Up to 10 Gigabit ENA
c5ad.4xlarge Up to 10 Gigabit ENA
c5ad.8xlarge 10 Gigabit ENA
c5ad.12xlarge 12 Gigabit ENA
c5ad.16xlarge 20 Gigabit ENA
c5ad.24xlarge 20 Gigabit ENA
c5d.large Up to 10 Gigabit ENA
c5d.xlarge Up to 10 Gigabit ENA
c5d.2xlarge Up to 10 Gigabit ENA
c5d.4xlarge Up to 10 Gigabit ENA
c5d.9xlarge 12 Gigabit ENA
c5d.12xlarge 12 Gigabit ENA
c5d.18xlarge 25 Gigabit ENA
c5d.24xlarge 25 Gigabit ENA
c5d.metal 25 Gigabit ENA
c5n.large Up to 25 Gigabit ENA
c5n.xlarge Up to 25 Gigabit ENA
c5n.2xlarge Up to 25 Gigabit ENA
c5n.4xlarge Up to 25 Gigabit ENA
c5n.9xlarge 50 Gigabit ENA | EFA
c5n.18xlarge 100 Gigabit ENA | EFA
c5n.metal 100 Gigabit ENA | EFA
c6a.large Up to 12.5 Gigabit ENA
c6a.xlarge Up to 12.5 Gigabit ENA
c6a.2xlarge Up to 12.5 Gigabit ENA
c6a.4xlarge Up to 12.5 Gigabit ENA
c6a.8xlarge 12.5 Gigabit ENA
c6a.12xlarge 18.75 Gigabit ENA
c6a.16xlarge 25 Gigabit ENA
c6a.24xlarge 37.5 Gigabit ENA
c6a.32xlarge 50 Gigabit ENA
c6a.48xlarge 50 Gigabit ENA | EFA
c6a.metal 50 Gigabit ENA | EFA
c6g.medium Up to 10 Gigabit ENA
c6g.large Up to 10 Gigabit ENA
c6g.xlarge Up to 10 Gigabit ENA
c6g.2xlarge Up to 10 Gigabit ENA
c6g.4xlarge Up to 10 Gigabit ENA
c6g.8xlarge 12 Gigabit ENA
c6g.12xlarge 20 Gigabit ENA
c6g.16xlarge 25 Gigabit ENA
c6g.metal 25 Gigabit ENA
c6gd.medium Up to 10 Gigabit ENA
c6gd.large Up to 10 Gigabit ENA
c6gd.xlarge Up to 10 Gigabit ENA
c6gd.2xlarge Up to 10 Gigabit ENA
c6gd.4xlarge Up to 10 Gigabit ENA
c6gd.8xlarge 12 Gigabit ENA
c6gd.12xlarge 20 Gigabit ENA
c6gd.16xlarge 25 Gigabit ENA
c6gd.metal 25 Gigabit ENA
c6gn.medium Up to 16 Gigabit ENA
c6gn.large Up to 25 Gigabit ENA
c6gn.xlarge Up to 25 Gigabit ENA
c6gn.2xlarge Up to 25 Gigabit ENA
c6gn.4xlarge 25 Gigabit ENA
c6gn.8xlarge 50 Gigabit ENA
c6gn.12xlarge 75 Gigabit ENA
c6gn.16xlarge 100 Gigabit ENA | EFA
c6i.large Up to 12.5 Gigabit ENA
c6i.xlarge Up to 12.5 Gigabit ENA
c6i.2xlarge Up to 12.5 Gigabit ENA
c6i.4xlarge Up to 12.5 Gigabit ENA
c6i.8xlarge 12.5 Gigabit ENA
c6i.12xlarge 18.75 Gigabit ENA
c6i.16xlarge 25 Gigabit ENA
c6i.24xlarge 37.5 Gigabit ENA
c6i.32xlarge 50 Gigabit ENA | EFA
c6i.metal 50 Gigabit ENA | EFA
c6id.large Up to 12.5 Gigabit ENA
c6id.xlarge Up to 12.5 Gigabit ENA
c6id.2xlarge Up to 12.5 Gigabit ENA
c6id.4xlarge Up to 12.5 Gigabit ENA
c6id.8xlarge 12.5 Gigabit ENA
c6id.12xlarge 18.75 Gigabit ENA
c6id.16xlarge 25 Gigabit ENA
c6id.24xlarge 37.5 Gigabit ENA
c6id.32xlarge 50 Gigabit ENA | EFA
c6id.metal 50 Gigabit ENA | EFA
c6in.large Up to 25 Gigabit ENA
c6in.xlarge Up to 30 Gigabit ENA
c6in.2xlarge Up to 40 Gigabit ENA
c6in.4xlarge Up to 50 Gigabit ENA
c6in.8xlarge 50 Gigabit ENA
c6in.12xlarge 75 Gigabit ENA
c6in.16xlarge 100 Gigabit ENA
c6in.24xlarge 150 Gigabit ENA
c6in.32xlarge 200 Gigabit ENA | EFA
c6in.metal 200 Gigabit ENA | EFA
c7a.medium Up to 12.5 Gigabit ENA
c7a.large Up to 12.5 Gigabit ENA
c7a.xlarge Up to 12.5 Gigabit ENA
c7a.2xlarge Up to 12.5 Gigabit ENA
c7a.4xlarge Up to 12.5 Gigabit ENA
c7a.8xlarge 12.5 Gigabit ENA
c7a.12xlarge 18.75 Gigabit ENA
c7a.16xlarge 25 Gigabit ENA
c7a.24xlarge 37.5 Gigabit ENA
c7a.32xlarge 50 Gigabit ENA
c7a.48xlarge 50 Gigabit ENA | EFA
c7a.metal-48xl 50 Gigabit ENA | EFA
c7g.medium Up to 12.5 Gigabit ENA
c7g.large Up to 12.5 Gigabit ENA
c7g.xlarge Up to 12.5 Gigabit ENA
c7g.2xlarge Up to 15 Gigabit ENA
c7g.4xlarge Up to 15 Gigabit ENA
c7g.8xlarge 15 Gigabit ENA
c7g.12xlarge 22.5 Gigabit ENA
c7g.16xlarge 30 Gigabit ENA | EFA
c7g.metal 30 Gigabit ENA | EFA
c7gd.medium Up to 12.5 Gigabit ENA
c7gd.large Up to 12.5 Gigabit ENA
c7gd.xlarge Up to 12.5 Gigabit ENA
c7gd.2xlarge Up to 15 Gigabit ENA
c7gd.4xlarge Up to 15 Gigabit ENA
c7gd.8xlarge 15 Gigabit ENA
c7gd.12xlarge 22.5 Gigabit ENA
c7gd.16xlarge 30 Gigabit ENA | EFA
c7gd.metal 30 Gigabit ENA | EFA
c7gn.medium Up to 25 Gigabit ENA
c7gn.large Up to 30 Gigabit ENA
c7gn.xlarge Up to 40 Gigabit ENA
c7gn.2xlarge Up to 50 Gigabit ENA
c7gn.4xlarge 50 Gigabit ENA
c7gn.8xlarge 100 Gigabit ENA
c7gn.12xlarge 150 Gigabit ENA
c7gn.16xlarge 200 Gigabit ENA | EFA
c7i.large Up to 12.5 Gigabit ENA
c7i.xlarge Up to 12.5 Gigabit ENA
c7i.2xlarge Up to 12.5 Gigabit ENA
c7i.4xlarge Up to 12.5 Gigabit ENA
c7i.8xlarge 12.5 Gigabit ENA
c7i.12xlarge 18.75 Gigabit ENA
c7i.16xlarge 25 Gigabit ENA
c7i.24xlarge 37.5 Gigabit ENA
c7i.48xlarge 50 Gigabit ENA | EFA
c7i.metal-24xl 37.5 Gigabit ENA
c7i.metal-48xl 50 Gigabit ENA | EFA

For 32xlarge and metal instance types that support 200 Gbps, at least two ENIs, each attached to a different network card, are required on the instance to achieve 200 Gbps throughput. Each ENI attached to a network card can achieve a maximum of 170 Gbps.
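
One way to satisfy this requirement is to attach both network interfaces at launch, each bound to a different network card. The following Python (boto3) sketch is illustrative; the AMI, subnet, and security group IDs are placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    # Launch a c6in.32xlarge with one ENI on each of its two network cards
    # so that aggregate throughput can reach 200 Gbps.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder
        InstanceType="c6in.32xlarge",
        MinCount=1,
        MaxCount=1,
        NetworkInterfaces=[
            {
                "DeviceIndex": 0,
                "NetworkCardIndex": 0,
                "SubnetId": "subnet-0123456789abcdef0",  # placeholder
                "Groups": ["sg-0123456789abcdef0"],      # placeholder
            },
            {
                "DeviceIndex": 1,
                "NetworkCardIndex": 1,
                "SubnetId": "subnet-0123456789abcdef0",
                "Groups": ["sg-0123456789abcdef0"],
            },
        ],
    )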

The following table shows the baseline and burst bandwidth for instance types that use the network I/O credit mechanism to burst beyond their baseline bandwidth.

Instance type Baseline bandwidth (Gbps) Burst bandwidth (Gbps)
c5.large 0.75 10.0
c5.xlarge 1.25 10.0
c5.2xlarge 2.5 10.0
c5.4xlarge 5.0 10.0
c5a.large 0.75 10.0
c5a.xlarge 1.25 10.0
c5a.2xlarge 2.5 10.0
c5a.4xlarge 5.0 10.0
c5ad.large 0.75 10.0
c5ad.xlarge 1.25 10.0
c5ad.2xlarge 2.5 10.0
c5ad.4xlarge 5.0 10.0
c5d.large 0.75 10.0
c5d.xlarge 1.25 10.0
c5d.2xlarge 2.5 10.0
c5d.4xlarge 5.0 10.0
c5n.large 3.0 25.0
c5n.xlarge 5.0 25.0
c5n.2xlarge 10.0 25.0
c5n.4xlarge 15.0 25.0
c6a.large 0.781 12.5
c6a.xlarge 1.562 12.5
c6a.2xlarge 3.125 12.5
c6a.4xlarge 6.25 12.5
c6g.medium 0.5 10.0
c6g.large 0.75 10.0
c6g.xlarge 1.25 10.0
c6g.2xlarge 2.5 10.0
c6g.4xlarge 5.0 10.0
c6gd.medium 0.5 10.0
c6gd.large 0.75 10.0
c6gd.xlarge 1.25 10.0
c6gd.2xlarge 2.5 10.0
c6gd.4xlarge 5.0 10.0
c6gn.medium 1.6 16.0
c6gn.large 3.0 25.0
c6gn.xlarge 6.3 25.0
c6gn.2xlarge 12.5 25.0
c6i.large 0.781 12.5
c6i.xlarge 1.562 12.5
c6i.2xlarge 3.125 12.5
c6i.4xlarge 6.25 12.5
c6id.large 0.781 12.5
c6id.xlarge 1.562 12.5
c6id.2xlarge 3.125 12.5
c6id.4xlarge 6.25 12.5
c6in.large 3.125 25.0
c6in.xlarge 6.25 30.0
c6in.2xlarge 12.5 40.0
c6in.4xlarge 25.0 50.0
c7a.medium 0.39 12.5
c7a.large 0.781 12.5
c7a.xlarge 1.562 12.5
c7a.2xlarge 3.125 12.5
c7a.4xlarge 6.25 12.5
c7g.medium 0.52 12.5
c7g.large 0.937 12.5
c7g.xlarge 1.876 12.5
c7g.2xlarge 3.75 15.0
c7g.4xlarge 7.5 15.0
c7gd.medium 0.52 12.5
c7gd.large 0.937 12.5
c7gd.xlarge 1.876 12.5
c7gd.2xlarge 3.75 15.0
c7gd.4xlarge 7.5 15.0
c7gn.medium 3.125 25.0
c7gn.large 6.25 30.0
c7gn.xlarge 12.5 40.0
c7gn.2xlarge 25.0 50.0
c7i.large 0.781 12.5
c7i.xlarge 1.562 12.5
c7i.2xlarge 3.125 12.5
c7i.4xlarge 6.25 12.5
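
The baseline and burst figures can also be read programmatically from the DescribeInstanceTypes response. The following Python (boto3) sketch assumes a recent API version that includes per-network-card bandwidth fields.

    import boto3

    ec2 = boto3.client("ec2")

    resp = ec2.describe_instance_types(InstanceTypes=["c7gn.medium"])
    for card in resp["InstanceTypes"][0]["NetworkInfo"]["NetworkCards"]:
        print(
            f"network card {card['NetworkCardIndex']}: "
            f"baseline {card.get('BaselineBandwidthInGbps')} Gbps, "
            f"peak {card.get('PeakBandwidthInGbps')} Gbps"
        )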

Amazon EBS I/O performance

Amazon EBS optimized instances use an optimized configuration stack and provide additional, dedicated capacity for Amazon EBS I/O. This optimization provides the best performance for your Amazon EBS volumes by minimizing contention between Amazon EBS I/O and other traffic from your instance.

For more information, see Amazon EBS–optimized instances.

SSD-based instance store volume I/O performance

If you use a Linux AMI with kernel version 4.4 or later and use all the SSD-based instance store volumes available to your instance, you can get up to the IOPS (4,096 byte block size) performance listed in the following table (at queue depth saturation). Otherwise, you get lower IOPS performance.

Instance Size 100% Random Read IOPS Write IOPS
c5ad.large 16283 7105
c5ad.xlarge 32566 14211
c5ad.2xlarge 65132 28421
c5ad.4xlarge 130262 56842
c5ad.8xlarge 260526 113684
c5ad.12xlarge 412500 180000
c5ad.16xlarge 521052 227368
c5ad.24xlarge 825000 360000
c5d.large 20000 9000
c5d.xlarge 40000 18000
c5d.2xlarge 80000 37000
c5d.4xlarge 175000 75000
c5d.9xlarge 350000 170000
c5d.12xlarge 700000 340000
c5d.18xlarge 700000 340000
c5d.24xlarge 1400000 680000
c5d.metal 1400000 680000
c6gd.medium 13438 5625
c6gd.large 26875 11250
c6gd.xlarge 53750 22500
c6gd.2xlarge 107500 45000
c6gd.4xlarge 215000 90000
c6gd.8xlarge 430000 180000
c6gd.12xlarge 645000 270000
c6gd.16xlarge 860000 360000
c6gd.metal 860000 360000
c6id.large 33542 16771
c6id.xlarge 67083 33542
c6id.2xlarge 134167 67084
c6id.4xlarge 268333 134167
c6id.8xlarge 536666 268334
c6id.12xlarge 804998 402500
c6id.16xlarge 1073332 536668
c6id.24xlarge 1609996 805000
c6id.32xlarge 2146664 1073336
c6id.metal 2146664 1073336
c7gd.medium 16771 8385
c7gd.large 33542 16771
c7gd.xlarge 67083 33542
c7gd.2xlarge 134167 67084
c7gd.4xlarge 268333 134167
c7gd.8xlarge 536666 268334
c7gd.12xlarge 804998 402500
c7gd.16xlarge 1073332 536668
c7gd.metal 1073332 536668
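
One way to approach the figures in the preceding table is a 4,096-byte random read test at high queue depth against the raw instance store device. The following Python sketch drives such a workload with fio; the device path and the availability of fio on the AMI are assumptions, and the test should only be run against a device whose data you can afford to lose.

    import subprocess

    # 4 KiB random reads, direct I/O, high queue depth across several jobs
    # to reach queue depth saturation on the instance store volume.
    subprocess.run(
        [
            "fio",
            "--name=instance-store-randread",
            "--filename=/dev/nvme1n1",   # instance store device (placeholder)
            "--rw=randread",
            "--bs=4k",
            "--iodepth=64",
            "--numjobs=8",
            "--direct=1",
            "--ioengine=libaio",
            "--runtime=60",
            "--time_based",
            "--group_reporting",
        ],
        check=True,
    )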

As you fill the SSD-based instance store volumes for your instance, the number of write IOPS that you can achieve decreases. This is due to the extra work the SSD controller must do to find available space, rewrite existing data, and erase unused space so that it can be rewritten. This process of garbage collection results in internal write amplification to the SSD, expressed as the ratio of SSD write operations to user write operations. This decrease in performance is even larger if the write operations are not in multiples of 4,096 bytes or not aligned to a 4,096-byte boundary. If you write a smaller amount of bytes or bytes that are not aligned, the SSD controller must read the surrounding data and store the result in a new location. This pattern results in significantly increased write amplification, increased latency, and dramatically reduced I/O performance.
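
The alignment effect can be illustrated with simple arithmetic. The following Python sketch is a simplified model that only counts page-boundary crossings; real SSD controllers also incur garbage-collection overhead that this does not capture.

    PAGE = 4096  # bytes

    def amplification(request_bytes: int, offset: int) -> float:
        """Pages touched by a write divided by the minimum pages needed."""
        first_page = offset // PAGE
        last_page = (offset + request_bytes - 1) // PAGE
        pages_touched = last_page - first_page + 1
        minimum_pages = -(-request_bytes // PAGE)  # ceiling division
        return pages_touched / minimum_pages

    # A 4,096-byte write aligned to a page boundary touches exactly one page.
    print(amplification(4096, offset=0))    # 1.0
    # The same write shifted by 512 bytes straddles two pages.
    print(amplification(4096, offset=512))  # 2.0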

SSD controllers can use several strategies to reduce the impact of write amplification. One such strategy is to reserve space in the SSD instance storage so that the controller can more efficiently manage the space available for write operations. This is called over-provisioning. The SSD-based instance store volumes provided to an instance don't have any space reserved for over-provisioning. To reduce write amplification, we recommend that you leave 10% of the volume unpartitioned so that the SSD controller can use it for over-provisioning. This decreases the storage that you can use, but increases performance even if the disk is close to full capacity.
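
A minimal sketch of this recommendation, assuming a GPT partition table and a placeholder device path, is to create a partition that covers only 90 percent of the device and leave the remainder unpartitioned. This operation is destructive to any existing data on the device.

    import subprocess

    device = "/dev/nvme1n1"  # instance store device (placeholder)

    # Label the device and create a single partition covering 90% of it,
    # leaving the last 10% unpartitioned for SSD over-provisioning.
    subprocess.run(["parted", "-s", device, "mklabel", "gpt"], check=True)
    subprocess.run(
        ["parted", "-s", device, "mkpart", "primary", "0%", "90%"],
        check=True,
    )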

For instance store volumes that support TRIM, you can use the TRIM command to notify the SSD controller whenever you no longer need data that you've written. This provides the controller with more free space, which can reduce write amplification and increase performance. For more information, see Instance store volume TRIM support.
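
A minimal sketch, assuming the volume is mounted at a placeholder path and its filesystem supports discard operations:

    import subprocess

    # Report discarded bytes with -v; schedule this periodically (for example
    # with a systemd timer) rather than running it continuously.
    subprocess.run(["fstrim", "-v", "/mnt/instance-store"], check=True)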

Release notes

  • C4 instances and instances built on the Nitro System require 64-bit EBS-backed HVM AMIs. They have high memory and require a 64-bit operating system to take advantage of that capacity. HVM AMIs provide superior performance in comparison to paravirtual (PV) AMIs on high-memory instance types. In addition, you must use an HVM AMI to take advantage of enhanced networking.

  • C7g instances are powered by the latest generation Amazon Graviton3 processors. They offer up to 25% better performance than the sixth-generation Amazon Graviton2-based C6g instances. C7g instances are the first in the cloud to feature DDR5 memory, which provides 50% higher memory bandwidth compared to DDR4 memory, enabling high-speed access to data in memory.

    • C7g instances limit the aggregate execution rate of high-power instructions, such as floating-point multiply-accumulates, across the cores of an instance. Future instance types focused on other workload segments may not have this restriction.

  • Instances built on the Nitro System have the following requirements:

    • NVMe drivers must be installed

    • Elastic Network Adapter (ENA) drivers must be installed

    The following Linux AMIs meet these requirements:

    • AL2023

    • Amazon Linux 2

    • Amazon Linux AMI 2018.03 and later

    • Ubuntu 14.04 or later with linux-aws kernel

      Note

      Amazon Graviton-based instance types require Ubuntu 18.04 or later with linux-aws kernel

    • Red Hat Enterprise Linux 7.4 or later

    • SUSE Linux Enterprise Server 12 SP2 or later

    • CentOS 7.4.1708 or later

    • FreeBSD 11.1 or later

    • Debian GNU/Linux 9 or later

  • Instances with Amazon Graviton processors have the following requirements:

    • Use an AMI for the 64-bit Arm architecture.

    • Support booting through UEFI with ACPI tables and support ACPI hot-plug of PCI devices.

    The following AMIs meet these requirements:

    • Amazon Linux 2 (64-bit Arm)

    • Ubuntu 16.04 or later (64-bit Arm)

    • Red Hat Enterprise Linux 8.0 or later (64-bit Arm)

    • SUSE Linux Enterprise Server 15 or later (64-bit Arm)

    • Debian 10 or later (64-bit Arm)

  • To get the best performance from your C6i instances, ensure that they have ENA driver version 2.2.9 or later. Using an ENA driver earlier than version 1.2 with these instances causes network interface attachment failures. The following AMIs have a compatible ENA driver (a sketch for checking the installed driver version follows this list).

    • AL2023

    • Amazon Linux 2 with kernel 4.14.186 and later

    • Ubuntu 20.04 with kernel 5.4.0-1025-aws and later

    • Red Hat Enterprise Linux 8.3 with kernel 4.18.0-240.1.1.el8_3.ARCH and later

    • SUSE Linux Enterprise Server 15 SP2 with kernel 5.3.18-24.15.1 and later
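
    The following Python sketch is one way to confirm the loaded ENA driver version on a Linux instance; it assumes the ena kernel module is loaded and reports a three-part numeric version.

      import re
      import subprocess

      out = subprocess.run(
          ["modinfo", "ena"], capture_output=True, text=True, check=True
      ).stdout

      match = re.search(r"^version:\s*(\d+)\.(\d+)\.(\d+)", out, re.MULTILINE)
      if match is None:
          raise SystemExit("could not determine ENA driver version")

      version = tuple(int(part) for part in match.groups())
      print("ENA driver version:", ".".join(match.groups()))
      print("Meets the 2.2.9 requirement:", version >= (2, 2, 9))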

  • The maximum number of Amazon EBS volumes that you can attach to an instance depends on the instance type and instance size. For more information, see Instance volume limits.

  • To get the best performance from your C6gn instances, ensure that they have ENA driver version 2.2.9 or later. Using an ENA driver earlier than version 1.2 with these instances causes network interface attachment failures. The following AMIs have a compatible ENA driver.

    • AL2023

    • Amazon Linux 2 with kernel 4.14.186 and later

    • Ubuntu 20.04 with kernel 5.4.0-1025-aws and later

    • Red Hat Enterprise Linux 8.3 with kernel 4.18.0-240.1.1.el8_3.ARCH and later

    • SUSE Linux Enterprise Server 15 SP2 with kernel 5.3.18-24.15.1 and later

  • To launch C6gn instances with any Linux distribution, use an AMI with the latest version and run an update to get the latest driver. For earlier AMI versions, download the latest driver from GitHub.

  • Launching a bare metal instance boots the underlying server, which includes verifying all hardware and firmware components. This means that it can take 20 minutes from the time the instance enters the running state until it becomes available over the network.

  • Attaching or detaching EBS volumes or secondary network interfaces from a bare metal instance requires PCIe native hotplug support. Amazon Linux 2 and the latest versions of the Amazon Linux AMI support PCIe native hotplug, but earlier versions do not. You must enable the following Linux kernel configuration options:

    CONFIG_HOTPLUG_PCI_PCIE=y
    CONFIG_PCIEASPM=y
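
    The following Python sketch checks whether the running kernel was built with these options; it assumes the distribution ships its kernel configuration at /boot/config-<release>.

      import platform
      from pathlib import Path

      config = Path(f"/boot/config-{platform.release()}").read_text()
      for option in ("CONFIG_HOTPLUG_PCI_PCIE", "CONFIG_PCIEASPM"):
          enabled = f"{option}=y" in config
          print(f"{option}: {'enabled' if enabled else 'not enabled'}")
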
  • Bare metal instances use a PCI-based serial device rather than an I/O port-based serial device. The upstream Linux kernel and the latest Amazon Linux AMIs support this device. Bare metal instances also provide an ACPI SPCR table to enable the system to automatically use the PCI-based serial device. The latest Windows AMIs automatically use the PCI-based serial device.

  • Instances built on the Nitro System should have acpid installed to support clean shutdown through API requests.

  • There is a limit on the total number of instances that you can launch in a Region, and there are additional limits on some instance types. For more information, see How many instances can I run in Amazon EC2? in the Amazon EC2 FAQ.