
Amazon FSx for NetApp ONTAP performance

Following is an overview of Amazon FSx for NetApp ONTAP file system performance, with a discussion of the available performance and throughput options and useful performance tips.

How performance is measured for FSx for ONTAP file systems

File system performance is measured by its latency, throughput, and I/O operations per second (IOPS).

Latency

Amazon FSx for NetApp ONTAP provides sub-millisecond file operation latencies with solid state drive (SSD) storage, and tens of milliseconds of latency for capacity pool storage. In addition, Amazon FSx has two layers of read caching on each file server (NVMe, or non-volatile memory express, drives and an in-memory cache) to provide even lower latencies when you access your most frequently read data.

Throughput and IOPS

Each Amazon FSx file system provides up to tens of GB/s of throughput and millions of IOPS. The specific amount of throughput and IOPS that your workload can drive on your file system depends on the total throughput capacity and storage capacity configuration of your file system, along with the nature of your workload, including the size of the active working set.

SMB Multichannel and NFS nconnect support

With Amazon FSx, you can configure SMB Multichannel to provide multiple connections between ONTAP and clients in a single SMB session. SMB Multichannel uses multiple network connections between the client and server simultaneously to aggregate network bandwidth for maximum utilization. For information on using the NetApp ONTAP CLI to configure SMB Multichannel, see Configuring SMB Multichannel for performance and redundancy.
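If you manage your SVMs with the ONTAP CLI, SMB Multichannel can be enabled per SVM. The following sketch assumes ONTAP 9.4 or later and uses a placeholder vserver name (svm1); confirm the option name against the NetApp documentation for your ONTAP version:

```
::> vserver cifs options modify -vserver svm1 -is-multichannel-enabled true
```

Windows clients with multiple NICs, or NICs that support RSS, then establish multiple channels automatically within a single SMB session.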

NFS clients can use the nconnect mount option to associate multiple TCP connections (up to 16) with a single NFS mount. Such a client multiplexes file operations across the TCP connections in round-robin fashion, obtaining higher throughput from the available network bandwidth. NFSv3 and NFSv4.1 and later support nconnect. Note that Amazon EC2 instances limit each individual network flow to 5 Gbps (full duplex), as described in Amazon EC2 instance network bandwidth; you can overcome this limit by spreading traffic across multiple network flows with nconnect or SMB Multichannel. See your NFS client documentation to confirm whether your client version supports nconnect. For more information about NetApp ONTAP support for nconnect, see ONTAP support for NFSv4.1.
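For example, a Linux client whose kernel supports nconnect (5.3 and later) might mount a volume as follows; the SVM DNS name, junction path, and mount point below are placeholders:

```
sudo mount -t nfs -o nfsvers=4.1,nconnect=16 \
    svm-dns-name:/vol1 /mnt/fsx
```

With nconnect=16, the client opens 16 TCP connections for the mount and distributes file operations across them.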

Performance details

To understand the Amazon FSx for NetApp ONTAP performance model in detail, you can examine the architectural components of an Amazon FSx file system. Your client compute instances, whether they reside in Amazon Web Services or on premises, access your file system through one or more elastic network interfaces (ENIs). These network interfaces reside in the Amazon VPC that you associate with your file system. Behind each file system ENI is a NetApp ONTAP file server that serves data over the network to the clients accessing the file system. Amazon FSx provides a fast in-memory cache and an NVMe cache on each file server to enhance performance for the most frequently accessed data. Attached to each file server are the SSD disks hosting your file system data.

These components are illustrated in the following diagram.

[Diagram: FSx for ONTAP architecture]

Corresponding to these architectural components (network interface, in-memory cache, NVMe cache, and storage volumes) are the primary performance characteristics of an Amazon FSx for NetApp ONTAP file system, which determine its overall throughput and IOPS performance.

  • Network I/O performance: throughput/IOPS of requests between the clients and the file server (in aggregate)

  • In-memory and NVMe cache size on the file server: size of active working set that can be accommodated for caching

  • Disk I/O performance: throughput/IOPS of requests between the file server and the storage disks

Two factors determine these performance characteristics for your file system: the total amount of SSD IOPS and the throughput capacity that you configure for it. The first two characteristics (network I/O performance and in-memory and NVMe cache size) are determined solely by throughput capacity, while the third (disk I/O performance) is determined by a combination of throughput capacity and SSD IOPS.

File-based workloads are typically spiky, characterized by short, intense periods of high I/O with plenty of idle time between bursts. To support spiky workloads, in addition to the baseline speeds that a file system can sustain 24/7, Amazon FSx provides the ability to burst to higher speeds for periods of time for both network I/O and disk I/O operations. Amazon FSx uses a network I/O credit mechanism to allocate throughput and IOPS based on average utilization: file systems accrue credits when their throughput and IOPS usage is below their baseline limits, and spend those credits when they perform I/O above the baseline.
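This credit mechanism behaves like a token bucket. The following Python sketch is purely illustrative: the bucket size and rates are hypothetical parameters, and Amazon FSx does not publish its exact credit accounting.

```python
# Illustrative token-bucket model of burst credits. All parameter values
# here are hypothetical; they are not published Amazon FSx internals.
class BurstCredits:
    def __init__(self, baseline_mbps, burst_mbps, max_credit_mb):
        self.baseline = baseline_mbps      # rate sustainable 24/7
        self.burst = burst_mbps            # peak rate while credits last
        self.credits = max_credit_mb       # start with a full bucket
        self.max_credits = max_credit_mb

    def tick(self, demand_mbps, seconds=1):
        """Serve demand for one interval; accrue or spend credits."""
        # Burst is available only while the bucket is non-empty.
        rate = min(demand_mbps,
                   self.burst if self.credits > 0 else self.baseline)
        # Running below baseline accrues credits; above baseline spends them.
        delta = (self.baseline - rate) * seconds
        self.credits = max(0, min(self.max_credits, self.credits + delta))
        return rate

bucket = BurstCredits(baseline_mbps=625, burst_mbps=1250, max_credit_mb=10_000)
print(bucket.tick(1250))   # -> 1250 (bursting while credits remain)
print(bucket.credits)      # -> 9375 (spent 625 MB of credit in one second)
```

Running below the baseline (for example, `bucket.tick(300)`) refills the bucket, which is what lets idle periods fund later bursts.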

Write operations consume twice as much network bandwidth as read operations: each write must also be replicated to the secondary file server, so a single write operation results in twice the network throughput.
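As a rough model (ignoring protocol overhead), this means the client-visible write throughput you can expect from a given network throughput limit is about half that limit. The function name below is illustrative:

```python
def effective_write_throughput(network_limit_mbps):
    """Because each write is replicated to the secondary file server,
    a write consumes roughly 2x its size in network bandwidth, so
    client-visible write throughput is about half the network limit.
    (Simplified model; ignores protocol overhead.)"""
    return network_limit_mbps / 2

# With a 1,250 MBps burst network limit, writes top out near:
print(effective_write_throughput(1250))  # -> 625.0
```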

Impact of deployment type on performance

You can create two types of file systems with FSx for ONTAP. File systems with a single high-availability (HA) pair of file servers are called scale-up file systems. File systems with multiple HA pairs are called scale-out file systems. For more information, see High-availability (HA) pairs.

FSx for ONTAP Multi-AZ and Single-AZ file systems provide consistent sub-millisecond file operation latencies with SSD storage and tens of milliseconds of latency with capacity pool storage. Additionally, file systems that meet the following requirements provide an NVMe read cache to reduce read latencies and increase IOPS for frequently-read data:

  • Multi-AZ file systems

  • Single-AZ scale-up file systems created after November 28, 2022 with at least 2 GBps of throughput capacity

The following tables show the maximum throughput that file systems can drive from SSD storage, which depends on factors such as the deployment type, the number of high-availability (HA) pairs, and the Amazon Web Services Region.

Scale-up

These performance specifications apply to scale-up file systems.

Maximum throughput from SSD storage per HA pair for scale-up file systems

             US East (Ohio), US East (N. Virginia),    All other Amazon Web Services Regions
             US West (Oregon), and Europe (Ireland)    where FSx for ONTAP is available
             Read (MBps)     Write (MBps)              Read (MBps)     Write (MBps)
Single-AZ    4,096*          1,000                     2,048           750
Multi-AZ     4,096*          1,800                     2,048           1,300
Note

* To provision 4 GBps of throughput capacity, your file system must be configured with a minimum of 5,120 GiB of SSD storage capacity and 160,000 SSD IOPS.

Scale-out

These performance specifications apply to scale-out file systems.

Maximum throughput from SSD storage per HA pair for scale-out file systems

                       Read throughput (MBps)    Write throughput (MBps)
Single-AZ scale-out    6,144*                    1,100
Note

* Per HA pair (up to six). For more information, see High-availability (HA) pairs.

Impact of storage capacity on performance

The maximum disk throughput and IOPS levels that your file system can achieve are the lower of:

  • the disk performance level provided by your file servers, based on the throughput capacity you select for your file system

  • the disk performance level provided by the number of SSD IOPS you provision for your file system

By default, your file system's SSD storage provides up to the following levels of disk throughput and IOPS:

  • Disk throughput (MBps per TiB of storage): 768

  • Disk IOPS (IOPS per TiB of storage): 3,072
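Putting these two rules together, achievable disk performance can be modeled as the lower of the file-server level and the level the SSD storage provides. The helper below is a sketch; the per-TiB constants are the defaults stated above, and the function names are illustrative:

```python
def default_disk_limits(storage_tib):
    """Default disk limits provided by SSD storage capacity:
    768 MBps and 3,072 IOPS per TiB of storage."""
    return 768 * storage_tib, 3072 * storage_tib  # (MBps, IOPS)

def max_disk_throughput(storage_tib, file_server_disk_mbps):
    """Achievable disk throughput is the lower of the file-server
    level and the level provided by the SSD storage."""
    storage_mbps, _ = default_disk_limits(storage_tib)
    return min(file_server_disk_mbps, storage_mbps)

# 2 TiB of SSD storage behind a file server with a 512-MBps disk limit:
print(default_disk_limits(2))       # -> (1536, 6144)
print(max_disk_throughput(2, 512))  # -> 512 (the file server is the bottleneck)
```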

Impact of throughput capacity on performance

Every Amazon FSx file system has a throughput capacity that you configure when the file system is created. Your file system's throughput capacity determines the level of network I/O performance, or the speed at which each of the file servers that are hosting your file system can serve file data over the network to clients accessing it. Higher levels of throughput capacity come with more memory and non-volatile memory express (NVMe) storage for caching data on each file server, and higher levels of disk I/O performance supported by each file server.

You can optionally provision a higher level of SSD IOPS when creating your file system. The maximum level of SSD IOPS that your file system can achieve is still dictated by its throughput capacity, even if you provision additional SSD IOPS.

The following tables show the full set of specifications for throughput capacity, along with baseline and burst levels, and amount of memory for caching on the file server in the corresponding Amazon Web Services Regions.

Single-AZ (scale-up)

These performance specifications apply to Single-AZ scale-up file systems created after November 28, 2022 in the specified Amazon Web Services Regions.

Performance specifications for file systems in the following Amazon Web Services Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland)
FSx throughput    Network throughput (MBps)   Network IOPS                       In-memory      NVMe read     Disk throughput (MBps)   SSD drive IOPS*
capacity (MBps)   Baseline     Burst                                             caching (GB)   caching (GB)  Baseline    Burst        Baseline    Burst
128               188          1,500          Tens of thousands (baseline)       16             -             128         1,250        6,000       40,000
256               375          1,500          Tens of thousands (baseline)       32             -             256         1,250        12,000      40,000
512               750          1,500          Hundreds of thousands (baseline)   64             -             512         1,250        20,000      40,000
1,024             1,500        -              Hundreds of thousands (baseline)   128            -             1,024       1,250        40,000      -
2,048             3,125        -              Hundreds of thousands (baseline)   256            1,900         2,048       -            80,000      -
4,096             6,250        -              Hundreds of thousands (baseline)   512            5,400         4,096       -            160,000     -

A dash (-) indicates that the value does not apply; Single-AZ scale-up file systems provide an NVMe read cache only at 2,048 MBps of throughput capacity and above.
Note

* Your SSD IOPS are only used when you access data that is not cached in your file server's in-memory cache or NVMe cache.

These performance specifications apply to Single-AZ scale-up file systems in all other Amazon Web Services Regions where FSx for ONTAP is available.

FSx throughput    Network throughput (MBps)   Network IOPS                       In-memory      Disk throughput (MBps)   SSD drive IOPS*
capacity (MBps)   Baseline     Burst                                             caching (GB)   Baseline    Burst        Baseline    Burst
128               150          1,250          Tens of thousands (baseline)       16             128         600          6,000       18,750
256               300          1,250          Tens of thousands (baseline)       32             256         600          12,000      18,750
512               625          1,250          Hundreds of thousands (baseline)   64             512         600          18,750      -
1,024             1,500        -              Hundreds of thousands (baseline)   128            1,024       -            40,000      -
2,048             3,125        -              Hundreds of thousands (baseline)   256            2,048       -            80,000      -

A dash (-) indicates that the value does not apply.
Note

* Your SSD IOPS are only used when you access data that is not cached in your file server's in-memory cache or NVMe cache.

Single-AZ (scale-out)

These performance specifications apply to scale-out file systems created after November 27, 2023.

Performance specifications for scale-out file systems
FSx throughput    Network throughput (MBps)   Network IOPS                       In-memory      Disk throughput (MBps)   SSD drive IOPS*
capacity (MBps)   Baseline     Burst                                             caching (GB)   Baseline    Burst        Baseline    Burst
3,072**           6,250        -              Hundreds of thousands (baseline)   128            3,072       -            100,000     -
6,144**           12,500       -              Hundreds of thousands (baseline)   256            6,144       -            200,000     -

A dash (-) indicates that the value does not apply.
Note

* Your SSD IOPS are only used when you access data that is not cached in your file server's in-memory cache or NVMe cache.

** Per HA pair (up to six). For more information, see High-availability (HA) pairs.

Multi-AZ (scale-up)

These performance specifications apply to Multi-AZ scale-up file systems created after November 28, 2022 in the specified Amazon Web Services Regions.

Performance specifications for file systems in the following Amazon Web Services Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland)
FSx throughput    Network throughput (MBps)   Network IOPS                       In-memory      NVMe          Disk throughput (MBps)   SSD drive IOPS*
capacity (MBps)   Baseline     Burst                                             caching (GB)   caching (GB)  Baseline    Burst        Baseline    Burst
128               188          1,500          Tens of thousands (baseline)       16             238           128         1,250        6,000       40,000
256               375          1,500          Tens of thousands (baseline)       32             475           256         1,250        12,000      40,000
512               750          1,500          Hundreds of thousands (baseline)   64             950           512         1,250        20,000      40,000
1,024             1,500        -              Hundreds of thousands (baseline)   128            1,900         1,024       1,250        40,000      -
2,048             3,125        -              Hundreds of thousands (baseline)   256            3,800         2,048       -            80,000      -
4,096             6,250        -              Hundreds of thousands (baseline)   512            7,600         4,096       -            160,000     -

A dash (-) indicates that the value does not apply.
Note

* Your SSD IOPS are only used when you access data that is not cached in your file server's in-memory cache or NVMe cache.

These performance specifications apply to Multi-AZ scale-up file systems in all other Amazon Web Services Regions where FSx for ONTAP is available.

FSx throughput    Network throughput (MBps)   Network IOPS                       In-memory      NVMe          Disk throughput (MBps)   SSD drive IOPS*
capacity (MBps)   Baseline     Burst                                             caching (GB)   caching (GB)  Baseline    Burst        Baseline    Burst
128               150          1,250          Tens of thousands (baseline)       16             150           128         600          6,000       18,750
256               300          1,250          Tens of thousands (baseline)       32             300           256         600          12,000      18,750
512               625          1,250          Hundreds of thousands (baseline)   64             600           512         600          18,750      -
1,024             1,500        -              Hundreds of thousands (baseline)   128            1,200         1,024       -            40,000      -
2,048             3,125        -              Hundreds of thousands (baseline)   256            2,400         2,048       -            80,000      -

A dash (-) indicates that the value does not apply.
Note

* Your SSD IOPS are only used when you access data that is not cached in your file server's in-memory cache or NVMe cache.

Example: storage capacity and throughput capacity

The following example illustrates how storage capacity and throughput capacity impact file system performance.

A scale-up file system that is configured with 2 TiB of SSD storage capacity and 512 MBps of throughput capacity has the following throughput levels:

  • Network throughput – 625 MBps baseline and 1,250 MBps burst (see throughput capacity table)

  • Disk throughput – 512 MBps baseline and 600 MBps burst.

Your workload accessing the file system can therefore drive up to 625 MBps baseline and 1,250 MBps burst throughput for file operations on actively accessed data that is cached in the file server's in-memory and NVMe caches.
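The arithmetic in this example can be sketched as follows. The function and parameter names are illustrative; the input values come from the throughput capacity tables above, and the 768 MBps-per-TiB constant is the default disk throughput level stated earlier:

```python
def scale_up_limits(storage_tib, network_baseline, network_burst,
                    fs_disk_baseline, fs_disk_burst):
    """Combine the file-server limits from the throughput capacity
    tables with the disk limit provided by SSD storage capacity
    (768 MBps per TiB by default)."""
    storage_disk_mbps = 768 * storage_tib
    return {
        "network": (network_baseline, network_burst),
        # Disk performance is capped by whichever limit is lower.
        "disk": (min(fs_disk_baseline, storage_disk_mbps),
                 min(fs_disk_burst, storage_disk_mbps)),
    }

# 2 TiB SSD storage, 512 MBps throughput capacity:
limits = scale_up_limits(2, 625, 1250, 512, 600)
print(limits["network"])  # -> (625, 1250)
print(limits["disk"])     # -> (512, 600): file-server limits are below 1,536
```

Here the file server's disk limits (512/600 MBps) are lower than the 1,536 MBps that 2 TiB of SSD storage could provide, so they are the binding constraint.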