
Configuring and right-sizing your file system

Selecting a deployment type

Amazon FSx provides two deployment options: Single-AZ and Multi-AZ. We recommend Multi-AZ file systems for most production workloads that require high availability for shared Windows file data. For more information, see Availability and durability: Single-AZ and Multi-AZ file systems.
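For illustration, the following is a minimal sketch using the AWS SDK for Python (Boto3) that creates a Multi-AZ file system. The subnet, security group, and directory IDs, as well as the capacity values, are placeholders for your own environment.

```python
import boto3

fsx = boto3.client("fsx")

# Sketch: create a Multi-AZ (MULTI_AZ_1) file system.
# All resource IDs below are placeholders for your own VPC and directory.
response = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageType="SSD",
    StorageCapacity=1024,  # GiB
    SubnetIds=["subnet-1111aaaa", "subnet-2222bbbb"],  # one subnet per Availability Zone
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "PreferredSubnetId": "subnet-1111aaaa",  # where the preferred file server runs
        "ThroughputCapacity": 64,                # MB/s
        "ActiveDirectoryId": "d-1234567890",     # AWS Managed Microsoft AD
    },
)
print(response["FileSystem"]["FileSystemId"])
```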

Selecting a storage type

SSD storage is appropriate for most production workloads that have high performance and low-latency requirements. Examples of these workloads include databases, data analytics, media processing, and business applications. We also recommend SSD for use cases involving large numbers of end users, high levels of I/O, or datasets that have large numbers of small files. Lastly, we recommend using SSD storage if you plan to enable shadow copies. Note that you can configure and scale SSD IOPS only for file systems with SSD storage, not HDD storage.
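As a sketch of what provisioning SSD IOPS looks like through the API (the file system ID and IOPS value below are placeholders):

```python
import boto3

fsx = boto3.client("fsx")

# Sketch: provision SSD IOPS independently of storage capacity.
# The file system ID and the IOPS value are placeholders.
fsx.update_file_system(
    FileSystemId="fs-0123456789abcdef0",
    WindowsConfiguration={
        "DiskIopsConfiguration": {
            "Mode": "USER_PROVISIONED",
            "Iops": 6000,
        }
    },
)
```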

If you decide to use HDD storage, test your file system to ensure it's able to meet your performance requirements. HDD storage comes at a lower cost relative to SSD storage, but with higher latencies and lower levels of disk throughput and disk IOPS per unit of storage. It might be suitable for general-purpose user shares and home directories with low I/O requirements, large content management systems (CMS) where data is retrieved infrequently, or datasets with small numbers of large files. For more information, see Storage configuration & performance.

You can upgrade your storage type from HDD to SSD at any time by using the Amazon FSx console or Amazon FSx API. For more information, see Managing storage type.
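Through the API, this upgrade is a single call; a sketch with a placeholder file system ID:

```python
import boto3

fsx = boto3.client("fsx")

# Sketch: upgrade the storage type from HDD to SSD.
# "fs-0123456789abcdef0" is a placeholder file system ID.
fsx.update_file_system(
    FileSystemId="fs-0123456789abcdef0",
    StorageType="SSD",
)
```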

Selecting a throughput capacity

Configure your file system with enough throughput capacity to meet not only the expected traffic of your workload, but also the additional performance resources required by the features that you enable on your file system. For example, if you're running data deduplication, the throughput capacity that you select must provide enough memory to run deduplication based on the storage that you have. If you're using shadow copies, increase throughput capacity to at least three times the throughput that you expect your workload to drive, to prevent Windows Server from deleting your shadow copies. For more information, see Impact of throughput capacity on performance.
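Setting throughput capacity is one field in an update call; a minimal Boto3 sketch, with the file system ID and throughput value as placeholders:

```python
import boto3

fsx = boto3.client("fsx")

# Sketch: raise throughput capacity, for example to roughly 3x the
# expected workload throughput when shadow copies are enabled.
fsx.update_file_system(
    FileSystemId="fs-0123456789abcdef0",       # placeholder
    WindowsConfiguration={"ThroughputCapacity": 128},  # MB/s
)
```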

Increasing your storage capacity and throughput capacity

Increase the storage capacity of your file system when it's running low on free storage, or when you expect your storage requirements to grow beyond the current storage limit. We recommend maintaining at least 10% free storage capacity at all times on your file system. We also recommend increasing storage capacity by at least 20% in a single step, because you can't request another increase while a storage scaling process is ongoing. You can use the FreeStorageCapacity CloudWatch metric to monitor the amount of free storage available and understand how it trends. For more information, see Managing storage capacity.
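The following sketch trends FreeStorageCapacity over the past week and flags points below the 10% recommendation. The file system ID and the 1 TiB total capacity are placeholder assumptions; in practice you would read the total from describe_file_systems.

```python
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")

# Sketch: trend free storage over the past week; flag points below 10% free.
# The file system ID and 1 TiB total capacity are placeholders.
TOTAL_BYTES = 1024 * 1024**3

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/FSx",
    MetricName="FreeStorageCapacity",   # reported in bytes
    Dimensions=[{"Name": "FileSystemId", "Value": "fs-0123456789abcdef0"}],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(days=7),
    EndTime=datetime.datetime.utcnow(),
    Period=3600,  # hourly data points
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    pct_free = 100 * point["Average"] / TOTAL_BYTES
    flag = "  <-- below 10% free" if pct_free < 10 else ""
    print(f'{point["Timestamp"]:%Y-%m-%d %H:%M} {pct_free:5.1f}% free{flag}')
```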

You should also increase the throughput capacity of your file system if your workload is constrained by the current performance limits. You can use the Monitoring and performance page on the Amazon FSx console to see when workload demands have approached or exceeded performance limits, and to determine whether your file system is under-provisioned for your workload.

To minimize the duration of storage scaling and avoid a reduction in write performance, we recommend increasing your file system's throughput capacity before increasing storage capacity, and then scaling throughput capacity back down after the storage capacity increase is complete. Most workloads experience minimal performance impact during storage scaling, but write-heavy applications with large active datasets can temporarily experience up to a one-half reduction in write performance.
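A sketch of that sequence, assuming a placeholder file system ID and example throughput values, and polling the file system's administrative actions to know when each step has finished:

```python
import time
import boto3

fsx = boto3.client("fsx")
FS_ID = "fs-0123456789abcdef0"  # placeholder

def wait_for_actions(fs_id, poll_seconds=60):
    """Poll until no administrative action is still pending, running, or optimizing."""
    while True:
        fs = fsx.describe_file_systems(FileSystemIds=[fs_id])["FileSystems"][0]
        busy = [a for a in fs.get("AdministrativeActions", [])
                if a["Status"] in ("PENDING", "IN_PROGRESS", "UPDATED_OPTIMIZING")]
        if not busy:
            return
        time.sleep(poll_seconds)

# 1. Temporarily raise throughput capacity so storage scaling runs faster.
fsx.update_file_system(FileSystemId=FS_ID,
                       WindowsConfiguration={"ThroughputCapacity": 256})
wait_for_actions(FS_ID)

# 2. Increase storage capacity (per the recommendation above, by at least 20%).
fsx.update_file_system(FileSystemId=FS_ID, StorageCapacity=2048)
wait_for_actions(FS_ID)  # waits through the update and storage optimization

# 3. Scale throughput capacity back down once storage scaling completes.
fsx.update_file_system(FileSystemId=FS_ID,
                       WindowsConfiguration={"ThroughputCapacity": 64})
```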

Modifying throughput capacity during idle periods

Updating throughput capacity interrupts availability for a few minutes on Single-AZ file systems and causes a failover and failback on Multi-AZ file systems. For Multi-AZ file systems, any data changes made during failover and failback must afterward be synchronized between the file servers. This data synchronization can take multiple hours for write-heavy and IOPS-heavy workloads. Although your file system remains available during this time, we recommend scheduling maintenance windows and performing throughput capacity updates during idle periods, when there is minimal load on your file system, to reduce the duration of data synchronization. To learn more, see Managing throughput capacity.
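One way to confirm a quiet window before applying the update is to check recent read and write traffic in CloudWatch. This is a sketch only; the file system ID and the idle threshold are placeholder assumptions you would tune for your workload.

```python
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")
FS_ID = "fs-0123456789abcdef0"  # placeholder

def recent_throughput_mbps(metric_name):
    """Average MB/s over the last 30 minutes for one AWS/FSx metric."""
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/FSx",
        MetricName=metric_name,
        Dimensions=[{"Name": "FileSystemId", "Value": FS_ID}],
        StartTime=datetime.datetime.utcnow() - datetime.timedelta(minutes=30),
        EndTime=datetime.datetime.utcnow(),
        Period=1800,
        Statistics=["Sum"],
    )
    points = stats["Datapoints"]
    return (points[0]["Sum"] / 1800 / 1e6) if points else 0.0

load = recent_throughput_mbps("DataReadBytes") + recent_throughput_mbps("DataWriteBytes")
if load < 1.0:  # placeholder idle threshold, MB/s
    print("File system looks idle; data synchronization should be short.")
else:
    print(f"Current load ~{load:.1f} MB/s; consider waiting for an idle period.")
```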