Provisioning storage throughput

Amazon MSK brokers persist data on storage volumes. Storage I/O is consumed when producers write to the cluster, when data is replicated between brokers, and when consumers read data that isn't in memory. The volume storage throughput is the rate at which data can be written into and read from a storage volume. Provisioned storage throughput is the ability to specify that rate for the brokers in your cluster.

You can specify the provisioned throughput rate in MiB per second for clusters whose brokers are size kafka.m5.4xlarge or larger and whose storage volumes are 10 GiB or greater. You can specify provisioned throughput during cluster creation, and you can enable or disable it for a cluster that is in the ACTIVE state.
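
For example, for a cluster that is already ACTIVE, you can turn on provisioned throughput with the UpdateStorage operation through the Amazon CLI. The following is a minimal sketch, not a definitive procedure; the cluster ARN and current cluster version are placeholders that you replace with values from your account (you can retrieve the current version with aws kafka describe-cluster-v2).

    # Enable provisioned throughput of 250 MiB/s on an existing cluster.
    aws kafka update-storage \
        --cluster-arn "Cluster-ARN" \
        --current-version "Current-Cluster-Version" \
        --provisioned-throughput Enabled=true,VolumeThroughput=250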

Throughput bottlenecks

There are several potential bottlenecks for broker throughput: volume throughput, Amazon EC2 to Amazon EBS network throughput, and Amazon EC2 egress throughput. Enabling provisioned storage throughput raises volume throughput, but broker throughput can still be limited by Amazon EC2 to Amazon EBS network throughput and Amazon EC2 egress throughput.

Amazon EC2 egress throughput is affected by the number of consumer groups and the number of consumers per consumer group. Both Amazon EC2 to Amazon EBS network throughput and Amazon EC2 egress throughput are higher for larger broker sizes.

For volume sizes of 10 GiB or larger, you can provision storage throughput of 250 MiB per second or greater; 250 MiB per second is the default. To provision storage throughput, you must choose broker size kafka.m5.4xlarge or larger (or kafka.m7g.2xlarge or larger), and you can specify throughput up to the maximum shown in the following table.

Broker size           Maximum storage throughput (MiB/s)
kafka.m5.4xlarge      593
kafka.m5.8xlarge      850
kafka.m5.12xlarge     1000
kafka.m5.16xlarge     1000
kafka.m5.24xlarge     1000
kafka.m7g.2xlarge     312.5
kafka.m7g.4xlarge     625
kafka.m7g.8xlarge     1000
kafka.m7g.12xlarge    1000
kafka.m7g.16xlarge    1000

Measuring storage throughput

You can use the VolumeReadBytes and VolumeWriteBytes metrics to measure the average storage throughput of a cluster. Together, these two metrics account for all bytes read from and written to storage. To get the average storage throughput for a cluster, set the statistic for these two metrics to Sum and the period to 1 minute, then use the following formula.

Average storage throughput in MiB/s = (Sum(VolumeReadBytes) + Sum(VolumeWriteBytes)) / (60 * 1024 * 1024)
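
As an illustration, you can pull the per-minute sums with the Amazon CLI and apply the formula yourself. This sketch assumes PER_BROKER level monitoring is enabled and uses placeholder values for the cluster name, broker ID, and time range; run the same query again with VolumeWriteBytes, add the two Sum values for each period, and divide by (60 * 1024 * 1024) to get MiB/s.

    # Per-minute sums of bytes read from storage for one broker.
    aws cloudwatch get-metric-statistics \
        --namespace AWS/Kafka \
        --metric-name VolumeReadBytes \
        --dimensions Name="Cluster Name",Value="provisioned-throughput-example" Name="Broker ID",Value="1" \
        --statistics Sum \
        --period 60 \
        --start-time 2024-01-01T00:00:00Z \
        --end-time 2024-01-01T01:00:00Z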

For information about the VolumeReadBytes and VolumeWriteBytes metrics, see PER_BROKER Level monitoring.

Configuration update

You can update your Amazon MSK configuration either before or after you turn on provisioned throughput. However, you won't see the desired throughput until you perform both actions: update the num.replica.fetchers configuration parameter and turn on provisioned throughput.

In the default Amazon MSK configuration, num.replica.fetchers has a value of 2. To update num.replica.fetchers, you can start from the suggested values in the following table. These values are guidelines; we recommend that you adjust them based on your use case. A sketch of applying the change with the Amazon CLI follows the table.

Broker size           num.replica.fetchers
kafka.m5.4xlarge      4
kafka.m5.8xlarge      8
kafka.m5.12xlarge     14
kafka.m5.16xlarge     16
kafka.m5.24xlarge     16
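
As a sketch of how you might apply the change with the Amazon CLI, the following creates a configuration from a properties file and assigns it to an existing cluster. The configuration name, file name, ARNs, revision, and current cluster version are placeholder values.

    # server.properties contains the line: num.replica.fetchers=4
    aws kafka create-configuration \
        --name "num-replica-fetchers-example" \
        --kafka-versions "2.8.1" \
        --server-properties fileb://server.properties

    # Use the configuration ARN and revision returned by create-configuration.
    aws kafka update-cluster-configuration \
        --cluster-arn "Cluster-ARN" \
        --configuration-info Arn="Configuration-ARN",Revision=1 \
        --current-version "Current-Cluster-Version"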

Your updated configuration may not take effect for up to 24 hours, and may take longer when a source volume is not fully utilized. However, transitional volume performance at least equals the performance of the source storage volumes during the migration period. A fully utilized 1 TiB volume typically takes about six hours to migrate to an updated configuration.

Provisioning storage throughput using the Amazon Web Services Management Console

  1. Sign in to the Amazon Web Services Management Console, and open the Amazon MSK console at https://console.amazonaws.cn/msk/home?region=us-east-1#/home/.

  2. Choose Create cluster.

  3. Choose Custom create.

  4. Specify a name for the cluster.

  5. In the Storage section, choose Enable for provisioned storage throughput.

  6. Choose a value for storage throughput per broker.

  7. Choose a VPC, zones and subnets, and a security group.

  8. Choose Next.

  9. At the bottom of the Security step, choose Next.

  10. At the bottom of the Monitoring and tags step, choose Next.

  11. Review the cluster settings, then choose Create cluster.

Provisioning storage throughput using the Amazon CLI

This section shows an example of how you can use the Amazon CLI to create a cluster with provisioned throughput enabled.

  1. Copy the following JSON and paste it into a file. Replace the subnet IDs and security group ID placeholders with values from your account. Name the file cluster-creation.json and save it.

    { "Provisioned": { "BrokerNodeGroupInfo":{ "InstanceType":"kafka.m5.4xlarge", "ClientSubnets":[ "Subnet-1-ID", "Subnet-2-ID" ], "SecurityGroups":[ "Security-Group-ID" ], "StorageInfo": { "EbsStorageInfo": { "VolumeSize": 10, "ProvisionedThroughput": { "Enabled": true, "VolumeThroughput": 250 } } } }, "EncryptionInfo": { "EncryptionInTransit": { "InCluster": false, "ClientBroker": "PLAINTEXT" } }, "KafkaVersion":"2.8.1", "NumberOfBrokerNodes": 2 }, "ClusterName": "provisioned-throughput-example" }
  2. Run the following Amazon CLI command from the directory where you saved the JSON file in the previous step.

    aws kafka create-cluster-v2 --cli-input-json file://cluster-creation.json
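
To confirm that provisioned throughput is enabled after the cluster reaches the ACTIVE state, you can describe the cluster; this is a sketch with a placeholder ARN. Look for the ProvisionedThroughput object under EbsStorageInfo in the output.

    # Inspect the cluster's storage settings.
    aws kafka describe-cluster-v2 --cluster-arn "Cluster-ARN"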

Provisioning storage throughput using the API

To configure provisioned storage throughput while creating a cluster, use CreateClusterV2.