Services or capabilities described in Amazon Web Services documentation might vary by Region. To see the differences applicable to the China Regions, see Getting Started with Amazon Web Services in China (PDF).

Firehose supports database as a source in all Amazon Web Services Regions except China Regions, Amazon GovCloud (US) Regions, and Asia Pacific (Malaysia). This feature is in preview and is subject to change. Do not use it for your production workloads.

Amazon Data Firehose Quota

This section describes current quotas, formerly referred to as limits, within Amazon Data Firehose. Each quota applies on a per-Region basis unless otherwise specified.

The Service Quotas console is a central location where you can view and manage your quotas for Amazon services, and request a quota increase for many of the resources that you use. Use the quota information that we provide to manage your Amazon infrastructure. Plan to request any quota increases in advance of the time that you'll need them.

For more information, see Amazon Data Firehose endpoints and quotas in the Amazon Web Services General Reference.

Amazon Data Firehose has the following quotas.

  • With Amazon MSK as the source for the Firehose stream, each Firehose stream has a default quota of 10 MB/sec of read throughput per partition and a 10 MB maximum record size. You can use Service Quotas to request an increase to the default quota of 10 MB/sec of read throughput per partition.

  • With Amazon MSK as the source for the Firehose stream, the maximum record size is 6 MB if Amazon Lambda is enabled, and 10 MB if Lambda is disabled. Amazon Lambda caps its incoming records at 6 MB, and Amazon Data Firehose forwards records above 6 MB to an error S3 bucket. If Lambda is disabled, Firehose caps its incoming records at 10 MB. If Amazon Data Firehose receives a record from Amazon MSK that is larger than 10 MB, then Amazon Data Firehose delivers this record to the S3 error bucket and emits CloudWatch metrics to your account. For more information on Amazon Lambda limits, see: https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html.

  • When dynamic partitioning on a Firehose stream is enabled, there is a default quota of 500 active partitions that can be created for that Firehose stream. The active partition count is the total number of active partitions within the delivery buffer. For example, if the dynamic partitioning query constructs 3 partitions per second and you have a buffer hint configuration that triggers delivery every 60 seconds, then, on average, you would have 180 active partitions, as the sketch following this item illustrates. Once data is delivered in a partition, that partition is no longer active. You can use the Amazon Data Firehose Limits form to request an increase of this quota up to 5,000 active partitions per given Firehose stream. If you need more partitions, you can create more Firehose streams and distribute the active partitions across them.
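
    As a rough check on sizing, the expected active partition count is just the partition creation rate multiplied by the buffer interval. A minimal sketch in Python, using the example numbers above (all values are illustrative):

      # Estimate the average number of active partitions in the delivery buffer.
      partitions_per_second = 3     # partitions created by the dynamic partitioning query
      buffer_interval_seconds = 60  # buffer hint that triggers delivery

      active_partitions = partitions_per_second * buffer_interval_seconds
      print(active_partitions)      # 180 -- well under the default quota of 500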

  • When dynamic partitioning on a Firehose stream is enabled, a maximum throughput of 1 GB per second is supported for each active partition.

  • Each account has the following quota for the number of Firehose streams per Region:

    • US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Asia Pacific (Tokyo): 5,000 Firehose streams

    • Europe (Frankfurt), Europe (London), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Seoul), Asia Pacific (Mumbai), Amazon GovCloud (US-West), Canada (Central): 2,000 Firehose streams

    • Europe (Paris), Europe (Milan), Europe (Stockholm), Asia Pacific (Hong Kong), Asia Pacific (Osaka), South America (São Paulo), China (Ningxia), China (Beijing), Middle East (Bahrain), Amazon GovCloud (US-East), Africa (Cape Town): 500 Firehose streams

    • Europe (Zurich), Europe (Spain), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Melbourne), Middle East (UAE), Israel (Tel Aviv), Canada West (Calgary), Asia Pacific (Malaysia): 100 Firehose streams

    • If you exceed this number, a call to CreateDeliveryStream results in a LimitExceededException; one way to handle it is shown in the sketch following this item. To increase this quota, you can use Service Quotas if it's available in your Region. For information about using Service Quotas, see Requesting a Quota Increase. If Service Quotas aren't available in your Region, you can use the Amazon Data Firehose Limits form to request an increase.
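
      A minimal boto3 sketch of handling this case; the stream name, role ARN, and bucket ARN are placeholders:

        import boto3
        from botocore.exceptions import ClientError

        firehose = boto3.client("firehose")

        try:
            firehose.create_delivery_stream(
                DeliveryStreamName="example-stream",  # placeholder
                DeliveryStreamType="DirectPut",
                ExtendedS3DestinationConfiguration={
                    "RoleARN": "arn:aws:iam::123456789012:role/example-firehose-role",  # placeholder
                    "BucketARN": "arn:aws:s3:::example-bucket",                         # placeholder
                },
            )
        except ClientError as err:
            if err.response["Error"]["Code"] == "LimitExceededException":
                # Per-Region stream quota reached: request an increase through
                # Service Quotas or the Amazon Data Firehose Limits form.
                raise
            raise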

  • When Direct PUT is configured as the data source, each Firehose stream provides the following combined quota for PutRecord and PutRecordBatch requests:

    • For US East (N. Virginia), US West (Oregon), and Europe (Ireland): 500,000 records/second, 2,000 requests/second, and 5 MiB/second.

    • For US East (Ohio), US West (N. California), Amazon GovCloud (US-East), Amazon GovCloud (US-West), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), China (Beijing), China (Ningxia), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Canada West (Calgary), Europe (Frankfurt), Europe (London), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), South America (São Paulo), Africa (Cape Town), Asia Pacific (Malaysia), and Europe (Milan): 100,000 records/second, 1,000 requests/second, and 1 MiB/second.

    To request an increase in quota, use the Amazon Data Firehose Limits form. The three quotas scale proportionally. For example, if you increase the throughput quota in US East (N. Virginia), US West (Oregon), or Europe (Ireland) to 10 MiB/second, the other two quotas increase to 4,000 requests/second and 1,000,000 records/second, as the sketch below works through.
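
    A small Python sketch of that proportional scaling, using the base quotas for those three Regions (illustrative arithmetic only):

      # Direct PUT base quotas for US East (N. Virginia), US West (Oregon),
      # and Europe (Ireland); all three values scale by the same factor.
      base_mib_per_s, base_requests_per_s, base_records_per_s = 5, 2_000, 500_000

      requested_mib_per_s = 10                       # example increase from the text
      factor = requested_mib_per_s / base_mib_per_s  # 2.0

      print(int(base_requests_per_s * factor))  # 4000 requests/second
      print(int(base_records_per_s * factor))   # 1000000 records/second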

    Important

    If the increased quota is much higher than the running traffic, it causes small delivery batches to destinations. This is inefficient and can result in higher costs at the destination services. Be sure to increase the quota only to match current running traffic, and increase the quota further if traffic increases.

    Important

    Note that smaller data records can lead to higher costs. Firehose ingestion pricing is based on the number of data records you send to the service, times the size of each record rounded up to the nearest 5 KB (5,120 bytes). So, for the same volume of incoming data (bytes), a greater number of incoming records incurs a higher cost. For example, if the total incoming data volume is 5 MiB, sending that 5 MiB over 5,000 records costs more than sending the same amount of data using 1,000 records. For more information, see Amazon Data Firehose in the Amazon Calculator.
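
    A Python sketch of that rounding arithmetic (illustrative only, not an official pricing calculator):

      import math

      def billed_bytes(record_size_bytes: int, record_count: int) -> int:
          # Each record is rounded up to the nearest 5 KB (5,120 bytes).
          return math.ceil(record_size_bytes / 5120) * 5120 * record_count

      five_mib = 5 * 1024 * 1024
      # The same 5 MiB of data, split into different record counts:
      print(billed_bytes(five_mib // 5000, 5000))  # 25,600,000 billed bytes
      print(billed_bytes(five_mib // 1000, 1000))  # 10,240,000 billed bytes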

    Note

    When Kinesis Data Streams is configured as the data source, this quota doesn't apply, and Amazon Data Firehose scales up and down with no limit.

  • Each Firehose stream stores data records for up to 24 hours in case the delivery destination is unavailable, if the source is Direct PUT. If the source is Kinesis Data Streams (KDS) and the destination is unavailable, the data is retained based on your KDS configuration.

  • The maximum size of a record sent to Amazon Data Firehose, before base64-encoding, is 1,000 KiB.

  • The PutRecordBatch operation can take up to 500 records per call or 4 MiB per call, whichever is smaller. This quota cannot be changed.
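
    A minimal Python (boto3) sketch of batching records so that each PutRecordBatch call stays within both caps; the stream name and payloads are placeholders:

      import boto3

      MAX_RECORDS_PER_CALL = 500
      MAX_BYTES_PER_CALL = 4 * 1024 * 1024  # 4 MiB

      firehose = boto3.client("firehose")

      def put_in_batches(stream_name: str, payloads: list) -> None:
          batch, batch_bytes = [], 0
          for data in payloads:  # each item is a bytes payload
              full = (len(batch) == MAX_RECORDS_PER_CALL
                      or batch_bytes + len(data) > MAX_BYTES_PER_CALL)
              if batch and full:
                  firehose.put_record_batch(DeliveryStreamName=stream_name, Records=batch)
                  batch, batch_bytes = [], 0
              batch.append({"Data": data})
              batch_bytes += len(data)
          if batch:
              firehose.put_record_batch(DeliveryStreamName=stream_name, Records=batch)

    A production version should also inspect FailedPutCount in each response and resend any failed records.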

  • Each of the following operations can provide up to five invocations per second; this is a hard limit: CreateDeliveryStream, DeleteDeliveryStream, DescribeDeliveryStream, ListDeliveryStreams, UpdateDestination, TagDeliveryStream, UntagDeliveryStream, ListTagsForDeliveryStream, StartDeliveryStreamEncryption, StopDeliveryStreamEncryption.
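
    Because this limit cannot be raised, callers should back off and retry when throttled. A minimal Python sketch; the exact error code returned on throttling is an assumption here, and the backoff parameters are arbitrary:

      import time

      import boto3
      from botocore.exceptions import ClientError

      firehose = boto3.client("firehose")

      def list_streams_with_backoff(max_attempts: int = 5):
          for attempt in range(max_attempts):
              try:
                  return firehose.list_delivery_streams()
              except ClientError as err:
                  # Assumed throttling codes; verify against your SDK's behavior.
                  if err.response["Error"]["Code"] not in ("ThrottlingException", "LimitExceededException"):
                      raise
                  time.sleep(2 ** attempt)  # exponential backoff: 1s, 2s, 4s, ...
          raise RuntimeError("still throttled after retries")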

  • The buffer interval hints range from 60 seconds to 900 seconds.

  • For delivery from Amazon Data Firehose to Amazon Redshift, only publicly accessible Amazon Redshift clusters are supported.

  • The retry duration range is from 0 seconds to 7,200 seconds for Amazon Redshift and OpenSearch Service delivery.

  • Firehose supports Elasticsearch versions 1.5, 2.3, 5.1, 5.3, 5.5, and 5.6, as well as all 6.*, 7.*, and 8.* versions. Firehose supports Amazon OpenSearch Service 2.x up to 2.11.

  • When the destination is Amazon S3, Amazon Redshift, or OpenSearch Service, Amazon Data Firehose allows up to 5 outstanding Lambda invocations per shard. For Splunk, the quota is 10 outstanding Lambda invocations per shard.

  • You can use a CMK of type CUSTOMER_MANAGED_CMK to encrypt up to 500 Firehose streams.
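
    A minimal boto3 sketch of enabling server-side encryption with a customer managed key on an existing stream; the stream name and key ARN are placeholders:

      import boto3

      firehose = boto3.client("firehose")

      firehose.start_delivery_stream_encryption(
          DeliveryStreamName="example-stream",  # placeholder
          DeliveryStreamEncryptionConfigurationInput={
              "KeyType": "CUSTOMER_MANAGED_CMK",
              # Placeholder ARN; one such key can encrypt up to 500 Firehose streams.
              "KeyARN": "arn:aws:kms:us-east-1:123456789012:key/1234abcd-11aa-22bb-33cc-444455556666",
          },
      )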