
Optimizing costs on DynamoDB tables

This section covers best practices for optimizing costs on your existing DynamoDB tables. Review the following strategies to see which best suit your needs, and apply them iteratively. Each strategy provides an overview of what might be driving your costs, what signs to look for, and prescriptive guidance on how to reduce them.

Topics

  • Determine if you can improve your table-level cost analysis
  • Determine if you need on-demand capacity or provisioned capacity
  • Determine if you are using auto scaling for your provisioned capacity tables
  • Determine if you are using the right table class
  • Determine if you have unused resources
  • Determine if you have sub-optimal usage patterns
  • Determine if you can lower your stream costs

Determine if you can improve your table-level cost analysis

The default Amazon Cost Explorer configuration makes it easy to see aggregated DynamoDB costs by usage type, but does not expose individual table costs. This makes it harder to answer questions like “What is my highest cost table in a Region?” or “Which usage type is responsible for the majority of the cost of this table?”. To enable table-level filtering in Cost Explorer, you can tag your tables eponymously (with their own name).

How to enable table-level cost analysis with tagging

Tag each table through the Amazon Web Services Management Console or Amazon CLI, using a consistent tag key for the table name (e.g. "tablename"), setting the tag value to that table’s name. If you need to tag a large number of tables, you can use the Eponymous Table Tagger tool.
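
For example, the following Python sketch (a minimal illustration, not a replacement for the Eponymous Table Tagger tool) tags every table in the current Region with its own name. The tag key "tablename" is taken from the example above; use whatever key you activate in Cost Explorer.

    import boto3

    # Minimal sketch: tag every table in the current Region with its own name.
    # The tag key "tablename" must match the key you activate in Cost Explorer.
    dynamodb = boto3.client("dynamodb")

    paginator = dynamodb.get_paginator("list_tables")
    for page in paginator.paginate():
        for table_name in page["TableNames"]:
            # The TagResource API takes the table ARN, not the table name.
            table_arn = dynamodb.describe_table(TableName=table_name)["Table"]["TableArn"]
            dynamodb.tag_resource(
                ResourceArn=table_arn,
                Tags=[{"Key": "tablename", "Value": table_name}],
            )
            print(f"Tagged {table_name}")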

Note

To use this functionality, you need to activate the tag so that it can be filtered on in Amazon Cost Explorer. It can take up to 24 hours for a newly created tag to appear in Cost Explorer, and up to another 24 hours before it can be used for filtering.

Determine if you need on-demand capacity or provisioned capacity

DynamoDB tables support two capacity modes: on-demand capacity mode and provisioned capacity mode.

  • On-demand capacity mode automatically adapts to workloads with frequently changing capacity requirements, and can scale to zero for intermittent workloads. On-demand capacity mode is the recommended mode for new workloads with undetermined capacity requirements, as it does not require capacity planning or provisioning, and you only pay for the reads and writes performed.

  • With provisioned capacity mode, you specify the number of reads and writes per second required for your table and are billed hourly for that provisioned capacity. Provisioned capacity mode is a good fit for workloads with predictable capacity requirements, as it is priced lower than on-demand capacity mode for the equivalent capacity.

Note

Throttling can occur if you do not provision your table accurately and traffic to your DynamoDB table spikes repeatedly. For more information on capacity modes, see DynamoDB Capacity Modes.

For new workloads with unknown capacity needs, use on-demand capacity mode for a few weeks, and then evaluate the capacity usage using CloudWatch metrics to validate the capacity mode choice. If your traffic is unpredictable, with spikes in demand, keep using on-demand capacity mode. If your traffic is predictable with steady read and write usage, consider changing to provisioned capacity mode.

How to choose the right table capacity mode

You can determine the amount of capacity used via CloudWatch metrics. For an in-depth look at how you can implement this strategy, see Evaluate table capacity mode.
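
As a starting point, the following Python sketch pulls a table's consumed read capacity from CloudWatch over a two-week window; the table name MyTable and the 14-day window are placeholder assumptions. Repeat with ConsumedWriteCapacityUnits for writes.

    import boto3
    from datetime import datetime, timedelta, timezone

    # Minimal sketch: summarize consumed read capacity for one table over 14 days.
    # "MyTable" and the 14-day window are placeholder assumptions.
    cloudwatch = boto3.client("cloudwatch")
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=14)

    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/DynamoDB",
        MetricName="ConsumedReadCapacityUnits",
        Dimensions=[{"Name": "TableName", "Value": "MyTable"}],
        StartTime=start,
        EndTime=end,
        Period=3600,               # one datapoint per hour
        Statistics=["Sum"],
    )

    # Average read units per second in each hour; large hour-to-hour swings
    # suggest staying on-demand, while a steady profile favors provisioned mode.
    for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Sum"] / 3600)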

Determine if you are using auto scaling for your provisioned capacity tables

Most tables using provisioned capacity mode should also make use of DynamoDB auto scaling. Auto scaling automatically changes provisioned capacity settings based on your table’s actual capacity utilization, avoiding the need to manually manage capacity as your workload changes. DynamoDB auto scaling helps ensure your provisioned capacity tables are not over provisioned, avoiding unnecessary costs.

How to use auto scaling for provisioned capacity tables

You can enable DynamoDB auto scaling using the Amazon Web Services Management Console, the Amazon CLI, or the Amazon SDK.
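
For example, a minimal Python sketch using the Application Auto Scaling API might look like the following; the table name MyTable, the 5-500 capacity range, and the 70 percent utilization target are placeholder assumptions.

    import boto3

    # Minimal sketch: enable target-tracking auto scaling for a table's read capacity.
    # "MyTable", the 5-500 range, and the 70% target are placeholder assumptions;
    # repeat with ScalableDimension "dynamodb:table:WriteCapacityUnits" for writes.
    autoscaling = boto3.client("application-autoscaling")

    autoscaling.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId="table/MyTable",
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        MinCapacity=5,
        MaxCapacity=500,
    )

    autoscaling.put_scaling_policy(
        PolicyName="MyTableReadScaling",
        ServiceNamespace="dynamodb",
        ResourceId="table/MyTable",
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,  # aim for roughly 70% utilization of provisioned reads
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
            },
        },
    )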

Determine if you are using the right table class

DynamoDB offers two table classes designed to help you optimize costs for your workloads: Standard and Standard-Infrequent Access (DynamoDB Standard-IA). Choose the table class that best fits your workload’s balance between storage and throughput usage:

  • Standard table class (the default): this table class balances storage and read/write costs.

  • Standard-IA table class: this table class offers up to 60% lower storage costs and 25% higher read/write costs than the Standard table class.

Note

The table class chosen can affect other pricing dimensions such as read/write costs, data storage costs, Global Table costs, and GSI costs (GSIs inherit the table class of their parent table). For more information about choosing between the two table classes, see Considerations when choosing a table class.

How to choose the right table class

You can use the Table Class Evaluator Tool to identify tables that may benefit from the Standard-IA table class. For an in-depth look at how you can implement this strategy, see Evaluate table class.
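
If you decide to switch, the table class can be changed in place on an existing table. The following Python sketch is a minimal illustration; MyTable is a placeholder.

    import boto3

    # Minimal sketch: move an existing table to the Standard-IA table class.
    # "MyTable" is a placeholder; use TableClass="STANDARD" to switch back.
    dynamodb = boto3.client("dynamodb")

    dynamodb.update_table(
        TableName="MyTable",
        TableClass="STANDARD_INFREQUENT_ACCESS",
    )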

Determine if you have unused resources

DynamoDB tables and Global Secondary Indexes (GSIs) can generate costs even when they are not actively used. Regularly check each of your tables and GSIs to confirm they are still needed, and consider deleting unused resources.

How to identify unused resources that impact billing

Check the CloudWatch metrics console for tables and GSIs with no ConsumedReadCapacityUnits or ConsumedWriteCapacityUnits traffic. For an in-depth look at how you can implement this strategy, see Identifying unused resources.
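
The following Python sketch flags tables with no consumed read or write capacity over the last 30 days; the 30-day window is an assumption, and the same check can be repeated per GSI by adding the GlobalSecondaryIndexName dimension.

    import boto3
    from datetime import datetime, timedelta, timezone

    # Minimal sketch: flag tables with zero consumed read and write capacity
    # over the last 30 days (the window is an assumption, not a rule).
    dynamodb = boto3.client("dynamodb")
    cloudwatch = boto3.client("cloudwatch")
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=30)

    def total_consumed(table_name, metric_name):
        response = cloudwatch.get_metric_statistics(
            Namespace="AWS/DynamoDB",
            MetricName=metric_name,
            Dimensions=[{"Name": "TableName", "Value": table_name}],
            StartTime=start,
            EndTime=end,
            Period=86400,          # one datapoint per day
            Statistics=["Sum"],
        )
        return sum(point["Sum"] for point in response["Datapoints"])

    paginator = dynamodb.get_paginator("list_tables")
    for page in paginator.paginate():
        for table_name in page["TableNames"]:
            reads = total_consumed(table_name, "ConsumedReadCapacityUnits")
            writes = total_consumed(table_name, "ConsumedWriteCapacityUnits")
            if reads == 0 and writes == 0:
                print(f"{table_name}: no read or write traffic in the last 30 days")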

Determine if you have sub-optimal usage patterns

Sub-optimal table usage patterns can lead to unintentional costs. Evaluate how you’re using your tables and whether any of these usage patterns apply to you.

  • Performing only strongly-consistent read operations

    An eventually-consistent read (the default) consumes 0.5 (one-half) RCU/RRU per 4 KB. A strongly-consistent read consumes 1 RCU/RRU per 4 KB, twice the cost of an eventually-consistent read. Ensure that you only use strongly-consistent reads where your application cannot tolerate stale data (see the sketch after this list). For more information about RCUs, see read capacity units and write capacity units.

  • Using transactions for all read operations

    Transactional reads (using the TransactGetItems and PartiQL ExecuteTransaction APIs) consume 2 RCUs/RRUs per 4 KB read, twice the cost of strongly-consistent reads and four times the cost of eventually-consistent reads. Ensure that you are only performing transactional reads when your application requires all-or-nothing consistency for all items read. For more information about transactions, see Amazon DynamoDB Transactions: How it works.

  • Using transactions for all write operations

    Transactional writes (using the TransactWriteItems and PartiQL ExecuteTransaction APIs) consume 2 WCUs per 1 KB write, twice the write capacity of a standard write. Ensure that you are only performing transactional writes when your application requires all-or-nothing atomicity for all items written. For more information about transactions, see Amazon DynamoDB Transactions: How it works.

  • Scanning tables for analytics operations

    Performing data analysis via table scans can generate high costs, as you are charged for all data read from the table. Consider using DynamoDB’s Export to Amazon S3 functionality to export table data, and perform analytics on the data in Amazon S3 instead.

    Note

    Point-in-time recovery for DynamoDB must be enabled for a table in order to use Export to S3.

  • Not using Time-to-Live (TTL)

    Regularly removing unneeded items from your tables can help reduce storage costs. DynamoDB’s Time-to-Live (TTL) feature automatically deletes expired items from your tables, reducing storage costs without incurring additional write costs.

  • Keeping long-term backups in warm storage

    Amazon Backup offers a cold storage tier for DynamoDB backups, which can reduce backup storage costs by up to 66% for long-term backup data. Amazon Backup also offers lifecycle features, and backups created by Amazon Backup inherit table tags that can be used for cost optimization analysis.

    Note

    You must opt in to the Amazon Backup service before you can start using it.

  • Using Global Tables for Disaster Recovery of a single region

    If you are using a Global Tables replica strictly for Disaster Recovery of a single primary region, and your Recovery Point Objective (RPO) and Recovery Time Objective (RTO) goals are measured in hours or days, you may not need Global Tables’ low-latency replication to meet your recovery goals. Consider using Amazon Backup for your DynamoDB tables instead, and use Amazon Backup’s cross-Region copy feature.
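
To illustrate the read-cost differences called out above, the following Python sketch issues the same lookup three ways: as a default eventually-consistent read, as a strongly-consistent read, and as a transactional read. The table name MyTable and the key attribute pk are placeholder assumptions.

    import boto3

    # Minimal sketch comparing read styles on the same item.
    # "MyTable" and the key attribute "pk" are placeholder assumptions.
    dynamodb = boto3.client("dynamodb")
    key = {"pk": {"S": "customer#123"}}

    # Eventually-consistent read (the default): 0.5 RCU/RRU per 4 KB.
    dynamodb.get_item(TableName="MyTable", Key=key)

    # Strongly-consistent read: 1 RCU/RRU per 4 KB (2x the default).
    dynamodb.get_item(TableName="MyTable", Key=key, ConsistentRead=True)

    # Transactional read: 2 RCUs/RRUs per 4 KB (4x the default); reserve this for
    # operations that truly need all-or-nothing consistency across items.
    dynamodb.transact_get_items(
        TransactItems=[{"Get": {"TableName": "MyTable", "Key": key}}]
    )

If most of your reads look like the last two calls but could tolerate the first, switching back to the default consistency is often the simplest saving available.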

How to identify suboptimal table usage patterns

  • Using only strongly-consistent reads

    Talk to your developers to see whether they are using strongly-consistent reads everywhere, and whether that is necessary for your workload.

  • Using transactions for all operations

    Check CloudWatch and filter on the Operation dimension to look at your TransactGetItems and TransactWriteItems requests (see the sketch after this list). Compare the chart with your overall table utilization; if they look the same, then everything is probably being done as a transaction.

  • Scanning tables for batch operations

    Export your DynamoDB data to S3 and perform any scan operations on the S3 data instead (see the sketch after this list).

  • Not using Time to Live (TTL)

    Add a TTL attribute to your item structure (TTL deletes are free) so that items you no longer need are removed automatically (see the sketch after this list).

  • Using DynamoDB backups instead of Amazon Backup

    Opt in to the Amazon Backup service and start backing up your data with Amazon Backup instead of DynamoDB backups.

  • Using global tables for data replication with relaxed Recovery Point Objective (RPO) and Recovery Time Objective (RTO) requirements

    Check your global tables and see whether they are being used for their intended purpose, or just for data replication. Global tables offer low RPO/RTO; if your requirements are more flexible, there may be cheaper alternatives such as Amazon Backup's cross-Region copy feature.
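
The following Python sketch illustrates three of the checks and changes above: counting transactional requests in CloudWatch, enabling TTL, and starting an export to Amazon S3. The table name MyTable, the TTL attribute expires_at, and the bucket my-analytics-bucket are placeholder assumptions.

    import boto3
    from datetime import datetime, timedelta, timezone

    dynamodb = boto3.client("dynamodb")
    cloudwatch = boto3.client("cloudwatch")

    # 1) Count transactional requests over the last week using the Operation
    #    dimension (SampleCount of SuccessfulRequestLatency is the request count).
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=7)
    for operation in ("TransactGetItems", "TransactWriteItems"):
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/DynamoDB",
            MetricName="SuccessfulRequestLatency",
            Dimensions=[
                {"Name": "TableName", "Value": "MyTable"},
                {"Name": "Operation", "Value": operation},
            ],
            StartTime=start,
            EndTime=end,
            Period=86400,
            Statistics=["SampleCount"],
        )
        total = sum(point["SampleCount"] for point in stats["Datapoints"])
        print(f"{operation}: {total} requests in the last 7 days")

    # 2) Enable TTL so that items whose "expires_at" attribute (epoch seconds)
    #    has passed are deleted automatically at no additional write cost.
    dynamodb.update_time_to_live(
        TableName="MyTable",
        TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
    )

    # 3) Start an export to S3 for analytics instead of scanning the table.
    #    Point-in-time recovery must already be enabled on the table.
    table_arn = dynamodb.describe_table(TableName="MyTable")["Table"]["TableArn"]
    dynamodb.export_table_to_point_in_time(
        TableArn=table_arn,
        S3Bucket="my-analytics-bucket",
        ExportFormat="DYNAMODB_JSON",
    )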

Determine if you can lower your stream costs

Change data capture for DynamoDB Streams captures item modifications as stream records. You can configure DynamoDB Streams as an event source that triggers a Lambda function to process stream records, performing tasks such as making your data searchable by indexing it with Amazon OpenSearch Service, or aggregating data for analytics in Amazon Redshift.

If you know that a Lambda function only needs to process a subset of DynamoDB item changes, you can define an event filter to trigger Lambda functions on specific events instead of every stream event. This will reduce the number of Lambda invocations, and thus lower your Lambda costs.

Note

This cost optimization practice reduces costs for the Lambda service rather than DynamoDB directly.

How to evaluate streams usage to filter by events

Define a Lambda event filter pattern to only trigger Lambda functions for necessary events.
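
A minimal Python sketch of this pattern follows; the stream ARN, the function name process-item-updates, and the choice to process only MODIFY events are placeholder assumptions. The same filter can also be attached to an existing mapping with update_event_source_mapping.

    import json
    import boto3

    # Minimal sketch: create a Lambda event source mapping for a DynamoDB stream
    # that only forwards MODIFY events, reducing invocations for other changes.
    # The stream ARN and function name are placeholder assumptions.
    lambda_client = boto3.client("lambda")

    lambda_client.create_event_source_mapping(
        EventSourceArn="arn:aws-cn:dynamodb:cn-north-1:111122223333:table/MyTable/stream/2024-01-01T00:00:00.000",
        FunctionName="process-item-updates",
        StartingPosition="LATEST",
        BatchSize=100,
        FilterCriteria={
            "Filters": [
                # Only stream records whose eventName is MODIFY reach the function.
                {"Pattern": json.dumps({"eventName": ["MODIFY"]})}
            ]
        },
    )

Records that do not match the filter are discarded before invocation, so they do not add Lambda invocation or duration charges.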