
Programming Amazon DynamoDB with Amazon SDK for Java 2.x

This guide provides an orientation for programmers who want to use Amazon DynamoDB with Java. It covers concepts such as abstraction layers, configuration management, error handling, retry policies, and keep-alive management.

About Amazon SDK for Java 2.x

You can access DynamoDB from Java using the official Amazon SDK for Java. The SDK for Java has two versions: 1.x and 2.x. End of support for 1.x was announced on January 12, 2024; it entered maintenance mode on July 31, 2024, and reaches end of support on December 31, 2025. For new development, we highly recommend that you use 2.x, which was first released in 2018. This guide exclusively targets 2.x and focuses only on the parts of the SDK relevant to DynamoDB.

You can find more information about maintenance and support for SDKs in the Amazon SDK and Tools maintenance policy and Amazon SDKs and Tools version support matrix topics of the Amazon SDKs and Tools Reference Guide.

The Amazon SDK for Java 2.x is a major rewrite of the 1.x code base. It's built on Java 8 and later and supports modern Java features, such as non-blocking I/O. The SDK for Java 2.x also adds support for pluggable HTTP client implementations to provide more flexibility and configurability for network connections.

A noticeable change between the SDK for Java 1.x and the SDK for Java 2.x is the use of a new package name. The Java 1.x SDK uses the com.amazonaws package name, while the Java 2.x SDK uses software.amazon.awssdk. Similarly, Maven artifacts for the Java 1.x SDK use the com.amazonaws groupId, while Java 2.x SDK artifacts use the software.amazon.awssdk groupId.

Important

The Amazon SDK for Java 1.x has a DynamoDB package named com.amazonaws.services.dynamodbv2. The v2 in the package name doesn't indicate that it's for version 2 of the SDK for Java. Instead, v2 indicates that the package supports the second version of the DynamoDB low-level API rather than the original version of the low-level API.
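
For example, a minimal sketch showing where the 2.x low-level client lives (the 1.x equivalent, noted in the comment, resides under com.amazonaws.services.dynamodbv2):

// SDK for Java 2.x: the client class comes from the software.amazon.awssdk package tree.
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;

public class PackageNameExample {
    public static void main(String[] args) {
        // In the SDK for Java 1.x, the equivalent client class is
        // com.amazonaws.services.dynamodbv2.AmazonDynamoDB.
        try (DynamoDbClient client = DynamoDbClient.create()) {
            System.out.println(client.serviceName());
        }
    }
}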

Support for Java versions

The Amazon SDK for Java 2.x provides full support for long-term support (LTS) Java releases.

Getting started with Amazon SDK for Java 2.x

The following tutorial shows you how to use Apache Maven to define dependencies for the SDK for Java 2.x. The tutorial also shows you how to write code that connects to DynamoDB and lists the available DynamoDB tables. It is based on the Get started with the Amazon SDK for Java 2.x tutorial, edited to make calls to DynamoDB instead of Amazon S3.

Step 1: Set up for this tutorial

Before you begin this tutorial, you need the following:

  • Permission to access Amazon DynamoDB.

  • A Java development environment that is configured to access Amazon Web Services using single sign-on to the Amazon IAM Identity Center.

Use the instructions in Setup overview to get set up for this tutorial. After you have configured your development environment with single sign-on access for the Java SDK and you have an active Amazon access portal session, continue with Step 2 of this tutorial.

Step 2: Create the project

To create the project for this tutorial, you run a Maven command that prompts you for input on how to configure the project. After all input is entered and confirmed, Maven finishes building out the project by creating a pom.xml file and creating stub Java files.

  1. Open a terminal or command prompt window and navigate to a directory of your choice, for example, your Desktop or Home folder.

  2. Enter the following command at the terminal and press Enter.

    mvn archetype:generate \
        -DarchetypeGroupId=software.amazon.awssdk \
        -DarchetypeArtifactId=archetype-app-quickstart \
        -DarchetypeVersion=2.22.0
  3. Enter the value listed in the second column for each prompt.

    Prompt                                              Value to enter
    Define value for property 'service':                dynamodb
    Define value for property 'httpClient':             apache-client
    Define value for property 'nativeImage':            false
    Define value for property 'credentialProvider':     identity-center
    Define value for property 'groupId':                org.example
    Define value for property 'artifactId':             getstarted
    Define value for property 'version' 1.0-SNAPSHOT:   <Enter>
    Define value for property 'package' org.example:    <Enter>
  4. After the last value is entered, Maven lists the choices you made. Confirm by entering Y or re-enter values by entering N.

Maven creates the project folder named getstarted based on the artifactId value that you entered. Inside the getstarted folder, find a README.md file that you can review, a pom.xml file, and a src directory.

Maven builds the following directory tree.

getstarted
├── README.md
├── pom.xml
└── src
    ├── main
    │   ├── java
    │   │   └── org
    │   │       └── example
    │   │           ├── App.java
    │   │           ├── DependencyFactory.java
    │   │           └── Handler.java
    │   └── resources
    │       └── simplelogger.properties
    └── test
        └── java
            └── org
                └── example
                    └── HandlerTest.java

10 directories, 7 files

The following shows the contents of the pom.xml project file.

The dependencyManagement section contains a dependency on the Amazon SDK for Java 2.x bill of materials (BOM), and the dependencies section declares a dependency for Amazon DynamoDB. Specifying these dependencies causes Maven to include the relevant jar files on your Java classpath. By default, the Amazon SDK doesn't include the classes for all Amazon Web Services. For DynamoDB, declare a dependency on the dynamodb artifact if you use the low-level interface, or on dynamodb-enhanced if you use the high-level interface. If you don't include the relevant dependencies, your code won't compile. The project uses Java 1.8 because of the 1.8 value in the maven.compiler.source and maven.compiler.target properties.

<?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>org.example</groupId> <artifactId>getstarted</artifactId> <version>1.0-SNAPSHOT</version> <packaging>jar</packaging> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <maven.compiler.source>1.8</maven.compiler.source> <maven.compiler.target>1.8</maven.compiler.target> <maven.shade.plugin.version>3.2.1</maven.shade.plugin.version> <maven.compiler.plugin.version>3.6.1</maven.compiler.plugin.version> <exec-maven-plugin.version>1.6.0</exec-maven-plugin.version> <aws.java.sdk.version>2.22.0</aws.java.sdk.version> <-------- SDK version picked up from archetype version. <slf4j.version>1.7.28</slf4j.version> <junit5.version>5.8.1</junit5.version> </properties> <dependencyManagement> <dependencies> <dependency> <groupId>software.amazon.awssdk</groupId> <artifactId>bom</artifactId> <version>${aws.java.sdk.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>software.amazon.awssdk</groupId> <artifactId>dynamodb</artifactId> <-------- DynamoDB dependency <exclusions> <exclusion> <groupId>software.amazon.awssdk</groupId> <artifactId>netty-nio-client</artifactId> </exclusion> <exclusion> <groupId>software.amazon.awssdk</groupId> <artifactId>apache-client</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>software.amazon.awssdk</groupId> <artifactId>sso</artifactId> <-------- Required for identity center authentication. </dependency> <dependency> <groupId>software.amazon.awssdk</groupId> <artifactId>ssooidc</artifactId> <-------- Required for identity center authentication. </dependency> <dependency> <groupId>software.amazon.awssdk</groupId> <artifactId>apache-client</artifactId> <-------- HTTP client specified. <exclusions> <exclusion> <groupId>commons-logging</groupId> <artifactId>commons-logging</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-api</artifactId> <version>${slf4j.version}</version> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-simple</artifactId> <version>${slf4j.version}</version> </dependency> <!-- Needed to adapt Apache Commons Logging used by Apache HTTP Client to Slf4j to avoid ClassNotFoundException: org.apache.commons.logging.impl.LogFactoryImpl during runtime --> <dependency> <groupId>org.slf4j</groupId> <artifactId>jcl-over-slf4j</artifactId> <version>${slf4j.version}</version> </dependency> <!-- Test Dependencies --> <dependency> <groupId>org.junit.jupiter</groupId> <artifactId>junit-jupiter</artifactId> <version>${junit5.version}</version> <scope>test</scope> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>${maven.compiler.plugin.version}</version> </plugin> </plugins> </build> </project>

Step 3: Write the code

The following code shows the App class created by Maven. The main method is the entry point into the application, which creates an instance of the Handler class and then calls its sendRequest method.

package org.example;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class App {
    private static final Logger logger = LoggerFactory.getLogger(App.class);

    public static void main(String... args) {
        logger.info("Application starts");

        Handler handler = new Handler();
        handler.sendRequest();

        logger.info("Application ends");
    }
}

The DependencyFactory class created by Maven contains the dynamoDbClient factory method that builds and returns a DynamoDbClient instance. The DynamoDbClient instance uses an instance of the Apache-based HTTP client because you specified apache-client when Maven prompted you for which HTTP client to use.

The DependencyFactory is shown in the following code.

package org.example;

import software.amazon.awssdk.http.apache.ApacheHttpClient;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;

/**
 * The module containing all dependencies required by the {@link Handler}.
 */
public class DependencyFactory {

    private DependencyFactory() {}

    /**
     * @return an instance of DynamoDbClient
     */
    public static DynamoDbClient dynamoDbClient() {
        return DynamoDbClient.builder()
                .httpClientBuilder(ApacheHttpClient.builder())
                .build();
    }
}

The Handler class contains the main logic of your program. When an instance of Handler is created in the App class, the DependencyFactory furnishes the DynamoDbClient service client. Your code uses the DynamoDbClient instance to call the DynamoDB service.

Maven generates the following Handler class with a TODO comment. The next step in the tutorial replaces the TODO with code.

package org.example;

import software.amazon.awssdk.services.dynamodb.DynamoDbClient;

public class Handler {
    private final DynamoDbClient dynamoDbClient;

    public Handler() {
        dynamoDbClient = DependencyFactory.dynamoDbClient();
    }

    public void sendRequest() {
        // TODO: invoking the API calls using dynamoDbClient.
    }
}

To fill in the logic, replace the entire contents of the Handler class with the following code. The sendRequest method is filled in and the necessary imports are added.

The following code uses the DynamoDbClient instance to retrieve a list of existing tables. If tables exist for a given account and Region, the code uses the Logger instance to log the names of these tables.

package org.example;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.ListTablesResponse;

public class Handler {
    private final DynamoDbClient dynamoDbClient;

    public Handler() {
        dynamoDbClient = DependencyFactory.dynamoDbClient();
    }

    public void sendRequest() {
        Logger logger = LoggerFactory.getLogger(Handler.class);
        logger.info("calling the DynamoDB API to get a list of existing tables");

        ListTablesResponse response = dynamoDbClient.listTables();
        if (!response.hasTableNames()) {
            logger.info("No existing tables found for the configured account & region");
        } else {
            response.tableNames().forEach(tableName -> logger.info("Table: " + tableName));
        }
    }
}

Step 4: Build and run the application

After the project is created and contains the complete Handler class, build and run the application.

  1. Make sure that you have an active IAM Identity Center session. To do so, run the Amazon Command Line Interface command aws sts get-caller-identity and check the response. If you don't have an active session, see Sign in using the Amazon CLI for instructions.

  2. Open a terminal or command prompt window and navigate to your project directory getstarted.

  3. Use the following command to build your project:

    mvn clean package
  4. Use the following command to run the application.

    mvn exec:java -Dexec.mainClass="org.example.App"

The application logs the names of any existing DynamoDB tables for the configured account and Region.

Success

If your Maven project built and ran without error, then congratulations! You have successfully built your first Java application using the SDK for Java 2.x.

Cleanup

To clean up the resources you created during this tutorial, do the following:

  • Delete the project folder getstarted.

Using the Amazon SDK for Java 2.x documentation

The Amazon SDK for Java 2.x documentation covers all aspects of the SDK across all Amazon Web Services. Use the following topics as your starting points:

  • Migrate from version 1.x to 2.x — Includes a detailed explanation of the differences between 1.x and 2.x. This topic also contains instructions about how to use both major versions side-by-side.

  • DynamoDB guide for Java 2.x SDK — Shows you how to perform basic DynamoDB operations: creating a table, manipulating items, and retrieving items. These examples use the low-level interface. Java has several interfaces, as explained in the following section: Supported interfaces.

Tip

We recommend that after going through these topics in the Amazon SDK for Java 2.x documentation, you bookmark the Javadoc documentation. It covers all Amazon Web Services and will act as your main API reference.

Supported interfaces

The Amazon SDK for Java 2.x supports the following interfaces depending on the level of abstraction you want.

Low-level interface

The low-level interface provides a one-to-one mapping to the underlying service API. Every DynamoDB API is available through this interface. This means the low-level interface provides complete functionality, but it's often more verbose and complex to use. For example, you have to wrap strings with the .s() builder method and numbers with the .n() builder method of AttributeValue. The following example of PutItem inserts an item using the low-level interface.

import org.slf4j.*; import software.amazon.awssdk.http.crt.AwsCrtHttpClient; import software.amazon.awssdk.services.dynamodb.DynamoDbClient; import software.amazon.awssdk.services.dynamodb.model.*; import java.util.Map; public class PutItem { // Create a DynamoDB client with the default settings connected to the DynamoDB // endpoint in the default region based on the default credentials provider chain. private static final DynamoDbClient DYNAMODB_CLIENT = DynamoDbClient.create(); private static final Logger LOGGER = LoggerFactory.getLogger(PutItem.class); private void putItem() { PutItemResponse response = DYNAMODB_CLIENT.putItem(PutItemRequest.builder() .item(Map.of( "pk", AttributeValue.builder().s("123").build(), "sk", AttributeValue.builder().s("cart#123").build(), "item_data", AttributeValue.builder().s("YourItemData").build(), "inventory", AttributeValue.builder().n("500").build() // ... more attributes ... )) .returnConsumedCapacity(ReturnConsumedCapacity.TOTAL) .tableName("YourTableName") .build()); LOGGER.info("PutItem call consumed [" + response.consumedCapacity().capacityUnits() + "] Write Capacity Unites (WCU)"); } }

High-level interface

The high-level interface in Amazon SDK for Java 2.x is called the DynamoDB enhanced client. This interface provides a more idiomatic code authoring experience.

The enhanced client offers a way to map between client-side data classes and DynamoDB tables designed to store that data. You define the relationships between tables and their corresponding model classes in your code. Then, you can rely on the SDK to manage the data type manipulation. For more information about the enhanced client, see DynamoDB enhanced client API in the Amazon SDK for Java 2.x documentation.

The following example of PutItem uses the high-level interface. In this example, the @DynamoDbBean-annotated class YourItem is used to create a TableSchema, which allows instances of the class to be passed directly as input to the putItem() call.

import org.slf4j.*; import software.amazon.awssdk.enhanced.dynamodb.*; import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.*; import software.amazon.awssdk.enhanced.dynamodb.model.*; import software.amazon.awssdk.services.dynamodb.model.ReturnConsumedCapacity; public class DynamoDbEnhancedClientPutItem { private static final DynamoDbEnhancedClient ENHANCED_DYNAMODB_CLIENT = DynamoDbEnhancedClient.builder().build(); private static final DynamoDbTable<YourItem> DYNAMODB_TABLE = ENHANCED_DYNAMODB_CLIENT.table("YourTableName", TableSchema.fromBean(YourItem.class)); private static final Logger LOGGER = LoggerFactory.getLogger(PutItem.class); private void putItem() { PutItemEnhancedResponse<YourItem> response = DYNAMODB_TABLE.putItemWithResponse(PutItemEnhancedRequest.builder(YourItem.class) .item(new YourItem("123", "cart#123", "YourItemData", 500)) .returnConsumedCapacity(ReturnConsumedCapacity.TOTAL) .build()); LOGGER.info("PutItem call consumed [" + response.consumedCapacity().capacityUnits() + "] Write Capacity Unites (WCU)"); } @DynamoDbBean public static class YourItem { public YourItem() {} public YourItem(String pk, String sk, String itemData, int inventory) { this.pk = pk; this.sk = sk; this.itemData = itemData; this.inventory = inventory; } private String pk; private String sk; private String itemData; private int inventory; @DynamoDbPartitionKey public void setPk(String pk) { this.pk = pk; } public String getPk() { return pk; } @DynamoDbSortKey public void setSk(String sk) { this.sk = sk; } public String getSk() { return sk; } public void setItemData(String itemData) { this.itemData = itemData; } public String getItemData() { return itemData; } public void setInventory(int inventory) { this.inventory = inventory; } public int getInventory() { return inventory; } } }

The Amazon SDK for Java 1.x has its own high-level interface, which is often referred to by its main class, DynamoDBMapper. The high-level interface in the SDK for Java 2.x is published in a separate package (and Maven artifact) named software.amazon.awssdk.enhanced.dynamodb and is often referred to by its main class, DynamoDbEnhancedClient.

High-level interface using immutable data classes

The mapping feature of the DynamoDB enhanced client API also works with immutable data classes. An immutable class has only getters and requires a builder class that the SDK uses to create instances of the class. Immutability in Java is a commonly used style that allows developers to create classes that are side-effect free and, therefore, more predictable in their behavior in complex multi-threaded applications. Instead of using the @DynamoDbBean annotation as shown in the High-level interface example, immutable classes use the @DynamoDbImmutable annotation, which takes the builder class as its input.

The following example creates a table schema from the immutable class YourImmutableItem, which references its builder class, and then uses that schema for the PutItem API call.

import org.slf4j.*; import software.amazon.awssdk.enhanced.dynamodb.*; import software.amazon.awssdk.enhanced.dynamodb.model.*; import software.amazon.awssdk.services.dynamodb.model.ReturnConsumedCapacity; public class DynamoDbEnhancedClientImmutablePutItem { private static final DynamoDbEnhancedClient ENHANCED_DYNAMODB_CLIENT = DynamoDbEnhancedClient.builder().build(); private static final DynamoDbTable<YourImmutableItem> DYNAMODB_TABLE = ENHANCED_DYNAMODB_CLIENT.table("YourTableName", TableSchema.fromImmutableClass(YourImmutableItem.class)); private static final Logger LOGGER = LoggerFactory.getLogger(DynamoDbEnhancedClientImmutablePutItem.class); private void putItem() { PutItemEnhancedResponse<YourImmutableItem> response = DYNAMODB_TABLE.putItemWithResponse(PutItemEnhancedRequest.builder(YourImmutableItem.class) .item(YourImmutableItem.builder() .pk("123") .sk("cart#123") .itemData("YourItemData") .inventory(500) .build()) .returnConsumedCapacity(ReturnConsumedCapacity.TOTAL) .build()); LOGGER.info("PutItem call consumed [" + response.consumedCapacity().capacityUnits() + "] Write Capacity Unites (WCU)"); } }

The following example shows the immutable data class.

@DynamoDbImmutable(builder = YourImmutableItem.YourImmutableItemBuilder.class) class YourImmutableItem { private final String pk; private final String sk; private final String itemData; private final int inventory; public YourImmutableItem(YourImmutableItemBuilder builder) { this.pk = builder.pk; this.sk = builder.sk; this.itemData = builder.itemData; this.inventory = builder.inventory; } public static YourImmutableItemBuilder builder() { return new YourImmutableItemBuilder(); } @DynamoDbPartitionKey public String getPk() { return pk; } @DynamoDbSortKey public String getSk() { return sk; } public String getItemData() { return itemData; } public int getInventory() { return inventory; } static final class YourImmutableItemBuilder { private String pk; private String sk; private String itemData; private int inventory; private YourImmutableItemBuilder() {} public YourImmutableItemBuilder pk(String pk) { this.pk = pk; return this; } public YourImmutableItemBuilder sk(String sk) { this.sk = sk; return this; } public YourImmutableItemBuilder itemData(String itemData) { this.itemData = itemData; return this; } public YourImmutableItemBuilder inventory(int inventory) { this.inventory = inventory; return this; } public YourImmutableItem build() { return new YourImmutableItem(this); } } }

High-level interface using immutable data classes and third-party boilerplate generation libraries

The immutable data class example in the previous section requires some boilerplate code, such as the getters and the builder class. Third-party libraries, such as Project Lombok, can generate that type of boilerplate code for you.

The Use third-party libraries, such as Lombok topic in the Amazon SDK for Java 2.x documentation introduces how to use the Project Lombok library. Eliminating most of the boilerplate code reduces the amount of code needed to work with immutable data classes and the Amazon SDK, which improves both the productivity and the readability of your code.

The following example demonstrates how Project Lombok simplifies the code needed to use the DynamoDB enhanced client API.

import org.slf4j.*; import software.amazon.awssdk.enhanced.dynamodb.*; import software.amazon.awssdk.enhanced.dynamodb.model.*; import software.amazon.awssdk.services.dynamodb.model.ReturnConsumedCapacity; public class DynamoDbEnhancedClientImmutableLombokPutItem { private static final DynamoDbEnhancedClient ENHANCED_DYNAMODB_CLIENT = DynamoDbEnhancedClient.builder().build(); private static final DynamoDbTable<YourImmutableLombokItem> DYNAMODB_TABLE = ENHANCED_DYNAMODB_CLIENT.table("YourTableName", TableSchema.fromImmutableClass(YourImmutableLombokItem.class)); private static final Logger LOGGER = LoggerFactory.getLogger(DynamoDbEnhancedClientImmutableLombokPutItem.class); private void putItem() { PutItemEnhancedResponse<YourImmutableLombokItem> response = DYNAMODB_TABLE.putItemWithResponse(PutItemEnhancedRequest.builder(YourImmutableLombokItem.class) .item(YourImmutableLombokItem.builder() .pk("123") .sk("cart#123") .itemData("YourItemData") .inventory(500) .build()) .returnConsumedCapacity(ReturnConsumedCapacity.TOTAL) .build()); LOGGER.info("PutItem call consumed [" + response.consumedCapacity().capacityUnits() + "] Write Capacity Unites (WCU)"); } }

The following example shows the immutable data class.

import lombok.*;
import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.*;

@Builder
@DynamoDbImmutable(builder = YourImmutableLombokItem.YourImmutableLombokItemBuilder.class)
@Value
public class YourImmutableLombokItem {
    @Getter(onMethod_=@DynamoDbPartitionKey)
    String pk;
    @Getter(onMethod_=@DynamoDbSortKey)
    String sk;
    String itemData;
    int inventory;
}

The YourImmutableLombokItem class uses the following annotations provided by Project Lombok and the Amazon SDK:

  • @Builder — Produces a builder API for the data class. Provided by Project Lombok.

  • @DynamoDbImmutable — Marks the class as a DynamoDB mappable entity. Provided by the Amazon SDK.

  • @Value — The immutable variant of @Data; all fields are made private and final by default, and setters are not generated. This annotation is provided by Project Lombok.

Document Interface

The Document interface avoids the need to specify data type descriptors. The data types are implied by the semantics of the data itself. If you’re familiar with the Amazon SDK for Java 1.x Document interface, the Document interface in Amazon SDK for Java 2.x offers a similar but redesigned interface.

The following Document interface example shows the PutItem call expressed using the Document interface. The example also uses EnhancedDocument. To execute commands against a DynamoDB table using the enhanced document API, you must first associate the table with your document table schema to create a DynamoDbTable resource object. The document table schema builder requires the primary index keys and attribute converter providers.

You can use AttributeConverterProvider.defaultProvider() to convert document attributes of default types. You can change the overall default behavior with a custom AttributeConverterProvider implementation. You can also change the converter for a single attribute. The Amazon SDK documentation provides more details and examples about how to use custom converters. Their primary use is for attributes of your domain classes that don’t have a default converter available. Using a custom converter, you can provide the SDK with the needed information to write or read to DynamoDB.

import org.slf4j.*; import software.amazon.awssdk.enhanced.dynamodb.*; import software.amazon.awssdk.enhanced.dynamodb.document.EnhancedDocument; import software.amazon.awssdk.enhanced.dynamodb.model.*; import software.amazon.awssdk.services.dynamodb.model.ReturnConsumedCapacity; public class DynamoDbEnhancedDocumentClientPutItem { private static final DynamoDbEnhancedClient ENHANCED_DYNAMODB_CLIENT = DynamoDbEnhancedClient.builder().build(); private static final DynamoDbTable<EnhancedDocument> DYNAMODB_TABLE = ENHANCED_DYNAMODB_CLIENT.table("YourTableName", TableSchema.documentSchemaBuilder() .addIndexPartitionKey(TableMetadata.primaryIndexName(),"pk", AttributeValueType.S) .addIndexSortKey(TableMetadata.primaryIndexName(), "sk", AttributeValueType.S) .attributeConverterProviders(AttributeConverterProvider.defaultProvider()) .build()); private static final Logger LOGGER = LoggerFactory.getLogger(DynamoDbEnhancedDocumentClientPutItem.class); private void putItem() { PutItemEnhancedResponse<EnhancedDocument> response = DYNAMODB_TABLE.putItemWithResponse( PutItemEnhancedRequest.builder(EnhancedDocument.class) .item( EnhancedDocument.builder() .attributeConverterProviders(AttributeConverterProvider.defaultProvider()) .putString("pk", "123") .putString("sk", "cart#123") .putString("item_data", "YourItemData") .putNumber("inventory", 500) .build()) .returnConsumedCapacity(ReturnConsumedCapacity.TOTAL) .build()); LOGGER.info("PutItem call consumed [" + response.consumedCapacity().capacityUnits() + "] Write Capacity Unites (WCU)"); } }

You can also convert JSON documents to and from the native Amazon DynamoDB data types by using the EnhancedDocument utility methods, such as EnhancedDocument.fromJson() and toJson().
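
For example, a minimal sketch of round-tripping a JSON document with the default attribute converter providers (the JSON content is only illustrative):

import software.amazon.awssdk.enhanced.dynamodb.document.EnhancedDocument;

public class JsonConversionExample {
    public static void main(String[] args) {
        // Build an EnhancedDocument directly from a JSON string.
        EnhancedDocument document = EnhancedDocument.fromJson(
                "{\"pk\": \"123\", \"sk\": \"cart#123\", \"inventory\": 500}");

        // Read a typed attribute and convert the document back to JSON.
        System.out.println(document.getString("pk"));
        System.out.println(document.toJson());
    }
}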

Comparing interfaces with a Query example

This section shows the same Query call expressed using the various interfaces. These queries use several request parameters to fine-tune the query results:

  • You must specify the partition key completely, because a Query targets exactly one partition key value.

  • The sort key has a key condition expression using begins_with so that only cart items are targeted with this query.

  • We limit the query to return a maximum of 100 items by using limit().

  • We set scanIndexForward to false. By default, results are returned in ascending UTF-8 byte order of the sort key, which usually means the cart item with the lowest number is returned first. Setting scanIndexForward to false reverses the order so that the cart item with the highest number is returned first.

  • We apply a filter to remove any result that doesn't match the criteria. The data being filtered consumes read capacity whether or not the item matches the filter.

Example Query using low-level interface

The following example queries a table named YourTableName using a keyConditionExpression, which limits the query to a specific partition key value and to sort key values that begin with a specific prefix. These key conditions limit the amount of data read from DynamoDB. Finally, the query applies a filter on the data retrieved from DynamoDB using a filterExpression.

import org.slf4j.*; import software.amazon.awssdk.services.dynamodb.DynamoDbClient; import software.amazon.awssdk.services.dynamodb.model.*; import java.util.Map; public class Query { // Create a DynamoDB client with the default settings connected to the DynamoDB // endpoint in the default region based on the default credentials provider chain. private static final DynamoDbClient DYNAMODB_CLIENT = DynamoDbClient.builder().build(); private static final Logger LOGGER = LoggerFactory.getLogger(Query.class); private static void query() { QueryResponse response = DYNAMODB_CLIENT.query(QueryRequest.builder() .expressionAttributeNames(Map.of("#name", "name")) .expressionAttributeValues(Map.of( ":pk_val", AttributeValue.fromS("id#1"), ":sk_val", AttributeValue.fromS("cart#"), ":name_val", AttributeValue.fromS("SomeName"))) .filterExpression("#name = :name_val") .keyConditionExpression("pk = :pk_val AND begins_with(sk, :sk_val)") .limit(100) .scanIndexForward(false) .tableName("YourTableName") .build()); LOGGER.info("nr of items: " + response.count()); LOGGER.info("First item pk: " + response.items().get(0).get("pk")); LOGGER.info("First item sk: " + response.items().get(0).get("sk")); } }
Example Query using Document interface

The following example queries a table named YourTableName using the Document interface.

import org.slf4j.Logger; import org.slf4j.LoggerFactory; import software.amazon.awssdk.enhanced.dynamodb.*; import software.amazon.awssdk.enhanced.dynamodb.document.EnhancedDocument; import software.amazon.awssdk.enhanced.dynamodb.model.*; import java.util.Map; public class DynamoDbEnhancedDocumentClientQuery { // Create a DynamoDB client with the default settings connected to the DynamoDB // endpoint in the default region based on the default credentials provider chain. private static final DynamoDbEnhancedClient ENHANCED_DYNAMODB_CLIENT = DynamoDbEnhancedClient.builder().build(); private static final DynamoDbTable<EnhancedDocument> DYNAMODB_TABLE = ENHANCED_DYNAMODB_CLIENT.table("YourTableName", TableSchema.documentSchemaBuilder() .addIndexPartitionKey(TableMetadata.primaryIndexName(),"pk", AttributeValueType.S) .addIndexSortKey(TableMetadata.primaryIndexName(), "sk", AttributeValueType.S) .attributeConverterProviders(AttributeConverterProvider.defaultProvider()) .build()); private static final Logger LOGGER = LoggerFactory.getLogger(DynamoDbEnhancedDocumentClientQuery.class); private void query() { PageIterable<EnhancedDocument> response = DYNAMODB_TABLE.query(QueryEnhancedRequest.builder() .filterExpression(Expression.builder() .expression("#name = :name_val") .expressionNames(Map.of("#name", "name")) .expressionValues(Map.of(":name_val", AttributeValue.fromS("SomeName"))) .build()) .limit(100) .queryConditional(QueryConditional.sortBeginsWith(Key.builder() .partitionValue("id#1") .sortValue("cart#") .build())) .scanIndexForward(false) .build()); LOGGER.info("nr of items: " + response.items().stream().count()); LOGGER.info("First item pk: " + response.items().iterator().next().getString("pk")); LOGGER.info("First item sk: " + response.items().iterator().next().getString("sk")); } }
Example Query using high-level interface

The following example queries a table named YourTableName using the DynamoDB enhanced client API.

import org.slf4j.*; import software.amazon.awssdk.enhanced.dynamodb.*; import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.*; import software.amazon.awssdk.enhanced.dynamodb.model.*; import software.amazon.awssdk.services.dynamodb.model.AttributeValue; import java.util.Map; public class DynamoDbEnhancedClientQuery { private static final DynamoDbEnhancedClient ENHANCED_DYNAMODB_CLIENT = DynamoDbEnhancedClient.builder().build(); private static final DynamoDbTable<YourItem> DYNAMODB_TABLE = ENHANCED_DYNAMODB_CLIENT.table("YourTableName", TableSchema.fromBean(DynamoDbEnhancedClientQuery.YourItem.class)); private static final Logger LOGGER = LoggerFactory.getLogger(DynamoDbEnhancedClientQuery.class); private void query() { PageIterable<YourItem> response = DYNAMODB_TABLE.query(QueryEnhancedRequest.builder() .filterExpression(Expression.builder() .expression("#name = :name_val") .expressionNames(Map.of("#name", "name")) .expressionValues(Map.of(":name_val", AttributeValue.fromS("SomeName"))) .build()) .limit(100) .queryConditional(QueryConditional.sortBeginsWith(Key.builder() .partitionValue("id#1") .sortValue("cart#") .build())) .scanIndexForward(false) .build()); LOGGER.info("nr of items: " + response.items().stream().count()); LOGGER.info("First item pk: " + response.items().iterator().next().getPk()); LOGGER.info("First item sk: " + response.items().iterator().next().getSk()); } @DynamoDbBean public static class YourItem { public YourItem() {} public YourItem(String pk, String sk, String name) { this.pk = pk; this.sk = sk; this.name = name; } private String pk; private String sk; private String name; @DynamoDbPartitionKey public void setPk(String pk) { this.pk = pk; } public String getPk() { return pk; } @DynamoDbSortKey public void setSk(String sk) { this.sk = sk; } public String getSk() { return sk; } public void setName(String name) { this.name = name; } public String getName() { return name; } } }
High-level interface using immutable data classes

When you perform a Query with the high-level immutable data classes, the code is the same as the high-level interface example except for the construction of the entity class YourItem or YourImmutableItem. For more information, see the PutItem example.

High-level interface using immutable data classes and third-party boilerplate generation libraries

When you perform a Query with the high-level immutable data classes, the code is the same as the high-level interface example except for the construction of the entity class YourItem or YourImmutableLombokItem. For more information, see the PutItem example.

Additional code examples

You can also refer to code example repositories that provide additional examples of using DynamoDB with the SDK for Java 2.x.

Synchronous and asynchronous programming

The Amazon SDK for Java 2.x provides both synchronous and asynchronous clients for Amazon Web Services, such as DynamoDB.

The DynamoDbClient and DynamoDbEnhancedClient classes provide synchronous methods that block your thread’s execution until the client receives a response from the service. This client is the most straightforward way of interacting with DynamoDB if you have no need for asynchronous operations.

The DynamoDbAsyncClient and DynamoDbEnhancedAsyncClient classes provide asynchronous methods that return immediately, giving control back to the calling thread without waiting for a response. The non-blocking client has the advantage of allowing high concurrency across a few threads, which provides efficient handling of I/O requests with minimal compute resources. This improves throughput and responsiveness.

The Amazon SDK for Java 2.x uses the native support for non-blocking I/O. The Amazon SDK for Java 1.x had to simulate non-blocking I/O.

Because the asynchronous methods return before a response is available, you need a way to get the response when it's ready. The asynchronous methods in the Amazon SDK for Java return a CompletableFuture object that will contain the result of the asynchronous operation. When you call get() or join() on these CompletableFuture objects, your code blocks until the result is available. If you block immediately after making the request, the behavior is similar to a plain synchronous call.
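
The following is a minimal sketch of this pattern using the low-level asynchronous client; the callback simply prints the table names once the response arrives.

import java.util.concurrent.CompletableFuture;
import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;
import software.amazon.awssdk.services.dynamodb.model.ListTablesResponse;

public class AsyncExample {
    public static void main(String[] args) {
        DynamoDbAsyncClient asyncClient = DynamoDbAsyncClient.create();

        // The call returns immediately; the future completes when the response arrives.
        CompletableFuture<ListTablesResponse> future = asyncClient.listTables();

        // Attach a callback that runs when the response is available.
        CompletableFuture<Void> done = future
                .thenAccept(response -> response.tableNames().forEach(System.out::println));

        // join() blocks the current thread until the callback has run.
        done.join();
        asyncClient.close();
    }
}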

The Amazon SDK documentation for asynchronous programming provides more details about how to leverage this asynchronous style.

HTTP clients

Every service client is backed by an HTTP client that handles the communication with Amazon Web Services. You can plug in alternative HTTP clients, choosing one that has the characteristics that best fit your application. Some are more lightweight; some have more configuration options.

Some HTTP clients support only synchronous use, while others support only asynchronous use. The Amazon SDK documentation provides a flowchart that helps you select the optimal HTTP client for your workload.

The following list presents some of the possible HTTP clients:

Apache-based HTTP client

ApacheHttpClient supports synchronous service clients and is the default HTTP client for synchronous use. For information about configuring the ApacheHttpClient, see Configure the Apache-based HTTP client in the Amazon SDK for Java 2.x documentation.

URLConnection-based HTTP client

The UrlConnectionHttpClient is another option for synchronous clients. It loads more quickly than the Apache-based HTTP client, but has fewer features. For information about configuring the UrlConnectionHttpClient, see Configure the URLConnection-based HTTP client.

Netty-based HTTP client

NettyNioAsyncHttpClient supports async clients and is the default choice for async use. For information about configuring the NettyNioAsyncHttpClient, see Configure the Netty-based HTTP client.

Amazon CRT-based HTTP client

The newer AwsCrtHttpClient and AwsCrtAsyncHttpClient from the Amazon Common Runtime (CRT) libraries are additional options that support synchronous and asynchronous clients, respectively. Compared to other HTTP clients, Amazon CRT offers:

  • Faster SDK startup time

  • Smaller memory footprint

  • Reduced latency time

  • Connection health management

  • DNS load balancing

For information about configuring AwsCrtHttpClient and AwsCrtAsyncHttpClient, see Configure the Amazon CRT-based HTTP clients.

The Amazon CRT-based HTTP client isn't the default because changing the default would break backward compatibility for existing applications. However, for DynamoDB we recommend that you use the Amazon CRT-based HTTP client for both synchronous and asynchronous use.

For an introduction about the Amazon CRT-based HTTP client, see Announcing availability of the Amazon CRT HTTP Client in the Amazon SDK for Java 2.x.
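
As a minimal sketch, the following shows DynamoDB clients built on the CRT-based HTTP clients, assuming the aws-crt-client artifact is on your classpath.

import software.amazon.awssdk.http.crt.AwsCrtAsyncHttpClient;
import software.amazon.awssdk.http.crt.AwsCrtHttpClient;
import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;

public class CrtClients {
    // Synchronous client backed by the CRT-based HTTP client.
    static final DynamoDbClient SYNC_CLIENT = DynamoDbClient.builder()
            .httpClientBuilder(AwsCrtHttpClient.builder())
            .build();

    // Asynchronous client backed by the CRT-based asynchronous HTTP client.
    static final DynamoDbAsyncClient ASYNC_CLIENT = DynamoDbAsyncClient.builder()
            .httpClientBuilder(AwsCrtAsyncHttpClient.builder())
            .build();
}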

Configuring an HTTP client

When configuring a client, you can provide various configuration options, including:

  • Setting timeouts for different aspects of API calls

  • Controlling if TCP Keep-Alive is enabled

  • Controlling the retry policy when encountering errors

  • Specifying execution attributes that ExecutionInterceptor instances can modify. Execution interceptors are code that intercepts the execution of your API requests and responses, which enables you to perform tasks such as publishing metrics and modifying requests in flight.

  • Adding or manipulating HTTP headers

  • Enabling the tracking of client-side performance metrics. Using this feature helps you to collect metrics about the service clients in your application and analyze the output in Amazon CloudWatch.

  • Specifying an alternate executor service to be used for scheduling tasks, such as async retry attempts and timeout tasks

You control the configuration by providing a ClientOverrideConfiguration object to the service client Builder class. You'll see this in some code examples in the following sections.

The ClientOverrideConfiguration provides standard configuration choices. The different pluggable HTTP clients have implementation-specific configuration possibilities as well. Each of these clients also maintains its own documentation:

Timeout configuration

You can adjust the client configuration to control various timeouts related to the service calls. DynamoDB provides lower latencies compared to other Amazon Web Services. Therefore, you might want to adjust these properties to lower timeout values so you can fail fast if there’s a networking issue.

You can customize the latency related behavior using ClientOverrideConfiguration on the DynamoDB client or by changing detailed configuration options on the underlying HTTP client implementation.

You can configure the following impactful properties using ClientOverrideConfiguration:

  • apiCallAttemptTimeout—The amount of time to wait for a single attempt for an HTTP request to complete before giving up and timing out.

  • apiCallTimeout—The amount of time to allow the client to complete the execution of an API call. This includes the request handler execution that consists of all the HTTP requests including retries.

The Amazon SDK for Java 2.x provides default values for some timeout options, such as connection timeout and socket timeouts. The SDK doesn't provide default values for API call timeouts or individual API call attempt timeouts. If these timeouts aren't set in the ClientOverrideConfiguration, the SDK will use the socket timeout value instead for the overall API call timeout. The socket timeout has a default value of 30 seconds.
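
As a minimal sketch, setting both timeouts explicitly on the client looks like the following; the 500 millisecond and 3 second values are illustrative, not recommendations.

import java.time.Duration;
import software.amazon.awssdk.core.client.config.ClientOverrideConfiguration;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;

public class TimeoutExample {
    // Fail a single HTTP attempt after 500 ms and the overall call (including retries) after 3 seconds.
    static final DynamoDbClient CLIENT = DynamoDbClient.builder()
            .overrideConfiguration(ClientOverrideConfiguration.builder()
                    .apiCallAttemptTimeout(Duration.ofMillis(500))
                    .apiCallTimeout(Duration.ofSeconds(3))
                    .build())
            .build();
}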

RetryMode

Another configuration related to the timeout configuration that you should consider is the RetryMode configuration object. This configuration object contains a collection of retry behaviors.

The SDK for Java 2.x supports the following retry modes:

  • legacy — The default retry mode if you don't explicitly change it. This retry mode is specific to the Java SDK and characterized by:

    • Up to three retries, or more for certain services; DynamoDB, for example, gets up to 8 retries.

  • standard — Named standard because it’s more consistent with other Amazon SDKs. This mode waits for a random amount of time ranging from 0 ms to 1,000 ms for the first retry. If another retry is necessary, it picks another random time from 0 ms to 1,000 ms and multiplies it by two. If an additional retry is necessary, it does the same random pick multiplied by four, and so on. Each wait is capped at 20 seconds. This mode performs retries on more detected failure conditions than the legacy mode. For DynamoDB, it performs up to three total attempts unless you override this with numRetries.

  • adaptive — Builds on the standard mode and dynamically limits the rate of Amazon requests to maximize the success rate. This may come at the expense of request latency. We don't recommend the adaptive retry mode when predictable latency is important.

You can find an expanded definition of these retry modes in the Retry behavior topic in the Amazon SDKs and Tools Reference Guide.
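
As a minimal sketch, the following shows one way to select the standard retry mode when building the client; the retryPolicy(RetryMode) overload shown here is assumed to be available in your SDK version.

import software.amazon.awssdk.core.client.config.ClientOverrideConfiguration;
import software.amazon.awssdk.core.retry.RetryMode;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;

public class RetryModeExample {
    // Build a DynamoDB client that uses the standard retry mode instead of legacy.
    static final DynamoDbClient CLIENT = DynamoDbClient.builder()
            .overrideConfiguration(ClientOverrideConfiguration.builder()
                    .retryPolicy(RetryMode.STANDARD)
                    .build())
            .build();
}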

Retry policies

All RetryMode configurations have a RetryPolicy, which is built based on one or more RetryCondition configurations. The TokenBucketRetryCondition is especially important to the retry behavior of the DynamoDB SDK client implementation. This condition limits the number of retries made by the SDK using a token bucket algorithm. Depending on the selected retry mode, throttling exceptions may or may not subtract tokens from the token bucket.

When a client encounters a retryable error, such as a throttling exception or a temporary server error, the SDK will automatically retry the request. You can control how many times and how quickly these retries happen.

When configuring a client, you can provide a RetryPolicy that supports the following parameters:

  • numRetries—The maximum number of retries that should be applied before a request is considered to be failed. Default value is 8 regardless of the retry mode you use.

    Warning

    Make sure that you only change this default value after due consideration.

  • backoffStrategy—The BackoffStrategy to apply to the retries, with FullJitterBackoffStrategy being the default strategy. This strategy performs an exponential delay between additional retries based on the current number of retries, a base delay, and a maximum backoff time. It then adds jitter to provide a bit of randomness. The base delay used in the exponential delay is 25 ms regardless of the retry mode.

  • retryCondition—The RetryCondition determines whether a request should be retried at all. By default, it retries on a specific set of HTTP status codes and exceptions that are considered retryable. For most situations, the default configuration should be sufficient.

The following code provides an alternative retry policy. It specifies to allow a total of five retries (six total requests). The first retry should come after approximately a 100ms delay, with each additional retry doubling that time exponentially, up to a maximum of one second delay.

DynamoDbClient client = DynamoDbClient.builder()
        .overrideConfiguration(ClientOverrideConfiguration.builder()
                .retryPolicy(RetryPolicy.builder()
                        .backoffStrategy(FullJitterBackoffStrategy.builder()
                                .baseDelay(Duration.ofMillis(100))
                                .maxBackoffTime(Duration.ofSeconds(1))
                                .build())
                        .numRetries(5)
                        .build())
                .build())
        .build();

DefaultsMode

The timeout properties not managed by ClientOverrideConfiguration and the RetryMode are typically not configured explicitly. Instead, their configuration is set implicitly by specifying a DefaultsMode.

Support for DefaultsMode was introduced in Amazon SDK for Java 2.x (version 2.17.102 or later) to provide a set of default values for common configurable settings, such as HTTP communication settings, retry behavior, service Regional endpoint settings, and, potentially, any SDK-related configuration. When you use this feature, you can get new configuration defaults tailored to common usage scenarios.

The default modes are standardized across all of the Amazon SDKs. The SDK for Java 2.x supports the following default modes:

  • legacy — Provides default settings that vary by SDK and existed before DefaultsMode was established.

  • standard — Provides default non-optimized settings for most scenarios.

  • in-region — Builds on the standard mode and includes settings tailored for applications that call Amazon Web Services from within the same Amazon Web Services Region.

  • cross-region — Builds on the standard mode and includes settings with high timeouts for applications that call Amazon Web Services in a different Amazon Web Services Region.

  • mobile — Builds on the standard mode and includes settings with high timeouts tailored for mobile applications with higher latencies.

  • auto — Builds on the standard mode and includes experimental features. The SDK attempts to discover the runtime environment to determine the appropriate settings automatically. The auto-detection is heuristics-based and does not provide 100% accuracy. If the runtime environment can't be determined, standard mode is used. The auto-detection might query Instance metadata and user data, which might introduce latency. If startup latency is critical to your application, we recommend choosing an explicit DefaultsMode instead.

You can configure the defaults mode in the following ways:

  • Directly on a client through AwsClientBuilder.Builder#defaultsMode(DefaultsMode).

  • On a configuration profile through the defaults_mode profile file property.

  • Globally through the aws.defaultsMode system property.

  • Globally through the AWS_DEFAULTS_MODE environment variable.

Note

For any mode other than legacy, the vended default values might change as best practices evolve. Therefore, we encourage you to perform testing when upgrading the SDK if you are using a mode other than legacy.

The Smart configuration defaults in the Amazon SDKs and Tools Reference Guide provides a list of configuration properties and their default values in the different default modes.

You choose the defaults mode value based on your application's characteristics and the Amazon Web Service your application interacts with.

These values are configured with a broad selection of Amazon Web Services in mind. For a typical DynamoDB deployment where both your DynamoDB table(s) and application are deployed in one Region, the in-region defaults mode is most relevant among the standard default modes.

Example DynamoDB SDK client configuration tuned for low-latency calls

The following example adjusts the timeouts to lower values for an expected low-latency DynamoDB call.

DynamoDbAsyncClient asyncClient = DynamoDbAsyncClient.builder()
        .defaultsMode(DefaultsMode.IN_REGION)
        .httpClientBuilder(AwsCrtAsyncHttpClient.builder())
        .overrideConfiguration(ClientOverrideConfiguration.builder()
                .apiCallTimeout(Duration.ofSeconds(3))
                .apiCallAttemptTimeout(Duration.ofMillis(500))
                .build())
        .build();

The individual HTTP client implementations may provide you with even more granular control over timeout and connection usage behavior. For example, in the case of the Amazon CRT-based client, you can enable ConnectionHealthConfiguration, which allows the client to actively monitor the health of the connections being used. For more details, see the documentation for the Amazon SDK for Java 2.x.

Keep-Alive configuration

Enabling keep-alive can reduce latencies by reusing connections. There are two different kinds of keep-alive: HTTP Keep-Alive and TCP Keep-Alive.

  • HTTP Keep-Alive attempts to maintain the HTTPS connection between the client and server so later requests can reuse that connection. This skips the heavyweight HTTPS authentication on later requests. HTTP Keep-Alive is enabled by default on all clients.

  • TCP Keep-Alive requests the underlying operating system to send small packets over the socket connection to provide extra assurance the socket is kept alive and to immediately detect any drops. This ensures a later request won't spend time trying to use a dropped socket. TCP Keep-Alive is disabled by default on all clients. You may want to enable it. The following code examples show how to do this with each HTTP client. When enabled for all non-CRT based HTTP clients, the actual Keep-Alive mechanism is dependent on the operating system. Therefore, you must configure additional TCP Keep-Alive values, such as timeout and number of packets, through the operating system. You can do this using sysctl on a Linux or Mac machine, or Registry values on a Windows machine.

Example to enable TCP Keep-Alive on an Apache-based HTTP client
DynamoDbClient client = DynamoDbClient.builder()
        .httpClientBuilder(ApacheHttpClient.builder().tcpKeepAlive(true))
        .build();
URLConnection-based HTTP client

The URLConnection-based HTTP client, which is built on HttpURLConnection, doesn't provide a mechanism to enable TCP Keep-Alive.

Example to enable TCP Keep-Alive on a Netty-based HTTP client
DynamoDbAsyncClient client = DynamoDbAsyncClient.builder()
        .httpClientBuilder(NettyNioAsyncHttpClient.builder().tcpKeepAlive(true))
        .build();
Example to enable TCP Keep-Alive on an Amazon CRT-based HTTP client

With the Amazon CRT-based HTTP client, you can enable TCP keep-alive and control the duration.

DynamoDbClient client = DynamoDbClient.builder()
        .httpClientBuilder(AwsCrtHttpClient.builder()
                .tcpKeepAliveConfiguration(TcpKeepAliveConfiguration.builder()
                        .keepAliveInterval(Duration.ofSeconds(50))
                        .keepAliveTimeout(Duration.ofSeconds(5))
                        .build()))
        .build();

When using the asynchronous DynamoDB client, you can enable TCP Keep-Alive as shown in the following code.

DynamoDbAsyncClient client = DynamoDbAsyncClient.builder()
        .httpClientBuilder(AwsCrtAsyncHttpClient.builder()
                .tcpKeepAliveConfiguration(TcpKeepAliveConfiguration.builder()
                        .keepAliveInterval(Duration.ofSeconds(50))
                        .keepAliveTimeout(Duration.ofSeconds(5))
                        .build()))
        .build();

Error handling

When it comes to exception handling, the Amazon SDK for Java 2.x uses runtime (unchecked) exceptions.

The base exception covering all SDK service exceptions is SdkServiceException, which extends the Java unchecked RuntimeException. If you catch this, you'll catch all service exceptions thrown by the SDK.

SdkServiceException has a subclass called AwsServiceException. This subclass indicates any issue in communication with the Amazon Web Service. It has a subclass called DynamoDbException, which indicates an issue in communication with DynamoDB. If you catch this, you'll catch all exceptions related to DynamoDB but no other SDK exceptions.

There are more specific exception types under DynamoDbException. Some of these exception types apply to control-plane operations such as TableAlreadyExistsException. Others apply to data-plane operations. The following is an example of a common data-plane exception:

  • ConditionalCheckFailedException—You specified a condition in the request that evaluated to false. For example, you might have tried to perform a conditional update on an item, but the actual value of the attribute did not match the expected value in the condition. A request that fails in this manner will not be retried.

Other situations don’t have a specific exception defined. For example, when your requests get throttled the specific ProvisionedThroughputExceededException might get thrown, while in other cases the more generic DynamoDbException is thrown. In either case you can determine if the exception was caused by throttling by checking if isThrottlingException() returns true.

Depending on your application needs, you can catch all AwsServiceException or DynamoDbException instances. But often you need different behavior in different situations. The logic to deal with a condition check failure will be different compared to dealing with throttling. Define which exceptional paths you want to deal with and make sure to test the alternative paths. This helps you make sure you’re able to deal with all relevant scenarios.
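
The following sketch illustrates one way to separate those paths when calling PutItem; the table name, condition, and recovery actions are placeholders.

import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.ConditionalCheckFailedException;
import software.amazon.awssdk.services.dynamodb.model.DynamoDbException;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;

public class ErrorHandlingExample {
    private static final DynamoDbClient CLIENT = DynamoDbClient.create();

    void putWithErrorHandling() {
        try {
            CLIENT.putItem(PutItemRequest.builder()
                    .tableName("YourTableName")
                    .item(Map.of("pk", AttributeValue.fromS("123")))
                    .conditionExpression("attribute_not_exists(pk)")
                    .build());
        } catch (ConditionalCheckFailedException e) {
            // The condition evaluated to false; for example, the item already exists.
            // The SDK doesn't retry this request.
        } catch (DynamoDbException e) {
            if (e.isThrottlingException()) {
                // Throttled: consider backing off and retrying, or shedding load.
            } else {
                // Any other DynamoDB service error.
                throw e;
            }
        }
    }
}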

The Error handling with DynamoDB topic lists some common errors you might encounter. You can also refer to Common Errors for a list of errors shared across APIs. For a particular API call, you can also find the exact errors possible in its documentation, such as for the Query API. For information about handling exceptions, see Exception handling for the Amazon SDK for Java 2.x.

Amazon request ID

Each request includes a request ID, which can be useful if you're working with Amazon Web Services Support to diagnose an issue. Each exception derived from SdkServiceException has a requestId() method that you can use to retrieve the request ID.
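
For example, a minimal sketch of surfacing the request ID when a call fails:

import software.amazon.awssdk.core.exception.SdkServiceException;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;

public class RequestIdExample {
    private static final DynamoDbClient CLIENT = DynamoDbClient.create();

    void listTablesWithRequestId() {
        try {
            CLIENT.listTables();
        } catch (SdkServiceException e) {
            // Log the request ID so it can be shared with Amazon Web Services Support.
            System.err.println("Request failed, request ID: " + e.requestId());
            throw e;
        }
    }
}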

Logging

Using the logging provided by the SDK can be useful both for catching any important messages from the client libraries and for more in-depth debugging purposes. Loggers are hierarchical and the SDK uses software.amazon.awssdk as its root logger. The level may be configured with one of TRACE, DEBUG, INFO, WARN, ERROR, ALL, or OFF. The configured level will apply to that logger and down into the logger hierarchy.

The Amazon SDK for Java 2.x leverages the Simple Logging Façade for Java (SLF4J) for its logging, which acts as an abstraction layer around other loggers. This enables you to plug in the logger that you prefer. For instructions about plugging in loggers, see the SLF4J user manual.

Each logger has a particular behavior. The Log4j 2.x logger by default creates a ConsoleAppender which appends log events to System.out and defaults to the ERROR log level.

The SimpleLogger logger included in SLF4J outputs by default to System.err and defaults to the INFO log level.

We recommend that you set the level to WARN for software.amazon.awssdk for any production deployments to catch any important messages from the SDK's client libraries while limiting the output quantity.

If SLF4J can't find a supported logger on the classpath (no SLF4J binding), SLF4J defaults to a no-operation implementation. This implementation prints a message to System.err explaining that SLF4J couldn't find a logger implementation on the classpath. To prevent this situation, you need to add a logger implementation. To do this, you can add a dependency in your Apache Maven pom.xml on an artifact such as org.slf4j:slf4j-simple or org.apache.logging.log4j:log4j-slf4j2-impl.

The documentation for Amazon SDK for Java 2.x explains how to configure the logging in the SDK including adding logging dependencies to your application configuration. For information, see Logging with the SDK for Java 2.x.

The following configuration in the Log4j2.xml file shows how to adjust the logging behavior if you’re using the Apache Log4j 2 logger. This configuration sets the root logger level to WARN. This log level will be inherited by all loggers in the hierarchy including the software.amazon.awssdk logger.

By default, the output goes to System.out. In the following example, we also override the default Log4j console appender to apply a tailored Log4j PatternLayout.

Example of Log4j2.xml configuration file

The following configuration will log messages at the ERROR and WARN levels to the console for all logger hierarchies.

<Configuration status="WARN">
    <Appenders>
        <Console name="ConsoleAppender" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{YYYY-MM-dd HH:mm:ss} [%t] %-5p %c:%L - %m%n" />
        </Console>
    </Appenders>
    <Loggers>
        <Root level="WARN">
            <AppenderRef ref="ConsoleAppender"/>
        </Root>
    </Loggers>
</Configuration>

Amazon request ID logging

You can find RequestIds within exceptions when something goes wrong. However, if you want the RequestIds for requests that aren't generating exceptions, you can use logging.

Request IDs are output at the DEBUG level by the software.amazon.awssdk.request logger. The following example extends the previous configuration example to keep the root logger level at ERROR, set the software.amazon.awssdk logger to WARN, and set the software.amazon.awssdk.request logger to DEBUG. Setting these levels helps you catch the request IDs and other request-related details, such as the endpoint and status code.

<Configuration status="WARN">
    <Appenders>
        <Console name="ConsoleAppender" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{YYYY-MM-dd HH:mm:ss} [%t] %-5p %c:%L - %m%n" />
        </Console>
    </Appenders>
    <Loggers>
        <Root level="ERROR">
            <AppenderRef ref="ConsoleAppender"/>
        </Root>
        <Logger name="software.amazon.awssdk" level="WARN" />
        <Logger name="software.amazon.awssdk.request" level="DEBUG" />
    </Loggers>
</Configuration>

Here is an example of the log output:

2022-09-23 16:02:08 [main] DEBUG software.amazon.awssdk.request:85 - Sending Request: DefaultSdkHttpFullRequest(httpMethod=POST, protocol=https, host=dynamodb.us-east-1.amazonaws.com, encodedPath=/, headers=[amz-sdk-invocation-id, Content-Length, Content-Type, User-Agent, X-Amz-Target], queryParameters=[])
2022-09-23 16:02:08 [main] DEBUG software.amazon.awssdk.request:85 - Received successful response: 200, Request ID: QS9DUMME2NHEDH8TGT9N5V53OJVV4KQNSO5AEMVJF66Q9ASUAAJG, Extended Request ID: not available

Pagination

Some requests, such as Query and Scan, limit the size of the data returned on a single request and require you to make repeated requests to pull subsequent pages.

You can control the maximum number of items to be read for each page with the Limit parameter. For example, you can use the Limit parameter to retrieve only the last 10 items. This limit is how many items should be read from the table before any filtering is applied. There's no way to specify you want exactly 10 after filtering. You can only control the pre-filtered count and check client-side when you've actually retrieved 10 items. Regardless of the limit, every response always has a maximum size of 1 MB.

In the API response, a LastEvaluatedKey might be included, which indicates that the response ended because it hit a count limit or a size limit. This key is the last key evaluated for that response. When interacting directly with the API, you can retrieve this LastEvaluatedKey and pass it to a follow-up call as ExclusiveStartKey to read the next chunk from that starting point. If no LastEvaluatedKey is returned, there are no more items matching the Query or Scan API call.

The following example uses the low-level interface to page through the items matching the keyConditionExpression parameter, reading up to 100 items per request.

QueryRequest.Builder queryRequestBuilder = QueryRequest.builder()
        .expressionAttributeValues(Map.of(
                ":pk_val", AttributeValue.fromS("123"),
                ":sk_val", AttributeValue.fromN("1000")))
        .keyConditionExpression("pk = :pk_val AND sk > :sk_val")
        .limit(100)
        .tableName(TABLE_NAME);

while (true) {
    QueryResponse queryResponse = DYNAMODB_CLIENT.query(queryRequestBuilder.build());
    queryResponse.items().forEach(item -> {
        LOGGER.info("item PK: [" + item.get("pk") + "] and SK: [" + item.get("sk") + "]");
    });

    if (!queryResponse.hasLastEvaluatedKey()) {
        break;
    }
    queryRequestBuilder.exclusiveStartKey(queryResponse.lastEvaluatedKey());
}

The Amazon SDK for Java 2.x can simplify this interaction with DynamoDB by providing auto-pagination methods that make multiple service calls to get the next pages of results for you automatically. This simplifies your code, but does take away some control on resource usage that you would keep by reading pages manually.

By using the paginator methods available in the DynamoDB clients, such as queryPaginator and scanPaginator, you let the SDK take care of the pagination. With the synchronous client, these methods return a custom iterable that you can use to iterate through all the pages; with the asynchronous client, they return a publisher that you can subscribe to. In both cases, the SDK internally handles the service calls for you. The following example uses the QueryPublisher returned by the asynchronous client's queryPaginator method.

QueryPublisher queryPublisher = DYNAMODB_CLIENT.queryPaginator(QueryRequest.builder()
        .expressionAttributeValues(Map.of(
                ":pk_val", AttributeValue.fromS("123"),
                ":sk_val", AttributeValue.fromN("1000")))
        .keyConditionExpression("pk = :pk_val AND sk > :sk_val")
        .limit(100)
        .tableName("YourTableName")
        .build());

queryPublisher.items().subscribe(item -> System.out.println(item.get("itemData"))).join();

Data class annotations

The Java SDK provides several annotations that you can put on your data class's attributes to influence how the SDK interacts with them. By adding an annotation, you can have an attribute behave as an implicit atomic counter, maintain an auto-generated timestamp value, or track an item version number. For more information, see Data class annotations.
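
As a hedged sketch of how such annotations might look on a bean, the following class uses an atomic counter and a version attribute; the attribute names are illustrative, and you should check the annotation package names against your SDK version.

import software.amazon.awssdk.enhanced.dynamodb.extensions.annotations.DynamoDbAtomicCounter;
import software.amazon.awssdk.enhanced.dynamodb.extensions.annotations.DynamoDbVersionAttribute;
import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbBean;
import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbPartitionKey;

@DynamoDbBean
public class AnnotatedItem {
    private String pk;
    private Long viewCounter;
    private Long version;

    @DynamoDbPartitionKey
    public String getPk() { return pk; }
    public void setPk(String pk) { this.pk = pk; }

    // Incremented automatically on every update of the item.
    @DynamoDbAtomicCounter
    public Long getViewCounter() { return viewCounter; }
    public void setViewCounter(Long viewCounter) { this.viewCounter = viewCounter; }

    // Used for optimistic locking; the SDK increments it and adds a condition check on writes.
    @DynamoDbVersionAttribute
    public Long getVersion() { return version; }
    public void setVersion(Long version) { this.version = version; }
}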