
Using TransferManager for Amazon S3 operations

You can use the Amazon SDK for C++ TransferManager class to reliably transfer files from the local environment to Amazon S3 and to copy objects from one Amazon S3 location to another. TransferManager can get the progress of a transfer and pause or resume uploads and downloads.
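For example, you can observe transfer progress by registering callbacks on the TransferManagerConfiguration before creating the TransferManager. The following is a minimal sketch, not part of this page's example; it assumes the SDK is already initialized and the transfer headers are included, and the bucket name, key, and local file path are placeholder values.

// Minimal sketch: report upload progress through a TransferManagerConfiguration callback.
// "amzn-s3-demo-bucket", "my-key", and "local-file.txt" are placeholder values.
auto executor = Aws::MakeShared<Aws::Utils::Threading::PooledThreadExecutor>("executor", 25);
Aws::Transfer::TransferManagerConfiguration transfer_config(executor.get());
transfer_config.s3Client = Aws::MakeShared<Aws::S3::S3Client>("S3Client");

// Called repeatedly as parts are transferred; prints bytes sent so far.
transfer_config.uploadProgressCallback =
    [](const Aws::Transfer::TransferManager*,
       const std::shared_ptr<const Aws::Transfer::TransferHandle>& handle) {
        std::cout << "Uploaded " << handle->GetBytesTransferred()
                  << " of " << handle->GetBytesTotalSize() << " bytes" << std::endl;
    };

auto transfer_manager = Aws::Transfer::TransferManager::Create(transfer_config);
auto handle = transfer_manager->UploadFile("local-file.txt", "amzn-s3-demo-bucket",
                                           "my-key", "text/plain",
                                           Aws::Map<Aws::String, Aws::String>());
handle->WaitUntilFinished();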

Note

To avoid being charged for incomplete or partial uploads, we recommend that you enable the AbortIncompleteMultipartUpload lifecycle rule on your Amazon S3 buckets.

This rule directs Amazon S3 to abort multipart uploads that don’t complete within a specified number of days after being initiated. When the set time limit is exceeded, Amazon S3 aborts the upload and then deletes the incomplete upload data.

For more information, see Setting lifecycle configuration on a bucket in the Amazon S3 User Guide.
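If you manage bucket configuration from code, a rule like this can also be applied with the SDK's PutBucketLifecycleConfiguration operation. The following is a minimal sketch and is not part of this page's example; the bucket name and the seven-day window are placeholder values.

// Minimal sketch: abort incomplete multipart uploads after 7 days (placeholder value).
#include <aws/core/Aws.h>
#include <aws/s3/S3Client.h>
#include <aws/s3/model/PutBucketLifecycleConfigurationRequest.h>
#include <aws/s3/model/BucketLifecycleConfiguration.h>
#include <aws/s3/model/LifecycleRule.h>
#include <iostream>

int main() {
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
        Aws::S3::S3Client s3_client;

        // Abort multipart uploads that are still incomplete 7 days after initiation.
        Aws::S3::Model::AbortIncompleteMultipartUpload abort_rule;
        abort_rule.SetDaysAfterInitiation(7);

        Aws::S3::Model::LifecycleRuleFilter filter;
        filter.SetPrefix(""); // Empty prefix applies the rule to the whole bucket.

        Aws::S3::Model::LifecycleRule rule;
        rule.SetID("AbortIncompleteMultipartUpload");
        rule.SetStatus(Aws::S3::Model::ExpirationStatus::Enabled);
        rule.SetFilter(filter);
        rule.SetAbortIncompleteMultipartUpload(abort_rule);

        Aws::S3::Model::BucketLifecycleConfiguration config;
        config.AddRules(rule);

        Aws::S3::Model::PutBucketLifecycleConfigurationRequest request;
        request.SetBucket("amzn-s3-demo-bucket"); // Placeholder bucket name.
        request.SetLifecycleConfiguration(config);

        auto outcome = s3_client.PutBucketLifecycleConfiguration(request);
        if (!outcome.IsSuccess()) {
            std::cout << "PutBucketLifecycleConfiguration failed: "
                      << outcome.GetError().GetMessage() << std::endl;
        }
    }
    Aws::ShutdownAPI(options);
    return 0;
}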

Prerequisites

Before you begin, we recommend you read Getting started using the Amazon SDK for C++.

Download the example code and build the solution as described in Get started on code examples.

To run the examples, the user profile your code uses to make the requests must have proper permissions in Amazon (for the service and the action). For more information, see Providing Amazon credentials.

Upload and download of an object using TransferManager

This example demonstrates how TransferManager transfers a large object in memory. The UploadFile and DownloadFile methods are both called asynchronously and return a TransferHandle that you can use to manage the status of your request. If the uploaded object is larger than bufferSize, a multipart upload is performed. bufferSize defaults to 5 MB, but it can be configured through TransferManagerConfiguration.

auto s3_client = Aws::MakeShared<Aws::S3::S3Client>("S3Client");
auto executor = Aws::MakeShared<Aws::Utils::Threading::PooledThreadExecutor>("executor", 25);
Aws::Transfer::TransferManagerConfiguration transfer_config(executor.get());
transfer_config.s3Client = s3_client;

// Create buffer to hold data received by the data stream.
Aws::Utils::Array<unsigned char> buffer(BUFFER_SIZE);
// The local variable 'streamBuffer' is captured by reference in a lambda.
// It must persist until all downloading by the 'transfer_manager' is complete.
Stream::PreallocatedStreamBuf streamBuffer(buffer.GetUnderlyingData(), buffer.GetLength());

auto transfer_manager = Aws::Transfer::TransferManager::Create(transfer_config);

auto uploadHandle = transfer_manager->UploadFile(LOCAL_FILE, BUCKET, KEY, "text/plain",
                                                 Aws::Map<Aws::String, Aws::String>());
uploadHandle->WaitUntilFinished();
bool success = uploadHandle->GetStatus() == Transfer::TransferStatus::COMPLETED;
if (!success)
{
    auto err = uploadHandle->GetLastError();
    std::cout << "File upload failed: " << err.GetMessage() << std::endl;
}
else
{
    std::cout << "File upload finished." << std::endl;

    // Verify that the upload transferred the expected amount of data.
    assert(uploadHandle->GetBytesTotalSize() == uploadHandle->GetBytesTransferred());

    auto downloadHandle = transfer_manager->DownloadFile(BUCKET, KEY,
        [&]() { // Define a lambda expression for the callback method parameter to stream back the data.
            return Aws::New<MyUnderlyingStream>("TestTag", &streamBuffer);
        });
    downloadHandle->WaitUntilFinished(); // Block the calling thread until the download is complete.
    auto downStat = downloadHandle->GetStatus();
    if (downStat != Transfer::TransferStatus::COMPLETED)
    {
        auto err = downloadHandle->GetLastError();
        std::cout << "File download failed: " << err.GetMessage() << std::endl;
    }
    std::cout << "File download to memory finished." << std::endl;
}
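The multipart threshold mentioned above comes from the bufferSize member of TransferManagerConfiguration. The following is a minimal sketch of raising it, assuming the same s3_client and executor as in the example; the 16 MB value is an arbitrary placeholder.

// Minimal sketch: raise the part/buffer size from the 5 MB default (16 MB is a placeholder).
Aws::Transfer::TransferManagerConfiguration transfer_config(executor.get());
transfer_config.s3Client = s3_client;
transfer_config.bufferSize = 16 * 1024 * 1024;                                // Size of each part/buffer.
transfer_config.transferBufferMaxHeapSize = 10 * transfer_config.bufferSize;  // Cap total buffer memory.

auto transfer_manager = Aws::Transfer::TransferManager::Create(transfer_config);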

See the complete example on GitHub.