Uploading an object using multipart upload
You can use multipart upload to programmatically upload a single object to Amazon S3. Each object is uploaded as a set of parts, and each part is a contiguous portion of the object's data. You can upload these parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting the other parts. After all parts of your object are uploaded, Amazon S3 assembles the parts and creates the object.
For an end-to-end procedure for uploading an object with multipart upload and an additional checksum, see Tutorial: Upload an object through multipart upload and verify its data integrity.
The following sections show how to use multipart upload with the Amazon Command Line Interface and the Amazon SDKs.
You can upload any file type—images, backups, data, movies, and so on—into an S3 bucket. The maximum size of a file that you can upload by using the Amazon S3 console is 160 GB. To upload a file larger than 160 GB, use the Amazon Command Line Interface (Amazon CLI), Amazon SDKs, or Amazon S3 REST API.
For instructions on uploading an object via the Amazon Web Services Management Console, see Uploading objects.
The following describes the Amazon S3 operations for multipart upload using the Amazon CLI.
The following sections in the Amazon Simple Storage Service API Reference describe the REST API for multipart upload.
Some Amazon SDKs expose a high-level API that simplifies multipart upload by combining the different API operations required to complete a multipart upload into a single operation. For more information, see Uploading and copying objects using multipart upload in Amazon S3.
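For example, with the Amazon SDK for Python (Boto3), a single high-level call performs the part uploads and the completion step for you. The following is a minimal sketch of that idea; the bucket name, object key, file name, and size settings are placeholder assumptions rather than values from this guide.
import boto3
from boto3.s3.transfer import TransferConfig

# Minimal high-level sketch (Boto3). Bucket, key, file name, and sizes are placeholders.
s3 = boto3.client("s3")

# Files larger than multipart_threshold are uploaded in parts automatically,
# using up to max_concurrency threads.
config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,  # 8 MB
    multipart_chunksize=8 * 1024 * 1024,  # 8 MB part size
    max_concurrency=10,
)

s3.upload_file(
    Filename="large-file.bin",      # placeholder local file
    Bucket="amzn-s3-demo-bucket",   # placeholder bucket name
    Key="large-file.bin",           # placeholder object key
    Config=config,
)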
If you need to pause and resume multipart uploads, vary part sizes during the upload, or upload data whose size isn't known in advance, use the low-level API methods. The low-level API methods for multipart uploads offer additional functionality. For more information, see Using the Amazon SDKs (low-level API).
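To make the low-level flow concrete, the following Boto3 sketch initiates a multipart upload, uploads the parts one at a time, and then completes (or aborts) the upload. The bucket name, object key, file name, and part size are placeholder assumptions; the same three-step sequence applies to the other SDKs and to the REST API.
import boto3

# A minimal sketch of the low-level multipart upload flow with Boto3.
# Bucket name, object key, file path, and part size are placeholders.
s3 = boto3.client("s3")
bucket = "amzn-s3-demo-bucket"
key = "large-file.bin"
part_size = 8 * 1024 * 1024  # parts must be at least 5 MB, except the last one

# 1. Initiate the upload and save the upload ID.
upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]

parts = []
try:
    # 2. Upload each part and record its ETag and part number.
    with open("large-file.bin", "rb") as f:
        part_number = 1
        while True:
            data = f.read(part_size)
            if not data:
                break
            response = s3.upload_part(
                Bucket=bucket, Key=key, UploadId=upload_id,
                PartNumber=part_number, Body=data,
            )
            parts.append({"PartNumber": part_number, "ETag": response["ETag"]})
            part_number += 1

    # 3. Complete the upload by passing the ordered list of parts.
    s3.complete_multipart_upload(
        Bucket=bucket, Key=key, UploadId=upload_id,
        MultipartUpload={"Parts": parts},
    )
except Exception:
    # Abort so that the stored parts don't continue to accrue storage charges.
    s3.abort_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id)
    raise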
- .NET
To upload a file to an S3 bucket, use the TransferUtility class. When you upload data from a file, if you don't provide the object's key name, the API uses the file name for the key name. When you upload data from a stream, you must provide the object's key name. To set advanced upload options (such as the part size, the number of threads used when uploading the parts concurrently, metadata, the storage class, or the ACL), use the TransferUtilityUploadRequest class.
Note
When you use a stream as the source of data, the TransferUtility class does not perform concurrent uploads.
The following C# example uploads a file to an Amazon S3 bucket in multiple parts. It shows how to use various TransferUtility.Upload overloads to upload a file. Each successive call to upload replaces the previous upload. For information about setting up and running the code examples, see Getting Started with the Amazon SDK for .NET in the Amazon SDK for .NET Developer Guide.
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.IO;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class UploadFileMPUHighLevelAPITest
    {
        private const string bucketName = "*** provide bucket name ***";
        private const string keyName = "*** provide a name for the uploaded object ***";
        private const string filePath = "*** provide the full path name of the file to upload ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 s3Client;

        public static void Main()
        {
            s3Client = new AmazonS3Client(bucketRegion);
            UploadFileAsync().Wait();
        }

        private static async Task UploadFileAsync()
        {
            try
            {
                var fileTransferUtility = new TransferUtility(s3Client);

                // Option 1. Upload a file. The file name is used as the object key name.
                await fileTransferUtility.UploadAsync(filePath, bucketName);
                Console.WriteLine("Upload 1 completed");

                // Option 2. Specify object key name explicitly.
                await fileTransferUtility.UploadAsync(filePath, bucketName, keyName);
                Console.WriteLine("Upload 2 completed");

                // Option 3. Upload data from a type of System.IO.Stream.
                using (var fileToUpload = new FileStream(filePath, FileMode.Open, FileAccess.Read))
                {
                    await fileTransferUtility.UploadAsync(fileToUpload, bucketName, keyName);
                }
                Console.WriteLine("Upload 3 completed");

                // Option 4. Specify advanced settings.
                var fileTransferUtilityRequest = new TransferUtilityUploadRequest
                {
                    BucketName = bucketName,
                    FilePath = filePath,
                    StorageClass = S3StorageClass.StandardInfrequentAccess,
                    PartSize = 6291456, // 6 MB.
                    Key = keyName,
                    CannedACL = S3CannedACL.PublicRead
                };
                fileTransferUtilityRequest.Metadata.Add("param1", "Value1");
                fileTransferUtilityRequest.Metadata.Add("param2", "Value2");

                await fileTransferUtility.UploadAsync(fileTransferUtilityRequest);
                Console.WriteLine("Upload 4 completed");
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine("Error encountered on server. Message:'{0}' when writing an object", e.Message);
            }
            catch (Exception e)
            {
                Console.WriteLine("Unknown encountered on server. Message:'{0}' when writing an object", e.Message);
            }
        }
    }
}
- JavaScript
Upload a large file.
import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";
import {
  ProgressBar,
  logger,
} from "@aws-doc-sdk-examples/lib/utils/util-log.js";

const twentyFiveMB = 25 * 1024 * 1024;

export const createString = (size = twentyFiveMB) => {
  return "x".repeat(size);
};

/**
 * Create a 25MB file and upload it in parts to the specified
 * Amazon S3 bucket.
 * @param {{ bucketName: string, key: string }}
 */
export const main = async ({ bucketName, key }) => {
  const str = createString();
  const buffer = Buffer.from(str, "utf8");
  const progressBar = new ProgressBar({
    description: `Uploading "${key}" to "${bucketName}"`,
    barLength: 30,
  });

  try {
    const upload = new Upload({
      client: new S3Client({}),
      params: {
        Bucket: bucketName,
        Key: key,
        Body: buffer,
      },
    });

    upload.on("httpUploadProgress", ({ loaded, total }) => {
      progressBar.update({ current: loaded, total });
    });

    await upload.done();
  } catch (caught) {
    if (caught instanceof Error && caught.name === "AbortError") {
      logger.error(`Multipart upload was aborted. ${caught.message}`);
    } else {
      throw caught;
    }
  }
};
Download a large file.
import { GetObjectCommand, NoSuchKey, S3Client } from "@aws-sdk/client-s3";
import { createWriteStream, rmSync } from "node:fs";
import { fileURLToPath } from "node:url";

const s3Client = new S3Client({});
const oneMB = 1024 * 1024;

export const getObjectRange = ({ bucket, key, start, end }) => {
  const command = new GetObjectCommand({
    Bucket: bucket,
    Key: key,
    Range: `bytes=${start}-${end}`,
  });

  return s3Client.send(command);
};

/**
 * @param {string | undefined} contentRange
 */
export const getRangeAndLength = (contentRange) => {
  const [range, length] = contentRange.split("/");
  const [start, end] = range.split("-");
  return {
    start: Number.parseInt(start),
    end: Number.parseInt(end),
    length: Number.parseInt(length),
  };
};

export const isComplete = ({ end, length }) => end === length - 1;

const downloadInChunks = async ({ bucket, key }) => {
  const writeStream = createWriteStream(
    fileURLToPath(new URL(`./${key}`, import.meta.url)),
  ).on("error", (err) => console.error(err));

  let rangeAndLength = { start: -1, end: -1, length: -1 };

  while (!isComplete(rangeAndLength)) {
    const { end } = rangeAndLength;
    const nextRange = { start: end + 1, end: end + oneMB };

    const { ContentRange, Body } = await getObjectRange({
      bucket,
      key,
      ...nextRange,
    });
    console.log(`Downloaded bytes ${nextRange.start} to ${nextRange.end}`);

    writeStream.write(await Body.transformToByteArray());
    rangeAndLength = getRangeAndLength(ContentRange);
  }
};

/**
 * Download a large object from an Amazon S3 bucket.
 *
 * When downloading a large file, you might want to break it down into
 * smaller pieces. Amazon S3 accepts a Range header to specify the start
 * and end of the byte range to be downloaded.
 *
 * @param {{ bucketName: string, key: string }}
 */
export const main = async ({ bucketName, key }) => {
  try {
    await downloadInChunks({
      bucket: bucketName,
      key: key,
    });
  } catch (caught) {
    if (caught instanceof NoSuchKey) {
      console.error(`Failed to download object. No such key "${key}".`);
      rmSync(key);
    }
  }
};
- Go
For more information about the Go code example for multipart upload, see Upload or download large files to and from Amazon S3 using an Amazon SDK.
Upload a large object by using an upload manager to break the data into parts and upload them concurrently.
import (
	"bytes"
	"context"
	"errors"
	"fmt"
	"io"
	"log"
	"os"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
	"github.com/aws/smithy-go"
)

// BucketBasics encapsulates the Amazon Simple Storage Service (Amazon S3) actions
// used in the examples.
// It contains S3Client, an Amazon S3 service client that is used to perform bucket
// and object actions.
type BucketBasics struct {
	S3Client *s3.Client
}
// UploadLargeObject uses an upload manager to upload data to an object in a bucket.
// The upload manager breaks large data into parts and uploads the parts concurrently.
func (basics BucketBasics) UploadLargeObject(ctx context.Context, bucketName string, objectKey string, largeObject []byte) error {
	largeBuffer := bytes.NewReader(largeObject)
	var partMiBs int64 = 10
	uploader := manager.NewUploader(basics.S3Client, func(u *manager.Uploader) {
		u.PartSize = partMiBs * 1024 * 1024
	})
	_, err := uploader.Upload(ctx, &s3.PutObjectInput{
		Bucket: aws.String(bucketName),
		Key:    aws.String(objectKey),
		Body:   largeBuffer,
	})
	if err != nil {
		var apiErr smithy.APIError
		if errors.As(err, &apiErr) && apiErr.ErrorCode() == "EntityTooLarge" {
			log.Printf("Error while uploading object to %s. The object is too large.\n"+
				"The maximum size for a multipart upload is 5TB.", bucketName)
		} else {
			log.Printf("Couldn't upload large object to %v:%v. Here's why: %v\n",
				bucketName, objectKey, err)
		}
	} else {
		err = s3.NewObjectExistsWaiter(basics.S3Client).Wait(
			ctx, &s3.HeadObjectInput{Bucket: aws.String(bucketName), Key: aws.String(objectKey)}, time.Minute)
		if err != nil {
			log.Printf("Failed attempt to wait for object %s to exist.\n", objectKey)
		}
	}

	return err
}
Download a large object by using a download manager to get the data in parts and download them concurrently.
// DownloadLargeObject uses a download manager to download an object from a bucket.
// The download manager gets the data in parts and writes them to a buffer until all of
// the data has been downloaded.
func (basics BucketBasics) DownloadLargeObject(ctx context.Context, bucketName string, objectKey string) ([]byte, error) {
	var partMiBs int64 = 10
	downloader := manager.NewDownloader(basics.S3Client, func(d *manager.Downloader) {
		d.PartSize = partMiBs * 1024 * 1024
	})
	buffer := manager.NewWriteAtBuffer([]byte{})
	_, err := downloader.Download(ctx, buffer, &s3.GetObjectInput{
		Bucket: aws.String(bucketName),
		Key:    aws.String(objectKey),
	})
	if err != nil {
		log.Printf("Couldn't download large object from %v:%v. Here's why: %v\n",
			bucketName, objectKey, err)
	}
	return buffer.Bytes(), err
}
- PHP
This topic explains how to use the high-level Aws\S3\MultipartUploader class from version 3 of the Amazon SDK for PHP for multipart file uploads.
The following PHP example uploads a file to an Amazon S3 bucket. The example demonstrates how to set parameters for the MultipartUploader object.
require 'vendor/autoload.php';

use Aws\Exception\MultipartUploadException;
use Aws\S3\MultipartUploader;
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1'
]);

// Prepare the upload parameters.
$uploader = new MultipartUploader($s3, '/path/to/large/file.zip', [
    'bucket' => $bucket,
    'key'    => $keyname
]);

// Perform the upload.
try {
    $result = $uploader->upload();
    echo "Upload complete: {$result['ObjectURL']}" . PHP_EOL;
} catch (MultipartUploadException $e) {
    echo $e->getMessage() . PHP_EOL;
}
- Python
The following example uploads an object using the high-level multipart upload Python API (the TransferManager class).
import sys
import threading

import boto3
from boto3.s3.transfer import TransferConfig

MB = 1024 * 1024
s3 = boto3.resource("s3")


class TransferCallback:
    """
    Handle callbacks from the transfer manager.

    The transfer manager periodically calls the __call__ method throughout
    the upload and download process so that it can take action, such as
    displaying progress to the user and collecting data about the transfer.
    """

    def __init__(self, target_size):
        self._target_size = target_size
        self._total_transferred = 0
        self._lock = threading.Lock()
        self.thread_info = {}

    def __call__(self, bytes_transferred):
        """
        The callback method that is called by the transfer manager.

        Display progress during file transfer and collect per-thread transfer
        data. This method can be called by multiple threads, so shared instance
        data is protected by a thread lock.
        """
        thread = threading.current_thread()
        with self._lock:
            self._total_transferred += bytes_transferred
            if thread.ident not in self.thread_info.keys():
                self.thread_info[thread.ident] = bytes_transferred
            else:
                self.thread_info[thread.ident] += bytes_transferred

            target = self._target_size * MB
            sys.stdout.write(
                f"\r{self._total_transferred} of {target} transferred "
                f"({(self._total_transferred / target) * 100:.2f}%)."
            )
            sys.stdout.flush()


def upload_with_default_configuration(
    local_file_path, bucket_name, object_key, file_size_mb
):
    """
    Upload a file from a local folder to an Amazon S3 bucket, using the
    default configuration.
    """
    transfer_callback = TransferCallback(file_size_mb)
    s3.Bucket(bucket_name).upload_file(
        local_file_path, object_key, Callback=transfer_callback
    )
    return transfer_callback.thread_info


def upload_with_chunksize_and_meta(
    local_file_path, bucket_name, object_key, file_size_mb, metadata=None
):
    """
    Upload a file from a local folder to an Amazon S3 bucket, setting a
    multipart chunk size and adding metadata to the Amazon S3 object.

    The multipart chunk size controls the size of the chunks of data that
    are sent in the request. A smaller chunk size typically results in the
    transfer manager using more threads for the upload.

    The metadata is a set of key-value pairs that are stored with the object
    in Amazon S3.
    """
    transfer_callback = TransferCallback(file_size_mb)

    config = TransferConfig(multipart_chunksize=1 * MB)
    extra_args = {"Metadata": metadata} if metadata else None
    s3.Bucket(bucket_name).upload_file(
        local_file_path,
        object_key,
        Config=config,
        ExtraArgs=extra_args,
        Callback=transfer_callback,
    )
    return transfer_callback.thread_info


def upload_with_high_threshold(local_file_path, bucket_name, object_key, file_size_mb):
    """
    Upload a file from a local folder to an Amazon S3 bucket, setting a
    multipart threshold larger than the size of the file.

    Setting a multipart threshold larger than the size of the file results
    in the transfer manager sending the file as a standard upload instead
    of a multipart upload.
    """
    transfer_callback = TransferCallback(file_size_mb)
    config = TransferConfig(multipart_threshold=file_size_mb * 2 * MB)
    s3.Bucket(bucket_name).upload_file(
        local_file_path, object_key, Config=config, Callback=transfer_callback
    )
    return transfer_callback.thread_info


def upload_with_sse(
    local_file_path, bucket_name, object_key, file_size_mb, sse_key=None
):
    """
    Upload a file from a local folder to an Amazon S3 bucket, adding
    server-side encryption with customer-provided encryption keys to the
    object.

    When this kind of encryption is specified, Amazon S3 encrypts the object
    at rest and allows downloads only when the expected encryption key is
    provided in the download request.
    """
    transfer_callback = TransferCallback(file_size_mb)
    if sse_key:
        extra_args = {"SSECustomerAlgorithm": "AES256", "SSECustomerKey": sse_key}
    else:
        extra_args = None
    s3.Bucket(bucket_name).upload_file(
        local_file_path, object_key, ExtraArgs=extra_args, Callback=transfer_callback
    )
    return transfer_callback.thread_info


def download_with_default_configuration(
    bucket_name, object_key, download_file_path, file_size_mb
):
    """
    Download a file from an Amazon S3 bucket to a local folder, using the
    default configuration.
    """
    transfer_callback = TransferCallback(file_size_mb)
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path, Callback=transfer_callback
    )
    return transfer_callback.thread_info


def download_with_single_thread(
    bucket_name, object_key, download_file_path, file_size_mb
):
    """
    Download a file from an Amazon S3 bucket to a local folder, using a
    single thread.
    """
    transfer_callback = TransferCallback(file_size_mb)
    config = TransferConfig(use_threads=False)
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path, Config=config, Callback=transfer_callback
    )
    return transfer_callback.thread_info


def download_with_high_threshold(
    bucket_name, object_key, download_file_path, file_size_mb
):
    """
    Download a file from an Amazon S3 bucket to a local folder, setting a
    multipart threshold larger than the size of the file.

    Setting a multipart threshold larger than the size of the file results
    in the transfer manager sending the file as a standard download instead
    of a multipart download.
    """
    transfer_callback = TransferCallback(file_size_mb)
    config = TransferConfig(multipart_threshold=file_size_mb * 2 * MB)
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path, Config=config, Callback=transfer_callback
    )
    return transfer_callback.thread_info


def download_with_sse(
    bucket_name, object_key, download_file_path, file_size_mb, sse_key
):
    """
    Download a file from an Amazon S3 bucket to a local folder, adding a
    customer-provided encryption key to the request.

    When this kind of encryption is specified, Amazon S3 encrypts the object
    at rest and allows downloads only when the expected encryption key is
    provided in the download request.
    """
    transfer_callback = TransferCallback(file_size_mb)
    if sse_key:
        extra_args = {"SSECustomerAlgorithm": "AES256", "SSECustomerKey": sse_key}
    else:
        extra_args = None
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path, ExtraArgs=extra_args, Callback=transfer_callback
    )
    return transfer_callback.thread_info
The Amazon SDKs expose a low-level API that closely resembles the Amazon S3 REST API for multipart uploads (see Uploading and copying objects using multipart upload in Amazon S3). Use the low-level API when you need to pause and resume multipart uploads, vary part sizes during the upload, or do not know the size of the upload data in advance. When you don't have these requirements, use the high-level API (see Using the Amazon SDKs (high-level API)).
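For example, pausing and resuming is possible with the low-level API because parts that you have already uploaded are kept by Amazon S3 until you complete or abort the upload. The following Boto3 sketch shows one way to resume an interrupted upload by listing the parts that already exist; the bucket name, key, upload ID, file name, and part size are placeholders, and the sketch assumes the parts were uploaded in order, are all the same size, and number fewer than 1,000 (otherwise, paginate the ListParts results).
import boto3

# Illustrative sketch: resume an interrupted multipart upload with Boto3.
# The bucket, key, upload ID, file path, and part size are placeholders.
s3 = boto3.client("s3")
bucket = "amzn-s3-demo-bucket"
key = "large-file.bin"
upload_id = "EXAMPLE-UPLOAD-ID"  # saved when the upload was initiated
part_size = 8 * 1024 * 1024

# Find the parts that were already uploaded before the interruption.
uploaded = s3.list_parts(Bucket=bucket, Key=key, UploadId=upload_id)
parts = [
    {"PartNumber": p["PartNumber"], "ETag": p["ETag"]}
    for p in uploaded.get("Parts", [])
]
next_part = len(parts) + 1

# Upload the remaining parts, then complete the upload with the full part list.
with open("large-file.bin", "rb") as f:
    f.seek((next_part - 1) * part_size)
    while True:
        data = f.read(part_size)
        if not data:
            break
        response = s3.upload_part(
            Bucket=bucket, Key=key, UploadId=upload_id,
            PartNumber=next_part, Body=data,
        )
        parts.append({"PartNumber": next_part, "ETag": response["ETag"]})
        next_part += 1

s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=upload_id,
    MultipartUpload={"Parts": parts},
)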
- Java
The following example shows how to use the low-level Java classes to upload a file. It performs the following steps:
- Initiates a multipart upload using the AmazonS3Client.initiateMultipartUpload() method, and passes in an InitiateMultipartUploadRequest object.
- Saves the upload ID that the AmazonS3Client.initiateMultipartUpload() method returns. You provide this upload ID for each subsequent multipart upload operation.
- Uploads the parts of the object. For each part, you call the AmazonS3Client.uploadPart() method. You provide part upload information using an UploadPartRequest object.
- For each part, saves the ETag from the response of the AmazonS3Client.uploadPart() method in a list. You use the ETag values to complete the multipart upload.
- Calls the AmazonS3Client.completeMultipartUpload() method to complete the multipart upload.
For instructions on creating and testing a working sample, see Getting Started in the Amazon SDK for Java Developer Guide.
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class LowLevelMultipartUpload {

    public static void main(String[] args) throws IOException {
        Regions clientRegion = Regions.DEFAULT_REGION;
        String bucketName = "*** Bucket name ***";
        String keyName = "*** Key name ***";
        String filePath = "*** Path to file to upload ***";

        File file = new File(filePath);
        long contentLength = file.length();
        long partSize = 5 * 1024 * 1024; // Set part size to 5 MB.

        try {
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withRegion(clientRegion)
                    .withCredentials(new ProfileCredentialsProvider())
                    .build();

            // Create a list of ETag objects. You retrieve ETags for each object part
            // uploaded, then, after each individual part has been uploaded, pass the
            // list of ETags to the request to complete the upload.
            List<PartETag> partETags = new ArrayList<PartETag>();

            // Initiate the multipart upload.
            InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest(bucketName, keyName);
            InitiateMultipartUploadResult initResponse = s3Client.initiateMultipartUpload(initRequest);

            // Upload the file parts.
            long filePosition = 0;
            for (int i = 1; filePosition < contentLength; i++) {
                // Because the last part could be less than 5 MB, adjust the part size as needed.
                partSize = Math.min(partSize, (contentLength - filePosition));

                // Create the request to upload a part.
                UploadPartRequest uploadRequest = new UploadPartRequest()
                        .withBucketName(bucketName)
                        .withKey(keyName)
                        .withUploadId(initResponse.getUploadId())
                        .withPartNumber(i)
                        .withFileOffset(filePosition)
                        .withFile(file)
                        .withPartSize(partSize);

                // Upload the part and add the response's ETag to our list.
                UploadPartResult uploadResult = s3Client.uploadPart(uploadRequest);
                partETags.add(uploadResult.getPartETag());

                filePosition += partSize;
            }

            // Complete the multipart upload.
            CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest(bucketName, keyName,
                    initResponse.getUploadId(), partETags);
            s3Client.completeMultipartUpload(compRequest);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it, so it returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
- .NET
The following C# example shows how to use the low-level Amazon SDK for .NET multipart upload API to upload a file to an S3 bucket. For information about Amazon S3 multipart uploads, see Uploading and copying objects using multipart upload in Amazon S3.
Note
When you use the Amazon SDK for .NET API to upload large objects, a timeout might occur while data is being written to the request stream. You can set an explicit timeout by using the UploadPartRequest.
The following C# example uploads a file to an S3 bucket using the low-level multipart upload API. For information about setting up and running the code examples, see Getting Started with the Amazon SDK for .NET in the Amazon SDK for .NET Developer Guide.
using Amazon;
using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class UploadFileMPULowLevelAPITest
    {
        private const string bucketName = "*** provide bucket name ***";
        private const string keyName = "*** provide a name for the uploaded object ***";
        private const string filePath = "*** provide the full path name of the file to upload ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 s3Client;

        public static void Main()
        {
            s3Client = new AmazonS3Client(bucketRegion);
            Console.WriteLine("Uploading an object");
            UploadObjectAsync().Wait();
        }

        private static async Task UploadObjectAsync()
        {
            // Create list to store upload part responses.
            List<UploadPartResponse> uploadResponses = new List<UploadPartResponse>();

            // Setup information required to initiate the multipart upload.
            InitiateMultipartUploadRequest initiateRequest = new InitiateMultipartUploadRequest
            {
                BucketName = bucketName,
                Key = keyName
            };

            // Initiate the upload.
            InitiateMultipartUploadResponse initResponse =
                await s3Client.InitiateMultipartUploadAsync(initiateRequest);

            // Upload parts.
            long contentLength = new FileInfo(filePath).Length;
            long partSize = 5 * (long)Math.Pow(2, 20); // 5 MB

            try
            {
                Console.WriteLine("Uploading parts");

                long filePosition = 0;
                for (int i = 1; filePosition < contentLength; i++)
                {
                    UploadPartRequest uploadRequest = new UploadPartRequest
                    {
                        BucketName = bucketName,
                        Key = keyName,
                        UploadId = initResponse.UploadId,
                        PartNumber = i,
                        PartSize = partSize,
                        FilePosition = filePosition,
                        FilePath = filePath
                    };

                    // Track upload progress.
                    uploadRequest.StreamTransferProgress +=
                        new EventHandler<StreamTransferProgressArgs>(UploadPartProgressEventCallback);

                    // Upload a part and add the response to our list.
                    uploadResponses.Add(await s3Client.UploadPartAsync(uploadRequest));

                    filePosition += partSize;
                }

                // Setup to complete the upload.
                CompleteMultipartUploadRequest completeRequest = new CompleteMultipartUploadRequest
                {
                    BucketName = bucketName,
                    Key = keyName,
                    UploadId = initResponse.UploadId
                };
                completeRequest.AddPartETags(uploadResponses);

                // Complete the upload.
                CompleteMultipartUploadResponse completeUploadResponse =
                    await s3Client.CompleteMultipartUploadAsync(completeRequest);
            }
            catch (Exception exception)
            {
                Console.WriteLine("An AmazonS3Exception was thrown: {0}", exception.Message);

                // Abort the upload.
                AbortMultipartUploadRequest abortMPURequest = new AbortMultipartUploadRequest
                {
                    BucketName = bucketName,
                    Key = keyName,
                    UploadId = initResponse.UploadId
                };
                await s3Client.AbortMultipartUploadAsync(abortMPURequest);
            }
        }

        public static void UploadPartProgressEventCallback(object sender, StreamTransferProgressArgs e)
        {
            // Process event.
            Console.WriteLine("{0}/{1}", e.TransferredBytes, e.TotalBytes);
        }
    }
}
- PHP
This topic shows how to use the low-level uploadPart method from version 3 of the Amazon SDK for PHP to upload a file in multiple parts.
The following PHP example uploads a file to an Amazon S3 bucket using the low-level multipart upload API.
require 'vendor/autoload.php';

use Aws\S3\Exception\S3Exception;
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';
$filename = '*** Path to and Name of the File to Upload ***';

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1'
]);

$result = $s3->createMultipartUpload([
    'Bucket'       => $bucket,
    'Key'          => $keyname,
    'StorageClass' => 'REDUCED_REDUNDANCY',
    'Metadata'     => [
        'param1' => 'value 1',
        'param2' => 'value 2',
        'param3' => 'value 3'
    ]
]);
$uploadId = $result['UploadId'];

// Upload the file in parts.
try {
    $file = fopen($filename, 'r');
    $partNumber = 1;
    while (!feof($file)) {
        $result = $s3->uploadPart([
            'Bucket'     => $bucket,
            'Key'        => $keyname,
            'UploadId'   => $uploadId,
            'PartNumber' => $partNumber,
            'Body'       => fread($file, 5 * 1024 * 1024),
        ]);
        $parts['Parts'][$partNumber] = [
            'PartNumber' => $partNumber,
            'ETag'       => $result['ETag'],
        ];
        $partNumber++;

        echo "Uploading part $partNumber of $filename." . PHP_EOL;
    }
    fclose($file);
} catch (S3Exception $e) {
    $result = $s3->abortMultipartUpload([
        'Bucket'   => $bucket,
        'Key'      => $keyname,
        'UploadId' => $uploadId
    ]);

    echo "Upload of $filename failed." . PHP_EOL;
}

// Complete the multipart upload.
$result = $s3->completeMultipartUpload([
    'Bucket'          => $bucket,
    'Key'             => $keyname,
    'UploadId'        => $uploadId,
    'MultipartUpload' => $parts,
]);
$url = $result['Location'];

echo "Uploaded $filename to $url." . PHP_EOL;
The Amazon SDK for Ruby version 3 supports Amazon S3 multipart uploads in two ways. For the first option, you can use managed file uploads. For more information, see Uploading Files to Amazon S3. Managed file uploads provide the following benefits:
- Manage multipart uploads for objects larger than 15 MB.
- Correctly open files in binary mode to avoid encoding issues.
- Use multiple threads for uploading parts of large objects in parallel.
Alternatively, you can use the following multipart upload client operations directly:
- create_multipart_upload – Initiates a multipart upload and returns an upload ID.
- upload_part – Uploads a part in a multipart upload.
- upload_part_copy – Uploads a part by copying data from an existing object as data source.
- complete_multipart_upload – Completes a multipart upload by assembling previously uploaded parts.
- abort_multipart_upload – Stops a multipart upload.
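These client operations correspond directly to the multipart upload actions in the Amazon S3 REST API, and most SDKs expose them under similar names. As an illustration of upload_part_copy, which has no example elsewhere in this topic, the following Boto3 (Python) sketch copies a byte range from an existing object into a part of a new multipart upload; the bucket names, keys, and byte range are placeholder assumptions.
import boto3

# Illustrative sketch: copy the first 5 MB of an existing object as part 1
# of a new multipart upload. All names and the byte range are placeholders.
s3 = boto3.client("s3")
bucket = "amzn-s3-demo-bucket"
key = "assembled-object"

upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]

response = s3.upload_part_copy(
    Bucket=bucket,
    Key=key,
    UploadId=upload_id,
    PartNumber=1,
    CopySource={"Bucket": "amzn-s3-demo-source-bucket", "Key": "existing-object"},
    CopySourceRange="bytes=0-5242879",  # first 5 MB of the source object
)

s3.complete_multipart_upload(
    Bucket=bucket,
    Key=key,
    UploadId=upload_id,
    MultipartUpload={
        "Parts": [
            {"PartNumber": 1, "ETag": response["CopyPartResult"]["ETag"]},
        ]
    },
)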