Class: Aws::S3::MultipartUploadPart

Inherits:
Object
Defined in:
gems/aws-sdk-s3/lib/aws-sdk-s3/multipart_upload_part.rb

Defined Under Namespace

Classes: Collection

Read-Only Attributes

Actions

Associations

Instance Method Summary

Constructor Details

#initialize(bucket_name, object_key, multipart_upload_id, part_number, options = {}) ⇒ MultipartUploadPart
#initialize(options = {}) ⇒ MultipartUploadPart

Returns a new instance of MultipartUploadPart.

Overloads:

  • #initialize(bucket_name, object_key, multipart_upload_id, part_number, options = {}) ⇒ MultipartUploadPart

    Parameters:

    • bucket_name (String)
    • object_key (String)
    • multipart_upload_id (String)
    • part_number (Integer)

    Options Hash (options):

    • :client (Client)

  • #initialize(options = {}) ⇒ MultipartUploadPart

    Options Hash (options):

    • :bucket_name (required, String)
    • :object_key (required, String)
    • :multipart_upload_id (required, String)
    • :part_number (required, Integer)
    • :client (Client)


# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/multipart_upload_part.rb', line 28

def initialize(*args)
  options = Hash === args.last ? args.pop.dup : {}
  @bucket_name = extract_bucket_name(args, options)
  @object_key = extract_object_key(args, options)
  @multipart_upload_id = extract_multipart_upload_id(args, options)
  @part_number = extract_part_number(args, options)
  @data = options.delete(:data)
  @client = options.delete(:client) || Client.new(options)
  @waiter_block_warned = false
end
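
An illustrative construction sketch; the bucket, key, and upload ID below are placeholder values, not taken from this reference:

part = Aws::S3::MultipartUploadPart.new(
  "amzn-s3-demo-bucket",   # bucket_name (placeholder)
  "large-object.bin",      # object_key (placeholder)
  "ExampleUploadId",       # multipart_upload_id (placeholder)
  1,                       # part_number
  client: Aws::S3::Client.new
)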

Instance Method Details

#bucket_name ⇒ String

Returns:

  • (String)


# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/multipart_upload_part.rb', line 42

def bucket_name
  @bucket_name
end

#checksum_crc32 ⇒ String

This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 32-bit CRC32 checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.

Returns:

  • (String)


# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/multipart_upload_part.rb', line 89

def checksum_crc32
  data[:checksum_crc32]
end

#checksum_crc32c ⇒ String

The base64-encoded, 32-bit CRC32C checksum of the object. This will only be present if it was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it's a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.

Returns:

  • (String)


# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/multipart_upload_part.rb', line 106

def checksum_crc32c
  data[:checksum_crc32c]
end

#checksum_sha1 ⇒ String

The base64-encoded, 160-bit SHA-1 digest of the object. This will only be present if it was uploaded with the object. When you use the API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it's a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.

Returns:

  • (String)


# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/multipart_upload_part.rb', line 123

def checksum_sha1
  data[:checksum_sha1]
end

#checksum_sha256 ⇒ String

This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 256-bit SHA-256 digest of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.

Returns:

  • (String)


# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/multipart_upload_part.rb', line 137

def checksum_sha256
  data[:checksum_sha256]
end

#client ⇒ Client

Returns:

  • (Client)

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/multipart_upload_part.rb', line 144

def client
  @client
end

#copy_from(options = {}) ⇒ Types::UploadPartCopyOutput

Examples:

Request syntax with placeholder values


multipart_upload_part.copy_from({
  copy_source: "CopySource", # required
  copy_source_if_match: "CopySourceIfMatch",
  copy_source_if_modified_since: Time.now,
  copy_source_if_none_match: "CopySourceIfNoneMatch",
  copy_source_if_unmodified_since: Time.now,
  copy_source_range: "CopySourceRange",
  sse_customer_algorithm: "SSECustomerAlgorithm",
  sse_customer_key: "SSECustomerKey",
  sse_customer_key_md5: "SSECustomerKeyMD5",
  copy_source_sse_customer_algorithm: "CopySourceSSECustomerAlgorithm",
  copy_source_sse_customer_key: "CopySourceSSECustomerKey",
  copy_source_sse_customer_key_md5: "CopySourceSSECustomerKeyMD5",
  request_payer: "requester", # accepts requester
  expected_bucket_owner: "AccountId",
  expected_source_bucket_owner: "AccountId",
})

Parameters:

  • options (Hash) (defaults to: {})

    ({})

Options Hash (options):

  • :copy_source (required, String)

    Specifies the source object for the copy operation. You specify the value in one of two formats, depending on whether you want to access the source object through an access point:

    • For objects not accessed through an access point, specify the name of the source bucket and key of the source object, separated by a slash (/). For example, to copy the object reports/january.pdf from the bucket awsexamplebucket, use awsexamplebucket/reports/january.pdf. The value must be URL-encoded.

    • For objects accessed through access points, specify the Amazon Resource Name (ARN) of the object as accessed through the access point, in the format arn:aws:s3:<Region>:<account-id>:accesspoint/<access-point-name>/object/<key>. For example, to copy the object reports/january.pdf through access point my-access-point owned by account 123456789012 in Region us-west-2, use the URL encoding of arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point/object/reports/january.pdf. The value must be URL encoded.

      • Amazon S3 supports copy operations using Access points only when the source and destination buckets are in the same Amazon Web Services Region.

      • Access points are not supported by directory buckets.

      Alternatively, for objects accessed through Amazon S3 on Outposts, specify the ARN of the object as accessed in the format arn:aws:s3-outposts:<Region>:<account-id>:outpost/<outpost-id>/object/<key>. For example, to copy the object reports/january.pdf through outpost my-outpost owned by account 123456789012 in Region us-west-2, use the URL encoding of arn:aws:s3-outposts:us-west-2:123456789012:outpost/my-outpost/object/reports/january.pdf. The value must be URL-encoded.

    If your bucket has versioning enabled, you could have multiple versions of the same object. By default, x-amz-copy-source identifies the current version of the source object to copy. To copy a specific version of the source object, append ?versionId=<version-id> to the x-amz-copy-source request header (for example, x-amz-copy-source: /awsexamplebucket/reports/january.pdf?versionId=QUpfdndhfd8438MNFDN93jdnJFkdmqnh893).

    If the current version is a delete marker and you don't specify a versionId in the x-amz-copy-source request header, Amazon S3 returns a 404 Not Found error, because the object does not exist. If you specify versionId in the x-amz-copy-source and the versionId is a delete marker, Amazon S3 returns an HTTP 400 Bad Request error, because you are not allowed to specify a delete marker as a version for the x-amz-copy-source.

    Directory buckets - S3 Versioning isn't enabled and supported for directory buckets.

  • :copy_source_if_match (String)

    Copies the object if its entity tag (ETag) matches the specified tag.

    If both of the x-amz-copy-source-if-match and x-amz-copy-source-if-unmodified-since headers are present in the request as follows:

    x-amz-copy-source-if-match condition evaluates to true, and;

    x-amz-copy-source-if-unmodified-since condition evaluates to false;

    Amazon S3 returns 200 OK and copies the data.

  • :copy_source_if_modified_since (Time, DateTime, Date, Integer, String)

    Copies the object if it has been modified since the specified time.

    If both of the x-amz-copy-source-if-none-match and x-amz-copy-source-if-modified-since headers are present in the request as follows:

    x-amz-copy-source-if-none-match condition evaluates to false, and;

    x-amz-copy-source-if-modified-since condition evaluates to true;

    Amazon S3 returns 412 Precondition Failed response code.

  • :copy_source_if_none_match (String)

    Copies the object if its entity tag (ETag) is different than the specified ETag.

    If both of the x-amz-copy-source-if-none-match and x-amz-copy-source-if-modified-since headers are present in the request as follows:

    x-amz-copy-source-if-none-match condition evaluates to false, and;

    x-amz-copy-source-if-modified-since condition evaluates to true;

    Amazon S3 returns 412 Precondition Failed response code.

  • :copy_source_if_unmodified_since (Time, DateTime, Date, Integer, String)

    Copies the object if it hasn't been modified since the specified time.

    If both of the x-amz-copy-source-if-match and x-amz-copy-source-if-unmodified-since headers are present in the request as follows:

    x-amz-copy-source-if-match condition evaluates to true, and;

    x-amz-copy-source-if-unmodified-since condition evaluates to false;

    Amazon S3 returns 200 OK and copies the data.

  • :copy_source_range (String)

    The range of bytes to copy from the source object. The range value must use the form bytes=first-last, where the first and last are the zero-based byte offsets to copy. For example, bytes=0-9 indicates that you want to copy the first 10 bytes of the source. You can copy a range only if the source object is greater than 5 MB.

  • :sse_customer_algorithm (String)

    Specifies the algorithm to use when encrypting the object (for example, AES256).

    This functionality is not supported when the destination bucket is a directory bucket.

  • :sse_customer_key (String)

    Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm header. This must be the same encryption key specified in the initiate multipart upload request.

    This functionality is not supported when the destination bucket is a directory bucket.

  • :sse_customer_key_md5 (String)

    Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

    This functionality is not supported when the destination bucket is a directory bucket.

  • :copy_source_sse_customer_algorithm (String)

    Specifies the algorithm to use when decrypting the source object (for example, AES256).

    This functionality is not supported when the source object is in a directory bucket.

  • :copy_source_sse_customer_key (String)

    Specifies the customer-provided encryption key for Amazon S3 to use to decrypt the source object. The encryption key provided in this header must be one that was used when the source object was created.

    This functionality is not supported when the source object is in a directory bucket.

  • :copy_source_sse_customer_key_md5 (String)

    Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

    This functionality is not supported when the source object is in a directory bucket.

  • :request_payer (String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :expected_bucket_owner (String)

    The account ID of the expected destination bucket owner. If the account ID that you provide does not match the actual owner of the destination bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

  • :expected_source_bucket_owner (String)

    The account ID of the expected source bucket owner. If the account ID that you provide does not match the actual owner of the source bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

  • (Types::UploadPartCopyOutput)

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/multipart_upload_part.rb', line 496

def copy_from(options = {})
  options = options.merge(
    bucket: @bucket_name,
    key: @object_key,
    upload_id: @multipart_upload_id,
    part_number: @part_number
  )
  resp = Aws::Plugins::UserAgent.feature('resource') do
    @client.upload_part_copy(options)
  end
  resp.data
end
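
For illustration, a minimal copy_from sketch that copies a byte range from an existing source object into this part; the source bucket/key and range below are placeholder values:

output = multipart_upload_part.copy_from(
  copy_source: "amzn-s3-demo-bucket/reports/january.pdf", # URL-encoded "bucket/key" (placeholder)
  copy_source_range: "bytes=0-5242879"                     # first 5 MiB of the source (placeholder)
)
output.copy_part_result.etag # keep the ETag for the later CompleteMultipartUpload call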

#data ⇒ Types::Part

Returns the data for this Aws::S3::MultipartUploadPart.

Returns:

  • (Types::Part)

Raises:

  • (NotImplementedError)


# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/multipart_upload_part.rb', line 159

def data
  load unless @data
  @data
end

#data_loaded? ⇒ Boolean

Returns true if this resource is loaded. Accessing attributes or #data on an unloaded resource will trigger a call to #load.

Returns:

  • (Boolean)

    Returns true if this resource is loaded. Accessing attributes or #data on an unloaded resource will trigger a call to #load.



# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/multipart_upload_part.rb', line 167

def data_loaded?
  !!@data
end

#etag ⇒ String

Entity tag returned when the part was uploaded.

Returns:

  • (String)


# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/multipart_upload_part.rb', line 69

def etag
  data[:etag]
end

#last_modified ⇒ Time

Date and time at which the part was uploaded.

Returns:

  • (Time)


# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/multipart_upload_part.rb', line 63

def last_modified
  data[:last_modified]
end

#multipart_upload ⇒ MultipartUpload

Returns:

  • (MultipartUpload)

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/multipart_upload_part.rb', line 662

def multipart_upload
  MultipartUpload.new(
    bucket_name: @bucket_name,
    object_key: @object_key,
    id: @multipart_upload_id,
    client: @client
  )
end
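
A brief sketch of navigating back to the parent upload, for example to complete it once all part ETags are collected; the parts array below is a placeholder:

upload = multipart_upload_part.multipart_upload
upload.complete(
  multipart_upload: {
    parts: [{ etag: "\"example-etag\"", part_number: 1 }] # placeholder part list
  }
)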

#multipart_upload_id ⇒ String

Returns:

  • (String)


# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/multipart_upload_part.rb', line 52

def multipart_upload_id
  @multipart_upload_id
end

#object_key ⇒ String

Returns:

  • (String)


# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/multipart_upload_part.rb', line 47

def object_key
  @object_key
end

#part_number ⇒ Integer

Returns:

  • (Integer)


# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/multipart_upload_part.rb', line 57

def part_number
  @part_number
end

#size ⇒ Integer

Size in bytes of the uploaded part data.

Returns:

  • (Integer)


# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/multipart_upload_part.rb', line 75

def size
  data[:size]
end

#upload(options = {}) ⇒ Types::UploadPartOutput

Examples:

Request syntax with placeholder values


multipart_upload_part.upload({
  body: source_file,
  content_length: 1,
  content_md5: "ContentMD5",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256
  checksum_crc32: "ChecksumCRC32",
  checksum_crc32c: "ChecksumCRC32C",
  checksum_sha1: "ChecksumSHA1",
  checksum_sha256: "ChecksumSHA256",
  sse_customer_algorithm: "SSECustomerAlgorithm",
  sse_customer_key: "SSECustomerKey",
  sse_customer_key_md5: "SSECustomerKeyMD5",
  request_payer: "requester", # accepts requester
  expected_bucket_owner: "AccountId",
})

Parameters:

  • options (Hash) (defaults to: {})

    ({})

Options Hash (options):

  • :body (String, StringIO, File)

    Object data.

  • :content_length (Integer)

    Size of the body in bytes. This parameter is useful when the size of the body cannot be determined automatically.

  • :content_md5 (String)

    The base64-encoded 128-bit MD5 digest of the part data. This parameter is auto-populated when using the command from the CLI. This parameter is required if object lock parameters are specified.

    This functionality is not supported for directory buckets.

  • :checksum_algorithm (String)

    Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

    If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.

    This checksum algorithm must be the same for all parts, and it must match the checksum value supplied in the CreateMultipartUpload request.

  • :checksum_crc32 (String)

    This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 32-bit CRC32 checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.

  • :checksum_crc32c (String)

    This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 32-bit CRC32C checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.

  • :checksum_sha1 (String)

    This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 160-bit SHA-1 digest of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.

  • :checksum_sha256 (String)

    This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 256-bit SHA-256 digest of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.

  • :sse_customer_algorithm (String)

    Specifies the algorithm to use when encrypting the object (for example, AES256).

    This functionality is not supported for directory buckets.

  • :sse_customer_key (String)

    Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm header. This must be the same encryption key specified in the initiate multipart upload request.

    This functionality is not supported for directory buckets.

  • :sse_customer_key_md5 (String)

    Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

    This functionality is not supported for directory buckets.

  • :request_payer (String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :expected_bucket_owner (String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

  • (Types::UploadPartOutput)

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/multipart_upload_part.rb', line 646

def upload(options = {})
  options = options.merge(
    bucket: @bucket_name,
    key: @object_key,
    upload_id: @multipart_upload_id,
    part_number: @part_number
  )
  resp = Aws::Plugins::UserAgent.feature('resource') do
    @client.upload_part(options)
  end
  resp.data
end
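
A minimal usage sketch: upload this part from a local file and record the returned ETag for the later CompleteMultipartUpload call; the file path is a placeholder:

File.open("/tmp/part-0001.bin", "rb") do |file|      # placeholder path
  output = multipart_upload_part.upload(body: file)
  output.etag # pair with the part number when completing the upload
end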

#wait_until(options = {}) {|resource| ... } ⇒ Resource

Deprecated.

Use Aws::S3::Client#wait_until instead.

Note:

The waiting operation is performed on a copy. The original resource remains unchanged.

Waiter polls an API operation until a resource enters a desired state.

Basic Usage

The waiter polls until it succeeds, fails by entering a terminal state, or reaches the maximum number of attempts.

# polls in a loop until condition is true
resource.wait_until(options) {|resource| condition}

Example

instance.wait_until(max_attempts:10, delay:5) do |instance|
  instance.state.name == 'running'
end

Configuration

You can configure the maximum number of polling attempts, and the delay (in seconds) between each polling attempt. The waiting condition is set by passing a block to #wait_until:

# poll for ~25 seconds
resource.wait_until(max_attempts:5,delay:5) {|resource|...}

Callbacks

You can be notified before each polling attempt and before each delay. If you throw :success or :failure from these callbacks, it will terminate the waiter.

started_at = Time.now
# poll for 1 hour, instead of a number of attempts
proc = Proc.new do |attempts, response|
  throw :failure if Time.now - started_at > 3600
end

  # disable max attempts
instance.wait_until(before_wait:proc, max_attempts:nil) {...}

Handling Errors

When a waiter is successful, it returns the Resource. When a waiter fails, it raises an error.

begin
  resource.wait_until(...)
rescue Aws::Waiters::Errors::WaiterFailed
  # resource did not enter the desired state in time
end


Parameters:

  • options (Hash) (defaults to: {})

    a customizable set of options

Options Hash (options):

  • :max_attempts (Integer) — default: 10

    Maximum number of attempts

  • :delay (Integer) — default: 10

    Delay between each attempt in seconds

  • :before_attempt (Proc) — default: nil

    Callback invoked before each attempt

  • :before_wait (Proc) — default: nil

    Callback invoked before each wait

Yield Parameters:

  • resource (Resource)

    Resource to be used in the waiting condition.

Returns:

  • (Resource)

    if the waiter was successful

Raises:

  • (Aws::Waiters::Errors::FailureStateError)

    Raised when the waiter terminates because the waiter has entered a state that it will not transition out of, preventing success.

  • (Aws::Waiters::Errors::TooManyAttemptsError)

    Raised when the configured maximum number of attempts have been made, and the waiter is not yet successful.

  • (Aws::Waiters::Errors::UnexpectedError)

    Raised when an error is encountered while polling for a resource that is not expected.

  • (NotImplementedError)

    Raised when the resource does not have a waiter and you invoked #wait_until.



# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/multipart_upload_part.rb', line 251

def wait_until(options = {}, &block)
  self_copy = self.dup
  attempts = 0
  options[:max_attempts] = 10 unless options.key?(:max_attempts)
  options[:delay] ||= 10
  options[:poller] = Proc.new do
    attempts += 1
    if block.call(self_copy)
      [:success, self_copy]
    else
      self_copy.reload unless attempts == options[:max_attempts]
      :retry
    end
  end
  Aws::Plugins::UserAgent.feature('resource') do
    Aws::Waiters::Waiter.new(options).wait({})
  end
end