
Amazon S3 integration

You can transfer files between your RDS for Oracle DB instance and an Amazon S3 bucket. You can use Amazon S3 integration with Oracle Database features such as Oracle Data Pump. For example, you can download Data Pump files from Amazon S3 to your RDS for Oracle DB instance. For more information, see Importing data into Oracle on Amazon RDS.

Note

Your DB instance and your Amazon S3 bucket must be in the same Amazon Web Services Region.

Configuring IAM permissions for RDS for Oracle integration with Amazon S3

For RDS for Oracle to integrate with Amazon S3, your DB instance must have access to an Amazon S3 bucket. The Amazon VPC used by your DB instance doesn't need to provide access to the Amazon S3 endpoints.

RDS for Oracle supports uploading files from a DB instance in one account to an Amazon S3 bucket in a different account. Where additional steps are required, they are noted in the following sections.

Step 1: Create an IAM policy for your Amazon RDS role

In this step, you create an Amazon Identity and Access Management (IAM) policy with the permissions required to transfer files from your Amazon S3 bucket to your RDS DB instance. This step assumes that you have already created an S3 bucket.

Before you create the policy, note the following pieces of information:

  • The Amazon Resource Name (ARN) for your bucket

  • The ARN for your Amazon KMS key, if your bucket uses SSE-KMS or SSE-S3 encryption

    Note

    An RDS for Oracle DB instance can't access Amazon S3 buckets encrypted with SSE-C.

For more information, see Protecting data using server-side encryption in the Amazon Simple Storage Service User Guide.

To create an IAM policy to allow Amazon RDS to access your Amazon S3 bucket
  1. Open the IAM Management Console.

  2. Under Access management, choose Policies.

  3. Choose Create Policy.

  4. On the Visual editor tab, choose Choose a service, and then choose S3.

  5. For Actions, choose Expand all, and then choose the bucket permissions and object permissions required to transfer files from an Amazon S3 bucket to Amazon RDS. For example, do the following:

    • Expand List, and then select ListBucket.

    • Expand Read, and then select GetObject.

    • Expand Write, and then select PutObject and DeleteObject.

    • Expand Permissions management, and then select PutObjectAcl. This permission is necessary if you plan to upload files to a bucket owned by a different account, and this account needs full control of the bucket contents.

    Object permissions are permissions for object operations in Amazon S3. You must grant them for objects in a bucket, not the bucket itself. For more information, see Permissions for object operations.

  6. Choose Resources, and then do the following:

    1. Choose Specific.

    2. For bucket, choose Add ARN. Enter your bucket ARN. The bucket name is filled in automatically. Then choose Add.

    3. If the object resource is shown, either choose Add ARN to add resources manually or choose Any.

      Note

      You can set Amazon Resource Name (ARN) to a more specific ARN value to allow Amazon RDS to access only specific files or folders in an Amazon S3 bucket. For more information about how to define an access policy for Amazon S3, see Managing access permissions to your Amazon S3 resources.

  7. (Optional) Choose Add additional permissions to add resources to the policy. For example, do the following:

    1. If your bucket is encrypted with a custom KMS key, select KMS for the service.

    2. For Manual actions, select the following:

      • Encrypt

      • ReEncrypt from and ReEncrypt to

      • Decrypt

      • DescribeKey

      • GenerateDataKey

    3. For Resources, choose Specific.

    4. For key, choose Add ARN. Enter the ARN of your custom key as the resource, and then choose Add.

      For more information, see Protecting Data Using Server-Side Encryption with KMS keys Stored in Amazon Key Management Service (SSE-KMS) in the Amazon Simple Storage Service User Guide.

    5. If you want Amazon RDS to access other buckets, add the ARNs for these buckets. Optionally, you can also grant access to all buckets and objects in Amazon S3.

  8. Choose Next: Tags and then Next: Review.

  9. For Name, enter a name for your IAM policy, for example rds-s3-integration-policy. You use this name when you create an IAM role to associate with your DB instance. You can also add an optional Description value.

  10. Choose Create policy.

Alternatively, you can use the Amazon CLI to create an Amazon Identity and Access Management (IAM) policy that grants Amazon RDS access to an Amazon S3 bucket. After you create the policy, note the ARN of the policy. You need the ARN for a subsequent step.

Include the appropriate actions in the policy based on the type of access required:

  • GetObject – Required to transfer files from an Amazon S3 bucket to Amazon RDS.

  • ListBucket – Required to transfer files from an Amazon S3 bucket to Amazon RDS.

  • PutObject – Required to transfer files from Amazon RDS to an Amazon S3 bucket.

The following Amazon CLI command creates an IAM policy named rds-s3-integration-policy with these permissions. It grants access to the bucket identified by the your-s3-bucket-arn placeholder.

Example

For Linux, macOS, or Unix:

aws iam create-policy \
   --policy-name rds-s3-integration-policy \
   --policy-document '{
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "s3integration",
         "Action": [
           "s3:GetObject",
           "s3:ListBucket",
           "s3:PutObject"
         ],
         "Effect": "Allow",
         "Resource": [
           "arn:aws-cn:s3:::your-s3-bucket-arn",
           "arn:aws-cn:s3:::your-s3-bucket-arn/*"
         ]
       }
     ]
   }'

The following example includes permissions for custom KMS keys.

aws iam create-policy \
   --policy-name rds-s3-integration-policy \
   --policy-document '{
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "s3integration",
         "Action": [
           "s3:GetObject",
           "s3:ListBucket",
           "s3:PutObject",
           "kms:Decrypt",
           "kms:Encrypt",
           "kms:ReEncryptFrom",
           "kms:ReEncryptTo",
           "kms:GenerateDataKey",
           "kms:DescribeKey"
         ],
         "Effect": "Allow",
         "Resource": [
           "arn:aws-cn:s3:::your-s3-bucket-arn",
           "arn:aws-cn:s3:::your-s3-bucket-arn/*",
           "arn:aws-cn:kms:::your-kms-arn"
         ]
       }
     ]
   }'

For Windows:

aws iam create-policy ^
   --policy-name rds-s3-integration-policy ^
   --policy-document '{
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "s3integration",
         "Action": [
           "s3:GetObject",
           "s3:ListBucket",
           "s3:PutObject"
         ],
         "Effect": "Allow",
         "Resource": [
           "arn:aws-cn:s3:::your-s3-bucket-arn",
           "arn:aws-cn:s3:::your-s3-bucket-arn/*"
         ]
       }
     ]
   }'

The following example includes permissions for custom KMS keys.

aws iam create-policy ^
   --policy-name rds-s3-integration-policy ^
   --policy-document '{
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "s3integration",
         "Action": [
           "s3:GetObject",
           "s3:ListBucket",
           "s3:PutObject",
           "kms:Decrypt",
           "kms:Encrypt",
           "kms:ReEncryptFrom",
           "kms:ReEncryptTo",
           "kms:GenerateDataKey",
           "kms:DescribeKey"
         ],
         "Effect": "Allow",
         "Resource": [
           "arn:aws-cn:s3:::your-s3-bucket-arn",
           "arn:aws-cn:s3:::your-s3-bucket-arn/*",
           "arn:aws-cn:kms:::your-kms-arn"
         ]
       }
     ]
   }'
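
If you plan to upload files from your DB instance to a bucket owned by a different account that requires full control of the bucket contents, the policy also needs the s3:PutObjectAcl action, as noted in the console steps earlier. The following command is a minimal sketch of that variant, shown in the Linux, macOS, or Unix format; the policy name and bucket ARN are the same placeholders used above.

aws iam create-policy \
   --policy-name rds-s3-integration-policy \
   --policy-document '{
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "s3integration",
         "Action": [
           "s3:GetObject",
           "s3:ListBucket",
           "s3:PutObject",
           "s3:PutObjectAcl"
         ],
         "Effect": "Allow",
         "Resource": [
           "arn:aws-cn:s3:::your-s3-bucket-arn",
           "arn:aws-cn:s3:::your-s3-bucket-arn/*"
         ]
       }
     ]
   }'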

Step 2: (Optional) Create an IAM policy for your Amazon S3 bucket

This step is necessary only in the following conditions:

  • You plan to upload files to an Amazon S3 bucket from one account (account A) and access them from a different account (account B).

  • Account B owns the bucket.

  • Account B needs full control of objects loaded into the bucket.

If the preceding conditions don't apply to you, skip to Step 3: Create an IAM role for your DB instance and attach your policy.

To create your bucket policy, make sure you have the following:

  • The account ID for account A

  • The user name for account A

  • The ARN value for the Amazon S3 bucket in account B

To create or edit a bucket policy
  1. Sign in to the Amazon Web Services Management Console and open the Amazon S3 console at https://console.amazonaws.cn/s3/.

  2. In the Buckets list, choose the name of the bucket that you want to create a bucket policy for or whose bucket policy you want to edit.

  3. Choose Permissions.

  4. Under Bucket policy, choose Edit. This opens the Edit bucket policy page.

  5. On the Edit bucket policy page, explore Policy examples in the Amazon S3 User Guide, choose Policy generator to generate a policy automatically, or edit the JSON in the Policy section.

    If you choose Policy generator, the Amazon Policy Generator opens in a new window:

    1. On the Amazon Policy Generator page, in Select Type of Policy, choose S3 Bucket Policy.

    2. Add a statement by entering the information in the provided fields, and then choose Add Statement. Repeat for as many statements as you would like to add. For more information about these fields, see the IAM JSON policy elements reference in the IAM User Guide.

      Note

      For convenience, the Edit bucket policy page displays the Bucket ARN (Amazon Resource Name) of the current bucket above the Policy text field. You can copy this ARN for use in the statements on the Amazon Policy Generator page.

    3. After you finish adding statements, choose Generate Policy.

    4. Copy the generated policy text, choose Close, and return to the Edit bucket policy page in the Amazon S3 console.

  6. In the Policy box, edit the existing policy or paste the bucket policy from the Policy generator. Make sure to resolve security warnings, errors, general warnings, and suggestions before you save your policy.

    { "Version": "2012-10-17", "Statement": [ { "Sid": "Example permissions", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::account-A-ID:account-A-user" }, "Action": [ "s3:PutObject", "s3:PutObjectAcl" ], "Resource": [ "arn:aws:s3:::account-B-bucket-arn", "arn:aws:s3:::account-B-bucket-arn/*" ] } ] }
  7. Choose Save changes, which returns you to the Bucket Permissions page.

Step 3: Create an IAM role for your DB instance and attach your policy

This step assumes that you have created the IAM policy in Step 1: Create an IAM policy for your Amazon RDS role. In this step, you create a role for your RDS for Oracle DB instance and then attach your policy to the role.

To create an IAM role to allow Amazon RDS to access an Amazon S3 bucket
  1. Open the IAM Management Console.

  2. In the navigation pane, choose Roles.

  3. Choose Create role.

  4. Choose Amazon service.

  5. For Use cases for other Amazon services, choose RDS, and then choose RDS – Add Role to Database. Then choose Next.

  6. For Search under Permissions policies, enter the name of the IAM policy you created in Step 1: Create an IAM policy for your Amazon RDS role, and select the policy when it appears in the list. Then choose Next.

  7. For Role name, enter a name for your IAM role, for example, rds-s3-integration-role. You can also add an optional Description value.

  8. Choose Create role.

To create a role and attach your policy to it by using the Amazon CLI
  1. Create an IAM role that Amazon RDS can assume on your behalf to access your Amazon S3 buckets.

    We recommend using the aws:SourceArn and aws:SourceAccount global condition context keys in resource-based trust relationships to limit the service's permissions to a specific resource. This is the most effective way to protect against the confused deputy problem.

    You might use both global condition context keys and have the aws:SourceArn value contain the account ID. In this case, the aws:SourceAccount value and the account in the aws:SourceArn value must use the same account ID when used in the same statement.

    • Use aws:SourceArn if you want cross-service access for a single resource.

    • Use aws:SourceAccount if you want to allow any resource in that account to be associated with the cross-service use.

    In the trust relationship, make sure to use the aws:SourceArn global condition context key with the full Amazon Resource Name (ARN) of the resources accessing the role.

    The following Amazon CLI command creates the role named rds-s3-integration-role for this purpose.

    Example

    For Linux, macOS, or Unix:

    aws iam create-role \
       --role-name rds-s3-integration-role \
       --assume-role-policy-document '{
         "Version": "2012-10-17",
         "Statement": [
           {
             "Effect": "Allow",
             "Principal": {
               "Service": "rds.amazonaws.com"
             },
             "Action": "sts:AssumeRole",
             "Condition": {
               "StringEquals": {
                 "aws:SourceAccount": "my_account_ID",
                 "aws:SourceArn": "arn:aws:rds:Region:my_account_ID:db:dbname"
               }
             }
           }
         ]
       }'

    For Windows:

    aws iam create-role ^
       --role-name rds-s3-integration-role ^
       --assume-role-policy-document '{
         "Version": "2012-10-17",
         "Statement": [
           {
             "Effect": "Allow",
             "Principal": {
               "Service": "rds.amazonaws.com"
             },
             "Action": "sts:AssumeRole",
             "Condition": {
               "StringEquals": {
                 "aws:SourceAccount": "my_account_ID",
                 "aws:SourceArn": "arn:aws:rds:Region:my_account_ID:db:dbname"
               }
             }
           }
         ]
       }'

    For more information, see Creating a role to delegate permissions to an IAM user in the IAM User Guide.

  2. After the role is created, note the ARN of the role. You need the ARN for a subsequent step.

  3. Attach the policy you created to the role you created.

    The following Amazon CLI command attaches the policy to the role named rds-s3-integration-role.

    Example

    For Linux, macOS, or Unix:

    aws iam attach-role-policy \
        --policy-arn your-policy-arn \
        --role-name rds-s3-integration-role

    For Windows:

    aws iam attach-role-policy ^
        --policy-arn your-policy-arn ^
        --role-name rds-s3-integration-role

    Replace your-policy-arn with the policy ARN that you noted in a previous step.
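
    To confirm that the policy is attached, you can optionally list the policies attached to the role. The following command uses the example role name from this step and works in any shell.

    aws iam list-attached-role-policies --role-name rds-s3-integration-role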

Step 4: Associate your IAM role with your RDS for Oracle DB instance

The last step in configuring permissions for Amazon S3 integration is associating your IAM role with your DB instance. Note the following requirements:

  • You must have access to an IAM role with the required Amazon S3 permissions policy attached to it.

  • You can only associate one IAM role with your RDS for Oracle DB instance at a time.

  • Your DB instance must be in the Available state.

To associate your IAM role with your RDS for Oracle DB instance
  1. Sign in to the Amazon Web Services Management Console and open the Amazon RDS console at https://console.amazonaws.cn/rds/.

  2. Choose Databases from the navigation pane.

  3. Choose the RDS for Oracle DB instance name to display its details.

  4. On the Connectivity & security tab, scroll down to the Manage IAM roles section at the bottom of the page.

  5. For Add IAM roles to this instance, choose the role that you created in Step 3: Create an IAM role for your DB instance and attach your policy.

  6. For Feature, choose S3_INTEGRATION.

    
  7. Choose Add role.

Alternatively, you can use the Amazon CLI to associate the role. The following command adds the role to an Oracle DB instance named mydbinstance.

Example

For Linux, macOS, or Unix:

aws rds add-role-to-db-instance \
    --db-instance-identifier mydbinstance \
    --feature-name S3_INTEGRATION \
    --role-arn your-role-arn

For Windows:

aws rds add-role-to-db-instance ^
    --db-instance-identifier mydbinstance ^
    --feature-name S3_INTEGRATION ^
    --role-arn your-role-arn

Replace your-role-arn with the role ARN that you noted in a previous step. S3_INTEGRATION must be specified for the --feature-name option.
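
To verify the association, you can optionally describe the DB instance and inspect its associated roles. The following command is a sketch that uses the example instance name; the output should list your role ARN with the S3_INTEGRATION feature name and, after the association completes, a status of ACTIVE.

aws rds describe-db-instances --db-instance-identifier mydbinstance --query "DBInstances[0].AssociatedRoles"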

Adding the Amazon S3 integration option

To integrate Amazon RDS for Oracle with Amazon S3, your DB instance must be associated with an option group that includes the S3_INTEGRATION option.

To configure an option group for Amazon S3 integration
  1. Create a new option group or identify an existing option group to which you can add the S3_INTEGRATION option.

    For information about creating an option group, see Creating an option group.

  2. Add the S3_INTEGRATION option to the option group.

    For information about adding an option to an option group, see Adding an option to an option group.

  3. Create a new RDS for Oracle DB instance and associate the option group with it, or modify an RDS for Oracle DB instance to associate the option group with it.

    For information about creating a DB instance, see Creating an Amazon RDS DB instance.

    For information about modifying a DB instance, see Modifying an Amazon RDS DB instance.

To configure an option group for Amazon S3 integration by using the Amazon CLI
  1. Create a new option group or identify an existing option group to which you can add the S3_INTEGRATION option.

    For information about creating an option group, see Creating an option group.

  2. Add the S3_INTEGRATION option to the option group.

    For example, the following Amazon CLI command adds the S3_INTEGRATION option to an option group named myoptiongroup.

    Example

    For Linux, macOS, or Unix:

    aws rds add-option-to-option-group \
        --option-group-name myoptiongroup \
        --options OptionName=S3_INTEGRATION,OptionVersion=1.0

    For Windows:

    aws rds add-option-to-option-group ^
        --option-group-name myoptiongroup ^
        --options OptionName=S3_INTEGRATION,OptionVersion=1.0
  3. Create a new RDS for Oracle DB instance and associate the option group with it, or modify an RDS for Oracle DB instance to associate the option group with it.

    For information about creating a DB instance, see Creating an Amazon RDS DB instance.

    For information about modifying an RDS for Oracle DB instance, see Modifying an Amazon RDS DB instance.
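
    To confirm that the option was added, you can optionally describe the option group. The following command uses the example option group name.

    aws rds describe-option-groups --option-group-name myoptiongroup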

Transferring files between Amazon RDS for Oracle and an Amazon S3 bucket

To transfer files between an RDS for Oracle DB instance and an Amazon S3 bucket, you can use the Amazon RDS package rdsadmin_s3_tasks. You can compress files with GZIP when uploading them, and decompress them when downloading.

Requirements and limitations for file transfers

Before transferring files between your DB instance and an Amazon S3 bucket, note the following:

  • The rdsadmin_s3_tasks package transfers files located in a single directory. You can't include subdirectories in a transfer.

  • The maximum object size in an Amazon S3 bucket is 5 TB.

  • Tasks created by rdsadmin_s3_tasks run asynchronously.

  • You can upload files from the Data Pump directory, such as DATA_PUMP_DIR, or any user-created directory. You can't upload files from a directory used by Oracle background processes, such as the adump, bdump, or trace directories.

  • The download limit is 2000 files per procedure call for download_from_s3. If you need to download more than 2000 files from Amazon S3, split your download into separate actions, with no more than 2000 files per procedure call.

  • If a file exists in your download folder, and you attempt to download a file with the same name, download_from_s3 skips the download. To remove a file from the download directory, use the PL/SQL procedure UTL_FILE.FREMOVE.
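
Before you upload files, or after you download them, you might want to see which files are present in a directory on your DB instance. The following query is a minimal sketch that assumes the rdsadmin.rds_file_util.listdir procedure is available on your DB instance; it lists the files in DATA_PUMP_DIR.

-- List the files currently in the DATA_PUMP_DIR directory object
SELECT *
  FROM TABLE(rdsadmin.rds_file_util.listdir(p_directory => 'DATA_PUMP_DIR'))
 ORDER BY mtime;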

Uploading files from your RDS for Oracle DB instance to an Amazon S3 bucket

To upload files from your DB instance to an Amazon S3 bucket, use the procedure rdsadmin.rdsadmin_s3_tasks.upload_to_s3. For example, you can upload Oracle Recovery Manager (RMAN) backup files or Oracle Data Pump files. For more information about working with objects, see the Amazon Simple Storage Service User Guide. For more information about performing RMAN backups, see Performing common RMAN tasks for Oracle DB instances.

The rdsadmin.rdsadmin_s3_tasks.upload_to_s3 procedure has the following parameters.

  • p_bucket_name (VARCHAR2, required) – The name of the Amazon S3 bucket to upload files to.

  • p_directory_name (VARCHAR2, required) – The name of the Oracle directory object to upload files from. The directory can be any user-created directory object or the Data Pump directory, such as DATA_PUMP_DIR. You can't upload files from a directory used by background processes, such as adump, bdump, and trace.

    Note

    You can only upload files from the specified directory. You can't upload files in subdirectories in the specified directory.

  • p_s3_prefix (VARCHAR2, required) – An Amazon S3 file name prefix that files are uploaded to. An empty prefix uploads all files to the top level in the specified Amazon S3 bucket and doesn't add a prefix to the file names.

    For example, if the prefix is folder_1/oradb, files are uploaded to folder_1. In this case, the oradb prefix is added to each file.

  • p_prefix (VARCHAR2, required) – A file name prefix that file names must match to be uploaded. An empty prefix uploads all files in the specified directory.

  • p_compression_level (NUMBER, optional, default 0) – The level of GZIP compression. Valid values range from 0 to 9:

    • 0 – No compression

    • 1 – Fastest compression

    • 9 – Highest compression

  • p_bucket_owner_full_control (VARCHAR2, optional) – The access control setting for the bucket. The only valid values are null or FULL_CONTROL. This setting is required only if you upload files from one account (account A) into a bucket owned by a different account (account B), and account B needs full control of the files.

The return value for the rdsadmin.rdsadmin_s3_tasks.upload_to_s3 procedure is a task ID.

The following example uploads all of the files in the DATA_PUMP_DIR directory to the Amazon S3 bucket named mys3bucket. The files aren't compressed.

SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
      p_bucket_name    =>  'mys3bucket',
      p_prefix         =>  '',
      p_s3_prefix      =>  '',
      p_directory_name =>  'DATA_PUMP_DIR')
   AS TASK_ID FROM DUAL;

The following example uploads all of the files with the prefix db in the DATA_PUMP_DIR directory to the Amazon S3 bucket named mys3bucket. Amazon RDS applies the highest level of GZIP compression to the files.

SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
      p_bucket_name       =>  'mys3bucket',
      p_prefix            =>  'db',
      p_s3_prefix         =>  '',
      p_directory_name    =>  'DATA_PUMP_DIR',
      p_compression_level =>  9)
   AS TASK_ID FROM DUAL;

The following example uploads all of the files in the DATA_PUMP_DIR directory to the Amazon S3 bucket named mys3bucket. The files are uploaded to a dbfiles folder. In this example, the GZIP compression level is 1, which is the fastest level of compression.

SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
      p_bucket_name       =>  'mys3bucket',
      p_prefix            =>  '',
      p_s3_prefix         =>  'dbfiles/',
      p_directory_name    =>  'DATA_PUMP_DIR',
      p_compression_level =>  1)
   AS TASK_ID FROM DUAL;

The following example uploads all of the files in the DATA_PUMP_DIR directory to the Amazon S3 bucket named mys3bucket. The files are uploaded to a dbfiles folder and ora is added to the beginning of each file name. No compression is applied.

SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
      p_bucket_name    =>  'mys3bucket',
      p_prefix         =>  '',
      p_s3_prefix      =>  'dbfiles/ora',
      p_directory_name =>  'DATA_PUMP_DIR')
   AS TASK_ID FROM DUAL;

The following example assumes that the command is run in account A, but account B requires full control of the bucket contents. The command rdsadmin_s3_tasks.upload_to_s3 transfers all files in the DATA_PUMP_DIR directory to the bucket named s3bucketOwnedByAccountB. Access control is set to FULL_CONTROL so that account B can access the files in the bucket. The GZIP compression level is 6, which balances speed and file size.

SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
      p_bucket_name               =>  's3bucketOwnedByAccountB',
      p_prefix                    =>  '',
      p_s3_prefix                 =>  '',
      p_directory_name            =>  'DATA_PUMP_DIR',
      p_bucket_owner_full_control =>  'FULL_CONTROL',
      p_compression_level         =>  6)
   AS TASK_ID FROM DUAL;

In each example, the SELECT statement returns the ID of the task in a VARCHAR2 data type.

You can view the result by displaying the task's output file.

SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-task-id.log'));

Replace task-id with the task ID returned by the procedure.

Note

Tasks are executed asynchronously.

Downloading files from an Amazon S3 bucket to an Oracle DB instance

To download files from an Amazon S3 bucket to an RDS for Oracle instance, use the Amazon RDS procedure rdsadmin.rdsadmin_s3_tasks.download_from_s3.

The download_from_s3 procedure has the following parameters.

  • p_bucket_name (VARCHAR2, required) – The name of the Amazon S3 bucket to download files from.

  • p_directory_name (VARCHAR2, required) – The name of the Oracle directory object to download files to. The directory can be any user-created directory object or the Data Pump directory, such as DATA_PUMP_DIR.

  • p_error_on_zero_downloads (VARCHAR2, optional, default FALSE) – A flag that determines whether the task raises an error when no objects in the Amazon S3 bucket match the prefix. If this parameter is not set or set to FALSE (the default), the task prints a message that no objects were found, but doesn't raise an exception or fail. If this parameter is TRUE, the task raises an exception and fails.

    Examples of prefix specifications that can fail match tests are spaces in prefixes, as in ' import/test9.log', and case mismatches, as in test9.log and test9.LOG.

  • p_s3_prefix (VARCHAR2, required) – A file name prefix that file names must match to be downloaded. An empty prefix downloads all of the top-level files in the specified Amazon S3 bucket, but not the files in folders in the bucket.

    The procedure downloads Amazon S3 objects only from the first-level folder that matches the prefix. Nested directory structures matching the specified prefix are not downloaded.

    For example, suppose that an Amazon S3 bucket has the folder structure folder_1/folder_2/folder_3. You specify the 'folder_1/folder_2/' prefix. In this case, only the files in folder_2 are downloaded, not the files in folder_1 or folder_3.

    If, instead, you specify the 'folder_1/folder_2' prefix, all files in folder_1 that match the 'folder_2' prefix are downloaded, and no files in folder_2 are downloaded.

  • p_decompression_format (VARCHAR2, optional) – The decompression format. Valid values are NONE for no decompression and GZIP for decompression.

The return value for the rdsadmin.rdsadmin_s3_tasks.download_from_s3 procedure is a task ID.

The following example downloads all files in the Amazon S3 bucket named mys3bucket to the DATA_PUMP_DIR directory. The files aren't compressed, so no decompression is applied.

SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
      p_bucket_name    =>  'mys3bucket',
      p_directory_name =>  'DATA_PUMP_DIR')
   AS TASK_ID FROM DUAL;

The following example downloads all of the files with the prefix db in the Amazon S3 bucket named mys3bucket to the DATA_PUMP_DIR directory. The files are compressed with GZIP, so decompression is applied. The parameter p_error_on_zero_downloads turns on prefix error checking, so if the prefix doesn't match any files in the bucket, the task raises an exception and fails.

SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
      p_bucket_name             =>  'mys3bucket',
      p_s3_prefix               =>  'db',
      p_directory_name          =>  'DATA_PUMP_DIR',
      p_decompression_format    =>  'GZIP',
      p_error_on_zero_downloads =>  'TRUE')
   AS TASK_ID FROM DUAL;

The following example downloads all of the files in the folder myfolder/ in the Amazon S3 bucket named mys3bucket to the DATA_PUMP_DIR directory. Use the p_s3_prefix parameter to specify the Amazon S3 folder. The uploaded files are compressed with GZIP, but aren't decompressed during the download.

SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
      p_bucket_name          =>  'mys3bucket',
      p_s3_prefix            =>  'myfolder/',
      p_directory_name       =>  'DATA_PUMP_DIR',
      p_decompression_format =>  'NONE')
   AS TASK_ID FROM DUAL;

The following example downloads the file mydumpfile.dmp in the Amazon S3 bucket named mys3bucket to the DATA_PUMP_DIR directory. No decompression is applied.

SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
      p_bucket_name    =>  'mys3bucket',
      p_s3_prefix      =>  'mydumpfile.dmp',
      p_directory_name =>  'DATA_PUMP_DIR')
   AS TASK_ID FROM DUAL;

In each example, the SELECT statement returns the ID of the task in a VARCHAR2 data type.

You can view the result by displaying the task's output file.

SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-task-id.log'));

Replace task-id with the task ID returned by the procedure.

Note

Tasks are executed asynchronously.

You can use the UTL_FILE.FREMOVE Oracle procedure to remove files from a directory. For more information, see FREMOVE procedure in the Oracle documentation.
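
For example, the following anonymous PL/SQL block removes a file named mydumpfile.dmp (a placeholder name) from the DATA_PUMP_DIR directory.

BEGIN
  -- Remove the placeholder file mydumpfile.dmp from DATA_PUMP_DIR
  UTL_FILE.FREMOVE('DATA_PUMP_DIR', 'mydumpfile.dmp');
END;
/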

Monitoring the status of a file transfer

File transfer tasks publish Amazon RDS events when they start and when they complete. The event message contains the task ID for the file transfer. For information about viewing events, see Viewing Amazon RDS events.
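
For example, you can list recent events for your DB instance with the Amazon CLI. The following command is a sketch that assumes a DB instance named mydbinstance and looks back 120 minutes.

aws rds describe-events --source-type db-instance --source-identifier mydbinstance --duration 120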

You can view the status of an ongoing task in a bdump file. The bdump files are located in the /rdsdbdata/log/trace directory. Each bdump file name is in the following format.

dbtask-task-id.log

Replace task-id with the ID of the task that you want to monitor.

Note

Tasks are executed asynchronously.

You can use the rdsadmin.rds_file_util.read_text_file stored procedure to view the contents of bdump files. For example, the following query returns the contents of the dbtask-1234567890123-1234.log bdump file.

SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-1234567890123-1234.log'));

The following sample shows the log file for a failed transfer.

TASK_ID
--------------------
1234567890123-1234

TEXT
--------------------
2023-04-17 18:21:33.993 UTC [INFO ] File #1: Uploading the file /rdsdbdata/datapump/A123B4CDEF567890G1234567890H1234/sample.dmp to Amazon S3 with bucket name mys3bucket and key sample.dmp.
2023-04-17 18:21:34.188 UTC [ERROR] RDS doesn't have permission to write to Amazon S3 bucket name mys3bucket and key sample.dmp.
2023-04-17 18:21:34.189 UTC [INFO ] The task failed.

Troubleshooting Amazon S3 integration

For troubleshooting tips, see the Amazon re:Post article How do I troubleshoot issues when I integrate Amazon RDS for Oracle with Amazon S3?.

Removing the Amazon S3 integration option

You can remove the Amazon S3 integration option from a DB instance.

To remove the Amazon S3 integration option from a DB instance, do one of the following:

  • To remove the Amazon S3 integration option from multiple DB instances, remove the S3_INTEGRATION option from the option group to which the DB instances belong. This change affects all DB instances that use the option group. For more information, see Removing an option from an option group.

  • To remove the Amazon S3 integration option from a single DB instance, modify the instance and specify a different option group that doesn't include the S3_INTEGRATION option. You can specify the default (empty) option group or a different custom option group. For more information, see Modifying an Amazon RDS DB instance.
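
For the option group approach, the following Amazon CLI command is a minimal sketch that removes the S3_INTEGRATION option from the example option group named myoptiongroup and applies the change immediately. Removing the option affects every DB instance that uses this option group.

aws rds remove-option-from-option-group --option-group-name myoptiongroup --options S3_INTEGRATION --apply-immediately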