

# Getting started with Amazon S3
<a name="GetStartedWithS3"></a>

You can get started with Amazon S3 by working with buckets and objects. A *bucket* is a container for objects. An *object* is a file and any metadata that describes that file.

To store an object in Amazon S3, you create a bucket and then upload the object to the bucket. When the object is in the bucket, you can open it, download it, and move it. When you no longer need an object or a bucket, you can clean up your resources.

With Amazon S3, you pay only for what you use. For more information about Amazon S3 features and pricing, see [Amazon S3](https://www.amazonaws.cn/s3). If you are a new Amazon S3 customer, you can get started with Amazon S3 for free. For more information, see [Amazon Free Tier](https://www.amazonaws.cn/free).

**Note**  
For more information about using the Amazon S3 Express One Zone storage class with directory buckets, see [S3 Express One Zone](directory-bucket-high-performance.md#s3-express-one-zone) and [Working with directory buckets](directory-buckets-overview.md).

**Prerequisites**  
Before you begin, confirm that you've completed the steps in [Setting up Amazon S3](#setting-up-s3).

## Setting up Amazon S3
<a name="setting-up-s3"></a>

When you sign up for Amazon, your Amazon Web Services account is automatically signed up for all services in Amazon, including Amazon S3. You are charged only for the services that you use.

To set up Amazon S3, use the steps in the following sections.

When you sign up for Amazon and set up Amazon S3, you can optionally change the display language in the Amazon Web Services Management Console. For more information, see [Changing the language of the Amazon Web Services Management Console](https://docs.amazonaws.cn/awsconsolehelpdocs/latest/gsg/getting-started.html#change-language) in the *Amazon Web Services Management Console Getting Started Guide*.

**Topics**
+ [Sign up for an Amazon Web Services account](#sign-up-for-aws)
+ [Secure IAM users](#secure-an-admin)

### Sign up for an Amazon Web Services account
<a name="sign-up-for-aws"></a>

If you do not have an Amazon Web Services account, use the following procedure to create one.

**To sign up for Amazon Web Services**

1. Open [http://www.amazonaws.cn/](http://www.amazonaws.cn/) and choose **Sign Up**.

1. Follow the on-screen instructions.

Amazon sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to [http://www.amazonaws.cn/](http://www.amazonaws.cn/) and choosing **My Account**.

### Secure IAM users
<a name="secure-an-admin"></a>

After you sign up for an Amazon Web Services account, safeguard your administrative user by turning on multi-factor authentication (MFA). For instructions, see [Enable a virtual MFA device for an IAM user (console)](https://docs.amazonaws.cn/IAM/latest/UserGuide/id_credentials_mfa_enable_virtual.html#enable-virt-mfa-for-iam-user) in the *IAM User Guide*.

To give other users access to your Amazon Web Services account resources, create IAM users. To secure your IAM users, turn on MFA and grant them only the permissions needed to perform their tasks.

For more information about creating and securing IAM users, see the following topics in the *IAM User Guide*: 
+ [Creating an IAM user in your Amazon Web Services account](https://docs.amazonaws.cn//IAM/latest/UserGuide/id_users_create.html)
+ [Access management for Amazon resources](https://docs.amazonaws.cn/IAM/latest/UserGuide/access.html)
+ [Example IAM identity-based policies](https://docs.amazonaws.cn/IAM/latest/UserGuide/access_policies_examples.html)

## Step 1: Create your first S3 bucket
<a name="creating-bucket"></a>

After you sign up for Amazon, you're ready to create a bucket in Amazon S3 using the Amazon Web Services Management Console. Every object in Amazon S3 is stored in a *bucket*. Before you can store data in Amazon S3, you must create a bucket. 

**Note**  
For more information about using the Amazon S3 Express One Zone storage class with directory buckets, see [S3 Express One Zone](directory-bucket-high-performance.md#s3-express-one-zone) and [Working with directory buckets](directory-buckets-overview.md).

**Note**  
You are not charged for creating a bucket. You are charged only for storing objects in the bucket and for transferring objects in and out of the bucket. The charges that you incur through following the examples in this guide are minimal (less than \$1). For more information about storage charges, see [Amazon S3 pricing](https://www.amazonaws.cn/s3/pricing/).

1. Sign in to the Amazon Web Services Management Console and open the Amazon S3 console at [https://console.amazonaws.cn/s3/](https://console.amazonaws.cn/s3/).

1. In the navigation bar at the top of the page, choose the name of the currently displayed Amazon Web Services Region. Then choose the Region in which you want to create a bucket. 
**Note**  
After you create a bucket, you can't change its Region. 
To minimize latency and costs and address regulatory requirements, choose a Region close to you. Objects stored in a Region never leave that Region unless you explicitly transfer them to another Region. For a list of Amazon S3 Amazon Web Services Regions, see [Amazon Web Services service endpoints](https://docs.amazonaws.cn/general/latest/gr/rande.html#s3_region) in the *Amazon Web Services General Reference*.

1. In the left navigation pane, choose **General purpose buckets**.

1. Choose **Create bucket**. The **Create bucket** page opens.

1. For **Bucket name**, enter a name for your bucket.

   The bucket name must:
   + Be unique within a partition. A partition is a grouping of Regions. Amazon currently has three partitions: `aws` (commercial Regions), `aws-cn` (China Regions), and `aws-us-gov` (Amazon GovCloud (US) Regions).
   + Be between 3 and 63 characters long.
   + Consist only of lowercase letters, numbers, periods (`.`), and hyphens (`-`). For best compatibility, we recommend that you avoid using periods (`.`) in bucket names, except for buckets that are used only for static website hosting.
   + Begin and end with a letter or number. 
   + For a complete list of bucket-naming rules, see [General purpose bucket naming rules](bucketnamingrules.md).
**Important**  
After you create the bucket, you can't change its name. 
Don't include sensitive information in the bucket name. The bucket name is visible in the URLs that point to the objects in the bucket.
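
   The rules above are easy to check before you submit a name. The following is a minimal, unofficial sketch in Python that validates a candidate name against the basic rules listed here. It doesn't cover every rule in the complete list, and it can't check partition-wide uniqueness, which is confirmed only when the bucket is actually created.

   ```python
   import re

   def is_valid_bucket_name(name: str) -> bool:
       """Check a candidate S3 bucket name against the basic naming rules
       listed above. Partition-wide uniqueness can be verified only by
       attempting to create the bucket."""
       # Must be between 3 and 63 characters long.
       if not 3 <= len(name) <= 63:
           return False
       # Only lowercase letters, numbers, periods, and hyphens are allowed.
       if not re.fullmatch(r"[a-z0-9.-]+", name):
           return False
       # Must begin and end with a letter or number.
       if not (name[0].isalnum() and name[-1].isalnum()):
           return False
       return True

   print(is_valid_bucket_name("amzn-s3-demo-bucket"))  # True
   print(is_valid_bucket_name("My_Bucket"))            # False (uppercase and underscore)
   ```

   A name that fails these checks is rejected by the `CreateBucket` call with an `InvalidBucketName` error, so checking locally first saves a round trip.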

1. (Optional) Under **General configuration**, you can choose to copy an existing bucket's settings to your new bucket. If you don't want to copy the settings of an existing bucket, skip to the next step.
**Note**  
This option:
+ Is available only in the Amazon S3 console; it isn't available in the Amazon CLI.
+ Doesn't copy the bucket policy from the existing bucket to the new bucket.

    To copy an existing bucket's settings, under **Copy settings from existing bucket**, select **Choose bucket**. The **Choose bucket** window opens. Find the bucket with the settings that you want to copy, and select **Choose bucket**. The **Choose bucket** window closes, and the **Create bucket** window reopens.

   Under **Copy settings from existing bucket**, you now see the name of the bucket that you selected. The settings of your new bucket now match the settings of the bucket that you selected. If you want to remove the copied settings, choose **Restore defaults**. Review the remaining bucket settings on the **Create bucket** page. If you don't want to make any changes, you can skip to the final step. 

1. Under **Object Ownership**, to disable or enable ACLs and control ownership of objects uploaded to your bucket, choose one of the following settings:

**ACLs disabled**
   +  **Bucket owner enforced (default)** – ACLs are disabled, and the bucket owner automatically owns and has full control over every object in the general purpose bucket. ACLs no longer affect access permissions to data in the S3 general purpose bucket. The bucket uses policies exclusively to define access control.

     By default, ACLs are disabled. A majority of modern use cases in Amazon S3 no longer require the use of ACLs. We recommend that you keep ACLs disabled, except in circumstances where you must control access for each object individually. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

**ACLs enabled**
   + **Bucket owner preferred** – The bucket owner owns and has full control over new objects that other accounts write to the bucket with the `bucket-owner-full-control` canned ACL. 

     If you apply the **Bucket owner preferred** setting, you can require that all Amazon S3 uploads include the `bucket-owner-full-control` canned ACL by [adding a bucket policy](ensure-object-ownership.md#ensure-object-ownership-bucket-policy) that allows only object uploads that use this ACL.
   + **Object writer** – The Amazon Web Services account that uploads an object owns the object, has full control over it, and can grant other users access to it through ACLs.
**Note**  
The default setting is **Bucket owner enforced**. To apply the default setting and keep ACLs disabled, only the `s3:CreateBucket` permission is needed. To enable ACLs, you must have the `s3:PutBucketOwnershipControls` permission.

1. Under **Block Public Access settings for this bucket**, choose the Block Public Access settings that you want to apply to the bucket. 

   By default, all four Block Public Access settings are enabled. We recommend that you keep all settings enabled, unless you know that you need to turn off one or more of them for your specific use case. For more information about blocking public access, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).
**Note**  
To enable all Block Public Access settings, only the `s3:CreateBucket` permission is required. To turn off any Block Public Access settings, you must have the `s3:PutBucketPublicAccessBlock` permission.

1. (Optional) By default, **Bucket Versioning** is disabled. Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your bucket. With versioning, you can recover more easily from both unintended user actions and application failures. For more information about versioning, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md). 

   To enable versioning on your bucket, choose **Enable**. 

1. (Optional) Under **Tags**, you can choose to add tags to your bucket. With Amazon cost allocation, you can use bucket tags to annotate billing for your use of a bucket. A tag is a key-value pair that represents a label that you assign to a bucket. For more information, see [Using cost allocation S3 bucket tags](CostAllocTagging.md).

   To add a bucket tag, enter a **Key** and optionally a **Value** and choose **Add Tag**.

1. To configure **Default encryption**, under **Encryption type**, choose one of the following: 
   + **Server-side encryption with Amazon S3 managed keys (SSE-S3)**
   + **Server-side encryption with Amazon Key Management Service keys (SSE-KMS)**
   + **Dual-layer server-side encryption with Amazon Key Management Service (Amazon KMS) keys (DSSE-KMS)**
**Important**  
If you use the SSE-KMS or DSSE-KMS option for your default encryption configuration, you are subject to the requests per second (RPS) quota of Amazon KMS. For more information about Amazon KMS quotas and how to request a quota increase, see [Quotas](https://docs.amazonaws.cn/kms/latest/developerguide/limits.html) in the *Amazon Key Management Service Developer Guide*.

   Buckets and new objects are encrypted by using server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption configuration. For more information about default encryption, see [Setting default server-side encryption behavior for Amazon S3 buckets](bucket-encryption.md). For more information about SSE-S3, see [Using server-side encryption with Amazon S3 managed keys (SSE-S3)](UsingServerSideEncryption.md).

   For more information about using server-side encryption to encrypt your data, see [Protecting data with encryption](UsingEncryption.md). 

1. If you chose **Server-side encryption with Amazon Key Management Service keys (SSE-KMS)** or **Dual-layer server-side encryption with Amazon Key Management Service (Amazon KMS) keys (DSSE-KMS)**, do the following:

   1. Under **Amazon KMS key**, specify your KMS key in one of the following ways:
      + To choose from a list of available KMS keys, choose **Choose from your Amazon KMS keys**, and choose your **KMS key** from the list of available keys.

        Both the Amazon managed key (`aws/s3`) and your customer managed keys appear in this list. For more information about customer managed keys, see [Customer keys and Amazon keys](https://docs.amazonaws.cn//kms/latest/developerguide/concepts.html#key-mgmt) in the *Amazon Key Management Service Developer Guide*.
      + To enter the KMS key ARN, choose **Enter Amazon KMS key ARN**, and enter your KMS key ARN in the field that appears. 
      + To create a new customer managed key in the Amazon KMS console, choose **Create a KMS key**.

        For more information about creating an Amazon KMS key, see [Creating keys](https://docs.amazonaws.cn//kms/latest/developerguide/create-keys.html) in the *Amazon Key Management Service Developer Guide*.
**Important**  
You can use only KMS keys that are available in the same Amazon Web Services Region as the bucket. The Amazon S3 console lists only the first 100 KMS keys in the same Region as the bucket. To use a KMS key that isn't listed, you must enter your KMS key ARN. If you want to use a KMS key that's owned by a different account, you must first have permission to use the key, and then you must enter the KMS key ARN. For more information about cross account permissions for KMS keys, see [Creating KMS keys that other accounts can use](https://docs.amazonaws.cn//kms/latest/developerguide/key-policy-modifying-external-accounts.html#cross-account-console) in the *Amazon Key Management Service Developer Guide*. For more information about SSE-KMS, see [Specifying server-side encryption with Amazon KMS (SSE-KMS)](specifying-kms-encryption.md). For more information about DSSE-KMS, see [Using dual-layer server-side encryption with Amazon KMS keys (DSSE-KMS)](UsingDSSEncryption.md).  
When you use an Amazon KMS key for server-side encryption in Amazon S3, you must choose a symmetric encryption KMS key. Amazon S3 supports only symmetric encryption KMS keys and not asymmetric KMS keys. For more information, see [Identifying symmetric and asymmetric KMS keys](https://docs.amazonaws.cn//kms/latest/developerguide/find-symm-asymm.html) in the *Amazon Key Management Service Developer Guide*.

   1. When you configure your bucket to use default encryption with SSE-KMS, you can also use S3 Bucket Keys. S3 Bucket Keys lower the cost of encryption by decreasing request traffic from Amazon S3 to Amazon KMS. For more information, see [Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys](bucket-key.md). S3 Bucket Keys aren't supported for DSSE-KMS.

      By default, S3 Bucket Keys are enabled in the Amazon S3 console. We recommend leaving S3 Bucket Keys enabled to lower your costs. To disable S3 Bucket Keys for your bucket, under **Bucket Key**, choose **Disable**.

1. (Optional) S3 Object Lock helps protect new objects from being deleted or overwritten. For more information, see [Locking objects with Object Lock](object-lock.md). If you want to enable S3 Object Lock, do the following:

   1. Choose **Advanced settings**.
**Important**  
Enabling Object Lock automatically enables versioning for the bucket. After you enable Object Lock and create the bucket, you must also configure the Object Lock default retention and legal hold settings on the bucket's **Properties** tab. 

   1. If you want to enable Object Lock, choose **Enable**, read the warning that appears, and acknowledge it.
**Note**  
To create an Object Lock enabled bucket, you must have the following permissions: `s3:CreateBucket`, `s3:PutBucketVersioning`, and `s3:PutBucketObjectLockConfiguration`.

1. Choose **Create bucket**.

You've created a bucket in Amazon S3. 
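
If you'd rather script bucket creation than use the console, the steps above correspond roughly to a single Amazon CLI call. The following is a minimal sketch; the bucket name and Region are placeholders, and the `--create-bucket-configuration` flag is required when the Region isn't the partition default:

```shell
# Create a bucket in a chosen Region.
# The bucket name and Region below are placeholders; replace them with your own values.
aws s3api create-bucket \
    --bucket amzn-s3-demo-bucket \
    --region cn-north-1 \
    --create-bucket-configuration LocationConstraint=cn-north-1
```

If the name is already taken anywhere in the partition, the call fails with a `BucketAlreadyExists` error.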

**Next step**  
To add an object to your bucket, see [Step 2: Upload an object to your bucket](#uploading-an-object-bucket).

## Step 2: Upload an object to your bucket
<a name="uploading-an-object-bucket"></a>

After creating a bucket in Amazon S3, you're ready to upload an object to the bucket. An object can be any kind of file: a text file, a photo, a video, and so on. 

**Note**  
For more information about using the Amazon S3 Express One Zone storage class with directory buckets, see [S3 Express One Zone](directory-bucket-high-performance.md#s3-express-one-zone) and [Working with directory buckets](directory-buckets-overview.md).

**To upload an object to a bucket**

1. Open the Amazon S3 console at [https://console.amazonaws.cn/s3/](https://console.amazonaws.cn/s3/).

1. In the **Buckets** list, choose the name of the bucket that you want to upload your object to.

1. On the **Objects** tab for your bucket, choose **Upload**.

1. Under **Files and folders**, choose **Add files**.

1. Choose a file to upload, and then choose **Open**. 

1. Choose **Upload**. 

You've successfully uploaded an object to your bucket. 
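
As with bucket creation, you can script the upload using the high-level `aws s3 cp` command. In this sketch, the file name and bucket name are placeholders:

```shell
# Upload a local file to the bucket.
# The file name and bucket name are placeholders.
aws s3 cp ./sample.txt s3://amzn-s3-demo-bucket/sample.txt
```

For large files, the high-level `cp` command automatically performs a multipart upload on your behalf.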

**Next step**  
To view your object, see [Step 3: Download an object](#accessing-an-object).

## Step 3: Download an object
<a name="accessing-an-object"></a>

After you upload an object to a bucket, you can view information about your object and download the object to your local computer.

**Note**  
For more information about using the Amazon S3 Express One Zone storage class with directory buckets, see [S3 Express One Zone](directory-bucket-high-performance.md#s3-express-one-zone) and [Working with directory buckets](directory-buckets-overview.md).

### Using the S3 console
<a name="download-objects-console"></a>

This section explains how to use the Amazon S3 console to download an object from an S3 bucket.

**Note**  
You can download only one object at a time.
If you use the Amazon S3 console to download an object whose key name ends with a period (`.`), the period is removed from the key name of the downloaded object. To retain the period at the end of the name of the downloaded object, you must use the Amazon Command Line Interface (Amazon CLI), Amazon SDKs, or Amazon S3 REST API. 

**To download an object from an S3 bucket**

1. Sign in to the Amazon Web Services Management Console and open the Amazon S3 console at [https://console.amazonaws.cn/s3/](https://console.amazonaws.cn/s3/).

1. In the left navigation pane, choose **General purpose buckets** or **Directory buckets**.

1. In the buckets list, choose the name of the bucket that you want to download an object from.

    

1. You can download an object from an S3 bucket in either of the following ways:
   + Select the check box next to the object, and choose **Download**. If you want to download the object to a specific folder, on the **Actions** menu, choose **Download as**.
   + If you want to download a specific version of the object, turn on **Show versions** (located next to the search box). Select the check box next to the version of the object that you want, and choose **Download**. If you want to download the object to a specific folder, on the **Actions** menu, choose **Download as**.

You've successfully downloaded your object.
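
The same download can be scripted with the Amazon CLI. A hedged sketch, with placeholder bucket, object, and version values:

```shell
# Download the object to the current directory (names are placeholders).
aws s3 cp s3://amzn-s3-demo-bucket/sample.txt ./sample.txt

# Download a specific version of the object. This requires S3 Versioning
# on the bucket; VERSION_ID is a placeholder for a real version ID.
aws s3api get-object \
    --bucket amzn-s3-demo-bucket \
    --key sample.txt \
    --version-id VERSION_ID \
    sample-v1.txt
```

Note that the CLI and REST API preserve a trailing period (`.`) in the key name, unlike the console download described above.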

**Next step**  
To copy and paste your object within Amazon S3, see [Step 4: Copy your object to a folder](#copying-an-object).

## Step 4: Copy your object to a folder
<a name="copying-an-object"></a>

You've already added an object to a bucket and downloaded it. Now, you'll create a folder, and then copy the object and paste it into that folder.

**Note**  
For more information about using the Amazon S3 Express One Zone storage class with directory buckets, see [S3 Express One Zone](directory-bucket-high-performance.md#s3-express-one-zone) and [Working with directory buckets](directory-buckets-overview.md).

**To copy an object to a folder**

1. In the **Buckets** list, choose your bucket name.

1. Choose **Create folder** and configure a new folder: 

   1. Enter a folder name (for example, `favorite-pics`).

   1. For the folder encryption setting, choose **Disable**.

   1. Choose **Save**.

1. Navigate to the Amazon S3 bucket or folder that contains the objects that you want to copy.

1. Select the check box to the left of the names of the objects that you want to copy.

1. Choose **Actions** and choose **Copy** from the list of options that appears.

   Alternatively, choose **Copy** from the options in the upper right. 

1. Choose the destination folder:

   1. Choose **Browse S3**.

   1. Choose the option button to the left of the folder name.

      To navigate into a folder and choose a subfolder as your destination, choose the folder name.

   1. Choose **Choose destination**.

   The path to your destination folder appears in the **Destination** box. Alternatively, in **Destination**, you can enter your destination path, for example, s3://*bucket-name*/*folder-name*/.

1. In the bottom right, choose **Copy**.

   Amazon S3 copies your objects to the destination folder.
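
If you prefer the Amazon CLI, the copy is a single command. The bucket, object, and folder names in this sketch are placeholders:

```shell
# Copy an object into a folder in the same bucket.
# In S3, a "folder" is a key-name prefix, so the copy simply writes a new
# object whose key begins with favorite-pics/.
aws s3 cp s3://amzn-s3-demo-bucket/sample.txt \
          s3://amzn-s3-demo-bucket/favorite-pics/sample.txt
```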

**Next step**  
To delete an object and a bucket in Amazon S3, see [Step 5: Delete your objects and bucket](#deleting-object-bucket).

## Step 5: Delete your objects and bucket
<a name="deleting-object-bucket"></a>

When you no longer need an object or a bucket, we recommend that you delete them to prevent further charges. If you completed this getting started walkthrough as a learning exercise, and you don't plan to use your bucket or objects, we recommend that you delete your bucket and objects so that charges no longer accrue. 

Before you delete your bucket, empty the bucket or delete the objects in the bucket. After you delete your objects and bucket, they are no longer available.

If you want to continue to use the same bucket name, we recommend that you delete the objects or empty the bucket, but don't delete the bucket. After you delete a bucket, the name becomes available to reuse. However, another Amazon Web Services account might create a bucket with the same name before you have a chance to reuse it. 

**Note**  
For more information about using the Amazon S3 Express One Zone storage class with directory buckets, see [S3 Express One Zone](directory-bucket-high-performance.md#s3-express-one-zone) and [Working with directory buckets](directory-buckets-overview.md).

**Topics**
+ [Deleting an object](#clean-up-delete-objects)
+ [Emptying your bucket](#clean-up-empty-bucket)
+ [Deleting your bucket](#clean-up-delete-bucket)

### Deleting an object
<a name="clean-up-delete-objects"></a>

If you want to delete specific objects without emptying the entire bucket, you can delete individual objects. 

1. In the **Buckets** list, choose the name of the bucket that you want to delete an object from.

1. Select the object that you want to delete.

1. Choose **Delete** from the options in the upper right. 

1. On the **Delete objects** page, type **delete** to confirm deletion of your objects.

1. Choose **Delete objects**.

### Emptying your bucket
<a name="clean-up-empty-bucket"></a>

If you plan to delete your bucket, you must first empty your bucket, which deletes all the objects in the bucket. 

**To empty a bucket**



1. In the **Buckets** list, select the bucket that you want to empty, and then choose **Empty**.

1. To confirm that you want to empty the bucket and delete all the objects in it, in **Empty bucket**, type **permanently delete**.
**Important**  
Emptying the bucket cannot be undone. Objects added to the bucket while the empty bucket action is in progress will be deleted.

1. To empty the bucket and delete all the objects in it, choose **Empty**.

   An **Empty bucket: Status** page opens that you can use to review a summary of failed and successful object deletions.

1. To return to your bucket list, choose **Exit**.

### Deleting your bucket
<a name="clean-up-delete-bucket"></a>

After you empty your bucket or delete all the objects from your bucket, you can delete your bucket.

1. To delete a bucket, in the **Buckets** list, select the bucket.

1. Choose **Delete**.

1. To confirm deletion, in **Delete bucket**, type the name of the bucket.
**Important**  
Deleting a bucket cannot be undone. Bucket names are unique. If you delete your bucket, another Amazon user can use the name. If you want to continue to use the same bucket name, don't delete your bucket. Instead, empty and keep the bucket. 

1. To delete your bucket, choose **Delete bucket**.
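
If you script your cleanup instead, the console steps in this section map to a few Amazon CLI commands. A minimal sketch with a placeholder bucket and object name:

```shell
# Delete a single object (names are placeholders).
aws s3 rm s3://amzn-s3-demo-bucket/sample.txt

# Empty the bucket by deleting every object in it. On a versioned bucket,
# this removes current versions only; noncurrent versions must be deleted
# separately.
aws s3 rm s3://amzn-s3-demo-bucket --recursive

# Delete the now-empty bucket.
aws s3 rb s3://amzn-s3-demo-bucket
```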

## Next steps
<a name="getting-started-next-steps"></a>

In the preceding examples, you learned how to perform some basic Amazon S3 tasks.

The following topics explain the learning paths that you can use to gain a deeper understanding of Amazon S3 so that you can implement it in your applications.

**Note**  
For more information about using the Amazon S3 Express One Zone storage class with directory buckets, see [S3 Express One Zone](directory-bucket-high-performance.md#s3-express-one-zone) and [Working with directory buckets](directory-buckets-overview.md).

**Topics**
+ [Understand common use cases](#s3-use-cases)
+ [Control access to your buckets and objects](#control-access-resources)
+ [Protect and monitor your storage](#manage-monitor-storage)
+ [Develop with Amazon S3](#develop-with-s3)
+ [Learn from tutorials](#s3-getting-started-tutorials-list)
+ [Explore training and support](#explore-training-and-support)

### Understand common use cases
<a name="s3-use-cases"></a>

You can use Amazon S3 to support your specific use case. The [Amazon Solutions Library](https://www.amazonaws.cn/solutions/) and [Amazon Blog](https://www.amazonaws.cn/blogs/) provide use-case specific information and tutorials. The following are some common use cases for Amazon S3:
+ **Backup and storage** – Use Amazon S3 storage management features to manage costs, meet regulatory requirements, reduce latency, and save multiple distinct copies of your data for compliance requirements.
+ **Application hosting** – Deploy, install, and manage web applications that are reliable, highly scalable, and low-cost. For example, you can configure your Amazon S3 bucket to host a static website. For more information, see [Hosting a static website using Amazon S3](WebsiteHosting.md).
+ **Media hosting** – Build a highly available infrastructure that hosts video, photo, or music uploads and downloads.
+ **Software delivery** – Host your software applications for customers to download.

### Control access to your buckets and objects
<a name="control-access-resources"></a>

Amazon S3 provides a variety of security features and tools. For an overview, see [Access control in Amazon S3](access-management.md).

By default, S3 buckets and the objects in them are private. You have access only to the S3 resources that you create. You can use the following features to grant granular resource permissions that support your specific use case or to audit the permissions of your Amazon S3 resources. 
+ [S3 Block Public Access](https://docs.amazonaws.cn/AmazonS3/latest/userguide/access-control-block-public-access.html) – Block public access to S3 buckets and objects. By default, Block Public Access settings are turned on at the bucket level.
+ [Amazon Identity and Access Management (IAM) identities](https://docs.amazonaws.cn/AmazonS3/latest/userguide/security-iam.html) – Use IAM or Amazon IAM Identity Center to create IAM identities in your Amazon Web Services account to manage access to your Amazon S3 resources. For example, you can use IAM with Amazon S3 to control the type of access that a user or group of users has to an Amazon S3 bucket that your Amazon Web Services account owns. For more information about IAM identities and best practices, see [IAM identities (users, user groups, and roles)](https://docs.amazonaws.cn/IAM/latest/UserGuide/id.html) in the *IAM User Guide*.
+ [Bucket policies](https://docs.amazonaws.cn/AmazonS3/latest/userguide/bucket-policies.html) – Use IAM-based policy language to configure resource-based permissions for your S3 buckets and the objects in them.
+ [Access control lists (ACLs)](https://docs.amazonaws.cn/AmazonS3/latest/userguide/acls.html) – Grant read and write permissions for individual buckets and objects to authorized users. As a general rule, we recommend using S3 resource-based policies (bucket policies and access point policies) or IAM user policies for access control instead of ACLs. Policies are a simplified and more flexible access-control option. With bucket policies and access point policies, you can define rules that apply broadly across all requests to your Amazon S3 resources. For more information about the specific cases when you'd use ACLs instead of resource-based policies or IAM user policies, see [Identity and Access Management for Amazon S3](security-iam.md).
+ [S3 Object Ownership](https://docs.amazonaws.cn/AmazonS3/latest/userguide/about-object-ownership.html) – Take ownership of every object in your bucket, simplifying access management for data stored in Amazon S3. S3 Object Ownership is an Amazon S3 bucket-level setting that you can use to disable or enable ACLs. By default, ACLs are disabled. With ACLs disabled, the bucket owner owns all the objects in the bucket and manages access to data exclusively by using access-management policies.
+ [IAM Access Analyzer for S3](https://docs.amazonaws.cn/AmazonS3/latest/userguide/access-analyzer.html) – Evaluate and monitor your S3 bucket access policies, ensuring that the policies provide only the intended access to your S3 resources. 

### Protect and monitor your storage
<a name="manage-monitor-storage"></a>
+ [Protecting your storage](data-protection.md) – After you create buckets and upload objects in Amazon S3, you can protect your object storage. For example, you can use S3 Versioning, S3 Replication, and Multi-Region Access Point failover controls for disaster recovery, Amazon Backup to back up your data, and S3 Object Lock to set retention periods, prevent deletions and overwrites, and meet compliance requirements.
+ [Monitoring your storage](monitoring-overview.md) – Monitoring is an important part of maintaining the reliability, availability, and performance of Amazon S3 and your Amazon solutions. You can monitor storage activity and costs. Also, we recommend that you collect monitoring data from all the parts of your Amazon solution so that you can more easily debug a multipoint failure if one occurs. 

  You can also use analytics and insights in Amazon S3 to understand, analyze, and optimize your storage usage. For example, [Amazon S3 Storage Lens](storage_lens.md) provides 29+ usage and activity metrics and interactive dashboards to aggregate data for your entire organization, specific accounts, Regions, buckets, or prefixes. Use [Storage Class Analysis](analytics-storage-class.md) to analyze storage access patterns and decide when to move your data to a more cost-effective storage class. To manage your costs, you can use [S3 Lifecycle](object-lifecycle-mgmt.md).

### Develop with Amazon S3
<a name="develop-with-s3"></a>

Amazon S3 is a REST service. You can send requests to Amazon S3 using the REST API or the Amazon SDK libraries, which wrap the underlying Amazon S3 REST API, simplifying your programming tasks. You can also use the Amazon Command Line Interface (Amazon CLI) to make Amazon S3 API calls. For more information, see [Making requests](https://docs.amazonaws.cn/AmazonS3/latest/API/MakingRequests.html) in the *Amazon S3 API Reference*.

The Amazon S3 REST API is an HTTP interface to Amazon S3. With the REST API, you use standard HTTP requests to create, fetch, and delete buckets and objects. To use the REST API, you can use any toolkit that supports HTTP. You can even use a browser to fetch objects, as long as they are anonymously readable. For more information, see [Developing with Amazon S3](https://docs.amazonaws.cn/AmazonS3/latest/API/developing-s3.html) in the *Amazon S3 API Reference*.
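
As a minimal illustration of the HTTP interface, the sketch below builds the virtual-hosted-style URL at which a GET Object request is addressed. The bucket and key names are placeholders, and the endpoint format shown assumes an Amazon Web Services China Region (`amazonaws.com.cn`):

```python
from urllib.parse import quote

def object_url(bucket: str, key: str, region: str = "cn-north-1") -> str:
    """Virtual-hosted-style URL for a GET Object request. If the object
    is anonymously readable, any HTTP client (even a browser) can fetch
    it at this URL."""
    # quote() percent-encodes the key but leaves the "/" separators intact.
    return f"https://{bucket}.s3.{region}.amazonaws.com.cn/{quote(key)}"
```

For authenticated requests, you would additionally sign the request with Signature Version 4, which the SDKs and the CLI handle for you.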

To help you build applications using the language of your choice, we provide the following resources.

**Amazon CLI**  
You can access the features of Amazon S3 using the Amazon CLI. To download and configure the Amazon CLI, see [Developing with Amazon S3 using the Amazon CLI](https://docs.amazonaws.cn/AmazonS3/latest/API/setup-aws-cli.html) in the *Amazon S3 API Reference*.

The Amazon CLI provides two tiers of commands for accessing Amazon S3: high-level ([s3](https://docs.amazonaws.cn/cli/latest/userguide/cli-services-s3-commands.html)) commands and API-level ([s3api](https://docs.amazonaws.cn/cli/latest/userguide/cli-services-s3-apicommands.html) and `s3control`) commands. The high-level `s3` commands simplify common tasks, such as creating, manipulating, and deleting objects and buckets. The `s3api` and `s3control` commands expose direct access to all Amazon S3 API operations, which you can use to carry out advanced operations that might not be possible with the high-level commands alone.

For a list of Amazon S3 Amazon CLI commands, see [s3](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/index.html), [s3api](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/index.html), and [s3control](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/index.html).
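
To make the difference between the two tiers concrete, the sketch below builds the argument lists for the same upload expressed at each tier. It only constructs the commands rather than running them, and the bucket, key, and file names are placeholders:

```python
def s3_copy_cmd(local_path: str, bucket: str, key: str) -> list:
    """High-level tier: a single `aws s3 cp` command performs the upload."""
    return ["aws", "s3", "cp", local_path, f"s3://{bucket}/{key}"]

def s3api_put_object_cmd(local_path: str, bucket: str, key: str) -> list:
    """API-level tier: `aws s3api put-object` maps one-to-one onto the
    PutObject API operation, with one flag per request parameter."""
    return ["aws", "s3api", "put-object",
            "--bucket", bucket, "--key", key, "--body", local_path]
```

The high-level form also handles conveniences such as multipart uploads for large files automatically, which is why it is usually the better starting point.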

**Amazon SDKs and Explorers**  
You can use the Amazon SDKs when developing applications with Amazon S3. The Amazon SDKs simplify your programming tasks by wrapping the underlying REST API. The Amazon Mobile SDKs and the Amplify JavaScript library are also available for building connected mobile and web applications using Amazon.

In addition to the Amazon SDKs, Amazon Explorers are available for Visual Studio and Eclipse for Java IDE. In this case, the SDKs and the explorers are bundled together as Amazon Toolkits.

For more information, see [Developing with Amazon S3 using the Amazon SDKs](https://docs.amazonaws.cn/AmazonS3/latest/API/sdk-general-information-section.html) in the *Amazon S3 API Reference*.

**Sample Code and Libraries**  
The [Amazon Developer Center](https://www.amazonaws.cn/code/Amazon-S3) and [Amazon Code Sample Catalog](https://docs.amazonaws.cn/code-samples/latest/catalog/welcome.html) have sample code and libraries written especially for Amazon S3. You can use these code samples to understand how to implement the Amazon S3 API. You can also view the [Amazon S3 API Reference](https://docs.amazonaws.cn/AmazonS3/latest/API/Welcome.html) to understand the Amazon S3 API operations in detail.

### Learn from tutorials
<a name="s3-getting-started-tutorials-list"></a>

You can get started with step-by-step tutorials to learn more about Amazon S3. These tutorials are intended for a lab-type environment, and they use fictitious company names, user names, and so on. Their purpose is to provide general guidance. They are not intended for direct use in a production environment without careful review and adaptation to meet the unique needs of your organization's environment.

#### Getting started
<a name="getting-started-tutorials"></a>
+ [Tutorial: Storing and retrieving a file with Amazon S3](https://www.amazonaws.cn/getting-started/hands-on/backup-files-to-amazon-s3/?ref=docs_gateway/amazons3/tutorials.html)
+ [Tutorial: Getting started using S3 Intelligent-Tiering](https://www.amazonaws.cn/getting-started/hands-on/getting-started-using-amazon-s3-intelligent-tiering/?ref=docs_gateway/amazons3/tutorials.html)
+ [Tutorial: Getting started using the S3 Glacier storage classes](https://www.amazonaws.cn/getting-started/hands-on/getting-started-using-amazon-s3-glacier-storage-classes/?ref=docs_gateway/amazons3/tutorials.html)

#### Optimizing storage costs
<a name="storage-costs-tutorials"></a>
+ [Tutorial: Getting started using S3 Intelligent-Tiering](https://www.amazonaws.cn/getting-started/hands-on/getting-started-using-amazon-s3-intelligent-tiering/?ref=docs_gateway/amazons3/tutorials.html)
+ [Tutorial: Getting started using the S3 Glacier storage classes](https://www.amazonaws.cn/getting-started/hands-on/getting-started-using-amazon-s3-glacier-storage-classes/?ref=docs_gateway/amazons3/tutorials.html)
+ [Tutorial: Optimizing costs and gaining visibility into usage with S3 Storage Lens](https://www.amazonaws.cn/getting-started/hands-on/amazon-s3-storage-lens/?ref=docs_gateway/amazons3/tutorials.html)

#### Managing storage
<a name="storage-management-tutorials"></a>
+ [Tutorial: Getting started with Amazon S3 Multi-Region Access Points](https://www.amazonaws.cn/getting-started/hands-on/getting-started-with-amazon-s3-multi-region-access-points/?ref=docs_gateway/amazons3/tutorials.html)
+ [Tutorial: Replicating existing objects in your Amazon S3 buckets with S3 Batch Replication](https://www.amazonaws.cn/getting-started/hands-on/replicate-existing-objects-with-amazon-s3-batch-replication/?ref=docs_gateway/amazons3/tutorials.html)

#### Hosting videos and websites
<a name="host-web-video-tutorials"></a>
+ [Tutorial: Hosting on-demand streaming video with Amazon S3, Amazon CloudFront, and Amazon Route 53](tutorial-s3-cloudfront-route53-video-streaming.md)
+ [Tutorial: Configuring a static website on Amazon S3](HostingWebsiteOnS3Setup.md)
+ [Tutorial: Configuring a static website using a custom domain registered with Route 53](website-hosting-custom-domain-walkthrough.md)

#### Processing data
<a name="data-processing-tutorials"></a>
+ [Tutorial: Transforming data for your application with S3 Object Lambda](tutorial-s3-object-lambda-uppercase.md)
+ [Tutorial: Detecting and redacting PII data with S3 Object Lambda and Amazon Comprehend](tutorial-s3-object-lambda-redact-pii.md)
+ [Tutorial: Using S3 Object Lambda to dynamically watermark images as they are retrieved](https://www.amazonaws.cn/getting-started/hands-on/amazon-s3-object-lambda-to-dynamically-watermark-images/?ref=docs_gateway/amazons3/tutorials.html)
+ [Tutorial: Batch-transcoding videos with S3 Batch Operations](tutorial-s3-batchops-lambda-mediaconvert-video.md)

#### Protecting data
<a name="protect-data-tutorials"></a>
+ [Tutorial: Checking the integrity of data in Amazon S3 with additional checksums](https://www.amazonaws.cn/getting-started/hands-on/amazon-s3-with-additional-checksums/?ref=docs_gateway/amazons3/tutorials.html)
+ [Tutorial: Replicating data within and between Amazon Web Services Regions using S3 Replication](https://www.amazonaws.cn/getting-started/hands-on/replicate-data-using-amazon-s3-replication/?ref=docs_gateway/amazons3/tutorials.html)
+ [Tutorial: Protecting data on Amazon S3 against accidental deletion or application bugs using S3 Versioning, S3 Object Lock, and S3 Replication](https://www.amazonaws.cn/getting-started/hands-on/protect-data-on-amazon-s3/?ref=docs_gateway/amazons3/tutorials.html)
+ [Tutorial: Replicating existing objects in your Amazon S3 buckets with S3 Batch Replication](https://www.amazonaws.cn/getting-started/hands-on/replicate-existing-objects-with-amazon-s3-batch-replication/?ref=docs_gateway/amazons3/tutorials.html)

### Explore training and support
<a name="explore-training-and-support"></a>

You can learn from Amazon experts to advance your skills and get expert assistance achieving your objectives.
+ **Training** – Training resources provide a hands-on approach to learning Amazon S3. For more information, see [Amazon training and certification](https://www.aws.training) and [Amazon online tech talks](https://www.amazonaws.cn/events/online-tech-talks).
+ **Discussion Forums** – On the forum, you can review posts to understand what you can and can't do with Amazon S3. You can also post your questions. For more information, see [Discussion Forums](https://forums.aws.csdn.net/index.jspa).
+ **Technical Support** – If you have further questions, you can contact [Technical Support](https://www.amazonaws.cn/contact-us).