

# Accessing your data

You can access data on your FSx for OpenZFS file systems from within the Amazon Web Services Cloud and from on-premises environments using a variety of supported clients, Amazon Web Services services, Amazon EC2 instances, and Amazon S3 access points. You can also access your data on FSx for OpenZFS volumes using Amazon container services such as Amazon ECS.

**Topics**
+ [Accessing data within the Amazon Web Services Cloud](access-within-aws.md)
+ [Accessing data from on-premises](access-fsxopenzfs-onprem.md)
+ [Mounting volumes](mounting-volumes.md)
+ [Accessing data with S3 access points](s3accesspoints-for-FSx.md)
+ [Accessing data using Amazon container services](openzfs-integrations.md)

# Accessing your data within the Amazon Web Services Cloud

Amazon VPC helps you to launch Amazon resources into a virtual network that you define. This virtual network closely resembles a traditional network that you operate in your own data center, with the benefits of using the scalable infrastructure of Amazon. For more information, see [What is Amazon VPC](https://docs.amazonaws.cn/vpc/latest/userguide/what-is-amazon-vpc.html) in the *Amazon Virtual Private Cloud User Guide*.

Each Amazon FSx file system is associated with a Virtual Private Cloud (VPC). You can access your FSx for OpenZFS file system from anywhere in the same VPC within which it is deployed regardless of the Availability Zone (AZ). You can also access your file system from other VPCs. These VPCs can be in different accounts or regions. In addition to any requirements listed in the following sections for accessing FSx for OpenZFS resources, you also need to ensure that your file system's VPC security group has the correct settings. It needs to allow data to flow between your file system and any clients that connect to it. For more information, see [Amazon VPC security groups](limit-access-security-groups.md#fsx-vpc-security-groups).
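As a minimal sketch, the following Amazon CLI commands open the inbound ports that FSx for OpenZFS NFS traffic uses (TCP and UDP 111, 2049, and 20001-20003). The security group ID and client CIDR are placeholder values; see the security groups topic linked above for the authoritative port list.

```shell
# Placeholder security group attached to the file system's network interfaces.
SG=sg-0123456789abcdef0

# Allow NFS-related traffic (rpcbind, nfsd, mountd/status/lock daemons)
# from clients in 10.0.0.0/16, over both TCP and UDP.
for proto in tcp udp; do
  for ports in 111 2049 20001-20003; do
    aws ec2 authorize-security-group-ingress \
      --group-id "$SG" \
      --protocol "$proto" \
      --port "$ports" \
      --cidr 10.0.0.0/16
  done
done
```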

**Topics**
+ [Access from within the same VPC](#same-vpc-access)
+ [Access from a different VPC](#vpc-peering)

## Access from within the same VPC

When you create your Amazon FSx for OpenZFS file system, you select the Amazon VPC in which it is located. All volumes associated with the FSx for OpenZFS file system are also located in the same VPC. When the file system and the client mounting the volume are located in the same VPC and Amazon Web Services account, you can mount a volume using the file system's DNS name over the NFS protocol. For more information, see [Step 2: Mount your file system from an Amazon EC2 instance](getting-started.md#getting-started-step2).

You can achieve better performance and avoid data transfer charges by accessing an FSx for OpenZFS volume using a client in the same Availability Zone as the file system's subnet. To identify a file system's subnet, choose **File systems** in the Amazon FSx console, then choose the FSx for OpenZFS file system whose volume you are mounting. The subnet or preferred subnet (Multi-AZ) is displayed in the **Subnet** or **Preferred subnet** panel.
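You can also retrieve a file system's subnet IDs with the Amazon CLI. A minimal sketch (the file system ID is a placeholder):

```shell
# List the subnet IDs associated with the file system.
aws fsx describe-file-systems \
  --file-system-ids fs-0123456789abcdef0 \
  --query "FileSystems[0].SubnetIds"

# For a Multi-AZ file system, the preferred subnet is reported separately:
aws fsx describe-file-systems \
  --file-system-ids fs-0123456789abcdef0 \
  --query "FileSystems[0].OpenZFSConfiguration.PreferredSubnetId"
```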

Accessing a Single-AZ file system using a client located in a different Availability Zone results in data transfer charges. There are no data transfer charges for accessing a Multi-AZ file system from any Availability Zone in the same Amazon Web Services Region.

## Access from a different VPC


The process of accessing your data from outside of the file system's VPC differs between Single-AZ and Multi-AZ file systems, because Multi-AZ file systems use a floating IP address. The following sections describe how to access your file systems from a different VPC, depending on the deployment type.

### Accessing Single-AZ file systems


You can access your FSx for OpenZFS file system from compute instances in a different VPC, Amazon Web Services account, or Amazon Web Services Region than the one associated with your file system by using VPC peering or transit gateways. When you use a VPC peering connection or transit gateway to connect VPCs, compute instances that are in one VPC can access Amazon FSx file systems in another VPC. This access is possible even if the VPCs belong to different Amazon Web Services accounts, and even if the VPCs reside in different Amazon Web Services Regions.

A *VPC peering connection* is a networking connection between two VPCs that you can use to route traffic between them using private IPv4 or IPv6 addresses. You can use VPC peering to connect VPCs within the same Amazon Web Services Region or between Amazon Web Services Regions. For more information on VPC peering, see [What is VPC peering?](https://docs.amazonaws.cn/vpc/latest/peering/what-is-vpc-peering.html) in the *Amazon Virtual Private Cloud VPC Peering Guide*. 

A *transit gateway* is a network transit hub that you can use to interconnect your VPCs and on-premises networks. For more information, see [Work with transit gateways](https://docs.amazonaws.cn/vpc/latest/tgw/working-with-transit-gateways.html) in the *Amazon VPC Transit Gateways*. 

### Accessing Multi-AZ file systems


The NFS endpoints on FSx for OpenZFS Multi-AZ file systems use floating IP addresses so that connected clients seamlessly transition between the preferred and standby file servers during a failover event. For more information about failovers, see [Failover process for FSx for OpenZFS](availability-durability.md#multi-az-failover).

When you create a file system, you can optionally specify the endpoint IP address range in which these floating IP addresses are created. By default, the Amazon FSx API selects a CIDR block of 16 available addresses from within the VPC's CIDR ranges. Additionally, you can optionally specify the VPC route tables in which rules for routing traffic to the correct file server will be created. By default, the Amazon FSx API selects the VPC's default route table.
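These two options map to the `EndpointIpAddressRange` and `RouteTableIds` parameters of the `CreateFileSystem` API. The following Amazon CLI sketch shows where they fit when creating a Multi-AZ file system; all IDs, the CIDR, and the throughput value are placeholders, and other configuration parameters are omitted for brevity.

```shell
# Create a Multi-AZ file system, pinning the endpoint IP address range and
# the route tables instead of accepting the API defaults.
aws fsx create-file-system \
  --file-system-type OPENZFS \
  --storage-capacity 2048 \
  --subnet-ids subnet-aaaa1111 subnet-bbbb2222 \
  --open-zfs-configuration 'DeploymentType=MULTI_AZ_1,ThroughputCapacity=160,PreferredSubnetId=subnet-aaaa1111,EndpointIpAddressRange=198.19.255.0/28,RouteTableIds=[rtb-cccc3333]'
```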

Routing to floating IP addresses from outside the file system's VPC, also known as transitive peering, is supported only by [Amazon Transit Gateway](https://www.amazonaws.cn/transit-gateway/?whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc). VPC peering, Amazon Direct Connect, and Amazon VPN don't support transitive peering. Therefore, you must use Transit Gateway to access these interfaces from networks that are outside of your file system's VPC.

When you access your Multi-AZ file system from outside of the file system's VPC, FSx for OpenZFS will manage routing configurations as long as the file system's endpoint IP address range is within the CIDR range of the file system's VPC and does not overlap with the CIDR range of any subnets in the VPC. However, if you access your Multi-AZ file system from outside of the file system's VPC, and the file system's endpoint IP address range is outside of the CIDR range of the file system's VPC, you will need to set up additional routing in Transit Gateway. For information on how to configure Transit Gateway to access your FSx for OpenZFS file system, see [Configuring routing using Amazon Transit Gateway](#configuring-routing-using-AWSTG). 

The following diagram illustrates using Transit Gateway for NFS access to a Multi-AZ file system that is in a different VPC than the clients that are accessing it.

![Clients in a different VPC connecting through Amazon Transit Gateway to the NFS endpoint of a Multi-AZ FSx for OpenZFS file system.](http://docs.amazonaws.cn/en_us/fsx/latest/OpenZFSGuide/images/fsx-openzfs-multi-az-access-transit-gateway.png)


**Note**  
Ensure that all of the route tables you're using are associated with your Multi-AZ file system. Doing so helps prevent loss of availability during a failover. For information about associating your Amazon VPC route tables with your file system, see [Updating an Amazon FSx for OpenZFS file system](updating-file-system.md).

### Configuring routing using Amazon Transit Gateway


If you have a Multi-AZ file system with an endpoint IP address range that's outside your VPC's CIDR range, you need to set up additional routing in your Amazon Transit Gateway to access your file system from peered or on-premises networks. No additional Transit Gateway configuration is required for Single-AZ file systems or Multi-AZ file systems with an endpoint IP address range that's within your VPC's IP address range.

**Important**  
To access a Multi-AZ file system using a Transit Gateway, each of the Transit Gateway's attachments must be created in a subnet whose route table is associated with your file system.

**To configure routing using Amazon Transit Gateway**

1. Open the Amazon FSx console at [https://console.amazonaws.cn/fsx/](https://console.amazonaws.cn/fsx/).

1. Choose the FSx for OpenZFS file system for which you are configuring access from a peered network.

1. In **Network & security** copy the endpoint IP address range.

1. Add a route to Transit Gateway that routes traffic destined for this IP address range to your file system's VPC. For more information, see [Work with transit gateways](https://docs.amazonaws.cn/vpc/latest/tgw/working-with-transit-gateways.html) in the *Amazon VPC Transit Gateways*.

1. Confirm that you can access your FSx for OpenZFS file system from the peered network.

To add the route table to your file system, see [Updating an Amazon FSx for OpenZFS file system](updating-file-system.md).
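Step 4 above can be sketched with the Amazon CLI as follows. The route table ID, attachment ID, and CIDR are placeholder values; the destination CIDR is the endpoint IP address range you copied from the console, and the attachment is the Transit Gateway attachment for the file system's VPC.

```shell
# Route traffic destined for the file system's endpoint IP address range
# to the VPC attachment of the file system's VPC.
aws ec2 create-transit-gateway-route \
  --transit-gateway-route-table-id tgw-rtb-0123456789abcdef0 \
  --destination-cidr-block 198.19.255.0/28 \
  --transit-gateway-attachment-id tgw-attach-0123456789abcdef0
```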

**Note**  
DNS records for the NFS endpoints are only resolvable from within the same VPC as the file system. In order to mount a volume or connect to a management port from another network, you need to use the endpoint's IP address. These IP addresses do not change over time.
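For example, mounting from a peered or on-premises network uses the endpoint's IP address in place of the DNS name. A minimal sketch (the IP address, volume path, and mount point are placeholders):

```shell
# Mount an FSx for OpenZFS volume from outside the file system's VPC,
# using the NFS endpoint's IP address rather than its DNS name.
sudo mkdir -p /fsx
sudo mount -t nfs -o nfsvers=4.1 198.19.255.10:/fsx/vol1 /fsx
```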

# Accessing your data from on-premises

FSx for OpenZFS supports the use of Amazon Direct Connect or Amazon VPN to access your file systems from your on-premises compute instances. Using Amazon Direct Connect, you access your file system over a dedicated network connection from your on-premises environment. Using Amazon VPN, you access your file system from your on-premises devices over a secure and private tunnel.

After you connect your on-premises environment to the VPC associated with your Amazon FSx file system, you can access your file system using its DNS name or a DNS alias. You do so just as you do from compute instances within the VPC. For more information about Amazon Direct Connect, see [What is Amazon Direct Connect?](https://docs.amazonaws.cn/directconnect/latest/UserGuide/Welcome.html) in the *Amazon Direct Connect User Guide*. For more information on setting up Amazon VPN connections, see [VPN connections](https://docs.amazonaws.cn/vpc/latest/userguide/vpn-connections.html) in the *Amazon VPC User Guide*.

**Topics**
+ [Accessing Multi-AZ file systems](#access-multi-az-fs)

## Accessing Multi-AZ file systems


Amazon FSx requires that you use Amazon Transit Gateway to access Multi-AZ file systems from an on-premises network. In order to support failover across AZs for Multi-AZ file systems, Amazon FSx uses floating IP addresses for the interfaces used for NFS endpoints. Because the NFS endpoints use floating IPs, you must use [Amazon Transit Gateway](https://www.amazonaws.cn/transit-gateway/?whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc) in conjunction with Amazon Direct Connect or Amazon VPN to access these interfaces from an on-premises network. The floating IP addresses used for these interfaces are within the endpoint IP address range you specify when creating your Multi-AZ file system. By default, the Amazon FSx API selects a CIDR block of 16 available addresses from within the VPC's CIDR ranges. The floating IP addresses are used to enable a seamless transition of your clients to the standby file system in the event a failover is required. For more information, see [Failover process for FSx for OpenZFS](availability-durability.md#multi-az-failover).

If you have a Multi-AZ file system with an endpoint IP address range that's outside your VPC's CIDR range, you need to set up additional routing in your Amazon Transit Gateway to access your file system from peered or on-premises networks. For information, see [Configuring routing using Amazon Transit Gateway](access-within-aws.md#configuring-routing-using-AWSTG).

# Mounting volumes to access data

The primary way to access data on your file system is by mounting individual volumes from an Amazon EC2 instance. This section provides details on how to configure your file system to automatically remount volumes on an Amazon EC2 instance when the instance reboots, plus tips for mounting a volume to maximize your file system's overall performance. For detailed instructions on how to mount a volume to a Linux, macOS, or Windows client, see [Step 2: Mount your file system from an Amazon EC2 instance](getting-started.md#getting-started-step2).

**Topics**
+ [Automatically mounting file systems on reboot for Linux instances](#mount-fs-auto-mount-update-fstab)
+ [Additional mounting options to maximize file system performance](#additional-mounting-options)

## Automatically mounting file systems on reboot for Linux instances


You can use the `/etc/fstab` file, which contains information about your file systems, to automatically remount your volumes on an Amazon EC2 Linux instance when the instance reboots. The command `mount -a`, which runs during instance start-up, mounts the file systems listed in `/etc/fstab`.

**Note**  
FSx for OpenZFS file systems do not support automatic mounting using `/etc/fstab` on Amazon EC2 Mac instances.

**To automatically mount your file system on reboot**

1. Connect to your EC2 instance:
   + To connect to your instance from a computer running macOS or Linux, specify the .pem file for your SSH command. To do this, use the `-i` option and the path to your private key.
   + To connect to your instance from a computer running Windows, you can use either MindTerm or PuTTY. To use PuTTY, install it and convert the .pem file to a .ppk file.

   For more information, see the following topics in the *Amazon EC2 User Guide*:
   +  [Connecting to your Linux instance using SSH](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/connect-linux-inst-ssh.html)
   +  [Connecting to your Linux instance from Windows using PuTTY](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/putty.html) 

1. Create a local directory that will be used to mount the FSx for OpenZFS volume.

   ```
   sudo mkdir /fsx
   ```

1. Open the `/etc/fstab` file in an editor of your choice.

1. Add the following line to the `/etc/fstab` file. Insert a tab character between each parameter. It should appear as one line with no line breaks.

   ```
   filesystem-dns-name:volume-path /localpath nfs vers=nfs-version,defaults 0 0
   ```

   The last three fields are the NFS mount options (set to `defaults` here), the dump frequency, and the file system check (`fsck`) order. The last two fields are typically not used, so they are set to `0`.

1. Save the changes to the file.

1. Test the fstab entry by using the `mount` command with the `-f` (fake), `-a` (all), and `-v` (verbose) options.

   ```
   sudo mount -fav
   fs-dns-name:/vol_path : successfully mounted
   ```

   Now mount the volume by using either its local path or its DNS name, as shown in the following commands. The next time the EC2 instance restarts, the volume will be mounted automatically.

   ```
   sudo mount /localpath
   sudo mount filesystem-dns-name:/volume-path
   ```

Your EC2 instance is now configured to mount the FSx for OpenZFS volume whenever it restarts.
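As an example, a completed `/etc/fstab` entry following the template above might look like the following. The DNS name, volume path, and mount point are placeholder values; substitute your own.

```
fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com:/fsx/vol1 /fsx nfs vers=4.1,defaults 0 0
```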

## Additional mounting options to maximize file system performance


You can also include the following options when mounting a volume to improve your file system's overall performance.
+ `rsize=1048576` – Sets the maximum number of bytes of data that the NFS client can receive for each network READ request. This value applies when reading data from a file on an FSx for OpenZFS volume. We recommend that you use the largest size possible, 1048576. Due to lower memory capacity on file systems with 64 MBps and 128 MBps of provisioned throughput, these file systems will only accept a maximum rsize of 262144 and 524288 bytes, respectively.
+ `wsize=1048576` – Sets the maximum number of bytes of data that the NFS client can send for each network WRITE request. This value applies when writing data to a file on an FSx for OpenZFS volume. We recommend that you use the largest size possible, 1048576. Due to lower memory capacity on file systems with 64 MBps and 128 MBps of provisioned throughput, these file systems will only accept a maximum wsize of 262144 and 524288 bytes, respectively.
+ `timeo=600` – Sets the timeout value that the NFS client uses to wait for a response before it retries an NFS request to 600 deciseconds (60 seconds).
+ `_netdev` – When present in `/etc/fstab`, prevents the client from attempting to mount the FSx for OpenZFS volume until the network has been enabled.

The following example uses sample values.

```
sudo mount -t nfs -o rsize=1048576,wsize=1048576,timeo=600 fs-01234567890abcdef1.fsx.us-east-1.amazonaws.com:/fsx/vol1 /fsx
```
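If you mount the volume from `/etc/fstab`, the same performance options can be combined with `_netdev` in the options field. A sketch, with the DNS name, volume path, and mount point as placeholder values:

```
fs-01234567890abcdef1.fsx.us-east-1.amazonaws.com:/fsx/vol1 /fsx nfs rsize=1048576,wsize=1048576,timeo=600,_netdev 0 0
```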

# Accessing your data using Amazon S3 access points

Amazon S3 access points simplify managing data access for any application or Amazon service that works with S3. With S3 access points, customers with shared datasets, including data lakes, media archives, and user-generated content, can easily control and scale data access for hundreds of applications, teams, or individuals by creating individualized access points with names and permissions customized for each. You can also use S3 access points to access file data stored on Amazon FSx file systems as if it were in S3, allowing you to use it with applications and services that work with S3 without application changes or moving data out of file storage. These access points are named network endpoints that attach to either S3 general purpose buckets or FSx for OpenZFS volumes.

S3 access points attached to Amazon FSx for OpenZFS file systems support read and write access to your file data using S3 object operations (for example, `GetObject`, `PutObject`, and `ListObjectsV2`) against an Amazon S3 endpoint.

Each S3 access point attached to an FSx for OpenZFS file system has an Amazon Identity and Access Management (IAM) access point policy and an associated POSIX file system user that is used to authorize all requests made through the access point. For each request, S3 first evaluates all the relevant policies, including those on the user, access point, S3 VPC Endpoint, and service control policies, to authorize the request. Once the request is authorized by S3, the request is then authorized by the file system, which evaluates whether the file system user associated with the S3 access point has permission to access to the data on the file system. You can configure an access point to accept requests only from a virtual private cloud (VPC) to restrict Amazon S3 data access to a private network. Amazon S3 enforces Block public access by default for all access points attached to an FSx for OpenZFS volume, and you cannot modify or disable this setting.

You use the Amazon FSx console, CLI, and API to [create an S3 access point and attach](fsxz-creating-access-points.md) it to an FSx for OpenZFS volume. You can simultaneously access your file data from the S3 access point using the S3 API, and from clients using the industry-standard Network File System (NFS) protocol (v3, v4.0, v4.1, v4.2). Your data continues to reside on the FSx for OpenZFS file system.
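For example, a file written through the access point with the S3 API is immediately readable over NFS from the attached volume. A minimal sketch (the access point alias, object key, and mount path are hypothetical sample values):

```shell
# Write an object through the S3 access point alias, which can be used
# anywhere a bucket name is accepted.
aws s3api put-object \
  --bucket my-openzfs-ap-aqfqprnstn7aefdfbarligizwgyfouse1a-ext-s3alias \
  --key reports/daily.csv \
  --body ./daily.csv

# Read the same data as a file on the NFS-mounted volume.
cat /fsx/reports/daily.csv
```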

Amazon S3 access points for FSx for OpenZFS file systems deliver latency in the tens of milliseconds, consistent with S3 bucket access. Performance scales with your Amazon FSx file system's provisioned throughput, with maximum throughput and requests per second bound by your underlying Amazon FSx file system configuration. For more information about file system performance capabilities, see [Performance for Amazon FSx for OpenZFS](performance.md).

**Topics**
+ [Access points naming rules, restrictions, and limitations](access-point-restrictions-limitations-naming-rules.md)
+ [Referencing access points with ARNs, access point aliases, or virtual-hosted-style URIs](referencing-access-points.md)
+ [Access point compatibility](access-points-object-api-support.md)
+ [Managing access point access](s3-ap-manage-access-fsx.md)
+ [Creating an access point](fsxz-creating-access-points.md)
+ [Managing Amazon S3 access points](access-points-manage.md)
+ [Using access points](access-points-usage-examples.md)
+ [Troubleshooting S3 access point issues](troubleshooting-access-points.md)

# Access points naming rules, restrictions, and limitations

When you create an S3 access point, you choose a name for it. The following topics provide information about S3 access point naming rules, restrictions, and limitations.

**Topics**
+ [Access points naming rules](#access-points-naming-rules)
+ [Access points restrictions and limitations](#access-points-restrictions-limitations)

## Access points naming rules


When you create an S3 access point, you choose its name. Access point names do not need to be unique across Amazon Web Services accounts or Amazon Web Services Regions. The same Amazon Web Services account can create access points with the same name in different Amazon Web Services Regions, and two different Amazon Web Services accounts can use the same access point name. However, within a single Amazon Web Services Region, an Amazon Web Services account cannot have two identically named access points.

S3 access point names can't end with the suffix `-ext-s3alias`, which is reserved for access point aliases. For a complete list of access point naming rules, see [Naming rules for Amazon S3 access points](https://docs.amazonaws.cn/AmazonS3/latest/userguide/access-points-restrictions-limitations-naming-rules.html#access-points-names) in the *Amazon Simple Storage Service User Guide*.

## Access points restrictions and limitations


S3 access points attached to FSx for OpenZFS volumes have the following restrictions, which do not apply to access points attached to S3 buckets:
+ S3 access points can only be attached to volumes that are hosted on high-availability (HA) Multi-AZ and Single-AZ FSx for OpenZFS file systems. For more information about the types of FSx for OpenZFS file systems, see [Availability and durability for Amazon FSx for OpenZFS](availability-durability.md).
+ The maximum number of S3 access points that can be attached to an FSx for OpenZFS (HA) file system is dependent on the file system's throughput. For more information, see [Resource quotas for each file system](limits.md#limits-openzfs-resources-file-system).
+ S3 access control lists (ACLs) are not supported.
+ The same Amazon Web Services account must own the FSx for OpenZFS file system and the S3 access point. 

  You can only create S3 access points that are attached to FSx for OpenZFS volumes that you own. You cannot create an S3 access point that is attached to a volume owned by another Amazon Web Services account.

For a complete list of all access point restrictions and limitations, see [Restrictions and limitations for access points](https://docs.amazonaws.cn/AmazonS3/latest/userguide/access-points-restrictions-limitations-naming-rules.html) in the *Amazon Simple Storage Service User Guide*.

# Referencing access points with ARNs, access point aliases, or virtual-hosted-style URIs

After you create an access point attached to an FSx for OpenZFS volume, you can access your data via the Amazon CLI and S3 API, as well as S3-compatible Amazon and third-party services and applications. When referring to an access point in an Amazon Web Services service or application, you can use the Amazon Resource Name (ARN), the access point alias, or a virtual-hosted-style URI.

**Topics**
+ [Access point ARNs](#access-point-arns)
+ [Access point aliases](#access-point-aliases)
+ [Virtual-hosted-style URI](#virtual-hosted-style-uri)

## Access point ARNs


Access points have Amazon Resource Names (ARNs). Access point ARNs are similar to S3 bucket ARNs, but they are explicitly typed and encode the access point's Amazon Web Services Region and the Amazon Web Services account ID of the access point's owner. For more information about ARNs, see [Identify Amazon resources with Amazon Resource Names (ARNs)](https://docs.amazonaws.cn/IAM/latest/UserGuide/reference-arns.html) in the *Amazon Identity and Access Management User Guide*.

Access point ARNs have the following format:

```
arn:aws-cn:s3:region:account-id:accesspoint/resource
```

`arn:aws-cn:s3:us-west-2:777777777777:accesspoint/test` represents the access point named *test*, owned by account 777777777777 in the Region *us-west-2*.

ARNs for objects and files accessed through an access point use the following format:

```
arn:aws-cn:s3:region:account-id:accesspoint/access-point-name/object/resource
```

`arn:aws-cn:s3:us-west-2:111122223333:accesspoint/test/object/lions.jpg` represents the file *lions.jpg*, accessed through the access point named *test*, owned by account 111122223333 in the Region *us-west-2*.

For more information about access point ARNs, see [Access point ARNs](https://docs.amazonaws.cn/AmazonS3/latest/userguide/access-points-naming.html#access-points-arns) in the *Amazon Simple Storage Service User Guide*.
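Because access point ARNs are colon-delimited, their components can be pulled apart with standard text tools. The following sketch parses the sample object ARN shown above; the field positions follow the formats in this section.

```shell
# Split an object ARN of the form
#   arn:partition:service:region:account-id:accesspoint/name/object/key
# into its components, using the sample ARN from this section.
arn="arn:aws-cn:s3:us-west-2:111122223333:accesspoint/test/object/lions.jpg"
region=$(echo "$arn" | cut -d: -f4)
account=$(echo "$arn" | cut -d: -f5)
resource=$(echo "$arn" | cut -d: -f6)        # accesspoint/test/object/lions.jpg
ap_name=$(echo "$resource" | cut -d/ -f2)    # access point name
file_key=${resource#accesspoint/*/object/}   # file key
echo "$region $account $ap_name $file_key"
```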

## Access point aliases


When you create an access point, Amazon S3 automatically generates an access point alias that you can use anywhere you can use S3 bucket names to access data.

An access point alias cannot be changed. For an access point attached to an FSx for OpenZFS volume, the access point alias consists of the following parts:

```
access-point-prefix-metadata-ext-s3alias
```

The following shows the ARN and access point alias for an S3 access point attached to an FSx for OpenZFS volume, returned as part of the response to a `describe-s3-access-point-attachments` FSx CLI command. The access point in this example is named `my-openzfs-ap`.

```
...
        "S3AccessPoint": {
            "ResourceARN": "arn:aws-cn:s3:us-east-1:111122223333:accesspoint/my-openzfs-ap",
            "Alias": "my-openzfs-ap-aqfqprnstn7aefdfbarligizwgyfouse1a-ext-s3alias",
...
```

**Note**  
The `-ext-s3alias` suffix is reserved for the aliases of S3 access points attached to an FSx for OpenZFS volume, and can't be used for access point names.

You can use the access point alias instead of an Amazon S3 access point ARN in some S3 data plane operations. For a list of the supported operations, see [Access point compatibility](access-points-object-api-support.md).

For a full set of access point alias limitations, see [Access point alias limitations](https://docs.amazonaws.cn/AmazonS3/latest/userguide/access-points-naming.html#access-points-alias) in the *Amazon Simple Storage Service User Guide*.
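Because the `-ext-s3alias` suffix is reserved, you can tell from a bucket-style name alone whether it refers to an FSx-attached access point. A minimal sketch (the sample name is from the response above; this check is illustrative, not part of any Amazon tool):

```shell
# Test a name for the reserved "-ext-s3alias" suffix.
name="my-openzfs-ap-aqfqprnstn7aefdfbarligizwgyfouse1a-ext-s3alias"
case "$name" in
  *-ext-s3alias) is_fsx_alias=true ;;
  *)             is_fsx_alias=false ;;
esac
echo "$is_fsx_alias"
```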

## Virtual-hosted–style URI


Access points support only virtual-hosted-style addressing. In a virtual-hosted-style URI, the access point name, Amazon Web Services account, and Amazon Web Services Region are part of the domain name in the URL. To view the S3 URI for an access point attached to an FSx for OpenZFS volume, in the access point details page under **S3 access point details**, choose the access point name listed for **S3 access point**. This takes you to the access point details page in the Amazon S3 console. You can find the **S3 URI** under **Properties**.

For more information, see [Virtual-hosted–style URI](https://docs.amazonaws.cn/AmazonS3/latest/userguide/access-points-naming.html#accessing-a-bucket-through-s3-access-point) in the *Amazon Simple Storage Service User Guide*.

# Access point compatibility


You can use access points to access data stored on an FSx for OpenZFS volume using a subset of the Amazon S3 object operations related to data access. All of the supported operations listed below accept either access point ARNs or access point aliases.

The following table is a partial list of Amazon S3 operations that shows whether each operation is supported by access points attached to an FSx for OpenZFS volume.


| S3 operation | Access point attached to an FSx for OpenZFS volume | 
| --- | --- | 
|  `[AbortMultipartUpload](https://docs.amazonaws.cn/AmazonS3/latest/API/API_AbortMultipartUpload.html)`  |  Supported  | 
|  `[CompleteMultipartUpload](https://docs.amazonaws.cn/AmazonS3/latest/API/API_CompleteMultipartUpload.html)`  |  Supported  | 
|  `[CopyObject](https://docs.amazonaws.cn/AmazonS3/latest/API/API_CopyObject.html)` (same-Region copies only)  |  Supported, if source and destination are the same access point  | 
|  `[CreateMultipartUpload](https://docs.amazonaws.cn/AmazonS3/latest/API/API_CreateMultipartUpload.html)`  |  Supported  | 
|  `[DeleteObject](https://docs.amazonaws.cn/AmazonS3/latest/API/API_DeleteObject.html)`  |  Supported  | 
|  `[DeleteObjects](https://docs.amazonaws.cn/AmazonS3/latest/API/API_DeleteObjects.html)`  |  Supported  | 
|  `[DeleteObjectTagging](https://docs.amazonaws.cn/AmazonS3/latest/API/API_DeleteObjectTagging.html)`  |  Supported  | 
|  `[GetBucketAcl](https://docs.amazonaws.cn/AmazonS3/latest/API/API_GetBucketAcl.html)`  |  Not supported  | 
|  `[GetBucketCors](https://docs.amazonaws.cn/AmazonS3/latest/API/API_GetBucketCors.html)`  |  Not supported  | 
|  `[GetBucketLocation](https://docs.amazonaws.cn/AmazonS3/latest/API/API_GetBucketLocation.html)`  |  Supported  | 
|  `[GetBucketNotificationConfiguration](https://docs.amazonaws.cn/AmazonS3/latest/API/API_GetBucketNotificationConfiguration.html)`  |  Not supported  | 
|  `[GetBucketPolicy](https://docs.amazonaws.cn/AmazonS3/latest/API/API_GetBucketPolicy.html)`  |  Not supported  | 
|  `[GetObject](https://docs.amazonaws.cn/AmazonS3/latest/API/API_GetObject.html)`  |  Supported  | 
|  `[GetObjectAcl](https://docs.amazonaws.cn/AmazonS3/latest/API/API_GetObjectAcl.html)`  |  Not supported  | 
|  `[GetObjectAttributes](https://docs.amazonaws.cn/AmazonS3/latest/API/API_GetObjectAttributes.html)`  |  Supported  | 
|  `[GetObjectLegalHold](https://docs.amazonaws.cn/AmazonS3/latest/API/API_GetObjectLegalHold.html)`  |  Not supported  | 
|  `[GetObjectRetention](https://docs.amazonaws.cn/AmazonS3/latest/API/API_GetObjectRetention.html)`  |  Not supported  | 
|  `[GetObjectTagging](https://docs.amazonaws.cn/AmazonS3/latest/API/API_GetObjectTagging.html)`  |  Supported  | 
|  `[HeadBucket](https://docs.amazonaws.cn/AmazonS3/latest/API/API_HeadBucket.html)`  |  Supported  | 
|  `[HeadObject](https://docs.amazonaws.cn/AmazonS3/latest/API/API_HeadObject.html)`  |  Supported  | 
|  `[ListMultipartUploads](https://docs.amazonaws.cn/AmazonS3/latest/API/API_ListMultipartUploads.html)`  |  Supported  | 
|  `[ListObjects](https://docs.amazonaws.cn/AmazonS3/latest/API/API_ListObjects.html)`  |  Supported  | 
|  `[ListObjectsV2](https://docs.amazonaws.cn/AmazonS3/latest/API/API_ListObjectsV2.html)`  |  Supported  | 
|  `[ListObjectVersions](https://docs.amazonaws.cn/AmazonS3/latest/API/API_ListObjectVersions.html)`  |  Not supported  | 
|  `[ListParts](https://docs.amazonaws.cn/AmazonS3/latest/API/API_ListParts.html)`  |  Supported  | 
|  `[Presign](https://docs.amazonaws.cn/AmazonS3/latest/API/sigv4-query-string-auth.html)`  |  Not supported  | 
|  `[PutObject](https://docs.amazonaws.cn/AmazonS3/latest/API/API_PutObject.html)`  |  Supported  | 
|  `[PutObjectAcl](https://docs.amazonaws.cn/AmazonS3/latest/API/API_PutObjectAcl.html)`  |  Not supported  | 
|  `[PutObjectLegalHold](https://docs.amazonaws.cn/AmazonS3/latest/API/API_PutObjectLegalHold.html)`  |  Not supported  | 
|  `[PutObjectRetention](https://docs.amazonaws.cn/AmazonS3/latest/API/API_PutObjectRetention.html)`  |  Not supported  | 
|  `[PutObjectTagging](https://docs.amazonaws.cn/AmazonS3/latest/API/API_PutObjectTagging.html)`  |  Supported  | 
|  `[RestoreObject](https://docs.amazonaws.cn/AmazonS3/latest/API/API_RestoreObject.html)`  |  Not supported  | 
|  `[UploadPart](https://docs.amazonaws.cn/AmazonS3/latest/API/API_UploadPart.html)`  |  Supported  | 
|  `[UploadPartCopy](https://docs.amazonaws.cn/AmazonS3/latest/API/API_UploadPartCopy.html)` (same-Region copies only)  |  Supported, if source and destination are the same access point  | 

The following limitations apply when using Amazon S3 operations:
+ The maximum object size is 5 GB.
+ `FSX_OPENZFS` is the only supported storage class.
+ [SSE-FSX](s3-ap-manage-access-fsx.md#data-encryption) is the only supported server-side encryption mode.

For examples of using access points to perform data access operations on file data, see [Using access points](access-points-usage-examples.md).

# Managing access point access
Managing access

You can configure each S3 access point with distinct permissions and network controls that S3 applies for any request that is made using that access point. S3 access points support Amazon Identity and Access Management (IAM) resource policies that you can use to control the use of the access point by resource, user, or other conditions. For an application or user to access files through an access point, both the access point and the underlying volume must permit the request. For more information, see [IAM access point policies](#access-points-policies).

**Topics**
+ [File system user identity](#file-system-user-identity)
+ [Server-side encryption with Amazon FSx (SSE-FSX)](#data-encryption)
+ [IAM access point policies](#access-points-policies)

## File system user identity


Each access point attached to an FSx for OpenZFS volume uses a file system user identity that you specify for authorizing all file access requests that are made using the access point. The file system user is a user account on the underlying Amazon FSx file system. If the file system user has *read-only* access, then only read requests made using the access point are authorized, and write requests are blocked. If the file system user has read-write access, then both read and write requests to the attached volume made using the access point are authorized.

You can also configure an S3 access point to only accept requests from a specific virtual private cloud (VPC) to restrict data access. For more information, see [Creating access points restricted to a virtual private cloud](access-points-vpc.md).

Amazon S3 access points attached to an FSx for OpenZFS volume are automatically configured with block public access enabled, which you cannot change.

**Important**  
Attaching an S3 access point to an FSx for OpenZFS volume doesn't change the volume's behavior when the volume is accessed directly via NFS. All existing operations against the volume will continue to work as before. Restrictions that you include in an S3 access point policy apply only to requests made using the access point.

## Server-side encryption with Amazon FSx (SSE-FSX)


All Amazon FSx file systems have encryption at rest enabled by default, with keys managed using Amazon Key Management Service. Data is automatically encrypted and decrypted as it is written to and read from the file system. These processes are handled transparently by Amazon FSx.

## IAM access point policies


Amazon S3 access points support Amazon Identity and Access Management (IAM) resource policies that allow you to control the use of the access point by resource, user, or other conditions. For an application or user to be able to access objects through an access point, both the access point and the underlying data source must permit the request.

The `s3:PutAccessPointPolicy` permission is required to create an optional access point policy.

After you attach an S3 access point to an Amazon FSx volume, all existing operations against the volume will continue to work as before. Restrictions that you include in an access point policy apply only to requests made through that access point. For more information, see [Configuring IAM policies for using access points](https://docs.amazonaws.cn/AmazonS3/latest/userguide/access-points-policies.html) in the *Amazon Simple Storage Service User Guide*.

You can configure an access point policy when you create an access point attached to an FSx for OpenZFS volume using the Amazon FSx console. To add, modify, or delete an access point policy on an existing S3 access point, you can use the S3 console, CLI, or API. 
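As an illustrative sketch, an access point policy like the following grants a single IAM user permission to read and write files through the access point. The user name, account ID, and access point name are placeholder values; note that object resources through an access point use the `/object/*` suffix on the access point ARN.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws-cn:iam::111122223333:user/Jane"
            },
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws-cn:s3:us-east-1:111122223333:accesspoint/my-openzfs-ap/object/*"
        }
    ]
}
```

Remember that the file system user identity configured on the access point must also permit the requested file operations; the policy alone does not grant file-level access.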

# Creating an access point


You can create and manage S3 access points that attach to Amazon FSx volumes using the Amazon FSx console, CLI, API, and supported SDKs.

The maximum number of S3 access points that can be attached to an FSx for OpenZFS (HA) file system depends on the file system's throughput capacity. For more information, see [Resource quotas for each file system](limits.md#limits-openzfs-resources-file-system).

**Note**  
Because you might want to publicize your S3 access point name so that other users can use the access point, avoid including sensitive information in the S3 access point name. Access point names are published in a publicly accessible database known as the Domain Name System (DNS). For more information about access point names, see [Access points naming rules](access-point-restrictions-limitations-naming-rules.md#access-points-naming-rules).

## Required permissions


The following permissions are required to create an S3 access point attached to an Amazon FSx volume:
+ `fsx:CreateAndAttachS3AccessPoint`
+ `s3:CreateAccessPoint`
+ `s3:GetAccessPoint`

The `s3:PutAccessPointPolicy` permission is required to create an optional Access Point policy using either the Amazon FSx or S3 console. For more information, see [IAM access point policies](s3-ap-manage-access-fsx.md#access-points-policies).
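As a sketch, an IAM identity policy granting these permissions might look like the following. The `Resource` element is left open for brevity; you can scope it down to specific file systems and access points.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "fsx:CreateAndAttachS3AccessPoint",
                "s3:CreateAccessPoint",
                "s3:GetAccessPoint",
                "s3:PutAccessPointPolicy"
            ],
            "Resource": "*"
        }
    ]
}
```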

To create an access point, see the following topics.

**Topics**
+ [Required permissions](#create-ap-permissions)
+ [Creating access points](create-access-points.md)
+ [Creating access points restricted to a virtual private cloud](access-points-vpc.md)

# Creating access points


The FSx for OpenZFS volume must already exist in your account before you can create an S3 access point for it.

To create the S3 access point attached to an FSx for OpenZFS volume, you specify the following properties:
+ The access point name. For information about access point naming rules, see [Access points naming rules](access-point-restrictions-limitations-naming-rules.md#access-points-naming-rules).
+ The file system user identity to use for authorizing file access requests made using the access point. Specify the POSIX user ID and group ID, and any secondary group IDs you want to include. For more information, see [File system user identity](s3-ap-manage-access-fsx.md#file-system-user-identity).
+ The access point's network configuration, which determines whether the access point is accessible from the internet or restricted to a specific virtual private cloud (VPC). For more information, see [Creating access points restricted to a virtual private cloud](access-points-vpc.md).

## To create an S3 access point attached to an FSx volume (FSx console)


1. Open the Amazon FSx console at [https://console.amazonaws.cn/fsx/](https://console.amazonaws.cn/fsx/).

1. In the navigation bar on the top of the page, choose the Amazon Web Services Region in which you want to create an access point. The access point must be created in the same Region as the associated volume.

1. In the left navigation pane, choose **Volumes**.

1. On the **Volumes** page, choose the FSx for OpenZFS volume that you want to attach the access point to.

1. Display the **Create S3 access point** page by choosing **Create S3 access point** from the **Actions** menu.

1. For **Access point name**, enter the name for the access point. For more information about guidelines and restrictions for access point names, see [Access points naming rules](access-point-restrictions-limitations-naming-rules.md#access-points-naming-rules).

   The **Data source details** are populated with the information for the volume that you chose.

1. The file system user identity is used by Amazon FSx for authorizing file access requests that are made using this access point. Be sure that the file system user you specify has the correct permissions on the FSx for OpenZFS volume.

   For **POSIX user ID**, enter the user's POSIX user ID.

1. For **POSIX group ID**, enter the user's POSIX group ID.

1. Enter any **Secondary group IDs** for the file system user identity.

1. In the **Network configuration** panel, choose whether the access point is accessible from the internet or restricted to a specific virtual private cloud (VPC).

   For **Network origin**, choose **Internet** to make the access point accessible from the internet, or choose **Virtual private cloud (VPC)**, and enter the **VPC ID** that you want to limit access to the access point from.

   For more information about network origins for access points, see [Creating access points restricted to a virtual private cloud](access-points-vpc.md).

1. (Optional) Under **Access Point Policy - *optional***, specify an optional access point policy. Be sure to resolve any policy warnings, errors, and suggestions. For more information about specifying an access point policy, see [Configuring IAM policies for using access points](https://docs.amazonaws.cn/AmazonS3/latest/userguide/access-points-policies.html) in the *Amazon Simple Storage Service User Guide*.

1. Review the access point attachment configuration, and then choose **Create access point**.

## To create an S3 access point attached to an FSx volume (CLI)


The following example command creates an access point named *`my-openzfs-ap`* that is attached to the FSx for OpenZFS volume *`fsvol-0123456789abcdef9`* in the account *`111122223333`*.

```
$ aws fsx create-and-attach-s3-access-point --name my-openzfs-ap --type OPENZFS --openzfs-configuration \
   VolumeId=fsvol-0123456789abcdef9,FileSystemIdentity='{Type=POSIX,PosixUser={Uid=1234567,Gid=1234567}}' \
   --s3-access-point VpcConfiguration='{VpcId=vpc-0123467}',Policy=access-point-policy-json
```

For a successful request, the system responds by returning the new S3 access point attachment.

```
{
    "S3AccessPointAttachment": {
        "CreationTime": 1728935791.8,
        "Lifecycle": "CREATING",
        "LifecycleTransitionReason": {
            "Message": "string"
        },
        "Name": "my-openzfs-ap",
        "OpenZFSConfiguration": {
            "VolumeId": "fsvol-0123456789abcdef9",
            "FileSystemIdentity": {
                "Type": "POSIX",
                "PosixUser": {
                    "Uid": 1234567,
                    "Gid": 1234567,
                    "SecondaryGids": []
                }
            }
        },
        "S3AccessPoint": {
            "ResourceARN": "arn:aws-cn:s3:us-east-1:111122223333:accesspoint/my-openzfs-ap",
            "Alias": "my-openzfs-ap-aqfqprnstn7aefdfbarligizwgyfouse1a-ext-s3alias",
            "VpcConfiguration": {
                "VpcId": "vpc-0123467"
            }
        }
    }
}
```

# Creating access points restricted to a virtual private cloud
Creating access points restricted to a VPC

When you create an access point, you can choose to make the access point accessible from the internet, or you can specify that all requests made through that access point must originate from a specific Amazon Virtual Private Cloud. An access point that's accessible from the internet is said to have a network origin of `Internet`. It can be used from anywhere on the internet, subject to any other access restrictions in place for the access point, underlying bucket or Amazon FSx volume, and related resources, such as the requested objects. An access point that's only accessible from a specified Amazon VPC has a network origin of `VPC`, and Amazon S3 rejects any request made to the access point that doesn't originate from that Amazon VPC.

**Important**  
You can only specify an access point's network origin when you create the access point. After you create the access point, you can't change its network origin.

To restrict an access point to Amazon VPC-only access, you include the `VpcConfiguration` parameter with the request to create the access point. In the `VpcConfiguration` parameter, you specify the Amazon VPC ID that you want to be able to use the access point. If a request is made through the access point, the request must originate from the Amazon VPC or Amazon S3 will reject it. 

You can retrieve an access point's network origin using the Amazon CLI, Amazon SDKs, or REST APIs. If an access point has an Amazon VPC configuration specified, its network origin is `VPC`. Otherwise, the access point's network origin is `Internet`.
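For example, you can check the network origin with the S3 Control `get-access-point` command, using a `--query` expression to return just the `NetworkOrigin` field. The account ID and access point name here are illustrative.

```
$ aws s3control get-access-point --account-id 111122223333 --name example-vpc-ap --query NetworkOrigin
"VPC"
```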

**Example**  
***Example: Create an access point that's restricted to Amazon VPC access***  
The following example creates an access point named `example-vpc-ap` attached to the FSx for OpenZFS volume `fsvol-0123456789abcdef9` in account `111122223333` that allows access only from the `vpc-1a2b3c` Amazon VPC. The example then verifies that the new access point has a network origin of `VPC`.  

```
$ aws fsx create-and-attach-s3-access-point --name example-vpc-ap --type OPENZFS --openzfs-configuration \
   VolumeId=fsvol-0123456789abcdef9,FileSystemIdentity='{Type=POSIX,PosixUser={Uid=1234567,Gid=1234567}}' \
   --s3-access-point VpcConfiguration='{VpcId=vpc-1a2b3c}',Policy=access-point-policy-json
```

```
{
    "S3AccessPointAttachment": {
        "Lifecycle": "CREATING",
        "CreationTime": 1728935791.8,
        "Name": "example-vpc-ap",
        "OpenZFSConfiguration": {
            "VolumeId": "fsvol-0123456789abcdef9",
            "FileSystemIdentity": {
                "Type": "POSIX",
                "PosixUser": {
                    "Uid": 1234567,
                    "Gid": 1234567
                }
            }
        },
        "S3AccessPoint": {
            "ResourceARN": "arn:aws-cn:s3:us-east-1:111122223333:accesspoint/example-vpc-ap",
            "Alias": "access-point-abcdef0123456789ab12jj77xy51zacd4-ext-s3alias",
            "VpcConfiguration": {
                "VpcId": "vpc-1a2b3c"
            }
        }
    }
}
```

To use an access point with an Amazon VPC, you must modify the access policy for your Amazon VPC endpoint. Amazon VPC endpoints allow traffic to flow from your Amazon VPC to Amazon S3. They have access control policies that control how resources within the Amazon VPC are allowed to interact with Amazon S3. Requests from your Amazon VPC to Amazon S3 only succeed through an access point if the Amazon VPC endpoint policy grants access to both the access point and the underlying volume.

**Note**  
To make resources accessible only within an Amazon VPC, make sure to create a [private hosted zone](https://docs.amazonaws.cn/Route53/latest/DeveloperGuide/hosted-zone-private-creating.html) for your Amazon VPC endpoint. To use a private hosted zone, [modify your Amazon VPC settings](https://docs.amazonaws.cn/vpc/latest/userguide/vpc-dns.html#vpc-dns-updating) so that the [Amazon VPC network attributes](https://docs.amazonaws.cn/vpc/latest/userguide/vpc-dns.html#vpc-dns-support) `enableDnsHostnames` and `enableDnsSupport` are set to `true`.
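For example, you can set both DNS attributes with the Amazon CLI. The `modify-vpc-attribute` command accepts only one attribute per call, so two calls are needed; the VPC ID is illustrative.

```
$ aws ec2 modify-vpc-attribute --vpc-id vpc-1a2b3c --enable-dns-support '{"Value":true}'
$ aws ec2 modify-vpc-attribute --vpc-id vpc-1a2b3c --enable-dns-hostnames '{"Value":true}'
```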

The following example policy statement configures an Amazon VPC endpoint to allow calls to `GetObject` and an access point named `example-vpc-ap`.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
    {
        "Principal": "*",
        "Action": [
            "s3:GetObject"
        ],
        "Effect": "Allow",
        "Resource": [
            "arn:aws-cn:s3:us-east-1:123456789012:accesspoint/example-vpc-ap/object/*"
        ]
    }]
}
```

------

**Note**  
The `Resource` declaration in this example uses an Amazon Resource Name (ARN) to specify the access point. 

For more information about Amazon VPC endpoint policies, see [Gateway endpoints for Amazon S3](https://docs.amazonaws.cn/vpc/latest/userguide/vpc-endpoints-s3.html#vpc-endpoints-policies-s3) in the *Amazon VPC User Guide*.

# Managing Amazon S3 access points
Managing access points

This section explains how to manage and use your Amazon S3 access points using the Amazon Web Services Management Console, Amazon Command Line Interface, or API.

**Topics**
+ [Listing S3 access point attachments](access-points-list.md)
+ [Viewing access point details](access-points-details.md)
+ [Deleting an S3 access point attachment](delete-access-point.md)

# Listing S3 access point attachments


This section explains how to list S3 access point attachments using the Amazon Web Services Management Console, Amazon Command Line Interface, or REST API.

## To list all the S3 access points attached to an FSx for OpenZFS volume (Amazon FSx console)


1. Open the Amazon FSx console at [https://console.amazonaws.cn/fsx/](https://console.amazonaws.cn/fsx/).

1. In the navigation pane on the left side of the console, choose **Volumes**.

1. On the **Volumes** page, choose the **OpenZFS** volume that you want to view the access point attachments for.

1. On the Volume details page, choose **S3 - new** to view a list of all the S3 access points attached to the volume.

## To list all the S3 access points attached to an FSx for OpenZFS volume (Amazon CLI)


The following [https://docs.amazonaws.cn/fsx/latest/APIReference/API_DescribeS3AccessPointAttachments.html](https://docs.amazonaws.cn/fsx/latest/APIReference/API_DescribeS3AccessPointAttachments.html) example command shows how you can use the Amazon CLI to list S3 access point attachments.

The following command lists all the S3 access points attached to volumes on the FSx for OpenZFS file system `fs-0abcdef123456789`.

```
aws fsx describe-s3-access-point-attachments --filters Name=file-system-id,Values=fs-0abcdef123456789
```

The following command lists the S3 access points attached to the FSx for OpenZFS volume `fsvol-9abcdef123456789`.

```
aws fsx describe-s3-access-point-attachments --filters Name=volume-id,Values=fsvol-9abcdef123456789
```

For more information and examples, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/list-access-points.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/list-access-points.html) in the *Amazon CLI Command Reference*.

# Viewing access point details


This section explains how to view the details of S3 access points using the Amazon Web Services Management Console, Amazon Command Line Interface, or REST API.

## To view the details of S3 access points attached to an FSx for OpenZFS volume (Amazon FSx console)


1. Open the Amazon FSx console at [https://console.amazonaws.cn/fsx/](https://console.amazonaws.cn/fsx/).

1. Navigate to the volume that is attached to the access point whose details you want to view.

1. Choose **S3 - new** to display the list of access points attached to the volume.

1. Choose the access point whose details you want to view.

1. Under **S3 access point attachment summary**, view configuration details and properties for the selected access point.

   The **File system user identity** configuration and the **S3 access point permissions** policy are also listed for the access point attachment.

1. To view the access point's S3 configuration in the Amazon S3 console, choose the S3 access point name displayed under **S3 access point**. It takes you to the access point's detail page in the Amazon S3 console.

# Deleting an S3 access point attachment


This section explains how to delete S3 access points using the Amazon Web Services Management Console, Amazon Command Line Interface, or REST API.

The `fsx:DetachAndDeleteS3AccessPoint` and `s3control:DeleteAccessPoint` permissions are required to delete an S3 access point attachment.

## To delete an S3 access point attached to an FSx for OpenZFS volume (Amazon FSx console)


1. Open the Amazon FSx console at [https://console.amazonaws.cn/fsx/](https://console.amazonaws.cn/fsx/).

1. Navigate to the volume to which the S3 access point that you want to delete is attached.

1. Choose **S3 - new** to display the list of S3 access points attached to the volume.

1. Select the S3 access point attachment that you want to delete.

1. Choose **Delete**.

1. Confirm that you want to delete the S3 access point, and choose **Delete**.

## To delete an S3 access point attached to an FSx for OpenZFS volume (Amazon CLI)

+ To delete an S3 access point attachment, use the [detach-and-delete-s3-access-point](https://docs.amazonaws.cn/cli/latest/reference/fsx/detach-and-delete-s3-access-point.html) CLI command (or the equivalent [DetachAndDeleteS3AccessPoint](https://docs.amazonaws.cn/fsx/latest/APIReference/API_DetachAndDeleteS3AccessPoint.html) API operation), as shown in the following example. Use the `--name` property to specify the name of the S3 access point attachment that you want to delete.

  ```
  aws fsx detach-and-delete-s3-access-point \
      --region us-east-1 \
      --name my-openzfs-ap
  ```

# Using access points
Using access points

The following examples demonstrate how to use access points to access file data stored on an FSx for OpenZFS volume using the S3 API. For a full list of the Amazon S3 API operations supported by access points attached to an FSx for OpenZFS volume, see [Access point compatibility](access-points-object-api-support.md). 

**Note**  
Files on FSx for OpenZFS volumes are identified with a `StorageClass` of `FSX_OPENZFS`.

**Topics**
+ [Downloading a file using an S3 access point](get-object-ap.md)
+ [Uploading a file using an S3 access point](put-object-ap.md)
+ [Listing files using an S3 access point](list-object-ap.md)
+ [Tagging a file using an S3 access point](add-tag-set-ap.md)
+ [Deleting a file using an S3 access point](delete-object-ap.md)

# Downloading a file using an S3 access point
Download a file

The following `get-object` example command shows how you can use the Amazon CLI to download a file through an access point. You must include an outfile, which is a file name for the downloaded object.

The example requests the file *`my-image.jpg`* through the access point *`my-openzfs-ap`* and saves the downloaded file as *`download.jpg`*.

```
$ aws s3api get-object --key my-image.jpg --bucket my-openzfs-ap-hrzrlukc5m36ft7okagglf3gmwluquse1b-ext-s3alias download.jpg
{
    "AcceptRanges": "bytes",
    "LastModified": "Mon, 14 Oct 2024 17:01:48 GMT",
    "ContentLength": 141756,
    "ETag": "\"00751974dc146b76404bb7290f8f51bb\"",
    "ContentType": "binary/octet-stream",
    "ServerSideEncryption": "SSE_FSX",
    "Metadata": {},
    "StorageClass": "FSX_OPENZFS"
}
```

You can also use the REST API to download an object through an access point. For more information, see [https://docs.amazonaws.cn/AmazonS3/latest/API/API_GetObject.html](https://docs.amazonaws.cn/AmazonS3/latest/API/API_GetObject.html) in the *Amazon Simple Storage Service API Reference*.

# Uploading a file using an S3 access point
Upload a file

The following `put-object` example command shows how you can use the Amazon CLI to upload a file through an access point. You specify the local file to upload with the `--body` parameter and the object key to store it under with the `--key` parameter.

The example uploads the local file *`my-new-image.jpg`* through the access point *`my-openzfs-ap`* and stores it under the key *`my-new-image.jpg`*.

```
$ aws s3api put-object --bucket my-openzfs-ap-hrzrlukc5m36ft7okagglf3gmwluquse1b-ext-s3alias --key my-new-image.jpg --body my-new-image.jpg
```

You can also use the REST API to upload an object through an access point. For more information, see [https://docs.amazonaws.cn/AmazonS3/latest/API/API_PutObject.html](https://docs.amazonaws.cn/AmazonS3/latest/API/API_PutObject.html) in the *Amazon Simple Storage Service API Reference*.

# Listing files using an S3 access point
List files

The following example lists files on the attached volume through the access point alias `my-openzfs-ap-hrzrlukc5m36ft7okagglf3gmwluquse1b-ext-s3alias`.

```
$ aws s3api list-objects-v2 --bucket my-openzfs-ap-hrzrlukc5m36ft7okagglf3gmwluquse1b-ext-s3alias
{
    "Contents": [
        {
            "Key": ".hidden-dir-with-data/file.txt",
            "LastModified": "2024-10-29T14:22:05.4359",
            "ETag": "\"88990077ab44cd55ef66aa77\"",
            "Size": 18,
            "StorageClass": "FSX_OPENZFS"
        },
        {
            "Key": "documents/report.rtf",
            "LastModified": "2024-11-02T10:18:15.6621",
            "ETag": "\"ab12cd34ef56a89219zg6aa77\"",
            "Size": 1048576,
            "StorageClass": "FSX_OPENZFS"
        }
    ]
}
```

You can also use the REST API to list your files. For more information, see [https://docs.amazonaws.cn/AmazonS3/latest/API/API_ListObjectsV2.html](https://docs.amazonaws.cn/AmazonS3/latest/API/API_ListObjectsV2.html) in the *Amazon Simple Storage Service API Reference*.

# Tagging a file using an S3 access point
Add a tag-set

The following `put-object-tagging` example command shows how you can use the Amazon CLI to add a tag-set through an access point. Each tag is a key-value pair. For more information, see [Categorizing your storage using tags](https://docs.amazonaws.cn/AmazonS3/latest/userguide/object-tagging.html) in the *Amazon Simple Storage Service User Guide*.

The example adds a tag-set to the existing file `my-image.jpg` using the access point *`my-openzfs-ap`*.

```
$ aws s3api put-object-tagging --bucket my-openzfs-ap-hrzrlukc5m36ft7okagglf3gmwluquse1b-ext-s3alias --key my-image.jpg --tagging TagSet=[{Key="finance",Value="true"}] 
```

You can also use the REST API to add a tag-set to an object through an access point. For more information, see [https://docs.amazonaws.cn/AmazonS3/latest/API/API_PutObjectTagging.html](https://docs.amazonaws.cn/AmazonS3/latest/API/API_PutObjectTagging.html) in the *Amazon Simple Storage Service API Reference*.

# Deleting a file using an S3 access point
Delete a file

The following `delete-object` example command shows how you can use the Amazon CLI to delete a file through an access point.

```
$ aws s3api delete-object --bucket my-openzfs-ap-hrzrlukc5m36ft7okagglf3gmwluquse1b-ext-s3alias --key my-image.jpg 
```

You can also use the REST API to delete an object through an access point. For more information, see [https://docs.amazonaws.cn/AmazonS3/latest/API/API_DeleteObject.html](https://docs.amazonaws.cn/AmazonS3/latest/API/API_DeleteObject.html) in the *Amazon Simple Storage Service API Reference*.

# Troubleshooting S3 access point issues
Troubleshooting access points

This section describes symptoms, causes, and resolutions for when you encounter issues accessing your FSx data from S3 access points.

## The file system is unable to handle S3 requests


If the S3 request volume for a particular workload exceeds the file system’s capacity to handle the traffic, you may experience S3 request errors (for example, `Internal Server Error`, `503 Slow Down`, and `Service Unavailable`). You can proactively monitor and alarm on the performance of your file system using Amazon CloudWatch metrics (for example, `Network throughput utilization` and `CPU utilization`). If you observe degraded performance, you can resolve this issue by increasing the file system's throughput capacity.
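As a sketch, the following Amazon CLI command creates a CloudWatch alarm that fires when the file system's average network throughput utilization stays above 80 percent for 15 minutes. The alarm name, file system ID, and SNS topic ARN are illustrative.

```
$ aws cloudwatch put-metric-alarm \
    --alarm-name fsx-network-utilization-high \
    --namespace AWS/FSx \
    --metric-name NetworkThroughputUtilization \
    --dimensions Name=FileSystemId,Value=fs-0abcdef123456789 \
    --statistic Average \
    --period 300 \
    --evaluation-periods 3 \
    --threshold 80 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws-cn:sns:us-east-1:111122223333:my-alerts-topic
```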

## Client ETag mismatch error with Amazon Java SDK v1


When using the Amazon Java SDK v1 to access data via the S3 API, you may encounter an SDK client exception with the following message:

```
Unable to verify integrity of data download. Client calculated content hash didn’t match hash calculated by Amazon S3
```

This error occurs specifically when attempting to retrieve a file that was initially written or has been modified using file protocols. To resolve this issue, you can either configure your Amazon Java SDK to disable MD5 checksum validation on `GetObject` or update to the Amazon Java SDK v2.
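For example, with the Amazon Java SDK v1 you can typically disable the MD5 validation by setting a JVM system property when starting your application. The application JAR name is illustrative, and you should confirm the property against your SDK version.

```
$ java -Dcom.amazonaws.services.s3.disableGetObjectMD5Validation=true -jar my-application.jar
```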

## Access Denied with default S3 access point permissions for automatically created service roles


Some S3-integrated Amazon services create a custom service role and scope the attached permissions to your specific use case. When you specify your S3 access point alias as the S3 resource, those attached permissions may reference your access point using the bucket ARN format (for example, `arn:aws:s3:::my-fsx-ap-foo7detztxouyjpwtu8krroppxytruse1a-ext-s3alias`) rather than the access point ARN format (for example, `arn:aws:s3:us-east-1:1234567890:accesspoint/my-fsx-ap`). To resolve this, modify the policy to use the ARN of the access point.

# Accessing your data using Amazon container services
Accessing data using Amazon container services

In addition to using Amazon EC2 instances, you can also access your Amazon FSx for OpenZFS file systems using other Amazon Web Services services, including Amazon Elastic Container Service, Amazon Elastic Kubernetes Service, and Amazon S3. You can mount your file system from an Amazon ECS Docker container, manage file system and volume life cycle using Amazon EKS, and use S3 access points to access file data using S3. For more information about accessing FSx for OpenZFS file data with S3, see [Accessing your data using Amazon S3 access points](s3accesspoints-for-FSx.md).

To use Amazon EKS clusters to manage the life cycle of your file systems and volumes, see the [Amazon FSx for OpenZFS CSI Driver README](https://github.com/kubernetes-sigs/aws-fsx-openzfs-csi-driver?tab=readme-ov-file#readme).

The following section provides instructions on how to mount your file system from an Amazon ECS Docker container on an Amazon EC2 Linux instance using a bind mount. 

**Note**  
Using the FSx for OpenZFS CSI Driver is not supported for file systems using the Intelligent-Tiering storage class.

**Topics**
+ [Mounting your file system from an Amazon ECS container](#mount-openzfs-ecs-containers)

## Mounting your file system from an Amazon ECS container
Mounting from Amazon ECS

You can access your Amazon FSx for OpenZFS file systems from an Amazon Elastic Container Service (Amazon ECS) Docker container on an Amazon EC2 Linux instance by mounting volumes using a bind mount. For more information, see [Bind mounts](https://docs.amazonaws.cn/AmazonECS/latest/developerguide/bind-mounts.html) in the *Amazon Elastic Container Service Developer Guide*.

**To mount a volume on an Amazon ECS Linux container**

1. Create an ECS cluster using the **EC2 Linux + Networking** cluster template for your Linux containers. For more information, see [Clusters](https://docs.amazonaws.cn/AmazonECS/latest/developerguide/clusters.html) in the *Amazon ECS Developer Guide*.

1. Create a directory on the EC2 instance for mounting the volume as follows:

   ```
   sudo mkdir /fsxopenzfs
   ```

1. Mount your FSx for OpenZFS volume on the Linux EC2 instance by either using a user-data script during instance launch, or by running the following commands:

   ```
   sudo mount -t nfs -o nfsvers=NFS_version file-system-dns-name:/volume-path /localpath
   ```

   The following example uses sample values in the mount command.

   ```
   sudo mount -t nfs -o nfsvers=4.1 fs-01234567890abcdef1.fsx.us-east-1.amazonaws.com:/fsx/vol1 /fsxopenzfs
   ```

   You can also use the file system's IP address instead of its DNS name.

   ```
   sudo mount -t nfs -o nfsvers=4.1 198.51.100.1:/fsx/vol1 /fsxopenzfs
   ```

1. When creating your Amazon ECS task definitions, add the following `volumes` and `mountPoints` container properties in the JSON container definition. Replace the `sourcePath` with the mount point and directory in your FSx for OpenZFS file system.

   ```
   {
       "volumes": [
           {
               "name": "openzfs-volume",
               "host": {
                   "sourcePath": "mountpoint"
               }
           }
       ],
       "mountPoints": [
           {
               "containerPath": "container_local_path",
               "sourceVolume": "openzfs-volume"
           }
       ],
       .
       .
       .
   }
   ```
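To make the mount from step 3 persist across instance reboots, you can also add an entry like the following to `/etc/fstab` on the EC2 instance, using the sample DNS name and mount point from the earlier example. The `_netdev` option tells the instance to wait for networking before mounting.

```
fs-01234567890abcdef1.fsx.us-east-1.amazonaws.com:/fsx/vol1 /fsxopenzfs nfs nfsvers=4.1,_netdev 0 0
```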