Troubleshooting data repository task failures

You can turn on logging to CloudWatch Logs to log information about any failures experienced while importing or exporting files using data repository tasks. For information about CloudWatch Logs event logs, see Data repository event logs.
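
For example, here is a minimal boto3 sketch of turning on event logging for an existing file system; the file system ID and CloudWatch Logs destination ARN below are placeholders, not values from this page:

```python
import boto3

fsx = boto3.client("fsx")

# Log both warning and error events for data repository operations.
# The file system ID and log group ARN are placeholders; substitute your own.
fsx.update_file_system(
    FileSystemId="fs-0123456789abcdef0",
    LustreConfiguration={
        "LogConfiguration": {
            "Level": "WARN_ERROR",
            "Destination": "arn:aws:logs:us-east-1:111122223333:log-group:/aws/fsx/my-file-system",
        }
    },
)
```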

When a data repository task fails, you can find the number of files that Amazon FSx failed to process under Files failed to export on the console's Task status page. Alternatively, you can use the CLI or API to view the task's Status: FailedCount property. For more information, see Accessing data repository tasks.
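
For example, the following boto3 sketch reads the failed-file count for a task; the task ID is a placeholder:

```python
import boto3

fsx = boto3.client("fsx")

# The task ID below is a placeholder for the task you want to inspect.
response = fsx.describe_data_repository_tasks(
    TaskIds=["task-0123456789abcdef0"]
)
task = response["DataRepositoryTasks"][0]
status = task.get("Status", {})

print("Lifecycle:", task["Lifecycle"])
print("Failed files:", status.get("FailedCount"))
print("Succeeded files:", status.get("SucceededCount"))
```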

For data repository tasks, Amazon FSx also optionally provides information about the specific files and directories that failed in a completion report. The task completion report contains the file or directory path on the Lustre file system that failed, its status, and the failure reason. For more information, see Working with task completion reports.
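
As a sketch, you can request a completion report that covers only failed files when you create a task with boto3; the file system ID, export path, and report location below are placeholders:

```python
import boto3

fsx = boto3.client("fsx")

# All identifiers below are placeholders. Scope FAILED_FILES_ONLY limits the
# report to files the task couldn't process; REPORT_CSV_20191124 is the CSV
# report format.
fsx.create_data_repository_task(
    Type="EXPORT_TO_REPOSITORY",
    FileSystemId="fs-0123456789abcdef0",
    Paths=["path/to/export"],
    Report={
        "Enabled": True,
        "Path": "s3://amzn-s3-demo-bucket/report-prefix",
        "Format": "REPORT_CSV_20191124",
        "Scope": "FAILED_FILES_ONLY",
    },
)
```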

A data repository task can fail for several reasons, including the following error codes.

FileSizeTooLarge

Amazon FSx was unable to export the file because it exceeds the maximum object size supported by Amazon S3, which is 5 TiB.

InternalError

An error occurred within the Amazon FSx file system for an import, export, or release task. Generally, this error code means that the Amazon FSx file system that the failed task ran on is in a FAILED lifecycle state. When this occurs, the affected files might not be recoverable due to data loss. If the files are still accessible, you can use hierarchical storage management (HSM) commands to export the files and directories to the data repository on S3. For more information, see Exporting files using HSM commands.

OperationNotPermitted

Amazon FSx was unable to release the file because it has not been exported to a linked S3 bucket. You must use automatic export or an export data repository task to ensure that your files are first exported to your linked Amazon S3 bucket.

PathSizeTooLong

The export path is too long. The maximum object key length supported by S3 is 1,024 characters.

ResourceBusy

Amazon FSx was unable to export or release the file because it was being accessed by another client on the file system. You can retry the DataRepositoryTask after your workflow has finished writing to the file.

S3AccessDenied

Access was denied to Amazon S3 for a data repository export or import task.

For export tasks, the Amazon FSx file system must have permission to perform the S3:PutObject operation to export to a linked data repository on S3. This permission is granted in the AWSServiceRoleForFSxS3Access_fs-0123456789abcdef0 service-linked role. For more information, see Using service-linked roles for Amazon FSx.

For export tasks, this error can also occur if the target repository has a bucket policy that contains one of the aws:SourceVpc or aws:SourceVpce IAM global condition keys, because an export task requires data to flow outside the file system's VPC.

For import tasks, the Amazon FSx file system must have permission to perform the S3:HeadObject and S3:GetObject operations to import from a linked data repository on S3.

For import tasks, if your S3 bucket uses server-side encryption with customer managed keys stored in Amazon Key Management Service (SSE-KMS), you must follow the policy configurations in Working with server-side encrypted Amazon S3 buckets.

If your S3 bucket contains objects uploaded from a different Amazon Web Services account than the account that owns your file system's linked S3 bucket, we recommend that you enable the S3 Object Ownership feature for your bucket. Doing so ensures that your data repository tasks can modify S3 metadata and overwrite S3 objects regardless of which account uploaded them. This feature enables you to take ownership of new objects that other Amazon Web Services accounts upload to your bucket, by forcing uploads to provide the bucket-owner-full-control canned ACL (for example, --acl bucket-owner-full-control on the AWS CLI). You enable S3 Object Ownership by choosing the Bucket owner preferred option for your S3 bucket. For more information, see Controlling ownership of uploaded objects using S3 Object Ownership in the Amazon S3 User Guide.
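
As a sketch, the Bucket owner preferred setting can also be applied programmatically; the bucket name is a placeholder:

```python
import boto3

s3 = boto3.client("s3")

# The bucket name is a placeholder. BucketOwnerPreferred corresponds to the
# "Bucket owner preferred" console option: new objects uploaded with the
# bucket-owner-full-control canned ACL become owned by the bucket owner.
s3.put_bucket_ownership_controls(
    Bucket="amzn-s3-demo-bucket",
    OwnershipControls={"Rules": [{"ObjectOwnership": "BucketOwnerPreferred"}]},
)
```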

S3Error

Amazon FSx encountered an S3-related error that wasn't S3AccessDenied.

S3FileDeleted

Amazon FSx was unable to export a hard link file because the source file doesn't exist in the data repository.

S3ObjectInUnsupportedTier

Amazon FSx successfully imported a non-symlink object from the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage class. The FileStatus in the task completion report will be succeeded with warning. The warning indicates that, to retrieve the data, you must first restore the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive object and then use the hsm_restore command to import the object.
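
As a hedged sketch of the restore step, the following boto3 call initiates an S3 restore for an archived object; the bucket, key, and retention period are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Placeholders: bucket, key, and number of days the restored copy stays
# available. Standard tier retrievals from S3 Glacier Flexible Retrieval
# typically complete within hours; S3 Glacier Deep Archive takes longer.
s3.restore_object(
    Bucket="amzn-s3-demo-bucket",
    Key="path/to/archived-object",
    RestoreRequest={
        "Days": 7,
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)

# After the restore completes, run the hsm_restore command on a Lustre
# client (for example, sudo lfs hsm_restore path/to/file) to import the data.
```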

S3ObjectNotFound

Amazon FSx was unable to import or export the file because it doesn't exist in the data repository.

S3ObjectPathNotPosixCompliant

The Amazon S3 object exists but can't be imported because it isn't a POSIX-compliant object. For information about supported POSIX metadata, see POSIX metadata support for data repositories.

S3ObjectUpdateInProgressFromFileRename

Amazon FSx was unable to release the file because automatic export is processing a rename of the file. The automatic export rename process must finish before the file can be released.

S3SymlinkInUnsupportedTier

Amazon FSx was unable to import a symlink object because it's in an unsupported Amazon S3 storage class, such as S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive. The FileStatus in the task completion report will be failed.

SourceObjectDeletedBeforeReleasing

Amazon FSx was unable to release the file from the file system because the file was deleted from the data repository before it could be released.