Troubleshooting data repository task failures
You can turn on logging to CloudWatch Logs to log information about any failures experienced while importing or exporting files using data repository tasks. For information about CloudWatch Logs event logs, see Data repository event logs.
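Once logging is enabled, one way to inspect those events is with the AWS CLI v2. This is a minimal sketch; the log group name below is a placeholder, so substitute the log group configured for your file system's event logging:

```shell
# Show the last hour of data repository event log entries that mention
# a failure. "/aws/fsx/my-file-system" is a hypothetical log group name;
# check your file system's logging configuration for the real one.
aws logs tail /aws/fsx/my-file-system --since 1h \
  --filter-pattern "FAILED"
```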
When a data repository task fails, you can find the number of files that Amazon FSx failed to process in the Files failed to export field on the console's Task status page. Alternatively, you can use the CLI or API and view the task's Status: FailedCount property. For information about accessing this information, see Accessing data repository tasks.
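As a sketch of the CLI approach, the following uses the `aws fsx describe-data-repository-tasks` command to read the failed-file count; the task ID is a placeholder:

```shell
# Retrieve the FailedCount from a task's Status. Replace the task ID
# with your own (shown on the console or returned when you created
# the task).
aws fsx describe-data-repository-tasks \
  --task-ids task-0123456789abcdef0 \
  --query "DataRepositoryTasks[0].Status.FailedCount"
```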
For data repository tasks, Amazon FSx also optionally provides information about the specific files and directories that failed in a completion report. The task completion report contains the file or directory path on the Lustre file system that failed, its status, and the failure reason. For more information, see Working with task completion reports.
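After downloading a completion report, you can summarize the failed entries with standard tools. The sketch below assumes a simple three-column CSV layout (file path, status, error code); check your report's actual format before relying on it:

```shell
# Build a small sample report in the assumed path,status,error layout.
cat > /tmp/task-report.csv <<'EOF'
document/file1.txt,failed,EACCES
document/file2.txt,succeeded,
logs/file3.log,failed,ENOENT
EOF

# Print the path and error code of each entry whose status is "failed".
awk -F, '$2 == "failed" { print $1 ": " $3 }' /tmp/task-report.csv
# → document/file1.txt: EACCES
# → logs/file3.log: ENOENT
```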
A data repository task can fail for several reasons, including those listed in the following table.
| Error Code | Explanation |
|---|---|
| | The maximum object size supported by Amazon S3 is 5 TiB. |
| | An error occurred within the Amazon FSx file system for an import, export, or release task. Generally, this error code means that the Amazon FSx file system that the failed task ran on is in a FAILED lifecycle state. When this occurs, the affected files might not be recoverable due to data loss. Otherwise, you can use hierarchical storage management (HSM) commands to export the files and directories to the data repository on S3. For more information, see Exporting files using HSM commands. |
| | Amazon FSx was unable to release the file because it has not been exported to a linked S3 bucket. You must use automatic export or export data repository tasks to ensure that your files are first exported to your linked Amazon S3 bucket. |
| | The export path is too long. The maximum object key length supported by S3 is 1,024 characters. |
| | Amazon FSx was unable to export or release the file because it was being accessed by another client on the file system. You can retry the DataRepositoryTask after your workflow has finished writing to the file. |
| | Access was denied to Amazon S3 for a data repository export or import task. For export tasks, the Amazon FSx file system must have permission to write to the linked data repository. Because an export task requires data to flow outside a file system's VPC, this error can also occur if the target repository has a bucket policy that restricts such access. For import tasks, the Amazon FSx file system must have permission to read from the linked data repository. If your S3 bucket uses server-side encryption with customer managed keys stored in Amazon Key Management Service (SSE-KMS), you must follow the policy configurations in Working with server-side encrypted Amazon S3 buckets. If your S3 bucket contains objects uploaded from a different Amazon Web Services account than your file system's linked S3 bucket account, you can ensure that your data repository tasks can modify S3 metadata and overwrite S3 objects regardless of which account uploaded them. We recommend that you enable the S3 Object Ownership feature for your S3 bucket. This feature enables you to take ownership of new objects that other Amazon Web Services accounts upload to your bucket by requiring those uploads to grant the bucket owner full control. |
| | Amazon FSx encountered an S3-related error that isn't otherwise listed in this table. |
| | Amazon FSx was unable to export a hard link file because the source file doesn't exist in the data repository. |
| | Amazon FSx successfully imported a non-symlink object from an S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage class. |
| | Amazon FSx was unable to import or export the file because it doesn't exist in the data repository. |
| | The Amazon S3 object exists but can't be imported because it isn't a POSIX-compliant object. For information about supported POSIX metadata, see POSIX metadata support for data repositories. |
| | Amazon FSx was unable to release the file because automatic export is processing a rename of the file. The automatic export rename process must finish before the file can be released. |
| | Amazon FSx was unable to import a symlink object because it's in an unsupported Amazon S3 storage class, such as S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive. |
| | Amazon FSx was unable to release the file from the file system because the file was deleted from the data repository before it could be released. |
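If you need to export individual files manually with HSM commands, as mentioned for file system failures above, the following is a minimal sketch. It assumes a Lustre client with the file system mounted; the mount point and file path are placeholders:

```shell
# Request that the file be archived (exported) to the linked data
# repository. "/mnt/fsx/path/to/file" is a placeholder path.
sudo lfs hsm_archive /mnt/fsx/path/to/file

# Check the file's HSM state; a state that includes "archived"
# indicates the file has been exported.
lfs hsm_state /mnt/fsx/path/to/file
```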