Troubleshooting issues with Amazon DataSync transfers
The following topics describe issues common to Amazon DataSync locations and tasks and how you can resolve them.
How do I configure DataSync to use a specific NFS or SMB version to mount my file share?
For locations that support Network File System (NFS) or Server Message Block (SMB), DataSync by default chooses the protocol version for you. You can also specify the version yourself by using the DataSync console or API.
Action to take (DataSync console)
When creating your NFS or SMB location, configure the protocol version that you want DataSync to use. For more information, see Configuring Amazon DataSync transfers with an NFS file server or Configuring Amazon DataSync transfers with an SMB file server.
Action to take (DataSync API)
When creating or updating your NFS or SMB location, specify the Version parameter. For example, see CreateLocationNfs or CreateLocationSmb.
The following example Amazon CLI command creates an NFS location that DataSync mounts using NFS version 4.0.
$ aws datasync create-location-nfs --server-hostname your-server-address \
    --on-prem-config AgentArns=your-agent-arns \
    --subdirectory nfs-export-path \
    --mount-options Version="NFS4_0"
The following example Amazon CLI command creates an SMB location that DataSync mounts using SMB version 3.
$ aws datasync create-location-smb --server-hostname your-server-address \
    --user your-user --password your-password \
    --agent-arns your-agent-arns \
    --subdirectory smb-export-path \
    --mount-options Version="SMB3"
Error: Invalid SyncOption value. Option: TransferMode,PreserveDeletedFiles, Value: ALL,REMOVE
This error occurs when you're creating or editing your DataSync task and you select the Transfer all data option while deselecting the Keep deleted files option. When you transfer all data, DataSync doesn't scan your destination location and doesn't know what to delete. To avoid the error, either keep the Keep deleted files option selected or transfer only data that has changed.
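To illustrate, the following Amazon CLI sketches show option pairings that DataSync accepts; the location ARNs are placeholders, not values from this guide:

```shell
# Valid: transfer all data and keep files that exist only in the destination
$ aws datasync create-task \
    --source-location-arn arn:aws:datasync:us-east-1:111122223333:location/loc-source \
    --destination-location-arn arn:aws:datasync:us-east-1:111122223333:location/loc-dest \
    --options TransferMode=ALL,PreserveDeletedFiles=PRESERVE

# Valid: transfer only changed data and remove files deleted from the source
$ aws datasync create-task \
    --source-location-arn arn:aws:datasync:us-east-1:111122223333:location/loc-source \
    --destination-location-arn arn:aws:datasync:us-east-1:111122223333:location/loc-dest \
    --options TransferMode=CHANGED,PreserveDeletedFiles=REMOVE
```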
My task keeps failing with an EniNotFound error
This error occurs if you delete one of your task's network interfaces in your virtual private cloud (VPC). If your task is scheduled or queued, the task will fail if it's missing a network interface required to transfer your data.
Actions to take
You have the following options to work around this issue:
- Manually restart the task. When you do this, DataSync creates any missing network interfaces it needs to run the task.
- If you need to clean up resources in your VPC, make sure you don't delete network interfaces related to a DataSync task you're still using.
To see the network interfaces allocated to your task, do one of the following:
- Use the DescribeTask operation. You can view the network interfaces in the SourceNetworkInterfaceArns and DestinationNetworkInterfaceArns response elements.
- In the Amazon EC2 console, search for your task ID (such as task-f012345678abcdef0) to find its network interfaces.
- Consider not running your tasks automatically. This could include disabling task queueing or scheduling (through DataSync or custom automation).
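For example, the following Amazon CLI command returns the network interfaces allocated to a task; the task ARN is a placeholder:

```shell
# List the source and destination network interfaces for a task
$ aws datasync describe-task \
    --task-arn arn:aws:datasync:us-east-1:111122223333:task/task-f012345678abcdef0 \
    --query '{Source: SourceNetworkInterfaceArns, Destination: DestinationNetworkInterfaceArns}'
```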
My task failed with a DataSync currently does not support server-side NFSv4 ID mapping error
This error can occur if a file system involved in your transfer uses NFS version 4 ID mapping, a feature that DataSync doesn't support.
Action to take
You have a couple of options to work around this issue:
- Create a new DataSync location for the file system that uses NFS version 3.
- Disable NFS version 4 ID mapping on the file system.
Retry the transfer. Either option should resolve the issue.
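As a hedged example, on Linux NFS servers that use the kernel nfsd implementation, ID mapping for AUTH_SYS mounts is commonly controlled by a module parameter; the exact mechanism varies by NFS server, so check your server's documentation:

```shell
# On the NFS server (requires root): turn off NFSv4 ID mapping for
# AUTH_SYS mounts; affects mounts made after the change
$ echo Y | sudo tee /sys/module/nfsd/parameters/nfs4_disable_idmapping
```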
My task status is unavailable and indicates a mount error
DataSync will indicate that your task is unavailable if your agent can't mount an NFS location.
Action to take
First, make sure that the NFS server and export that you specified are both valid. If they aren't, delete the task and create a new one using the correct NFS server and export. For more information, see Configuring Amazon DataSync transfers with an NFS file server.
If the NFS server and export are both valid, it generally indicates one of two things. Either a firewall is preventing the agent from mounting the NFS server, or the NFS server isn't configured to allow the agent to mount it.
Make sure that there's no firewall between the agent and the NFS server. Then make sure that the NFS server is configured to allow the agent to mount the export specified in the task. For information about network and firewall requirements, see Amazon DataSync network requirements.
If you perform these actions and the agent still can't mount the NFS server and export, open a support channel with Amazon Support. For information about how to open a support channel, see Getting help with your agent from Amazon Web Services Support.
My task failed with a Cannot allocate memory error
When your DataSync task fails with a Cannot allocate memory error, it can mean a few different things.
Action to take
Try the following until you no longer see the issue:
- If your transfer involves an agent, make sure that the agent meets the virtual machine (VM) requirements.
- Split your transfer into multiple tasks using filters. It's possible that you're trying to transfer more files or objects than one DataSync task can handle.
- If you still see the issue, contact Amazon Web Services Support.
My task failed with an input/output error
You can get an input/output error message if your storage system fails I/O requests from the DataSync agent. Common reasons for this include a server disk failure, changes to your firewall configuration, or a network router failure.
If the error involves an NFS server or Hadoop Distributed File System (HDFS) cluster, use the following steps to resolve the error.
Action to take (NFS)
First, check your NFS server's logs and metrics to determine if the problem started on the NFS server. If yes, resolve that issue.
Next, check that your network configuration hasn't changed. To check if the NFS server is configured correctly and that DataSync can access it, do the following:
- Set up another NFS client on the same network subnet as the agent.
- Mount your share on that client.
- Validate that the client can read and write to the share successfully.
Action to take (HDFS)
Make sure that your HDFS cluster allows the agent to communicate with the cluster's NameNode and DataNode ports. In most clusters, you can find the port numbers in the following configuration files:
- To find the NameNode port, look in the core-site.xml file under the fs.defaultFS or fs.default.name property (depending on the Hadoop distribution).
- To find the DataNode port, look in the hdfs-site.xml file under the dfs.datanode.address property.
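As an illustration, the relevant properties typically look like the following; the host name and port numbers here are example values, not defaults you should rely on:

```xml
<!-- core-site.xml: the NameNode RPC endpoint (port 8020 is an example) -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode.example.com:8020</value>
</property>

<!-- hdfs-site.xml: the DataNode transfer address (port 9866 is an example) -->
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:9866</value>
</property>
```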
My task execution has a launching status but nothing seems to be happening
Your DataSync task typically gets stuck with a Launching status because the agent is powered off or has lost network connectivity.
Action to take
Make sure that your agent's status is ONLINE. If the agent is OFFLINE, make sure it's powered on.
If the agent is powered on and the task is still Launching, then there's likely a network connection problem between your agent and Amazon. For information about how to test network connectivity, see Testing your agent's connection to Amazon.
If you're still having this issue, see Getting help with your agent from Amazon Web Services Support.
My task execution has had the preparing status for a long time
The time your DataSync transfer task has the Preparing status depends on the amount of data in your transfer source and destination and the performance of those storage systems.
When a task starts, DataSync performs a recursive directory listing to discover all files, objects, directories, and metadata in your source and destination. DataSync uses these listings to identify differences between storage systems and determine what to copy. This process can take a few minutes or even a few hours.
Action to take
You shouldn't have to do anything. Continue to wait for the task status to change to Transferring. If the status still doesn't change, contact Amazon Web Services Support Center.
My NFS transfer has a permissions denied error
You can get a "permissions denied" error message if you configure your NFS file server with root_squash or all_squash and your files don't all have read access.
Action to take
To fix this issue, configure your NFS export with no_root_squash or make sure that the permissions for all of the files that you want to transfer allow read access for all users.
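For example, on a Linux NFS server the export entry in /etc/exports might look like the following; the export path and client subnet are assumptions for illustration:

```shell
# /etc/exports: allow clients in the agent's subnet to mount the export
# without root squashing (path and CIDR are example values)
/export/data 192.0.2.0/24(rw,no_root_squash)
```

After editing /etc/exports, run exportfs -ra on the server to apply the change.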
For DataSync to access directories, you must also enable all-execute access. To make sure that the directory can be mounted, first connect to any computer that has the same network configuration as your agent. Then run the following CLI command:
mount -t nfs -o nfsvers=<your-nfs-server-version> <your-nfs-server-name>:<nfs-export-path-you-specified> <new-test-folder-on-your-computer>
If the issue still isn't resolved, contact Amazon Web Services Support Center.
How long does it take DataSync to verify a task I've run?
By default, DataSync verifies data integrity at the end of a transfer. How long verification takes depends on a number of factors, such as the number of files or objects, the total amount of data in the source and destination storage systems, and the performance of these systems. Verification includes an SHA256 checksum on all file content and an exact comparison of all file metadata.
Action to take
You shouldn't have to do anything. If the task status still doesn't change to Success or Error, contact Amazon Web Services Support Center.
My task fails when transferring to an S3 bucket in another Amazon Web Services account
Unlike DataSync transfers between resources in the same Amazon Web Services account, copying data to an S3 bucket in a different Amazon Web Services account requires some extra steps.
- If your DataSync task fails with an error related to S3 bucket permissions: When creating the task, make sure you're logged in to the Amazon Web Services Management Console using the same IAM role that you specified in your destination S3 bucket's policy. (Note: This isn't the IAM role that gives DataSync permission to write to the S3 bucket.)
- If you're also copying data to a bucket in another Amazon Web Services Region and get an S3 endpoint connection error: Create the DataSync task in the same Region as the destination S3 bucket.
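For reference, a destination bucket policy for a cross-account transfer typically grants access to both roles. The following is a minimal sketch in which the account ID, role names, bucket name, and action list are placeholders that you would adapt to your setup:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DataSyncCrossAccountAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::111122223333:role/datasync-transfer-role",
          "arn:aws:iam::111122223333:role/console-user-role"
        ]
      },
      "Action": ["s3:GetBucketLocation", "s3:ListBucket", "s3:PutObject", "s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-destination-bucket",
        "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
      ]
    }
  ]
}
```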
My task fails with an Unable to list Azure Blobs on the volume root error
If your DataSync transfer task fails with an Unable to list Azure Blobs on the volume root error, there might be an issue with your shared access signature (SAS) token or your Azure storage account's network.
Actions to take
Try the following and run your task again until you fix the issue:
- Make sure that your SAS token has the right permissions to access your Microsoft Azure Blob Storage.
- If you're running your DataSync agent in Azure, configure your storage account to allow access from the virtual network where your agent resides.
- If you're running your agent on Amazon EC2, configure your Azure storage firewall to allow access from the agent's public IP address.
For information on how to configure your Azure storage account's network, see the Azure Blob Storage documentation.
My task's start and end times don't match up with what's in the logs
The start and end times for your task execution that you see in the DataSync console may differ from timestamps you see elsewhere related to your transfer. This is because the console doesn't take into account the time a task execution spends in the launching or queueing states.
For example, your Amazon CloudWatch logs can indicate that your task execution ended later than what's displayed in the DataSync console. You may notice a similar discrepancy in the following areas:
- Logs for the file system or object storage system involved in your transfer
- The last modified date on an Amazon S3 object that DataSync wrote to
- Network traffic coming from the DataSync agent
- Amazon EventBridge events
Error: SyncTaskDeletedByUser
You may see this error unexpectedly when automating some DataSync workflows. For example, you might have a script that deletes your task while a task execution is still running or queued.
To fix this issue, reconfigure your automation so that these types of actions don't overlap.
Error: NoMem
The set of data you're trying to transfer may be too large for DataSync. If you see this error, contact Amazon Web Services Support Center.
Error: FsS3UnableToConnectToEndpoint
DataSync can't connect to your Amazon S3 location. This could mean the location's S3 bucket isn't reachable or the location isn't configured correctly.
Do the following until you resolve the issue:
- Check if DataSync can access your S3 bucket.
- Make sure your location is configured correctly by using the DataSync console or DescribeLocationS3 operation.
Error: FsS3HeadBucketFailed
DataSync can't access the S3 bucket that you're transferring to or from. Check if DataSync has permission to access the bucket by using the Amazon S3 HeadBucket operation.
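You can run the same check yourself with the Amazon CLI; the bucket name is a placeholder. A 403 response points to missing permissions, while a 404 means the bucket doesn't exist:

```shell
# Check whether your credentials can reach the bucket
$ aws s3api head-bucket --bucket amzn-s3-demo-bucket
```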