
Use Amazon SageMaker Training Storage Paths for Training Datasets, Checkpoints, Model Artifacts, and Outputs

This page provides a high-level summary of how the SageMaker training platform manages storage paths for training datasets, model artifacts, checkpoints, and outputs between Amazon cloud storage and training jobs in SageMaker. Throughout this guide, you learn how to identify the default paths set by the SageMaker platform and how to set up data channels with your data sources in Amazon Simple Storage Service (Amazon S3), Amazon FSx for Lustre, and Amazon EFS. For more information about the various data channel input modes and storage options, see Access Training Data.

Overview

The following diagram shows the simplest example of how SageMaker manages input and output paths when you run a training job using the SageMaker Python SDK Estimator class and its fit method. It's based on using file mode as the data access strategy and Amazon S3 as the data source for the training input channels.

This figure shows an overview of how SageMaker pairs storage paths between an Amazon S3 bucket as the data source and the SageMaker training instance, based on how the paths are specified in a SageMaker estimator class. The paths, how training jobs read from and write to them, and their purposes are described in the following section, SageMaker Environment Variables and Default Paths for Training Storage Locations.

The OutputDataConfig parameter of the CreateTrainingJob API determines the S3 location where your outputs are stored. To find the S3 location that contains the model artifacts of a completed job, check the ModelArtifacts field returned by the DescribeTrainingJob API. See the abalone_build_train_deploy notebook for an example of output paths and how they are used in API calls.
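For example, the following is a minimal sketch that looks up both locations for a completed job with the DescribeTrainingJob API through boto3. The job name my-training-job is a placeholder.

import boto3

sm_client = boto3.client("sagemaker")

# Hypothetical training job name; replace it with your own job.
response = sm_client.describe_training_job(TrainingJobName="my-training-job")

# S3 output prefix that was set through OutputDataConfig when the job was created.
print(response["OutputDataConfig"]["S3OutputPath"])

# Full S3 URI of the model artifact that SageMaker uploaded from /opt/ml/model.
print(response["ModelArtifacts"]["S3ModelArtifacts"])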

For more information and examples of how SageMaker manages data source, input modes, and local paths in SageMaker training instances, see Access Training Data.

Uncompressed model output

SageMaker stores your model in /opt/ml/model and your data in /opt/ml/output/data. After the model and data are written to those locations, they're uploaded to your Amazon S3 bucket as compressed files by default.

You can save time on large data file compression by uploading model and data outputs to your S3 bucket as uncompressed files. To do this, create a training job in uncompressed upload mode by using either the Amazon Command Line Interface (Amazon CLI) or the SageMaker Python SDK.

The following code example shows how to create a training job in uncompressed upload mode when using the Amazon CLI. To enable uncompressed upload mode, set the CompressionType field in the OutputDataConfig API to NONE.

{ "TrainingJobName": "uncompressed_model_upload", ... "OutputDataConfig": { "S3OutputPath": "s3://DOC-EXAMPLE-BUCKET/uncompressed_upload/output", "CompressionType": "NONE" }, ... }

The following code example shows you how to create a training job in uncompressed upload mode using the SageMaker Python SDK.

import sagemaker
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri="your-own-image-uri",
    role=sagemaker.get_execution_role(),
    sagemaker_session=sagemaker.Session(),
    instance_count=1,
    instance_type='ml.c4.xlarge',
    disable_output_compression=True
)
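The estimator above only configures the job. As a hypothetical usage example (the channel name training and the S3 URI are placeholders), you then launch training as usual:

# Launch the training job; files written to /opt/ml/model and /opt/ml/output/data
# are uploaded to Amazon S3 without TAR/GZIP compression because
# disable_output_compression=True.
estimator.fit({"training": "s3://DOC-EXAMPLE-BUCKET/training-data"})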

Tips and Considerations for Setting Up Storage Paths

Consider the following items when setting up storage paths for training jobs in SageMaker.

  • If you want to store training artifacts for distributed training in the /opt/ml/output/data directory, you must append subdirectories or use unique file names for the artifacts in your model definition or training script. If the subdirectories and file names are not configured properly, all of the distributed training workers might write their outputs to the same file name in the same output path in Amazon S3. For an example of writing worker-unique file names, see the sketch after this list.

  • If you use a custom training container, make sure you install the SageMaker Training Toolkit, which sets up the environment variables for SageMaker training jobs. Otherwise, you must specify the environment variables explicitly in your Dockerfile. For more information, see Create a container with your own algorithms and models.

  • When using an ML instance with NVMe SSD volumes, SageMaker doesn't provision Amazon EBS gp2 storage. Available storage is fixed to the NVMe-type instance's storage capacity. SageMaker configures the storage paths for training datasets, checkpoints, model artifacts, and outputs to use the entire capacity of the instance storage. For example, ML instance families with NVMe-type instance storage include ml.p4d, ml.g4dn, and ml.g5. When using an ML instance with the EBS-only storage option and without instance storage, you must define the size of the EBS volume through the volume_size parameter in the SageMaker estimator class (or VolumeSizeInGB if you are using the ResourceConfig API); for a brief example, see the second sketch at the end of this section. For example, ML instance families that use EBS volumes include ml.c5 and ml.p2. To look up instance types and their instance storage types and volumes, see Amazon EC2 Instance Types.

  • The default paths for SageMaker training jobs are mounted to Amazon EBS volumes or NVMe SSD volumes of the ML instance. When you adapt your training script to SageMaker, make sure that you use the default paths listed in the following section, SageMaker Environment Variables and Default Paths for Training Storage Locations. We recommend that you use the /tmp directory as scratch space for temporarily storing large objects during training, and that you avoid directories mounted on the small disk space allocated for the system, such as /usr and /home, to prevent out-of-space errors. The sketch after this list shows how a training script can read the default paths from environment variables and use /tmp as scratch space.
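The following is a minimal training-script sketch of these conventions. It assumes an input channel named training and the environment variables set by the SageMaker Training Toolkit; the file names and metric values are placeholders.

import json
import os

# Default storage locations that SageMaker exposes inside the training container.
model_dir = os.environ.get("SM_MODEL_DIR", "/opt/ml/model")
output_data_dir = os.environ.get("SM_OUTPUT_DATA_DIR", "/opt/ml/output/data")
training_dir = os.environ.get("SM_CHANNEL_TRAINING", "/opt/ml/input/data/training")

# Name of the current worker host (for example, "algo-1"); useful for keeping
# per-worker output file names unique in distributed training.
current_host = os.environ.get("SM_CURRENT_HOST", "algo-1")

# Use /tmp as scratch space for large temporary objects.
scratch_path = "/tmp/preprocessed-batch.bin"

# Write training artifacts under a worker-specific file name so that workers
# don't overwrite each other's outputs in the same S3 output path.
os.makedirs(output_data_dir, exist_ok=True)
metrics_path = os.path.join(output_data_dir, f"metrics-{current_host}.json")
with open(metrics_path, "w") as f:
    json.dump({"train_loss": 0.123}, f)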

To learn more, see the Amazon machine learning blog post Choose the best data source for your Amazon SageMaker training job, which further discusses case studies and performance benchmarks of data sources and input modes.
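For the EBS-only case mentioned in the tips above, the following is a brief sketch of setting the volume size with the SageMaker Python SDK; the image URI, instance type, and volume size are placeholder values.

import sagemaker
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri="your-own-image-uri",
    role=sagemaker.get_execution_role(),
    instance_count=1,
    instance_type="ml.c5.xlarge",  # EBS-only instance type without NVMe instance storage
    volume_size=250,               # size of the attached EBS volume in GB
)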

SageMaker Environment Variables and Default Paths for Training Storage Locations

The following entries summarize the input and output paths for training datasets, checkpoints, model artifacts, and outputs that the SageMaker training platform manages. For each local path in the SageMaker training instance, the entries list the corresponding SageMaker environment variable, the purpose of the path, and whether SageMaker reads the path from Amazon S3 during job start and during a Spot restart, and writes the path to Amazon S3 during training and when the job is terminated.

/opt/ml/input/data/channel_name ¹
Environment variable: SM_CHANNEL_CHANNEL_NAME
Purpose: Reading training data from the input channels specified through the SageMaker Python SDK Estimator class or the CreateTrainingJob API operation. For more information about how to specify input channels in your training script using the SageMaker Python SDK, see Prepare a Training script.
Read from S3 during start: Yes. Read from S3 during Spot restart: Yes. Writes to S3 during training: No. Writes to S3 when the job is terminated: No.

/opt/ml/output/data ²
Environment variable: SM_OUTPUT_DATA_DIR
Purpose: Saving outputs such as loss, accuracy, intermediate layers, weights, gradients, bias, and TensorBoard-compatible outputs. You can also save any arbitrary output you’d like using this path. Note that this is a different path from /opt/ml/model/, which stores the final model artifact.
Read from S3 during start: No. Read from S3 during Spot restart: No. Writes to S3 during training: No. Writes to S3 when the job is terminated: Yes.

/opt/ml/model ³
Environment variable: SM_MODEL_DIR
Purpose: Storing the final model artifact. This is also the path from which the model artifact is deployed for Real-time inference in SageMaker Hosting.
Read from S3 during start: No. Read from S3 during Spot restart: No. Writes to S3 during training: No. Writes to S3 when the job is terminated: Yes.

/opt/ml/checkpoints ⁴
Environment variable: None
Purpose: Saving model checkpoints (the state of the model) to resume training from a certain point and to recover from unexpected or Managed Spot Training interruptions.
Read from S3 during start: Yes. Read from S3 during Spot restart: Yes. Writes to S3 during training: Yes. Writes to S3 when the job is terminated: No.

/opt/ml/code
Environment variable: SAGEMAKER_SUBMIT_DIRECTORY
Purpose: Copying training scripts, additional libraries, and dependencies.
Read from S3 during start: Yes. Read from S3 during Spot restart: Yes. Writes to S3 during training: No. Writes to S3 when the job is terminated: No.

/tmp
Environment variable: None
Purpose: Reading or writing to /tmp as scratch space.
Read from S3 during start: No. Read from S3 during Spot restart: No. Writes to S3 during training: No. Writes to S3 when the job is terminated: No.

¹ channel_name is where you specify user-defined channel names for training data inputs. Each training job can contain several data input channels, up to 20 per training job. Note that the time spent downloading data from the data channels counts toward billable time. For more information about data input paths, see How Amazon SageMaker Provides Training Information. SageMaker supports three data input modes: file, FastFile, and pipe mode. To learn more about the data input modes for training in SageMaker, see Access Training Data.

² SageMaker compresses training artifacts into TAR files (tar.gz) and writes them to Amazon S3. Compression and upload time counts toward billable time. For more information, see How Amazon SageMaker Processes Training Output.

³ SageMaker compresses the final model artifact into a TAR file (tar.gz) and writes it to Amazon S3. Compression and upload time counts toward billable time. For more information, see How Amazon SageMaker Processes Training Output.

⁴ Checkpoints are synced with Amazon S3 during training and are written as-is, without being compressed into TAR files. For more information, see Use Checkpoints in Amazon SageMaker.
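As footnote 4 notes, files that the training script writes under /opt/ml/checkpoints are synced to Amazon S3 during training. The following is a minimal sketch of enabling this sync with the SageMaker Python SDK; the image URI, bucket, and instance type are placeholders.

import sagemaker
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri="your-own-image-uri",
    role=sagemaker.get_execution_role(),
    instance_count=1,
    instance_type="ml.c5.xlarge",
    # S3 prefix that SageMaker keeps in sync with the local checkpoint path.
    checkpoint_s3_uri="s3://DOC-EXAMPLE-BUCKET/checkpoints",
    # Local directory that the training script writes checkpoints to
    # (/opt/ml/checkpoints is the default).
    checkpoint_local_path="/opt/ml/checkpoints",
)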