
AWS::SageMaker::Model ContainerDefinition

Describes the container, as part of the model definition.

Syntax

To declare this entity in your Amazon CloudFormation template, use the following syntax:

JSON

{ "ContainerHostname" : String, "Environment" : Json, "Image" : String, "ImageConfig" : ImageConfig, "InferenceSpecificationName" : String, "Mode" : String, "ModelDataSource" : ModelDataSource, "ModelDataUrl" : String, "ModelPackageName" : String, "MultiModelConfig" : MultiModelConfig }

Properties

ContainerHostname

This parameter is ignored for models that contain only a PrimaryContainer.

When a ContainerDefinition is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For more information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don't specify a value for this parameter for a ContainerDefinition that is part of an inference pipeline, a unique name is automatically assigned based on the position of the ContainerDefinition in the pipeline. If you specify a value for ContainerHostname for any ContainerDefinition that is part of an inference pipeline, you must specify a value for the ContainerHostname parameter of every ContainerDefinition in that pipeline.

Required: No

Type: String

Pattern: ^[a-zA-Z0-9](-*[a-zA-Z0-9]){0,62}

Maximum: 63

Update requires: Replacement
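For example, when the Containers list of a model forms an inference pipeline, you might name every container explicitly, as in the following sketch. The image URIs and bucket name are hypothetical placeholders.

JSON

"Containers" : [
  {
    "ContainerHostname" : "preprocessor",
    "Image" : "111122223333.dkr.ecr.us-west-2.amazonaws.com/example-preprocess:latest"
  },
  {
    "ContainerHostname" : "predictor",
    "Image" : "111122223333.dkr.ecr.us-west-2.amazonaws.com/example-inference:latest",
    "ModelDataUrl" : "s3://example-bucket/model/model.tar.gz"
  }
]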

Environment

The environment variables to set in the Docker container.

The maximum length of each key and value in the Environment map is 1024 bytes. The maximum length of all keys and values in the map, combined, is 32 KB. If you pass multiple containers to a CreateModel request, then the maximum length of all of their maps, combined, is also 32 KB.

Required: No

Type: Json

Update requires: Replacement
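As an illustration, the Environment map is a flat set of string keys and string values; the variable names and values below are hypothetical placeholders.

JSON

"Environment" : {
  "LOG_LEVEL" : "INFO",
  "MODEL_SERVER_TIMEOUT" : "60"
}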

Image

The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.

Note

The model artifacts in an Amazon S3 bucket and the Docker image for the inference container in Amazon EC2 Container Registry must be in the same Region as the model or endpoint you are creating.

Required: No

Type: String

Pattern: [\S]+

Maximum: 255

Update requires: Replacement
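For reference, the two supported image path formats look like the following lines, the first using a tag and the second using an image digest. The account ID, Region, repository name, tag, and digest are placeholders; <digest> stands for the full image digest value.

"Image" : "111122223333.dkr.ecr.us-west-2.amazonaws.com/example-inference:1.0"

"Image" : "111122223333.dkr.ecr.us-west-2.amazonaws.com/example-inference@sha256:<digest>"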

ImageConfig

Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers.

Note

The model artifacts in an Amazon S3 bucket and the Docker image for the inference container in Amazon EC2 Container Registry must be in the same Region as the model or endpoint you are creating.

Required: No

Type: ImageConfig

Update requires: Replacement
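As a sketch, a container pulled from a private Docker registry reachable from your VPC might pair Image with an ImageConfig along the following lines. The registry host name and the credentials-provider Lambda ARN are hypothetical placeholders.

JSON

{
  "Image" : "myregistry.example.com/example-inference:latest",
  "ImageConfig" : {
    "RepositoryAccessMode" : "Vpc",
    "RepositoryAuthConfig" : {
      "RepositoryCredentialsProviderArn" : "arn:aws:lambda:us-west-2:111122223333:function:ExampleRegistryCredentials"
    }
  }
}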

InferenceSpecificationName

The inference specification name in the model package version.

Required: No

Type: String

Update requires: Replacement

Mode

Whether the container hosts a single model or multiple models.

Required: No

Type: String

Allowed values: SingleModel | MultiModel

Update requires: Replacement
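For example, a container that hosts multiple models typically sets Mode to MultiModel and points ModelDataUrl at an S3 prefix that holds the model artifacts. The image URI, bucket, and prefix below are hypothetical placeholders.

JSON

{
  "Image" : "111122223333.dkr.ecr.us-west-2.amazonaws.com/example-inference:latest",
  "Mode" : "MultiModel",
  "ModelDataUrl" : "s3://example-bucket/models/"
}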

ModelDataSource

Specifies the location of ML model data to deploy.

Note

Currently you cannot use ModelDataSource in conjunction with SageMaker batch transform, SageMaker serverless endpoints, SageMaker multi-model endpoints, or SageMaker Marketplace.

Required: No

Type: ModelDataSource

Update requires: Replacement
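A minimal sketch of ModelDataSource, assuming uncompressed model artifacts stored under an S3 prefix; the bucket and prefix are hypothetical placeholders.

JSON

"ModelDataSource" : {
  "S3DataSource" : {
    "S3Uri" : "s3://example-bucket/model-artifacts/",
    "S3DataType" : "S3Prefix",
    "CompressionType" : "None"
  }
}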

ModelDataUrl

The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.

Note

The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are creating.

If you provide a value for this parameter, SageMaker uses Amazon Security Token Service to download model artifacts from the S3 path you provide. Amazon STS is activated in your Amazon account by default. If you previously deactivated Amazon STS for a region, you need to reactivate Amazon STS for that region. For more information, see Activating and Deactivating Amazon STS in an Amazon Region in the Amazon Identity and Access Management User Guide.

Important

If you use a built-in algorithm to create a model, SageMaker requires that you provide an S3 path to the model artifacts in ModelDataUrl.

Required: No

Type: String

Pattern: ^(https|s3)://([^/]+)/?(.*)$

Maximum: 1024

Update requires: Replacement

ModelPackageName

The name or Amazon Resource Name (ARN) of the model package to use to create the model.

Required: No

Type: String

Pattern: (arn:aws[a-z\-]*:sagemaker:[a-z0-9\-]*:[0-9]{12}:[a-z\-]*\/)?([a-zA-Z0-9]([a-zA-Z0-9-]){0,62})(?<!-)(\/[0-9]{1,5})?$

Minimum: 1

Maximum: 176

Update requires: Replacement
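For instance, a container created from a versioned model package might reference the package by ARN, optionally together with InferenceSpecificationName. The account ID, model package group, version number, and specification name below are hypothetical placeholders.

JSON

{
  "ModelPackageName" : "arn:aws:sagemaker:us-west-2:111122223333:model-package/example-package-group/1",
  "InferenceSpecificationName" : "example-inference-spec"
}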

MultiModelConfig

Specifies additional configuration for multi-model endpoints.

Required: No

Type: MultiModelConfig

Update requires: Replacement
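As a final sketch, assuming the ModelCacheSetting property of MultiModelConfig, the following disables model caching for a container that runs in MultiModel mode.

JSON

"MultiModelConfig" : {
  "ModelCacheSetting" : "Disabled"
}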