AWS::SageMaker::ModelPackage InferenceSpecification

Defines how to perform inference generation after a training job is run.

Syntax

To declare this entity in your Amazon CloudFormation template, use the following syntax:

JSON

{ "Containers" : [ ModelPackageContainerDefinition, ... ], "SupportedContentTypes" : [ String, ... ], "SupportedRealtimeInferenceInstanceTypes" : [ String, ... ], "SupportedResponseMIMETypes" : [ String, ... ], "SupportedTransformInstanceTypes" : [ String, ... ] }

Properties

Containers

A list of container definitions, each specifying the Amazon ECR registry path of a Docker image that contains the inference code.

Required: Yes

Type: Array of ModelPackageContainerDefinition

Minimum: 1

Maximum: 15

Update requires: Replacement
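
As a sketch, a single entry in the Containers array might look like the following. The account ID, Region, repository, tag, and model artifact location are placeholders, and ModelDataUrl is shown only to illustrate a field commonly paired with Image in ModelPackageContainerDefinition:

{
  "Image" : "111122223333.dkr.ecr.us-west-2.amazonaws.com/my-inference-image:latest",
  "ModelDataUrl" : "s3://amzn-s3-demo-bucket/model.tar.gz"
}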

SupportedContentTypes

The supported MIME types for the input data.

Required: Yes

Type: Array of String

Update requires: Replacement

SupportedRealtimeInferenceInstanceTypes

A list of the instance types that are used to generate inferences in real time.

This parameter is required for unversioned models, and optional for versioned models.

Required: No

Type: Array of String

Update requires: Replacement

SupportedResponseMIMETypes

The supported MIME types for the output data.

Required: Yes

Type: Array of String

Update requires: Replacement

SupportedTransformInstanceTypes

A list of the instance types on which a transformation job can be run or on which an endpoint can be deployed.

This parameter is required for unversioned models, and optional for versioned models.

Required: No

Type: Array of String

Minimum: 1

Update requires: Replacement
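
To show the property in context, the following sketch embeds an InferenceSpecification in an AWS::SageMaker::ModelPackage resource. The logical ID, model package group name, image URI, and instance types are assumptions made for illustration, and additional properties may be needed depending on your use case:

{
  "Resources" : {
    "MyModelPackage" : {
      "Type" : "AWS::SageMaker::ModelPackage",
      "Properties" : {
        "ModelPackageGroupName" : "my-model-package-group",
        "InferenceSpecification" : {
          "Containers" : [
            {
              "Image" : "111122223333.dkr.ecr.us-west-2.amazonaws.com/my-inference-image:latest"
            }
          ],
          "SupportedContentTypes" : [ "text/csv" ],
          "SupportedResponseMIMETypes" : [ "application/json" ],
          "SupportedRealtimeInferenceInstanceTypes" : [ "ml.m5.large" ],
          "SupportedTransformInstanceTypes" : [ "ml.m5.large" ]
        }
      }
    }
  }
}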