Containers with custom inference code
You can use Amazon SageMaker AI to run Docker containers with your own inference code in one of two ways:
- To use your own inference code with a persistent endpoint to get one prediction at a time, use SageMaker AI hosting services.
- To use your own inference code to get predictions for an entire dataset, use SageMaker AI batch transform. Both approaches are sketched below.
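
Both modes start from the same container image and model artifacts; one common way to exercise them is the SageMaker Python SDK. The following is a minimal sketch under that assumption: the container image URI, model artifact location, IAM role, S3 buckets, and endpoint name are hypothetical placeholders, and the request payload format depends on what your container's inference handler accepts.

```python
# Minimal sketch using the SageMaker Python SDK (an assumption; this page does not
# mandate a specific SDK). Image URI, model artifact path, role ARN, bucket paths,
# and the endpoint name below are hypothetical placeholders.
import sagemaker
from sagemaker.model import Model
from sagemaker.predictor import Predictor
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # hypothetical role ARN

# A Model object ties your custom inference image to optional model artifacts in S3.
model = Model(
    image_uri="111122223333.dkr.ecr.us-west-2.amazonaws.com/my-inference:latest",  # hypothetical
    model_data="s3://amzn-s3-demo-bucket/model/model.tar.gz",                      # hypothetical
    role=role,
    predictor_cls=Predictor,  # so deploy() returns a Predictor for invoking the endpoint
    sagemaker_session=session,
)

# Option 1: hosting services -- a persistent endpoint that returns one prediction per request.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
    endpoint_name="my-custom-endpoint",  # hypothetical
    serializer=JSONSerializer(),         # payload format is whatever your container expects
    deserializer=JSONDeserializer(),
)
response = predictor.predict({"features": [1.0, 2.0, 3.0]})

# Option 2: batch transform -- predictions for an entire dataset stored in S3.
transformer = model.transformer(
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://amzn-s3-demo-bucket/batch-output/",  # hypothetical
)
transformer.transform(
    data="s3://amzn-s3-demo-bucket/batch-input/",  # hypothetical
    content_type="application/json",
    split_type="Line",
)
transformer.wait()
```

The difference between the two calls is how requests reach the container: hosting services keeps the endpoint running and serves one request at a time, while batch transform provisions instances only for the duration of the job and writes its results to the S3 output path.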
Custom Inference Code with Hosting Services