
Amazon SageMaker Autopilot model deployment and prediction

This Amazon SageMaker Autopilot guide covers model deployment, setting up real-time inference, and running batch inference jobs.

After you train your Autopilot models, you can deploy them to get predictions in one of two ways:

  1. Use real-time inference to set up a persistent endpoint and obtain predictions interactively, one request at a time.

  2. Use batch inference to obtain predictions in parallel on batches of observations across an entire dataset, without a persistent endpoint.
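As a rough sketch of how the two options map onto the SageMaker API, assuming an Autopilot-trained model already registered under the hypothetical name `autopilot-best-candidate` (the model name, S3 URIs, and instance types below are placeholders, not values from this guide), the request payloads look roughly like this:

```python
# Sketch of the two prediction options for an Autopilot-trained model.
# All names (model, bucket, instance type) are hypothetical placeholders;
# the boto3 calls are shown in comments because they require AWS credentials.

MODEL_NAME = "autopilot-best-candidate"  # hypothetical registered model

# Option 1: real-time inference -- create an endpoint config and an
# endpoint, then invoke the endpoint interactively per request.
endpoint_config = {
    "EndpointConfigName": f"{MODEL_NAME}-config",
    "ProductionVariants": [{
        "VariantName": "AllTraffic",
        "ModelName": MODEL_NAME,
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
    }],
}
endpoint = {
    "EndpointName": f"{MODEL_NAME}-endpoint",
    "EndpointConfigName": endpoint_config["EndpointConfigName"],
}
# import boto3
# sm = boto3.client("sagemaker")
# sm.create_endpoint_config(**endpoint_config)
# sm.create_endpoint(**endpoint)
# runtime = boto3.client("sagemaker-runtime")
# runtime.invoke_endpoint(EndpointName=endpoint["EndpointName"],
#                         ContentType="text/csv",
#                         Body="5.1,3.5,1.4,0.2")

# Option 2: batch inference -- a transform job scores an entire dataset
# stored in S3 in parallel, then writes results back to S3.
transform_job = {
    "TransformJobName": f"{MODEL_NAME}-batch",
    "ModelName": MODEL_NAME,
    "TransformInput": {
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-bucket/input/",  # hypothetical bucket
        }},
        "ContentType": "text/csv",
        "SplitType": "Line",  # one observation per line
    },
    "TransformOutput": {"S3OutputPath": "s3://my-bucket/output/"},
    "TransformResources": {"InstanceType": "ml.m5.large",
                           "InstanceCount": 1},
}
# sm.create_transform_job(**transform_job)
```

The batch option spins up transient instances only for the duration of the transform job, which is why it suits one-off scoring of a whole dataset, while the endpoint in option 1 stays running (and billing) until you delete it.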

Note

To avoid incurring unnecessary charges, delete the endpoints and resources created during model deployment once you no longer need them. For information about instance pricing by Region, see Amazon SageMaker Pricing.
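A minimal cleanup sketch, assuming the hypothetical endpoint and endpoint-config names used at deployment time (substitute your own resource names):

```python
# Sketch of tearing down real-time inference resources to stop charges.
# Names are hypothetical; actual boto3 calls are commented out because
# they require AWS credentials. Delete the endpoint before its config.
ENDPOINT_NAME = "autopilot-best-candidate-endpoint"  # hypothetical
CONFIG_NAME = "autopilot-best-candidate-config"      # hypothetical

cleanup_calls = [
    ("delete_endpoint", {"EndpointName": ENDPOINT_NAME}),
    ("delete_endpoint_config", {"EndpointConfigName": CONFIG_NAME}),
]
# import boto3
# sm = boto3.client("sagemaker")
# for method, kwargs in cleanup_calls:
#     getattr(sm, method)(**kwargs)
```

Batch transform jobs need no such cleanup: their instances are released automatically when the job finishes.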