
Deploy a Model in Amazon SageMaker

After you train your machine learning model, you can deploy it with Amazon SageMaker to get predictions. Amazon SageMaker supports several ways to deploy a model, depending on your use case.
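Deploying a model for real-time inference typically involves three SageMaker API calls: create a model, create an endpoint configuration, and create an endpoint. The sketch below builds the request payloads for those calls; it is a minimal illustration, not part of this guide, and the model name, container image URI, S3 path, role ARN, and instance type are all placeholder assumptions.

```python
# Hypothetical sketch: request payloads for deploying a trained model as a
# real-time SageMaker endpoint via boto3. All names/URIs/ARNs are placeholders.

def build_deployment_requests(
    model_name: str,
    image_uri: str,
    model_data_url: str,
    role_arn: str,
    instance_type: str = "ml.m5.xlarge",
):
    """Return payloads for the three boto3 SageMaker calls that create a
    real-time endpoint: create_model, create_endpoint_config, create_endpoint."""
    create_model = {
        "ModelName": model_name,
        "PrimaryContainer": {
            "Image": image_uri,              # inference container image in ECR
            "ModelDataUrl": model_data_url,  # S3 URI of the model.tar.gz artifact
        },
        "ExecutionRoleArn": role_arn,
    }
    create_endpoint_config = {
        "EndpointConfigName": f"{model_name}-config",
        "ProductionVariants": [{
            "VariantName": "AllTraffic",
            "ModelName": model_name,
            "InstanceType": instance_type,
            "InitialInstanceCount": 1,
        }],
    }
    create_endpoint = {
        "EndpointName": f"{model_name}-endpoint",
        "EndpointConfigName": f"{model_name}-config",
    }
    return create_model, create_endpoint_config, create_endpoint


# With a boto3 client these would be submitted in order, e.g.:
#   sm = boto3.client("sagemaker")
#   sm.create_model(**reqs[0]); sm.create_endpoint_config(**reqs[1]); sm.create_endpoint(**reqs[2])
reqs = build_deployment_requests(
    "my-model",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest",
    "s3://my-bucket/model.tar.gz",
    "arn:aws:iam::123456789012:role/SageMakerRole",
)
```

The endpoint is ready to serve predictions once its status reaches `InService`; requests are then sent with the SageMaker Runtime `invoke_endpoint` call.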

SageMaker also provides features to manage resources and optimize inference performance when deploying machine learning models:

  • To optimize, secure, monitor, and maintain machine learning models on fleets of edge devices, such as smart cameras, robots, personal computers, and mobile devices, see Deploy models at the edge with SageMaker Edge Manager.

  • To optimize Gluon, Keras, MXNet, PyTorch, TensorFlow, TensorFlow-Lite, and ONNX models for inference on Android, Linux, and Windows machines based on processors from Ambarella, ARM, Intel, Nvidia, NXP, Qualcomm, Texas Instruments, and Xilinx, see Optimize model performance using Neo.
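Neo compilation is started with the SageMaker `create_compilation_job` API, which takes the framework, the input tensor shape, and a target device. The sketch below builds such a request payload; the job name, role ARN, S3 URIs, input shape, and target device are placeholder assumptions for illustration.

```python
# Hypothetical sketch: a create_compilation_job payload for SageMaker Neo,
# compiling a TensorFlow model for an ARM-based Linux target (Raspberry Pi 3).
# All names, ARNs, and S3 URIs are placeholders.

def build_neo_request(job_name: str, role_arn: str,
                      model_s3_uri: str, output_s3_uri: str) -> dict:
    """Return a boto3 create_compilation_job request payload."""
    return {
        "CompilationJobName": job_name,
        "RoleArn": role_arn,
        "InputConfig": {
            "S3Uri": model_s3_uri,  # S3 URI of the trained model artifact
            # Input tensor name and shape depend on your model.
            "DataInputConfig": '{"input": [1, 224, 224, 3]}',
            "Framework": "TENSORFLOW",
        },
        "OutputConfig": {
            "S3OutputLocation": output_s3_uri,  # where Neo writes the compiled model
            "TargetDevice": "rasp3b",           # one of Neo's supported targets
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 900},
    }


# The payload would be submitted as boto3.client("sagemaker").create_compilation_job(**req)
req = build_neo_request(
    "demo-compilation-job",
    "arn:aws:iam::123456789012:role/NeoRole",
    "s3://my-bucket/model.tar.gz",
    "s3://my-bucket/compiled/",
)
```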

For more information about all deployment options, see Deploy models for inference.