Amazon Bedrock inference - Amazon SageMaker AI
Services or capabilities described in Amazon Web Services documentation might vary by Region. To see the differences applicable to the China Regions, see Getting Started with Amazon Web Services in China (PDF).

Amazon Bedrock inference

In this topic, you learn how to deploy a trained Amazon Nova model to Amazon Bedrock for inference, making the model available for production use. The deployment process typically involves these steps: use the Amazon Bedrock APIs to create a custom model, point it to the model artifacts in the service-managed Amazon S3 bucket, wait for the model to become ACTIVE, and configure provisioned throughput. The output of the process is a deployed model endpoint that you can integrate into your applications.
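The steps above can be sketched with the AWS SDK for Python (boto3). This is a minimal, illustrative sketch, not the exact procedure from the detailed guide: the model name, S3 URI, IAM role ARN, and polling interval are placeholders, and the status comparison is done case-insensitively in case the returned status casing differs.

```python
import time


def deploy_nova_model(bedrock, model_name, s3_uri, role_arn,
                      poll_seconds=30):
    """Create a Bedrock custom model from S3 artifacts, wait for it to
    become ACTIVE, then configure provisioned throughput.

    `bedrock` is a Bedrock control-plane client (e.g. from
    boto3.client("bedrock")); all other arguments are illustrative.
    """
    # Step 1: create the custom model, pointing at the artifacts in the
    # service-managed S3 bucket.
    resp = bedrock.create_custom_model(
        modelName=model_name,
        roleArn=role_arn,
        modelSourceConfig={"s3DataSource": {"s3Uri": s3_uri}},
    )
    model_arn = resp["modelArn"]

    # Step 2: poll until the model is ready for inference.
    while True:
        status = bedrock.get_custom_model(
            modelIdentifier=model_arn)["modelStatus"]
        if status.upper() == "ACTIVE":
            break
        if status.upper() == "FAILED":
            raise RuntimeError(f"Custom model creation failed: {model_arn}")
        time.sleep(poll_seconds)

    # Step 3: configure provisioned throughput so the model has
    # dedicated capacity for production traffic.
    pt = bedrock.create_provisioned_model_throughput(
        provisionedModelName=f"{model_name}-pt",
        modelId=model_arn,
        modelUnits=1,
    )
    return pt["provisionedModelArn"]
```

Applications then invoke the returned provisioned model ARN through the Bedrock runtime APIs.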

For a detailed explanation, see Import a SageMaker AI-trained Amazon Nova model.