AWS SDK Version 3 for .NET
API Reference


Creates an inference component, which is a SageMaker hosting object that you can use to deploy a model to an endpoint. In the inference component settings, you specify the model, the endpoint, and how the model utilizes the resources that the endpoint hosts. You can optimize resource utilization by tailoring how the required CPU cores, accelerators, and memory are allocated. You can deploy multiple inference components to an endpoint, where each inference component contains one model and the resource utilization needs for that individual model. After you deploy an inference component, you can directly invoke the associated model when you use the InvokeEndpoint API action.
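The following sketch shows how a request might be assembled and passed to this operation. The endpoint, variant, and model names are placeholders, and the nested types (InferenceComponentSpecification, InferenceComponentComputeResourceRequirements, InferenceComponentRuntimeConfig) and their properties are assumptions based on the request model; verify them against the CreateInferenceComponentRequest documentation.

C#
using Amazon.SageMaker;
using Amazon.SageMaker.Model;

// Minimal sketch: create an inference component that places one copy of an
// existing model onto an existing endpoint, reserving CPU cores and memory.
// All resource names below are placeholders.
var client = new AmazonSageMakerClient();

var request = new CreateInferenceComponentRequest
{
    InferenceComponentName = "my-inference-component",
    EndpointName = "my-endpoint",          // endpoint that will host the model
    VariantName = "AllTraffic",            // production variant on that endpoint
    Specification = new InferenceComponentSpecification
    {
        ModelName = "my-model",            // existing SageMaker model
        ComputeResourceRequirements = new InferenceComponentComputeResourceRequirements
        {
            NumberOfCpuCoresRequired = 2,
            MinMemoryRequiredInMb = 4096
        }
    },
    RuntimeConfig = new InferenceComponentRuntimeConfig
    {
        CopyCount = 1                      // number of copies of the model to host
    }
};

CreateInferenceComponentResponse response = client.CreateInferenceComponent(request);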

Note:

For .NET Core, this operation is only available in asynchronous form. Please refer to CreateInferenceComponentAsync.
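For example, with a configured AmazonSageMakerClient named client and a request built as in the sketch above, the asynchronous call might look like this (a sketch; the optional cancellation token is omitted):

C#
// Asynchronous form for .NET Core (sketch; request built as shown above).
CreateInferenceComponentResponse response =
    await client.CreateInferenceComponentAsync(request);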

Namespace: Amazon.SageMaker
Assembly: AWSSDK.SageMaker.dll
Version: 3.x.y.z

Syntax

C#
public virtual CreateInferenceComponentResponse CreateInferenceComponent(
         CreateInferenceComponentRequest request
)

Parameters

request
Type: Amazon.SageMaker.Model.CreateInferenceComponentRequest

Container for the necessary parameters to execute the CreateInferenceComponent service method.

Return Value

Type: Amazon.SageMaker.Model.CreateInferenceComponentResponse

The response from the CreateInferenceComponent service method, as returned by SageMaker.

Exceptions

Exception: ResourceLimitExceededException
Condition: You have exceeded a SageMaker resource limit. For example, you might have too many training jobs created.

Version Information

.NET Framework:
Supported in: 4.5, 4.0, 3.5

See Also