
/AWS1/CL_LOV=>STARTMODEL()

About StartModel

Starts running a version of an Amazon Lookout for Vision model. Starting a model takes a while to complete. To check the current state of the model, use DescribeModel.

A model is ready to use when its status is HOSTED.

Once the model is running, you can detect anomalies in new images by calling DetectAnomalies.

You are charged for the amount of time that the model is running. To stop a running model, call StopModel.

This operation requires permissions to perform the lookoutvision:StartModel operation.
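The typical hosting lifecycle is: start the model, poll DescribeModel until the status is HOSTED, call DetectAnomalies on new images, and call StopModel when you are finished so that you stop incurring charges. The ABAP sketch below illustrates that flow. The session and client factory helpers, the DescribeModel and StopModel parameter names, and the response getter names are assumptions based on common SDK conventions, not taken from this page; only the STARTMODEL signature documented below is confirmed here.

" Create a service client (helper class names assumed).
DATA(go_session) = /aws1/cl_rt_session_aws=>create( 'demo-profile' ).
DATA(go_lov)     = /aws1/cl_lov_factory=>create( go_session ).

" Start hosting version 1 of the model with one inference unit.
go_lov->startmodel(
  iv_projectname       = 'my-lfv-project'
  iv_modelversion      = '1'
  iv_mininferenceunits = 1 ).

" Poll DescribeModel until the model reports HOSTED (getter names assumed).
DATA lv_status TYPE string.
WHILE lv_status <> 'HOSTED'.
  WAIT UP TO 30 SECONDS.
  lv_status = go_lov->describemodel(
    iv_projectname  = 'my-lfv-project'
    iv_modelversion = '1'
  )->get_modeldescription( )->get_status( ).
ENDWHILE.

" ... call DetectAnomalies on new images while the model is HOSTED ...

" Stop the model when you are done to stop the charges.
go_lov->stopmodel(
  iv_projectname  = 'my-lfv-project'
  iv_modelversion = '1' ).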

Method Signature

IMPORTING

Required arguments:

IV_PROJECTNAME TYPE /AWS1/LOVPROJECTNAME

The name of the project that contains the model that you want to start.

IV_MODELVERSION TYPE /AWS1/LOVMODELVERSION

The version of the model that you want to start.

IV_MININFERENCEUNITS TYPE /AWS1/LOVINFERENCEUNITS

The minimum number of inference units to use. A single inference unit represents 1 hour of processing. Use a higher number to increase the transactions per second (TPS) throughput of your model. You are charged for the number of inference units that you use.

Optional arguments:

IV_CLIENTTOKEN TYPE /AWS1/LOVCLIENTTOKEN

ClientToken is an idempotency token that ensures a call to StartModel completes only once. You choose the value to pass. For example, an issue might prevent you from getting a response from StartModel. In this case, you can safely retry your call to StartModel by using the same ClientToken parameter value.

If you don't supply a value for ClientToken, the AWS SDK you are using inserts a value for you. This prevents retries after a network error from making multiple start requests. You'll need to provide your own value for other use cases.

An error occurs if the other input parameters are not the same as in the first request. Using a different value for ClientToken is considered a new call to StartModel. An idempotency token is active for 8 hours. The sketch after the method signature shows a retry that reuses the same token.

IV_MAXINFERENCEUNITS TYPE /AWS1/LOVINFERENCEUNITS

The maximum number of inference units to use for auto-scaling the model. If you don't specify a value, Amazon Lookout for Vision doesn't auto-scale the model.

RETURNING

OO_OUTPUT TYPE REF TO /AWS1/CL_LOVSTARTMODELRESPONSE
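
The sketch below exercises the full signature documented above, including an idempotency token so the request can be retried safely after a network error. GO_LOV is a Lookout for Vision client created as in the earlier sketch; the project name, model version, and token value are placeholders of your choosing.

" Reuse the same token value if you have to retry this call.
DATA(lv_token) = 'my-start-model-token-001'.

DATA(lo_output) = go_lov->startmodel(
  iv_projectname       = 'my-lfv-project'
  iv_modelversion      = '1'
  iv_mininferenceunits = 1          " baseline capacity; you are billed per inference unit
  iv_maxinferenceunits = 3          " optional ceiling for auto-scaling
  iv_clienttoken       = lv_token ).

" LO_OUTPUT is a /AWS1/CL_LOVSTARTMODELRESPONSE. If a network error prevented a
" response, calling STARTMODEL again with the same LV_TOKEN within 8 hours is
" treated as the same request rather than a second start.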