
Neptune ML model training API

Model training actions: StartMLModelTrainingJob, ListMLModelTrainingJobs, GetMLModelTrainingJob, CancelMLModelTrainingJob

Model training structures: CustomModelTrainingParameters

StartMLModelTrainingJob (action)

        The Amazon CLI name for this API is: start-ml-model-training-job.

Creates a new Neptune ML model training job. See Model training using the modeltraining command.

When invoking this operation in a Neptune cluster that has IAM authentication enabled, the IAM user or role making the request must have a policy attached that allows the neptune-db:StartMLModelTrainingJob IAM action in that cluster.

Request

  • baseProcessingInstanceType  (in the CLI: --base-processing-instance-type) –  a String, of type: string (a UTF-8 encoded string).

    The type of ML instance used in preparing and managing training of ML models. This is a CPU instance chosen based on memory requirements for processing the training data and model.

  • customModelTrainingParameters  (in the CLI: --custom-model-training-parameters) –  A CustomModelTrainingParameters object.

    The configuration for custom model training. This is a JSON object.

  • dataProcessingJobId  (in the CLI: --data-processing-job-id) –  Required: a String, of type: string (a UTF-8 encoded string).

    The job ID of the completed data-processing job that has created the data that the training will work with.

  • enableManagedSpotTraining  (in the CLI: --enable-managed-spot-training) –  a Boolean, of type: boolean (a Boolean (true or false) value).

    Optimizes the cost of training machine-learning models by using Amazon Elastic Compute Cloud spot instances. The default is False.

  • id  (in the CLI: --id) –  a String, of type: string (a UTF-8 encoded string).

    A unique identifier for the new job. The default is an autogenerated UUID.

  • maxHPONumberOfTrainingJobs  (in the CLI: --max-hpo-number-of-training-jobs) –  an Integer, of type: integer (a signed 32-bit integer).

    Maximum total number of training jobs to start for the hyperparameter tuning job. The default is 2. Neptune ML automatically tunes the hyperparameters of the machine learning model. To obtain a model that performs well, use at least 10 jobs (in other words, set maxHPONumberOfTrainingJobs to 10). In general, the more tuning runs, the better the results.

  • maxHPOParallelTrainingJobs  (in the CLI: --max-hpo-parallel-training-jobs) –  an Integer, of type: integer (a signed 32-bit integer).

    Maximum number of parallel training jobs to start for the hyperparameter tuning job. The default is 2. The number of parallel jobs you can run is limited by the available resources on your training instance.

  • neptuneIamRoleArn  (in the CLI: --neptune-iam-role-arn) –  a String, of type: string (a UTF-8 encoded string).

    The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.

  • previousModelTrainingJobId  (in the CLI: --previous-model-training-job-id) –  a String, of type: string (a UTF-8 encoded string).

    The job ID of a completed model-training job that you want to update incrementally based on updated data.

  • s3OutputEncryptionKMSKey  (in the CLI: --s-3-output-encryption-kms-key) –  a String, of type: string (a UTF-8 encoded string).

    The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt the output of the processing job. The default is none.

  • sagemakerIamRoleArn  (in the CLI: --sagemaker-iam-role-arn) –  a String, of type: string (a UTF-8 encoded string).

    The ARN of an IAM role for SageMaker execution. This must be listed in your DB cluster parameter group or an error will occur.

  • securityGroupIds  (in the CLI: --security-group-ids) –  a String, of type: string (a UTF-8 encoded string).

    The VPC security group IDs. The default is None.

  • subnets  (in the CLI: --subnets) –  a String, of type: string (a UTF-8 encoded string).

    The IDs of the subnets in the Neptune VPC. The default is None.

  • trainingInstanceType  (in the CLI: --training-instance-type) –  a String, of type: string (a UTF-8 encoded string).

    The type of ML instance used for model training. All Neptune ML models support CPU, GPU, and multi-GPU training. The default is ml.p3.2xlarge. Choosing the right instance type for training depends on the task type, graph size, and your budget.

  • trainingInstanceVolumeSizeInGB  (in the CLI: --training-instance-volume-size-in-gb) –  an Integer, of type: integer (a signed 32-bit integer).

    The disk volume size of the training instance. Both input data and the output model are stored on disk, so the volume size must be large enough to hold both data sets. The default is 0. If not specified or 0, Neptune ML selects a disk volume size based on the recommendation generated in the data processing step.

  • trainingTimeOutInSeconds  (in the CLI: --training-time-out-in-seconds) –  an Integer, of type: integer (a signed 32-bit integer).

    Timeout in seconds for the training job. The default is 86,400 (1 day).

  • trainModelS3Location  (in the CLI: --train-model-s3-location) –  Required: a String, of type: string (a UTF-8 encoded string).

    The location in Amazon S3 where the model artifacts are to be stored.

  • volumeEncryptionKMSKey  (in the CLI: --volume-encryption-kms-key) –  a String, of type: string (a UTF-8 encoded string).

    The Amazon Key Management Service (KMS) key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instances that run the training job. The default is None.
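The parameters above map directly onto the JSON payload of the modeltraining command. The following is a minimal sketch, assuming a POST to the cluster's /ml/modeltraining endpoint using the Python requests library; the endpoint URL, job IDs, Amazon S3 locations, and role ARN are placeholders, and a cluster with IAM authentication enabled additionally requires the request to be signed with Signature Version 4 (for example, via awscurl or botocore).

    import requests  # third-party HTTP client; any HTTP library will do

    # Placeholder values -- substitute your own cluster endpoint, job IDs,
    # Amazon S3 locations, and IAM role ARN.
    NEPTUNE_ML_ENDPOINT = "https://your-neptune-endpoint:8182/ml/modeltraining"

    payload = {
        # Required parameters
        "dataProcessingJobId": "dp-job-1234",
        "trainModelS3Location": "s3://your-bucket/neptune-ml/model-artifacts/",
        # A selection of the optional parameters described above
        "id": "training-job-1234",
        "maxHPONumberOfTrainingJobs": 10,   # at least 10 is recommended
        "maxHPOParallelTrainingJobs": 2,
        "trainingInstanceType": "ml.p3.2xlarge",
        "neptuneIamRoleArn": "arn:aws:iam::123456789012:role/NeptuneMLRole",
    }

    # On IAM-authenticated clusters this request must also be SigV4-signed.
    response = requests.post(NEPTUNE_ML_ENDPOINT, json=payload, timeout=30)
    response.raise_for_status()
    print(response.json())  # contains "arn", "creationTimeInMillis", and "id"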

Response

  • arn   – a String, of type: string (a UTF-8 encoded string).

    The ARN of the new model training job.

  • creationTimeInMillis   – a Long, of type: long (a signed 64-bit integer).

    The model training job creation time, in milliseconds.

  • id   – a String, of type: string (a UTF-8 encoded string).

    The unique ID of the new model training job.

ListMLModelTrainingJobs (action)

        The Amazon CLI name for this API is: list-ml-model-training-jobs.

Lists Neptune ML model-training jobs. See Model training using the modeltraining command.

When invoking this operation in a Neptune cluster that has IAM authentication enabled, the IAM user or role making the request must have a policy attached that allows the neptune-db:ListMLModelTrainingJobs IAM action in that cluster.

Request

  • maxItems  (in the CLI: --max-items) –  a ListMLModelTrainingJobsInputMaxItemsInteger, of type: integer (a signed 32-bit integer), not less than 1 or more than 1024.

    The maximum number of items to return (from 1 to 1024; the default is 10).

  • neptuneIamRoleArn  (in the CLI: --neptune-iam-role-arn) –  a String, of type: string (a UTF-8 encoded string).

    The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.
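As an illustration, the following is a minimal sketch of listing training job IDs with an HTTP GET against the same /ml/modeltraining endpoint; the endpoint URL is a placeholder, and IAM-authenticated clusters require SigV4-signed requests.

    import requests

    NEPTUNE_ML_ENDPOINT = "https://your-neptune-endpoint:8182/ml/modeltraining"

    # Request up to 25 job IDs (maxItems accepts 1 to 1024; the default is 10).
    response = requests.get(NEPTUNE_ML_ENDPOINT, params={"maxItems": 25}, timeout=30)
    response.raise_for_status()
    print(response.json()["ids"])  # a page of model training job IDs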

Response

  • ids   – a String, of type: string (a UTF-8 encoded string).

    A page of the list of model training job IDs.

GetMLModelTrainingJob (action)

        The Amazon CLI name for this API is: get-ml-model-training-job.

Retrieves information about a Neptune ML model training job. See Model training using the modeltraining command.

When invoking this operation in a Neptune cluster that has IAM authentication enabled, the IAM user or role making the request must have a policy attached that allows the neptune-db:GetMLModelTrainingJobStatus IAM action in that cluster.

Request

  • id  (in the CLI: --id) –  Required: a String, of type: string (a UTF-8 encoded string).

    The unique identifier of the model-training job to retrieve.

  • neptuneIamRoleArn  (in the CLI: --neptune-iam-role-arn) –  a String, of type: string (a UTF-8 encoded string).

    The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.
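For illustration, here is a minimal sketch of retrieving a single job by appending its ID to the /ml/modeltraining path; the endpoint URL and job ID are placeholders, and IAM-authenticated clusters require SigV4-signed requests.

    import requests

    NEPTUNE_ML_ENDPOINT = "https://your-neptune-endpoint:8182/ml/modeltraining"
    job_id = "training-job-1234"  # placeholder job ID

    response = requests.get(f"{NEPTUNE_ML_ENDPOINT}/{job_id}", timeout=30)
    response.raise_for_status()
    info = response.json()
    print(info["status"])        # overall status of the training job
    print(info.get("mlModels"))  # configurations of the ML models being used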

Response

  • hpoJob   – An MlResourceDefinition object.

    The HPO job.

  • id   – a String, of type: string (a UTF-8 encoded string).

    The unique identifier of this model-training job.

  • mlModels   – An array of MlConfigDefinition objects.

    A list of the configurations of the ML models being used.

  • modelTransformJob   – An MlResourceDefinition object.

    The model transform job.

  • processingJob   – An MlResourceDefinition object.

    The data processing job.

  • status   – a String, of type: string (a UTF-8 encoded string).

    The status of the model training job.

CancelMLModelTrainingJob (action)

        The Amazon CLI name for this API is: cancel-ml-model-training-job.

Cancels a Neptune ML model training job. See Model training using the modeltraining command.

When invoking this operation in a Neptune cluster that has IAM authentication enabled, the IAM user or role making the request must have a policy attached that allows the neptune-db:CancelMLModelTrainingJob IAM action in that cluster.

Request

  • clean  (in the CLI: --clean) –  a Boolean, of type: boolean (a Boolean (true or false) value).

    If set to TRUE, this flag specifies that all Amazon S3 artifacts should be deleted when the job is stopped. The default is FALSE.

  • id  (in the CLI: --id) –  Required: a String, of type: string (a UTF-8 encoded string).

    The unique identifier of the model-training job to be canceled.

  • neptuneIamRoleArn  (in the CLI: --neptune-iam-role-arn) –  a String, of type: string (a UTF-8 encoded string).

    The ARN of an IAM role that provides Neptune access to SageMaker and Amazon S3 resources. This must be listed in your DB cluster parameter group or an error will occur.
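For illustration, here is a minimal sketch of canceling a job with an HTTP DELETE on its /ml/modeltraining path; the endpoint URL and job ID are placeholders, and IAM-authenticated clusters require SigV4-signed requests.

    import requests

    NEPTUNE_ML_ENDPOINT = "https://your-neptune-endpoint:8182/ml/modeltraining"
    job_id = "training-job-1234"  # placeholder job ID

    # clean=TRUE also deletes the job's Amazon S3 artifacts.
    response = requests.delete(
        f"{NEPTUNE_ML_ENDPOINT}/{job_id}", params={"clean": "TRUE"}, timeout=30
    )
    response.raise_for_status()
    print(response.json()["status"])  # status of the cancellation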

Response

  • status   – a String, of type: string (a UTF-8 encoded string).

    The status of the cancellation.

Model training structures:

CustomModelTrainingParameters (structure)

Contains custom model training parameters. See Custom models in Neptune ML.

Fields
  • sourceS3DirectoryPath – This is Required: a String, of type: string (a UTF-8 encoded string).

    The path to the Amazon S3 location where the Python module implementing your model is located. This must point to a valid existing Amazon S3 location that contains, at a minimum, a training script, a transform script, and a model-hpo-configuration.json file.

  • trainingEntryPointScript – This is a String, of type: string (a UTF-8 encoded string).

    The name of the entry point in your module of a script that performs model training and takes hyperparameters as command-line arguments, including fixed hyperparameters. The default is training.py.

  • transformEntryPointScript – This is a String, of type: string (a UTF-8 encoded string).

    The name of the entry point in your module of a script that should be run after the best model from the hyperparameter search has been identified, to compute the model artifacts necessary for model deployment. It should be able to run with no command-line arguments. The default is transform.py.
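
To show how this structure fits into a StartMLModelTrainingJob request, here is a minimal sketch of a payload fragment that includes customModelTrainingParameters; the Amazon S3 paths, job IDs, and script names are placeholders.

    # Hypothetical StartMLModelTrainingJob payload using custom model training.
    # All paths, IDs, and script names below are placeholders.
    payload = {
        "dataProcessingJobId": "dp-job-1234",
        "trainModelS3Location": "s3://your-bucket/neptune-ml/model-artifacts/",
        "customModelTrainingParameters": {
            "sourceS3DirectoryPath": "s3://your-bucket/neptune-ml/custom-model/",  # required
            "trainingEntryPointScript": "training.py",    # default if omitted
            "transformEntryPointScript": "transform.py",  # default if omitted
        },
    }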