

# Example: Hyperparameter Tuning Job
<a name="automatic-model-tuning-ex"></a>

This example shows how to create a new notebook for configuring and launching a hyperparameter tuning job. The tuning job uses the [XGBoost algorithm with Amazon SageMaker AI](xgboost.md) to train a model to predict whether a customer will enroll for a term deposit at a bank after being contacted by phone.

You use the low-level SDK for Python (Boto3) to configure and launch the hyperparameter tuning job, and the Amazon Web Services Management Console to monitor the status of hyperparameter tuning jobs. You can also use the Amazon SageMaker AI high-level [Amazon SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable) to configure, run, monitor, and analyze hyperparameter tuning jobs. For more information, see [https://github.com/aws/sagemaker-python-sdk](https://github.com/aws/sagemaker-python-sdk).

## Prerequisites
<a name="automatic-model-tuning-ex-prereq"></a>

To run the code in this example, you need the following:
+ [An Amazon account and an administrator user](gs-set-up.md)
+ An Amazon S3 bucket for storing your training dataset and the model artifacts created during training
+ [A running SageMaker AI notebook instance](gs-setup-working-env.md)

**Topics**
+ [Prerequisites](#automatic-model-tuning-ex-prereq)
+ [Create a Notebook Instance](automatic-model-tuning-ex-notebook.md)
+ [Get the Amazon SageMaker AI Boto 3 Client](automatic-model-tuning-ex-client.md)
+ [Get the SageMaker AI Execution Role](automatic-model-tuning-ex-role.md)
+ [Use an Amazon S3 bucket for input and output](automatic-model-tuning-ex-bucket.md)
+ [Download, Prepare, and Upload Training Data](automatic-model-tuning-ex-data.md)
+ [Configure and Launch a Hyperparameter Tuning Job](automatic-model-tuning-ex-tuning-job.md)
+ [Clean up](automatic-model-tuning-ex-cleanup.md)

# Create a Notebook Instance
<a name="automatic-model-tuning-ex-notebook"></a>

**Important**  
Custom IAM policies that allow Amazon SageMaker Studio or Amazon SageMaker Studio Classic to create Amazon SageMaker resources must also grant permissions to add tags to those resources. The permission to add tags to resources is required because Studio and Studio Classic automatically tag any resources they create. If an IAM policy allows Studio and Studio Classic to create resources but does not allow tagging, "AccessDenied" errors can occur when trying to create resources. For more information, see [Provide permissions for tagging SageMaker AI resources](security_iam_id-based-policy-examples.md#grant-tagging-permissions).  
[Amazon managed policies for Amazon SageMaker AI](security-iam-awsmanpol.md) that give permissions to create SageMaker resources already include permissions to add tags while creating those resources.
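For reference, a custom policy that allows Studio or Studio Classic to create resources might include a tagging statement like the following minimal sketch (in practice, scope the `Resource` element more narrowly than `*`):

```
{
    "Effect": "Allow",
    "Action": "sagemaker:AddTags",
    "Resource": "*"
}
```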

Create a Jupyter notebook that contains a pre-installed environment with the default Anaconda installation and Python 3.

**To create a Jupyter notebook**

1. Open the Amazon SageMaker AI console at [https://console.amazonaws.cn/sagemaker/](https://console.amazonaws.cn/sagemaker/).

1. Open a running notebook instance by choosing **Open** next to its name. The Jupyter notebook server page appears:

     
![\[Example Jupyter notebook server page.\]](http://docs.amazonaws.cn/en_us/sagemaker/latest/dg/images/notebook-dashboard.png)

1. To create a notebook, choose **Files**, **New**, and **conda\_python3**.

1. Name the notebook.

## Next Step
<a name="automatic-model-tuning-ex-next-client"></a>

[Get the Amazon SageMaker AI Boto 3 Client](automatic-model-tuning-ex-client.md)

# Get the Amazon SageMaker AI Boto 3 Client
<a name="automatic-model-tuning-ex-client"></a>

Import the Amazon SageMaker Python SDK, the Amazon SDK for Python (Boto3), and other Python libraries. In a new Jupyter notebook, paste the following code into the first cell:

```
import sagemaker
import boto3

import numpy as np                                # For performing matrix operations and numerical processing
import pandas as pd                               # For manipulating tabular data
from time import gmtime, strftime
import os

region = boto3.Session().region_name
smclient = boto3.Session().client('sagemaker')
```

The preceding code cell defines `region` and `smclient` objects that you will use to call the built-in XGBoost algorithm and set the SageMaker AI hyperparameter tuning job.

## Next Step
<a name="automatic-model-tuning-ex-next-role"></a>

[Get the SageMaker AI Execution Role](automatic-model-tuning-ex-role.md)

# Get the SageMaker AI Execution Role
<a name="automatic-model-tuning-ex-role"></a>

Get the execution role for the notebook instance. This is the IAM role that you created for your notebook instance.

To find the ARN of the IAM execution role attached to a notebook instance:

1. Open the Amazon SageMaker AI console at [https://console.amazonaws.cn/sagemaker/](https://console.amazonaws.cn/sagemaker/).

1. In the left navigation pane, choose **Notebook**, then **Notebook instances**.

1. From the list of notebooks, select the notebook that you want to view.

1. The ARN is in the **Permissions and encryption** section.

Alternatively, [Amazon SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable) users can retrieve the ARN of the execution role attached to their user profile or a notebook instance by running the following code:

```
from sagemaker import get_execution_role

role = get_execution_role()
print(role)
```

For more information about using `get_execution_role` in the [Amazon SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable), see [Session](https://sagemaker.readthedocs.io/en/stable/api/utility/session.html). For more information about roles, see [How to use SageMaker AI execution roles](sagemaker-roles.md).

## Next Step
<a name="automatic-model-tuning-ex-next-bucket"></a>

[Use an Amazon S3 bucket for input and output](automatic-model-tuning-ex-bucket.md)

# Use an Amazon S3 bucket for input and output
<a name="automatic-model-tuning-ex-bucket"></a>

Set up an S3 bucket to upload training datasets and save training output data for your hyperparameter tuning job.

**To use a default S3 bucket**

Use the following code to specify the default S3 bucket allocated for your SageMaker AI session. `prefix` is the path within the bucket where SageMaker AI stores the data for the current training job.

```
sess = sagemaker.Session()
bucket = sess.default_bucket() # Set a default S3 bucket
prefix = 'DEMO-automatic-model-tuning-xgboost-dm'
```

**To use a specific S3 bucket (Optional)**

If you want to use a specific S3 bucket, use the following code and replace the string with the exact name of your S3 bucket. The name of the bucket must contain **sagemaker** and be globally unique. The bucket must be in the same Amazon Region as the notebook instance that you use for this example.

```
bucket = "sagemaker-your-preferred-s3-bucket"

sess = sagemaker.Session(
    default_bucket = bucket
)
```

**Note**  
The name of the bucket doesn't need to contain **sagemaker** if the IAM role that you use to run the hyperparameter tuning job has a policy that gives the `S3FullAccess` permission.
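If you're unsure whether a custom bucket name is acceptable, a quick local check can catch obvious problems before you create the session. This is a sketch: `looks_like_valid_bucket_name` is a hypothetical helper that covers only the basic S3 naming rules (3 to 63 characters; lowercase letters, digits, hyphens, and dots; starts and ends with a letter or digit), not the full specification.

```
import re

def looks_like_valid_bucket_name(name):
    """Rough check against basic S3 bucket naming rules. Not exhaustive."""
    return bool(re.fullmatch(r'[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]', name))

name = "sagemaker-your-preferred-s3-bucket"
print(looks_like_valid_bucket_name(name))   # True
print("sagemaker" in name)                  # True
```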

## Next Step
<a name="automatic-model-tuning-ex-next-data"></a>

[Download, Prepare, and Upload Training Data](automatic-model-tuning-ex-data.md)

# Download, Prepare, and Upload Training Data
<a name="automatic-model-tuning-ex-data"></a>

For this example, you use a training dataset of information about bank customers that includes the customer's job, marital status, and how they were contacted during the bank's direct marketing campaign. To use a dataset for a hyperparameter tuning job, you download it, transform the data, and then upload it to an Amazon S3 bucket.

For more information about the dataset and the data transformation that the example performs, see the *hpo\_xgboost\_direct\_marketing\_sagemaker\_APIs* notebook in the **Hyperparameter Tuning** section of the **SageMaker AI Examples** tab in your notebook instance.

## Download and Explore the Training Dataset
<a name="automatic-model-tuning-ex-data-download"></a>

To download and explore the dataset, run the following code in your notebook:

```
!wget -N https://archive.ics.uci.edu/ml/machine-learning-databases/00222/bank-additional.zip
!unzip -o bank-additional.zip
data = pd.read_csv('./bank-additional/bank-additional-full.csv', sep=';')
pd.set_option('display.max_columns', 500)     # Make sure we can see all of the columns
pd.set_option('display.max_rows', 5)         # Keep the output on one page
data
```

## Prepare and Upload Data
<a name="automatic-model-tuning-ex-data-transform"></a>

Before creating the hyperparameter tuning job, prepare the data and upload it to an S3 bucket where the hyperparameter tuning job can access it.

Run the following code in your notebook:

```
data['no_previous_contact'] = np.where(data['pdays'] == 999, 1, 0)                                 # Indicator variable to capture when pdays takes a value of 999
data['not_working'] = np.where(np.in1d(data['job'], ['student', 'retired', 'unemployed']), 1, 0)   # Indicator for individuals not actively employed
model_data = pd.get_dummies(data)                                                                  # Convert categorical variables to sets of indicators
model_data
model_data = model_data.drop(['duration', 'emp.var.rate', 'cons.price.idx', 'cons.conf.idx', 'euribor3m', 'nr.employed'], axis=1)

train_data, validation_data, test_data = np.split(model_data.sample(frac=1, random_state=1729), [int(0.7 * len(model_data)), int(0.9*len(model_data))])

pd.concat([train_data['y_yes'], train_data.drop(['y_no', 'y_yes'], axis=1)], axis=1).to_csv('train.csv', index=False, header=False)
pd.concat([validation_data['y_yes'], validation_data.drop(['y_no', 'y_yes'], axis=1)], axis=1).to_csv('validation.csv', index=False, header=False)
pd.concat([test_data['y_yes'], test_data.drop(['y_no', 'y_yes'], axis=1)], axis=1).to_csv('test.csv', index=False, header=False)

boto3.Session().resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'train/train.csv')).upload_file('train.csv')
boto3.Session().resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'validation/validation.csv')).upload_file('validation.csv')
```
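The `np.split` call above cuts the shuffled data at the 70% and 90% marks, which yields a 70/20/10 train/validation/test split. A minimal sketch on a toy DataFrame illustrates the mechanics:

```
import numpy as np
import pandas as pd

toy = pd.DataFrame({'x': range(100)})                        # 100 rows
shuffled = toy.sample(frac=1, random_state=1729)             # Shuffle all rows
train, validation, test = np.split(
    shuffled, [int(0.7 * len(toy)), int(0.9 * len(toy))]     # Cut at rows 70 and 90
)
print(len(train), len(validation), len(test))                # 70 20 10
```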

## Next Step
<a name="automatic-model-tuning-ex-next-tuning-job"></a>

[Configure and Launch a Hyperparameter Tuning Job](automatic-model-tuning-ex-tuning-job.md)

# Configure and Launch a Hyperparameter Tuning Job
<a name="automatic-model-tuning-ex-tuning-job"></a>

**Important**  
Custom IAM policies that allow Amazon SageMaker Studio or Amazon SageMaker Studio Classic to create Amazon SageMaker resources must also grant permissions to add tags to those resources. The permission to add tags to resources is required because Studio and Studio Classic automatically tag any resources they create. If an IAM policy allows Studio and Studio Classic to create resources but does not allow tagging, "AccessDenied" errors can occur when trying to create resources. For more information, see [Provide permissions for tagging SageMaker AI resources](security_iam_id-based-policy-examples.md#grant-tagging-permissions).  
[Amazon managed policies for Amazon SageMaker AI](security-iam-awsmanpol.md) that give permissions to create SageMaker resources already include permissions to add tags while creating those resources.

A hyperparameter is a high-level parameter that influences the learning process during model training. To get the best model predictions, you optimize the hyperparameter configuration, or set of hyperparameter values. The process of finding an optimal configuration is called hyperparameter tuning. To configure and launch a hyperparameter tuning job, complete the steps in the following sections.

**Topics**
+ [Settings for the hyperparameter tuning job](#automatic-model-tuning-ex-low-tuning-config)
+ [Configure the training jobs](#automatic-model-tuning-ex-low-training-def)
+ [Name and launch the hyperparameter tuning job](#automatic-model-tuning-ex-low-launch)
+ [Monitor the Progress of a Hyperparameter Tuning Job](automatic-model-tuning-monitor.md)
+ [View the Status of the Training Jobs](#automatic-model-tuning-monitor-training)
+ [View the Best Training Job](#automatic-model-tuning-best-training-job)

## Settings for the hyperparameter tuning job
<a name="automatic-model-tuning-ex-low-tuning-config"></a>

To specify settings for the hyperparameter tuning job, define a JSON object when you create the tuning job. Pass this JSON object as the value of the `HyperParameterTuningJobConfig` parameter to the [CreateHyperParameterTuningJob](https://docs.amazonaws.cn/sagemaker/latest/APIReference/API_CreateHyperParameterTuningJob.html) API.

In this JSON object, you specify the following:
+ `HyperParameterTuningJobObjective` – The objective metric used to evaluate the performance of the training jobs that the hyperparameter tuning job launches.
+ `ParameterRanges` – The range of values that a tunable hyperparameter can use during optimization. For more information, see [Define Hyperparameter Ranges](automatic-model-tuning-define-ranges.md).
+ `RandomSeed` – (Optional) A value used to initialize a pseudo-random number generator. Setting a random seed allows the hyperparameter tuning search strategies to produce more consistent configurations for the same tuning job.
+ `ResourceLimits` – The maximum number of training jobs, and the maximum number of parallel training jobs, that the hyperparameter tuning job can run.

**Note**  
If you use your own algorithm for hyperparameter tuning, rather than a SageMaker AI [built-in algorithm](https://docs.amazonaws.cn/sagemaker/latest/dg/algos.html), you must define metrics for your algorithm. For more information, see [Define metrics](automatic-model-tuning-define-metrics-variables.md#automatic-model-tuning-define-metrics).
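For illustration, if a custom training container printed lines such as `validation-auc:0.91` to its logs, a metric definition for it might look like the following. This is a hypothetical sketch (the built-in XGBoost algorithm used in this example does not need it, and the log format shown is invented); locally, you can verify that the regular expression extracts the metric value:

```
import re

# Hypothetical metric definition for a custom algorithm's log format
metric_definitions = [
    {"Name": "validation:auc", "Regex": "validation-auc:([0-9\\.]+)"}
]

# SageMaker AI applies the regex to the training log stream; here we
# apply the same regex to a sample log line
sample_log_line = "epoch 10 validation-auc:0.91"
match = re.search(metric_definitions[0]["Regex"], sample_log_line)
print(match.group(1))   # 0.91
```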

The following code example shows how to configure a hyperparameter tuning job using the built-in [XGBoost algorithm](https://docs.amazonaws.cn/sagemaker/latest/dg/xgboost.html). The code example defines ranges for the `eta`, `alpha`, `min_child_weight`, and `max_depth` hyperparameters. For more information about these and other hyperparameters, see [XGBoost Parameters](https://xgboost.readthedocs.io/en/release_1.2.0/parameter.html).

In this code example, the hyperparameter tuning job searches for the hyperparameter configuration that maximizes the objective metric `validation:auc`. SageMaker AI built-in algorithms automatically write the objective metric to CloudWatch Logs. The code example also shows how to set a `RandomSeed`.

```
tuning_job_config = {
    "ParameterRanges": {
      "CategoricalParameterRanges": [],
      "ContinuousParameterRanges": [
        {
          "MaxValue": "1",
          "MinValue": "0",
          "Name": "eta"
        },
        {
          "MaxValue": "2",
          "MinValue": "0",
          "Name": "alpha"
        },
        {
          "MaxValue": "10",
          "MinValue": "1",
          "Name": "min_child_weight"
        }
      ],
      "IntegerParameterRanges": [
        {
          "MaxValue": "10",
          "MinValue": "1",
          "Name": "max_depth"
        }
      ]
    },
    "ResourceLimits": {
      "MaxNumberOfTrainingJobs": 20,
      "MaxParallelTrainingJobs": 3
    },
    "Strategy": "Bayesian",
    "HyperParameterTuningJobObjective": {
      "MetricName": "validation:auc",
      "Type": "Maximize"
    },
    "RandomSeed" : 123
  }
```

## Configure the training jobs
<a name="automatic-model-tuning-ex-low-training-def"></a>

The hyperparameter tuning job launches training jobs to find an optimal configuration of hyperparameters. Configure these training jobs using the SageMaker AI [CreateHyperParameterTuningJob](https://docs.amazonaws.cn/sagemaker/latest/APIReference/API_CreateHyperParameterTuningJob.html) API.

To configure the training jobs, define a JSON object and pass it as the value of the `TrainingJobDefinition` parameter inside `CreateHyperParameterTuningJob`.

In this JSON object, you can specify the following: 
+ `AlgorithmSpecification` – The [registry path](https://docs.amazonaws.cn/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html) of the Docker image containing the training algorithm and related metadata. To specify an algorithm, you can use your own [custom built algorithm](https://docs.amazonaws.cn/sagemaker/latest/dg/your-algorithms.html) inside a [Docker](https://docs.docker.com/get-started/overview/) container or a [SageMaker AI built-in algorithm](https://docs.amazonaws.cn/sagemaker/latest/dg/algos.html) (required).
+ `InputDataConfig` – The input configuration, including the `ChannelName`, `ContentType`, and data source for your training and test data (required).
+ `OutputDataConfig` – The storage location for the algorithm's output. Specify the S3 bucket where you want to store the output of the training jobs (required).
+ `RoleArn` – The [Amazon Resource Name](https://docs.amazonaws.cn/general/latest/gr/aws-arns-and-namespaces.html) (ARN) of an Amazon Identity and Access Management (IAM) role that SageMaker AI uses to perform tasks. Tasks include reading input data, downloading a Docker image, writing model artifacts to an S3 bucket, writing logs to Amazon CloudWatch Logs, and writing metrics to Amazon CloudWatch (required).
+ `StoppingCondition` – The maximum runtime in seconds that a training job can run before being stopped. This value should be greater than the time needed to train your model (required).
+ `MetricDefinitions` – The name and regular expression that defines any metrics that the training jobs emit. Define metrics only when you use a custom training algorithm. The example in the following code uses a built-in algorithm, which already has metrics defined. For information about defining metrics (optional), see [Define metrics](automatic-model-tuning-define-metrics-variables.md#automatic-model-tuning-define-metrics).
+ `TrainingImage` – The [Docker](https://docs.docker.com/get-started/overview/) container image that specifies the training algorithm (optional).
+ `StaticHyperParameters` – The name and values of hyperparameters that are not tuned in the tuning job (optional).

The following code example sets static values for the `eval_metric`, `num_round`, `objective`, `rate_drop`, and `tweedie_variance_power` parameters of the [XGBoost algorithm with Amazon SageMaker AI](xgboost.md) built-in algorithm.

------
#### [ SageMaker Python SDK v1 ]

```
from sagemaker.amazon.amazon_estimator import get_image_uri
training_image = get_image_uri(region, 'xgboost', repo_version='1.0-1')

s3_input_train = 's3://{}/{}/train'.format(bucket, prefix)
s3_input_validation ='s3://{}/{}/validation/'.format(bucket, prefix)

training_job_definition = {
    "AlgorithmSpecification": {
      "TrainingImage": training_image,
      "TrainingInputMode": "File"
    },
    "InputDataConfig": [
      {
        "ChannelName": "train",
        "CompressionType": "None",
        "ContentType": "csv",
        "DataSource": {
          "S3DataSource": {
            "S3DataDistributionType": "FullyReplicated",
            "S3DataType": "S3Prefix",
            "S3Uri": s3_input_train
          }
        }
      },
      {
        "ChannelName": "validation",
        "CompressionType": "None",
        "ContentType": "csv",
        "DataSource": {
          "S3DataSource": {
            "S3DataDistributionType": "FullyReplicated",
            "S3DataType": "S3Prefix",
            "S3Uri": s3_input_validation
          }
        }
      }
    ],
    "OutputDataConfig": {
      "S3OutputPath": "s3://{}/{}/output".format(bucket,prefix)
    },
    "ResourceConfig": {
      "InstanceCount": 2,
      "InstanceType": "ml.c4.2xlarge",
      "VolumeSizeInGB": 10
    },
    "RoleArn": role,
    "StaticHyperParameters": {
      "eval_metric": "auc",
      "num_round": "100",
      "objective": "binary:logistic",
      "rate_drop": "0.3",
      "tweedie_variance_power": "1.4"
    },
    "StoppingCondition": {
      "MaxRuntimeInSeconds": 43200
    }
}
```

------
#### [ SageMaker Python SDK v2 ]

```
training_image = sagemaker.image_uris.retrieve('xgboost', region, '1.0-1')

s3_input_train = 's3://{}/{}/train'.format(bucket, prefix)
s3_input_validation ='s3://{}/{}/validation/'.format(bucket, prefix)

training_job_definition = {
    "AlgorithmSpecification": {
      "TrainingImage": training_image,
      "TrainingInputMode": "File"
    },
    "InputDataConfig": [
      {
        "ChannelName": "train",
        "CompressionType": "None",
        "ContentType": "csv",
        "DataSource": {
          "S3DataSource": {
            "S3DataDistributionType": "FullyReplicated",
            "S3DataType": "S3Prefix",
            "S3Uri": s3_input_train
          }
        }
      },
      {
        "ChannelName": "validation",
        "CompressionType": "None",
        "ContentType": "csv",
        "DataSource": {
          "S3DataSource": {
            "S3DataDistributionType": "FullyReplicated",
            "S3DataType": "S3Prefix",
            "S3Uri": s3_input_validation
          }
        }
      }
    ],
    "OutputDataConfig": {
      "S3OutputPath": "s3://{}/{}/output".format(bucket,prefix)
    },
    "ResourceConfig": {
      "InstanceCount": 2,
      "InstanceType": "ml.c4.2xlarge",
      "VolumeSizeInGB": 10
    },
    "RoleArn": role,
    "StaticHyperParameters": {
      "eval_metric": "auc",
      "num_round": "100",
      "objective": "binary:logistic",
      "rate_drop": "0.3",
      "tweedie_variance_power": "1.4"
    },
    "StoppingCondition": {
      "MaxRuntimeInSeconds": 43200
    }
}
```

------

## Name and launch the hyperparameter tuning job
<a name="automatic-model-tuning-ex-low-launch"></a>

After you configure the hyperparameter tuning job, you can launch it by calling the [CreateHyperParameterTuningJob](https://docs.amazonaws.cn/sagemaker/latest/APIReference/API_CreateHyperParameterTuningJob.html) API. The following code example uses the `tuning_job_config` and `training_job_definition` objects that you defined in the previous two code examples to create a hyperparameter tuning job.

```
tuning_job_name = "MyTuningJob"
smclient.create_hyper_parameter_tuning_job(HyperParameterTuningJobName = tuning_job_name,
                                           HyperParameterTuningJobConfig = tuning_job_config,
                                           TrainingJobDefinition = training_job_definition)
```
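Tuning job names must be unique in your account and Region, and at most 32 characters long. One way to avoid name collisions across repeated runs is to append a timestamp, using the `gmtime` and `strftime` imports from the first cell; a sketch (the `xgboost-tuningjob-` prefix is just an example):

```
from time import gmtime, strftime

# Append a UTC timestamp so repeated runs don't reuse a job name
tuning_job_name = "xgboost-tuningjob-" + strftime("%d-%H-%M-%S", gmtime())
print(tuning_job_name)              # For example: xgboost-tuningjob-04-17-02-59
print(len(tuning_job_name) <= 32)   # True; the name fits the 32-character limit
```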

# Monitor the Progress of a Hyperparameter Tuning Job
<a name="automatic-model-tuning-monitor"></a>

To monitor the progress of a hyperparameter tuning job and the training jobs that it launches, use the Amazon SageMaker AI console.

**Topics**
+ [View the Status of the Hyperparameter Tuning Job](#automatic-model-tuning-monitor-tuning)

## View the Status of the Hyperparameter Tuning Job
<a name="automatic-model-tuning-monitor-tuning"></a>

**To view the status of the hyperparameter tuning job**

1. Open the Amazon SageMaker AI console at [https://console.amazonaws.cn/sagemaker/](https://console.amazonaws.cn/sagemaker/).

1. Choose **Hyperparameter tuning jobs**.  
![\[Hyperparameter tuning job console.\]](http://docs.amazonaws.cn/en_us/sagemaker/latest/dg/images/console-tuning-jobs.png)

1. In the list of hyperparameter tuning jobs, check the status of the hyperparameter tuning job that you launched. A tuning job can have one of the following statuses:
   + `Completed`—The hyperparameter tuning job successfully completed.
   + `InProgress`—The hyperparameter tuning job is in progress. One or more training jobs are still running.
   + `Failed`—The hyperparameter tuning job failed.
   + `Stopped`—The hyperparameter tuning job was manually stopped before it completed. All training jobs that the hyperparameter tuning job launched are stopped.
   + `Stopping`—The hyperparameter tuning job is in the process of stopping.
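You can also check these statuses programmatically with the `DescribeHyperParameterTuningJob` API. The following sketch separates the terminal statuses from the in-flight ones; `is_finished` is a hypothetical helper, and the Boto3 polling loop is shown commented out because it requires a running tuning job and AWS credentials:

```
# Terminal statuses reported by DescribeHyperParameterTuningJob
TERMINAL_STATUSES = {"Completed", "Failed", "Stopped"}

def is_finished(status):
    """Hypothetical helper: True once a tuning job reaches a terminal status."""
    return status in TERMINAL_STATUSES

# With a running job, you could poll like this:
# import time
# while True:
#     status = smclient.describe_hyper_parameter_tuning_job(
#         HyperParameterTuningJobName=tuning_job_name
#     )["HyperParameterTuningJobStatus"]
#     if is_finished(status):
#         break
#     time.sleep(60)

print(is_finished("InProgress"))   # False
print(is_finished("Completed"))    # True
```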

## View the Status of the Training Jobs
<a name="automatic-model-tuning-monitor-training"></a>

**To view the status of the training jobs that the hyperparameter tuning job launched**

1. In the list of hyperparameter tuning jobs, choose the job that you launched.

1. Choose **Training jobs**.  
![\[Location of Training jobs in the .\]](http://docs.amazonaws.cn/en_us/sagemaker/latest/dg/images/hyperparameter-training-jobs.png)

1. View the status of each training job. To see more details about a job, choose it in the list of training jobs. To view a summary of the status of all of the training jobs that the hyperparameter tuning job launched, see **Training job status counter**.

   A training job can have one of the following statuses:
   + `Completed`—The training job successfully completed.
   + `InProgress`—The training job is in progress.
   + `Stopped`—The training job was manually stopped before it completed.
   + `Failed (Retryable)`—The training job failed, but can be retried. A failed training job can be retried only if it failed because of an internal service error.
   + `Failed (Non-retryable)`—The training job failed and can't be retried. A failed training job can't be retried if it failed because of a client error.
**Note**  
Hyperparameter tuning jobs can be stopped and the underlying resources [deleted](https://docs.amazonaws.cn/sagemaker/latest/dg/automatic-model-tuning-ex-cleanup.html), but the jobs themselves cannot be deleted.

## View the Best Training Job
<a name="automatic-model-tuning-best-training-job"></a>

A hyperparameter tuning job uses the objective metric that each training job returns to evaluate training jobs. While the hyperparameter tuning job is in progress, the best training job is the one that has returned the best objective metric so far. After the hyperparameter tuning job is complete, the best training job is the one that returned the best objective metric.
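The selection logic itself is simple: with a `Maximize` objective type, the best job is the one with the highest objective metric value, and with `Minimize`, the lowest. A sketch with hypothetical job names and metric values:

```
# Hypothetical objective metric values returned by three training jobs
results = [
    ("job-1", 0.78),
    ("job-2", 0.84),
    ("job-3", 0.81),
]

def best_training_job(results, objective_type="Maximize"):
    """Pick the job with the best objective metric value."""
    pick = max if objective_type == "Maximize" else min
    return pick(results, key=lambda item: item[1])

print(best_training_job(results))                # ('job-2', 0.84)
print(best_training_job(results, "Minimize"))    # ('job-1', 0.78)
```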

To view the best training job, choose **Best training job**.

![\[Location of Best training job in the hyperparameter tuning job console.\]](http://docs.amazonaws.cn/en_us/sagemaker/latest/dg/images/best-training-job.png)


To deploy the best training job as a model that you can host at a SageMaker AI endpoint, choose **Create model**.

### Next Step
<a name="automatic-model-tuning-ex-next-cleanup"></a>

[Clean up](automatic-model-tuning-ex-cleanup.md)

# Clean up
<a name="automatic-model-tuning-ex-cleanup"></a>

To avoid incurring unnecessary charges, when you are done with the example, use the Amazon Web Services Management Console to delete the resources that you created for it. 

**Note**  
If you plan to explore other examples, you might want to keep some of these resources, such as your notebook instance, S3 bucket, and IAM role.

1. Open the SageMaker AI console at [https://console.amazonaws.cn/sagemaker/](https://console.amazonaws.cn/sagemaker/) and delete the notebook instance. Stop the instance before deleting it.

1. Open the Amazon S3 console at [https://console.amazonaws.cn/s3/](https://console.amazonaws.cn/s3/) and delete the bucket that you created to store model artifacts and the training dataset. 

1. Open the IAM console at [https://console.amazonaws.cn/iam/](https://console.amazonaws.cn/iam/) and delete the IAM role. If you created permission policies, you can delete them, too.

1. Open the Amazon CloudWatch console at [https://console.amazonaws.cn/cloudwatch/](https://console.amazonaws.cn/cloudwatch/) and delete all of the log groups that have names starting with `/aws/sagemaker/`.