Services or capabilities described in Amazon Web Services documentation might vary by Region. To see the differences applicable to the China Regions, see Getting Started with Amazon Web Services in China (PDF).

View and Update the Details of a Model Version

You can view and update details of a specific model version by using either the Amazon SDK for Python (Boto3) or the Amazon SageMaker Studio console.

Important

Amazon SageMaker integrates Model Cards into Model Registry. A model package registered in the Model Registry includes a simplified Model Card as a component of the model package. For more information, see Model package model card schema (Studio).

View and Update the Details of a Model Version (Boto3)

To view the details of a model version by using Boto3, complete the following steps.

  1. Call the list_model_packages API operation to view the model versions in a Model Group.

    sm_client.list_model_packages(ModelPackageGroupName="ModelGroup1")

    The response is a list of model package summaries. You can get the Amazon Resource Name (ARN) of the model versions from this list.

    {'ModelPackageSummaryList': [{'ModelPackageGroupName': 'ModelGroup1',
        'ModelPackageVersion': 1,
        'ModelPackageArn': 'arn:aws:sagemaker:us-east-2:123456789012:model-package/ModelGroup1/1',
        'ModelPackageDescription': 'TestMe',
        'CreationTime': datetime.datetime(2020, 10, 29, 1, 27, 46, 46000, tzinfo=tzlocal()),
        'ModelPackageStatus': 'Completed',
        'ModelApprovalStatus': 'Approved'}],
     'ResponseMetadata': {'RequestId': '12345678-abcd-1234-abcd-aabbccddeeff',
        'HTTPStatusCode': 200,
        'HTTPHeaders': {'x-amzn-requestid': '12345678-abcd-1234-abcd-aabbccddeeff',
            'content-type': 'application/x-amz-json-1.1',
            'content-length': '349',
            'date': 'Mon, 23 Nov 2020 04:56:50 GMT'},
        'RetryAttempts': 0}}
  2. Call describe_model_package to see the details of the model version. You pass in the ARN of a model version that you got in the output of the call to list_model_packages.

    sm_client.describe_model_package(ModelPackageName="arn:aws:sagemaker:us-east-2:123456789012:model-package/ModelGroup1/1")

    The output of this call is a JSON object with the model version details.

    {'ModelPackageGroupName': 'ModelGroup1',
     'ModelPackageVersion': 1,
     'ModelPackageArn': 'arn:aws:sagemaker:us-east-2:123456789012:model-package/ModelGroup1/1',
     'ModelPackageDescription': 'Test Model',
     'CreationTime': datetime.datetime(2020, 10, 29, 1, 27, 46, 46000, tzinfo=tzlocal()),
     'InferenceSpecification': {'Containers': [{'Image': '257758044811.dkr.ecr.us-east-2.amazonaws.com/sagemaker-xgboost:1.0-1-cpu-py3',
            'ImageDigest': 'sha256:99fa602cff19aee33297a5926f8497ca7bcd2a391b7d600300204eef803bca66',
            'ModelDataUrl': 's3://sagemaker-us-east-2-123456789012/ModelGroup1/pipelines-0gdonccek7o9-AbaloneTrain-stmiylhtIR/output/model.tar.gz'}],
        'SupportedTransformInstanceTypes': ['ml.m5.xlarge'],
        'SupportedRealtimeInferenceInstanceTypes': ['ml.t2.medium', 'ml.m5.xlarge'],
        'SupportedContentTypes': ['text/csv'],
        'SupportedResponseMIMETypes': ['text/csv']},
     'ModelPackageStatus': 'Completed',
     'ModelPackageStatusDetails': {'ValidationStatuses': [], 'ImageScanStatuses': []},
     'CertifyForMarketplace': False,
     'ModelApprovalStatus': 'PendingManualApproval',
     'LastModifiedTime': datetime.datetime(2020, 10, 29, 1, 28, 0, 438000, tzinfo=tzlocal()),
     'ResponseMetadata': {'RequestId': '12345678-abcd-1234-abcd-aabbccddeeff',
        'HTTPStatusCode': 200,
        'HTTPHeaders': {'x-amzn-requestid': '12345678-abcd-1234-abcd-aabbccddeeff',
            'content-type': 'application/x-amz-json-1.1',
            'content-length': '1038',
            'date': 'Mon, 23 Nov 2020 04:59:38 GMT'},
        'RetryAttempts': 0}}
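The two calls above can be combined into a view-and-update round trip. The following sketch parses a `list_model_packages`-style response to find the ARN of the newest model version, then builds the arguments for `update_model_package`, the API operation for changing a version's approval status. The summary values are assumptions copied from the example output above, and the actual client call is left commented so that the sketch runs without AWS credentials.

```python
# Minimal sketch, assuming a response shaped like the example output above.
response = {
    "ModelPackageSummaryList": [
        {
            "ModelPackageVersion": 1,
            "ModelPackageArn": "arn:aws:sagemaker:us-east-2:123456789012:model-package/ModelGroup1/1",
            "ModelApprovalStatus": "PendingManualApproval",
        }
    ]
}

# Pick the newest version by its version number.
latest = max(response["ModelPackageSummaryList"],
             key=lambda s: s["ModelPackageVersion"])

# Arguments for update_model_package; the description text is illustrative.
update_args = {
    "ModelPackageArn": latest["ModelPackageArn"],
    "ModelApprovalStatus": "Approved",  # or "Rejected" / "PendingManualApproval"
    "ApprovalDescription": "Passed offline evaluation",
}

# With a configured client (sm_client = boto3.client("sagemaker")):
# sm_client.update_model_package(**update_args)
print(update_args["ModelPackageArn"])
```

Approving a version this way can also initiate CI/CD deployment if an EventBridge rule is listening for approval events, as described later in this topic.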

Model package model card schema (Studio)

All details related to the model version are encapsulated in the model package’s model card. The model card of a model package is a specialized use of the Amazon SageMaker Model Card with a simplified schema. The model package model card schema is shown following.

{ "title": "SageMakerModelCardSchema", "description": "Schema of a model package’s model card.", "version": "0.1.0", "type": "object", "additionalProperties": false, "properties": { "model_overview": { "description": "Overview about the model.", "type": "object", "additionalProperties": false, "properties": { "model_creator": { "description": "Creator of model.", "type": "string", "maxLength": 1024 }, "model_artifact": { "description": "Location of the model artifact.", "type": "array", "maxContains": 15, "items": { "type": "string", "maxLength": 1024 } } } }, "intended_uses": { "description": "Intended usage of model.", "type": "object", "additionalProperties": false, "properties": { "purpose_of_model": { "description": "Reason the model was developed.", "type": "string", "maxLength": 2048 }, "intended_uses": { "description": "Intended use cases.", "type": "string", "maxLength": 2048 }, "factors_affecting_model_efficiency": { "type": "string", "maxLength": 2048 }, "risk_rating": { "description": "Risk rating for model card.", "$ref": "#/definitions/risk_rating" }, "explanations_for_risk_rating": { "type": "string", "maxLength": 2048 } } }, "business_details": { "description": "Business details of model.", "type": "object", "additionalProperties": false, "properties": { "business_problem": { "description": "Business problem solved by the model.", "type": "string", "maxLength": 2048 }, "business_stakeholders": { "description": "Business stakeholders.", "type": "string", "maxLength": 2048 }, "line_of_business": { "type": "string", "maxLength": 2048 } } }, "training_details": { "description": "Overview about the training.", "type": "object", "additionalProperties": false, "properties": { "objective_function": { "description": "The objective function for which the model is optimized.", "function": { "$ref": "#/definitions/objective_function" }, "notes": { "type": "string", "maxLength": 1024 } }, "training_observations": { "type": "string", "maxLength": 1024 }, 
"training_job_details": { "type": "object", "additionalProperties": false, "properties": { "training_arn": { "description": "SageMaker Training job ARN.", "type": "string", "maxLength": 1024 }, "training_datasets": { "description": "Location of the model datasets.", "type": "array", "maxContains": 15, "items": { "type": "string", "maxLength": 1024 } }, "training_environment": { "type": "object", "additionalProperties": false, "properties": { "container_image": { "description": "SageMaker training image URI.", "type": "array", "maxContains": 15, "items": { "type": "string", "maxLength": 1024 } } } }, "training_metrics": { "type": "array", "items": { "maxItems": 50, "$ref": "#/definitions/training_metric" } }, "user_provided_training_metrics": { "type": "array", "items": { "maxItems": 50, "$ref": "#/definitions/training_metric" } }, "hyper_parameters": { "type": "array", "items": { "maxItems": 100, "$ref": "#/definitions/training_hyper_parameter" } }, "user_provided_hyper_parameters": { "type": "array", "items": { "maxItems": 100, "$ref": "#/definitions/training_hyper_parameter" } } } } } }, "evaluation_details": { "type": "array", "default": [], "items": { "type": "object", "required": [ "name" ], "additionalProperties": false, "properties": { "name": { "type": "string", "pattern": ".{1,63}" }, "evaluation_observation": { "type": "string", "maxLength": 2096 }, "evaluation_job_arn": { "type": "string", "maxLength": 256 }, "datasets": { "type": "array", "items": { "type": "string", "maxLength": 1024 }, "maxItems": 10 }, "metadata": { "description": "Additional attributes associated with the evaluation results.", "type": "object", "additionalProperties": { "type": "string", "maxLength": 1024 } }, "metric_groups": { "type": "array", "default": [], "items": { "type": "object", "required": [ "name", "metric_data" ], "properties": { "name": { "type": "string", "pattern": ".{1,63}" }, "metric_data": { "type": "array", "items": { "anyOf": [ { "$ref": 
"#/definitions/simple_metric" }, { "$ref": "#/definitions/linear_graph_metric" }, { "$ref": "#/definitions/bar_chart_metric" }, { "$ref": "#/definitions/matrix_metric" } ] } } } } } } } }, "additional_information": { "additionalProperties": false, "type": "object", "properties": { "ethical_considerations": { "description": "Ethical considerations for model users.", "type": "string", "maxLength": 2048 }, "caveats_and_recommendations": { "description": "Caveats and recommendations for model users.", "type": "string", "maxLength": 2048 }, "custom_details": { "type": "object", "additionalProperties": { "$ref": "#/definitions/custom_property" } } } } }, "definitions": { "source_algorithms": { "type": "array", "minContains": 1, "maxContains": 1, "items": { "type": "object", "additionalProperties": false, "required": [ "algorithm_name" ], "properties": { "algorithm_name": { "description": "The name of the algorithm used to create the model package. The algorithm must be either an algorithm resource in your SageMaker account or an algorithm in Amazon Web Services Marketplace that you are subscribed to.", "type": "string", "maxLength": 170 }, "model_data_url": { "description": "Amazon S3 path where the model artifacts, which result from model training, are stored.", "type": "string", "maxLength": 1024 } } } }, "inference_specification": { "type": "object", "additionalProperties": false, "required": [ "containers" ], "properties": { "containers": { "description": "Contains inference related information used to create model package.", "type": "array", "minContains": 1, "maxContains": 15, "items": { "type": "object", "additionalProperties": false, "required": [ "image" ], "properties": { "model_data_url": { "description": "Amazon S3 path where the model artifacts, which result from model training, are stored.", "type": "string", "maxLength": 1024 }, "image": { "description": "Inference environment path. 
The Amazon Elastic Container Registry (Amazon ECR) path where inference code is stored.", "type": "string", "maxLength": 255 }, "nearest_model_name": { "description": "The name of a pre-trained machine learning benchmarked by an Amazon SageMaker Inference Recommender model that matches your model.", "type": "string" } } } } } }, "risk_rating": { "description": "Risk rating of model.", "type": "string", "enum": [ "High", "Medium", "Low", "Unknown" ] }, "custom_property": { "description": "Additional property.", "type": "string", "maxLength": 1024 }, "objective_function": { "description": "Objective function for which the training job is optimized.", "additionalProperties": false, "properties": { "function": { "type": "string", "enum": [ "Maximize", "Minimize" ] }, "facet": { "type": "string", "maxLength": 63 }, "condition": { "type": "string", "maxLength": 63 } } }, "training_metric": { "description": "Training metric data.", "type": "object", "required": [ "name", "value" ], "additionalProperties": false, "properties": { "name": { "type": "string", "pattern": ".{1,255}" }, "notes": { "type": "string", "maxLength": 1024 }, "value": { "type": "number" } } }, "training_hyper_parameter": { "description": "Training hyperparameter.", "type": "object", "required": [ "name", "value" ], "additionalProperties": false, "properties": { "name": { "type": "string", "pattern": ".{1,255}" }, "value": { "type": "string", "pattern": ".{1,255}" } } }, "linear_graph_metric": { "type": "object", "required": [ "name", "type", "value" ], "additionalProperties": false, "properties": { "name": { "type": "string", "pattern": ".{1,255}" }, "notes": { "type": "string", "maxLength": 1024 }, "type": { "type": "string", "enum": [ "linear_graph" ] }, "value": { "anyOf": [ { "type": "array", "items": { "type": "array", "items": { "type": "number" }, "minItems": 2, "maxItems": 2 }, "minItems": 1 } ] }, "x_axis_name": { "$ref": "#/definitions/axis_name_string" }, "y_axis_name": { "$ref": 
"#/definitions/axis_name_string" } } }, "bar_chart_metric": { "type": "object", "required": [ "name", "type", "value" ], "additionalProperties": false, "properties": { "name": { "type": "string", "pattern": ".{1,255}" }, "notes": { "type": "string", "maxLength": 1024 }, "type": { "type": "string", "enum": [ "bar_chart" ] }, "value": { "anyOf": [ { "type": "array", "items": { "type": "number" }, "minItems": 1 } ] }, "x_axis_name": { "$ref": "#/definitions/axis_name_array" }, "y_axis_name": { "$ref": "#/definitions/axis_name_string" } } }, "matrix_metric": { "type": "object", "required": [ "name", "type", "value" ], "additionalProperties": false, "properties": { "name": { "type": "string", "pattern": ".{1,255}" }, "notes": { "type": "string", "maxLength": 1024 }, "type": { "type": "string", "enum": [ "matrix" ] }, "value": { "anyOf": [ { "type": "array", "items": { "type": "array", "items": { "type": "number" }, "minItems": 1, "maxItems": 20 }, "minItems": 1, "maxItems": 20 } ] }, "x_axis_name": { "$ref": "#/definitions/axis_name_array" }, "y_axis_name": { "$ref": "#/definitions/axis_name_array" } } }, "simple_metric": { "description": "Metric data.", "type": "object", "required": [ "name", "type", "value" ], "additionalProperties": false, "properties": { "name": { "type": "string", "pattern": ".{1,255}" }, "notes": { "type": "string", "maxLength": 1024 }, "type": { "type": "string", "enum": [ "number", "string", "boolean" ] }, "value": { "anyOf": [ { "type": "number" }, { "type": "string", "maxLength": 63 }, { "type": "boolean" } ] }, "x_axis_name": { "$ref": "#/definitions/axis_name_string" }, "y_axis_name": { "$ref": "#/definitions/axis_name_string" } } }, "axis_name_array": { "type": "array", "items": { "type": "string", "maxLength": 63 } }, "axis_name_string": { "type": "string", "maxLength": 63 } } }
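As a concrete illustration of the schema above, the following sketch builds a minimal model card document and checks two of the schema’s constraints (a `maxLength` limit and the `risk_rating` enum) by hand. The field values are hypothetical; in practice SageMaker generates the model card for a model package, and full validation would use the complete JSON Schema (for example, with a JSON Schema validator library).

```python
import json

# Hypothetical model card content; field names follow the schema above.
model_card = {
    "model_overview": {"model_creator": "data-science-team"},
    "intended_uses": {
        "purpose_of_model": "Predict abalone age from physical measurements.",
        "risk_rating": "Low",
    },
    "business_details": {"business_problem": "Reduce manual aging of abalone."},
}

# Hand-rolled checks for two constraints stated in the schema:
# model_creator has maxLength 1024, and risk_rating is a four-value enum.
assert len(model_card["model_overview"]["model_creator"]) <= 1024
assert model_card["intended_uses"]["risk_rating"] in ("High", "Medium", "Low", "Unknown")

# The card round-trips as JSON, as the schema requires.
print(json.dumps(model_card, indent=2))
```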

View and Update the Details of a Model Version (Studio or Studio Classic)

To view and update the details of a model version, complete the following steps for either Studio or Studio Classic, depending on which you use. In Studio Classic, you can update the approval status of a model version; for details, see Update the Approval Status of a Model. In Studio, SageMaker creates a model card for the model package, and the model version UI provides options for updating the details in the model card.

Studio
  1. Open the SageMaker Studio console by following the instructions in Launch Amazon SageMaker Studio.

  2. In the left navigation pane, choose Models from the menu.

  3. Choose the Registered models tab, if not selected already.

  4. Immediately below the Registered models tab label, choose Model Groups, if not selected already.

  5. Select the name of the model group containing the model version to view.

  6. In the list of model versions, select the model version to view.

  7. Choose one of the following tabs.

    • Training: To view or edit details related to your training job, including performance metrics, artifacts, IAM role and encryption, and containers. For more information, see Training job information (Studio).

    • Evaluate: To view or edit details related to your evaluation job, such as performance metrics, evaluation datasets, and security. For more information, see Evaluation job information (Studio).

    • Audit: To view or edit high-level details related to the model’s business purpose, usage, risk, and technical details such as algorithm and performance limitations. For more information, see Audit (governance) information (Studio).

    • Deploy: To view or edit the location of your inference image container and the instances that compose the endpoint. For more information, see Deployment information (Studio).

Studio Classic
  1. Sign in to Amazon SageMaker Studio Classic. For more information, see Launch Amazon SageMaker Studio Classic.

  2. In the left navigation pane, choose the Home icon.

  3. Choose Models, and then Model registry.

  4. From the model groups list, select the name of the Model Group you want to view.

  5. A new tab appears with a list of the model versions in the Model Group.

  6. In the list of model versions, select the name of the model version for which you want to view details.

  7. On the model version tab that opens, choose one of the following to see details about the model version:

    • Activity: Shows events for the model version, such as approval status updates.

    • Model quality: Reports metrics related to your Model Monitor model quality checks, which compare model predictions to Ground Truth. For more information about Model Monitor model quality checks, see Monitor model quality.

    • Explainability: Reports metrics related to your Model Monitor feature attribution checks, which compare the relative rankings of your features in training data versus live data. For more information about Model Monitor explainability checks, see Monitor Feature Attribution Drift for Models in Production.

    • Bias: Reports metrics related to your Model Monitor bias drift checks, which compare the distribution of live data to training data. For more information about Model Monitor bias drift checks, see Monitor Bias Drift for Models in Production.

    • Inference recommender: Provides initial instance recommendations for optimal performance based on your model and sample payloads.

    • Load test: Runs load tests across your choice of instance types when you provide your specific production requirements, such as latency and throughput constraints.

    • Inference specification: Displays instance types for your real-time inference and transform jobs, and information about your Amazon ECR containers.

    • Information: Shows information such as the project with which the model version is associated, the pipeline that generated the model, the Model Group, and the model's location in Amazon S3.

Training job information (Studio)

Important

As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the updated Studio experience. For information about using the Studio Classic application, see Amazon SageMaker Studio Classic.

You can add one training job, created externally or with SageMaker, to your model. If you add a SageMaker training job, SageMaker prepopulates the fields for all of the subpages in the Train tab. If you add an externally created training job, you need to add details related to your training job manually. To add, remove, view, or update information about the training job you added, follow the steps in this section.

To add a training job to your model package, complete the following steps.
  1. Choose the Train tab.

  2. Choose Add. If you do not see this option, a training job may already be attached to the model package. To replace it, first remove the existing training job by following the removal steps later in this section.

  3. You can add a training job you created in SageMaker or a training job you created externally.

    1. To add a training job you created in SageMaker, complete the following steps.

      1. Choose SageMaker.

      2. Select the radio button next to the training job you want to add.

      3. Choose Add.

    2. To add a training job you created externally, complete the following steps.

      1. Choose Custom.

      2. In the Name field, insert the name of your custom training job.

      3. Choose Add.

To remove a training job from your model package, complete the following steps.
  1. Choose Train.

  2. Choose the Gear icon under the Train tab.

  3. Choose Remove next to your training job.

  4. Choose Yes, I want to remove <name of your training job>.

  5. Choose Done.

To update (and view) details related to the training job:
  1. On the Train tab, view the status of the training job. The status is Complete if you added a training job to your model package and Undefined if not.

  2. To view details related to your training job such as performance, hyperparameters, and identifying details, choose the Train tab.

  3. To update and view details related to model performance, complete the following steps.

    1. Choose Performance in the left sidebar of the Train tab.

    2. View Metrics related to your training job. The Performance page lists metrics by name, value, and any notes you added related to the metric.

    3. (Optional) To add notes to existing metrics, complete the following steps.

      1. Choose the vertical ellipsis in the top right corner of the model version page, and choose Edit.

      2. Add notes to any of the listed metrics.

      3. At the top of the model version page, choose Save in the Editing Model Version... banner.

    4. View Custom Metrics related to your training job. Custom metrics are formatted similarly to metrics.

    5. (Optional) To add custom metrics, complete the following steps.

      1. Choose Add.

      2. Insert a name, value, and any optional notes for your new metric.

    6. (Optional) To remove custom metrics, choose the Trash icon next to the metric you want to remove.

    7. In the Observations text box, view any notes you added related to the performance of your training job.

    8. (Optional) To add or update observations, complete the following steps.

      1. Choose the vertical ellipsis in the top right corner of the model version page, and choose Edit.

      2. Add or update your notes in the Observations text box.

      3. At the top of the model version page, choose Save in the Editing Model Version... banner.

  4. To update and view details related to model artifacts, complete the following steps.

    1. Choose Artifacts in the left sidebar of the Train tab.

    2. In the Location (S3 URI) field, view the Amazon S3 location of your training datasets.

    3. In the Models field, view the name and Amazon S3 locations of model artifacts from other models that you included in the training job.

    4. To update any of the fields in the Artifacts page, complete the following steps.

      1. Choose the vertical ellipsis in the top right of the model version page, and choose Edit.

      2. Enter new values in any of the fields.

      3. At the top of the model version page, choose Save in the Editing Model Version... banner.

  5. To update and view details related to hyperparameters, complete the following steps.

    1. Choose Hyperparameters in the left sidebar of the Train tab.

    2. View the SageMaker provided and custom hyperparameters defined. Each hyperparameter is listed with its name and value.

    3. View the custom hyperparameters you added.

    4. (Optional) To add an additional custom hyperparameter, complete the following steps.

      1. In the top right corner of the Custom Hyperparameters table, choose Add. A pair of blank fields appears.

      2. Enter the name and value of the new custom hyperparameter. These values are automatically saved.

    5. (Optional) To remove a custom hyperparameter, choose the Trash icon to the right of the hyperparameter.

  6. To update and view details related to the training job environment, complete the following steps.

    1. Choose Environment in the left sidebar of the Train tab.

    2. View the Amazon ECR URI locations for any training job containers added by SageMaker (for a SageMaker training job) or by you (for a custom training job).

    3. (Optional) To add an additional training job container, choose Add, and then enter the URI of the new training container.

  7. To update and view the training job name and the Amazon Resource Names (ARN) for the training job, complete the following steps.

    1. Choose Details in the left sidebar of the Train tab.

    2. View the training job name and ARN of the training job.

Evaluation job information (Studio)

Important

As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the updated Studio experience. For information about using the Studio Classic application, see Amazon SageMaker Studio Classic.

After you register your model, you can test your model with one or more datasets to assess its performance. You can add one or more evaluation jobs from Amazon S3 or define your own evaluation job by manually entering all details. If you add a job from Amazon S3, SageMaker prepopulates the fields for all of the subpages in the Evaluate tab. If you define your own evaluation job, you need to add details related to your evaluation job manually.

To add your first evaluation job to your model package, complete the following steps.
  1. Choose the Evaluate tab.

  2. Choose Add.

  3. You can add an evaluation job from Amazon S3 or a custom evaluation job.

    1. To add an evaluation job with output artifacts in Amazon S3, complete the following steps.

      1. Choose S3.

      2. Enter a name for the evaluation job.

      3. Enter the Amazon S3 location of the output artifacts of your evaluation job.

      4. Choose Add.

    2. To add a custom evaluation job, complete the following steps.

      1. Choose Custom.

      2. Enter a name for the evaluation job.

      3. Choose Add.

To add an additional evaluation job to your model package, complete the following steps.
  1. Choose the Evaluate tab.

  2. Choose the Gear icon under the Evaluate tab.

  3. In the dialog box, choose Add.

  4. You can add an evaluation job from Amazon S3 or a custom evaluation job.

    1. To add an evaluation job with output artifacts in Amazon S3, complete the following steps.

      1. Choose S3.

      2. Enter a name for the evaluation job.

      3. Enter the Amazon S3 location of the output artifacts of your evaluation job.

      4. Choose Add.

    2. To add a custom evaluation job, complete the following steps.

      1. Choose Custom.

      2. Enter a name for the evaluation job.

      3. Choose Add.

To remove an evaluation job from your model package, complete the following steps.
  1. Choose the Evaluate tab.

  2. Choose the Gear icon under the Evaluate tab.

  3. (Optional) To find your evaluation job from the list, enter a search term in the search box to narrow the list of choices.

  4. Choose the radio button next to your evaluation job.

  5. Choose Remove.

  6. Choose Yes, I want to remove <name of your evaluation job>.

  7. Choose Done.

To update (and view) details related to the evaluation job:
  1. On the Evaluate tab, view the status of the evaluation job. The status is Complete if you added an evaluation job to your model package and Undefined if not.

  2. To view details related to your evaluation job, such as performance and artifacts location, choose the Evaluate tab.

  3. To update and view details related to model performance during evaluation, complete the following steps.

    1. Choose Performance in the Evaluate tab sidebar.

    2. View metrics related to your evaluation job in the Metrics list. The Metrics list displays the individual metrics by name, value, and any notes you added related to the metric.

    3. In the Observations text box, view any notes you added related to the performance of your evaluation job.

    4. To update any of the Notes fields for any metric or the Observations field, complete the following steps.

      1. Choose the vertical ellipsis in the top right of the model version page, and choose Edit.

      2. Enter notes for any metric or in the Observations text box.

      3. At the top of the model version page, choose Save in the Editing Model Version... banner.

  4. To update and view details related to your evaluation job datasets, complete the following steps.

    1. Choose Artifacts in the left sidebar of the Evaluate page.

    2. View datasets used in your evaluation job.

    3. (Optional) To add a dataset, choose Add and enter an Amazon S3 URI to the dataset.

    4. (Optional) To remove a dataset, choose the Trash icon next to the dataset you want to remove.

  5. To view the job name and evaluation job ARN, choose Details.

Audit (governance) information (Studio)

Important

As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the updated Studio experience. For information about using the Studio Classic application, see Amazon SageMaker Studio Classic.

Document important model details to help your organization establish a robust framework of model governance. You and your team members can reference these details so they use the model for the appropriate use cases, know the business domain and owners of the model, and understand model risks. You can also save details about how the model is expected to perform and reasons for performance limitations.

To view or update details related to the model governance, complete the following steps.
  1. On the Audit tab, view the approval status of the model card. The status can be one of the following:

    • Draft: The model card is still a draft.

    • Pending approval: The model card is waiting to be approved.

    • Approved: The model card is approved.

  2. To update the approval status of the model card, choose the dropdown menu next to the approval status, and then choose the new approval status.

  3. To update and view details related to your model package risk, complete the following steps.

    1. Choose Risk in the left sidebar of the Audit tab.

    2. View the current risk rating and explanation for the risk rating.

    3. To update the rating or explanation, complete the following steps.

      1. Choose the vertical ellipsis at the top right corner of the Audit page, and choose Edit.

      2. (Optional) Choose an updated risk rating.

      3. (Optional) Update the risk rating explanation.

      4. At the top of the model version page, choose Save in the Editing Model Version... banner.

  4. To update and view details related to the usage of your model package, complete the following steps.

    1. Choose Usage in the left sidebar of the Audit tab.

    2. View text you added in the following fields:

      • Problem type: The category of machine learning algorithm used to build your model.

      • Algorithm type: The specific algorithm used to create your model.

      • Intended uses: The current application of the model in your business problem.

      • Factors affecting model efficacy: Notes about your model’s performance limitations.

      • Recommended use: The types of applications you can create with the model, the scenarios in which you can expect reasonable performance, or the type of data to use with the model.

      • Ethical considerations: A description of how your model might discriminate based on factors such as age or gender.

    3. To update any of the previously listed fields, complete the following steps.

      1. Choose the vertical ellipsis at the top right corner of the model version page, and choose Edit.

      2. (Optional) Use the dropdown menus for Problem type and Algorithm type to select new values, if needed.

      3. (Optional) Update the text descriptions in the remaining fields.

      4. At the top of the model version page, choose Save in the Editing Model Version... banner.

  5. To update and view details related to the stakeholders of your model package, complete the following steps.

    1. Choose Stakeholders in the left sidebar of the Audit tab.

    2. View the current model owner and creator, if any.

    3. To update the model owner or creator, complete the following steps:

      1. Choose the vertical ellipsis at the top right corner of the model version page, and choose Edit.

      2. Update the model owner or model creator fields.

      3. At the top of the model version page, choose Save in the Editing Model Version... banner.

  6. To update and view details related to the business problem that your model package addresses, complete the following steps.

    1. Choose Business in the left sidebar of the Audit tab.

    2. View the current descriptions, if any, for the business problem that the model addresses, the business problem stakeholders, and the line of business.

    3. To update any of the fields in the Business tab, complete the following steps.

      1. Choose the vertical ellipsis at the top right corner of the model version page, and choose Edit.

      2. Update the descriptions in any of the fields.

      3. At the top of the model version page, choose Save in the Editing Model Version... banner.

  7. To update and view existing documentation (represented as key-value pairs) for your model, complete the following steps.

    1. Choose Documentation in the left sidebar of the Audit page.

    2. View existing key-value pairs.

    3. To add any key-value pairs, complete the following steps.

      1. Choose the vertical ellipsis at the top right corner of the model version page, and choose Edit.

      2. Choose Add.

      3. Enter a new key and associated value.

      4. At the top of the model version page, choose Save in the Editing Model Version... banner.

    4. To remove any key-value pairs, complete the following steps.

      1. Choose the vertical ellipsis at the top right corner of the model version page, and choose Edit.

      2. Choose the Trash icon next to the key-value pair to remove.

      3. At the top of the model version page, choose Save in the Editing Model Version... banner.

Deployment information (Studio)

Important

As of November 30, 2023, the previous Amazon SageMaker Studio experience is now named Amazon SageMaker Studio Classic. The following section is specific to using the updated Studio experience. For information about using the Studio Classic application, see Amazon SageMaker Studio Classic.

After you evaluate your model performance and determine that it is ready to use for production workloads, you can change the approval status of the model to initiate CI/CD deployment. For more about approval status definitions, see Update the Approval Status of a Model.

To view or update details related to the model package deployment, complete the following steps.
  1. On the Deploy tab, view the model package approval status. The status can be one of the following:

    • Pending Approval: The model is registered but not yet approved or rejected for deployment.

    • Approved: The model is approved for CI/CD deployment. If there is an EventBridge rule in place that initiates model deployment upon a model approval event, as is the case for a model built from a SageMaker project template, SageMaker also deploys the model.

    • Rejected: The model is rejected for deployment.


  2. To update the model package approval status, choose the dropdown next to the approval status and choose the updated approval status.

  3. In the Containers list, view the inference image containers.

  4. In the Instances list, view the instances that compose your deployment endpoint.
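The EventBridge-initiated deployment mentioned above relies on a rule that matches model package state changes. The following sketch shows the general shape of such an event pattern as a Python dict. The detail-type string is the SageMaker model package state change event, while the group name is a placeholder carried over from the earlier examples; treat this as a starting point rather than a complete rule definition, since the rule's deployment target (for example, a Lambda function or pipeline) is configured separately.

```python
import json

# Event pattern for an EventBridge rule that fires when a model version in
# "ModelGroup1" (a placeholder name) is approved.
event_pattern = {
    "source": ["aws.sagemaker"],
    "detail-type": ["SageMaker Model Package State Change"],
    "detail": {
        "ModelPackageGroupName": ["ModelGroup1"],
        "ModelApprovalStatus": ["Approved"],
    },
}

# EventBridge accepts the pattern as a JSON string.
print(json.dumps(event_pattern, indent=2))
```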