
SageMaker integrations

Amazon SageMaker Experiments is integrated with a number of SageMaker features. Certain SageMaker jobs automatically create experiments. You can view and manage SageMaker Clarify bias reports or SageMaker Debugger output tensors for specific experiment runs directly in the Studio Classic Experiments UI.

Automatic experiment creation

Amazon SageMaker automatically creates experiments when running Autopilot jobs, hyperparameter optimization (HPO) jobs, or Pipeline executions. You can view these experiments in the Studio Classic Experiments UI.

Autopilot

Amazon SageMaker Experiments is integrated with Amazon SageMaker Autopilot. When you run an Autopilot job, SageMaker Experiments creates an experiment for that job, as well as a run for each combination of the available run components, parameters, and artifacts. You can find these runs in the SageMaker Experiments UI by filtering for the run type Autopilot. For more information, see Automate model development with Amazon SageMaker Autopilot.
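
For reference, the following is a minimal sketch of launching an Autopilot job with the SageMaker Python SDK; the role ARN, bucket, and target column are placeholders to replace with your own values.

```python
import sagemaker
from sagemaker.automl.automl import AutoML

# Placeholder role ARN, bucket, and target column; substitute your own.
automl = AutoML(
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
    target_attribute_name="label",
    output_path="s3://amzn-s3-demo-bucket/autopilot-output/",
    max_candidates=5,
    sagemaker_session=sagemaker.Session(),
)

# Launching the job is enough: SageMaker Experiments creates an experiment
# for the job, and runs for the candidates that Autopilot evaluates.
automl.fit(
    inputs="s3://amzn-s3-demo-bucket/autopilot-input/train.csv",
    wait=False,
    logs=False,
)
```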

HPO

Amazon SageMaker Experiments is integrated with HPO jobs. An HPO job automatically creates an Amazon SageMaker experiment, along with a run and run components for each training job that it completes. You can find these runs in the SageMaker Experiments UI by filtering for the run type HPO. For more information, see Tune Multiple Algorithms with Hyperparameter Optimization to Find the Best Model.
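
As an illustration, here is a minimal sketch of an HPO job built with the SageMaker Python SDK's HyperparameterTuner; the image URI, role ARN, bucket, objective metric, and hyperparameter range are all placeholders.

```python
from sagemaker.estimator import Estimator
from sagemaker.tuner import ContinuousParameter, HyperparameterTuner

# Placeholder image URI, role ARN, and bucket; substitute your own.
estimator = Estimator(
    image_uri="111122223333.dkr.ecr.us-east-1.amazonaws.com/my-training-image:latest",
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://amzn-s3-demo-bucket/hpo-output/",
)

tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:accuracy",  # placeholder metric name
    hyperparameter_ranges={"learning_rate": ContinuousParameter(0.001, 0.1)},
    metric_definitions=[
        {"Name": "validation:accuracy", "Regex": "accuracy=([0-9\\.]+)"}
    ],
    max_jobs=10,
    max_parallel_jobs=2,
)

# Each completed training job becomes a run (with its components) in the
# experiment that the HPO job creates automatically.
tuner.fit({"train": "s3://amzn-s3-demo-bucket/train/"})
```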

Pipelines

Amazon SageMaker Model Building Pipelines is closely integrated with Amazon SageMaker Experiments. By default, when SageMaker Pipelines creates and runs a pipeline, it creates an experiment, runs, and components if they do not already exist. You can find these runs in the SageMaker Experiments UI by filtering for the run type Pipelines. For more information, see Amazon SageMaker Experiments Integration.
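
The following is a minimal sketch of how that default is expressed with the SageMaker Python SDK; the pipeline name is a placeholder and the steps list is left empty where your step definitions would go.

```python
from sagemaker.workflow.execution_variables import ExecutionVariables
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.pipeline_experiment_config import PipelineExperimentConfig

pipeline = Pipeline(
    name="my-pipeline",  # placeholder pipeline name
    steps=[],            # your step definitions go here
    # This matches the default behavior: the experiment is named after the
    # pipeline and the run group after the execution ID. Omitting the
    # argument gives the same result; passing None disables the integration.
    pipeline_experiment_config=PipelineExperimentConfig(
        ExecutionVariables.PIPELINE_NAME,
        ExecutionVariables.PIPELINE_EXECUTION_ID,
    ),
)
```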

Bias and explainability reports

You can manage SageMaker Clarify bias and explainability reports for experiment runs directly in Studio Classic. To view reports, find and choose the name of the experiment run that you want in Studio Classic. Choose Bias reports to see any Clarify bias reports associated with that run.


A SageMaker Clarify bias report for an experiment run in the SageMaker Experiments UI

Choose Explanations to see any Clarify explainability reports associated with the experiment run.


A SageMaker Clarify explainability report for an experiment run in the SageMaker Experiments UI

With SageMaker Clarify, you can generate pre-training bias reports that analyze bias in datasets and post-training bias reports that analyze bias in model predictions, using labels and bias metrics. You can also use SageMaker Clarify to generate explainability reports that document model behavior for global or local data samples. For more information, see Amazon SageMaker Clarify Bias Detection and Model Explainability.
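
For example, the following is a minimal sketch of generating a pre-training bias report with the SageMaker Python SDK; the role ARN, bucket, column names, facet, and experiment names are placeholders.

```python
from sagemaker.clarify import BiasConfig, DataConfig, SageMakerClarifyProcessor

# Placeholder role ARN; substitute your own.
processor = SageMakerClarifyProcessor(
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

# Placeholder bucket and column names.
data_config = DataConfig(
    s3_data_input_path="s3://amzn-s3-demo-bucket/data/train.csv",
    s3_output_path="s3://amzn-s3-demo-bucket/clarify-output/",
    label="label",
    headers=["age", "income", "label"],
    dataset_type="text/csv",
)

bias_config = BiasConfig(
    label_values_or_threshold=[1],  # value(s) that count as a positive outcome
    facet_name="age",               # placeholder sensitive attribute
)

# Analyzes bias in the dataset itself (pre-training). The experiment_config
# associates the processing job with an experiment run, so the report
# appears under Bias reports for that run in Studio Classic.
processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    experiment_config={
        "ExperimentName": "my-experiment",  # placeholder
        "TrialName": "my-run",              # placeholder
    },
)
```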

Debugging

You can debug model training progress with Amazon SageMaker Debugger and view debug output tensors in the Studio Classic Experiments UI. To view them, choose the name of the run associated with the Debugger report, and then choose Debugger.


Debugger overview page for an experiment run in the SageMaker Experiments UI

Then, choose the training job name to view the associated Amazon SageMaker Debugger dashboard.


Debugger insights dashboard example in the SageMaker Experiments UI

For more information, see Debug Training Jobs Using Amazon SageMaker Debugger.
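
As a reference point, here is a minimal sketch of enabling Debugger on a training job with the SageMaker Python SDK so that output tensors are saved to S3; the image URI, role ARN, and bucket are placeholders.

```python
from sagemaker.debugger import DebuggerHookConfig, Rule, rule_configs
from sagemaker.estimator import Estimator

# Placeholder image URI, role ARN, and bucket; substitute your own.
estimator = Estimator(
    image_uri="111122223333.dkr.ecr.us-east-1.amazonaws.com/my-training-image:latest",
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    # Save output tensors to S3 so they can be inspected from the run's
    # Debugger view in the Studio Classic Experiments UI.
    debugger_hook_config=DebuggerHookConfig(
        s3_output_path="s3://amzn-s3-demo-bucket/debugger-tensors/",
    ),
    # Built-in rule that fires when the training loss stops decreasing.
    rules=[Rule.sagemaker(rule_configs.loss_not_decreasing())],
)

estimator.fit("s3://amzn-s3-demo-bucket/train/")
```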