Services or capabilities described in Amazon Web Services documentation might vary by Region. To see the differences applicable to the China Regions, see Getting Started with Amazon Web Services in China (PDF).

Manage Amazon SageMaker Experiments in Studio Classic

Important

Experiment tracking using the SageMaker Experiments Python SDK is only available in Studio Classic. We recommend using the new Studio experience and creating experiments using the latest SageMaker integrations with MLflow. There is no MLflow UI integration with Studio Classic. If you want to use MLflow with Studio, you must launch the MLflow UI using the Amazon CLI. For more information, see Launch the MLflow UI using the Amazon CLI.

Amazon SageMaker Experiments Classic is a capability of Amazon SageMaker that lets you create, manage, analyze, and compare your machine learning experiments in Studio Classic.

Experiments Classic automatically tracks the inputs, parameters, configurations, and results of your iterations as runs. You can assign, group, and organize these runs into experiments. SageMaker Experiments is integrated with Amazon SageMaker Studio Classic, providing a visual interface to browse your active and past experiments, compare runs on key performance metrics, and identify the best performing models. SageMaker Experiments tracks all of the steps and artifacts that went into creating a model, so you can quickly revisit a model's origins when troubleshooting issues in production or auditing your models for compliance verification.

Use SageMaker Experiments to view, manage, analyze, and compare both custom experiments that you programmatically create and experiments automatically created from SageMaker jobs.
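For custom experiments, you create and log runs programmatically with the SageMaker Experiments Python SDK. The following is a minimal sketch, assuming the `sagemaker` package is installed and AWS credentials are configured; the experiment name, run name, parameter, and the `toy_accuracy` helper are illustrative placeholders, not part of the SDK:

```python
def toy_accuracy(predictions, labels):
    # Illustrative metric helper: fraction of correct predictions.
    correct = sum(int(p == y) for p, y in zip(predictions, labels))
    return correct / len(labels)

def track_custom_run(experiment_name="my-experiment", run_name="run-1"):
    # Imported here so the metric helper above is usable without the SDK.
    # Requires the sagemaker package and configured AWS credentials.
    from sagemaker.experiments.run import Run

    accuracy = toy_accuracy([1, 0, 1, 1], [1, 1, 1, 0])
    with Run(experiment_name=experiment_name, run_name=run_name) as run:
        run.log_parameter("learning_rate", 0.01)
        run.log_metric(name="accuracy", value=accuracy)
```

Runs logged this way appear under the named experiment in the Studio Classic experiments browser.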

Example notebooks for Experiments Classic

The following tutorials demonstrate how to track runs for various model training experiments. You can view the resulting experiments in Studio Classic after running the notebooks. For a tutorial that showcases additional features of Studio Classic, see Amazon SageMaker Studio Classic Tour.

Track experiments in a notebook environment

To learn more about tracking experiments in a notebook environment, see the following example notebooks:

Track bias and explainability for your experiments with SageMaker Clarify

For a step-by-step guide on tracking bias and explainability for your experiments, see the following example notebook:

Track experiments for SageMaker training jobs using script mode

For more information about tracking experiments for SageMaker training jobs, see the following example notebooks:
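Inside a training script that runs as a SageMaker training job, the SDK's `load_run` attaches to the run associated with the job through its experiment configuration. A minimal sketch, assuming the `sagemaker` package is available in the training container; the `epoch_losses` helper is an illustrative stand-in for a real training loop:

```python
def epoch_losses(num_epochs=3, lr=0.1):
    # Illustrative stand-in for a real training loop: a decaying toy loss.
    return [1.0 / (1.0 + lr * epoch) for epoch in range(1, num_epochs + 1)]

def main():
    # Inside a training script launched as a SageMaker training job,
    # load_run() attaches to the run passed via the job's experiment
    # configuration. Requires the sagemaker package and AWS credentials.
    from sagemaker.experiments import load_run

    with load_run() as run:
        for epoch, loss in enumerate(epoch_losses(), start=1):
            run.log_metric(name="loss", value=loss, step=epoch)
```

Metrics logged per step this way can be plotted against the step axis in the run's Metrics view.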

View experiments and runs

Amazon SageMaker Studio Classic provides an experiments browser that you can use to view lists of experiments and runs. You can choose one of these entities to view detailed information about the entity or choose multiple entities for comparison. You can filter the list of experiments by entity name, type, and tags.

To view experiments and runs
  1. To view your experiments in Studio Classic, in the left sidebar, choose Experiments.

    Select the name of the experiment to view all associated runs. You can search experiments by typing directly into the Search bar or filtering for experiment type. You can also choose which columns to display in your experiment or run list.

    It might take a moment for the list to refresh and display a new experiment or experiment run. Choose Refresh to update the page. Your experiment list should look similar to the following:

    A list of experiments in the SageMaker Experiments UI
  2. In the experiments list, double-click an experiment to display a list of the runs in the experiment.

    Note

    Experiment runs that are automatically created by SageMaker jobs and containers are visible in the Experiments Studio Classic UI by default. To hide runs created by SageMaker jobs for a given experiment, choose the settings icon and toggle Show jobs.

    A list of experiment runs in the SageMaker Experiments UI
  3. Double-click a run to display information about a specific run.

    In the Overview pane, choose any of the following headings to see available information about each run:

    • Metrics – Metrics that are logged during a run.

    • Charts – Build your own charts to compare runs.

    • Output artifacts – Any resulting artifacts of the experiment run and the artifact locations in Amazon S3.

    • Bias reports – Pre-training or post-training bias reports generated using Clarify.

    • Explainability – Explainability reports generated using Clarify.

    • Debugs – A list of debugger rules and any issues found.

Migrate from Experiments Classic to Amazon SageMaker with MLflow

Past experiments created using Experiments Classic are still available to view in Studio Classic. If you want to maintain and use past experiment code with MLflow, you must update your training code to use the MLflow SDK and run the training experiments again. For more information on getting started with the MLflow SDK and the Amazon MLflow plugin, see Track experiments with MLflow.
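As a rough sketch of what updated training code might look like with the MLflow SDK, assuming the `mlflow` package is installed; the tracking URI, experiment name, run name, and `toy_loss` helper are illustrative placeholders (with SageMaker, the tracking URI would point to your MLflow tracking server):

```python
def toy_loss(learning_rate):
    # Illustrative stand-in for a real training objective.
    return 1.0 / (1.0 + learning_rate)

def train_with_mlflow(tracking_uri, learning_rate=0.01):
    # Requires the mlflow package. With SageMaker, tracking_uri would be
    # the ARN of your MLflow tracking server.
    import mlflow

    mlflow.set_tracking_uri(tracking_uri)
    mlflow.set_experiment("migrated-experiment")
    with mlflow.start_run(run_name="run-1"):
        mlflow.log_param("learning_rate", learning_rate)
        loss = toy_loss(learning_rate)
        mlflow.log_metric("loss", loss)
    return loss
```

The structure mirrors Experiments Classic: `start_run` replaces the `Run` context manager, and `log_param`/`log_metric` replace `log_parameter`/`log_metric`, so migrating existing tracking code is largely a one-for-one substitution.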