SageMaker Experiments - Amazon SageMaker
Services or capabilities described in Amazon Web Services documentation might vary by Region. To see the differences applicable to the China Regions, see Getting Started with Amazon Web Services in China (PDF).

SageMaker Experiments

Building an ML model requires many training iterations as you tune the algorithm, model architecture, and parameters to achieve high prediction accuracy. With Amazon SageMaker Experiments, you can track the inputs and outputs across these training iterations to improve the repeatability of trials and collaboration within your team. You can also track parameters, metrics, datasets, and other artifacts related to your model training jobs. SageMaker Experiments offers a single interface where you can visualize in-progress training jobs, share experiments within your team, and deploy models directly from an experiment.

To learn about SageMaker Experiments, see Manage Amazon SageMaker Experiments in Studio Classic.