
Use Amazon SageMaker Debugger to debug and improve model performance

Debug model output tensors from machine learning training jobs in real time and detect non-convergence issues using Amazon SageMaker Debugger.

Amazon SageMaker Debugger Features

A machine learning (ML) training job can have problems such as overfitting, saturated activation functions, and vanishing gradients, which can compromise model performance.

SageMaker Debugger provides tools to debug training jobs and resolve such problems to improve the performance of your model. Debugger also offers tools to send alerts when training anomalies are detected, act on the problems, and identify their root causes by visualizing collected metrics and tensors.

SageMaker Debugger supports the Apache MXNet, PyTorch, TensorFlow, and XGBoost frameworks. For more information about available frameworks and versions supported by SageMaker Debugger, see Supported Frameworks and Algorithms.
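As a concrete starting point, the following is a minimal sketch of attaching Debugger's built-in rules and a tensor-collection hook to a training job with the SageMaker Python SDK. The role ARN, entry-point script, instance type, and framework versions shown here are placeholder assumptions; substitute your own values.

```python
# Sketch: configuring a SageMaker PyTorch training job with Debugger.
# The role ARN, script name, instance type, and versions are placeholders.
from sagemaker.debugger import (
    Rule,
    rule_configs,
    DebuggerHookConfig,
    CollectionConfig,
)
from sagemaker.pytorch import PyTorch

# Built-in rules that Debugger evaluates in parallel with training.
rules = [
    Rule.sagemaker(rule_configs.vanishing_gradient()),  # gradients shrinking toward zero
    Rule.sagemaker(rule_configs.overfit()),             # validation loss diverging from training loss
]

estimator = PyTorch(
    entry_point="train.py",  # your training script
    role="arn:aws:iam::111122223333:role/SageMakerRole",  # placeholder role ARN
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    framework_version="1.13",
    py_version="py39",
    rules=rules,
    debugger_hook_config=DebuggerHookConfig(
        collection_configs=[
            # Save the "gradients" collection every 100 steps for rule analysis.
            CollectionConfig(name="gradients", parameters={"save_interval": "100"}),
        ]
    ),
)

# estimator.fit() would start the training job; Debugger monitors it
# and flags the job when a rule triggers.
```

With zero-script-change frameworks, Debugger registers the hook automatically; otherwise step 1 of the workflow below applies and you register the hook in your training script with the sagemaker-debugger (smdebug) library.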

Figure: Overview of how Amazon SageMaker Debugger works.

The high-level Debugger workflow is as follows:

  1. Modify your training script with the sagemaker-debugger Python SDK if needed.

  2. Configure a SageMaker training job with SageMaker Debugger.

  3. Start a training job and monitor training issues in real time.

  4. Receive alerts and act promptly on the training issues.

  5. Analyze the training issues in depth.

  6. Fix the issues using the suggestions provided by Debugger, then repeat steps 1–5 until your model achieves the target accuracy.
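To make step 3 concrete, here is a small, framework-independent sketch of the kind of check that Debugger's built-in rules run against collected tensors. This is an illustration of the idea only, not Debugger's actual rule implementation; the function name and threshold are assumptions chosen for the example.

```python
# Illustration only: a simplified vanishing-gradient check resembling
# what a Debugger built-in rule evaluates on collected gradient tensors.
def vanishing_gradient_alert(grad_norms, threshold=1e-7):
    """Return True if the mean absolute gradient norm falls below threshold."""
    if not grad_norms:
        return False  # nothing collected yet; no alert
    mean_norm = sum(abs(g) for g in grad_norms) / len(grad_norms)
    return mean_norm < threshold

# Healthy gradients: no alert.
print(vanishing_gradient_alert([0.12, 0.08, 0.05]))   # False
# Gradients collapsing toward zero: alert fires.
print(vanishing_gradient_alert([1e-9, 5e-10, 2e-9]))  # True
```

In the real workflow, a triggered rule changes the rule-evaluation status of the training job, which you can route to alerts or automatic actions such as stopping the job.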

The SageMaker Debugger developer guide walks you through the following topics.