Features of the DLAMI

Preinstalled Frameworks

There are currently two primary flavors of the DLAMI, along with additional variations for operating system (OS) and software versions:

The Deep Learning AMI with Conda uses Conda environments to isolate each framework, so you can switch between them at will without worrying about conflicting dependencies.
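
For example, a minimal sketch like the following (run from Python on the DLAMI, assuming conda is on the PATH, as it is by default) lists the preinstalled environments; you can then activate the one you want, for example with conda activate <environment_name>. The exact environment names vary by DLAMI release.

    import subprocess

    # List the Conda environments that ship with this DLAMI release.
    # Each framework lives in its own environment, so activating a different
    # environment switches frameworks without dependency conflicts.
    result = subprocess.run(
        ["conda", "env", "list"],
        capture_output=True,
        text=True,
        check=True,
    )
    print(result.stdout)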

This is the full list of frameworks supported by the Deep Learning AMI with Conda:

  • Apache MXNet (Incubating)

  • PyTorch

  • TensorFlow 2

Note

We no longer include the CNTK, Caffe, Caffe2, Theano, Chainer, or Keras Conda environments in the Amazon Deep Learning AMI starting with the v28 release. Previous releases of the Amazon Deep Learning AMI that contain these environments will continue to be available. However, we will only provide updates to these environments if there are security fixes published by the open source community for these frameworks.

Preinstalled GPU Software

Even if you use a CPU-only instance, the DLAMI includes NVIDIA CUDA and NVIDIA cuDNN. The installed software is the same regardless of instance type. Keep in mind that GPU-specific tools only work on an instance that has at least one GPU. For more information, see Selecting the Instance Type for DLAMI.
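
As a quick check of whether GPU acceleration is actually usable on your instance, the following minimal sketch (assuming the PyTorch Conda environment is active) queries CUDA and cuDNN through the framework; on a CPU-only instance the libraries are still installed, but the check reports that no GPU is available.

    import torch  # assumes the PyTorch Conda environment is active

    if torch.cuda.is_available():
        # GPU instance: CUDA and cuDNN are usable by the framework.
        print("GPU:", torch.cuda.get_device_name(0))
        print("cuDNN version:", torch.backends.cudnn.version())
    else:
        # CPU-only instance: the CUDA and cuDNN libraries are installed,
        # but GPU-specific tools have no device to run on.
        print("No GPU detected on this instance.")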

For more information on CUDA installation, see CUDA Installations and Framework Bindings.

Model Serving and Visualization

The Deep Learning AMI with Conda comes preinstalled with two kinds of model servers, one for MXNet and one for TensorFlow, as well as TensorBoard for model visualization.
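
As a small illustration of the visualization side, the following sketch (assuming the TensorFlow 2 Conda environment is active; the log directory is arbitrary) writes a few scalar summaries that the preinstalled TensorBoard can then display.

    import tensorflow as tf  # assumes the TensorFlow 2 Conda environment is active

    # Write a few example scalar summaries to an arbitrary log directory.
    logdir = "/tmp/tensorboard-demo"
    writer = tf.summary.create_file_writer(logdir)
    with writer.as_default():
        for step in range(10):
            tf.summary.scalar("example_loss", 1.0 / (step + 1), step=step)
    writer.flush()

You can then launch TensorBoard against that directory (for example, tensorboard --logdir /tmp/tensorboard-demo) to view the run in a browser.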