What are Amazon Deep Learning Containers?
Amazon Deep Learning Containers are pre-built Docker images that make it easier to run popular deep learning frameworks and tools on Amazon. They provide a consistent, up-to-date, secure, and optimized runtime environment for your deep learning applications hosted on Amazon infrastructure. To get started, see Getting Started with Amazon Deep Learning Containers.
Key Features
Pre-Installed Deep Learning Frameworks
Amazon Deep Learning Containers include pre-installed and configured versions of leading deep learning frameworks such as TensorFlow and PyTorch. This eliminates the need to build and maintain your own Docker images from scratch.
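As a quick illustration, you might run a small sanity check inside a PyTorch-based container to confirm which framework builds the image ships with; the packages shown are typical for PyTorch images, and the versions reported depend on the image tag you pull:

    # Minimal sanity check inside a PyTorch Deep Learning Container.
    # The versions printed depend on the specific image tag.
    import torch
    import torchvision

    print("PyTorch version:", torch.__version__)
    print("torchvision version:", torchvision.__version__)
    print("CUDA build:", torch.version.cuda)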
Hardware Acceleration
Amazon Deep Learning Containers are optimized for CPU-based, GPU-accelerated, and Amazon silicon-based deep learning. They include CUDA, cuDNN, and the other libraries needed to take full advantage of GPU-based Amazon EC2 instances, and they also support Amazon-designed chips such as Graviton CPUs and Amazon Trainium accelerators, as well as Intel Habana Gaudi processors.
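For example, inside a GPU-enabled container you could verify that the bundled CUDA and cuDNN stack is visible to the framework; this sketch assumes a PyTorch GPU image running on a GPU-backed EC2 instance:

    # Check that the container can see the GPU(s) through the bundled CUDA/cuDNN stack.
    # Assumes a PyTorch GPU image on a GPU-backed instance.
    import torch

    if torch.cuda.is_available():
        print("GPUs visible:", torch.cuda.device_count())
        print("Device 0:", torch.cuda.get_device_name(0))
        print("cuDNN available:", torch.backends.cudnn.is_available())
    else:
        print("No GPU visible; falling back to CPU.")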
Amazon Service Integration
Amazon Deep Learning Containers seamlessly integrate with a variety of Amazon services, including SageMaker AI, Amazon Elastic Container Service (ECS), Amazon Elastic Kubernetes Service (EKS), Amazon EC2, and Amazon ParallelCluster. This makes it easy to deploy and run your deep learning models and applications on Amazon infrastructure.
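As one example of this integration, the SageMaker Python SDK can look up the container image URI for a given framework and Region; the framework version, Python version, and instance type below are illustrative placeholders rather than recommended values:

    # Look up a Deep Learning Container image URI with the SageMaker Python SDK.
    # The framework version, Python version, and instance type are example values.
    from sagemaker import image_uris

    image_uri = image_uris.retrieve(
        framework="pytorch",
        region="us-east-1",
        version="2.1",
        py_version="py310",
        image_scope="training",
        instance_type="ml.g5.xlarge",
    )
    print(image_uri)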
Secure and Regularly Updated
Amazon regularly maintains and updates Amazon Deep Learning Containers so that you have access to the latest versions of deep learning frameworks and their dependencies. Keeping container images current with security patches can be a resource-intensive task; Amazon Deep Learning Containers remove that burden by providing regular updates, so your Amazon-based deep learning environment stays secure and up to date without significant manual effort. This lets your team focus on developing deep learning models and applications rather than on infrastructure and security upkeep, and makes it easier to adopt the latest deep learning capabilities in your Amazon-hosted projects.
Use Cases
Amazon Deep Learning Containers are particularly useful in the following Amazon-based deep learning scenarios:
Model Training
Use Amazon Deep Learning Containers to train your deep learning models on CPU-based, GPU-accelerated, or Amazon silicon-powered Amazon EC2 instances, or leverage multi-node training on Amazon ParallelCluster or SageMaker HyperPod.
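A minimal training sketch with the SageMaker Python SDK might look like the following; the entry point script, role ARN, framework version, and instance settings are placeholders you would replace with your own:

    # Launch a training job that runs on a PyTorch Deep Learning Container.
    # train.py, the role ARN, and the instance settings are placeholders.
    from sagemaker.pytorch import PyTorch

    estimator = PyTorch(
        entry_point="train.py",          # your training script
        role="arn:aws:iam::123456789012:role/SageMakerRole",
        framework_version="2.1",         # example framework version
        py_version="py310",
        instance_count=1,
        instance_type="ml.g5.xlarge",
    )
    estimator.fit({"training": "s3://my-bucket/training-data/"})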
Model Deployment
Deploy your trained models using the Amazon Deep Learning Containers for scalable, production-ready inference on Amazon, such as through SageMaker AI.
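As a sketch, deploying a trained model artifact onto an inference Deep Learning Container through the SageMaker Python SDK could look like this; the S3 model location, role ARN, handler script, and endpoint settings are placeholders:

    # Deploy a trained model on a PyTorch inference Deep Learning Container.
    # The model artifact location, role ARN, and endpoint settings are placeholders.
    from sagemaker.pytorch import PyTorchModel

    model = PyTorchModel(
        model_data="s3://my-bucket/model/model.tar.gz",
        role="arn:aws:iam::123456789012:role/SageMakerRole",
        entry_point="inference.py",      # your inference handler script
        framework_version="2.1",         # example framework version
        py_version="py310",
    )
    predictor = model.deploy(
        initial_instance_count=1,
        instance_type="ml.m5.xlarge",
    )
    print(predictor.endpoint_name)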
Experimentation and Prototyping
Quickly spin up deep learning development environments on Amazon using the pre-configured containers. Amazon Deep Learning Containers are the default option for notebooks in SageMaker AI Studio, making it easy to get started with experimentation and prototyping.
Continuous Integration and Delivery
Integrate the containers into your Amazon-based CI/CD pipelines, such as those using Amazon ECS or Amazon EKS, for consistent, automated deep learning workloads.
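For example, a CI/CD step could register an Amazon ECS task definition that points at a Deep Learning Container image; the image URI, task family, and memory value below are illustrative placeholders:

    # Register an ECS task definition that runs a Deep Learning Container image.
    # The image URI, task family, and memory value are example placeholders.
    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    response = ecs.register_task_definition(
        family="dl-inference-task",
        requiresCompatibilities=["EC2"],
        containerDefinitions=[
            {
                "name": "inference",
                "image": "763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-inference:example-tag",
                "memory": 8192,
                "essential": True,
            }
        ],
    )
    print(response["taskDefinition"]["taskDefinitionArn"])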