Try tutorials for deploying Machine Learning workloads and platforms on EKS
If you are interested in setting up Machine Learning platforms and frameworks on EKS, explore the tutorials on this page. They cover topics ranging from patterns for making the best use of GPUs, to choosing modeling tools, to building frameworks for specialized industries.
Build generative AI platforms on EKS
Run specialized generative AI frameworks on EKS
Maximize NVIDIA GPU performance for ML on EKS
- Implement GPU sharing to efficiently use NVIDIA GPUs for your EKS clusters: GPU sharing on Amazon EKS with NVIDIA time-slicing and accelerated EC2 instances
- Use Multi-Instance GPUs (MIGs) and NIM microservices to run more pods per GPU on your EKS clusters:
- Leverage NVIDIA NIM microservices to deploy AI models at scale and optimize inference workloads: Part 1: Deploying generative AI applications with NVIDIA NIMs on Amazon EKS
- Scaling a Large Language Model with NVIDIA NIM on Amazon EKS with Karpenter
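To make the time-slicing approach from the GPU-sharing tutorial above concrete, here is a minimal sketch of a configuration for the NVIDIA Kubernetes device plugin, assuming the plugin is already installed in your cluster. The ConfigMap name `time-slicing-config` is a placeholder, and the `replicas` value of 4 is an arbitrary example:

```yaml
# Hypothetical ConfigMap consumed by the NVIDIA device plugin to enable time-slicing.
apiVersion: v1
kind: ConfigMap
metadata:
  name: time-slicing-config   # placeholder name
  namespace: kube-system
data:
  config.yaml: |
    version: v1
    sharing:
      timeSlicing:
        resources:
          - name: nvidia.com/gpu
            replicas: 4   # each physical GPU is advertised as 4 shareable GPUs
```

With a configuration like this, a node with one physical GPU advertises `nvidia.com/gpu: 4`, so up to four pods that each request one `nvidia.com/gpu` can be scheduled onto it. Note that time-slicing shares compute by interleaving workloads and provides no memory isolation between the pods sharing a GPU.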
- Build and deploy a scalable machine learning system on Kubernetes with Kubeflow on Amazon
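As a companion to the MIG tutorial above, a pod can target a MIG slice rather than a whole GPU by requesting the corresponding extended resource in its pod spec. The profile name below (`nvidia.com/mig-1g.5gb`, a one-compute-slice, 5 GB partition on A100-class GPUs), the pod name, and the container image are illustrative assumptions:

```yaml
# Hypothetical pod that requests a single MIG slice instead of a full GPU.
apiVersion: v1
kind: Pod
metadata:
  name: mig-example   # placeholder name
spec:
  restartPolicy: Never
  containers:
    - name: inference
      image: nvcr.io/nvidia/cuda:12.3.1-base-ubuntu22.04   # illustrative image
      command: ["nvidia-smi", "-L"]   # lists only the MIG device visible to this pod
      resources:
        limits:
          nvidia.com/mig-1g.5gb: 1   # one MIG slice; requires MIG-enabled GPUs and the device plugin
```

Because each MIG slice has its own dedicated memory and compute, this gives harder isolation between pods than time-slicing, at the cost of requiring MIG-capable hardware.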