Inference - Amazon Deep Learning Containers
Services or capabilities described in Amazon Web Services documentation might vary by Region. To see the differences applicable to the China Regions, see Getting Started with Amazon Web Services in China (PDF).

Inference

Once you've created a cluster using the steps in Amazon EKS Setup, you can use it to run inference jobs. For inference, you can use either a CPU or GPU example, depending on the nodes in your cluster. Inference supports only single-node configurations. The following topics show how to run inference with Amazon Deep Learning Containers on EKS using Apache MXNet (Incubating), PyTorch, TensorFlow, and TensorFlow 2.
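As a rough illustration of what a single-node inference workload on EKS looks like, the following manifest sketches a pod that runs a Deep Learning Container image. The pod name, container port, and image URI are placeholders, not values from this guide; substitute the framework image for your Region from the Deep Learning Containers image list, and drop the GPU resource limit when targeting CPU nodes.

```yaml
# Hypothetical single-node inference pod (names and image are placeholders).
apiVersion: v1
kind: Pod
metadata:
  name: dlc-inference                 # hypothetical pod name
spec:
  restartPolicy: Never
  containers:
    - name: inference
      # Placeholder URI; use the Deep Learning Container image for your
      # framework, Region, and CPU/GPU variant.
      image: <account>.dkr.ecr.<region>.amazonaws.com/<dlc-image>:<tag>
      resources:
        limits:
          nvidia.com/gpu: 1           # GPU example only; omit on CPU nodes
      ports:
        - containerPort: 8080         # hypothetical model-server port
```

Because inference supports only single-node configurations, a single pod like this (or a one-replica deployment behind a service) is the typical shape; you would apply it with `kubectl apply -f` against the cluster created in Amazon EKS Setup.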