
Maintain nodes yourself with self-managed nodes

A cluster contains one or more Amazon EC2 nodes that Pods are scheduled on. Amazon EKS nodes run in your Amazon account and connect to the control plane of your cluster through the cluster API server endpoint. You’re billed for them based on Amazon EC2 prices. For more information, see Amazon EC2 pricing.

A cluster can contain several node groups. Each node group contains one or more nodes that are deployed in an Amazon EC2 Auto Scaling group. The instance type of the nodes within the group can vary, such as when using attribute-based instance type selection with Karpenter. All instances in a node group must use the Amazon EKS node IAM role.
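
For instances that use the node IAM role to register with the cluster, the role must also be granted access within the cluster. The following is a minimal sketch of one way to do this, assuming that you use eksctl; the cluster name, Region, and role ARN are placeholder values.

# Map the node IAM role so that instances using it can join the cluster.
# The cluster name, Region, and role ARN are placeholders; replace them with your own values.
eksctl create iamidentitymapping \
  --cluster my-cluster \
  --region us-west-2 \
  --arn arn:aws:iam::111122223333:role/myAmazonEKSNodeRole \
  --username 'system:node:{{EC2PrivateDNSName}}' \
  --group system:bootstrappers \
  --group system:nodes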

Amazon EKS provides specialized Amazon Machine Images (AMIs) that are called Amazon EKS optimized AMIs. The AMIs are configured to work with Amazon EKS and include containerd, kubelet, and the Amazon IAM Authenticator. The AMIs also contain a specialized bootstrap script that allows nodes to discover and connect to your cluster’s control plane automatically.
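
For example, on the Amazon EKS optimized Amazon Linux 2 AMI, the bootstrap script is typically invoked from the instance user data. The following is a minimal sketch; the cluster name and the extra kubelet argument are placeholder values, and Amazon Linux 2023 based AMIs use a different mechanism (nodeadm) instead of this script.

#!/bin/bash
# Minimal user data sketch for an Amazon EKS optimized Amazon Linux 2 AMI.
# "my-cluster" and the node label are placeholders; replace them with your own values.
/etc/eks/bootstrap.sh my-cluster \
  --kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=self-managed'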

If you restrict access to the public endpoint of your cluster using CIDR blocks, we recommend that you also enable private endpoint access so that nodes can communicate with the cluster. Without the private endpoint enabled, the CIDR blocks that you specify for public access must include the egress sources from your VPC. For more information, see Control network access to cluster API server endpoint.
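
For example, the following AWS CLI command sketch enables private endpoint access and restricts the public endpoint to a single CIDR block. The cluster name and CIDR block are placeholder values.

# Enable private endpoint access and restrict public endpoint access to one CIDR block.
# Replace "my-cluster" and the CIDR block with your own values.
aws eks update-cluster-config \
  --name my-cluster \
  --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="203.0.113.0/24",endpointPrivateAccess=true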

To add self-managed nodes to your Amazon EKS cluster, see the topics that follow. If you launch self-managed nodes manually, add the following tag to each node (for more information, see Adding and deleting tags on an individual resource). If you follow the steps in the topics that follow, the required tag is added to your nodes automatically.

Key: kubernetes.io/cluster/my-cluster (replace my-cluster with the name of your cluster)

Value: owned
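
If you launch the instances yourself, one way to apply this tag is with the AWS CLI. The instance ID and cluster name in the following sketch are placeholder values.

# Tag a self-managed node so that it is associated with your cluster.
# Replace the instance ID and "my-cluster" with your own values.
aws ec2 create-tags \
  --resources i-1234567890abcdef0 \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=owned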

For more information about nodes from a general Kubernetes perspective, see Nodes in the Kubernetes documentation.

Feature comparison table

The following criteria apply to self-managed nodes:

Can be deployed to Amazon Outposts: Yes
Can be deployed to an Amazon Local Zone: Yes
Can run containers that require Windows: Yes. Your cluster still requires at least one Linux node (two are recommended for availability).
Can run containers that require Linux: Yes
Can run workloads that require the Inferentia chip: Yes (Amazon Linux nodes only)
Can run workloads that require a GPU: Yes (Amazon Linux nodes only)
Can run workloads that require Arm processors: Yes
Can run Amazon Bottlerocket: Yes
Pods share a kernel runtime environment with other Pods: Yes. All of the Pods on a node share that node’s kernel.
Pods share CPU, memory, storage, and network resources with other Pods: Yes. This can result in unused resources on each node.
Pods can use more hardware and memory than requested in Pod specs: Yes. If a Pod requires more resources than requested, and resources are available on the node, the Pod can use the additional resources.
Must deploy and manage Amazon EC2 instances: Yes. Configure the instances manually, or use the Amazon EKS provided Amazon CloudFormation templates to deploy Linux (x86), Linux (Arm), or Windows nodes.
Must secure, maintain, and patch the operating system of Amazon EC2 instances: Yes
Can provide bootstrap arguments at deployment of a node, such as extra kubelet arguments: Yes. For more information, see the bootstrap script usage information on GitHub.
Can assign IP addresses to Pods from a different CIDR block than the IP address assigned to the node: Yes. For more information, see Deploy pods in alternate subnets with custom networking.
Can SSH into node: Yes
Can deploy your own custom AMI to nodes: Yes
Can deploy your own custom CNI to nodes: Yes
Must update node AMI on your own: Yes, using tools other than the Amazon EKS console, because self-managed nodes can’t be managed with the Amazon EKS console. (An example lookup command follows this table.)
Must update node Kubernetes version on your own: Yes, using tools other than the Amazon EKS console, because self-managed nodes can’t be managed with the Amazon EKS console.
Can use Amazon EBS storage with Pods: Yes
Can use Amazon EFS storage with Pods: Yes
Can use Amazon FSx for Lustre storage with Pods: Yes
Can use Network Load Balancer for services: Yes
Pods can run in a public subnet: Yes
Can assign different VPC security groups to individual Pods: Yes (Linux nodes only)
Can run Kubernetes DaemonSets: Yes
Supports HostPort and HostNetwork in the Pod manifest: Yes
Amazon Region availability: All Amazon EKS supported Regions
Can run containers on Amazon EC2 dedicated hosts: Yes
Pricing: Cost of the Amazon EC2 instances that run multiple Pods. For more information, see Amazon EC2 pricing.
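
Because you update node AMIs yourself, you typically need to look up the ID of the latest Amazon EKS optimized AMI before replacing nodes. One way is to query the public SSM parameter that Amazon EKS publishes. In the following sketch, the Kubernetes version, AMI family, and architecture in the parameter path are placeholder values; adjust them to match your cluster.

# Look up the latest Amazon EKS optimized Amazon Linux 2023 AMI ID for Kubernetes 1.31 on x86_64.
aws ssm get-parameter \
  --name /aws/service/eks/optimized-ami/1.31/amazon-linux-2023/x86_64/standard/recommended/image_id \
  --query "Parameter.Value" \
  --output text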

Topics