Creating a local cluster on an Outpost
This topic provides an overview of what to consider when running a local cluster on an Outpost, along with instructions for deploying one.
Considerations
- These considerations aren't replicated in related Amazon EKS documentation. If other Amazon EKS documentation topics conflict with the considerations here, follow the considerations here.
- These considerations are subject to change and might change frequently. So, we recommend that you regularly review this topic.
- Many of the considerations are different than the considerations for creating a cluster on the Amazon Web Services Cloud.
- Local clusters support Outpost racks only. A single local cluster can run across multiple physical Outpost racks that comprise a single logical Outpost. A single local cluster can't run across multiple logical Outposts. Each logical Outpost has a single Outpost ARN.
- Local clusters run and manage the Kubernetes control plane in your account on the Outpost. You can't run workloads on the Kubernetes control plane instances or modify the Kubernetes control plane components. These nodes are managed by the Amazon EKS service. Changes to the Kubernetes control plane don't persist through automatic Amazon EKS management actions, such as patching.
- Local clusters support self-managed add-ons and self-managed Amazon Linux 2 node groups. The Amazon VPC CNI plugin for Kubernetes, kube-proxy, and CoreDNS add-ons are automatically installed on local clusters.
- Local clusters require Amazon EBS on Outposts. Your Outpost must have Amazon EBS available for the Kubernetes control plane storage. Outposts support Amazon EBS gp2 volumes only.
- Amazon EBS backed Kubernetes PersistentVolumes are supported using the Amazon EBS CSI driver, as in the sketch after this list.
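The following is a minimal sketch of a StorageClass that provisions gp2 Amazon EBS volumes through the Amazon EBS CSI driver. It assumes that the driver is already installed on your cluster; the StorageClass name is illustrative.
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp2                       # illustrative name
provisioner: ebs.csi.aws.com          # the Amazon EBS CSI driver
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp2                           # Outposts support gp2 volumes only
EOF
PersistentVolumeClaims that reference this StorageClass are then provisioned as gp2 volumes on the Outpost.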
Prerequisites
- Familiarity with the Outposts deployment options, Capacity considerations, and Amazon EKS local cluster VPC and subnet requirements and considerations.
- An existing Outpost. For more information, see What is Amazon Outposts.
- The kubectl command line tool is installed on your computer or Amazon CloudShell. The version can be the same as or up to one minor version earlier or later than the Kubernetes version of your cluster. For example, if your cluster version is 1.24, you can use kubectl version 1.23, 1.24, or 1.25 with it. To install or upgrade kubectl, see Installing or updating kubectl. A sketch for comparing the two versions follows this list.
- Version 2.11.3 or later or 1.27.93 or later of the Amazon CLI installed and configured on your device or Amazon CloudShell. You can check your current version with aws --version | cut -d / -f2 | cut -d ' ' -f1. Package managers such as yum, apt-get, or Homebrew for macOS are often several versions behind the latest version of the Amazon CLI. To install the latest version, see Installing, updating, and uninstalling the Amazon CLI and Quick configuration with aws configure in the Amazon Command Line Interface User Guide. The Amazon CLI version installed in the Amazon CloudShell may also be several versions behind the latest version. To update it, see Installing Amazon CLI to your home directory in the Amazon CloudShell User Guide.
- An IAM principal with permissions to create and describe an Amazon EKS cluster. For more information, see Create a local Kubernetes cluster on an Outpost and List or describe all clusters.
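To confirm that your kubectl client is within one minor version of your cluster, you can compare the two versions. This is a minimal sketch; my-cluster and region-code are placeholders.
# Client version of kubectl.
kubectl version --client
# Kubernetes version of the cluster.
aws eks describe-cluster --region region-code --name my-cluster --query cluster.version --output text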
When a local Amazon EKS cluster is created, the IAM principal that creates the cluster is permanently added. The principal is specifically added to the Kubernetes RBAC authorization table as the administrator, with system:masters permissions. The identity of this entity isn't visible in your cluster configuration. So, it's important to note the entity that created the cluster and make sure that you never delete it. Initially, only the principal that created the cluster can make calls to the Kubernetes API server using kubectl. If you use the console to create the cluster, make sure that the same IAM credentials are in the Amazon SDK credential chain when you run kubectl commands on your cluster. After your cluster is created, you can grant other IAM principals access to your cluster, as in the sketch that follows.
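For example, you can map an additional IAM principal into the cluster with eksctl. This is a sketch under assumptions: the role ARN, username, and group are placeholders, and system:masters grants full administrative access.
eksctl create iamidentitymapping \
  --region region-code \
  --cluster my-cluster \
  --arn arn:aws-cn:iam::111122223333:role/my-admin-role \
  --username my-admin \
  --group system:masters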
To create an Amazon EKS local cluster
You can create a local cluster with eksctl, the Amazon Web Services Management Console, the Amazon CLI, the Amazon EKS API, or the Amazon SDKs.
- Create a local cluster. A minimal sketch using the Amazon CLI follows this step.
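The following sketch creates a local cluster with the Amazon CLI. The role ARN, subnet ID, Outpost ARN, instance type, and Kubernetes version are placeholder values; replace them with your own.
aws eks create-cluster \
  --region region-code \
  --name my-cluster \
  --kubernetes-version 1.24 \
  --role-arn arn:aws-cn:iam::111122223333:role/my-cluster-role \
  --resources-vpc-config subnetIds=subnet-0123456789abcdef0 \
  --outpost-config '{"outpostArns":["arn:aws-cn:outposts:region-code:111122223333:outpost/op-0123456789abcdef0"],"controlPlaneInstanceType":"m5.large"}'
Cluster creation takes several minutes. You can check its progress with aws eks describe-cluster --region region-code --name my-cluster --query cluster.status.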
- After your cluster is created, you can view the Amazon EC2 control plane instances that were created.
aws ec2 describe-instances --query 'Reservations[*].Instances[*].{Name:Tags[?Key==`Name`]|[0].Value}' | grep my-cluster-control-plane
The example output is as follows.
"Name": "my-cluster-control-plane-id1"
"Name": "my-cluster-control-plane-id2"
"Name": "my-cluster-control-plane-id3"
Each instance is tainted with node-role.eks-local.amazonaws.com/control-plane so that no workloads are ever scheduled on the control plane instances. For more information about taints, see Taints and Tolerations in the Kubernetes documentation. Amazon EKS continuously monitors the state of local clusters. We perform automatic management actions, such as security patches and repairing unhealthy instances. When local clusters are disconnected from the cloud, we complete actions to ensure that the cluster is repaired to a healthy state upon reconnect. You can confirm the taint with the sketch after this step.
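The following sketch lists each node and its taint keys with kubectl. It assumes that your kubeconfig is set up (the next step) and that the control plane instances are visible as nodes in your cluster.
# List each node and its taint keys (run after updating your kubeconfig in the next step).
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'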
- If you created your cluster using eksctl, then you can skip this step. Eksctl completes this step for you. Enable kubectl to communicate with your cluster by adding a new context to the kubectl config file. For instructions on how to create and update the file, see Creating or updating a kubeconfig file for an Amazon EKS cluster.
aws eks update-kubeconfig --region region-code --name my-cluster
The example output is as follows.
Added new context arn:aws-cn:eks:region-code:111122223333:cluster/my-cluster to /home/username/.kube/config
- To connect to your local cluster's Kubernetes API server, you must have access to the local gateway for the subnet, or connect from within the VPC. For more information about connecting an Outpost rack to your on-premises network, see How local gateways for racks work in the Amazon Outposts User Guide. If you use Direct VPC Routing and the Outpost subnet has a route to your local gateway, the private IP addresses of the Kubernetes control plane instances are automatically broadcasted over your local network. The local cluster's Kubernetes API server endpoint is hosted in Amazon Route 53 (Route 53). The API server endpoint can be resolved by public DNS servers to the Kubernetes API servers' private IP addresses.
Local clusters' Kubernetes control plane instances are configured with static elastic network interfaces with fixed private IP addresses that don't change throughout the cluster lifecycle. Machines that interact with the Kubernetes API server might not have connectivity to Route 53 during network disconnects. If this is the case, we recommend configuring /etc/hosts with the static private IP addresses for continued operations (see the sketch after this step). We also recommend setting up local DNS servers and connecting them to your Outpost. For more information, see the Amazon Outposts documentation. Run the following command to confirm that communication is established with your cluster.
kubectl get svc
The example output is as follows.
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   28h
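The following sketch shows one way to look up the values for an /etc/hosts entry. The tag filter and the example hostname are illustrative; verify the endpoint and IP addresses for your own cluster.
# Hostname of the cluster's Kubernetes API server endpoint (strip the https:// prefix).
aws eks describe-cluster --region region-code --name my-cluster --query cluster.endpoint --output text
# Private IP addresses of the control plane instances.
aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=my-cluster-control-plane-*" \
  --query 'Reservations[*].Instances[*].PrivateIpAddress' --output text
# Example /etc/hosts entry (placeholder IP address and hostname):
# 10.0.1.10 EXAMPLE1234567890.gr1.region-code.eks.amazonaws.com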
- (Optional) Test authentication to your local cluster when it's in a disconnected state from the Amazon Web Services Cloud. For instructions, see Preparing for network disconnects.
Internal resources
Amazon EKS creates the following resources on your cluster. The resources are for Amazon EKS internal use. For proper functioning of your cluster, don't edit or modify these resources.
- The following mirror pods:
  - aws-iam-authenticator-node-hostname
  - eks-certificates-controller-node-hostname
  - etcd-node-hostname
  - kube-apiserver-node-hostname
  - kube-controller-manager-node-hostname
  - kube-scheduler-node-hostname
- The following self-managed add-ons:
  - kube-system/coredns
  - kube-system/kube-proxy (not created until you add your first node)
  - kube-system/aws-node (not created until you add your first node). Local clusters use the Amazon VPC CNI plugin for Kubernetes for cluster networking. Do not change the configuration for control plane instances (Pods named aws-node-controlplane-*). There are configuration variables that you can use to change the default value for when the plugin creates new network interfaces (see the sketch after this list). For more information, see the documentation on GitHub.
- The following services:
  - default/kubernetes
  - kube-system/kube-dns
- A PodSecurityPolicy named eks.system
- A ClusterRole named eks:system:podsecuritypolicy
- A ClusterRoleBinding named eks:system
- A default PodSecurityPolicy
- In addition to the cluster security group, Amazon EKS creates a security group in your Amazon Web Services account that's named eks-local-internal-do-not-use-or-edit-cluster-name-uniqueid. This security group allows traffic to flow freely between Kubernetes components running on the control plane instances.
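As one example of those configuration variables, you can set an environment variable on the aws-node DaemonSet. The variable and value here are illustrative; check the plugin's GitHub documentation for the variables it supports, and leave the aws-node-controlplane-* Pods untouched.
# Set a VPC CNI configuration variable on the aws-node DaemonSet (illustrative value).
kubectl -n kube-system set env daemonset/aws-node WARM_ENI_TARGET=1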
Recommended next steps:
- Grant IAM entities access to your cluster. If you want the entities to view Kubernetes resources in the Amazon EKS console, grant the Required permissions to the entities.
- Familiarize yourself with what happens during network disconnects.
- Consider setting up a backup plan for your etcd. Amazon EKS doesn't support automated backup and restore of etcd for local clusters. For more information, see Backing up an etcd cluster in the Kubernetes documentation. The two main options are using etcdctl to automate taking snapshots or using Amazon EBS storage volume backup. A sketch of the etcdctl option follows this list.
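The following is a minimal sketch of the etcdctl option, following the pattern in the Kubernetes documentation. The endpoint and certificate paths are assumptions (common kubeadm-style defaults) and might differ on local cluster control plane instances; run it wherever you have network access to the etcd endpoints and the client certificates.
# Illustrative etcd snapshot (endpoint and certificate paths are assumptions).
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /var/backups/etcd-snapshot-$(date +%Y%m%d%H%M%S).db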