Updating an Amazon EKS cluster Kubernetes version

When a new Kubernetes version is available in Amazon EKS, you can update your Amazon EKS cluster to the latest version.

Important

We recommend that, before you update to a new Kubernetes version, you review the information in Amazon EKS Kubernetes versions and also review the update steps in this topic. If you're updating to version 1.22, you must make the changes listed in Kubernetes version 1.22 prerequisites to your cluster before updating it.

New Kubernetes versions sometimes introduce significant changes. Therefore, we recommend that you test the behavior of your applications against a new Kubernetes version before you update your production clusters. You can do this by building a continuous integration workflow to test your application behavior before moving to a new Kubernetes version.

The update process consists of Amazon EKS launching new API server nodes with the updated Kubernetes version to replace the existing ones. Amazon EKS performs standard infrastructure and readiness health checks for network traffic on these new nodes to verify that they're working as expected. If any of these checks fail, Amazon EKS reverts the infrastructure deployment, and your cluster remains on the prior Kubernetes version. Running applications aren't affected, and your cluster is never left in a non-deterministic or unrecoverable state. Amazon EKS regularly backs up all managed clusters, and mechanisms exist to recover clusters if necessary. We're constantly evaluating and improving our Kubernetes infrastructure management processes.

To update the cluster, Amazon EKS requires up to five available IP addresses from the subnets that you specified when you created your cluster. Amazon EKS creates new cluster elastic network interfaces (network interfaces) in any of the subnets that you specified. The network interfaces may be created in different subnets than your existing network interfaces are in, so make sure that your security group rules allow required cluster communication for any of the subnets that you specified when you created your cluster. If any of the subnets that you specified when you created the cluster don't exist, don't have enough available IP addresses, or don't have security group rules that allow necessary cluster communication, then the update can fail.
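Before you start an update, you can check how many IP addresses are free in each cluster subnet with the Amazon CLI. This is a minimal sketch; the subnet IDs are placeholders for the ones you specified when you created your cluster.

  # List the free IP address count for each cluster subnet (placeholder IDs).
  aws ec2 describe-subnets --subnet-ids subnet-0abcd1234 subnet-0efgh5678 \
      --query 'Subnets[*].{SubnetId:SubnetId,AvailableIps:AvailableIpAddressCount}' \
      --output table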

Note

Even though Amazon EKS runs a highly available control plane, you might experience minor service interruptions during an update. For example, assume that you attempt to connect to an API server around the time that it's terminated and replaced by a new API server that's running the new version of Kubernetes. You might experience API call errors or connectivity issues. If this happens, retry your API operations until they succeed.

Update the Kubernetes version for your Amazon EKS cluster

To update the Kubernetes version for your cluster
  1. Compare the Kubernetes version of your cluster control plane to the Kubernetes version of your nodes.

    • Get the Kubernetes version of your cluster control plane.

      kubectl version --short
    • Get the Kubernetes version of your nodes. This command returns all self-managed and managed Amazon EC2 and Fargate nodes. Each Fargate pod is listed as its own node.

      kubectl get nodes

    Before updating your control plane to a new Kubernetes version, make sure that the Kubernetes minor version of both the managed nodes and Fargate nodes in your cluster is the same as your control plane's version. For example, if your control plane is running version 1.24 and one of your nodes is running version 1.23, then you must update your nodes to version 1.24 before updating your control plane to 1.25. We also recommend that you update your self-managed nodes to the same version as your control plane before updating the control plane. For more information, see Updating a managed node group and Self-managed node updates. If you have Fargate nodes with a minor version lower than the control plane version, first delete the pod that's represented by the node. Then update your control plane. Any remaining pods will update to the new version after you redeploy them.
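    For example, you can list the kubelet version that each node reports with a single command. This is a sketch using kubectl's custom-columns output; it covers managed, self-managed, and Fargate nodes alike.

      # Print each node's name next to its reported kubelet version.
      kubectl get nodes -o custom-columns='NAME:.metadata.name,VERSION:.status.nodeInfo.kubeletVersion'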

  2. By default, the pod security policy admission controller is enabled on Amazon EKS clusters. Before updating your cluster, ensure that the proper pod security policies are in place to avoid potential security issues. You can check for the default policy with the kubectl get psp eks.privileged command.

    kubectl get psp eks.privileged

    If you receive the following error, see default pod security policy before proceeding.

    Error from server (NotFound): podsecuritypolicies.extensions "eks.privileged" not found
  3. If the Kubernetes version that you originally deployed your cluster with was Kubernetes 1.18 or later, skip this step.

    You might need to remove a discontinued term from your CoreDNS manifest.

    1. Check to see if your CoreDNS manifest has a line that only has the word upstream.

      kubectl get configmap coredns -n kube-system -o jsonpath='{$.data.Corefile}' | grep upstream

      If no output is returned, your manifest doesn't have the line, and you can skip to the next step of this procedure. If the word upstream is returned, remove the line as described next.

    2. Remove the line near the top of the file that only has the word upstream in the configmap file. Don't change anything else in the file. After the line is removed, save the changes.

      kubectl edit configmap coredns -n kube-system -o yaml
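      If you prefer a non-interactive edit, the following sketch deletes the standalone upstream line and applies the result back. It assumes a POSIX sed and that the only matching line is the one you want to remove; review the change before relying on it.

        # Fetch the ConfigMap, drop any line containing only "upstream", and re-apply it.
        kubectl get configmap coredns -n kube-system -o yaml \
          | sed '/^[[:space:]]*upstream[[:space:]]*$/d' \
          | kubectl apply -f -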
  4. Update your cluster using eksctl, the Amazon Web Services Management Console, or the Amazon CLI.

    Important
    • If you're updating to version 1.22, you must make the changes listed in Kubernetes version 1.22 prerequisites to your cluster before updating it.

    • If you're updating to version 1.23 and use Amazon EBS volumes in your cluster, then you must install the Amazon EBS CSI driver in your cluster before updating your cluster to version 1.23 to avoid workload disruptions. For more information, see Kubernetes 1.23 and Amazon EBS CSI driver.

    • Because Amazon EKS runs a highly available control plane, you can update only one minor version at a time. For more information about this requirement, see Kubernetes Version and Version Skew Support Policy. Assume that your current cluster version is version 1.23 and you want to update it to version 1.25. You must first update your version 1.23 cluster to version 1.24 and then update your version 1.24 cluster to version 1.25.

    • Make sure that the kubelet on your managed and Fargate nodes is at the same Kubernetes version as your control plane before you update. We also recommend that your self-managed nodes are at the same version as the control plane; they can be at most one minor version behind the control plane's current version.

    • If your cluster is configured with a version of the Amazon VPC CNI plugin for Kubernetes that is earlier than 1.8.0, then we recommend that you update the plugin to the latest version before updating your cluster to version 1.21 or later. To update the plugin, see Working with the Amazon VPC CNI plugin for Kubernetes Amazon EKS add-on.

    • If you're updating your cluster to version 1.25 or later and have the Amazon Load Balancer Controller deployed in your cluster, then update the controller to version 2.4.7 or later before updating your cluster version to 1.25. For more information, see the Kubernetes 1.25 release notes.

    eksctl

    This procedure requires eksctl version 0.135.0 or later. You can check your version with the following command:

    eksctl version

    For instructions on how to install and update eksctl, see Installing or updating eksctl.

    Update the Kubernetes version of your Amazon EKS control plane. Replace my-cluster with your cluster name. Replace 1.25 with the Amazon EKS supported version number that you want to update your cluster to. For a list of supported version numbers, see Amazon EKS Kubernetes versions.

    eksctl upgrade cluster --name my-cluster --version 1.25 --approve

    The update takes several minutes to complete.
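    If you omit --approve, current eksctl releases run the command in plan mode: the proposed version change is printed but not applied, which is a low-risk way to preview the upgrade first.

    # Preview the upgrade without applying it (plan mode).
    eksctl upgrade cluster --name my-cluster --version 1.25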

    Amazon Web Services Management Console
    1. Open the Amazon EKS console at https://console.amazonaws.cn/eks/home#/clusters.

    2. Choose the name of the Amazon EKS cluster to update and choose Update cluster version.

    3. For Kubernetes version, select the version to update your cluster to and choose Update.

    4. For Cluster name, enter the name of your cluster and choose Confirm.

      The update takes several minutes to complete.

    Amazon CLI
    1. Update your Amazon EKS cluster with the following Amazon CLI command. Replace the example values with your own. Replace 1.25 with the Amazon EKS supported version number that you want to update your cluster to. For a list of supported version numbers, see Amazon EKS Kubernetes versions.

      aws eks update-cluster-version --region region-code --name my-cluster --kubernetes-version 1.25

      The example output is as follows.

      { "update": { "id": "b5f0ba18-9a87-4450-b5a0-825e6e84496f", "status": "InProgress", "type": "VersionUpdate", "params": [ { "type": "Version", "value": "1.25" }, { "type": "PlatformVersion", "value": "eks.1" } ], ... "errors": [] } }
    2. Monitor the status of your cluster update with the following command. Use the cluster name and update ID that the previous command returned. When a Successful status is displayed, the update is complete. The update takes several minutes to complete.

      aws eks describe-update --region region-code --name my-cluster --update-id b5f0ba18-9a87-4450-b5a0-825e6e84496f

      The example output is as follows.

      { "update": { "id": "b5f0ba18-9a87-4450-b5a0-825e6e84496f", "status": "Successful", "type": "VersionUpdate", "params": [ { "type": "Version", "value": "1.25" }, { "type": "PlatformVersion", "value": "eks.1" } ], ... "errors": [] } }
  5. After your cluster update is complete, update your nodes to the same Kubernetes minor version as your updated cluster. For more information, see Self-managed node updates and Updating a managed node group. Any new pods that are launched on Fargate have a kubelet version that matches your cluster version. Existing Fargate pods aren't changed.
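    For example, with eksctl you can move a managed node group to the control plane's new version, as in the following sketch. The node group name my-nodegroup is a placeholder for your own.

      # Upgrade a managed node group to Kubernetes 1.25 (placeholder names).
      eksctl upgrade nodegroup --cluster=my-cluster --name=my-nodegroup --kubernetes-version=1.25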

  6. (Optional) If you deployed the Kubernetes Cluster Autoscaler to your cluster before updating the cluster, update the Cluster Autoscaler to the latest version that matches the Kubernetes major and minor version that you updated to.

    1. Open the Cluster Autoscaler releases page in a web browser and find the latest Cluster Autoscaler version that matches your cluster's Kubernetes major and minor version. For example, if your cluster's Kubernetes version is 1.25, find the latest Cluster Autoscaler release that begins with 1.25. Record the semantic version number (1.25.n, for example) for that release to use in the next step.

    2. Set the Cluster Autoscaler image tag to the version that you recorded in the previous step with the following command. If necessary, replace 1.25.n with your own value.

      kubectl -n kube-system set image deployment.apps/cluster-autoscaler cluster-autoscaler=public.ecr.aws/bitnami/cluster-autoscaler:v1.25.n
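      To confirm that the new image tag took effect, you can read it back from the deployment. This sketch assumes the deployment and container are both named cluster-autoscaler, as in the preceding command.

        # Print the image currently configured for the Cluster Autoscaler container.
        kubectl -n kube-system get deployment cluster-autoscaler -o jsonpath='{.spec.template.spec.containers[0].image}'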
  7. (Clusters with GPU nodes only) If your cluster has node groups with GPU support (for example, p3.2xlarge), you must update the NVIDIA device plugin for Kubernetes DaemonSet on your cluster with the following command.

    kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.9.0/nvidia-device-plugin.yml
  8. Update the Amazon VPC CNI plugin for Kubernetes, CoreDNS, and kube-proxy add-ons. If you updated your cluster to version 1.21 or later, then we recommend updating the add-ons to the minimum versions listed in Service account tokens.

    • If you are using Amazon EKS add-ons, select Clusters in the left navigation pane of the Amazon EKS console, and then select the name of the cluster that you updated. Notifications appear in the console. They inform you that a new version is available for each add-on that has an available update. To update an add-on, select the Add-ons tab. In one of the boxes for an add-on that has an update available, select Update now, select an available version, and then select Update.

    • Alternatively, you can use the Amazon CLI or eksctl to update add-ons, as shown in the sketch after this list. For more information, see Updating an add-on.
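    The following Amazon CLI sketch lists the add-on versions available for your cluster version and then updates the vpc-cni add-on. The add-on version string is a placeholder; use one that the first command returns.

      # List the vpc-cni add-on versions that are compatible with Kubernetes 1.25.
      aws eks describe-addon-versions --addon-name vpc-cni --kubernetes-version 1.25 \
          --query 'addons[].addonVersions[].addonVersion'
      # Update the add-on, preserving any local configuration changes.
      aws eks update-addon --cluster-name my-cluster --addon-name vpc-cni \
          --addon-version v1.12.6-eksbuild.2 --resolve-conflicts PRESERVE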

  9. If necessary, update your version of kubectl. You must use a kubectl version that is within one minor version difference of your Amazon EKS cluster control plane. For example, a 1.24 kubectl client works with Kubernetes 1.23, 1.24, and 1.25 clusters. You can check your currently installed version with the following command.

    kubectl version --short --client

Kubernetes version 1.22 prerequisites

A number of deprecated beta APIs (v1beta1) have been removed in version 1.22 in favor of the GA (v1) version of those same APIs. As noted in the Kubernetes version 1.22 API and Feature removal blog and deprecated API migration guide, API changes are required for the following deployed resources before updating a cluster to version 1.22.

Before updating your cluster to Kubernetes version 1.22, make sure to do the following:

  • Change your YAML manifest files and clients to reference the new APIs.

  • Update custom integrations and controllers to call the new APIs.

  • Make sure that you use an updated version of any third-party tools. These tools include ingress controllers, service mesh controllers, continuous delivery systems, and other tools that call the new APIs. To check for discontinued API usage in your cluster, enable audit control plane logging and specify v1beta as an event filter (see the query sketch after this list). The replacement APIs have been available in Kubernetes for several versions.

  • If you currently have the Amazon Load Balancer Controller deployed to your cluster, you must update it to version 2.4.1 before updating your cluster to Kubernetes version 1.22.
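One way to find remaining v1beta API usage, following the audit-logging suggestion above, is a CloudWatch Logs Insights query against the cluster's control plane log group. This is a sketch; it assumes that audit logging is enabled, that the log group uses the default /aws/eks/my-cluster/cluster naming, and that a GNU date is available for the epoch timestamps. Retrieve the results afterward with aws logs get-query-results and the returned queryId.

  # Search the last hour of audit log events for v1beta API calls.
  aws logs start-query \
      --log-group-name /aws/eks/my-cluster/cluster \
      --start-time $(date -d '1 hour ago' +%s) \
      --end-time $(date +%s) \
      --query-string 'fields @timestamp, @message | filter @logStream like /kube-apiserver-audit/ | filter @message like /v1beta/ | limit 50'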

Important

When you update clusters to version 1.22, existing persisted objects can still be accessed using the new APIs. However, you must migrate manifests and update clients to use the new APIs. Migrating before you update the cluster prevents potential workload failures.

Kubernetes version 1.22 removes support for the following beta APIs. Migrate your manifests and API clients based on the following information:

ValidatingWebhookConfiguration and MutatingWebhookConfiguration

  Beta version: admissionregistration.k8s.io/v1beta1
  GA version: admissionregistration.k8s.io/v1
  Notes:
  • webhooks[*].failurePolicy default changed from Ignore to Fail for v1.
  • webhooks[*].matchPolicy default changed from Exact to Equivalent for v1.
  • webhooks[*].timeoutSeconds default changed from 30s to 10s for v1.
  • webhooks[*].sideEffects default value is removed, the field is made required, and only None and NoneOnDryRun are permitted for v1.
  • webhooks[*].admissionReviewVersions default value is removed and the field is made required for v1 (supported versions for AdmissionReview are v1 and v1beta1).
  • webhooks[*].name must be unique in the list for objects created via admissionregistration.k8s.io/v1.

CustomResourceDefinition

  Beta version: apiextensions.k8s.io/v1beta1
  GA version: apiextensions.k8s.io/v1
  Notes:
  • spec.scope is no longer defaulted to Namespaced and must be explicitly specified.
  • spec.version is removed in v1; use spec.versions instead.
  • spec.validation is removed in v1; use spec.versions[*].schema instead.
  • spec.subresources is removed in v1; use spec.versions[*].subresources instead.
  • spec.additionalPrinterColumns is removed in v1; use spec.versions[*].additionalPrinterColumns instead.
  • spec.conversion.webhookClientConfig is moved to spec.conversion.webhook.clientConfig in v1.
  • spec.conversion.conversionReviewVersions is moved to spec.conversion.webhook.conversionReviewVersions in v1.
  • spec.versions[*].schema.openAPIV3Schema is now required when creating v1 CustomResourceDefinition objects and must be a structural schema.
  • spec.preserveUnknownFields: true is disallowed when creating v1 CustomResourceDefinition objects; it must be specified within schema definitions as x-kubernetes-preserve-unknown-fields: true.
  • In additionalPrinterColumns items, the JSONPath field is renamed to jsonPath in v1 (fixes #66531).

APIService

  Beta version: apiregistration.k8s.io/v1beta1
  GA version: apiregistration.k8s.io/v1
  Notes: None

TokenReview

  Beta version: authentication.k8s.io/v1beta1
  GA version: authentication.k8s.io/v1
  Notes: None

SubjectAccessReview, LocalSubjectAccessReview, and SelfSubjectAccessReview

  Beta version: authorization.k8s.io/v1beta1
  GA version: authorization.k8s.io/v1
  Notes: spec.group is renamed to spec.groups

CertificateSigningRequest

  Beta version: certificates.k8s.io/v1beta1
  GA version: certificates.k8s.io/v1
  Notes:
  • For API clients requesting certificates:
    • spec.signerName is now required (see known Kubernetes signers), and requests for kubernetes.io/legacy-unknown can't be created via the certificates.k8s.io/v1 API.
    • spec.usages is now required, may not contain duplicate values, and must only contain known usages.
  • For API clients approving or signing certificates:
    • status.conditions may not contain duplicate types.
    • status.conditions[*].status is now required.
    • status.certificate must be PEM-encoded and contain only CERTIFICATE blocks.

Lease

  Beta version: coordination.k8s.io/v1beta1
  GA version: coordination.k8s.io/v1
  Notes: None

Ingress

  Beta versions: extensions/v1beta1 and networking.k8s.io/v1beta1
  GA version: networking.k8s.io/v1
  Notes:
  • spec.backend is renamed to spec.defaultBackend.
  • The backend serviceName field is renamed to service.name.
  • Numeric backend servicePort fields are renamed to service.port.number.
  • String backend servicePort fields are renamed to service.port.name.
  • pathType is now required for each specified path. Options are Prefix, Exact, and ImplementationSpecific. To match the undefined v1beta1 behavior, use ImplementationSpecific.

IngressClass

  Beta version: networking.k8s.io/v1beta1
  GA version: networking.k8s.io/v1
  Notes: None

RBAC

  Beta version: rbac.authorization.k8s.io/v1beta1
  GA version: rbac.authorization.k8s.io/v1
  Notes: None

PriorityClass

  Beta version: scheduling.k8s.io/v1beta1
  GA version: scheduling.k8s.io/v1
  Notes: None

CSIDriver, CSINode, StorageClass, and VolumeAttachment

  Beta version: storage.k8s.io/v1beta1
  GA version: storage.k8s.io/v1
  Notes: None
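As a concrete illustration of the Ingress changes above, here is a minimal before-and-after manifest. The names my-ingress and my-service are placeholders.

Before (networking.k8s.io/v1beta1):

  apiVersion: networking.k8s.io/v1beta1
  kind: Ingress
  metadata:
    name: my-ingress
  spec:
    backend:
      serviceName: my-service
      servicePort: 80

After (networking.k8s.io/v1):

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: my-ingress
  spec:
    defaultBackend:
      service:
        name: my-service
        port:
          number: 80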

To learn more about the API removal, see the Deprecated API migration guide.