

# Amazon SageMaker HyperPod AMI
<a name="sagemaker-hyperpod-release-ami"></a>

Amazon SageMaker HyperPod Amazon Machine Images (AMIs) are specialized machine images for distributed machine learning workloads and high-performance computing. These AMIs enhance base images with essential components including GPU drivers and Amazon Neuron accelerator support.

Key capabilities of HyperPod AMIs include:
+ [Public AMIs](sagemaker-hyperpod-release-public-ami.md) with support for [building custom AMIs](hyperpod-custom-ami-support.md)
+ Advanced orchestration tools:
  + [Orchestrating SageMaker HyperPod clusters with Slurm](sagemaker-hyperpod-slurm.md)
  + [Orchestrating SageMaker HyperPod clusters with Amazon EKS](sagemaker-hyperpod-eks.md)
+ Cluster management dependencies
+ Built-in resiliency features:
  + Cluster health checks
  + Auto-resume capabilities
+ Support for HyperPod cluster management and configuration

These enhancements are built upon the following base Deep Learning AMIs (DLAMIs):
+ [Amazon Deep Learning Base GPU AMI (Ubuntu 22.04)](https://www.amazonaws.cn/releasenotes/aws-deep-learning-base-gpu-ami-ubuntu-22-04/) for orchestration with Slurm.
+ Amazon Linux 2 or Amazon Linux 2023 based AMI for orchestration with Amazon EKS.

Choose your HyperPod AMIs based on your orchestration preference:
+ For Slurm orchestration, see [SageMaker HyperPod AMI releases for Slurm](sagemaker-hyperpod-release-ami-slurm.md).
+ For Amazon EKS orchestration, see [SageMaker HyperPod AMI releases for Amazon EKS](sagemaker-hyperpod-release-ami-eks.md).

For information about Amazon SageMaker HyperPod feature releases, see [Amazon SageMaker HyperPod release notes](sagemaker-hyperpod-release-notes.md).

# Update your AMI version in your SageMaker HyperPod cluster
<a name="sagemaker-hyperpod-release-ami-update"></a>

Amazon SageMaker HyperPod Amazon Machine Images (AMIs) are specialized machine images for distributed machine learning workloads and high-performance computing. Each AMI comes pre-loaded with drivers, machine learning frameworks, training libraries, and performance monitoring tools. By updating the AMI version in your cluster, you can use the latest versions of these components and packages for your training jobs and workflows.

When updating the AMI version in your cluster, you can apply the update immediately, schedule a one-time update, or use a cron expression to create a recurring schedule. You can also choose to update all of the instances in an instance group or update in batches. If you update in batches, you set the percentage or number of instances that SageMaker AI upgrades at a time, along with an interval that SageMaker AI waits between batches.

If you choose to update in batches, you can also include a list of alarms and metrics. During the wait interval, SageMaker AI monitors these metrics, and if any metric exceeds its threshold, the corresponding alarm goes into the ALARM state and SageMaker AI rolls back the AMI update. To use automatic rollbacks, your IAM execution role must have the permission `cloudwatch:DescribeAlarms`.

**Note**  
Updating your cluster in batches is available only for HyperPod clusters integrated with Amazon EKS. Also, if you're creating multiple schedules, we recommend that you leave a time buffer between them. If schedules overlap, updates might fail.

For more information about each AMI release for your HyperPod cluster, see [Amazon SageMaker HyperPod AMI](sagemaker-hyperpod-release-ami.md). For more information about general HyperPod releases, see [Amazon SageMaker HyperPod release notes](sagemaker-hyperpod-release-notes.md).

You can use the SageMaker AI API or CLI operations to update your cluster or see scheduled updates for a specific cluster. If you're using the Amazon console, follow these steps:

**Note**  
Updating your AMI with the Amazon console is available only for clusters integrated with Amazon EKS. If you have a Slurm cluster, you must use the SageMaker AI API or CLI operations.

1. Open the Amazon SageMaker AI console at [https://console.amazonaws.cn/sagemaker/](https://console.amazonaws.cn/sagemaker/).

1. On the left, expand **HyperPod Clusters**, and choose **Cluster Management**.

1. Choose the cluster that you want to update, then choose **Details**, and **Update AMI**.



To create and manage update schedules programmatically, use the following API operations:
+ [CreateCluster](https://docs.amazonaws.cn/sagemaker/latest/APIReference/API_CreateCluster.html) – Create a cluster and specify an update schedule.
+ [UpdateCluster](https://docs.amazonaws.cn/sagemaker/latest/APIReference/API_UpdateCluster.html) – Add an update schedule to an existing cluster.
+ [UpdateClusterSoftware](https://docs.amazonaws.cn/sagemaker/latest/APIReference/API_UpdateClusterSoftware.html) – Update the platform software of a cluster.
+ [DescribeCluster](https://docs.amazonaws.cn/sagemaker/latest/APIReference/API_DescribeCluster.html) – View the update schedule you created for a cluster.
+ [DescribeClusterNode](https://docs.amazonaws.cn/sagemaker/latest/APIReference/API_DescribeClusterNode.html) and [ListClusterNodes](https://docs.amazonaws.cn/sagemaker/latest/APIReference/API_ListClusterNodes.html) – See when the cluster was last updated.
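
The batch-update options described above map onto the deployment configuration of the `UpdateClusterSoftware` operation. The following Python sketch assembles such a request with boto3-style field names; the exact field names shown here are assumptions drawn from the API reference, so verify them against the current `UpdateClusterSoftware` documentation before use.

```python
# Illustrative sketch: building an UpdateClusterSoftware request that rolls an
# AMI update out in batches with automatic-rollback alarms. Field names are
# assumptions; verify them against the SageMaker API reference.

def build_update_request(cluster_name, batch_percentage, wait_seconds, alarm_names):
    """Assemble a request payload for sagemaker.update_cluster_software()."""
    return {
        "ClusterName": cluster_name,
        "DeploymentConfig": {
            # Update this percentage of instances per batch.
            "RollingUpdatePolicy": {
                "MaximumBatchSize": {
                    "Type": "CAPACITY_PERCENTAGE",
                    "Value": batch_percentage,
                }
            },
            # How long SageMaker AI waits between batches while watching alarms.
            "WaitIntervalInSeconds": wait_seconds,
            # If any listed alarm enters the ALARM state, the update rolls back.
            "AutoRollbackConfiguration": [
                {"AlarmName": name} for name in alarm_names
            ],
        },
    }

request = build_update_request("my-hyperpod-cluster", 20, 300, ["gpu-error-rate"])
# To execute: boto3.client("sagemaker").update_cluster_software(**request)
```
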

## Required permissions
<a name="sagemaker-hyperpod-release-ami-update-permissions"></a>

Depending on how you configured the [Pod Disruption Budget](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) in your Amazon EKS cluster, HyperPod evicts pods, releases nodes, and prevents new scheduling on those nodes during the AMI update process. If evicting a pod would violate a constraint in the budget, HyperPod skips that node during the AMI update. For SageMaker HyperPod to evict pods correctly, you must add the necessary permissions to the HyperPod service-linked role. The following YAML file defines the necessary permissions.

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: hyperpod-patching
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
- apiGroups: [""]
  resources: ["pods/eviction"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hyperpod-patching
subjects:
- kind: User
  name: hyperpod-service-linked-role
roleRef:
  kind: ClusterRole
  name: hyperpod-patching
  apiGroup: rbac.authorization.k8s.io
```

Use the following commands to apply the permissions.

```
git clone https://github.com/aws/sagemaker-hyperpod-cli.git 

cd sagemaker-hyperpod-cli/helm_chart

helm upgrade hyperpod-dependencies HyperPodHelmChart --namespace kube-system --install
```

## Cron expressions
<a name="sagemaker-hyperpod-release-ami-update-cron"></a>

To configure a one-time update at a certain time or a recurring schedule, use cron expressions. A cron expression contains six fields separated by white space. All six fields are required.

```
cron(Minutes Hours Day-of-month Month Day-of-week Year)
```


| **Fields** | **Values** | **Wildcards** | 
| --- | --- | --- | 
|  Minutes  |  00 – 59  |  N/A  | 
|  Hours  |  00 – 23  |  N/A  | 
|  Day-of-month  |  01 – 31  | ? | 
|  Month  |  01 – 12  | \* / | 
|  Day-of-week  |  1 – 7 or MON-SUN  | ? # L | 
|  Year  |  Current year – 2099  | \* | 

**Wildcards**
+ The **\*** (asterisk) wildcard includes all values in the field. In the `Hours` field, **\*** would include every hour.
+ The **/** (forward slash) wildcard specifies increments. In the `Month` field, you could enter **\*/3** to specify every 3rd month.
+ The **?** (question mark) wildcard specifies any value. In the `Day-of-month` field you could enter **7**, and if you didn't care what day of the week the seventh was, you could enter **?** in the `Day-of-week` field.
+ The **L** wildcard in the `Day-of-week` field specifies the last occurrence of the specified day of the week in a month. For example, `5L` means the last Friday of the month.
+ The **#** wildcard in the `Day-of-week` field specifies a certain instance of the specified day of the week within a month. For example, **2#2** is the second Tuesday of the month: the 2 before the **#** refers to Tuesday (the second day of the week), and the 2 after it refers to the second occurrence of that day within the month.

You can use cron expressions for the following scenarios:
+ One-time schedule that runs at a certain time and day. You can use the `?` wildcard to denote that the day-of-month or day-of-week doesn't matter.

  ```
  cron(30 14 ? 12 MON 2024)
  ```

  ```
  cron(30 14 15 12 ? 2024)
  ```
+ A weekly schedule that runs at a certain time and day. The following example creates a schedule that runs at 12:00pm on every Monday regardless of day-of-month.

  ```
  cron(00 12 ? * 1 *)
  ```
+ Monthly schedule that runs every month regardless of the day-of-week. The following schedule runs at 12:30pm on the 15th of every month.

  ```
  cron(30 12 15 * ? *)
  ```
+ A monthly schedule that uses day-of-week.

  ```
  cron(30 12 ? * MON *)
  ```
+ To create a schedule that runs every Nth month, use the `/` wildcard. The following two examples create a schedule that runs every 3 months, one using day-of-month and the other using day-of-week.

  ```
  cron(30 12 15 */3 ? *)
  ```

  ```
  cron(30 12 ? */3 MON *)
  ```
+ A schedule that runs on a certain instance of the specified day of the week. The following example creates a schedule that runs at 12:30pm on the second Monday of every month.

  ```
  cron(30 12 ? * 1#2 *)
  ```
+ A schedule that runs on the last instance of the specified day of the week. The following schedule runs at 12:30pm on the last Monday of every month.

  ```
  cron(30 12 ? * 1L *)
  ```
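
As a quick sanity check before submitting a schedule, the six-field format above can be validated with a short script. This is an illustrative sketch only; it checks the field count and the common rule that day-of-month or day-of-week uses `?`, not every combination the SageMaker AI API accepts.

```python
import re

def parse_cron(expression):
    """Split a cron(...) expression into its six fields; raise ValueError if malformed."""
    match = re.fullmatch(r"cron\((.+)\)", expression.strip())
    if not match:
        raise ValueError("expected cron(<six space-separated fields>)")
    fields = match.group(1).split()
    if len(fields) != 6:
        raise ValueError(f"expected 6 fields, got {len(fields)}")
    minutes, hours, day_of_month, month, day_of_week, year = fields
    # Day-of-month and day-of-week should not both be specific values;
    # one of them is normally "?" (or "*").
    if "?" not in (day_of_month, day_of_week) and "*" not in (day_of_month, day_of_week):
        raise ValueError("use '?' in either day-of-month or day-of-week")
    return fields
```

For example, `parse_cron("cron(30 12 ? * 1L *)")` returns the six fields of the last-Monday schedule shown above, while a five-field expression raises `ValueError`.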

# SageMaker HyperPod AMI releases for Slurm
<a name="sagemaker-hyperpod-release-ami-slurm"></a>

The following release notes track the latest updates for Amazon SageMaker HyperPod AMI releases for Slurm orchestration. These HyperPod AMIs are built upon [Amazon Deep Learning Base GPU AMI (Ubuntu 22.04)](https://www.amazonaws.cn/releasenotes/aws-deep-learning-base-gpu-ami-ubuntu-22-04/). The HyperPod service team distributes software patches through [SageMaker HyperPod DLAMI](sagemaker-hyperpod-ref.md#sagemaker-hyperpod-ref-hyperpod-ami). For HyperPod AMI releases for Amazon EKS orchestration, see [SageMaker HyperPod AMI releases for Amazon EKS](sagemaker-hyperpod-release-ami-eks.md). For information about Amazon SageMaker HyperPod feature releases, see [Amazon SageMaker HyperPod release notes](sagemaker-hyperpod-release-notes.md).

**Note**  
To update existing HyperPod clusters with the latest DLAMI, see [Update the SageMaker HyperPod platform software of a cluster](sagemaker-hyperpod-operate-slurm-cli-command.md#sagemaker-hyperpod-operate-slurm-cli-command-update-cluster-software).

## SageMaker HyperPod AMI releases for Slurm: March 01, 2026
<a name="sagemaker-hyperpod-release-ami-slurm-20260301"></a>

 **AMI general updates** 
+ Released updates for SageMaker HyperPod AMI for Slurm version 24.11.
+ Base DLAMI release note is available [here](https://docs.amazonaws.cn//dlami/latest/devguide/appendix-ami-release-notes.html#appendix-ami-release-notes-base).

 **SageMaker HyperPod DLAMI for Slurm support** 

This release includes the following updates:

------
#### [ Slurm v24.11 ]
+ Slurm 24.11 (ARM64):
  + Linux Kernel version: 6.8
  + Glibc version: 2.35
  + OpenSSL version: 3.0.2
  + FSx Lustre Client version: 2.15.6-1fsx26
  + Runc version: 1.3.4
  + Containerd version: containerd containerd.io v2.2.1
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.6, 12.8, 12.9, 13.0
  + EFA Installer version: 1.45.1
  + Python version: 3.10.12
  + Slurm version: 24.11.0
  + nvme-cli version: 1.16
  + collectd version: 5.12.0
  + lustre-client version: 2.15.6-1fsx26
  + nvidia-imex version: 580.126.09-1
  + systemd version: 249
  + openssh version: 8.9
  + sudo version: 1.9.9
  + ufw version: 0.36.1
  + gcc version: 11.4.0
  + cmake version: 3.22.1
  + git version: 2.34.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300064.1b1344-1
  + nfs-utils version: 1:2.6.1-1ubuntu1.2
  + iscsi-initiator-utils version: 2.1.5-1ubuntu1.1
  + lvm2 version: 2.03.11
  + ec2-instance-connect version: 1.1.14-0ubuntu1.1
  + rdma-core version: 60.0-1
+ Slurm 24.11 (x86\_64):
  + Linux Kernel version: 6.8
  + Glibc version: 2.35
  + OpenSSL version: 3.0.2
  + FSx Lustre Client version: 2.15.6-1fsx26
  + Runc version: 1.3.4
  + Containerd version: containerd containerd.io v2.2.1
  + aws Neuronx DKMS version: 2.26.5.0
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.6, 12.8, 12.9, 13.0
  + EFA Installer version: 1.45.0
  + Python version: 3.10.12
  + Slurm version: 24.11.0
  + nvme-cli version: 1.16
  + stress version: 1.0.5
  + collectd version: 5.12.0
  + lustre-client version: 2.15.6-1fsx26
  + systemd version: 249
  + openssh version: 8.9
  + sudo version: 1.9.9
  + ufw version: 0.36.1
  + gcc version: 11.4.0
  + cmake version: 3.22.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300064.1b1344-1
  + nfs-utils version: 1:2.6.1-1ubuntu1.2
  + iscsi-initiator-utils version: 2.1.5-1ubuntu1.1
  + lvm2 version: 2.03.11
  + ec2-instance-connect version: 1.1.14-0ubuntu1.1
  + rdma-core version: 60.0-1

------

## SageMaker HyperPod AMI releases for Slurm: February 12, 2026
<a name="sagemaker-hyperpod-release-ami-slurm-20260212"></a>

 **AMI general updates** 
+ Released updates for SageMaker HyperPod AMI for Slurm version 24.11.
+ Base DLAMI release note is available [here](https://docs.amazonaws.cn//dlami/latest/devguide/appendix-ami-release-notes.html#appendix-ami-release-notes-base).

 **SageMaker HyperPod DLAMI for Slurm support** 

This release includes the following updates:

------
#### [ Slurm v24.11 ]
+ Slurm 24.11 (ARM64):
  + Linux Kernel version: 6.8
  + Glibc version: 2.35
  + OpenSSL version: 3.0.2
  + FSx Lustre Client version: 2.15.6-1fsx25
  + Runc version: 1.3.4
  + Containerd version: containerd containerd.io v2.2.1
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.6, 12.8, 12.9, 13.0
  + EFA Installer version: 1.45.1
  + Python version: 3.10.12
  + Slurm version: 24.11.0
  + nvme-cli version: 1.16
  + collectd version: 5.12.0
  + lustre-client version: 2.15.6-1fsx25
  + nvidia-imex version: 580.126.09-1
  + systemd version: 249
  + openssh version: 8.9
  + sudo version: 1.9.9
  + ufw version: 0.36.1
  + gcc version: 11.4.0
  + cmake version: 3.22.1
  + git version: 2.34.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300064.0b1337-1
  + nfs-utils version: 1:2.6.1-1ubuntu1.2
  + iscsi-initiator-utils version: 2.1.5-1ubuntu1.1
  + lvm2 version: 2.03.11
  + ec2-instance-connect version: 1.1.14-0ubuntu1.1
  + rdma-core version: 60.0-1
+ Slurm 24.11 (x86\_64):
  + Linux Kernel version: 6.8
  + Glibc version: 2.35
  + OpenSSL version: 3.0.2
  + FSx Lustre Client version: 2.15.6-1fsx25
  + Runc version: 1.3.4
  + Containerd version: containerd containerd.io v2.2.1
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.6, 12.8, 12.9, 13.0
  + EFA Installer version: 1.45.0
  + Python version: 3.10.12
  + Slurm version: 24.11.0
  + nvme-cli version: 1.16
  + stress version: 1.0.5
  + collectd version: 5.12.0
  + lustre-client version: 2.15.6-1fsx25
  + systemd version: 249
  + openssh version: 8.9
  + sudo version: 1.9.9
  + ufw version: 0.36.1
  + gcc version: 11.4.0
  + cmake version: 3.22.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300064.0b1337-1
  + nfs-utils version: 1:2.6.1-1ubuntu1.2
  + iscsi-initiator-utils version: 2.1.5-1ubuntu1.1
  + lvm2 version: 2.03.11
  + ec2-instance-connect version: 1.1.14-0ubuntu1.1
  + rdma-core version: 60.0-1

------

## SageMaker HyperPod AMI releases for Slurm: January 25, 2026
<a name="sagemaker-hyperpod-release-ami-slurm-20260125"></a>

 **AMI general updates** 
+ Released updates for SageMaker HyperPod AMI for Slurm version 24.11.
+ Base DLAMI release note is available [here](https://docs.amazonaws.cn//dlami/latest/devguide/appendix-ami-release-notes.html#appendix-ami-release-notes-base).

 **SageMaker HyperPod DLAMI for Slurm support** 

This release includes the following updates:

------
#### [ Slurm v24.11 ]
+ Slurm 24.11 (ARM64):
  + Linux Kernel version: 6.8
  + Glibc version: 2.35
  + OpenSSL version: 3.0.2
  + FSx Lustre Client version: 2.15.6-1fsx25
  + Runc version: 1.3.4
  + Containerd version: containerd containerd.io v2.2.1
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.6, 12.8, 12.9, 13.0
  + EFA Installer version: 2.3.1amzn3.0
  + Python version: 3.10.12
  + Slurm version: 24.11.0
  + nvme-cli version: 1.16
  + collectd version: 5.12.0
  + lustre-client version: 2.15.6-1fsx25
  + nvidia-imex version: 580.126.09-1
  + systemd version: 249
  + openssh version: 8.9
  + sudo version: 1.9.9
  + ufw version: 0.36.1
  + gcc version: 11.4.0
  + cmake version: 3.22.1
  + git version: 2.34.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300063.0b1323-1
  + nfs-utils version: 1:2.6.1-1ubuntu1.2
  + iscsi-initiator-utils version: 2.1.5-1ubuntu1.1
  + lvm2 version: 2.03.11
  + ec2-instance-connect version: 1.1.14-0ubuntu1.1
  + rdma-core version: 60.0-1
+ Slurm 24.11 (x86\_64):
  + Linux Kernel version: 6.8
  + Glibc version: 2.35
  + OpenSSL version: 3.0.2
  + FSx Lustre Client version: 2.15.6-1fsx25
  + Runc version: 1.3.4
  + Containerd version: containerd containerd.io v2.2.1
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.6, 12.8, 12.9, 13.0
  + EFA Installer version: 2.3.1amzn2.0
  + Python version: 3.10.12
  + Slurm version: 24.11.0
  + nvme-cli version: 1.16
  + stress version: 1.0.5
  + collectd version: 5.12.0
  + lustre-client version: 2.15.6-1fsx25
  + systemd version: 249
  + openssh version: 8.9
  + sudo version: 1.9.9
  + ufw version: 0.36.1
  + gcc version: 11.4.0
  + cmake version: 3.22.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300063.0b1323-1
  + nfs-utils version: 1:2.6.1-1ubuntu1.2
  + iscsi-initiator-utils version: 2.1.5-1ubuntu1.1
  + lvm2 version: 2.03.11
  + ec2-instance-connect version: 1.1.14-0ubuntu1.1
  + rdma-core version: 60.0-1

------

## SageMaker HyperPod AMI releases for Slurm: December 29, 2025
<a name="sagemaker-hyperpod-release-ami-slurm-20251229"></a>

 **AMI general updates** 
+ Released updates for SageMaker HyperPod AMI for Slurm version 24.11.
+ Base DLAMI release note is available [here](https://docs.amazonaws.cn//dlami/latest/devguide/appendix-ami-release-notes.html#appendix-ami-release-notes-base).

 **SageMaker HyperPod DLAMI for Slurm support** 

This release includes the following updates:

------
#### [ Slurm v24.11 ]
+ Slurm 24.11 (ARM64):
  + Linux Kernel version: 6.8
  + Glibc version: 2.35
  + OpenSSL version: 3.0.2
  + FSx Lustre Client version: 2.15.6-1fsx25
  + Runc version: 1.3.4
  + Containerd version: containerd containerd.io v2.2.1
  + NVIDIA Driver version: 580.105.08
  + CUDA version: 12.6, 12.8, 12.9, 13.0
  + EFA Installer version: 2.3.1amzn3.0
  + Python version: 3.10.12
  + Slurm version: 24.11.0
  + nvme-cli version: 1.16
  + collectd version: 5.12.0
  + lustre-client version: 2.15.6-1fsx25
  + nvidia-imex version: 580.105.08-1
  + systemd version: 249
  + openssh version: 8.9
  + sudo version: 1.9.9
  + ufw version: 0.36.1
  + gcc version: 11.4.0
  + cmake version: 3.22.1
  + git version: 2.34.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300062.0b1304-1
  + nfs-utils version: 1:2.6.1-1ubuntu1.2
  + iscsi-initiator-utils version: 2.1.5-1ubuntu1.1
  + lvm2 version: 2.03.11
  + ec2-instance-connect version: 1.1.14-0ubuntu1.1
  + rdma-core version: 60.0-1
+ Slurm 24.11 (x86\_64):
  + Linux Kernel version: 6.8
  + Glibc version: 2.35
  + OpenSSL version: 3.0.2
  + FSx Lustre Client version: 2.15.6-1fsx25
  + Runc version: 1.3.4
  + Containerd version: containerd containerd.io v2.2.1
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.105.08
  + CUDA version: 12.6, 12.8, 12.9, 13.0
  + EFA Installer version: 2.3.1amzn2.0
  + Python version: 3.10.12
  + Slurm version: 24.11.0
  + nvme-cli version: 1.16
  + stress version: 1.0.5
  + collectd version: 5.12.0
  + lustre-client version: 2.15.6-1fsx25
  + systemd version: 249
  + openssh version: 8.9
  + sudo version: 1.9.9
  + ufw version: 0.36.1
  + gcc version: 11.4.0
  + cmake version: 3.22.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300062.0b1304-1
  + nfs-utils version: 1:2.6.1-1ubuntu1.2
  + iscsi-initiator-utils version: 2.1.5-1ubuntu1.1
  + lvm2 version: 2.03.11
  + ec2-instance-connect version: 1.1.14-0ubuntu1.1
  + rdma-core version: 60.0-1

------

## SageMaker HyperPod AMI releases for Slurm: November 22, 2025
<a name="sagemaker-hyperpod-release-ami-slurm-20251128"></a>

 **AMI general updates** 
+ Released updates for SageMaker HyperPod AMI for Slurm version 24.11.
+ Base DLAMI release note is available [here](https://docs.amazonaws.cn//dlami/latest/devguide/appendix-ami-release-notes.html#appendix-ami-release-notes-base).

 **SageMaker HyperPod DLAMI for Slurm support** 

This release includes the following updates:

------
#### [ Slurm (arm64) ]
+ Linux Kernel version: 6.8
+ Glibc version: 2.35
+ OpenSSL version: 3.0.2
+ FSx Lustre Client version: 2.15.6-1fsx21
+ Runc version: 1.3.3
+ Containerd version: containerd containerd.io v2.1.5
+ NVIDIA Driver version: 580.95.05
+ CUDA version: 12.6, 12.8, 12.9, 13.0
+ EFA Installer version: 2.1.0amzn5.0
+ Python version: 3.10.12
+ Slurm version: 24.11.0
+ nvme-cli version: 1.16
+ collectd version: 5.12.0
+ lustre-client version: 2.15.6-1fsx21
+ nvidia-imex version: 580.95.05-1
+ systemd version: 249
+ openssh version: 8.9
+ sudo version: 1.9.9
+ ufw version: 0.36.1
+ gcc version: 11.4.0
+ cmake version: 3.22.1
+ git version: 2.34.1
+ make version: 4.3
+ cloudwatch-agent version: 1.300062.0b1304-1
+ nfs-utils version: 1:2.6.1-1ubuntu1.2
+ iscsi-initiator-utils version: 2.1.5-1ubuntu1.1
+ lvm2 version: 2.03.11
+ ec2-instance-connect version: 1.1.14-0ubuntu1.1
+ rdma-core version: 58.amzn0-1

------
#### [ Slurm (x86\_64) ]
+ Linux Kernel version: 6.8
+ Glibc version: 2.35
+ OpenSSL version: 3.0.2
+ FSx Lustre Client version: 2.15.6-1fsx21
+ Runc version: 1.3.3
+ Containerd version: containerd containerd.io v2.1.5
+ aws Neuronx DKMS version: 2.24.7.0
+ NVIDIA Driver version: 580.95.05
+ CUDA version: 12.6, 12.8, 12.9, 13.0
+ EFA Installer version: 2.3.1amzn1.0
+ Python version: 3.10.12
+ Slurm version: 24.11.0
+ nvme-cli version: 1.16
+ stress version: 1.0.5
+ collectd version: 5.12.0
+ lustre-client version: 2.15.6-1fsx21
+ systemd version: 249
+ openssh version: 8.9
+ sudo version: 1.9.9
+ ufw version: 0.36.1
+ gcc version: 11.4.0
+ cmake version: 3.22.1
+ make version: 4.3
+ cloudwatch-agent version: 1.300062.0b1304-1
+ nfs-utils version: 1:2.6.1-1ubuntu1.2
+ iscsi-initiator-utils version: 2.1.5-1ubuntu1.1
+ lvm2 version: 2.03.11
+ ec2-instance-connect version: 1.1.14-0ubuntu1.1
+ rdma-core version: 59.amzn0-1

------

## SageMaker HyperPod release notes: November 07, 2025
<a name="sagemaker-hyperpod-release-notes-20251107"></a>

**The AMI includes the following:**
+ Supported Amazon Web Services service: Amazon EC2
+ Operating System: Ubuntu 22.04
+ Compute Architecture: ARM64
+ Updated packages: NVIDIA Driver: 580.95.05
+ CUDA Versions: cuda-12.6, cuda-12.8, cuda-12.9, cuda-13.0
+ Security fixes: [Runc Security patch](https://aws.amazon.com/security/security-bulletins/rss/aws-2025-024/)

## SageMaker HyperPod release notes: September 29, 2025
<a name="sagemaker-hyperpod-release-notes-20250929"></a>

**The AMI includes the following:**
+ Supported Amazon Web Services service: Amazon EC2
+ Operating System: Ubuntu 22.04
+ Compute Architecture: ARM64
+ Updated packages: NVIDIA Driver: 570.172.08
+ Security fixes

## SageMaker HyperPod release notes: August 12, 2025
<a name="sagemaker-hyperpod-release-notes-20250812"></a>

**The AMI includes the following:**
+ Supported Amazon Web Services service: Amazon EC2
+ Operating System: Ubuntu 22.04
+ Compute Architecture: ARM64
+ Latest available version is installed for the following packages:
  + Linux Kernel: 6.8
  + FSx Lustre
  + Docker
  + Amazon CLI v2 at `/usr/bin/aws`
  + NVIDIA DCGM
  + Nvidia container toolkit:
    + Version command: `nvidia-container-cli -V`
  + Nvidia-docker2:
    + Version command: `nvidia-docker version`
  + Nvidia-IMEX: v570.172.08-1
+ NVIDIA Driver: 570.158.01
+ NVIDIA CUDA 12.4, 12.5, 12.6, 12.8 stack:
  + CUDA, NCCL and cuDNN installation directories: `/usr/local/cuda-xx.x/`
    + Example: `/usr/local/cuda-12.8/`
  + Compiled NCCL Version:
    + For CUDA directory of 12.4, compiled NCCL Version 2.22.3+CUDA12.4
    + For CUDA directory of 12.5, compiled NCCL Version 2.22.3+CUDA12.5
    + For CUDA directory of 12.6, compiled NCCL Version 2.24.3+CUDA12.6
    + For CUDA directory of 12.8, compiled NCCL Version 2.27.5+CUDA12.8
  + Default CUDA: 12.8
    + PATH `/usr/local/cuda` points to CUDA 12.8
    + Updated the following environment variables:
      + `LD_LIBRARY_PATH` to have `/usr/local/cuda-12.8/lib:/usr/local/cuda-12.8/lib64:/usr/local/cuda-12.8:/usr/local/cuda-12.8/targets/sbsa-linux/lib:/usr/local/cuda-12.8/nvvm/lib64:/usr/local/cuda-12.8/extras/CUPTI/lib64`
      + `PATH` to have `/usr/local/cuda-12.8/bin/:/usr/local/cuda-12.8/include/`
      + For a different CUDA version, update `LD_LIBRARY_PATH` accordingly.
+ EFA installer: 1.42.0
+ Nvidia GDRCopy: 2.5.1
+ Amazon OFI NCCL plugin comes with EFA installer
  + Paths `/opt/amazon/ofi-nccl/lib/aarch64-linux-gnu` and `/opt/amazon/ofi-nccl/efa` are added to `LD_LIBRARY_PATH`.
+ Amazon CLI v2 at `/usr/local/bin/aws2` and Amazon CLI v1 at `/usr/bin/aws`
+ EBS volume type: gp3
+ Python: `/usr/bin/python3.10`

## SageMaker HyperPod release notes: May 27, 2025
<a name="sagemaker-hyperpod-release-notes-20250527"></a>

SageMaker HyperPod releases the following for [Orchestrating SageMaker HyperPod clusters with Slurm](sagemaker-hyperpod-slurm.md).

**New features and improvements**
+ Updated base AMI to `Deep Learning Base OSS Nvidia Driver GPU AMI (Ubuntu 22.04) 20250523` with the following key components:
  + NVIDIA Driver: 570.133.20
  + CUDA: 12.8 (default), with support for CUDA 12.4-12.6
  + NCCL Version: 2.26.5
  + EFA Installer: 1.40.0
  + Amazon OFI NCCL: 1.14.2-aws
+ Updated Neuron SDK packages:
  + aws-neuronx-collectives: 2.25.65.0-9858ac9a1 (from 2.24.59.0-838c7fc8b)
  + aws-neuronx-dkms: 2.21.37.0 (from 2.20.28.0)
  + aws-neuronx-runtime-lib: 2.25.57.0-166c7a468 (from 2.24.53.0-f239092cc)
  + aws-neuronx-tools: 2.23.9.0 (from 2.22.61.0)

**Important notes**
+ NVIDIA Container Toolkit 1.17.4 disables the mounting of CUDA compatibility libraries.
+ Updated EFA configuration from 1.37 to 1.38, and EFA now includes the Amazon OFI NCCL plugin, which is located in the `/opt/amazon/ofi-nccl` directory instead of the original `/opt/aws-ofi-nccl/` path. (Released on February 18, 2025)
+ Kernel version is pinned for stability and driver compatibility.

## SageMaker HyperPod AMI releases for Slurm: May 13, 2025
<a name="sagemaker-hyperpod-release-ami-slurm-20250513"></a>

Amazon SageMaker HyperPod released an updated AMI that supports Ubuntu 22.04 LTS for Slurm clusters. Amazon regularly updates AMIs to ensure you have access to the most current software stack. Upgrading to the latest AMI provides enhanced security through comprehensive package updates, improved performance and stability for your workloads, and compatibility with new instance types and latest kernel features.

**Important**  
The update from Ubuntu 20.04 LTS to Ubuntu 22.04 LTS introduces changes that might affect compatibility with software and configurations designed for Ubuntu 20.04.

**Topics**
+ [Key updates in the Ubuntu 22.04 AMI](#sagemaker-hyperpod-ami-slurm-ubuntu22-updates)
+ [Upgrading to the Ubuntu 22.04 AMI](#sagemaker-hyperpod-ami-slurm-ubuntu22-upgrade)
+ [Troubleshooting upgrade failures](#sagemaker-hyperpod-ami-slurm-ubuntu22-troubleshoot)

### Key updates in the Ubuntu 22.04 AMI
<a name="sagemaker-hyperpod-ami-slurm-ubuntu22-updates"></a>

The following table lists the component versions of the Ubuntu 22.04 AMI compared to the previous AMI.


**Component versions of the Ubuntu 22.04 AMI compared to the previous AMI**  

| Component | Previous version | Updated version | 
| --- | --- | --- | 
|  **Ubuntu OS**  |  20.04 LTS  |  22.04 LTS  | 
|  **Slurm**  |  24.11  |  24.11 (unchanged)  | 
|  **Python**  |  3.8 (default)  |  3.10 (default)  | 
|  **Elastic Fabric Adapter (EFA) on Amazon FSx**  |  Not supported  |  Supported  | 
|  **Linux kernel**  |  5.15  |  6.8  | 
|  **GNU C Library (glibc)**  |  2.31  |  2.35  | 
|  **GNU Compiler Collection (GCC)**  |  9.4.0  |  11.4.0  | 
|  **libc6**  |  ≤ 2.31  |  ≥ 2.35 supported  | 
|  **Network File System (NFS)**  |  1:1.3.4  |  1:2.6.1  | 

**Note**  
Although the Slurm version (24.11) remains unchanged, the underlying OS and library updates in this AMI may affect your system behavior and workload compatibility. You must test your workloads before upgrading production clusters.

### Upgrading to the Ubuntu 22.04 AMI
<a name="sagemaker-hyperpod-ami-slurm-ubuntu22-upgrade"></a>

Before upgrading your cluster to the Ubuntu 22.04 AMI, complete these preparation steps and review the upgrade requirements. To troubleshoot upgrade failures, see [Troubleshooting upgrade failures](#sagemaker-hyperpod-ami-slurm-ubuntu22-troubleshoot).

#### Review Python compatibility
<a name="sagemaker-hyperpod-ami-slurm-ubuntu22-python-compatibility"></a>

The Ubuntu 22.04 AMI uses Python 3.10 as the default version, upgraded from Python 3.8. Although Python 3.10 maintains compatibility with most Python 3.8 code, you should test your existing workloads before upgrading. If your workloads require Python 3.8, you can install it in your lifecycle script; on Ubuntu 22.04, for example, from the deadsnakes PPA:

```
add-apt-repository -y ppa:deadsnakes/ppa
apt-get update && apt-get install -y python3.8
```

Before upgrading your cluster, make sure to do the following:

1. Test your code compatibility with Python 3.10.

1. Verify your lifecycle scripts work in the new environment.

1. Check that all dependencies are compatible with the new Python version.

1. If you created your HyperPod cluster by copying the default lifecycle script from GitHub, add the following command to your `setup_mariadb_accounting.sh` file before upgrading to Ubuntu 22. For the complete script, see [setup\_mariadb\_accounting.sh on GitHub](https://github.com/aws-samples/awsome-distributed-training/blob/main/1.architectures/5.sagemaker-hyperpod/LifecycleScripts/base-config/setup_mariadb_accounting.sh).

   ```
   apt-get -y -o DPkg::Lock::Timeout=120 update && apt-get -y -o DPkg::Lock::Timeout=120 install apg
   ```

#### Upgrade your Slurm cluster
<a name="sagemaker-hyperpod-ami-slurm-ubuntu22-upgrade-cluster"></a>

You can upgrade your Slurm cluster to use the new AMI in two ways:

1. Create a new cluster using the [CreateCluster](https://docs.amazonaws.cn/sagemaker/latest/APIReference/API_CreateCluster.html) API.

1. Update an existing cluster's software using the [UpdateClusterSoftware](https://docs.amazonaws.cn/sagemaker/latest/APIReference/API_UpdateClusterSoftware.html) API.

#### Validated configurations
<a name="sagemaker-hyperpod-ami-slurm-ubuntu22-validation"></a>

Amazon has tested a wide range of distributed training workloads and infrastructure features on G5, G6, G6e, P4d, P5, and Trn1 instances, including:
+ Distributed training with PyTorch (e.g., FSDP, NeMo, LLaMA, MNIST).
+ Accelerator testing across instance types with Nvidia (P/G series) and Amazon Neuron (Trn1).
+ Resiliency features that include [auto-resume](https://docs.amazonaws.cn/sagemaker/latest/dg/sagemaker-hyperpod-resiliency-slurm.html#sagemaker-hyperpod-resiliency-slurm-auto-resume) and [deep health checks](https://docs.amazonaws.cn/sagemaker/latest/dg/sagemaker-hyperpod-eks-resiliency-deep-health-checks.html).

#### Cluster downtime and availability
<a name="sagemaker-hyperpod-ami-slurm-ubuntu22-downtime-availability"></a>

During the upgrade process, the cluster will be unavailable. To minimize disruption, do the following:
+ Test the upgrade process on smaller clusters.
+ Create checkpoints before the upgrade, then restart training workloads from existing checkpoints after the upgrade completes.

### Troubleshooting upgrade failures
<a name="sagemaker-hyperpod-ami-slurm-ubuntu22-troubleshoot"></a>

When an upgrade fails, first determine if the failure is related to lifecycle scripts. These scripts commonly fail due to syntax errors, missing dependencies, or incorrect configurations.

To investigate failures related to lifecycle scripts, check CloudWatch logs. All SageMaker HyperPod events and logs are stored under the log group: `/aws/sagemaker/Clusters/[ClusterName]/[ClusterID]`. Look specifically at the log stream `LifecycleConfig/[instance-group-name]/[instance-id]`, which provides detailed information about any errors during script execution.

If the upgrade failure is unrelated to lifecycle scripts, collect relevant information including the cluster ARN, error logs, and timestamps, then contact [Amazon support](https://aws.amazon.com/premiumsupport/) for further assistance.
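As a sketch, the log group and stream names described above can be assembled from your cluster's name and ID. The cluster values below are hypothetical placeholders, and the `aws logs filter-log-events` call is printed rather than executed so that you can review it before running it:

```shell
CLUSTER_NAME="my-cluster"   # placeholder; your cluster's name
CLUSTER_ID="abc123xyz"      # placeholder; shown in the cluster ARN
LOG_GROUP="/aws/sagemaker/Clusters/${CLUSTER_NAME}/${CLUSTER_ID}"

# LifecycleConfig/* streams record per-instance script output and errors.
echo aws logs filter-log-events \
  --log-group-name "$LOG_GROUP" \
  --log-stream-name-prefix "LifecycleConfig/" \
  --filter-pattern ERROR
```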

## SageMaker HyperPod AMI releases for Slurm: May 07, 2025
<a name="sagemaker-hyperpod-release-ami-slurm-20250507"></a>

Amazon SageMaker HyperPod for Slurm released a major OS version upgrade, from Ubuntu 20.04 to Ubuntu 22.04. This release is built on `Deep Learning Base OSS Nvidia Driver GPU AMI (Ubuntu 22.04) 20250503`; for more information, see the [DLAMI Ubuntu 22.04 release notes](https://aws.amazon.com/releasenotes/aws-deep-learning-base-gpu-ami-ubuntu-22-04/).

Key package upgrades:
+ Ubuntu 22.04 LTS (from 20.04)
+ Python version:
  + Python 3.10 is now the default Python version in the Slurm AMI for Ubuntu 22.04.
  + This upgrade provides access to the latest features, performance improvements, and bug fixes introduced in Python 3.10.
+ Support for EFA on FSx
+ New Linux Kernel version 6.8 (updated from 5.15)
+ Glibc version: 2.35 (updated from 2.31)
+ GCC version: 11.4.0 (updated from 9.4.0)
+ Newer libc6 version support (from libc6 version <= 2.31)
+ NFS version: 1:2.6.1 (updated from 1:1.3.4)

## SageMaker HyperPod AMI releases for Slurm: April 28, 2025
<a name="sagemaker-hyperpod-release-ami-slurm-20250428"></a>

**Improvements for Slurm**
+ Upgraded NVIDIA driver from version 550.144.03 to 550.163.01. This upgrade addresses Common Vulnerabilities and Exposures (CVEs) described in the [NVIDIA GPU Display Security Bulletin for April 2025](https://nvidia.custhelp.com/app/answers/detail/a_id/5630).

**Amazon SageMaker HyperPod DLAMI for Slurm support**

------
#### [ Installed the latest version of Amazon Neuron SDK ]
+ **aws-neuronx-collectives:** 2.24.59.0-838c7fc8b
+ **aws-neuronx-dkms:** 2.20.28.0
+ **aws-neuronx-runtime-lib:** 2.24.53.0-f239092cc
+ **aws-neuronx-tools:** 2.22.61.0

------

## SageMaker HyperPod AMI releases for Slurm: February 18, 2025
<a name="sagemaker-hyperpod-release-ami-slurm-20250218"></a>

**Improvements for Slurm**
+ Upgraded Slurm version to 24.11.
+ Upgraded Elastic Fabric Adapter (EFA) version from 1.37.0 to 1.38.0.
+ The EFA installer now bundles the Amazon OFI NCCL plugin, which is installed in the `/opt/amazon/ofi-nccl` directory rather than the previous `/opt/aws-ofi-nccl/` location. If you set the `LD_LIBRARY_PATH` environment variable, update the path to point to the new `/opt/amazon/ofi-nccl` location for the OFI NCCL plugin.
+ Removed the emacs package from these DLAMIs. You can install Emacs from [GNU Emacs](https://www.gnu.org/software/emacs/).
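If your job scripts set `LD_LIBRARY_PATH` for the OFI NCCL plugin, a small helper can prefer the new `/opt/amazon/ofi-nccl` location while tolerating nodes that still have the old path. This is a sketch; the helper function name is ours, not part of the AMI:

```shell
# Return the first plugin lib directory that exists, preferring the new location.
ofi_nccl_libdir() {
  for d in "$@"; do
    if [ -d "$d" ]; then echo "$d"; return 0; fi
  done
  return 1
}

if libdir=$(ofi_nccl_libdir /opt/amazon/ofi-nccl/lib /opt/aws-ofi-nccl/lib); then
  export LD_LIBRARY_PATH="${libdir}:${LD_LIBRARY_PATH}"
fi
```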

**Amazon SageMaker HyperPod DLAMI for Slurm support**

------
#### [ Installed the latest version of Amazon Neuron SDK 2.19 ]
+ **aws-neuronx-collectives/unknown:** 2.23.135.0-3e70920f2 amd64
+ **aws-neuronx-dkms/unknown:** 2.19.64.0 amd64
+ **aws-neuronx-runtime-lib/unknown:** 2.23.112.0-9b5179492 amd64
+ **aws-neuronx-tools/unknown:** 2.20.204.0 amd64

------

## SageMaker HyperPod AMI releases for Slurm: December 21, 2024
<a name="sagemaker-hyperpod-release-ami-slurm-20241221"></a>

**SageMaker HyperPod DLAMI for Slurm support**

------
#### [ Deep Learning Slurm AMI ]
+ **NVIDIA driver:** 550.127.05
+ **EFA driver:** 2.13.0-1
+ Installed the latest version of Amazon Neuron SDK
  + **aws-neuronx-collectives:** 2.22.33.0
  + **aws-neuronx-dkms:** 2.18.20.0
  + **aws-neuronx-oci-hook:** 2.5.8.0
  + **aws-neuronx-runtime-lib:** 2.22.19.0
  + **aws-neuronx-tools:** 2.19.0.0

------

## SageMaker HyperPod AMI releases for Slurm: November 24, 2024
<a name="sagemaker-hyperpod-release-ami-slurm-20241124"></a>

**AMI general updates**
+ Released in the Asia Pacific (Melbourne) Region.
+ Updated SageMaker HyperPod base DLAMI to the following versions:
  + Slurm: 2024-11-22.

## SageMaker HyperPod AMI releases for Slurm: November 15, 2024
<a name="sagemaker-hyperpod-release-ami-slurm-20241115"></a>

**AMI general updates**
+ Installed latest `libnvidia-nscq-xxx` package.

**SageMaker HyperPod DLAMI for Slurm support**

------
#### [ Deep Learning Slurm AMI ]
+ **NVIDIA driver:** 550.127.05
+ **EFA driver:** 2.13.0-1
+ Installed the latest version of Amazon Neuron SDK
  + **aws-neuronx-collectives:** v2.22.33.0-d2128d1aa
  + **aws-neuronx-dkms:** v2.17.17.0
  + **aws-neuronx-oci-hook:** v2.4.4.0
  + **aws-neuronx-runtime-lib:** v2.21.41.0
  + **aws-neuronx-tools:** v2.18.3.0

------

## SageMaker HyperPod AMI releases for Slurm: November 11, 2024
<a name="sagemaker-hyperpod-release-ami-slurm-20241111"></a>

**AMI general updates**
+ Updated SageMaker HyperPod base DLAMI to the following version:
  + Slurm: 2024-10-23.

## SageMaker HyperPod AMI releases for Slurm: October 21, 2024
<a name="sagemaker-hyperpod-release-ami-slurm-20241021"></a>

**AMI general updates**
+ Updated SageMaker HyperPod base DLAMI to the following versions:
  + Slurm: 2024-09-27.

## SageMaker HyperPod AMI releases for Slurm: September 10, 2024
<a name="sagemaker-hyperpod-release-ami-slurm-20240910"></a>

**SageMaker HyperPod DLAMI for Slurm support**

------
#### [ Deep Learning Slurm AMI ]
+ Installed the NVIDIA driver v550.90.07
+ Installed the EFA driver v2.10
+ Installed the latest version of Amazon Neuron SDK
  + **aws-neuronx-collectives:** v2.21.46.0
  + **aws-neuronx-dkms:** v2.17.17.0
  + **aws-neuronx-oci-hook:** v2.4.4.0
  + **aws-neuronx-runtime-lib:** v2.21.41.0
  + **aws-neuronx-tools:** v2.18.3.0

------

## SageMaker HyperPod AMI releases for Slurm: March 14, 2024
<a name="sagemaker-hyperpod-release-ami-slurm-20240314"></a>

**HyperPod DLAMI for Slurm software patch**
+ Upgraded [Slurm](https://slurm.schedmd.com/documentation.html) to v23.11.1
+ Added [OpenPMIx](https://openpmix.github.io/code/getting-the-reference-implementation) v4.2.6 to enable [Slurm with PMIx](https://slurm.schedmd.com/mpi_guide.html#pmix).
+ Built upon the [Amazon Deep Learning Base GPU AMI (Ubuntu 20.04)](https://www.amazonaws.cn/releasenotes/aws-deep-learning-base-gpu-ami-ubuntu-20-04/) released on 2023-10-26
+ The following packages are pre-installed in this HyperPod DLAMI in addition to the base AMI:
  + [Slurm](https://slurm.schedmd.com/documentation.html): v23.11.1
  + [OpenPMIx](https://openpmix.github.io/code/getting-the-reference-implementation): v4.2.6
  + Munge: v0.5.15
  + `aws-neuronx-dkms`: v2.\*
  + `aws-neuronx-collectives`: v2.\*
  + `aws-neuronx-runtime-lib`: v2.\*
  + `aws-neuronx-tools`: v2.\*
  + SageMaker HyperPod software packages to support features such as cluster health check and auto-resume

**Upgrade steps**
+ Run the following command to call the [UpdateClusterSoftware](https://docs.amazonaws.cn/sagemaker/latest/APIReference/API_UpdateClusterSoftware.html) API to update your existing HyperPod clusters with the latest HyperPod DLAMI. To find more instructions, see [Update the SageMaker HyperPod platform software of a cluster](sagemaker-hyperpod-operate-slurm-cli-command.md#sagemaker-hyperpod-operate-slurm-cli-command-update-cluster-software).
**Important**  
Back up your work before running this API. The patching process replaces the root volume with the updated AMI, which means that your previous data stored in the instance root volume will be lost. Make sure that you back up your data from the instance root volume to Amazon S3 or Amazon FSx for Lustre. For more information, see [Use the backup script provided by SageMaker HyperPod](sagemaker-hyperpod-operate-slurm-cli-command.md#sagemaker-hyperpod-operate-slurm-cli-command-update-cluster-software-backup).

  ```
   aws sagemaker update-cluster-software --cluster-name your-cluster-name
  ```
**Note**  
You must run the Amazon CLI command to update your HyperPod cluster. Updating the HyperPod software through the SageMaker HyperPod console UI is currently not available.

## SageMaker HyperPod AMI release for Slurm: November 29, 2023
<a name="sagemaker-hyperpod-release-ami-slurm-20231129"></a>

**HyperPod DLAMI for Slurm software patch**

The HyperPod service team distributes software patches through [SageMaker HyperPod DLAMI](sagemaker-hyperpod-ref.md#sagemaker-hyperpod-ref-hyperpod-ami). See the following details about the latest HyperPod DLAMI.
+ Built upon the [Amazon Deep Learning Base GPU AMI (Ubuntu 20.04)](https://aws.amazon.com/releasenotes/aws-deep-learning-base-gpu-ami-ubuntu-20-04/) released on 2023-10-18
+ The following packages are pre-installed in this HyperPod DLAMI in addition to the base AMI:
  + [Slurm](https://slurm.schedmd.com/documentation.html): v23.02.3
  + Munge: v0.5.15
  + `aws-neuronx-dkms`: v2.\*
  + `aws-neuronx-collectives`: v2.\*
  + `aws-neuronx-runtime-lib`: v2.\*
  + `aws-neuronx-tools`: v2.\*
  + SageMaker HyperPod software packages to support features such as cluster health check and auto-resume

# SageMaker HyperPod AMI releases for Amazon EKS
<a name="sagemaker-hyperpod-release-ami-eks"></a>

The following release notes track the latest updates for Amazon SageMaker HyperPod AMI releases for Amazon EKS orchestration. Each release note includes a summarized list of packages pre-installed or pre-configured in the SageMaker HyperPod DLAMIs for Amazon EKS support. Each DLAMI is built on AL2023 and supports a specific Kubernetes version. For HyperPod DLAMI releases for Slurm orchestration, see [SageMaker HyperPod AMI releases for Slurm](sagemaker-hyperpod-release-ami-slurm.md). For information about Amazon SageMaker HyperPod feature releases, see [Amazon SageMaker HyperPod release notes](sagemaker-hyperpod-release-notes.md).

## SageMaker HyperPod AMI releases for Amazon EKS: March 01, 2026
<a name="sagemaker-hyperpod-release-ami-eks-20260301"></a>

**AMI general updates**
+ Released updates for the SageMaker HyperPod AMI for Amazon EKS versions 1.28, 1.29, 1.30, 1.31, 1.32, 1.33, and 1.34.
+ For the base DLAMI release notes, see the [DLAMI release notes](https://docs.amazonaws.cn/dlami/latest/devguide/appendix-ami-release-notes.html#appendix-ami-release-notes-base).

**SageMaker HyperPod DLAMI for Amazon EKS support**

This release includes the following updates:

------
#### [ Kubernetes v1.28 ]
+  **AL2 is now deprecated. Kubernetes AMI is based on AL2023.** 
+ AL2 (x86\_64):
  + Linux Kernel version: 5.10
  + Glibc version: 2.26
  + OpenSSL version: 1.0.2k-fips
  + FSx Lustre Client version: 2.12.8
  + Docker version: Docker version 25.0.14, build 0bab007
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + aws CLI v2 version: aws-cli/1.44.44 Python/3.10.17 Linux/5.10.248-247.988.amzn2.x86\_64 botocore/1.42.54
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 570.211.01
  + CUDA version: 12.2
  + ENA Driver version: 2.16.1g
  + EFA Installer version: 1.45.0
  + Python version: 3.7.16
  + Kubernetes version: v1.28.15-eks-ecaa3a6
  + iptables-services version: 1.8.4
  + nginx version: 1.20.1
  + nvme-cli version: 1.11.1
  + epel-release version: 7
  + stress version: 1.0.4
  + collectd version: 5.8.1
  + acl version: 2.2.51
  + rsyslog version: 8.24.0
  + lustre-client version: 2.12.8
  + systemd version: 219
  + openssh version: 7.4
  + sudo version: 1.8.23
  + gcc version: 7.3.1
  + cmake version: 2.8.12.2
  + git version: 2.47.3
  + make version: 3.82
  + cloudwatch-agent version: 1.300064.1
  + nfs-utils version: 1.3.0
  + lvm2 version: 2.02.187
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd 1.7.27
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.16.1g
  + EFA Installer version: 1.45.0
  + Python version: 3.9.25
  + Kubernetes version: v1.28.15-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.2
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0.
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300064.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0

------
#### [ Kubernetes v1.29 ]
+  **AL2 is now deprecated. Kubernetes AMI is based on AL2023.** 
+ AL2 (x86\_64):
  + Linux Kernel version: 5.10
  + Glibc version: 2.26
  + OpenSSL version: 1.0.2k-fips
  + FSx Lustre Client version: 2.12.8
  + Docker version: Docker version 25.0.14, build 0bab007
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + aws CLI v2 version: aws-cli/1.44.44 Python/3.10.17 Linux/5.10.248-247.988.amzn2.x86\_64 botocore/1.42.54
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 570.211.01
  + CUDA version: 12.2
  + ENA Driver version: 2.16.1g
  + EFA Installer version: 1.45.0
  + Python version: 3.7.16
  + Kubernetes version: v1.29.15-eks-ecaa3a6
  + iptables-services version: 1.8.4
  + nginx version: 1.20.1
  + nvme-cli version: 1.11.1
  + epel-release version: 7
  + stress version: 1.0.4
  + collectd version: 5.8.1
  + acl version: 2.2.51
  + rsyslog version: 8.24.0
  + lustre-client version: 2.12.8
  + systemd version: 219
  + openssh version: 7.4
  + sudo version: 1.8.23
  + gcc version: 7.3.1
  + cmake version: 2.8.12.2
  + git version: 2.47.3
  + make version: 3.82
  + cloudwatch-agent version: 1.300064.1
  + nfs-utils version: 1.3.0
  + lvm2 version: 2.02.187
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.16.1g
  + EFA Installer version: 1.45.0
  + Python version: 3.9.25
  + Kubernetes version: v1.29.15-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.2
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0.
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300064.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0

------
#### [ Kubernetes v1.30 ]
+  **AL2 is now deprecated. Kubernetes AMI is based on AL2023.** 
+ AL2 (x86\_64):
  + Linux Kernel version: 5.10
  + Glibc version: 2.26
  + OpenSSL version: 1.0.2k-fips
  + FSx Lustre Client version: 2.12.8
  + Docker version: Docker version 25.0.14, build 0bab007
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + aws CLI v2 version: aws-cli/1.44.44 Python/3.10.17 Linux/5.10.248-247.988.amzn2.x86\_64 botocore/1.42.54
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 570.211.01
  + CUDA version: 12.2
  + ENA Driver version: 2.16.1g
  + EFA Installer version: 1.45.0
  + Python version: 3.7.16
  + Kubernetes version: v1.30.14-eks-ecaa3a6
  + iptables-services version: 1.8.4
  + nginx version: 1.20.1
  + nvme-cli version: 1.11.1
  + epel-release version: 7
  + stress version: 1.0.4
  + collectd version: 5.8.1
  + acl version: 2.2.51
  + rsyslog version: 8.24.0
  + lustre-client version: 2.12.8
  + systemd version: 219
  + openssh version: 7.4
  + sudo version: 1.8.23
  + gcc version: 7.3.1
  + cmake version: 2.8.12.2
  + git version: 2.47.3
  + make version: 3.82
  + cloudwatch-agent version: 1.300064.1
  + nfs-utils version: 1.3.0
  + lvm2 version: 2.02.187
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.16.1g
  + EFA Installer version: 1.45.0
  + Python version: 3.9.25
  + Kubernetes version: v1.30.14-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.2
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0.
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300064.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0

------
#### [ Kubernetes v1.31 ]
+  **AL2 is now deprecated. Kubernetes AMI is based on AL2023.** 
+ AL2 (x86\_64):
  + Linux Kernel version: 5.10
  + Glibc version: 2.26
  + OpenSSL version: 1.0.2k-fips
  + FSx Lustre Client version: 2.12.8
  + Docker version: Docker version 25.0.14, build 0bab007
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + aws CLI v2 version: aws-cli/1.44.44 Python/3.10.17 Linux/5.10.248-247.988.amzn2.x86\_64 botocore/1.42.54
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 570.211.01
  + CUDA version: 12.2
  + ENA Driver version: 2.16.1g
  + EFA Installer version: 1.45.0
  + Python version: 3.7.16
  + Kubernetes version: v1.31.13-eks-ecaa3a6
  + iptables-services version: 1.8.4
  + nginx version: 1.20.1
  + nvme-cli version: 1.11.1
  + epel-release version: 7
  + stress version: 1.0.4
  + collectd version: 5.8.1
  + acl version: 2.2.51
  + rsyslog version: 8.24.0
  + lustre-client version: 2.12.8
  + systemd version: 219
  + openssh version: 7.4
  + sudo version: 1.8.23
  + gcc version: 7.3.1
  + cmake version: 2.8.12.2
  + git version: 2.47.3
  + make version: 3.82
  + cloudwatch-agent version: 1.300064.1
  + nfs-utils version: 1.3.0
  + lvm2 version: 2.02.187
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.5
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.16.1g
  + EFA Installer version: 1.45.0
  + Python version: 3.9.25
  + Kubernetes version: v1.31.13-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.2
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0.
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300064.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (ARM64):
  + Linux Kernel version: 6.12
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.5
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.16.1g
  + EFA Installer version: 1.43.3
  + Python version: 3.9.25
  + Kubernetes version: v1.31.13-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.2
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0.
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + nvidia-imex version: 580.126.09
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300064.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 58.

------
#### [ Kubernetes v1.32 ]
+  **AL2 is now deprecated. Kubernetes AMI is based on AL2023.** 
+ AL2 (x86\_64):
  + Linux Kernel version: 5.10
  + Glibc version: 2.26
  + OpenSSL version: 1.0.2k-fips
  + FSx Lustre Client version: 2.12.8
  + Docker version: Docker version 25.0.14, build 0bab007
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + aws CLI v2 version: aws-cli/1.44.46 Python/3.10.17 Linux/5.10.248-247.988.amzn2.x86\_64 botocore/1.42.56
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 570.211.01
  + CUDA version: 12.2
  + ENA Driver version: 2.16.1g
  + EFA Installer version: 1.45.0
  + Python version: 3.7.16
  + Kubernetes version: v1.32.9-eks-ecaa3a6
  + iptables-services version: 1.8.4
  + nginx version: 1.20.1
  + nvme-cli version: 1.11.1
  + epel-release version: 7
  + stress version: 1.0.4
  + collectd version: 5.8.1
  + acl version: 2.2.51
  + rsyslog version: 8.24.0
  + lustre-client version: 2.12.8
  + systemd version: 219
  + openssh version: 7.4
  + sudo version: 1.8.23
  + gcc version: 7.3.1
  + cmake version: 2.8.12.2
  + git version: 2.47.3
  + make version: 3.82
  + cloudwatch-agent version: 1.300064.1
  + nfs-utils version: 1.3.0
  + lvm2 version: 2.02.187
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.5
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.16.1g
  + EFA Installer version: 1.45.0
  + Python version: 3.9.25
  + Kubernetes version: v1.32.9-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.2
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0.
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300064.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (ARM64):
  + Linux Kernel version: 6.12
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.5
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.16.1g
  + EFA Installer version: 1.43.3
  + Python version: 3.9.25
  + Kubernetes version: v1.32.9-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.2
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0.
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + nvidia-imex version: 580.126.09
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300064.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 58.

------
#### [ Kubernetes v1.33 ]
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.5
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.16.1g
  + EFA Installer version: 1.45.0
  + Python version: 3.9.25
  + Kubernetes version: v1.33.5-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.2
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0.
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300064.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (ARM64):
  + Linux Kernel version: 6.12
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.5
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.16.1g
  + EFA Installer version: 1.43.3
  + Python version: 3.9.25
  + Kubernetes version: v1.33.5-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.2
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0.
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + nvidia-imex version: 580.126.09
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300064.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 58.

------
#### [ Kubernetes v1.34 ]
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.5
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.16.1g
  + EFA Installer version: 1.45.0
  + Python version: 3.9.25
  + Kubernetes version: v1.34.2-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.2
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0.
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300064.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (ARM64):
  + Linux Kernel version: 6.12
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.5
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.16.1g
  + EFA Installer version: 1.43.3
  + Python version: 3.9.25
  + Kubernetes version: v1.34.2-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.2
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0.
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + nvidia-imex version: 580.126.09
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300064.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 58.

------

## SageMaker HyperPod AMI releases for Amazon EKS: February 12, 2026
<a name="sagemaker-hyperpod-release-ami-eks-20260212"></a>

**AMI general updates**
+ Released updates for the SageMaker HyperPod AMI for Amazon EKS versions 1.28, 1.29, 1.30, 1.31, 1.32, 1.33, and 1.34.
+ For the base DLAMI release notes, see the [DLAMI release notes](https://docs.amazonaws.cn/dlami/latest/devguide/appendix-ami-release-notes.html#appendix-ami-release-notes-base).

**SageMaker HyperPod DLAMI for Amazon EKS support**

This release includes the following updates:

------
#### [ Kubernetes v1.28 ]
+  **AL2 is now deprecated. Kubernetes AMI is based on AL2023.** 
+ AL2 (x86\_64):
  + Linux Kernel version: 5.10
  + Glibc version: 2.26
  + OpenSSL version: 1.0.2k-fips
  + FSx Lustre Client version: 2.12.8
  + Docker version: Docker version 25.0.14, build 0bab007
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + aws CLI v2 version: aws-cli/1.44.31 Python/3.10.17 Linux/5.10.247-246.989.amzn2.x86\_64 botocore/1.42.41
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 570.211.01
  + CUDA version: 12.2
  + ENA Driver version: 2.15.0g
  + EFA Installer version: 1.45.0
  + Python version: 3.7.16
  + Kubernetes version: v1.28.15-eks-ecaa3a6
  + iptables-services version: 1.8.4
  + nginx version: 1.20.1
  + nvme-cli version: 1.11.1
  + epel-release version: 7
  + stress version: 1.0.4
  + collectd version: 5.8.1
  + acl version: 2.2.51
  + rsyslog version: 8.24.0
  + lustre-client version: 2.12.8
  + systemd version: 219
  + openssh version: 7.4
  + sudo version: 1.8.23
  + gcc version: 7.3.1
  + cmake version: 2.8.12.2
  + git version: 2.47.3
  + make version: 3.82
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 1.3.0
  + lvm2 version: 2.02.187
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd 1.7.27
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + EFA Installer version: 1.45.0
  + Python version: 3.9.25
  + Kubernetes version: v1.28.15-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.1
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0

------
#### [ Kubernetes v1.29 ]
+  **AL2 is now deprecated. The Kubernetes AMI is now based on AL2023.** 
+ AL2 (x86\_64):
  + Linux Kernel version: 5.10
  + Glibc version: 2.26
  + OpenSSL version: 1.0.2k-fips
  + FSx Lustre Client version: 2.12.8
  + Docker version: Docker version 25.0.14, build 0bab007
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + aws CLI v2 version: aws-cli/1.44.31 Python/3.10.17 Linux/5.10.247-246.989.amzn2.x86\_64 botocore/1.42.41
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 570.211.01
  + CUDA version: 12.2
  + ENA Driver version: 2.15.0g
  + EFA Installer version: 1.45.0
  + Python version: 3.7.16
  + Kubernetes version: v1.29.15-eks-ecaa3a6
  + iptables-services version: 1.8.4
  + nginx version: 1.20.1
  + nvme-cli version: 1.11.1
  + epel-release version: 7
  + stress version: 1.0.4
  + collectd version: 5.8.1
  + acl version: 2.2.51
  + rsyslog version: 8.24.0
  + lustre-client version: 2.12.8
  + systemd version: 219
  + openssh version: 7.4
  + sudo version: 1.8.23
  + gcc version: 7.3.1
  + cmake version: 2.8.12.2
  + git version: 2.47.3
  + make version: 3.82
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 1.3.0
  + lvm2 version: 2.02.187
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + EFA Installer version: 1.45.0
  + Python version: 3.9.25
  + Kubernetes version: v1.29.15-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.1
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0

------
#### [ Kubernetes v1.30 ]
+  **AL2 is now deprecated. The Kubernetes AMI is now based on AL2023.** 
+ AL2 (x86\_64):
  + Linux Kernel version: 5.10
  + Glibc version: 2.26
  + OpenSSL version: 1.0.2k-fips
  + FSx Lustre Client version: 2.12.8
  + Docker version: Docker version 25.0.14, build 0bab007
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + aws CLI v2 version: aws-cli/1.44.31 Python/3.10.17 Linux/5.10.247-246.989.amzn2.x86\_64 botocore/1.42.41
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 570.211.01
  + CUDA version: 12.2
  + ENA Driver version: 2.15.0g
  + EFA Installer version: 1.45.0
  + Python version: 3.7.16
  + Kubernetes version: v1.30.14-eks-ecaa3a6
  + iptables-services version: 1.8.4
  + nginx version: 1.20.1
  + nvme-cli version: 1.11.1
  + epel-release version: 7
  + stress version: 1.0.4
  + collectd version: 5.8.1
  + acl version: 2.2.51
  + rsyslog version: 8.24.0
  + lustre-client version: 2.12.8
  + systemd version: 219
  + openssh version: 7.4
  + sudo version: 1.8.23
  + gcc version: 7.3.1
  + cmake version: 2.8.12.2
  + git version: 2.47.3
  + make version: 3.82
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 1.3.0
  + lvm2 version: 2.02.187
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + EFA Installer version: 1.45.0
  + Python version: 3.9.25
  + Kubernetes version: v1.30.14-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.1
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0

------
#### [ Kubernetes v1.31 ]
+  **AL2 is now deprecated. The Kubernetes AMI is now based on AL2023.** 
+ AL2 (x86\_64):
  + Linux Kernel version: 5.10
  + Glibc version: 2.26
  + OpenSSL version: 1.0.2k-fips
  + FSx Lustre Client version: 2.12.8
  + Docker version: Docker version 25.0.14, build 0bab007
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + aws CLI v2 version: aws-cli/1.44.31 Python/3.10.17 Linux/5.10.247-246.989.amzn2.x86\_64 botocore/1.42.41
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 570.211.01
  + CUDA version: 12.2
  + ENA Driver version: 2.15.0g
  + EFA Installer version: 1.45.0
  + Python version: 3.7.16
  + Kubernetes version: v1.31.13-eks-ecaa3a6
  + iptables-services version: 1.8.4
  + nginx version: 1.20.1
  + nvme-cli version: 1.11.1
  + epel-release version: 7
  + stress version: 1.0.4
  + collectd version: 5.8.1
  + acl version: 2.2.51
  + rsyslog version: 8.24.0
  + lustre-client version: 2.12.8
  + systemd version: 219
  + openssh version: 7.4
  + sudo version: 1.8.23
  + gcc version: 7.3.1
  + cmake version: 2.8.12.2
  + git version: 2.47.3
  + make version: 3.82
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 1.3.0
  + lvm2 version: 2.02.187
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.5
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + EFA Installer version: 1.45.0
  + Python version: 3.9.25
  + Kubernetes version: v1.31.13-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.1
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (ARM64):
  + Linux Kernel version: 6.12
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.5
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + EFA Installer version: 1.43.3
  + Python version: 3.9.25
  + Kubernetes version: v1.31.13-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.1
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + nvidia-imex version: 580.126.09
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 58

------
#### [ Kubernetes v1.32 ]
+  **AL2 is now deprecated. The Kubernetes AMI is now based on AL2023.** 
+ AL2 (x86\_64):
  + Linux Kernel version: 5.10
  + Glibc version: 2.26
  + OpenSSL version: 1.0.2k-fips
  + FSx Lustre Client version: 2.12.8
  + Docker version: Docker version 25.0.14, build 0bab007
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + aws CLI v2 version: aws-cli/1.44.31 Python/3.10.17 Linux/5.10.247-246.989.amzn2.x86\_64 botocore/1.42.41
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 570.211.01
  + CUDA version: 12.2
  + ENA Driver version: 2.15.0g
  + EFA Installer version: 1.45.0
  + Python version: 3.7.16
  + Kubernetes version: v1.32.9-eks-ecaa3a6
  + iptables-services version: 1.8.4
  + nginx version: 1.20.1
  + nvme-cli version: 1.11.1
  + epel-release version: 7
  + stress version: 1.0.4
  + collectd version: 5.8.1
  + acl version: 2.2.51
  + rsyslog version: 8.24.0
  + lustre-client version: 2.12.8
  + systemd version: 219
  + openssh version: 7.4
  + sudo version: 1.8.23
  + gcc version: 7.3.1
  + cmake version: 2.8.12.2
  + git version: 2.47.3
  + make version: 3.82
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 1.3.0
  + lvm2 version: 2.02.187
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.5
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + EFA Installer version: 1.45.0
  + Python version: 3.9.25
  + Kubernetes version: v1.32.9-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.1
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (ARM64):
  + Linux Kernel version: 6.12
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.5
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + EFA Installer version: 1.43.3
  + Python version: 3.9.25
  + Kubernetes version: v1.32.9-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.1
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + nvidia-imex version: 580.126.09
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 58

------
#### [ Kubernetes v1.33 ]
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.5
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + EFA Installer version: 1.45.0
  + Python version: 3.9.25
  + Kubernetes version: v1.33.5-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.1
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (ARM64):
  + Linux Kernel version: 6.12
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.5
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + EFA Installer version: 1.43.3
  + Python version: 3.9.25
  + Kubernetes version: v1.33.5-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.1
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + nvidia-imex version: 580.126.09
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 58

------
#### [ Kubernetes v1.34 ]
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.5
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + EFA Installer version: 1.45.0
  + Python version: 3.9.25
  + Kubernetes version: v1.34.2-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.1
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (ARM64):
  + Linux Kernel version: 6.12
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.5
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + EFA Installer version: 1.43.3
  + Python version: 3.9.25
  + Kubernetes version: v1.34.2-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.1
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + nvidia-imex version: 580.126.09
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 58
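
To confirm which of the component versions listed above a node actually runs, you can query them directly. The following is a minimal sketch, assuming shell access to a cluster node built from one of these AMIs; each optional tool is guarded so the commands also complete on nodes where it is absent (for example, `nvidia-smi` on non-GPU instances).

```shell
# Query component versions reported in these release notes on a cluster node.
# Guarded commands are skipped cleanly when the tool is not installed.
uname -r                                   # Linux kernel version
openssl version                            # OpenSSL version
command -v containerd >/dev/null && containerd --version || true   # containerd
command -v runc >/dev/null && runc --version | head -n 1 || true   # runc
command -v kubelet >/dev/null && kubelet --version || true         # Kubernetes node version
command -v nvidia-smi >/dev/null && \
  nvidia-smi --query-gpu=driver_version --format=csv,noheader || true  # NVIDIA driver (GPU nodes)
```

Comparing this output against the release notes for your AMI version is a quick way to verify that a node was launched from the AMI you expect.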

------

## SageMaker HyperPod AMI releases for Amazon EKS: January 25, 2026
<a name="sagemaker-hyperpod-release-ami-eks-20260125"></a>

 **AMI general updates** 
+ Released updates for the SageMaker HyperPod AMIs for Amazon EKS versions 1.28, 1.29, 1.30, 1.31, 1.32, 1.33, and 1.34.
+ The base DLAMI release notes are available [here](https://docs.amazonaws.cn//dlami/latest/devguide/appendix-ami-release-notes.html#appendix-ami-release-notes-base).

 **SageMaker HyperPod DLAMI for Amazon EKS support** 

This release includes the following updates:

------
#### [ Kubernetes v1.28 ]
+  **AL2 is now deprecated. The Kubernetes AMI is now based on AL2023.** 
+ AL2 (x86\_64):
  + Linux Kernel version: 5.10
  + Glibc version: 2.26
  + OpenSSL version: 1.0.2k-fips
  + FSx Lustre Client version: 2.12.8
  + Docker version: Docker version 25.0.14, build 0bab007
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + aws CLI v2 version: aws-cli/1.44.21 Python/3.10.17 Linux/5.10.247-246.989.amzn2.x86\_64 botocore/1.42.31
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 570.211.01
  + CUDA version: 12.2
  + ENA Driver version: 2.15.0g
  + Python version: 3.7.16
  + Kubernetes version: v1.28.15-eks-ecaa3a6
  + iptables-services version: 1.8.4
  + nginx version: 1.20.1
  + nvme-cli version: 1.11.1
  + epel-release version: 7
  + stress version: 1.0.4
  + collectd version: 5.8.1
  + acl version: 2.2.51
  + rsyslog version: 8.24.0
  + lustre-client version: 2.12.8
  + systemd version: 219
  + openssh version: 7.4
  + sudo version: 1.8.23
  + gcc version: 7.3.1
  + cmake version: 2.8.12.2
  + git version: 2.47.3
  + make version: 3.82
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 1.3.0
  + lvm2 version: 2.02.187
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd 1.7.27
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.25
  + Kubernetes version: v1.28.15-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0

------
#### [ Kubernetes v1.29 ]
+  **AL2 is now deprecated. The Kubernetes AMI is now based on AL2023.** 
+ AL2 (x86\_64):
  + Linux Kernel version: 5.10
  + Glibc version: 2.26
  + OpenSSL version: 1.0.2k-fips
  + FSx Lustre Client version: 2.12.8
  + Docker version: Docker version 25.0.14, build 0bab007
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + aws CLI v2 version: aws-cli/1.44.21 Python/3.10.17 Linux/5.10.247-246.989.amzn2.x86\_64 botocore/1.42.31
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 570.211.01
  + CUDA version: 12.2
  + ENA Driver version: 2.15.0g
  + Python version: 3.7.16
  + Kubernetes version: v1.29.15-eks-ecaa3a6
  + iptables-services version: 1.8.4
  + nginx version: 1.20.1
  + nvme-cli version: 1.11.1
  + epel-release version: 7
  + stress version: 1.0.4
  + collectd version: 5.8.1
  + acl version: 2.2.51
  + rsyslog version: 8.24.0
  + lustre-client version: 2.12.8
  + systemd version: 219
  + openssh version: 7.4
  + sudo version: 1.8.23
  + gcc version: 7.3.1
  + cmake version: 2.8.12.2
  + git version: 2.47.3
  + make version: 3.82
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 1.3.0
  + lvm2 version: 2.02.187
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.25
  + Kubernetes version: v1.29.15-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0

------
#### [ Kubernetes v1.30 ]
+  **AL2 is now deprecated. The Kubernetes AMI is now based on AL2023.** 
+ AL2 (x86\_64):
  + Linux Kernel version: 5.10
  + Glibc version: 2.26
  + OpenSSL version: 1.0.2k-fips
  + FSx Lustre Client version: 2.12.8
  + Docker version: Docker version 25.0.14, build 0bab007
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + aws CLI v2 version: aws-cli/1.44.21 Python/3.10.17 Linux/5.10.247-246.989.amzn2.x86\_64 botocore/1.42.31
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 570.211.01
  + CUDA version: 12.2
  + ENA Driver version: 2.15.0g
  + Python version: 3.7.16
  + Kubernetes version: v1.30.14-eks-ecaa3a6
  + iptables-services version: 1.8.4
  + nginx version: 1.20.1
  + nvme-cli version: 1.11.1
  + epel-release version: 7
  + stress version: 1.0.4
  + collectd version: 5.8.1
  + acl version: 2.2.51
  + rsyslog version: 8.24.0
  + lustre-client version: 2.12.8
  + systemd version: 219
  + openssh version: 7.4
  + sudo version: 1.8.23
  + gcc version: 7.3.1
  + cmake version: 2.8.12.2
  + git version: 2.47.3
  + make version: 3.82
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 1.3.0
  + lvm2 version: 2.02.187
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.25
  + Kubernetes version: v1.30.14-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0

------
#### [ Kubernetes v1.31 ]
+  **AL2 is now deprecated. The Kubernetes AMI is now based on AL2023.** 
+ AL2 (x86\_64):
  + Linux Kernel version: 5.10
  + Glibc version: 2.26
  + OpenSSL version: 1.0.2k-fips
  + FSx Lustre Client version: 2.12.8
  + Docker version: Docker version 25.0.14, build 0bab007
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + aws CLI v2 version: aws-cli/1.44.21 Python/3.10.17 Linux/5.10.247-246.989.amzn2.x86\_64 botocore/1.42.31
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 570.211.01
  + CUDA version: 12.2
  + ENA Driver version: 2.15.0g
  + Python version: 3.7.16
  + Kubernetes version: v1.31.13-eks-ecaa3a6
  + iptables-services version: 1.8.4
  + nginx version: 1.20.1
  + nvme-cli version: 1.11.1
  + epel-release version: 7
  + stress version: 1.0.4
  + collectd version: 5.8.1
  + acl version: 2.2.51
  + rsyslog version: 8.24.0
  + lustre-client version: 2.12.8
  + systemd version: 219
  + openssh version: 7.4
  + sudo version: 1.8.23
  + gcc version: 7.3.1
  + cmake version: 2.8.12.2
  + git version: 2.47.3
  + make version: 3.82
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 1.3.0
  + lvm2 version: 2.02.187
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.5
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.25
  + Kubernetes version: v1.31.13-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (ARM64):
  + Linux Kernel version: 6.12
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.5
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.25
  + Kubernetes version: v1.31.13-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + nvidia-imex version: 580.126.09
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 58

------
#### [ Kubernetes v1.32 ]
+  **AL2 is now deprecated. The Kubernetes AMI is now based on AL2023.** 
+ AL2 (x86\_64):
  + Linux Kernel version: 5.10
  + Glibc version: 2.26
  + OpenSSL version: 1.0.2k-fips
  + FSx Lustre Client version: 2.12.8
  + Docker version: Docker version 25.0.14, build 0bab007
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + aws CLI v2 version: aws-cli/1.44.21 Python/3.10.17 Linux/5.10.247-246.989.amzn2.x86\_64 botocore/1.42.31
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 570.211.01
  + CUDA version: 12.2
  + ENA Driver version: 2.15.0g
  + Python version: 3.7.16
  + Kubernetes version: v1.32.9-eks-ecaa3a6
  + iptables-services version: 1.8.4
  + nginx version: 1.20.1
  + nvme-cli version: 1.11.1
  + epel-release version: 7
  + stress version: 1.0.4
  + collectd version: 5.8.1
  + acl version: 2.2.51
  + rsyslog version: 8.24.0
  + lustre-client version: 2.12.8
  + systemd version: 219
  + openssh version: 7.4
  + sudo version: 1.8.23
  + gcc version: 7.3.1
  + cmake version: 2.8.12.2
  + git version: 2.47.3
  + make version: 3.82
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 1.3.0
  + lvm2 version: 2.02.187
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.5
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.25
  + Kubernetes version: v1.32.9-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (ARM64):
  + Linux Kernel version: 6.12
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.5
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.25
  + Kubernetes version: v1.32.9-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + nvidia-imex version: 580.126.09
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 58

------
#### [ Kubernetes v1.33 ]
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.5
  + aws Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.25
  + Kubernetes version: v1.33.5-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (ARM64):
  + Linux Kernel version: 6.12
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.5
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.25
  + Kubernetes version: v1.33.5-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + nvidia-imex version: 580.126.09
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 58

------
#### [ Kubernetes v1.34 ]
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.5
  + AWS Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.25
  + Kubernetes version: v1.34.2-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (ARM64):
  + Linux Kernel version: 6.12
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.4
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.5
  + NVIDIA Driver version: 580.126.09
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.25
  + Kubernetes version: v1.34.2-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + nvidia-imex version: 580.126.09
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300062.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 58

------

## SageMaker HyperPod AMI releases for Amazon EKS: December 29, 2025
<a name="sagemaker-hyperpod-release-ami-eks-20251229"></a>

 **AMI general updates** 
+ Released updates for the SageMaker HyperPod AMI for Amazon EKS versions 1.28, 1.29, 1.30, 1.31, 1.32, and 1.33.
+ Release notes for the base DLAMI are available [here](https://docs.amazonaws.cn//dlami/latest/devguide/appendix-ami-release-notes.html#appendix-ami-release-notes-base).

 **SageMaker HyperPod DLAMI for Amazon EKS support** 

This release includes the following updates:

------
#### [ Kubernetes v1.28 ]
+  **AL2 is now deprecated. The Kubernetes AMI is based on AL2023.** 
+ AL2 (x86\_64):
  + Linux Kernel version: 5.10
  + Glibc version: 2.26
  + OpenSSL version: 1.0.2k-fips
  + FSx Lustre Client version: 2.12.8
  + Docker version: Docker version 25.0.13, build 0bab007
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + AWS CLI version: aws-cli/1.44.4 Python/3.10.17 Linux/5.10.245-245.983.amzn2.x86\_64 botocore/1.42.14
  + AWS Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 570.195.03
  + CUDA version: 12.2
  + ENA Driver version: 2.15.0g
  + Python version: 3.7.16
  + Kubernetes version: v1.28.15-eks-ecaa3a6
  + iptables-services version: 1.8.4
  + nginx version: 1.20.1
  + nvme-cli version: 1.11.1
  + epel-release version: 7
  + stress version: 1.0.4
  + collectd version: 5.8.1
  + acl version: 2.2.51
  + rsyslog version: 8.24.0
  + lustre-client version: 2.12.8
  + systemd version: 219
  + openssh version: 7.4
  + sudo version: 1.8.23
  + gcc version: 7.3.1
  + cmake version: 2.8.12.2
  + git version: 2.47.3
  + make version: 3.82
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 1.3.0
  + lvm2 version: 2.02.187
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd 1.7.27
  + AWS Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.105.08
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.25
  + Kubernetes version: v1.28.15-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0

------
#### [ Kubernetes v1.29 ]
+  **AL2 is now deprecated. The Kubernetes AMI is based on AL2023.** 
+ AL2 (x86\_64):
  + Linux Kernel version: 5.10
  + Glibc version: 2.26
  + OpenSSL version: 1.0.2k-fips
  + FSx Lustre Client version: 2.12.8
  + Docker version: Docker version 25.0.13, build 0bab007
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + AWS CLI version: aws-cli/1.44.4 Python/3.10.17 Linux/5.10.245-245.983.amzn2.x86\_64 botocore/1.42.14
  + AWS Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 570.195.03
  + CUDA version: 12.2
  + ENA Driver version: 2.15.0g
  + Python version: 3.7.16
  + Kubernetes version: v1.29.15-eks-ecaa3a6
  + iptables-services version: 1.8.4
  + nginx version: 1.20.1
  + nvme-cli version: 1.11.1
  + epel-release version: 7
  + stress version: 1.0.4
  + collectd version: 5.8.1
  + acl version: 2.2.51
  + rsyslog version: 8.24.0
  + lustre-client version: 2.12.8
  + systemd version: 219
  + openssh version: 7.4
  + sudo version: 1.8.23
  + gcc version: 7.3.1
  + cmake version: 2.8.12.2
  + git version: 2.47.3
  + make version: 3.82
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 1.3.0
  + lvm2 version: 2.02.187
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + AWS Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.105.08
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.25
  + Kubernetes version: v1.29.15-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0

------
#### [ Kubernetes v1.30 ]
+  **AL2 is now deprecated. The Kubernetes AMI is based on AL2023.** 
+ AL2 (x86\_64):
  + Linux Kernel version: 5.10
  + Glibc version: 2.26
  + OpenSSL version: 1.0.2k-fips
  + FSx Lustre Client version: 2.12.8
  + Docker version: Docker version 25.0.13, build 0bab007
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + AWS CLI version: aws-cli/1.44.4 Python/3.10.17 Linux/5.10.245-245.983.amzn2.x86\_64 botocore/1.42.14
  + AWS Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 570.195.03
  + CUDA version: 12.2
  + ENA Driver version: 2.15.0g
  + Python version: 3.7.16
  + Kubernetes version: v1.30.14-eks-ecaa3a6
  + iptables-services version: 1.8.4
  + nginx version: 1.20.1
  + nvme-cli version: 1.11.1
  + epel-release version: 7
  + stress version: 1.0.4
  + collectd version: 5.8.1
  + acl version: 2.2.51
  + rsyslog version: 8.24.0
  + lustre-client version: 2.12.8
  + systemd version: 219
  + openssh version: 7.4
  + sudo version: 1.8.23
  + gcc version: 7.3.1
  + cmake version: 2.8.12.2
  + git version: 2.47.3
  + make version: 3.82
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 1.3.0
  + lvm2 version: 2.02.187
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + AWS Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.105.08
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.25
  + Kubernetes version: v1.30.14-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0

------
#### [ Kubernetes v1.31 ]
+  **AL2 is now deprecated. The Kubernetes AMI is based on AL2023.** 
+ AL2 (x86\_64):
  + Linux Kernel version: 5.10
  + Glibc version: 2.26
  + OpenSSL version: 1.0.2k-fips
  + FSx Lustre Client version: 2.12.8
  + Docker version: Docker version 25.0.13, build 0bab007
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + AWS CLI version: aws-cli/1.44.4 Python/3.10.17 Linux/5.10.245-245.983.amzn2.x86\_64 botocore/1.42.14
  + AWS Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 570.195.03
  + CUDA version: 12.2
  + ENA Driver version: 2.15.0g
  + Python version: 3.7.16
  + Kubernetes version: v1.31.13-eks-ecaa3a6
  + iptables-services version: 1.8.4
  + nginx version: 1.20.1
  + nvme-cli version: 1.11.1
  + epel-release version: 7
  + stress version: 1.0.4
  + collectd version: 5.8.1
  + acl version: 2.2.51
  + rsyslog version: 8.24.0
  + lustre-client version: 2.12.8
  + systemd version: 219
  + openssh version: 7.4
  + sudo version: 1.8.23
  + gcc version: 7.3.1
  + cmake version: 2.8.12.2
  + git version: 2.47.3
  + make version: 3.82
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 1.3.0
  + lvm2 version: 2.02.187
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.4
  + AWS Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.105.08
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.25
  + Kubernetes version: v1.31.13-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (ARM64):
  + Linux Kernel version: 6.12
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.4
  + NVIDIA Driver version: 580.105.08
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.25
  + Kubernetes version: v1.31.13-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + nvidia-imex version: 580.105.08
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 58

------
#### [ Kubernetes v1.32 ]
+  **AL2 is now deprecated. The Kubernetes AMI is based on AL2023.** 
+ AL2 (x86\_64):
  + Linux Kernel version: 5.10
  + Glibc version: 2.26
  + OpenSSL version: 1.0.2k-fips
  + FSx Lustre Client version: 2.12.8
  + Docker version: Docker version 25.0.13, build 0bab007
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd 1.7.29
  + AWS CLI version: aws-cli/1.44.4 Python/3.10.17 Linux/5.10.245-245.983.amzn2.x86\_64 botocore/1.42.14
  + AWS Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 570.195.03
  + CUDA version: 12.2
  + ENA Driver version: 2.15.0g
  + Python version: 3.7.16
  + Kubernetes version: v1.32.9-eks-ecaa3a6
  + iptables-services version: 1.8.4
  + nginx version: 1.20.1
  + nvme-cli version: 1.11.1
  + epel-release version: 7
  + stress version: 1.0.4
  + collectd version: 5.8.1
  + acl version: 2.2.51
  + rsyslog version: 8.24.0
  + lustre-client version: 2.12.8
  + systemd version: 219
  + openssh version: 7.4
  + sudo version: 1.8.23
  + gcc version: 7.3.1
  + cmake version: 2.8.12.2
  + git version: 2.47.3
  + make version: 3.82
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 1.3.0
  + lvm2 version: 2.02.187
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.4
  + AWS Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.105.08
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.25
  + Kubernetes version: v1.32.9-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (ARM64):
  + Linux Kernel version: 6.12
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.4
  + NVIDIA Driver version: 580.105.08
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.25
  + Kubernetes version: v1.32.9-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + nvidia-imex version: 580.105.08
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 58

------
#### [ Kubernetes v1.33 ]
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.4
  + AWS Neuronx DKMS version: 2.25.4.0
  + NVIDIA Driver version: 580.105.08
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.25
  + Kubernetes version: v1.33.5-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 60.0
+ AL2023 (ARM64):
  + Linux Kernel version: 6.12
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd/v2 2.1.4
  + NVIDIA Driver version: 580.105.08
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.25
  + Kubernetes version: v1.33.5-eks-ecaa3a6
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + nvidia-imex version: 580.105.08
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 58

------
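After updating a cluster to a new AMI release, the versions reported on a node can be spot-checked against the lists above. A minimal sketch (the `check` helper and the set of tools queried are illustrative assumptions, not an official verification procedure):

```shell
#!/usr/bin/env bash
# Hypothetical spot check: print the first line of each tool's version output
# so it can be compared against the release list. Missing tools are skipped.
set -u

check() {
  local tool=$1; shift
  if command -v "$tool" >/dev/null 2>&1; then
    printf '%s: %s\n' "$tool" "$("$tool" "$@" 2>/dev/null | head -n 1)"
  else
    printf '%s: not installed\n' "$tool"
  fi
}

check uname -r        # Linux kernel version
check containerd --version
check runc --version
check git --version
check python3 --version
check nvidia-smi --query-gpu=driver_version --format=csv,noheader
```

Running this on an instance launched from the updated AMI should report versions matching the corresponding list above; GPU-specific tools such as `nvidia-smi` are only present on GPU instances.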

## SageMaker HyperPod AMI releases for Amazon EKS: November 22, 2025
<a name="sagemaker-hyperpod-release-ami-eks-20251128"></a>

 **AMI general updates** 
+ Released updates for the SageMaker HyperPod AMI for Amazon EKS versions 1.28, 1.29, 1.30, 1.31, 1.32, and 1.33.
+ Release notes for the base DLAMI are available [here](https://docs.amazonaws.cn//dlami/latest/devguide/appendix-ami-release-notes.html#appendix-ami-release-notes-base).

 **SageMaker HyperPod DLAMI for Amazon EKS support** 

This release includes the following updates:

------
#### [ Kubernetes v1.28 ]
+  **AL2 is now deprecated. The Kubernetes AMI is based on AL2023.** 
+ AL2 (x86\_64):
  + Linux Kernel version: 5.10
  + Glibc version: 2.26
  + OpenSSL version: 1.0.2k-fips
  + FSx Lustre Client version: 2.12.8
  + Docker version: Docker version 25.0.13, build 0bab007
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd 1.7.27
  + AWS CLI version: aws-cli/1.42.71 Python/3.10.17 Linux/5.10.245-241.978.amzn2.x86\_64 botocore/1.40.71
  + AWS Neuronx DKMS version: 2.24.7.0
  + NVIDIA Driver version: 570.195.03
  + CUDA version: 12.2
  + ENA Driver version: 2.15.0g
  + Python version: 3.7.16
  + Kubernetes version: v1.28.15-eks-473151a
  + iptables-services version: 1.8.4
  + nginx version: 1.20.1
  + nvme-cli version: 1.11.1
  + epel-release version: 7
  + stress version: 1.0.4
  + collectd version: 5.8.1
  + acl version: 2.2.51
  + rsyslog version: 8.24.0
  + lustre-client version: 2.12.8
  + systemd version: 219
  + openssh version: 7.4
  + sudo version: 1.8.23
  + gcc version: 7.3.1
  + cmake version: 2.8.12.2
  + git version: 2.47.3
  + make version: 3.82
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 1.3.0
  + lvm2 version: 2.02.187
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 59
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd 1.7.27
  + AWS Neuronx DKMS version: 2.24.7.0
  + NVIDIA Driver version: 580.95.05
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.24
  + Kubernetes version: v1.28.15-eks-473151a
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 59

------
#### [ Kubernetes v1.29 ]
+  **AL2 is now deprecated. The Kubernetes AMI is based on AL2023.** 
+ AL2 (x86\_64):
  + Linux Kernel version: 5.10
  + Glibc version: 2.26
  + OpenSSL version: 1.0.2k-fips
  + FSx Lustre Client version: 2.12.8
  + Docker version: Docker version 25.0.13, build 0bab007
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd 1.7.27
  + AWS CLI version: aws-cli/1.42.71 Python/3.10.17 Linux/5.10.245-241.978.amzn2.x86\_64 botocore/1.40.71
  + AWS Neuronx DKMS version: 2.24.7.0
  + NVIDIA Driver version: 570.195.03
  + CUDA version: 12.2
  + ENA Driver version: 2.15.0g
  + Python version: 3.7.16
  + Kubernetes version: v1.29.15-eks-473151a
  + iptables-services version: 1.8.4
  + nginx version: 1.20.1
  + nvme-cli version: 1.11.1
  + epel-release version: 7
  + stress version: 1.0.4
  + collectd version: 5.8.1
  + acl version: 2.2.51
  + rsyslog version: 8.24.0
  + lustre-client version: 2.12.8
  + systemd version: 219
  + openssh version: 7.4
  + sudo version: 1.8.23
  + gcc version: 7.3.1
  + cmake version: 2.8.12.2
  + git version: 2.47.3
  + make version: 3.82
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 1.3.0
  + lvm2 version: 2.02.187
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 59
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd 1.7.27
  + AWS Neuronx DKMS version: 2.24.7.0
  + NVIDIA Driver version: 580.95.05
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.24
  + Kubernetes version: v1.29.15-eks-473151a
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 59

------
#### [ Kubernetes v1.30 ]
+  **AL2 is now deprecated. The Kubernetes AMI is based on AL2023.** 
+ AL2 (x86\_64):
  + Linux Kernel version: 5.10
  + Glibc version: 2.26
  + OpenSSL version: 1.0.2k-fips
  + FSx Lustre Client version: 2.12.8
  + Docker version: Docker version 25.0.13, build 0bab007
  + Runc version: 1.3.2
  + Containerd version: containerd github.com/containerd/containerd 1.7.27
  + AWS CLI version: aws-cli/1.42.69 Python/3.10.17 Linux/5.10.245-241.976.amzn2.x86\_64 botocore/1.40.69
  + AWS Neuronx DKMS version: 2.24.7.0
  + NVIDIA Driver version: 570.195.03
  + CUDA version: 12.2
  + ENA Driver version: 2.15.0g
  + Python version: 3.7.16
  + Kubernetes version: v1.30.11-eks-473151a
  + iptables-services version: 1.8.4
  + nginx version: 1.20.1
  + nvme-cli version: 1.11.1
  + epel-release version: 7
  + stress version: 1.0.4
  + collectd version: 5.8.1
  + acl version: 2.2.51
  + rsyslog version: 8.24.0
  + lustre-client version: 2.12.8
  + systemd version: 219
  + openssh version: 7.4
  + sudo version: 1.8.23
  + gcc version: 7.3.1
  + cmake version: 2.8.12.2
  + git version: 2.47.3
  + make version: 3.82
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 1.3.0
  + lvm2 version: 2.02.187
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 58
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd 1.7.27
  + AWS Neuronx DKMS version: 2.24.7.0
  + NVIDIA Driver version: 580.95.05
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.24
  + Kubernetes version: v1.30.11-eks-473151a
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 59

------
#### [ Kubernetes v1.31 ]
+  **AL2 is now deprecated. The Kubernetes AMI is based on AL2023.** 
+ AL2 (x86\_64):
  + Linux Kernel version: 5.10
  + Glibc version: 2.26
  + OpenSSL version: 1.0.2k-fips
  + FSx Lustre Client version: 2.12.8
  + Docker version: Docker version 25.0.13, build 0bab007
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd 1.7.27
  + AWS CLI version: aws-cli/1.42.71 Python/3.10.17 Linux/5.10.245-241.978.amzn2.x86\_64 botocore/1.40.71
  + AWS Neuronx DKMS version: 2.24.7.0
  + NVIDIA Driver version: 570.195.03
  + CUDA version: 12.2
  + ENA Driver version: 2.15.0g
  + Python version: 3.7.16
  + Kubernetes version: v1.31.7-eks-473151a
  + iptables-services version: 1.8.4
  + nginx version: 1.20.1
  + nvme-cli version: 1.11.1
  + epel-release version: 7
  + stress version: 1.0.4
  + collectd version: 5.8.1
  + acl version: 2.2.51
  + rsyslog version: 8.24.0
  + lustre-client version: 2.12.8
  + systemd version: 219
  + openssh version: 7.4
  + sudo version: 1.8.23
  + gcc version: 7.3.1
  + cmake version: 2.8.12.2
  + git version: 2.47.3
  + make version: 3.82
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 1.3.0
  + lvm2 version: 2.02.187
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 59
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd 1.7.27
  + AWS Neuronx DKMS version: 2.24.7.0
  + NVIDIA Driver version: 580.95.05
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.24
  + Kubernetes version: v1.31.13-eks-113cf36
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 59
+ AL2023 (ARM64):
  + Linux Kernel version: 6.12
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd 1.7.27
  + NVIDIA Driver version: 580.95.05
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.24
  + Kubernetes version: v1.31.13-eks-113cf36
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + nvidia-imex version: 580.95.05
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 58

------
#### [ Kubernetes v1.32 ]
+  **AL2 is now deprecated. The Kubernetes AMI is based on AL2023.** 
+ AL2 (x86\_64):
  + Linux Kernel version: 5.10
  + Glibc version: 2.26
  + OpenSSL version: 1.0.2k-fips
  + FSx Lustre Client version: 2.12.8
  + Docker version: Docker version 25.0.13, build 0bab007
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd 1.7.27
  + AWS CLI version: aws-cli/1.42.74 Python/3.10.17 Linux/5.10.245-241.978.amzn2.x86\_64 botocore/1.40.74
  + AWS Neuronx DKMS version: 2.24.7.0
  + NVIDIA Driver version: 570.195.03
  + CUDA version: 12.2
  + ENA Driver version: 2.15.0g
  + Python version: 3.7.16
  + Kubernetes version: v1.32.3-eks-473151a
  + iptables-services version: 1.8.4
  + nginx version: 1.20.1
  + nvme-cli version: 1.11.1
  + epel-release version: 7
  + stress version: 1.0.4
  + collectd version: 5.8.1
  + acl version: 2.2.51
  + rsyslog version: 8.24.0
  + lustre-client version: 2.12.8
  + systemd version: 219
  + openssh version: 7.4
  + sudo version: 1.8.23
  + gcc version: 7.3.1
  + cmake version: 2.8.12.2
  + git version: 2.47.3
  + make version: 3.82
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 1.3.0
  + lvm2 version: 2.02.187
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 59.
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd 1.7.27
  + aws Neuronx DKMS version: 2.24.7.0
  + NVIDIA Driver version: 580.95.05
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.24
  + Kubernetes version: v1.32.9-eks-113cf36
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 59.
+ AL2023 (ARM64):
  + Linux Kernel version: 6.12
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd 1.7.27
  + NVIDIA Driver version: 580.95.05
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.24
  + Kubernetes version: v1.32.9-eks-113cf36
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + nvidia-imex version: 580.95.05
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 58.

------
#### [ Kubernetes v1.33 ]
+ AL2023 (x86\_64):
  + Linux Kernel version: 6.1
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd 1.7.27
  + aws Neuronx DKMS version: 2.24.7.0
  + NVIDIA Driver version: 580.95.05
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.24
  + Kubernetes version: v1.33.5-eks-113cf36
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 59.
+ AL2023 (ARM64):
  + Linux Kernel version: 6.12
  + Glibc version: 2.34
  + OpenSSL version: 3.2.2
  + FSx Lustre Client version: 2.15.6
  + Runc version: 1.3.3
  + Containerd version: containerd github.com/containerd/containerd 1.7.27
  + NVIDIA Driver version: 580.95.05
  + CUDA version: 12.8
  + ENA Driver version: 2.15.0g
  + Python version: 3.9.24
  + Kubernetes version: v1.33.5-eks-113cf36
  + iptables-services version: 1.8.8
  + nginx version: 1.28.0
  + nvme-cli version: 2.13 1.13
  + stress version: 1.0.7
  + collectd version: 5.12.0
  + acl version: 2.3.1
  + lustre-client version: 2.15.6
  + nvidia-imex version: 580.95.05
  + systemd version: 252
  + openssh version: 8.7
  + sudo version: 1.9.15
  + gcc version: 11.5.0
  + cmake version: 3.22.2
  + git version: 2.50.1
  + make version: 4.3
  + cloudwatch-agent version: 1.300060.1
  + nfs-utils version: 2.5.4
  + lvm2 version: 2.03.16
  + ec2-instance-connect version: 1.1
  + aws-cfn-bootstrap version: 2.0
  + rdma-core version: 58.

------

## SageMaker HyperPod AMI releases for Amazon EKS: November 07, 2025
<a name="sagemaker-hyperpod-release-ami-eks-20251107"></a>

**AMI general updates**
+ Released updates for SageMaker HyperPod AMI for Amazon EKS versions 1.28, 1.29, 1.30, 1.31, 1.32, and 1.33. 
+ Base DLAMI release note is available [here](https://docs.amazonaws.cn//dlami/latest/devguide/appendix-ami-release-notes.html#appendix-ami-release-notes-base).

**SageMaker HyperPod DLAMI for Amazon EKS support**

This release includes the following updates:

------
#### [ Kubernetes v1.28 ]
+ **Amazon Linux 2 is now deprecated. Kubernetes AMI is based on AL2023.**
+ AL2 (x86\_64):
  + NVIDIA driver version: 570.195.03
  + CUDA version: 12.8
  + Kubernetes version: 1.28.15
+ AL2023 (x86\_64):
  + NVIDIA driver version: 580.95.05
  + CUDA version: 13.0
  + Kubernetes version: 1.28.15
+ Package updates include boto3, botocore, pip, regex, psutil, and nvidia container toolkit components.
+ Added package: annotated-doc 0.0.3

------
#### [ Kubernetes v1.29 ]
+ **Amazon Linux 2 is now deprecated. Kubernetes AMI is based on AL2023.**
+ AL2 (x86\_64):
  + NVIDIA driver version: 570.195.03
  + CUDA version: 12.8
  + Kubernetes version: 1.29.15
+ AL2023 (x86\_64):
  + NVIDIA driver version: 580.95.05
  + CUDA version: 13.0
  + Kubernetes version: 1.29.15
+ Package updates include kernel updates, glibc updates, and various system libraries.
+ Added package: annotated-doc 0.0.3

------
#### [ Kubernetes v1.30 ]
+ **Amazon Linux 2 is now deprecated. Kubernetes AMI is based on AL2023.**
+ AL2 (x86\_64):
  + NVIDIA driver version: 570.195.03
  + CUDA version: 12.8
  + Kubernetes version: 1.30.11
+ AL2023 (x86\_64):
  + NVIDIA driver version: 580.95.05
  + CUDA version: 13.0
  + Kubernetes version: 1.30.11
+ Package updates include kernel livepatch updates and system library updates.
+ Added package: annotated-doc 0.0.3

------
#### [ Kubernetes v1.31 ]
+ **Amazon Linux 2 is now deprecated. Kubernetes AMI is based on AL2023.**
+ AL2 (x86\_64):
  + NVIDIA driver version: 570.195.03
  + CUDA version: 12.8
  + Kubernetes version: 1.31.7
+ AL2023 (x86\_64):
  + NVIDIA driver version: 580.95.05
  + CUDA version: 13.0
  + Kubernetes version: 1.31.13
+ AL2023 (ARM64):
  + NVIDIA driver version: 580.95.05
  + CUDA version: 13.0
  + Kubernetes version: 1.31.13
  + Kernel version: 6.12.46-66.121.amzn2023.aarch64
+ Package updates include extensive system library updates, kernel updates, and boost library updates.
+ Added packages: apr-util-lmdb, kernel-livepatch-6.1.156-177.286

------
#### [ Kubernetes v1.32 ]
+ **Amazon Linux 2 is now deprecated. Kubernetes AMI is based on AL2023.**
+ AL2 (x86\_64):
  + NVIDIA driver version: 570.195.03
  + CUDA version: 12.8
  + Kubernetes version: 1.32.3
  + Amazon IAM Authenticator version: v0.6.29
+ AL2023 (x86\_64):
  + NVIDIA driver version: 580.95.05
  + CUDA version: 13.0
  + Kubernetes version: 1.32.9
+ AL2023 (ARM64):
  + NVIDIA driver version: 580.95.05
  + CUDA version: 13.0
  + Kubernetes version: 1.32.9
  + Kernel version: 6.12.46-66.121.amzn2023.aarch64
+ Package updates include kernel livepatch updates and system library updates.
+ Added package: annotated-doc 0.0.3

------
#### [ Kubernetes v1.33 ]
+ AL2023 (x86\_64):
  + NVIDIA driver version: 580.95.05
  + CUDA version: 13.0
  + Kubernetes version: 1.33.5
  + Kernel version: 6.1.155-176.282.amzn2023.x86\_64
+ AL2023 (ARM64):
  + NVIDIA driver version: 580.95.05
  + CUDA version: 13.0
  + Kubernetes version: 1.33.5
  + Kernel version: 6.12.46-66.121.amzn2023.aarch64
+ Package updates include extensive system library updates, kernel updates, and boost library updates.
+ Added packages: apr-util-lmdb, kernel-livepatch updates

------

**Note**  
The runc version has been upgraded to 1.3.2. For more information, see the [Security bulletin](https://aws.amazon.com/security/security-bulletins/rss/aws-2025-024/).
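To confirm that a node carries the patched runc, you can compare the version the runtime reports against 1.3.2 using `sort -V`. The sketch below is illustrative; the `check_runc_patched` helper is not part of the AMI.

```shell
check_runc_patched() {
  # Succeeds when $1 is 1.3.2 or newer: sort -V orders version strings
  # numerically, so the minimum sorts first only if $1 is >= the minimum.
  min="1.3.2"
  [ "$(printf '%s\n%s\n' "$min" "$1" | sort -V | head -n1)" = "$min" ]
}

# On a cluster node you would feed it the live version, for example:
#   check_runc_patched "$(runc --version | awk 'NR==1 {print $3}')"
check_runc_patched "1.3.3" && echo "patched"
```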

## SageMaker HyperPod AMI releases for Amazon EKS: October 29, 2025
<a name="sagemaker-hyperpod-release-ami-eks-20251029"></a>

**AMI general updates**
+ Released updates for SageMaker HyperPod AMI for Amazon EKS versions 1.28, 1.29, 1.30, 1.31, 1.32, and 1.33. 
+ Base DLAMI release note is available [here](https://docs.amazonaws.cn//dlami/latest/devguide/aws-deep-learning-ami-baseoss-aml2-2025-10-14.html).

**SageMaker HyperPod DLAMI for Amazon EKS support**

This release includes the following updates:

------
#### [ Kubernetes v1.28 ]
+ **Amazon Linux 2 is now deprecated. Kubernetes AMI is based on AL2023.**
+ AL2 (x86\_64):
  + NVIDIA driver version: 570.195.03
  + CUDA version: 12.8
  + Kubernetes version: 1.28.15
+ AL2023 (x86\_64):
  + NVIDIA driver version: 580.95.05
  + CUDA version: 13.0
  + Kubernetes version: 1.28.15
+ Package updates include boto3, botocore, pip, regex, psutil, and nvidia container toolkit components.
+ Added package: annotated-doc 0.0.3

------
#### [ Kubernetes v1.29 ]
+ **Amazon Linux 2 is now deprecated. Kubernetes AMI is based on AL2023.**
+ AL2 (x86\_64):
  + NVIDIA driver version: 570.195.03
  + CUDA version: 12.8
  + Kubernetes version: 1.29.15
+ AL2023 (x86\_64):
  + NVIDIA driver version: 580.95.05
  + CUDA version: 13.0
  + Kubernetes version: 1.29.15
+ Package updates include kernel updates, glibc updates, and various system libraries.
+ Added package: annotated-doc 0.0.3

------
#### [ Kubernetes v1.30 ]
+ **Amazon Linux 2 is now deprecated. Kubernetes AMI is based on AL2023.**
+ AL2 (x86\_64):
  + NVIDIA driver version: 570.195.03
  + CUDA version: 12.8
  + Kubernetes version: 1.30.11
+ AL2023 (x86\_64):
  + NVIDIA driver version: 580.95.05
  + CUDA version: 13.0
  + Kubernetes version: 1.30.11
+ Package updates include kernel livepatch updates and system library updates.
+ Added package: annotated-doc 0.0.3

------
#### [ Kubernetes v1.31 ]
+ **Amazon Linux 2 is now deprecated. Kubernetes AMI is based on AL2023.**
+ AL2 (x86\_64):
  + NVIDIA driver version: 570.195.03
  + CUDA version: 12.8
  + Kubernetes version: 1.31.7
+ AL2023 (x86\_64):
  + NVIDIA driver version: 580.95.05
  + CUDA version: 13.0
  + Kubernetes version: 1.31.13
+ AL2023 (ARM64):
  + NVIDIA driver version: 580.95.05
  + CUDA version: 13.0
  + Kubernetes version: 1.31.13
  + Kernel version: 6.12.46-66.121.amzn2023.aarch64
+ Package updates include extensive system library updates, kernel updates, and boost library updates.
+ Added packages: apr-util-lmdb, kernel-livepatch-6.1.156-177.286

------
#### [ Kubernetes v1.32 ]
+ **Amazon Linux 2 is now deprecated. Kubernetes AMI is based on AL2023.**
+ AL2 (x86\_64):
  + NVIDIA driver version: 570.195.03
  + CUDA version: 12.8
  + Kubernetes version: 1.32.3
+ AL2023 (x86\_64):
  + NVIDIA driver version: 580.95.05
  + CUDA version: 13.0
  + Kubernetes version: 1.32.9
+ AL2023 (ARM64):
  + NVIDIA driver version: 580.95.05
  + CUDA version: 13.0
  + Kubernetes version: 1.32.9
  + Kernel version: 6.12.46-66.121.amzn2023.aarch64
+ Package updates include kernel livepatch updates and system library updates.
+ Added package: annotated-doc 0.0.3

------
#### [ Kubernetes v1.33 ]
+ AL2023 (x86\_64):
  + NVIDIA driver version: 580.95.05
  + CUDA version: 13.0
  + Kubernetes version: 1.33.5
  + Kernel version: 6.1.155-176.282.amzn2023.x86\_64
+ AL2023 (ARM64):
  + NVIDIA driver version: 580.95.05
  + CUDA version: 13.0
  + Kubernetes version: 1.33.5
  + Kernel version: 6.12.46-66.121.amzn2023.aarch64
+ Package updates include extensive system library updates, kernel updates, and boost library updates.
+ Added packages: apr-util-lmdb, kernel-livepatch updates

------

## SageMaker HyperPod AMI releases for Amazon EKS: October 22, 2025
<a name="sagemaker-hyperpod-release-ami-eks-20251022"></a>

**AL2 x86\_64**

**Note**  
Amazon Linux 2 is now deprecated. The Kubernetes AMI is based on AL2023.

Base DLAMI release note is available [here](https://docs.amazonaws.cn//dlami/latest/devguide/aws-deep-learning-ami-baseoss-aml2-2025-10-14.html).
+ EKS versions 1.28 - 1.32
+ This release contains CVE patches for affected NVIDIA Driver packages found in the [Nvidia October Security Bulletin](https://nvidia.custhelp.com/app/answers/detail/a_id/5703).
+ NVIDIA SMI

  ```
  NVIDIA-SMI 570.195.03             
  Driver Version: 570.195.03     
  CUDA Version: 12.8
  ```
+ Major versions  
[\[See the AWS documentation website for more details\]](http://docs.amazonaws.cn/en_us/sagemaker/latest/dg/sagemaker-hyperpod-release-ami-eks.html)
+ Added packages: No packages were added in this release.
+ Updated packages  
[\[See the AWS documentation website for more details\]](http://docs.amazonaws.cn/en_us/sagemaker/latest/dg/sagemaker-hyperpod-release-ami-eks.html)
+ Removed packages: No packages were removed in this release.
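The driver and CUDA versions above can also be read programmatically from the `nvidia-smi` header, for example to gate a provisioning script. A minimal sketch that parses a captured sample line; on a node you would capture the live header instead:

```shell
# `sample` stands in for live output; on a cluster node you would use
#   sample="$(nvidia-smi | head -n 4)"
sample="NVIDIA-SMI 570.195.03    Driver Version: 570.195.03    CUDA Version: 12.8"

# Pull out the two version fields with sed capture groups.
driver="$(printf '%s' "$sample" | sed -n 's/.*Driver Version: *\([0-9.]*\).*/\1/p')"
cuda="$(printf '%s' "$sample" | sed -n 's/.*CUDA Version: *\([0-9.]*\).*/\1/p')"
echo "driver=$driver cuda=$cuda"
```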

**AL2023 x86\_64**

Base DLAMI release note is available [here](https://docs.amazonaws.cn//dlami/latest/devguide/aws-deep-learning-ami-gpubaseoss-al2023-2025-10-14.html).
+ EKS versions 1.28 - 1.32. No release for EKS version 1.33.
+ This release contains CVE patches for affected NVIDIA Driver packages found in the [Nvidia October Security Bulletin](https://nvidia.custhelp.com/app/answers/detail/a_id/5703).
+ NVIDIA SMI

  ```
  NVIDIA-SMI 580.95.05             
  Driver Version: 580.95.05  
  CUDA Version: 13.0
  ```
+ Major versions  
[\[See the AWS documentation website for more details\]](http://docs.amazonaws.cn/en_us/sagemaker/latest/dg/sagemaker-hyperpod-release-ami-eks.html)
+ Added packages: No packages were added in this release.
+ Updated packages  
[\[See the AWS documentation website for more details\]](http://docs.amazonaws.cn/en_us/sagemaker/latest/dg/sagemaker-hyperpod-release-ami-eks.html)
+ Removed packages: No packages were removed in this release.

**AL2023 ARM64**

Base DLAMI release note is available [here](https://docs.amazonaws.cn//dlami/latest/devguide/aws-deep-learning-ami-gpubaseossarm64-al2023-2025-10-14.html).
+ EKS versions 1.31 - 1.33.
+ This release contains CVE patches for affected NVIDIA Driver packages found in the [Nvidia October Security Bulletin](https://nvidia.custhelp.com/app/answers/detail/a_id/5703).
+ NVIDIA SMI

  ```
  NVIDIA-SMI 580.95.05        
  Driver Version: 580.95.05    
  CUDA Version: 13.0
  ```
+ Major versions  
[\[See the AWS documentation website for more details\]](http://docs.amazonaws.cn/en_us/sagemaker/latest/dg/sagemaker-hyperpod-release-ami-eks.html)
+ Added packages: No packages were added in this release.
+ Updated packages  
[\[See the AWS documentation website for more details\]](http://docs.amazonaws.cn/en_us/sagemaker/latest/dg/sagemaker-hyperpod-release-ami-eks.html)
+ Removed packages: No packages were removed in this release.

## SageMaker HyperPod AMI releases for Amazon EKS: September 29, 2025
<a name="sagemaker-hyperpod-release-ami-eks-20250929"></a>

**AMI general updates**
+ Released the new SageMaker HyperPod AMI for Amazon EKS 1.33.
**Important**  
The Dynamic Resource Allocation beta Kubernetes API is enabled by default in this release.  
This API improves scheduling and monitoring workloads that require resources such as GPUs.
This API was developed by the open source Kubernetes community and might change in future versions of Kubernetes. Before you use the API, review the [Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/) and understand how it affects your workloads.
HyperPod is not releasing a HyperPod Amazon Linux 2 AMI for Kubernetes 1.33. Amazon recommends that you migrate to AL2023. For more information, see [Upgrade from Amazon Linux 2 to AL2023](https://docs.amazonaws.cn/eks/latest/userguide/al2023.html).

For more information, see [Kubernetes v1.33](https://kubernetes.io/blog/2025/04/23/kubernetes-v1-33-release/).
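If your workloads depend on Dynamic Resource Allocation, you can confirm that the `resource.k8s.io` API group is actually served before scheduling against it. A sketch against sample `kubectl api-versions` output; on a live cluster you would query the API server instead:

```shell
# On a live cluster you would capture the served API groups with:
#   groups="$(kubectl api-versions)"
# A two-line sample stands in here so the check is reproducible.
groups="v1
resource.k8s.io/v1beta1"

# Look for any served version of the resource.k8s.io group.
if printf '%s\n' "$groups" | grep -q '^resource\.k8s\.io/'; then
  echo "DRA API group is served"
fi
```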

**SageMaker HyperPod DLAMI for Amazon EKS support**

This release includes the following updates:

------
#### [ Kubernetes v1.28 ]
+ **Amazon Linux 2 is now deprecated. Kubernetes AMI is based on AL2023.**
+ NVIDIA SMI:
  + NVIDIA driver version: 570.172.08
  + CUDA version: 12.8
+ Packages:
  + Languages and core libraries:
    + GCC: 11.5.0-5.amzn2023.0.5
    + GCC 14: 14.2.1-7.amzn2023.0.1
    + Java: 17.0.16+8-1.amzn2023.1
    + Perl: 5.32.1-477.amzn2023.0.7
    + Python: 3.9.23-1.amzn2023.0.3
    + Go: 3.2.0-37.amzn2023
    + Rust: 1.89.0-1.amzn2023.0.2
  + Core Libraries:
    + GlibC: 2.34-196.amzn2023.0.1
    + OpenSSL: 3.2.2-1.amzn2023.0.1
    + Zlib: 1.2.11-33.amzn2023.0.5
    + XZ Utils: 5.2.5-9.amzn2023.0.2
    + Util-linux: 2.37.4-1.amzn2023.0.4
  + Neuron:
    + aws-neuronx-dkms: 2.23.9.0-dkms
    + aws-neuronx-tools: 2.25.145.0-1
  + EFA:
    + efa driver: 2.17.2-1.amzn2023
    + efa config: 1.18-1.amzn2023
    + efa nv peermem: 1.2.2-1.amzn2023
    + efa profile: 1.7-1.amzn2023
  + kernel:
    + kernel: 6.1.148-173.267.amzn2023
    + kernel development: 6.1.148-173.267.amzn2023
    + kernel headers: 6.1.148-173.267.amzn2023
    + kernel tools: 6.1.148-173.267.amzn2023
    + kernel modules extra: 6.1.148-173.267.amzn2023
    + kernel livepatch: 1.0-0.amzn2023
  + Nvidia:
    + nvidia container toolkit: 1.17.8-1
    + nvidia container toolkit base: 1.17.8-1
    + libnvidia-container: 1.17.8-1 (with tools)
    + nvidia fabric manager: 570.172.08-1
    + libnvidia-nscq: 570.172.08-1

------
#### [ Kubernetes v1.29 ]
+ **Amazon Linux 2 is now deprecated. Kubernetes AMI is based on AL2023.**
+ NVIDIA SMI:
  + NVIDIA driver version: 570.172.08
  + CUDA version: 12.8
+ Packages:
  + Languages and core libraries:
    + GCC: 11.5.0-5.amzn2023.0.5
    + GCC 14: 14.2.1-7.amzn2023.0.1
    + Java: 17.0.16+8-1.amzn2023.1
    + Perl: 5.32.1-477.amzn2023.0.7
    + Python: 3.9.23-1.amzn2023.0.3
    + Go: 3.2.0-37.amzn2023
    + Rust: 1.89.0-1.amzn2023.0.2
  + Core Libraries:
    + GlibC: 2.34-196.amzn2023.0.1
    + OpenSSL: 3.2.2-1.amzn2023.0.1
    + Zlib: 1.2.11-33.amzn2023.0.5
    + XZ Utils: 5.2.5-9.amzn2023.0.2
    + Util-linux: 2.37.4-1.amzn2023.0.4
  + Neuron:
    + aws-neuronx-dkms: 2.23.9.0-dkms
    + aws-neuronx-tools: 2.25.145.0-1
  + EFA:
    + efa driver: 2.17.2-1.amzn2023
    + efa config: 1.18-1.amzn2023
    + efa nv peermem: 1.2.2-1.amzn2023
    + efa profile: 1.7-1.amzn2023
  + kernel:
    + kernel: 6.1.148-173.267.amzn2023
    + kernel development: 6.1.148-173.267.amzn2023
    + kernel headers: 6.1.148-173.267.amzn2023
    + kernel tools: 6.1.148-173.267.amzn2023
    + kernel modules extra: 6.1.148-173.267.amzn2023
    + kernel livepatch: 1.0-0.amzn2023
  + Nvidia:
    + nvidia container toolkit: 1.17.8-1
    + nvidia container toolkit base: 1.17.8-1
    + libnvidia-container: 1.17.8-1 (with tools)
    + nvidia fabric manager: 570.172.08-1
    + libnvidia-nscq: 570.172.08-1

------
#### [ Kubernetes v1.30 ]
+ **Amazon Linux 2 is now deprecated. Kubernetes AMI is based on AL2023.**
+ NVIDIA SMI:
  + NVIDIA driver version: 570.172.08
  + CUDA version: 12.8
+ Packages:
  + Languages and core libraries:
    + GCC: 11.5.0-5.amzn2023.0.5
    + GCC 14: 14.2.1-7.amzn2023.0.1
    + Java: 17.0.16+8-1.amzn2023.1
    + Perl: 5.32.1-477.amzn2023.0.7
    + Python: 3.9.23-1.amzn2023.0.3
    + Go: 3.2.0-37.amzn2023
    + Rust: 1.89.0-1.amzn2023.0.2
  + Core Libraries:
    + GlibC: 2.34-196.amzn2023.0.1
    + OpenSSL: 3.2.2-1.amzn2023.0.1
    + Zlib: 1.2.11-33.amzn2023.0.5
    + XZ Utils: 5.2.5-9.amzn2023.0.2
    + Util-linux: 2.37.4-1.amzn2023.0.4
  + Neuron:
    + aws-neuronx-dkms: 2.23.9.0-dkms
    + aws-neuronx-tools: 2.25.145.0-1
  + EFA:
    + efa driver: 2.17.2-1.amzn2023
    + efa config: 1.18-1.amzn2023
    + efa nv peermem: 1.2.2-1.amzn2023
    + efa profile: 1.7-1.amzn2023
  + kernel:
    + kernel: 6.1.148-173.267.amzn2023
    + kernel development: 6.1.148-173.267.amzn2023
    + kernel headers: 6.1.148-173.267.amzn2023
    + kernel tools: 6.1.148-173.267.amzn2023
    + kernel modules extra: 6.1.148-173.267.amzn2023
    + kernel livepatch: 1.0-0.amzn2023
  + Nvidia:
    + nvidia container toolkit: 1.17.8-1
    + nvidia container toolkit base: 1.17.8-1
    + libnvidia-container: 1.17.8-1 (with tools)
    + nvidia fabric manager: 570.172.08-1
    + libnvidia-nscq: 570.172.08-1

------
#### [ Kubernetes v1.31 ]
+ **Amazon Linux 2 is now deprecated. Kubernetes AMI is based on AL2023.**
+ NVIDIA SMI:
  + NVIDIA driver version: 570.172.08
  + CUDA version: 12.8
+ Packages:
  + Languages and core libraries:
    + GCC: 11.5.0-5.amzn2023.0.5
    + GCC 14: 14.2.1-7.amzn2023.0.1
    + Java: 17.0.16+8-1.amzn2023.1
    + Perl: 5.32.1-477.amzn2023.0.7
    + Python: 3.9.23-1.amzn2023.0.3
    + Go: 3.2.0-37.amzn2023
    + Rust: 1.89.0-1.amzn2023.0.2
  + Core Libraries:
    + GlibC: 2.34-196.amzn2023.0.1
    + OpenSSL: 3.2.2-1.amzn2023.0.1
    + Zlib: 1.2.11-33.amzn2023.0.5
    + XZ Utils: 5.2.5-9.amzn2023.0.2
    + Util-linux: 2.37.4-1.amzn2023.0.4
  + Neuron:
    + aws-neuronx-dkms: 2.23.9.0-dkms
    + aws-neuronx-tools: 2.25.145.0-1
  + EFA:
    + efa driver: 2.17.2-1.amzn2023
    + efa config: 1.18-1.amzn2023
    + efa nv peermem: 1.2.2-1.amzn2023
    + efa profile: 1.7-1.amzn2023
  + kernel:
    + kernel: 6.1.148-173.267.amzn2023
    + kernel development: 6.1.148-173.267.amzn2023
    + kernel headers: 6.1.148-173.267.amzn2023
    + kernel tools: 6.1.148-173.267.amzn2023
    + kernel modules extra: 6.1.148-173.267.amzn2023
    + kernel livepatch: 1.0-0.amzn2023
  + Nvidia:
    + nvidia container toolkit: 1.17.8-1
    + nvidia container toolkit base: 1.17.8-1
    + libnvidia-container: 1.17.8-1 (with tools)
    + nvidia fabric manager: 570.172.08-1
    + libnvidia-nscq: 570.172.08-1

------
#### [ Kubernetes v1.32 ]
+ **Amazon Linux 2 is now deprecated. Kubernetes AMI is based on AL2023.**
+ NVIDIA SMI:
  + NVIDIA driver version: 570.172.08
  + CUDA version: 12.8
+ Packages:
  + Languages and core libraries:
    + GCC: 11.5.0-5.amzn2023.0.5
    + GCC 14: 14.2.1-7.amzn2023.0.1
    + Java: 17.0.16+8-1.amzn2023.1
    + Perl: 5.32.1-477.amzn2023.0.7
    + Python: 3.9.23-1.amzn2023.0.3
    + Go: 3.2.0-37.amzn2023
    + Rust: 1.89.0-1.amzn2023.0.2
  + Core Libraries:
    + GlibC: 2.34-196.amzn2023.0.1
    + OpenSSL: 3.2.2-1.amzn2023.0.1
    + Zlib: 1.2.11-33.amzn2023.0.5
    + XZ Utils: 5.2.5-9.amzn2023.0.2
    + Util-linux: 2.37.4-1.amzn2023.0.4
  + Neuron:
    + aws-neuronx-dkms: 2.23.9.0-dkms
    + aws-neuronx-tools: 2.25.145.0-1
  + EFA:
    + efa driver: 2.17.2-1.amzn2023
    + efa config: 1.18-1.amzn2023
    + efa nv peermem: 1.2.2-1.amzn2023
    + efa profile: 1.7-1.amzn2023
  + kernel:
    + kernel: 6.1.148-173.267.amzn2023
    + kernel development: 6.1.148-173.267.amzn2023
    + kernel headers: 6.1.148-173.267.amzn2023
    + kernel tools: 6.1.148-173.267.amzn2023
    + kernel modules extra: 6.1.148-173.267.amzn2023
    + kernel livepatch: 1.0-0.amzn2023
  + Nvidia:
    + nvidia container toolkit: 1.17.8-1
    + nvidia container toolkit base: 1.17.8-1
    + libnvidia-container: 1.17.8-1 (with tools)
    + nvidia fabric manager: 570.172.08-1
    + libnvidia-nscq: 570.172.08-1

------
#### [ Kubernetes v1.33 ]

The following table contains information about components within this AMI release and the corresponding versions.


| component | AL2023\_x86 | AL2023\_arm64 | 
| --- | --- | --- | 
| EKS | v1.33.4 | v1.33.4 | 
| amazon-ssm-agent | 3.3.2299.0-1.amzn2023 | 3.3.2299.0-1.amzn2023 | 
| aws-neuronx-dkms | 2.23.9.0-dkms | N/A | 
| containerd | 1.7.27-1.eks.amzn2023.0.4 | 1.7.27-1.eks.amzn2023.0.4 | 
| efa | 2.17.2-1.amzn2023 | 2.17.2-1.amzn2023 | 
| ena | 2.14.1g | 2.14.1g | 
| kernel | 6.12.40-64.114.amzn2023 | N/A | 
| kernel6.12 | N/A | 6.12.40-64.114.amzn2023 | 
| kmod-nvidia-latest-dkms | 570.172.08-1.amzn2023 | 570.172.08-1.el9 | 
| nvidia-container-toolkit | 1.17.8-1 | 1.17.8-1 | 
| runc | 1.2.6-1.amzn2023.0.1 | 1.2.6-1.amzn2023.0.1 | 

------
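To verify that a node was provisioned from this AMI release, you can compare the kubelet version it reports against the EKS version in the table above. A sketch with a sample value; on a cluster you would read the live value, and the `-eks-…` build suffix shown here is hypothetical:

```shell
# On a cluster you would capture the live value, for example:
#   reported="$(kubectl get node "$NODE" -o jsonpath='{.status.nodeInfo.kubeletVersion}')"
expected="v1.33.4"
reported="v1.33.4-eks-0a1b2c3"   # sample value; the -eks- suffix is hypothetical

# Match on the expected prefix so a build suffix does not cause a false mismatch.
case "$reported" in
  "$expected"*) echo "kubelet matches $expected" ;;
  *)            echo "kubelet $reported does not match $expected" ;;
esac
```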

## SageMaker HyperPod AMI releases for Amazon EKS: August 25, 2025
<a name="sagemaker-hyperpod-release-ami-eks-20250825"></a>

**SageMaker HyperPod DLAMI for Amazon EKS support**

This release includes the following updates:

------
#### [ Kubernetes v1.28 ]

**NVIDIA SMI:**
+ Nvidia Driver Version: 570.172.08
+ CUDA Version: 12.8

**Added Packages:**
+ kernel-livepatch-5.10.240-238.955.x86\_64 1.0-0.amzn2 amzn2extra-kernel-5.10

**Updated Packages:**
+ gdk-pixbuf2.x86\_64: 2.36.12-3.amzn2 → 2.36.12-3.amzn2.0.2
+ kernel.x86\_64: 5.10.239-236.958.amzn2 → 5.10.240-238.955.amzn2
+ kernel-devel.x86\_64: 5.10.239-236.958.amzn2 → 5.10.240-238.955.amzn2
+ kernel-headers.x86\_64: 5.10.239-236.958.amzn2 → 5.10.240-238.955.amzn2
+ kernel-tools.x86\_64: 5.10.239-236.958.amzn2 → 5.10.240-238.955.amzn2
+ libgs.x86\_64: 9.54.0-9.amzn2.0.11 → 9.54.0-9.amzn2.0.12
+ microcode\_ctl.x86\_64: 2:2.1-47.amzn2.4.24 → 2:2.1-47.amzn2.4.25
+ pam.x86\_64: 1.1.8-23.amzn2.0.2 → 1.1.8-23.amzn2.0.4

**Removed Packages:**
+ kernel-livepatch-5.10.239-236.958.x86\_64 1.0-0.amzn2 amzn2extra-kernel-5.10

**Repository Changed:**
+ libnvidia-container-tools.x86\_64: cuda-rhel8-x86\_64 → nvidia-container-toolkit
+ libnvidia-container1.x86\_64: cuda-rhel8-x86\_64 → nvidia-container-toolkit
+ nvidia-container-toolkit.x86\_64: cuda-rhel8-x86\_64 → nvidia-container-toolkit
+ nvidia-container-toolkit-base.x86\_64: cuda-rhel8-x86\_64 → nvidia-container-toolkit

------
#### [ Kubernetes v1.29 ]

**NVIDIA SMI:**
+ Nvidia Driver Version: 570.172.08
+ CUDA Version: 12.8

**Added Packages:**
+ kernel-livepatch-5.10.240-238.955.x86\_64 1.0-0.amzn2 amzn2extra-kernel-5.10

**Updated Packages:**
+ gdk-pixbuf2.x86\_64: 2.36.12-3.amzn2 → 2.36.12-3.amzn2.0.2
+ kernel.x86\_64: 5.10.239-236.958.amzn2 → 5.10.240-238.955.amzn2
+ kernel-devel.x86\_64: 5.10.239-236.958.amzn2 → 5.10.240-238.955.amzn2
+ kernel-headers.x86\_64: 5.10.239-236.958.amzn2 → 5.10.240-238.955.amzn2
+ kernel-tools.x86\_64: 5.10.239-236.958.amzn2 → 5.10.240-238.955.amzn2
+ libgs.x86\_64: 9.54.0-9.amzn2.0.11 → 9.54.0-9.amzn2.0.12
+ microcode\_ctl.x86\_64: 2:2.1-47.amzn2.4.24 → 2:2.1-47.amzn2.4.25
+ pam.x86\_64: 1.1.8-23.amzn2.0.2 → 1.1.8-23.amzn2.0.4

**Removed Packages:**
+ kernel-livepatch-5.10.239-236.958.x86\_64 1.0-0.amzn2 amzn2extra-kernel-5.10

**Repository Changed:**
+ libnvidia-container-tools.x86\_64: cuda-rhel8-x86\_64 → nvidia-container-toolkit
+ libnvidia-container1.x86\_64: cuda-rhel8-x86\_64 → nvidia-container-toolkit
+ nvidia-container-toolkit.x86\_64: cuda-rhel8-x86\_64 → nvidia-container-toolkit
+ nvidia-container-toolkit-base.x86\_64: cuda-rhel8-x86\_64 → nvidia-container-toolkit

------
#### [ Kubernetes v1.30 ]

**NVIDIA SMI:**
+ Nvidia Driver Version: 570.172.08
+ CUDA Version: 12.8

**Added Packages:**
+ kernel-livepatch-5.10.240-238.955.x86\_64 1.0-0.amzn2 amzn2extra-kernel-5.10

**Updated Packages:**
+ aws-neuronx-dkms.noarch: 2.22.2.0-dkms → 2.23.9.0-dkms
+ efa.x86\_64: 2.15.3-1.amzn2 → 2.17.2-1.amzn2
+ efa-nv-peermem.x86\_64: 1.2.1-1.amzn2 → 1.2.2-1.amzn2
+ gdk-pixbuf2.x86\_64: 2.36.12-3.amzn2 → 2.36.12-3.amzn2.0.2
+ ibacm.x86\_64: 57.amzn1-1.amzn2.0.2 → 58.amzn0-1.amzn2.0.2
+ infiniband-diags.x86\_64: 57.amzn1-1.amzn2.0.2 → 58.amzn0-1.amzn2.0.2
+ kernel.x86\_64: 5.10.239-236.958.amzn2 → 5.10.240-238.955.amzn2
+ kernel-devel.x86\_64: 5.10.239-236.958.amzn2 → 5.10.240-238.955.amzn2
+ kernel-headers.x86\_64: 5.10.239-236.958.amzn2 → 5.10.240-238.955.amzn2
+ kernel-tools.x86\_64: 5.10.239-236.958.amzn2 → 5.10.240-238.955.amzn2
+ libfabric-aws.x86\_64: 2.1.0amzn3.0-1.amzn2 → 2.1.0amzn5.0-1.amzn2
+ libfabric-aws-devel.x86\_64: 2.1.0amzn3.0-1.amzn2 → 2.1.0amzn5.0-1.amzn2
+ libgs.x86\_64: 9.54.0-9.amzn2.0.11 → 9.54.0-9.amzn2.0.12
+ libibumad.x86\_64: 57.amzn1-1.amzn2.0.2 → 58.amzn0-1.amzn2.0.2
+ libibverbs.x86\_64: 57.amzn1-1.amzn2.0.2 → 58.amzn0-1.amzn2.0.2
+ libibverbs-core.x86\_64: 57.amzn1-1.amzn2.0.2 → 58.amzn0-1.amzn2.0.2
+ libibverbs-utils.x86\_64: 57.amzn1-1.amzn2.0.2 → 58.amzn0-1.amzn2.0.2
+ libnccl-ofi.x86\_64: 1.15.0-1.amzn2 → 1.16.2-1.amzn2
+ librdmacm.x86\_64: 57.amzn1-1.amzn2.0.2 → 58.amzn0-1.amzn2.0.2
+ librdmacm-utils.x86\_64: 57.amzn1-1.amzn2.0.2 → 58.amzn0-1.amzn2.0.2
+ microcode\_ctl.x86\_64: 2:2.1-47.amzn2.4.24 → 2:2.1-47.amzn2.4.25
+ pam.x86\_64: 1.1.8-23.amzn2.0.2 → 1.1.8-23.amzn2.0.4
+ rdma-core.x86\_64: 57.amzn1-1.amzn2.0.2 → 58.amzn0-1.amzn2.0.2
+ rdma-core-devel.x86\_64: 57.amzn1-1.amzn2.0.2 → 58.amzn0-1.amzn2.0.2

**Removed Packages:**
+ kernel-livepatch-5.10.239-236.958.x86\_64 1.0-0.amzn2 amzn2extra-kernel-5.10

**Repository Changed:**
+ libnvidia-container-tools.x86\_64: cuda-rhel8-x86\_64 → nvidia-container-toolkit
+ libnvidia-container1.x86\_64: cuda-rhel8-x86\_64 → nvidia-container-toolkit
+ nvidia-container-toolkit.x86\_64: cuda-rhel8-x86\_64 → nvidia-container-toolkit
+ nvidia-container-toolkit-base.x86\_64: cuda-rhel8-x86\_64 → nvidia-container-toolkit

------
#### [ Kubernetes v1.31 ]

**NVIDIA SMI:**
+ Nvidia Driver Version: 570.172.08
+ CUDA Version: 12.8

**Added Packages:**
+ kernel-livepatch-5.10.240-238.955.x86\_64 1.0-0.amzn2 amzn2extra-kernel-5.10

**Updated Packages:**
+ gdk-pixbuf2.x86\_64: 2.36.12-3.amzn2 → 2.36.12-3.amzn2.0.2
+ kernel.x86\_64: 5.10.239-236.958.amzn2 → 5.10.240-238.955.amzn2
+ kernel-devel.x86\_64: 5.10.239-236.958.amzn2 → 5.10.240-238.955.amzn2
+ kernel-headers.x86\_64: 5.10.239-236.958.amzn2 → 5.10.240-238.955.amzn2
+ kernel-tools.x86\_64: 5.10.239-236.958.amzn2 → 5.10.240-238.955.amzn2
+ libgs.x86\_64: 9.54.0-9.amzn2.0.11 → 9.54.0-9.amzn2.0.12
+ microcode\_ctl.x86\_64: 2:2.1-47.amzn2.4.24 → 2:2.1-47.amzn2.4.25
+ pam.x86\_64: 1.1.8-23.amzn2.0.2 → 1.1.8-23.amzn2.0.4

**Removed Packages:**
+ kernel-livepatch-5.10.239-236.958.x86\_64 1.0-0.amzn2 amzn2extra-kernel-5.10

**Repository Changed:**
+ libnvidia-container-tools.x86\_64: cuda-rhel8-x86\_64 → nvidia-container-toolkit
+ libnvidia-container1.x86\_64: cuda-rhel8-x86\_64 → nvidia-container-toolkit
+ nvidia-container-toolkit.x86\_64: cuda-rhel8-x86\_64 → nvidia-container-toolkit
+ nvidia-container-toolkit-base.x86\_64: cuda-rhel8-x86\_64 → nvidia-container-toolkit

------
#### [ Kubernetes v1.32 ]

**NVIDIA SMI:**
+ Nvidia Driver Version: 570.172.08
+ CUDA Version: 12.8

**Added Packages:**
+ kernel-livepatch-5.10.240-238.955.x86\$164 1.0-0.amzn2 amzn2extra-kernel-5.10

**Updated Packages:**
+ aws-neuronx-dkms.noarch: 2.22.2.0-dkms → 2.23.9.0-dkms
+ efa.x86_64: 2.15.3-1.amzn2 → 2.17.2-1.amzn2
+ efa-nv-peermem.x86_64: 1.2.1-1.amzn2 → 1.2.2-1.amzn2
+ gdk-pixbuf2.x86_64: 2.36.12-3.amzn2 → 2.36.12-3.amzn2.0.2
+ ibacm.x86_64: 57.amzn1-1.amzn2.0.2 → 58.amzn0-1.amzn2.0.2
+ infiniband-diags.x86_64: 57.amzn1-1.amzn2.0.2 → 58.amzn0-1.amzn2.0.2
+ kernel.x86_64: 5.10.239-236.958.amzn2 → 5.10.240-238.955.amzn2
+ kernel-devel.x86_64: 5.10.239-236.958.amzn2 → 5.10.240-238.955.amzn2
+ kernel-headers.x86_64: 5.10.239-236.958.amzn2 → 5.10.240-238.955.amzn2
+ kernel-tools.x86_64: 5.10.239-236.958.amzn2 → 5.10.240-238.955.amzn2
+ libfabric-aws.x86_64: 2.1.0amzn3.0-1.amzn2 → 2.1.0amzn5.0-1.amzn2
+ libfabric-aws-devel.x86_64: 2.1.0amzn3.0-1.amzn2 → 2.1.0amzn5.0-1.amzn2
+ libgs.x86_64: 9.54.0-9.amzn2.0.11 → 9.54.0-9.amzn2.0.12
+ libibumad.x86_64: 57.amzn1-1.amzn2.0.2 → 58.amzn0-1.amzn2.0.2
+ libibverbs.x86_64: 57.amzn1-1.amzn2.0.2 → 58.amzn0-1.amzn2.0.2
+ libibverbs-core.x86_64: 57.amzn1-1.amzn2.0.2 → 58.amzn0-1.amzn2.0.2
+ libibverbs-utils.x86_64: 57.amzn1-1.amzn2.0.2 → 58.amzn0-1.amzn2.0.2
+ libnccl-ofi.x86_64: 1.15.0-1.amzn2 → 1.16.2-1.amzn2
+ librdmacm.x86_64: 57.amzn1-1.amzn2.0.2 → 58.amzn0-1.amzn2.0.2
+ librdmacm-utils.x86_64: 57.amzn1-1.amzn2.0.2 → 58.amzn0-1.amzn2.0.2
+ microcode_ctl.x86_64: 2:2.1-47.amzn2.4.24 → 2:2.1-47.amzn2.4.25
+ pam.x86_64: 1.1.8-23.amzn2.0.2 → 1.1.8-23.amzn2.0.4
+ rdma-core.x86_64: 57.amzn1-1.amzn2.0.2 → 58.amzn0-1.amzn2.0.2
+ rdma-core-devel.x86_64: 57.amzn1-1.amzn2.0.2 → 58.amzn0-1.amzn2.0.2

**Removed Packages:**
+ kernel-livepatch-5.10.239-236.958.x86_64 1.0-0.amzn2 amzn2extra-kernel-5.10

**Repository Changed:**
+ libnvidia-container-tools.x86_64: cuda-rhel8-x86_64 → nvidia-container-toolkit
+ libnvidia-container1.x86_64: cuda-rhel8-x86_64 → nvidia-container-toolkit
+ nvidia-container-toolkit.x86_64: cuda-rhel8-x86_64 → nvidia-container-toolkit
+ nvidia-container-toolkit-base.x86_64: cuda-rhel8-x86_64 → nvidia-container-toolkit

------

## SageMaker HyperPod AMI releases for Amazon EKS: August 12, 2025
<a name="sagemaker-hyperpod-release-ami-eks-20250812"></a>

**The AMI includes the following:**
+ Supported Amazon Service: Amazon EC2
+ Operating System: Amazon Linux 2023
+ Compute Architecture: ARM64
+ Latest available version is installed for the following packages:
  + Linux Kernel: 6.12
  + FSx Lustre
  + Docker
  + Amazon CLI v2 at `/usr/bin/aws`
  + NVIDIA DCGM
  + Nvidia container toolkit:
    + Version command: `nvidia-container-cli -V`
  + Nvidia-docker2:
    + Version command: `nvidia-docker version`
  + Nvidia-IMEX: v570.172.08-1
+ NVIDIA Driver: 570.158.01
+ NVIDIA CUDA 12.4, 12.5, 12.6, 12.8 stack:
  + CUDA, NCCL, and cuDNN installation directories: `/usr/local/cuda-xx.x/`
    + Example: `/usr/local/cuda-12.8/`
  + Compiled NCCL Version:
    + For CUDA directory of 12.4, compiled NCCL Version 2.22.3_CUDA12.4
    + For CUDA directory of 12.5, compiled NCCL Version 2.22.3_CUDA12.5
    + For CUDA directory of 12.6, compiled NCCL Version 2.24.3_CUDA12.6
    + For CUDA directory of 12.8, compiled NCCL Version 2.27.5_CUDA12.8
  + Default CUDA: 12.8
    + PATH `/usr/local/cuda` points to CUDA 12.8
    + Updated below env vars:
      + `LD_LIBRARY_PATH` to have `/usr/local/cuda-12.8/lib:/usr/local/cuda-12.8/lib64:/usr/local/cuda-12.8:/usr/local/cuda-12.8/targets/sbsa-linux/lib:/usr/local/cuda-12.8/nvvm/lib64:/usr/local/cuda-12.8/extras/CUPTI/lib64`
      + `PATH` to have `/usr/local/cuda-12.8/bin/:/usr/local/cuda-12.8/include/`
      + For any different CUDA version, please update `LD_LIBRARY_PATH` accordingly.
+ EFA installer: 1.42.0
+ Nvidia GDRCopy: 2.5.1
+ Amazon OFI NCCL plugin comes with EFA installer
  + Paths `/opt/amazon/ofi-nccl/lib` and `/opt/amazon/ofi-nccl/efa` are added to `LD_LIBRARY_PATH`.
+ Amazon CLI v2 at `/usr/local/bin/aws`
+ EBS volume type: gp3
+ Python: `/usr/bin/python3.9`
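
The env-var guidance above can be sketched as a small shell snippet. This is a hedged example, not part of the AMI: the target version `12.6` and the `CUDA_HOME` helper variable are illustrative, and the library subdirectories mirror the `LD_LIBRARY_PATH` layout listed above.

```shell
# Repoint the shell at a different installed CUDA toolkit (12.6 here, as an example).
# CUDA_HOME is a helper variable for this sketch only; the AMI documents PATH and
# LD_LIBRARY_PATH directly.
CUDA_VERSION="12.6"
CUDA_HOME="/usr/local/cuda-${CUDA_VERSION}"

export PATH="${CUDA_HOME}/bin:${PATH}"
export LD_LIBRARY_PATH="${CUDA_HOME}/lib:${CUDA_HOME}/lib64:${CUDA_HOME}:${CUDA_HOME}/targets/sbsa-linux/lib:${CUDA_HOME}/nvvm/lib64:${CUDA_HOME}/extras/CUPTI/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}"

echo "PATH=${PATH}"
echo "LD_LIBRARY_PATH=${LD_LIBRARY_PATH}"
```

Updating `/usr/local/cuda` itself (a symlink on the AMI) would additionally require root access, so per-shell environment variables are the lighter-weight option.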

## SageMaker HyperPod AMI releases for Amazon EKS: August 6, 2025
<a name="sagemaker-hyperpod-release-ami-eks-20250806"></a>

**SageMaker HyperPod DLAMI for Amazon EKS support**

The AMIs include the following updates:

------
#### [ K8s v1.28 ]
+ **Neuron packages:**
  + **aws-neuronx-collectives:** 2.27.34.0_ec8cd5e8b-1
  + **aws-neuronx-dkms:** 2.23.9.0-dkms
  + **aws-neuronx-runtime-lib:** 2.27.23.0_8deec4dbf-1
  + **aws-neuronx-k8-plugin:** 2.27.7.0-1
  + **aws-neuronx-k8-scheduler:** 2.27.7.0-1
  + **aws-neuronx-tools:** 2.25.145.0-1

------
#### [ K8s v1.29 ]
+ **Neuron packages:**
  + **aws-neuronx-collectives:** 2.27.34.0_ec8cd5e8b-1
  + **aws-neuronx-dkms:** 2.23.9.0-dkms
  + **aws-neuronx-runtime-lib:** 2.27.23.0_8deec4dbf-1
  + **aws-neuronx-k8-plugin:** 2.27.7.0-1
  + **aws-neuronx-k8-scheduler:** 2.27.7.0-1
  + **aws-neuronx-tools:** 2.25.145.0-1

------
#### [ K8s v1.30 ]
+ **Neuron packages:**
  + **aws-neuronx-collectives:** 2.27.34.0_ec8cd5e8b-1
  + **aws-neuronx-dkms:** 2.23.9.0-dkms
  + **aws-neuronx-runtime-lib:** 2.27.23.0_8deec4dbf-1
  + **aws-neuronx-k8-plugin:** 2.27.7.0-1
  + **aws-neuronx-k8-scheduler:** 2.27.7.0-1
  + **aws-neuronx-tools:** 2.25.145.0-1

------
#### [ K8s v1.31 ]
+ **Neuron packages:**
  + **aws-neuronx-collectives:** 2.27.34.0_ec8cd5e8b-1
  + **aws-neuronx-dkms:** 2.23.9.0-dkms
  + **aws-neuronx-runtime-lib:** 2.27.23.0_8deec4dbf-1
  + **aws-neuronx-k8-plugin:** 2.27.7.0-1
  + **aws-neuronx-k8-scheduler:** 2.27.7.0-1
  + **aws-neuronx-tools:** 2.25.145.0-1

------
#### [ K8s v1.32 ]
+ **Neuron packages:**
  + **aws-neuronx-collectives:** 2.27.34.0_ec8cd5e8b-1
  + **aws-neuronx-dkms:** 2.23.9.0-dkms
  + **aws-neuronx-runtime-lib:** 2.27.23.0_8deec4dbf-1
  + **aws-neuronx-k8-plugin:** 2.27.7.0-1
  + **aws-neuronx-k8-scheduler:** 2.27.7.0-1
  + **aws-neuronx-tools:** 2.25.145.0-1

------

**Important**  
Deep Learning Base OSS Nvidia Driver AMI (Amazon Linux 2) Version 70.3
Deep Learning Base Proprietary Nvidia Driver AMI (Amazon Linux 2) Version 68.4
Latest CUDA 12.8 support
Upgraded the NVIDIA driver from 570.158.01 to 570.172.08 to fix CVEs listed in the NVIDIA Security Bulletin for July

## SageMaker HyperPod AMI releases for Amazon EKS: July 31, 2025
<a name="sagemaker-hyperpod-release-ami-eks-20250731"></a>

Amazon SageMaker HyperPod now supports a new AMI for Amazon EKS clusters that updates the base operating system to Amazon Linux 2023. This release provides several improvements from Amazon Linux 2 (AL2). HyperPod releases new AMIs regularly, and we recommend that you run all of your HyperPod clusters on the latest and most secure versions of AMIs to address vulnerabilities and phase out outdated software and libraries.

### Key upgrades
<a name="sagemaker-hyperpod-release-ami-eks-20250731-specs"></a>
+ **Operating System**: Amazon Linux 2023 (updated from Amazon Linux 2, or AL2)
+ **Package Manager**: DNF is the default package management tool, replacing YUM used in AL2
+ **Networking Service**: `systemd-networkd` manages network interfaces, replacing ISC `dhclient` used in AL2
+ **Linux Kernel**: Version 6.1, updated from the kernel used in AL2
+ **Glibc**: Version 2.34, updated from the version in AL2
+ **GCC**: Version 11.5.0, updated from the version in AL2
+ **NFS**: Version 1:2.6.1, updated from version 1:1.3.4 in AL2
+ **NVIDIA Driver**: Version 570.172.08, a newer driver version
+ **Python**: Version 3.9, replacing Python 2.7 used in AL2
+ **NVME**: Version 1.11.1, a newer version of the NVMe driver

### Before you upgrade
<a name="sagemaker-hyperpod-release-ami-eks-20250731-prereqs"></a>

There are a few important things to know before upgrading. With AL2023, several packages have been added, upgraded or removed compared to AL2. We strongly recommend that you test your applications with AL2023 before upgrading your clusters. For a comprehensive list of all package changes in AL2023, see [Package changes in Amazon Linux 2023](https://docs.amazonaws.cn/linux/al2023/release-notes/compare-packages.html).

The following are some of the significant changes between AL2 and AL2023:
+ **Python 3.10**: The most significant update, apart from the operating system, is the Python version upgrade. After upgrading, clusters have Python 3.10 as default. While some Python 3.8 distributed training workloads might be compatible with Python 3.10, we strongly recommend that you test your specific workloads separately. If migration to Python 3.10 proves challenging but you still want to upgrade your cluster for other new features, you can install an older Python version by using the command `yum install python-xx.x` with [lifecycle scripts](https://docs.amazonaws.cn/sagemaker/latest/dg/sagemaker-hyperpod-lifecycle-best-practices-slurm.html) before running any workloads. Ensure you test both your existing lifecycle scripts and application code for compatibility.
+ **NVIDIA runtime enforcement**: AL2023 strictly enforces the NVIDIA container runtime requirements, causing containers with hard-coded NVIDIA environment variables (such as `NVIDIA_VISIBLE_DEVICES: "all"`) to fail on CPU-only nodes (whereas AL2 ignored these settings when no GPU drivers are present). You can override the enforcement by setting `NVIDIA_VISIBLE_DEVICES: "void"` in your pod specification or by using CPU-only images.
+ **cgroup v2**: AL2023 features the next generation of unified control group hierarchy (cgroup v2). cgroup v2 is used for container runtimes and is also used by `systemd`. While AL2023 still includes code that can make the system run using cgroup v1, this isn't a recommended configuration.
+ **Amazon VPC CNI and `eksctl` versions**: AL2023 also requires your Amazon VPC CNI version to be 1.16.2 or greater and your `eksctl` version to be 0.176.0 or greater.
+ **EFA on FSx for Lustre**: You can now use EFA on FSx for Lustre, which enables you to achieve application performance comparable to on-premises AI/ML or HPC (high performance computing) clusters, while benefiting from the scalability, flexibility and elasticity of cloud computing.

Additionally, upgrading to AL2023 requires at minimum version `1.0.643.0_1.0.192.0` of Health Monitoring Agent. Complete the following procedure to update the Health Monitoring Agent:

1. If you use HyperPod lifecycle scripts from the GitHub repository [awsome-distributed-training](https://github.com/aws-samples/awsome-distributed-training), make sure to pull the latest version. Earlier versions are not compatible with AL2023. The new lifecycle script ensures that `containerd` uses the additional mounted storage for pulling container images in AL2023.

1. Pull in the latest version of the [HyperPod CLI git repository](https://github.com/aws/sagemaker-hyperpod-cli/tree/main).

1. Update dependencies with the following command: `helm dependencies update helm_chart/HyperPodHelmChart`

1. As described in step 4 of the [README of HyperPodHelmChart](https://github.com/aws/sagemaker-hyperpod-cli/tree/main/helm_chart#step-four-whenever-you-want-to-upgrade-the-installation-of-helm-charts), run the following command to upgrade the version of dependencies running on the cluster: `helm upgrade dependencies helm_chart/HyperPodHelmChart --namespace kube-system`

### Workloads that have been tested on upgraded EKS clusters
<a name="sagemaker-hyperpod-release-ami-eks-20250731-tested"></a>

The following are some use cases where the upgrade has been tested:
+ **Backwards compatibility**: Popular distributed training jobs involving PyTorch should be backwards compatible on the new AMI. However, since your workloads may depend on specific Python or Linux libraries, we recommend first testing on a smaller scale or subset of nodes before upgrading your larger clusters.
+ **Accelerator testing**: Jobs across various instance types, utilizing both NVIDIA accelerators (for the P and G instance families) and Amazon Neuron accelerators (for Trn instances) have been tested.

### How to upgrade your AMI and associated workloads
<a name="sagemaker-hyperpod-release-ami-eks-20250731-upgrade"></a>

You can upgrade to the new AMI using one of the following methods:
+ Use the [create-cluster](https://docs.amazonaws.cn/sagemaker/latest/APIReference/API_CreateCluster.html) API to create a new cluster with the latest AMI.
+ Use the [update-cluster-software](https://docs.amazonaws.cn/sagemaker/latest/APIReference/API_UpdateClusterSoftware.html) API to upgrade your existing cluster. Note that this option re-runs any lifecycle scripts.

The cluster is unavailable during the update process. We recommend planning for this downtime and restarting the training workload from an existing checkpoint after the upgrade completes. As a best practice, we recommend that you perform testing on a smaller cluster before upgrading your larger clusters.

If the update command fails, first identify the cause of the failure. For lifecycle script failures, make the necessary corrections to your scripts and retry. For any other issues that cannot be resolved, contact [Amazon Web Services Support](https://www.amazonaws.cn/premiumsupport/).
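
As a hedged sketch of the second option, the `UpdateClusterSoftware` call takes only the cluster name (or ARN); `my-hyperpod-cluster` below is a placeholder. The snippet builds and prints the command rather than executing it, since the update makes the cluster unavailable while it runs.

```shell
# Build the AWS CLI command for UpdateClusterSoftware.
# CLUSTER_NAME is a placeholder; substitute your cluster's name or ARN.
CLUSTER_NAME="my-hyperpod-cluster"
UPDATE_CMD="aws sagemaker update-cluster-software --cluster-name ${CLUSTER_NAME}"
echo "${UPDATE_CMD}"
# To run it for real, uncomment:
# ${UPDATE_CMD}
```

Remember that this path re-runs your lifecycle scripts, so validate them against AL2023 first.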

### Troubleshooting
<a name="sagemaker-hyperpod-release-ami-eks-20250731-troubleshooting"></a>

Use the following section to help with troubleshooting any issues you encounter when upgrading to AL2023.

**How do I fix errors such as `"nvml error: driver not loaded: unknown"` on CPU-only cluster nodes?**

If containers that worked on CPU AL2 Amazon EKS nodes now fail on AL2023, your container image may have hard-coded NVIDIA environment variables. You can check for hard-coded environment variables with the following command:

```
docker inspect image:tag | grep -i nvidia
```
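
To script the same check, the following hedged sketch filters an image's baked-in environment for `NVIDIA_*` entries. The hard-coded `INSPECT_ENV` value stands in for the `Config.Env` list that `docker inspect` would return for your image.

```shell
# Sample stand-in for the Config.Env entries of `docker inspect your-image:tag`.
INSPECT_ENV='PATH=/usr/bin
NVIDIA_VISIBLE_DEVICES=all'

# Keep only environment entries that start with NVIDIA_ (case-insensitive).
NVIDIA_VARS=$(printf '%s\n' "${INSPECT_ENV}" | grep -i '^nvidia_' || true)
echo "${NVIDIA_VARS}"
```

A non-empty result means the image requests GPU access unconditionally and will hit the AL2023 enforcement on CPU-only nodes.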

AL2023 strictly enforces these requirements whereas AL2 was more lenient on CPU-only nodes. One solution is to override the AL2023 enforcement by setting certain NVIDIA environment variables in your Amazon EKS pod specification, as shown in the following example:

```yaml
containers:
- name: your-container
  image: your-image:tag
  env:
  - name: NVIDIA_VISIBLE_DEVICES
    value: "void"
  - name: NVIDIA_DRIVER_CAPABILITIES
    value: ""
```

Another alternative is to use CPU-only container images (such as `pytorch/pytorch:latest-cpu`) or build custom images without NVIDIA dependencies.

## SageMaker HyperPod AMI releases for Amazon EKS: July 15, 2025
<a name="sagemaker-hyperpod-release-ami-eks-20250715"></a>

**SageMaker HyperPod DLAMI for Amazon EKS support**

The AMIs include the following updates:

------
#### [ K8s v1.28 ]
+ **Latest NVIDIA Driver:** 550.163.01
+ **Default CUDA:** 12.4
+ **EFA Installer:** 1.38.0
+ **Neuron packages:**
  + **aws-neuronx-dkms.noarch:** 2.22.2.0-dkms
  + **aws-neuronx-oci-hook.x86_64:** 2.4.4.0-1
  + **aws-neuronx-tools.x86_64:** 2.18.3.0-1
  + **aws-neuron-dkms.noarch:** 2.3.26.0-dkms
  + **aws-neuron-k8-plugin.x86_64:** 1.9.3.0-1
  + **aws-neuron-k8-scheduler.x86_64:** 1.9.3.0-1
  + **aws-neuron-runtime.x86_64:** 1.6.24.0-1
  + **aws-neuron-runtime-base.x86_64:** 1.6.21.0-1
  + **aws-neuron-tools.x86_64:** 2.1.4.0-1
  + **aws-neuronx-collectives.x86_64:** 2.26.43.0_47cc904ea-1
  + **aws-neuronx-gpsimd-customop.x86_64:** 0.2.3.0-1
  + **aws-neuronx-gpsimd-customop-lib.x86_64:** 0.16.2.0-1
  + **aws-neuronx-gpsimd-tools.x86_64:** 0.16.1.0_0a6506a47-1
  + **aws-neuronx-k8-plugin.x86_64:** 2.26.26.0-1
  + **aws-neuronx-k8-scheduler.x86_64:** 2.26.26.0-1
  + **aws-neuronx-runtime-lib.x86_64:** 2.26.42.0_2ff3b5c7d-1
  + **aws-neuronx-tools.x86_64:** 2.24.54.0-1
  + **tensorflow-model-server-neuron.x86_64:** 2.8.0.2.3.0.0-0
  + **tensorflow-model-server-neuronx.x86_64:** 2.10.1.2.12.2.0-0

------
#### [ K8s v1.29 ]
+ **Nvidia Driver Version:** 550.163.01
+ **CUDA Version:** 12.4
+ **EFA Installer:** 1.38.0
+ **Neuron packages:**
  + **aws-neuronx-dkms.noarch:** 2.22.2.0-dkms
  + **aws-neuronx-oci-hook.x86_64:** 2.4.4.0-1
  + **aws-neuronx-tools.x86_64:** 2.18.3.0-1
  + **aws-neuron-dkms.noarch:** 2.3.26.0-dkms
  + **aws-neuron-k8-plugin.x86_64:** 1.9.3.0-1
  + **aws-neuron-k8-scheduler.x86_64:** 1.9.3.0-1
  + **aws-neuron-runtime.x86_64:** 1.6.24.0-1
  + **aws-neuron-runtime-base.x86_64:** 1.6.21.0-1
  + **aws-neuron-tools.x86_64:** 2.1.4.0-1
  + **aws-neuronx-collectives.x86_64:** 2.26.43.0_47cc904ea-1
  + **aws-neuronx-gpsimd-customop.x86_64:** 0.2.3.0-1
  + **aws-neuronx-gpsimd-customop-lib.x86_64:** 0.16.2.0-1
  + **aws-neuronx-gpsimd-tools.x86_64:** 0.16.1.0_0a6506a47-1
  + **aws-neuronx-k8-plugin.x86_64:** 2.26.26.0-1
  + **aws-neuronx-k8-scheduler.x86_64:** 2.26.26.0-1
  + **aws-neuronx-runtime-lib.x86_64:** 2.26.42.0_2ff3b5c7d-1
  + **aws-neuronx-tools.x86_64:** 2.24.54.0-1
  + **tensorflow-model-server-neuron.x86_64:** 2.8.0.2.3.0.0-0
  + **tensorflow-model-server-neuronx.x86_64:** 2.10.1.2.12.2.0-0

------
#### [ K8s v1.30 ]
+ **Nvidia Driver Version:** 550.163.01
+ **CUDA Version:** 12.4
+ **EFA installer version:** 1.38.0
+ **Neuron packages:**
  + **aws-neuronx-dkms.noarch:** 2.22.2.0-dkms
  + **aws-neuronx-oci-hook.x86_64:** 2.4.4.0-1
  + **aws-neuronx-tools.x86_64:** 2.18.3.0-1
  + **aws-neuron-dkms.noarch:** 2.3.26.0-dkms
  + **aws-neuron-k8-plugin.x86_64:** 1.9.3.0-1
  + **aws-neuron-k8-scheduler.x86_64:** 1.9.3.0-1
  + **aws-neuron-runtime.x86_64:** 1.6.24.0-1
  + **aws-neuron-runtime-base.x86_64:** 1.6.21.0-1
  + **aws-neuron-tools.x86_64:** 2.1.4.0-1
  + **aws-neuronx-collectives.x86_64:** 2.26.43.0_47cc904ea-1
  + **aws-neuronx-gpsimd-customop.x86_64:** 0.2.3.0-1
  + **aws-neuronx-gpsimd-customop-lib.x86_64:** 0.16.2.0-1
  + **aws-neuronx-gpsimd-tools.x86_64:** 0.16.1.0_0a6506a47-1
  + **aws-neuronx-k8-plugin.x86_64:** 2.26.26.0-1
  + **aws-neuronx-k8-scheduler.x86_64:** 2.26.26.0-1
  + **aws-neuronx-runtime-lib.x86_64:** 2.26.42.0_2ff3b5c7d-1
  + **aws-neuronx-tools.x86_64:** 2.24.54.0-1
  + **tensorflow-model-server-neuron.x86_64:** 2.8.0.2.3.0.0-0
  + **tensorflow-model-server-neuronx.x86_64:** 2.10.1.2.12.2.0-0

------
#### [ K8s v1.31 ]
+ **Nvidia Driver Version:** 550.163.01
+ **CUDA Version:** 12.4
+ **EFA installer version:** 1.38.0
+ **Neuron packages:**
  + **aws-neuronx-dkms.noarch:** 2.22.2.0-dkms
  + **aws-neuronx-oci-hook.x86_64:** 2.4.4.0-1
  + **aws-neuronx-tools.x86_64:** 2.18.3.0-1
  + **aws-neuron-dkms.noarch:** 2.3.26.0-dkms
  + **aws-neuron-k8-plugin.x86_64:** 1.9.3.0-1
  + **aws-neuron-k8-scheduler.x86_64:** 1.9.3.0-1
  + **aws-neuron-runtime.x86_64:** 1.6.24.0-1
  + **aws-neuron-runtime-base.x86_64:** 1.6.21.0-1
  + **aws-neuron-tools.x86_64:** 2.1.4.0-1
  + **aws-neuronx-collectives.x86_64:** 2.26.43.0_47cc904ea-1
  + **aws-neuronx-gpsimd-customop.x86_64:** 0.2.3.0-1
  + **aws-neuronx-gpsimd-customop-lib.x86_64:** 0.16.2.0-1
  + **aws-neuronx-gpsimd-tools.x86_64:** 0.16.1.0_0a6506a47-1
  + **aws-neuronx-k8-plugin.x86_64:** 2.26.26.0-1
  + **aws-neuronx-k8-scheduler.x86_64:** 2.26.26.0-1
  + **aws-neuronx-runtime-lib.x86_64:** 2.26.42.0_2ff3b5c7d-1
  + **aws-neuronx-tools.x86_64:** 2.24.54.0-1
  + **tensorflow-model-server-neuron.x86_64:** 2.8.0.2.3.0.0-0
  + **tensorflow-model-server-neuronx.x86_64:** 2.10.1.2.12.2.0-0

------
#### [ K8s v1.32 ]
+ **Nvidia Driver Version:** 550.163.01
+ **CUDA Version:** 12.4
+ **EFA installer version:** 1.38.0
+ **Neuron packages:**
  + **aws-neuronx-dkms.noarch:** 2.22.2.0-dkms
  + **aws-neuronx-oci-hook.x86_64:** 2.4.4.0-1
  + **aws-neuronx-tools.x86_64:** 2.18.3.0-1
  + **aws-neuron-dkms.noarch:** 2.3.26.0-dkms
  + **aws-neuron-k8-plugin.x86_64:** 1.9.3.0-1
  + **aws-neuron-k8-scheduler.x86_64:** 1.9.3.0-1
  + **aws-neuron-runtime.x86_64:** 1.6.24.0-1
  + **aws-neuron-runtime-base.x86_64:** 1.6.21.0-1
  + **aws-neuron-tools.x86_64:** 2.1.4.0-1
  + **aws-neuronx-collectives.x86_64:** 2.26.43.0_47cc904ea-1
  + **aws-neuronx-gpsimd-customop.x86_64:** 0.2.3.0-1
  + **aws-neuronx-gpsimd-customop-lib.x86_64:** 0.16.2.0-1
  + **aws-neuronx-gpsimd-tools.x86_64:** 0.16.1.0_0a6506a47-1
  + **aws-neuronx-k8-plugin.x86_64:** 2.26.26.0-1
  + **aws-neuronx-k8-scheduler.x86_64:** 2.26.26.0-1
  + **aws-neuronx-runtime-lib.x86_64:** 2.26.42.0_2ff3b5c7d-1
  + **aws-neuronx-tools.x86_64:** 2.24.54.0-1
  + **tensorflow-model-server-neuron.x86_64:** 2.8.0.2.3.0.0-0
  + **tensorflow-model-server-neuronx.x86_64:** 2.10.1.2.12.2.0-0

------

## SageMaker HyperPod AMI releases for Amazon EKS: June 09, 2025
<a name="sagemaker-hyperpod-release-ami-eks-20250609"></a>

**SageMaker HyperPod DLAMI for Amazon EKS support**

------
#### [ Neuron SDK Updates ]
+ **aws-neuronx-dkms.noarch:** 2.21.37.0 (from 2.20.74.0)

------

## SageMaker HyperPod AMI releases for Amazon EKS: May 22, 2025
<a name="sagemaker-hyperpod-release-ami-eks-20250522"></a>

**AMI general updates**

**SageMaker HyperPod DLAMI for Amazon EKS support**

------
#### [ Deep Learning Base AMI AL2 ]
+ **Latest NVIDIA Driver:** 550.163.01
+ **CUDA Stack updates:**
  + **Default CUDA:** 12.1
  + **NCCL Version:** 2.22.3
+ **EFA Installer:** 1.38.0
+ **Amazon OFI NCCL:** 1.13.2
+ **Linux Kernel:** 5.10
+ **GDRCopy:** 2.4

**Important**  
**NVIDIA Container Toolkit 1.17.4 update:** CUDA compat libraries mounting is now disabled
**EFA Updates from 1.37 to 1.38:**  
Amazon OFI NCCL plugin now located in /opt/amazon/ofi-nccl
Previous location /opt/aws-ofi-nccl/ is deprecated

------
#### [ Neuron SDK Updates ]
+ **aws-neuronx-dkms.noarch:** 2.20.74.0 (from 2.20.28.0)
+ **aws-neuronx-collectives.x86_64:** 2.25.65.0_9858ac9a1-1 (from 2.24.59.0_838c7fc8b-1)
+ **aws-neuronx-runtime-lib.x86_64:** 2.25.57.0_166c7a468-1 (from 2.24.53.0_f239092cc-1)
+ **aws-neuronx-tools.x86_64:** 2.23.9.0 (from 2.22.61.0)
+ **aws-neuronx-gpsimd-customop-lib.x86_64:** 0.15.12.0 (from 0.14.12.0)
+ **aws-neuronx-gpsimd-tools.x86_64:** 0.15.1.0_5d31b6a3f (from 0.14.6.0_241eb69f4)
+ **aws-neuronx-k8-plugin.x86_64:** 2.25.24.0 (from 2.24.23.0)

**Support notes:**
+ AMI components including CUDA versions may be removed or changed based on framework support policy
+ Kernel version is pinned for compatibility. Users should avoid updates unless required for security patches
+ For EC2 instances with multiple network cards, please refer to EFA configuration guide for proper setup

------

## SageMaker HyperPod AMI releases for Amazon EKS: May 07, 2025
<a name="sagemaker-hyperpod-release-ami-eks-20250507"></a>

------
#### [ Installed the latest version of Amazon Neuron SDK ]
+ **tensorflow-model-server-neuron.x86_64** 2.8.0.2.3.0.0-0 neuron

------

## SageMaker HyperPod AMI releases for Amazon EKS: April 28, 2025
<a name="sagemaker-hyperpod-release-ami-eks-20250428"></a>

**Improvements for K8s**
+ Upgraded NVIDIA driver from version 550.144.03 to 550.163.01. This upgrade is to address Common Vulnerabilities and Exposures (CVEs) present in the [NVIDIA GPU Display Security Bulletin for April 2025](https://nvidia.custhelp.com/app/answers/detail/a_id/5630).

**SageMaker HyperPod DLAMI for Amazon EKS support**

------
#### [ Installed the latest version of Amazon Neuron SDK ]
+ **aws-neuronx-dkms.noarch:** 2.20.28.0-dkms
+ **aws-neuronx-oci-hook.x86_64:** 2.4.4.0-1
+ **aws-neuronx-tools.x86_64:** 2.18.3.0-1
+ **aws-neuron-dkms.noarch:** 2.3.26.0-dkms
+ **aws-neuron-k8-plugin.x86_64:** 1.9.3.0-1
+ **aws-neuron-k8-scheduler.x86_64:** 1.9.3.0-1
+ **aws-neuron-runtime.x86_64:** 1.6.24.0-1
+ **aws-neuron-runtime-base.x86_64:** 1.6.21.0-1
+ **aws-neuron-tools.x86_64:** 2.1.4.0-1
+ **aws-neuronx-collectives.x86_64:** 2.24.59.0_838c7fc8b-1
+ **aws-neuronx-gpsimd-customop.x86_64:** 0.2.3.0-1
+ **aws-neuronx-gpsimd-customop-lib.x86_64:** 0.14.12.0-1
+ **aws-neuronx-gpsimd-tools.x86_64:** 0.14.6.0_241eb69f4-1
+ **aws-neuronx-k8-plugin.x86_64:** 2.24.23.0-1
+ **aws-neuronx-k8-scheduler.x86_64:** 2.24.23.0-1
+ **aws-neuronx-runtime-lib.x86_64:** 2.24.53.0_f239092cc-1
+ **aws-neuronx-tools.x86_64:** 2.22.61.0-1
+ **tensorflow-model-server-neuronx.x86_64:** 2.10.1.2.12.2.0-0

------

## SageMaker HyperPod AMI releases for Amazon EKS: April 18, 2025
<a name="sagemaker-hyperpod-release-ami-eks-20250418"></a>

**AMI general updates**
+ New SageMaker HyperPod AMI for Amazon EKS 1.32.1.

**SageMaker HyperPod DLAMI for Amazon EKS support**

The AMIs include the following:

------
#### [ Deep Learning EKS AMI 1.32.1 ]
+ **Amazon EKS Components**
  + Kubernetes Version: 1.32.1
  + Containerd Version: 1.7.27
  + Runc Version: 1.1.14
  + Amazon IAM Authenticator: 0.6.29
+ **Amazon SSM Agent:** 3.3.1611.0 
+ **Linux Kernel:** 5.10.235
+ **OSS Nvidia driver:** 550.163.01
+ **NVIDIA CUDA:** 12.4
+ **EFA Installer:** 1.38.0
+ **GDRCopy:** 2.4.1-1
+ **Nvidia container toolkit:** 1.17.6
+ **Amazon OFI NCCL:** 1.13.2
+ **aws-neuronx-tools:** 2.18.3.0
+ **aws-neuronx-runtime-lib:** 2.24.53.0
+ **aws-neuronx-oci-hook:** 2.4.4.0-1
+ **aws-neuronx-dkms:** 2.20.28.0
+ **aws-neuronx-collectives:** 2.24.59.0

------

## SageMaker HyperPod AMI releases for Amazon EKS: February 18, 2025
<a name="sagemaker-hyperpod-release-ami-eks-20250218"></a>

**Improvements for K8s**
+ Upgraded Nvidia container toolkit from version 1.17.3 to version 1.17.4.
+ Fixed the issue where customers were unable to connect to nodes after a reboot.
+ Upgraded Elastic Fabric Adapter (EFA) version from 1.37.0 to 1.38.0.
+ The EFA now includes the Amazon OFI NCCL plugin, which is located in the `/opt/amazon/ofi-nccl` directory instead of the original `/opt/aws-ofi-nccl/` path. If you need to update your `LD_LIBRARY_PATH` environment variable, make sure to modify the path to point to the new `/opt/amazon/ofi-nccl` location for the OFI NCCL plugin.
+ Removed the Emacs package from these DLAMIs. You can install Emacs from GNU Emacs.
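
The OFI NCCL plugin relocation above may leave stale entries in custom environments. As a hedged sketch (the starting `LD_LIBRARY_PATH` value is illustrative), the deprecated path can be rewritten in place:

```shell
# Rewrite occurrences of the deprecated OFI NCCL plugin path in LD_LIBRARY_PATH
# to the new /opt/amazon/ofi-nccl location. The starting value is a sample.
LD_LIBRARY_PATH="/opt/aws-ofi-nccl/lib:/usr/local/cuda/lib64"
LD_LIBRARY_PATH=$(printf '%s' "${LD_LIBRARY_PATH}" | sed 's|/opt/aws-ofi-nccl|/opt/amazon/ofi-nccl|g')
export LD_LIBRARY_PATH
echo "${LD_LIBRARY_PATH}"
```

Apply the same substitution in any lifecycle scripts or job launchers that reference the old `/opt/aws-ofi-nccl/` directory.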

**SageMaker HyperPod DLAMI for Amazon EKS support**

------
#### [ Installed the latest version of neuron SDK ]
+ **aws-neuronx-dkms.noarch:** 2.19.64.0-dkms @neuron
+ **aws-neuronx-oci-hook.x86_64:** 2.4.4.0-1 @neuron
+ **aws-neuronx-tools.x86_64:** 2.18.3.0-1 @neuron
+ **aws-neuronx-collectives.x86_64:** 2.23.135.0_3e70920f2-1 neuron
+ **aws-neuronx-gpsimd-customop.x86_64:** 0.2.3.0-1 neuron
+ **aws-neuronx-gpsimd-customop-lib.x86_64**
+ **aws-neuronx-gpsimd-tools.x86_64:** 0.13.2.0_94ba34927-1 neuron
+ **aws-neuronx-k8-plugin.x86_64:** 2.23.45.0-1 neuron
+ **aws-neuronx-k8-scheduler.x86_64:** 2.23.45.0-1 neuron
+ **aws-neuronx-runtime-lib.x86_64:** 2.23.112.0_9b5179492-1 neuron
+ **aws-neuronx-tools.x86_64:** 2.20.204.0-1 neuron
+ **tensorflow-model-server-neuronx.x86_64**

------

## SageMaker HyperPod AMI releases for Amazon EKS: January 22, 2025
<a name="sagemaker-hyperpod-release-ami-eks-20250122"></a>

**AMI general updates**
+ New SageMaker HyperPod AMI for Amazon EKS 1.31.2.

**SageMaker HyperPod DLAMI for Amazon EKS support**

The AMIs include the following:

------
#### [ Deep Learning EKS AMI 1.31 ]
+ **Amazon EKS Components**
  + Kubernetes Version: 1.31.2
  + Containerd Version: 1.7.23
  + Runc Version: 1.1.14
  + Amazon IAM Authenticator: 0.6.26
+ **Amazon SSM Agent:** 3.3.987
+ **Linux Kernel:** 5.10.230
+ **OSS Nvidia driver:** 550.127.05
+ **NVIDIA CUDA:** 12.4
+ **EFA Installer:** 1.37.0
+ **GDRCopy:** 2.4.1-1
+ **Nvidia container toolkit:** 1.17.3
+ **Amazon OFI NCCL:** 1.13.0
+ **aws-neuronx-tools:** 2.18.3
+ **aws-neuronx-runtime-lib:** 2.23.112.0
+ **aws-neuronx-oci-hook:** 2.4.4.0-1
+ **aws-neuronx-dkms:** 2.18.20.0
+ **aws-neuronx-collectives:** 2.23.133.0

------

## SageMaker HyperPod AMI releases for Amazon EKS: December 21, 2024
<a name="sagemaker-hyperpod-release-ami-eks-20241221"></a>

**SageMaker HyperPod DLAMI for Amazon EKS support**

The AMIs include the following:

------
#### [ K8s v1.28 ]
+ **Amazon EKS Components**
  + Kubernetes Version: 1.28.15
  + Containerd Version: 1.7.23
  + Runc Version: 1.1.14
  + Amazon IAM Authenticator: 0.6.26
+ **Amazon SSM Agent:** 3.3.987
+ **Linux Kernel:** 5.10.228
+ **OSS NVIDIA driver:** 550.127.05
+ **NVIDIA CUDA:** 12.4
+ **EFA Installer:** 1.37.0
+ **GDRCopy:** 2.4
+ **NVIDIA container toolkit:** 1.17.3
+ **Amazon OFI NCCL:** 1.13.0
+ **aws-neuronx-tools:** 2.18.3.0-1
+ **aws-neuronx-runtime-lib:** 2.23.112.0
+ **aws-neuronx-oci-hook:** 2.4.4.0-1
+ **aws-neuronx-dkms:** 2.18.20.0
+ **aws-neuronx-collectives:** 2.23.135.0

------
#### [ K8s v1.29 ]
+ **Amazon EKS Components**
  + Kubernetes Version: 1.29.10
  + Containerd Version: 1.7.23
  + Runc Version: 1.1.14
  + Amazon IAM Authenticator: 0.6.26
+ **Amazon SSM Agent:** 3.3.987
+ **Linux Kernel:** 5.15.0
+ **OSS Nvidia driver:** 550.127.05
+ **NVIDIA CUDA:** 12.4
+ **EFA Installer:** 1.37.0
+ **GDRCopy:** 2.4
+ **Nvidia container toolkit:** 1.17.3
+ **Amazon OFI NCCL:** 1.13.0
+ **aws-neuronx-tools:** 2.18.3.0-1
+ **aws-neuronx-runtime-lib:** 2.23.112.0
+ **aws-neuronx-oci-hook:** 2.4.4.0-1
+ **aws-neuronx-dkms:** 2.18.20.0
+ **aws-neuronx-collectives:** 2.23.135.0

------
#### [ K8s v1.30 ]
+ **Amazon EKS Components**
  + Kubernetes Version: 1.30.6
  + Containerd Version: 1.7.23
  + Runc Version: 1.1.14
  + Amazon IAM Authenticator: 0.6.26
+ **Amazon SSM Agent:** 3.3.987.0
+ **Linux Kernel:** 5.10.228
+ **OSS Nvidia driver:** 550.127.05
+ **NVIDIA CUDA:** 12.4
+ **EFA Installer:** 1.37.0
+ **GDRCopy:** 2.4
+ **Nvidia container toolkit:** 1.17.3
+ **Amazon OFI NCCL:** 1.13.0
+ **aws-neuronx-tools:** 2.18.3.0-1
+ **aws-neuronx-runtime-lib:** 2.23.112.0
+ **aws-neuronx-oci-hook:** 2.4.4.0-1
+ **aws-neuronx-dkms:** 2.18.20.0
+ **aws-neuronx-collectives:** 2.23.135.0

------

## SageMaker HyperPod AMI releases for Amazon EKS: December 13, 2024
<a name="sagemaker-hyperpod-release-ami-eks-20241213"></a>

**SageMaker HyperPod DLAMI for Amazon EKS upgrade**
+ Updated SSM Agent to version `3.3.1311.0`.

## SageMaker HyperPod AMI releases for Amazon EKS: November 24, 2024
<a name="sagemaker-hyperpod-release-ami-eks-20241124"></a>

**AMI general updates**
+ Released in `MEL` (Melbourne) Region.
+ Updated SageMaker HyperPod base DLAMI to the following versions:
  + Kubernetes: 2024-11-01.

## SageMaker HyperPod AMI releases for Amazon EKS: November 15, 2024
<a name="sagemaker-hyperpod-release-ami-eks-20241115"></a>

**SageMaker HyperPod DLAMI for Amazon EKS support**

The AMIs include the following:

------
#### [ Deep Learning EKS AMI 1.28 ]
+ **Amazon EKS Components**
  + Kubernetes Version: 1.28.15
  + Containerd Version: 1.7.23
  + Runc Version: 1.1.14
  + Amazon IAM Authenticator: 0.6.26
+ **Amazon SSM Agent:** 3.3.987
+ **Linux Kernel:** 5.10.228
+ **OSS NVIDIA driver:** 550.127.05
+ **NVIDIA CUDA:** 12.4
+ **EFA Installer:** 1.34.0
+ **GDRCopy:** 2.4
+ **NVIDIA container toolkit:** 1.17.3
+ **Amazon OFI NCCL:** 1.11.0
+ **aws-neuronx-tools:** 2.18.3.0-1
+ **aws-neuronx-runtime-lib:** 2.22.19.0
+ **aws-neuronx-oci-hook:** 2.4.4.0-1
+ **aws-neuronx-dkms:** 2.18.20.0
+ **aws-neuronx-collectives:** 2.22.33.0

------
#### [ Deep Learning EKS AMI 1.29 ]
+ **Amazon EKS Components**
  + Kubernetes Version: 1.29.10
  + Containerd Version: 1.7.23
  + Runc Version: 1.1.14
  + Amazon IAM Authenticator: 0.6.26
+ **Amazon SSM Agent:** 3.3.987
+ **Linux Kernel:** 5.10.228
+ **OSS Nvidia driver:** 550.127.05
+ **NVIDIA CUDA:** 12.4
+ **EFA Installer:** 1.34.0
+ **GDRCopy:** 2.4
+ **Nvidia container toolkit:** 1.17.3
+ **Amazon OFI NCCL:** 1.11.0
+ **aws-neuronx-tools:** 2.18.3.0-1
+ **aws-neuronx-runtime-lib:** 2.22.19.0
+ **aws-neuronx-oci-hook:** 2.4.4.0-1
+ **aws-neuronx-dkms:** 2.18.20.0
+ **aws-neuronx-collectives:** 2.22.33.0

------
#### [ Deep Learning EKS AMI 1.30 ]
+ **Amazon EKS Components**
  + Kubernetes Version: 1.30.6
  + Containerd Version: 1.7.23
  + Runc Version: 1.1.14
  + Amazon IAM Authenticator: 0.6.26
+ **Amazon SSM Agent:** 3.3.987
+ **Linux Kernel:** 5.10.228
+ **OSS Nvidia driver:** 550.127.05
+ **NVIDIA CUDA:** 12.4
+ **EFA Installer:** 1.34.0
+ **GDRCopy:** 2.4
+ **Nvidia container toolkit:** 1.17.3
+ **Amazon OFI NCCL:** 1.11.0
+ **aws-neuronx-tools:** 2.18.3.0-1
+ **aws-neuronx-runtime-lib:** 2.22.19.0
+ **aws-neuronx-oci-hook:** 2.4.4.0-1
+ **aws-neuronx-dkms:** 2.18.20.0
+ **aws-neuronx-collectives:** 2.22.33.0

------

## SageMaker HyperPod AMI releases for Amazon EKS: November 11, 2024
<a name="sagemaker-hyperpod-release-ami-eks-20241111"></a>

**AMI general updates**
+ Updated SageMaker HyperPod DLAMI with Amazon EKS versions 1.28.13, 1.29.8, 1.30.4.

## SageMaker HyperPod AMI releases for Amazon EKS: October 21, 2024
<a name="sagemaker-hyperpod-release-ami-eks-20241021"></a>

**AMI general updates**
+ Updated SageMaker HyperPod base DLAMI to the following versions:
  + Amazon EKS: 1.28.11, 1.29.6, 1.30.2.

## SageMaker HyperPod AMI releases for Amazon EKS: September 10, 2024
<a name="sagemaker-hyperpod-release-ami-eks-20240910"></a>

**SageMaker HyperPod DLAMI for Amazon EKS support**

The AMIs include the following:

------
#### [ Deep Learning EKS AMI 1.28 ]
+ **Amazon EKS Components**
  + Kubernetes Version: 1.28.11
  + Containerd Version: 1.7.20
  + Runc Version: 1.1.11
  + Amazon IAM Authenticator: 0.6.21
+ **Amazon SSM Agent:** 3.3.380
+ **Linux Kernel:** 5.10.223
+ **OSS NVIDIA driver:** 535.183.01
+ **NVIDIA CUDA:** 12.2
+ **EFA Installer:** 1.32.0
+ **GDRCopy:** 2.4
+ **NVIDIA container toolkit:** 1.16.1
+ **Amazon OFI NCCL:** 1.9.1
+ **aws-neuronx-tools:** 2.18.3.0-1
+ **aws-neuronx-runtime-lib:** 2.21.41.0
+ **aws-neuronx-oci-hook:** 2.4.4.0-1
+ **aws-neuronx-dkms:** 2.17.17.0
+ **aws-neuronx-collectives:** 2.21.46.0

------
#### [ Deep Learning EKS AMI 1.29 ]
+ **Amazon EKS Components**
  + Kubernetes Version: 1.29.6
  + Containerd Version: 1.7.20
  + Runc Version: 1.1.11
  + Amazon IAM Authenticator: 0.6.21
+ **Amazon SSM Agent:** 3.3.380
+ **Linux Kernel:** 5.10.223
+ **OSS NVIDIA driver:** 535.183.01
+ **NVIDIA CUDA:** 12.2
+ **EFA Installer:** 1.32.0
+ **GDRCopy:** 2.4
+ **NVIDIA container toolkit:** 1.16.1
+ **Amazon OFI NCCL:** 1.9.1
+ **aws-neuronx-tools:** 2.18.3.0-1
+ **aws-neuronx-runtime-lib:** 2.21.41.0
+ **aws-neuronx-oci-hook:** 2.4.4.0-1
+ **aws-neuronx-dkms:** 2.17.17.0
+ **aws-neuronx-collectives:** 2.21.46.0

------
#### [ Deep Learning EKS AMI 1.30 ]
+ **Amazon EKS Components**
  + Kubernetes Version: 1.30.2
  + Containerd Version: 1.7.20
  + Runc Version: 1.1.11
  + Amazon IAM Authenticator: 0.6.21
+ **Amazon SSM Agent:** 3.3.380
+ **Linux Kernel:** 5.10.223
+ **OSS NVIDIA driver:** 535.183.01
+ **NVIDIA CUDA:** 12.2
+ **EFA Installer:** 1.32.0
+ **GDRCopy:** 2.4
+ **NVIDIA container toolkit:** 1.16.1
+ **Amazon OFI NCCL:** 1.9.1
+ **aws-neuronx-tools:** 2.18.3.0-1
+ **aws-neuronx-runtime-lib:** 2.21.41.0
+ **aws-neuronx-oci-hook:** 2.4.4.0-1
+ **aws-neuronx-dkms:** 2.17.17.0
+ **aws-neuronx-collectives:** 2.21.46.0

------

# Public AMI releases
<a name="sagemaker-hyperpod-release-public-ami"></a>

The following release notes track the latest updates for Amazon SageMaker HyperPod public AMI releases for Amazon EKS orchestration. Each release note includes a summarized list of packages pre-installed or pre-configured in the SageMaker HyperPod DLAMIs for Amazon EKS support. Each DLAMI is built on AL2023 and supports a specific Kubernetes version. For information about Amazon SageMaker HyperPod feature releases, see [Amazon SageMaker HyperPod release notes](sagemaker-hyperpod-release-notes.md).

This page is regularly updated to provide comprehensive AMI lifecycle management information, including security vulnerabilities, deprecation announcements, and patching recommendations. As part of a commitment to maintaining secure and up-to-date infrastructure, SageMaker AI continuously monitors all HyperPod public AMIs for critical vulnerabilities using automated scanning workflows. When critical security issues are identified, AMIs are systematically deprecated with appropriate migration guidance. Regular updates include Common Vulnerabilities and Exposures (CVE) remediation status, compliance findings, and recommended actions so that you can maintain secure HyperPod environments while minimizing operational disruption during AMI transitions.
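To check which public HyperPod AMIs are currently available and move a cluster to the latest software, you can query the AMI catalog and trigger a cluster software update. The following is a minimal sketch, assuming the AWS CLI is configured with appropriate permissions; `my-hyperpod-cluster` is a placeholder cluster name, and the name filter pattern is inferred from the AMI names shown in these release notes.

```shell
# List the newest public HyperPod EKS AMIs by creation date.
# The name pattern follows the AMI names in these release notes
# (for example, "HyperPod EKS 1.32 x86_64 AMI Amazon Linux 2 ...").
aws ec2 describe-images \
  --owners amazon \
  --filters "Name=name,Values=HyperPod EKS 1.32*" \
  --query 'reverse(sort_by(Images, &CreationDate))[:3].[Name,ImageId,CreationDate]' \
  --output table

# Patch an existing cluster to the latest HyperPod software.
# Replace my-hyperpod-cluster with your cluster name.
aws sagemaker update-cluster-software \
  --cluster-name my-hyperpod-cluster
```

Note that `update-cluster-software` rolls instance groups forward to the latest HyperPod software; review the update behavior for your cluster before running it against production workloads.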

## SageMaker HyperPod public AMI releases: August 04, 2025
<a name="sagemaker-hyperpod-release-public-ami-2025-08-04"></a>

Amazon SageMaker HyperPod now supports new public AMIs for Amazon EKS clusters. The AMIs include the following:

------
#### [ K8s v1.32 ]

AMI Name: HyperPod EKS 1.32 x86\_64 AMI Amazon Linux 2 2025080407
+ **Amazon EKS Components**
  + Kubernetes Version: 1.32.3
  + Containerd Version: 1.7.23
  + Runc Version: 1.2.6
  + Amazon IAM Authenticator: 0.6.29
+ **Amazon SSM Agent:** 3.3.2299.0
+ **Linux Kernel:** 5.10.238-234.956.amzn2.x86\_64
+ **OSS NVIDIA driver:** 550.163.01
+ **NVIDIA CUDA:** 12.2
+ **EFA Installer:** 1.38.0
+ **GDRCopy:** 2.4.1
+ **NVIDIA container toolkit:** 1.17.8
+ **Amazon OFI NCCL:** 1.13.0-aws
+ **Neuron packages:**
  + **aws-neuronx-dkms.noarch:** 2.22.2.0-dkms
  + **aws-neuronx-oci-hook.x86\_64:** 2.4.4.0-1
  + **aws-neuronx-tools.x86\_64:** 2.18.3.0-1
  + **aws-neuron-dkms.noarch:** 2.3.26.0-dkms
  + **aws-neuron-k8-plugin.x86\_64:** 1.9.3.0-1
  + **aws-neuron-k8-scheduler.x86\_64:** 1.9.3.0-1
  + **aws-neuron-runtime.x86\_64:** 1.6.24.0-1
  + **aws-neuron-runtime-base.x86\_64:** 1.6.21.0-1
  + **aws-neuron-tools.x86\_64:** 2.1.4.0-1
  + **aws-neuronx-collectives.x86\_64:** 2.27.34.0\_ec8cd5e8b-1
  + **aws-neuronx-gpsimd-customop.x86\_64:** 0.2.3.0-1
  + **aws-neuronx-gpsimd-customop-lib.x86\_64:** 0.17.1.0-1
  + **aws-neuronx-gpsimd-tools.x86\_64:** 0.17.0.0\_aacc27699-1
  + **aws-neuronx-k8-plugin.x86\_64:** 2.27.7.0-1
  + **aws-neuronx-k8-scheduler.x86\_64:** 2.27.7.0-1
  + **aws-neuronx-runtime-lib.x86\_64:** 2.27.23.0\_8deec4dbf-1
  + **aws-neuronx-tools.x86\_64:** 2.25.145.0-1
  + **tensorflow-model-server-neuron.x86\_64:** 2.8.0.2.3.0.0-0
  + **tensorflow-model-server-neuronx.x86\_64:** 2.10.1.2.12.2.0-0

------
#### [ K8s v1.30 ]

AMI Name: HyperPod EKS 1.30 x86\_64 AMI Amazon Linux 2 2025080407
+ **Amazon EKS Components**
  + Kubernetes Version: 1.30.11
  + Containerd Version: 1.7.\_
  + Runc Version: 1.2.6
  + Amazon IAM Authenticator: 0.6.28
+ **Amazon SSM Agent:** 3.3.2299.0
+ **Linux Kernel:** 5.10.238-234.956.amzn2.x86\_64
+ **OSS NVIDIA driver:** 550.163.01
+ **NVIDIA CUDA:** 12.2
+ **EFA Installer:** 1.38.0
+ **GDRCopy:** 2.4.1
+ **NVIDIA container toolkit:** 1.17.8
+ **Amazon OFI NCCL:** 1.13.0-aws
+ **Neuron packages:**
  + **aws-neuronx-dkms.noarch:** 2.22.2.0-dkms
  + **aws-neuronx-oci-hook.x86\_64:** 2.4.4.0-1
  + **aws-neuronx-tools.x86\_64:** 2.18.3.0-1
  + **aws-neuron-dkms.noarch:** 2.3.26.0-dkms
  + **aws-neuron-k8-plugin.x86\_64:** 1.9.3.0-1
  + **aws-neuron-k8-scheduler.x86\_64:** 1.9.3.0-1
  + **aws-neuron-runtime.x86\_64:** 1.6.24.0-1
  + **aws-neuron-runtime-base.x86\_64:** 1.6.21.0-1
  + **aws-neuron-tools.x86\_64:** 2.1.4.0-1
  + **aws-neuronx-collectives.x86\_64:** 2.27.34.0\_ec8cd5e8b-1
  + **aws-neuronx-gpsimd-customop.x86\_64:** 0.2.3.0-1
  + **aws-neuronx-gpsimd-customop-lib.x86\_64:** 0.17.1.0-1
  + **aws-neuronx-gpsimd-tools.x86\_64:** 0.17.0.0\_aacc27699-1
  + **aws-neuronx-k8-plugin.x86\_64:** 2.27.7.0-1
  + **aws-neuronx-k8-scheduler.x86\_64:** 2.27.7.0-1
  + **aws-neuronx-runtime-lib.x86\_64:** 2.27.23.0\_8deec4dbf-1
  + **aws-neuronx-tools.x86\_64:** 2.25.145.0-1
  + **tensorflow-model-server-neuron.x86\_64:** 2.8.0.2.3.0.0-0
  + **tensorflow-model-server-neuronx.x86\_64:** 2.10.1.2.12.2.0-0

------
#### [ K8s v1.31 ]

AMI Name: HyperPod EKS 1.31 x86\_64 AMI Amazon Linux 2 2025080407
+ **Amazon EKS Components**
  + Kubernetes Version: 1.31.7
  + Containerd Version: 1.7.\_
  + Runc Version: 1.2.6
  + Amazon IAM Authenticator: 0.6.28
+ **Amazon SSM Agent:** 3.3.2299.0
+ **Linux Kernel:** 5.10.238-234.956.amzn2.x86\_64
+ **OSS NVIDIA driver:** 550.163.01
+ **NVIDIA CUDA:** 12.2
+ **EFA Installer:** 1.38.0
+ **GDRCopy:** 2.4.1
+ **NVIDIA container toolkit:** 1.17.8
+ **Amazon OFI NCCL:** 1.13.0-aws
+ **Neuron packages:**
  + **aws-neuronx-dkms.noarch:** 2.22.2.0-dkms
  + **aws-neuronx-oci-hook.x86\_64:** 2.4.4.0-1
  + **aws-neuronx-tools.x86\_64:** 2.18.3.0-1
  + **aws-neuron-dkms.noarch:** 2.3.26.0-dkms
  + **aws-neuron-k8-plugin.x86\_64:** 1.9.3.0-1
  + **aws-neuron-k8-scheduler.x86\_64:** 1.9.3.0-1
  + **aws-neuron-runtime.x86\_64:** 1.6.24.0-1
  + **aws-neuron-runtime-base.x86\_64:** 1.6.21.0-1
  + **aws-neuron-tools.x86\_64:** 2.1.4.0-1
  + **aws-neuronx-collectives.x86\_64:** 2.27.34.0\_ec8cd5e8b-1
  + **aws-neuronx-gpsimd-customop.x86\_64:** 0.2.3.0-1
  + **aws-neuronx-gpsimd-customop-lib.x86\_64:** 0.17.1.0-1
  + **aws-neuronx-gpsimd-tools.x86\_64:** 0.17.0.0\_aacc27699-1
  + **aws-neuronx-k8-plugin.x86\_64:** 2.27.7.0-1
  + **aws-neuronx-k8-scheduler.x86\_64:** 2.27.7.0-1
  + **aws-neuronx-runtime-lib.x86\_64:** 2.27.23.0\_8deec4dbf-1
  + **aws-neuronx-tools.x86\_64:** 2.25.145.0-1
  + **tensorflow-model-server-neuron.x86\_64:** 2.8.0.2.3.0.0-0
  + **tensorflow-model-server-neuronx.x86\_64:** 2.10.1.2.12.2.0-0

------
#### [ K8s v1.29 ]

AMI Name: HyperPod EKS 1.29 x86\_64 AMI Amazon Linux 2 2025080407
+ **Amazon EKS Components**
  + Kubernetes Version: 1.29.15
  + Containerd Version: 1.7.\_
  + Runc Version: 1.2.6
  + Amazon IAM Authenticator: 0.6.28
+ **Amazon SSM Agent:** 3.3.2299.0
+ **Linux Kernel:** 5.10.238-234.956.amzn2.x86\_64
+ **OSS NVIDIA driver:** 550.163.01
+ **NVIDIA CUDA:** 12.2
+ **EFA Installer:** 1.38.0
+ **GDRCopy:** 2.4.1
+ **NVIDIA container toolkit:** 1.17.8
+ **Amazon OFI NCCL:** 1.13.0-aws
+ **Neuron packages:**
  + **aws-neuronx-dkms.noarch:** 2.22.2.0-dkms
  + **aws-neuronx-oci-hook.x86\_64:** 2.4.4.0-1
  + **aws-neuronx-tools.x86\_64:** 2.18.3.0-1
  + **aws-neuron-dkms.noarch:** 2.3.26.0-dkms
  + **aws-neuron-k8-plugin.x86\_64:** 1.9.3.0-1
  + **aws-neuron-k8-scheduler.x86\_64:** 1.9.3.0-1
  + **aws-neuron-runtime.x86\_64:** 1.6.24.0-1
  + **aws-neuron-runtime-base.x86\_64:** 1.6.21.0-1
  + **aws-neuron-tools.x86\_64:** 2.1.4.0-1
  + **aws-neuronx-collectives.x86\_64:** 2.27.34.0\_ec8cd5e8b-1
  + **aws-neuronx-gpsimd-customop.x86\_64:** 0.2.3.0-1
  + **aws-neuronx-gpsimd-customop-lib.x86\_64:** 0.17.1.0-1
  + **aws-neuronx-gpsimd-tools.x86\_64:** 0.17.0.0\_aacc27699-1
  + **aws-neuronx-k8-plugin.x86\_64:** 2.27.7.0-1
  + **aws-neuronx-k8-scheduler.x86\_64:** 2.27.7.0-1
  + **aws-neuronx-runtime-lib.x86\_64:** 2.27.23.0\_8deec4dbf-1
  + **aws-neuronx-tools.x86\_64:** 2.25.145.0-1
  + **tensorflow-model-server-neuron.x86\_64:** 2.8.0.2.3.0.0-0
  + **tensorflow-model-server-neuronx.x86\_64:** 2.10.1.2.12.2.0-0

------
#### [ K8s v1.28 ]

AMI Name: HyperPod EKS 1.28 x86\_64 AMI Amazon Linux 2 2025080407
+ **Amazon EKS Components**
  + Kubernetes Version: 1.28.15
  + Containerd Version: 1.7.\_
  + Runc Version: 1.2.6
  + Amazon IAM Authenticator: 0.6.28
+ **Amazon SSM Agent:** 3.3.2299.0
+ **Linux Kernel:** 5.10.238-234.956.amzn2.x86\_64
+ **OSS NVIDIA driver:** 550.163.01
+ **NVIDIA CUDA:** 12.2
+ **EFA Installer:** 1.38.0
+ **GDRCopy:** 2.4.1
+ **NVIDIA container toolkit:** 1.17.8
+ **Amazon OFI NCCL:** 1.13.0-aws
+ **Neuron packages:**
  + **aws-neuronx-dkms.noarch:** 2.22.2.0-dkms
  + **aws-neuronx-oci-hook.x86\_64:** 2.4.4.0-1
  + **aws-neuronx-tools.x86\_64:** 2.18.3.0-1
  + **aws-neuron-dkms.noarch:** 2.3.26.0-dkms
  + **aws-neuron-k8-plugin.x86\_64:** 1.9.3.0-1
  + **aws-neuron-k8-scheduler.x86\_64:** 1.9.3.0-1
  + **aws-neuron-runtime.x86\_64:** 1.6.24.0-1
  + **aws-neuron-runtime-base.x86\_64:** 1.6.21.0-1
  + **aws-neuron-tools.x86\_64:** 2.1.4.0-1
  + **aws-neuronx-collectives.x86\_64:** 2.27.34.0\_ec8cd5e8b-1
  + **aws-neuronx-gpsimd-customop.x86\_64:** 0.2.3.0-1
  + **aws-neuronx-gpsimd-customop-lib.x86\_64:** 0.17.1.0-1
  + **aws-neuronx-gpsimd-tools.x86\_64:** 0.17.0.0\_aacc27699-1
  + **aws-neuronx-k8-plugin.x86\_64:** 2.27.7.0-1
  + **aws-neuronx-k8-scheduler.x86\_64:** 2.27.7.0-1
  + **aws-neuronx-runtime-lib.x86\_64:** 2.27.23.0\_8deec4dbf-1
  + **aws-neuronx-tools.x86\_64:** 2.25.145.0-1
  + **tensorflow-model-server-neuron.x86\_64:** 2.8.0.2.3.0.0-0
  + **tensorflow-model-server-neuronx.x86\_64:** 2.10.1.2.12.2.0-0

------