

# Configuration files


Amazon ParallelCluster uses YAML 1.1 files for configuration parameters.

**Topics**
+ [Cluster configuration file](cluster-configuration-file-v3.md)
+ [Build image configuration files](image-builder-configuration-file-v3.md)

# Cluster configuration file

Amazon ParallelCluster version 3 uses separate configuration files to control the definition of cluster infrastructure and the definition of custom AMIs. All configuration files use the YAML 1.1 format. Detailed information for each of these configuration files is linked below. For some example configurations, see [https://github.com/aws/aws-parallelcluster/tree/release-3.0/cli/tests/pcluster/example\_configs](https://github.com/aws/aws-parallelcluster/tree/release-3.0/cli/tests/pcluster/example_configs).

These objects are used for the Amazon ParallelCluster version 3 cluster configuration.

**Topics**
+ [Cluster configuration file properties](#cluster-configuration-file-v3.properties)
+ [`Imds` section](Imds-cluster-v3.md)
+ [`Image` section](Image-v3.md)
+ [`HeadNode` section](HeadNode-v3.md)
+ [`Scheduling` section](Scheduling-v3.md)
+ [`SharedStorage` section](SharedStorage-v3.md)
+ [`Iam` section](Iam-v3.md)
+ [`LoginNodes` section](LoginNodes-v3.md)
+ [`Monitoring` section](Monitoring-v3.md)
+ [`Tags` section](Tags-v3.md)
+ [`AdditionalPackages` section](AdditionalPackages-v3.md)
+ [`DirectoryService` section](DirectoryService-v3.md)
+ [`DeploymentSettings` section](DeploymentSettings-cluster-v3.md)

## Cluster configuration file properties


`Region` (**Optional**, `String`)  
Specifies the Amazon Web Services Region for the cluster. For example, `us-east-2`.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`CustomS3Bucket` (**Optional**, `String`)  
Specifies the name of an Amazon S3 bucket in your Amazon account that is used to store resources used by your clusters, such as the cluster configuration file, and to export logs. Amazon ParallelCluster maintains one Amazon S3 bucket in each Amazon Region that you create clusters in. By default, these Amazon S3 buckets are named `parallelcluster-hash-v1-DO-NOT-DELETE`.  
[Update policy: If this setting is changed, the update is not allowed. If you force the update, the new value will be ignored and the old value will be used.](using-pcluster-update-cluster-v3.md#update-policy-read-only-resource-bucket-v3)

`AdditionalResources` (**Optional**, `String`)  
Defines an additional Amazon CloudFormation template to launch along with the cluster. This additional template is used for creating resources that are outside of the cluster but are part of the cluster's lifecycle.  
The value must be an HTTPS URL to a public template, with all parameters provided.  
There is no default value.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)
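For illustration, the properties above might be combined at the top of a cluster configuration file as follows; the bucket name and template URL are hypothetical placeholders, not values from this guide:

```
Region: us-east-2
CustomS3Bucket: my-parallelcluster-bucket
AdditionalResources: https://my-bucket.s3.us-east-2.amazonaws.com/additional-resources.yaml
```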

# `Imds` section


**(Optional)** Specifies the global instance metadata service (IMDS) configuration.

```
Imds:
  ImdsSupport: string
```

## `Imds` properties


`ImdsSupport` (**Optional**, `String`)  
Specifies which IMDS versions are supported in the cluster nodes. Supported values are `v1.0` and `v2.0`. The default value is `v2.0`.  
If `ImdsSupport` is set to `v1.0`, both IMDSv1 and IMDSv2 are supported.  
If `ImdsSupport` is set to `v2.0`, only IMDSv2 is supported.  
For more information, see [Use IMDSv2](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html) in the *Amazon EC2 User Guide for Linux instances*.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)  
Starting with Amazon ParallelCluster 3.7.0, the `ImdsSupport` default value is `v2.0`. We recommend that you set `ImdsSupport` to `v2.0` and replace IMDSv1 with IMDSv2 in your custom actions calls.  
Support for [`Imds`](#Imds-cluster-v3) / [`ImdsSupport`](#yaml-cluster-Imds-ImdsSupport) is added with Amazon ParallelCluster version 3.3.0.

# `Image` section


**Note**  
Unsupported versions of the official AMIs distributed by Amazon ParallelCluster are made unavailable after 18 months of inactivity. These old images contain outdated software and can't receive support if issues occur. We strongly recommend that you move to the latest supported version.

**(Required)** Defines the operating system for the cluster.

```
Image:
  Os: string
  CustomAmi: string
```

## `Image` properties


`Os` (**Required**, `String`)  
Specifies the operating system to use for the cluster. The supported values are `alinux2`, `alinux2023`, `ubuntu2404`, `ubuntu2204`, `rhel8`, `rocky8`, `rhel9`, and `rocky9`.  
RedHat Enterprise Linux 8.7 (`rhel8`) is added starting in Amazon ParallelCluster version 3.6.0, and RedHat Enterprise Linux 9 (`rhel9`) is added starting in version 3.9.0.  
If you configure your cluster to use `rhel`, the on-demand cost for any instance type is higher than when you configure your cluster to use the other supported operating systems. For more information about pricing, see [On-Demand Pricing](https://aws.amazon.com/ec2/pricing/on-demand) and [How is Red Hat Enterprise Linux on Amazon EC2 offered and priced?](https://aws.amazon.com/partners/redhat/faqs/#Pricing_and_Billing).  
All Amazon commercial Regions support all of these operating systems. For details, see the [AWS documentation website](http://docs.amazonaws.cn/en_us/parallelcluster/latest/ug/Image-v3.html).  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)  
Amazon ParallelCluster 3.8.0 supports Rocky Linux 8, and Amazon ParallelCluster 3.9.0 supports Rocky Linux 9, but pre-built Rocky Linux AMIs (for x86 and ARM architectures) are not available for either version. You can create clusters with Rocky Linux 8 or 9 using custom AMIs. For more information, see [Operating system considerations](operating-systems-v3.md#OS-Consideration-v3).

`CustomAmi` (**Optional**, `String`)  
Specifies the ID of a custom AMI to use for the head and compute nodes instead of the default AMI. For more information, see [Amazon ParallelCluster AMI customization](custom-ami-v3.md).  
If the custom AMI requires additional permissions for its launch, these permissions must be added to both the user and head node policies.  
For example, if a custom AMI has an encrypted snapshot associated with it, the following additional policies are required in both the user and head node policies:    

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:DescribeKey",
                "kms:ReEncrypt*",
                "kms:CreateGrant",
                "kms:Decrypt"
            ],
            "Resource": [
                "arn:aws-cn:kms:us-east-1:111122223333:key/<AWS_KMS_KEY_ID>"
            ]
        }
    ]
}
```
To build a RedHat Enterprise Linux custom AMI, you must configure the OS to install the packages that are provided by the RHUI (Amazon) repositories: `rhel-<version>-baseos-rhui-rpms`, `rhel-<version>-appstream-rhui-rpms`, and `codeready-builder-for-rhel-<version>-rhui-rpms`. Moreover, the repositories on the custom AMI must contain `kernel-devel` packages at the same version as the running kernel.  

**Known limitations:**
+ Only RHEL 8.2 and later versions support FSx for Lustre.
+ RHEL 8.7 kernel version 4.18.0-425.3.1.el8 doesn't support FSx for Lustre.
+ Only RHEL 8.4 and later versions support EFA.
+ Amazon Linux 2023 doesn't support NICE DCV, because it doesn't include the graphical desktop environment that NICE DCV requires. For more information, see the official [NICE DCV documentation](https://docs.amazonaws.cn/dcv/).
To troubleshoot custom AMI validation warnings, see [Troubleshooting custom AMI issues](troubleshooting-v3-custom-amis.md).  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)
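As a sketch, an `Image` section that uses a custom AMI might look like the following; the AMI ID is a hypothetical placeholder:

```
Image:
  Os: alinux2
  CustomAmi: ami-0123456789abcdef0
```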

# `HeadNode` section


**(Required)** Specifies the configuration for the head node.

```
HeadNode:
  InstanceType: string
  Networking:
    SubnetId: string
    ElasticIp: string/boolean
    SecurityGroups:
      - string
    AdditionalSecurityGroups:
      - string
    Proxy:
      HttpProxyAddress: string
  DisableSimultaneousMultithreading: boolean
  Ssh:
    KeyName: string
    AllowedIps: string
  LocalStorage:
    RootVolume:
      Size: integer
      Encrypted: boolean
      VolumeType: string
      Iops: integer
      Throughput: integer
      DeleteOnTermination: boolean
    EphemeralVolume:
      MountDir: string
  SharedStorageType: string
  Dcv:
    Enabled: boolean
    Port: integer
    AllowedIps: string
  CustomActions:
    OnNodeStart:
      Sequence:
        - Script: string
          Args:
            - string
      Script: string
      Args:
        - string
    OnNodeConfigured:
      Sequence:
        - Script: string
          Args:
            - string
      Script: string
      Args:
        - string
    OnNodeUpdated:
      Sequence:
        - Script: string
          Args: 
            - string
      Script: string
      Args:
        - string
  Iam:
    InstanceRole: string
    InstanceProfile: string
    S3Access:
      - BucketName: string
        EnableWriteAccess: boolean
        KeyName: string
    AdditionalIamPolicies:
      - Policy: string
  Imds:
    Secured: boolean
  Image:
    CustomAmi: string
```

## `HeadNode` properties


`InstanceType` (**Required**, `String`)  
Specifies the Amazon EC2 instance type that's used for the head node. The architecture of the instance type must be the same as the architecture used for the Amazon Batch [`InstanceType`](Scheduling-v3.md#yaml-Scheduling-AwsBatchQueues-ComputeResources-InstanceTypes) or Slurm [`InstanceType`](Scheduling-v3.md#yaml-Scheduling-SlurmQueues-ComputeResources-InstanceType) setting.  
Amazon ParallelCluster doesn't support the following instance types for the `HeadNode` setting.  
+ hpc6id
If you define a p4d instance type, or another instance type that has multiple network interfaces or a network interface card, you must set [`ElasticIp`](#yaml-HeadNode-Networking-ElasticIp) to `true` to provide public access. Amazon public IPs can only be assigned to instances launched with a single network interface. In this case, we recommend that you use a [NAT gateway](https://docs.amazonaws.cn/vpc/latest/userguide/vpc-nat-gateway.html) to provide public access to the cluster compute nodes. For more information, see [Assign a public IPv4 address during instance launch](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/using-instance-addressing.html#public-ip-addresses) in the *Amazon EC2 User Guide for Linux Instances*.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)
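Putting the schema above into practice, a minimal `HeadNode` configuration might look like this; the instance type, subnet ID, and key name are illustrative placeholders:

```
HeadNode:
  InstanceType: c5.xlarge
  Networking:
    SubnetId: subnet-0123456789abcdef0
  Ssh:
    KeyName: my-key-pair
```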

`DisableSimultaneousMultithreading` (**Optional**, `Boolean`)  
If `true`, disables hyper-threading on the head node. The default value is `false`.  
Not all instance types can disable hyper-threading. For a list of instance types that support disabling hyper-threading, see [CPU cores and threads for each CPU core per instance type](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/instance-optimize-cpu.html#cpu-options-supported-instances-values) in the *Amazon EC2 User Guide*.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`SharedStorageType` (**Optional**, `String`)  
Specifies the type of storage used for internally shared data. Internally shared data includes the data that Amazon ParallelCluster uses to manage the cluster and the default shared `/home`, if `/home` isn't specified as a mount directory in the [`SharedStorage` section](SharedStorage-v3.md). For more details on internal shared data, see [Amazon ParallelCluster internal directories](directories-v3.md).  
If `Ebs`, which is the default storage type, the head node exports portions of its root volume as shared directories to the compute nodes and login nodes using NFS.  
If `Efs`, ParallelCluster creates an Amazon EFS file system to use for internal shared data and `/home`.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)  
When the cluster scales out, the `Ebs` storage type can become a performance bottleneck, because the head node shares data from its root volume with the compute nodes using NFS exports. With `Efs`, you avoid those NFS exports, and the bottlenecks associated with them, as your cluster scales out. As a rule of thumb, choose `Ebs` for maximum read/write performance with small files and installation workloads, and choose `Efs` for scale.
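For example, a head node that uses Amazon EFS for internal shared data and `/home`, rather than NFS exports from the root volume, might be sketched as follows; the instance type is an illustrative placeholder:

```
HeadNode:
  InstanceType: c5.xlarge
  SharedStorageType: Efs
```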

## `Networking`


**(Required)** Defines the networking configuration for the head node.

```
Networking:
  SubnetId: string
  ElasticIp: string/boolean
  SecurityGroups:
    - string
  AdditionalSecurityGroups:
    - string
  Proxy:
    HttpProxyAddress: string
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

### `Networking` properties


`SubnetId` (**Required**, `String`)  
Specifies the ID of an existing subnet in which to provision the head node.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`ElasticIp` (**Optional**, `String`)  
Creates or assigns an Elastic IP address to the head node. Supported values are `true`, `false`, or the ID of an existing Elastic IP address. The default is `false`.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`SecurityGroups` (**Optional**, `[String]`)  
List of Amazon VPC security group IDs to use for the head node. If this property is specified, these security groups replace the ones that Amazon ParallelCluster creates by default.  
Verify that the security groups are configured correctly for your [SharedStorage](SharedStorage-v3.md) systems.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

`AdditionalSecurityGroups` (**Optional**, `[String]`)  
List of additional Amazon VPC security group IDs to use for the head node.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

`Proxy` (**Optional**)  
Specifies the proxy settings for the head node.  

```
Proxy:
  HttpProxyAddress: string
```  
`HttpProxyAddress` (**Optional**, `String`)  
Defines an HTTP or HTTPS proxy server, typically `https://x.x.x.x:8080`.  
There is no default value.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

## `Ssh`


**(Optional)** Defines the configuration for SSH access to the head node.

```
Ssh:
  KeyName: string
  AllowedIps: string
```

[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

### `Ssh` properties


`KeyName` (**Optional**, `String`)  
Names an existing Amazon EC2 key pair to enable SSH access to the head node.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`AllowedIps` (**Optional**, `String`)  
Specifies the CIDR-formatted IP range or a prefix list ID for SSH connections to the head node. The default is `0.0.0.0/0`.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)
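For example, to restrict SSH access to a single range (the key pair name is a placeholder, and the range uses a documentation-reserved block):

```
Ssh:
  KeyName: my-key-pair
  AllowedIps: 203.0.113.0/24
```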

## `LocalStorage`


**(Optional)** Defines the local storage configuration for the head node.

```
LocalStorage:
  RootVolume:
    Size: integer
    Encrypted: boolean
    VolumeType: string
    Iops: integer
    Throughput: integer
    DeleteOnTermination: boolean
  EphemeralVolume:
    MountDir: string
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

### `LocalStorage` properties


`RootVolume` (**Required**)  
Specifies the root volume storage for the head node.  

```
RootVolume:
  Size: integer
  Encrypted: boolean
  VolumeType: string
  Iops: integer
  Throughput: integer
  DeleteOnTermination: boolean
```
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)    
`Size` (**Optional**, `Integer`)  
Specifies the head node root volume size in gibibytes (GiB). The default size comes from the AMI. Using a different size requires that the AMI supports `growroot`.   
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)  
`Encrypted` (**Optional**, `Boolean`)  
Specifies if the root volume is encrypted. The default value is `true`.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)  
`VolumeType` (**Optional**, `String`)  
Specifies the [ Amazon EBS volume type](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/EBSVolumeTypes.html). Supported values are `gp2`, `gp3`, `io1`, `io2`, `sc1`, `st1`, and `standard`. The default value is `gp3`.  
For more information, see [Amazon EBS volume types](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) in the *Amazon EC2 User Guide*.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)  
`Iops` (**Optional**, `Integer`)  
Defines the number of IOPS for `io1`, `io2`, and `gp3` type volumes.  
The default value, supported values, and `volume_iops` to `volume_size` ratio varies by `VolumeType` and `Size`.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)    
| `VolumeType` | Default `Iops` | Supported `Iops` values | Maximum `Iops` to `Size` ratio |
| --- | --- | --- | --- |
| `io1` | 100 | 100–64000 † | 50 IOPS per GiB. 5000 IOPS requires a `Size` of at least 100 GiB. |
| `io2` | 100 | 100–64000 (256000 for `io2` Block Express volumes) † | 500 IOPS per GiB. 5000 IOPS requires a `Size` of at least 10 GiB. |
| `gp3` | 3000 | 3000–16000 | 500 IOPS per GiB. 5000 IOPS requires a `Size` of at least 10 GiB. |
† Maximum IOPS is guaranteed only on [Instances built on the Nitro System](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/instance-types.html#ec2-nitro-instances) provisioned with more than 32,000 IOPS. Other instances guarantee up to 32,000 IOPS. Older `io1` volumes might not reach full performance unless you [modify the volume](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/ebs-modify-volume.html). `io2` Block Express volumes support `Iops` values up to 256000 on `R5b` instance types. For more information, see [`io2` Block Express volumes](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/ebs-volume-types.html#io2-block-express) in the *Amazon EC2 User Guide*.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)  
`Throughput` (**Optional**, `Integer`)  
Defines the throughput for `gp3` volume types, in MiB/s. This setting is valid only when `VolumeType` is `gp3`. The default value is `125`. Supported values: 125–1000 MiB/s.  
The ratio of `Throughput` to `Iops` can be no more than 0.25. The maximum throughput of 1000 MiB/s requires that the `Iops` setting is at least 4000.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)  
`DeleteOnTermination` (**Optional**, `Boolean`)  
Specifies whether the root volume should be deleted when the head node is terminated. The default value is `true`.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)
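The `gp3` constraints described above (`Iops` between 3000 and 16000, `Throughput` between 125 and 1000 MiB/s, and a `Throughput` to `Iops` ratio of at most 0.25) can be sketched as a quick pre-flight check before you submit a configuration. The function below is illustrative only, not part of the ParallelCluster CLI:

```python
def validate_gp3(iops: int = 3000, throughput: int = 125) -> list:
    """Return a list of violations of the gp3 root-volume limits described above."""
    errors = []
    if not 3000 <= iops <= 16000:
        errors.append("Iops must be between 3000 and 16000")
    if not 125 <= throughput <= 1000:
        errors.append("Throughput must be between 125 and 1000 MiB/s")
    elif throughput / iops > 0.25:
        errors.append("Throughput:Iops ratio must not exceed 0.25")
    return errors

# The defaults (3000 IOPS, 125 MiB/s) satisfy every constraint.
print(validate_gp3())  # []
# The maximum throughput of 1000 MiB/s requires at least 4000 IOPS (ratio 0.25).
print(validate_gp3(iops=3000, throughput=1000))
```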

`EphemeralVolume` (**Optional**)  
Specifies details for any instance store volume. For more information, see [Instance store volumes](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/InstanceStorage.html#instance-store-volumes) in the *Amazon EC2 User Guide*.  

```
EphemeralVolume:
  MountDir: string
```
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)    
`MountDir` (**Optional**, `String`)  
Specifies the mount directory for the instance store volume. The default is `/scratch`.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

## `Dcv`


**(Optional)** Defines configuration settings for the Amazon DCV server that runs on the head node.

For more information, see [Connect to the head and login nodes through Amazon DCV](dcv-v3.md).

```
Dcv:
  Enabled: boolean
  Port: integer
  AllowedIps: string
```

**Important**  
By default, the Amazon DCV port set up by Amazon ParallelCluster is open to all IPv4 addresses. However, you can connect to an Amazon DCV port only if you have the URL for the Amazon DCV session and connect to the session within 30 seconds of when the URL is returned from `pcluster dcv-connect`. Use the `AllowedIps` setting to further restrict access to the Amazon DCV port with a CIDR-formatted IP range, and use the `Port` setting to set a nonstandard port.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

### `Dcv` properties


`Enabled` (**Required**, `Boolean`)  
Specifies whether Amazon DCV is enabled on the head node. The default value is `false`.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)  
Amazon DCV automatically generates a self-signed certificate that's used to secure traffic between the Amazon DCV client and the Amazon DCV server that runs on the head node. To configure your own certificate, see [Amazon DCV HTTPS certificate](dcv-v3.md#dcv-v3-certificate).

`Port` (**Optional**, `Integer`)  
Specifies the port for Amazon DCV. The default value is `8443`.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`AllowedIps` (**Optional, Recommended**, `String`)  
Specifies the CIDR-formatted IP range for connections to Amazon DCV. This setting is used only when Amazon ParallelCluster creates the security group. The default value is `0.0.0.0/0`, which allows access from any internet address.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)
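Combining these properties, a `Dcv` configuration that enables Amazon DCV on a nonstandard port and restricts access to a single range might look like this (the range uses a documentation-reserved block):

```
Dcv:
  Enabled: true
  Port: 8444
  AllowedIps: 203.0.113.0/24
```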

## `CustomActions`


**(Optional)** Specifies custom scripts to run on the head node.

```
CustomActions:
  OnNodeStart:
    Sequence:
      - Script: string
        Args:
          - string
    Script: string
    Args:
      - string
  OnNodeConfigured:
    Sequence:
      - Script: string
        Args:
          - string
    Script: string
    Args:
      - string
  OnNodeUpdated:
    Sequence:
      - Script: string
        Args: 
          - string
    Script: string
    Args: 
      - string
```

### `CustomActions` properties


`OnNodeStart` (**Optional**)  
Specifies a single script or a sequence of scripts to run on the head node before any node deployment bootstrap action is started. For more information, see [Custom bootstrap actions](custom-bootstrap-actions-v3.md).    
 `Sequence` (**Optional**)  
List of scripts to run. Amazon ParallelCluster runs the scripts in the same order as they are listed in the configuration file, starting with the first.    
 `Script` (**Required**, `String`)  
Specifies the file to use. The file path can start with `https://` or `s3://`.  
 `Args` (**Optional**, `[String]`)  
List of arguments to pass to the script.  
 `Script` (**Required**, `String`)  
Specifies the file to use for a single script. The file path can start with `https://` or `s3://`.  
`Args` (**Optional**, `[String]`)  
List of arguments to pass to the single script.
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`OnNodeConfigured` (**Optional**)  
Specifies a single script or a sequence of scripts to run on the head node after the node bootstrap actions are complete. For more information, see [Custom bootstrap actions](custom-bootstrap-actions-v3.md).    
 `Sequence` (**Optional**)  
Specifies the list of scripts to run.    
 `Script` (**Required**, `String`)  
Specifies the file to use. The file path can start with `https://` or `s3://`.  
 `Args` (**Optional**, `[String]`)  
List of arguments to pass to the script.  
 `Script` (**Required**, `String`)  
Specifies the file to use for a single script. The file path can start with `https://` or `s3://`.  
 `Args` (**Optional**, `[String]`)  
List of arguments to pass to the single script.
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`OnNodeUpdated` (**Optional**)  
Specifies a single script or a sequence of scripts to run on the head node after node update actions are complete. For more information, see [Custom bootstrap actions](custom-bootstrap-actions-v3.md).    
 `Sequence` (**Optional**)  
Specifies the list of scripts to run.    
 `Script` (**Required**, `String`)  
Specifies the file to use. The file path can start with `https://` or `s3://`.  
 `Args` (**Optional**, `[String]`)  
List of arguments to pass to the script.  
 `Script` (**Required**, `String`)  
Specifies the file to use for the single script. The file path can start with `https://` or `s3://`.  
 `Args` (**Optional**, `[String]`)  
List of arguments to pass to the single script.
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)  
`OnNodeUpdated` is added starting with Amazon ParallelCluster 3.4.0.  
`Sequence` is added starting with Amazon ParallelCluster version 3.6.0. When you specify `Sequence`, you can list multiple scripts for a custom action. Amazon ParallelCluster continues to support configuring a custom action with a single script, without including `Sequence`.  
Amazon ParallelCluster doesn't support including both a single script and `Sequence` for the same custom action.
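As a sketch of the rules above, the following `CustomActions` section uses a single script for `OnNodeStart` and a `Sequence` for `OnNodeConfigured`; the bucket, host, and script names are hypothetical placeholders:

```
CustomActions:
  OnNodeStart:
    Script: s3://my-bucket/on-node-start.sh
    Args:
      - arg1
  OnNodeConfigured:
    Sequence:
      - Script: s3://my-bucket/install-tools.sh
        Args:
          - arg1
          - arg2
      - Script: https://my-host.example.com/post-install.sh
```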

## `Iam`


**(Optional)** Specifies either an instance role or an instance profile to use on the head node to override the default instance role or instance profile for the cluster.

```
Iam:
  InstanceRole: string
  InstanceProfile: string
  S3Access:
    - BucketName: string
      EnableWriteAccess: boolean
      KeyName: string
  AdditionalIamPolicies:
    - Policy: string
```

[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

### `Iam` properties


`InstanceProfile` (**Optional**, `String`)  
Specifies an instance profile to override the default head node instance profile. You can't specify both `InstanceProfile` and `InstanceRole`. The format is `arn:Partition:iam::Account:instance-profile/InstanceProfileName`.  
If this is specified, the `S3Access` and `AdditionalIamPolicies` settings can't be specified.  
We recommend that you specify one or both of the `S3Access` and `AdditionalIamPolicies` settings because features added to Amazon ParallelCluster often require new permissions.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`InstanceRole` (**Optional**, `String`)  
Specifies an instance role to override the default head node instance role. You can't specify both `InstanceProfile` and `InstanceRole`. The format is `arn:Partition:iam::Account:role/RoleName`.  
If this is specified, the `S3Access` and `AdditionalIamPolicies` settings can't be specified.  
We recommend that you specify one or both of the `S3Access` and `AdditionalIamPolicies` settings because features added to Amazon ParallelCluster often require new permissions.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

### `S3Access`


`S3Access` (**Optional**)  
Specifies a bucket. This is used to generate policies to grant the specified access to the bucket.  
If this is specified, the `InstanceProfile` and `InstanceRole` settings can't be specified.  
We recommend that you specify one or both of the `S3Access` and `AdditionalIamPolicies` settings because features added to Amazon ParallelCluster often require new permissions.  

```
S3Access:
  - BucketName: string
    EnableWriteAccess: boolean
    KeyName: string
```
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)    
`BucketName` (**Required**, `String`)  
The name of the bucket.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)  
`KeyName` (**Optional**, `String`)  
The key for the bucket. The default value is "`*`".  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)  
` EnableWriteAccess` (**Optional**, `Boolean`)  
Indicates whether write access is enabled for the bucket. The default value is `false`.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)
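For example, to grant read-only access to one bucket prefix and read/write access to a second bucket (the bucket names and prefix are placeholders):

```
S3Access:
  - BucketName: my-input-bucket
    KeyName: datasets/*
  - BucketName: my-results-bucket
    EnableWriteAccess: true
```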

### `AdditionalIamPolicies`


`AdditionalIamPolicies` (**Optional**)  
Specifies a list of Amazon Resource Names (ARNs) of IAM policies for Amazon EC2. This list is attached to the root role used for the head node in addition to the permissions required by Amazon ParallelCluster.  
An IAM policy name and its ARN are different. Names can't be used.  
If this is specified, the `InstanceProfile` and `InstanceRole` settings can't be specified.  
We recommend that you use `AdditionalIamPolicies` because `AdditionalIamPolicies` are added to the permissions that Amazon ParallelCluster requires, and the `InstanceRole` must include all permissions required. The permissions required often change from release to release as features are added.  
There is no default value.  

```
AdditionalIamPolicies:
  - Policy: string
```
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)    
`Policy` (**Optional**, `[String]`)  
List of IAM policies.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)
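For example, to attach the Amazon-managed `AmazonSSMManagedInstanceCore` policy by ARN (shown with the `aws-cn` partition used elsewhere in this guide; adjust the partition for your Region):

```
AdditionalIamPolicies:
  - Policy: arn:aws-cn:iam::aws:policy/AmazonSSMManagedInstanceCore
```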

## `Imds`


**(Optional)** Specifies the properties for instance metadata service (IMDS). For more information, see [How instance metadata service version 2 works](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html#instance-metadata-v2-how-it-works) in the *Amazon EC2 User Guide*.

```
Imds:
    Secured: boolean
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

### `Imds` properties


`Secured` (**Optional**, `Boolean`)  
If `true`, restricts access to the head node's IMDS (and the instance profile credentials) to a subset of superusers.  
If `false`, every user in the head node has access to the head node's IMDS.  

The following users are permitted access to the head node's IMDS:
+ The root user
+ The cluster administrative user (`pc-cluster-admin` by default)
+ The operating system specific default user (`ec2-user` on Amazon Linux 2 and RedHat, and `ubuntu` on Ubuntu 18.04)
The default is `true`.  
The `default` users are responsible for ensuring that a cluster has the permissions it needs to interact with Amazon resources. If you disable `default` user IMDS access, Amazon ParallelCluster can't manage the compute nodes and stops working. Don't disable `default` user IMDS access.  
When a user is granted access to the head node's IMDS, they can use the permissions included in the [head node's instance profile](iam-roles-in-parallelcluster-v3.md). For example, they can use these permissions to launch Amazon EC2 instances or to read the password for an AD domain that the cluster is configured to use for authentication.  
To restrict IMDS access, Amazon ParallelCluster manages a chain of `iptables` rules.  
Cluster users with `sudo` access can selectively enable or disable access to the head node's IMDS for other individual users, including `default` users, by running the command:  

```
$ sudo /opt/parallelcluster/scripts/imds/imds-access.sh --allow <USERNAME>
```
You can disable user IMDS access with the `--deny` option for this command.  
If you unknowingly disable `default` user IMDS access, you can restore the permission by using the `--allow` option.  
Any customization of `iptables` or `ip6tables` rules can interfere with the mechanism used to restrict IMDS access on the head node.
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

## `Image`


**(Optional)** Defines a custom image for the head node.

```
Image:
  CustomAmi: string
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

### `Image` properties


`CustomAmi` (**Optional**, `String`)  
Specifies the ID of a custom AMI to use for the head node instead of the default AMI. For more information, see [Amazon ParallelCluster AMI customization](custom-ami-v3.md).  
If the custom AMI requires additional permissions for its launch, these permissions must be added to both the user and head node policies.  
For example, if a custom AMI has an encrypted snapshot associated with it, the following additional policies are required in both the user and head node policies:    
****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:DescribeKey",
                "kms:ReEncrypt*",
                "kms:CreateGrant",
                "kms:Decrypt"
            ],
            "Resource": [
                "arn:aws-cn:kms:us-east-1:111122223333:key/<AWS_KMS_KEY_ID>"
            ]
        }
    ]
}
```
To troubleshoot custom AMI validation warnings, see [Troubleshooting custom AMI issues](troubleshooting-v3-custom-amis.md).  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)
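
For example, a minimal sketch that launches the head node from a custom AMI (the AMI ID below is a placeholder):

```
Image:
  CustomAmi: ami-0123456789abcdef0   # placeholder AMI ID
```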

# `Scheduling` section


**(Required)** Defines the job scheduler that's used in the cluster and the compute instances that the job scheduler manages. You can use either the Slurm or the Amazon Batch scheduler. Each supports a different set of settings and properties.

**Topics**
+ [

## `Scheduling` properties
](#Scheduling-v3.properties)
+ [

## `AwsBatchQueues`
](#Scheduling-v3-AwsBatchQueues)
+ [

## `SlurmQueues`
](#Scheduling-v3-SlurmQueues)
+ [

## `SlurmSettings`
](#Scheduling-v3-SlurmSettings)

```
Scheduling:
  Scheduler: slurm
  ScalingStrategy: string    
  SlurmSettings:
    MungeKeySecretArn: string        
    ScaledownIdletime: integer    
    QueueUpdateStrategy: string
    EnableMemoryBasedScheduling: boolean
    CustomSlurmSettings: [dict]
    CustomSlurmSettingsIncludeFile: string
    Database:
      Uri: string
      UserName: string
      PasswordSecretArn: string
      DatabaseName: string    
    ExternalSlurmdbd:
      Host: string
      Port: integer  
    Dns:
      DisableManagedDns: boolean
      HostedZoneId: string
      UseEc2Hostnames: boolean  
  SlurmQueues:
    - Name: string  
      ComputeSettings:
        LocalStorage:
          RootVolume:
            Size: integer
            Encrypted: boolean
            VolumeType: string
            Iops: integer
            Throughput: integer
          EphemeralVolume:
            MountDir: string
      CapacityReservationTarget:
        CapacityReservationId: string
        CapacityReservationResourceGroupArn: string
      CapacityType: string
      AllocationStrategy: string
      JobExclusiveAllocation: boolean
      CustomSlurmSettings: dict
      Tags:
        - Key: string
          Value: string
      HealthChecks:
        Gpu:
          Enabled: boolean
      Networking:
        SubnetIds:
          - string
        AssignPublicIp: boolean
        SecurityGroups:
          - string
        AdditionalSecurityGroups:
          - string
        PlacementGroup:
          Enabled: boolean
          Id: string
          Name: string
        Proxy:
          HttpProxyAddress: string
      ComputeResources:
        - Name: string
          InstanceType: string
          Instances:
            - InstanceType: string
          MinCount: integer
          MaxCount: integer
          DynamicNodePriority: integer
          StaticNodePriority: integer
          SpotPrice: float
          DisableSimultaneousMultithreading: boolean
          SchedulableMemory: integer
          HealthChecks:
            Gpu:
              Enabled: boolean
          Efa:
            Enabled: boolean
            GdrSupport: boolean          
          CapacityReservationTarget:
            CapacityReservationId: string
            CapacityReservationResourceGroupArn: string
          Networking:   
            PlacementGroup:
              Enabled: boolean
              Name: string
          CustomSlurmSettings: dict
          Tags:
            - Key: string
              Value: string
          LaunchTemplateOverrides:
            LaunchTemplateId: string
            Version: string
      CustomActions:
        OnNodeStart:
          Sequence:
            - Script: string
              Args:
                - string
          Script: string
          Args:
            - string
        OnNodeConfigured:
          Sequence:
            - Script: string
              Args:
                - string
          Script: string
          Args:
            - string
      Iam:
        InstanceProfile: string
        InstanceRole: string
        S3Access:
          - BucketName: string
            EnableWriteAccess: boolean
            KeyName: string
        AdditionalIamPolicies:
          - Policy: string
      Image:
        CustomAmi: string
```

```
Scheduling:
  Scheduler: awsbatch
  AwsBatchQueues:
    - Name: string
      CapacityType: string
      Networking:
        SubnetIds:
          - string
        AssignPublicIp: boolean
        SecurityGroups:
          - string
        AdditionalSecurityGroups:
          - string
      ComputeResources:  # this maps to a Batch compute environment (initially we support only 1)
        - Name: string
          InstanceTypes:
            - string
          MinvCpus: integer
          DesiredvCpus: integer
          MaxvCpus: integer
          SpotBidPercentage: float
```

## `Scheduling` properties


**`Scheduler` (**Required**, `String`)**  
Specifies the type of scheduler that's used. Supported values are `slurm` and `awsbatch`.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)  
`awsbatch` only supports the `alinux2` operating system and `x86_64` platform.

**`ScalingStrategy` (**Optional**, `String`)**  
Allows you to choose how dynamic Slurm nodes scale up. Supported values are `all-or-nothing`, `greedy-all-or-nothing`, and `best-effort`. The default value is `all-or-nothing`.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)  
The scaling strategy applies only to nodes being resumed by Slurm, not to nodes that are already running.
+ `all-or-nothing`: This strategy strictly follows an all-or-nothing approach, aimed at avoiding idle instances at the end of the scaling process: it either scales up completely or not at all. Be aware that there might be additional costs due to temporarily launched instances when jobs require over 500 nodes or span multiple compute resources. This strategy has the lowest throughput among the three scaling strategies. The scaling time depends on the number of jobs submitted per Slurm resume program execution. Also, you can't scale far beyond the default RunInstances resource account limit per execution, which is 1000 instances by default. For more details, see the [Amazon EC2 API throttling documentation](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/throttling.html).
+ `greedy-all-or-nothing`: Similar to the all-or-nothing strategy, it aims to avoid idle instances after scaling. This strategy allows temporary over-scaling during the scaling process in order to achieve higher throughput than the all-or-nothing approach, but it comes with the same scaling limit of 1000 instances per the RunInstances resource account limit.
+ `best-effort`: This strategy prioritizes high throughput, even if it means that some instances might be idle at the end of the scaling process. It attempts to allocate as many nodes as the jobs request, but there's a possibility of not fulfilling the entire request. Unlike the other strategies, the best-effort approach can accumulate more instances than the standard RunInstances limit, at the cost of having idle resources across the multiple scaling process executions.

Each strategy is designed to cater to different scaling needs, allowing you to select one that meets your specific requirements and constraints.
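
For example, a minimal sketch that selects the `greedy-all-or-nothing` strategy to trade temporary over-scaling for higher throughput:

```
Scheduling:
  Scheduler: slurm
  ScalingStrategy: greedy-all-or-nothing
```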

## `AwsBatchQueues`


**(Optional)** The Amazon Batch queue settings. Only one queue is supported. If [`Scheduler`](#yaml-Scheduling-Scheduler) is set to `awsbatch`, this section is required. For more information about the `awsbatch` scheduler, see [networking setup](network-configuration-v3-batch.md) and [Using Amazon Batch (`awsbatch`) scheduler with Amazon ParallelCluster](awsbatchcli-v3.md).

```
AwsBatchQueues:
  - Name: string
    CapacityType: string
    Networking:
      SubnetIds:
        - string
      AssignPublicIp: boolean
      SecurityGroups:
        - string
      AdditionalSecurityGroups:
        - string
    ComputeResources:  # this maps to a Batch compute environment (initially we support only 1)
      - Name: string
        InstanceTypes:
          - string
        MinvCpus: integer
        DesiredvCpus: integer
        MaxvCpus: integer
        SpotBidPercentage: float
```

[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

### `AwsBatchQueues` properties


**`Name` (**Required**, `String`)**  
The name of the Amazon Batch queue.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

**`CapacityType` (**Optional**, `String`)**  
The type of the compute resources that the Amazon Batch queue uses. Supported values are `ONDEMAND`, `SPOT`, or `CAPACITY_BLOCK`. The default value is `ONDEMAND`.  
If you set `CapacityType` to `SPOT`, your account must contain an `AWSServiceRoleForEC2Spot` service-linked role. You can create this role using the following Amazon CLI command.  

```
$ aws iam create-service-linked-role --aws-service-name spot.amazonaws.com
```
For more information, see [Service-linked role for Spot Instance requests](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/spot-requests.html#service-linked-roles-spot-instance-requests) in the *Amazon EC2 User Guide for Linux Instances*.
[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)

#### `Networking`


**(Required)** Defines the networking configuration for the Amazon Batch queue.

```
Networking:
  SubnetIds:
    - string
  AssignPublicIp: boolean
  SecurityGroups:
    - string
  AdditionalSecurityGroups:
    - string
```

##### `Networking` properties


**`SubnetIds` (**Required**, `[String]`)**  
Specifies the ID of an existing subnet to provision the Amazon Batch queue in. Currently only one subnet is supported.  
[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)

**`AssignPublicIp` (**Optional**, `String`)**  
Creates or assigns a public IP address to the nodes in the Amazon Batch queue. Supported values are `true` and `false`. The default depends on the subnet that you specified.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

**`SecurityGroups` (**Optional**, `[String]`)**  
List of security groups that the Amazon Batch queue uses. If you don't specify security groups, Amazon ParallelCluster creates new security groups.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

**`AdditionalSecurityGroups` (**Optional**, `[String]`)**  
List of security groups that the Amazon Batch queue uses.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

#### `ComputeResources`


**(Required)** Defines the ComputeResources configuration for the Amazon Batch queue.

```
ComputeResources:  # this maps to a Batch compute environment (initially we support only 1)
  - Name: string
    InstanceTypes:
      - string
    MinvCpus: integer
    DesiredvCpus: integer
    MaxvCpus: integer
    SpotBidPercentage: float
```

##### `ComputeResources` properties


**`Name` (**Required**, `String`)**  
The name of the Amazon Batch queue compute environment.  
[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)

**`InstanceTypes` (**Required**, `[String]`)**  
The Amazon Batch compute environment array of instance types. All of the instance types must use the `x86_64` architecture.  
[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)

**`MinvCpus` (**Optional**, `Integer`)**  
The minimum number of vCPUs that an Amazon Batch compute environment can use.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

**`DesiredvCpus` (**Optional**, `Integer`)**  
The desired number of vCPUs in the Amazon Batch compute environment. Amazon Batch adjusts this value between `MinvCpus` and `MaxvCpus` based on the demand in the job queue.  
[Update policy: This setting is not analyzed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-ignored-v3)

**`MaxvCpus` (**Optional**, `Integer`)**  
The maximum number of vCPUs for the Amazon Batch compute environment. You can't set this to a value that's lower than `DesiredvCpus`.  
[Update policy: This setting can't be decreased during an update.](using-pcluster-update-cluster-v3.md#update-policy-no-decrease-v3)

**`SpotBidPercentage` (**Optional**, `Float`)**  
The maximum percentage of the On-Demand price for the instance type that an Amazon EC2 Spot Instance price can reach before instances are launched. The default value is `100` (100%). The supported range is `1`-`100`.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)
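
Putting these properties together, a hypothetical Spot-backed Amazon Batch queue might look like the following sketch. The queue name, subnet ID, instance type, and vCPU counts are placeholders:

```
AwsBatchQueues:
  - Name: batch-queue                  # placeholder queue name
    CapacityType: SPOT
    Networking:
      SubnetIds:
        - subnet-0123456789abcdef0     # placeholder subnet ID
    ComputeResources:
      - Name: batch-compute-env
        InstanceTypes:
          - c5.xlarge
        MinvCpus: 0
        DesiredvCpus: 4
        MaxvCpus: 256
        SpotBidPercentage: 80          # launch Spot only at or below 80% of On-Demand price
```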

## `SlurmQueues`


**(Optional)** Settings for the Slurm queue. If [`Scheduler`](#yaml-Scheduling-Scheduler) is set to `slurm`, this section is required.

```
SlurmQueues:
  - Name: string
    ComputeSettings:
      LocalStorage:
        RootVolume:
          Size: integer
          Encrypted: boolean
          VolumeType: string
          Iops: integer
          Throughput: integer
        EphemeralVolume:
          MountDir: string
    CapacityReservationTarget:
      CapacityReservationId: string
      CapacityReservationResourceGroupArn: string
    CapacityType: string
    AllocationStrategy: string
    JobExclusiveAllocation: boolean
    CustomSlurmSettings: dict
    Tags:
      - Key: string
        Value: string
    HealthChecks:
      Gpu:
        Enabled: boolean
    Networking:
      SubnetIds:
        - string
      AssignPublicIp: boolean
      SecurityGroups:
        - string
      AdditionalSecurityGroups:
        - string
      PlacementGroup:
        Enabled: boolean
        Id: string
        Name: string
      Proxy:
        HttpProxyAddress: string
    ComputeResources:
      - Name: string
        InstanceType: string
        Instances:
          - InstanceType: string        
        MinCount: integer
        MaxCount: integer
        DynamicNodePriority: integer
        StaticNodePriority: integer
        SpotPrice: float
        DisableSimultaneousMultithreading: boolean
        SchedulableMemory: integer
        HealthChecks:
          Gpu:
            Enabled: boolean
        Efa:
          Enabled: boolean
          GdrSupport: boolean    
        CapacityReservationTarget:
          CapacityReservationId: string
          CapacityReservationResourceGroupArn: string     
        Networking:   
          PlacementGroup:
            Enabled: boolean
            Name: string
        CustomSlurmSettings: dict
        Tags:
          - Key: string
            Value: string
        LaunchTemplateOverrides:
          LaunchTemplateId: string
          Version: string
    CustomActions:
      OnNodeStart:
        Sequence:
          - Script: string
            Args:
              - string
        Script: string
        Args:
          - string
      OnNodeConfigured:
        Sequence:
          - Script: string
            Args:
              - string        
        Script: string
        Args:
          - string
    Iam:
      InstanceProfile: string
      InstanceRole: string
      S3Access:
        - BucketName: string
          EnableWriteAccess: boolean
          KeyName: string
      AdditionalIamPolicies:
        - Policy: string
    Image:
      CustomAmi: string
```

[Update policy: For this list values setting, a new value can be added during an update or the compute fleet must be stopped when removing an existing value.](using-pcluster-update-cluster-v3.md#update-policy-list-values-v3)

### `SlurmQueues` properties


**`Name` (**Required**, `String`)**  
The name of the Slurm queue.  
Cluster size may change during an update. For more information, see [Cluster capacity size and update](slurm-workload-manager-v3.md).  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

**`CapacityReservationTarget`**  
`CapacityReservationTarget` is added with Amazon ParallelCluster version 3.3.0.

```
CapacityReservationTarget:
   CapacityReservationId: string
   CapacityReservationResourceGroupArn: string
```
Specifies the On-Demand capacity reservation for the queue's compute resources.    
**`CapacityReservationId` (**Optional**, `String`)**  
The ID of the existing capacity reservation to target for the queue's compute resources. The ID can refer to an [ODCR](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/ec2-capacity-reservations.html) or a [Capacity Block for ML](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/ec2-capacity-blocks.html).  
The reservation must use the same platform that the instance uses. For example, if your instances run on `rhel8`, your capacity reservation must run on the Red Hat Enterprise Linux platform. For more information, see [Supported platforms](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/ec2-capacity-reservations.html#capacity-reservations-platforms) in the *Amazon EC2 User Guide for Linux Instances*.  
If you include [`Instances`](#yaml-Scheduling-SlurmQueues-ComputeResources-Instances) in the cluster configuration, you must exclude this queue level `CapacityReservationId` setting from the configuration.
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)  
**`CapacityReservationResourceGroupArn` (**Optional**, `String`)**  
The Amazon Resource Name (ARN) of the resource group that serves as the service-linked group of capacity reservations for the queue's compute resources. Amazon ParallelCluster identifies and uses the most appropriate capacity reservation from the resource group based on the following conditions:  
+ If `PlacementGroup` is enabled in [`SlurmQueues`](#Scheduling-v3-SlurmQueues) / [`Networking`](#yaml-Scheduling-SlurmQueues-ComputeResources-Networking) or [`SlurmQueues`](#Scheduling-v3-SlurmQueues) / [`ComputeResources`](#Scheduling-v3-SlurmQueues-ComputeResources) / [`Networking`](#yaml-Scheduling-SlurmQueues-ComputeResources-Networking), Amazon ParallelCluster selects a resource group that targets the instance type and `PlacementGroup` for a compute resource, if the compute resource exists.

  The `PlacementGroup` must target one of the instance types that's defined in [`ComputeResources`](#Scheduling-v3-SlurmQueues-ComputeResources).
+ If `PlacementGroup` isn't enabled in [`SlurmQueues`](#Scheduling-v3-SlurmQueues) / [`Networking`](#yaml-Scheduling-SlurmQueues-ComputeResources-Networking) or [`SlurmQueues`](#Scheduling-v3-SlurmQueues) / [`ComputeResources`](#Scheduling-v3-SlurmQueues-ComputeResources) / [`Networking`](#yaml-Scheduling-SlurmQueues-ComputeResources-Networking), Amazon ParallelCluster selects a resource group that targets only the instance type of a compute resource, if the compute resource exists.
The resource group must have at least one ODCR for each instance type reserved in an Availability Zone across all of the queue's compute resources and Availability Zones. For more information, see [Launch instances with On-Demand Capacity Reservations (ODCR)](launch-instances-odcr-v3.md).  
For more information on multiple subnet configuration requirements, see [`Networking`](#Scheduling-v3-SlurmQueues-Networking) / [`SubnetIds`](#yaml-Scheduling-SlurmQueues-Networking-SubnetIds).  
Multiple Availability Zones is added in Amazon ParallelCluster version 3.4.0.
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)
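
For example, a minimal sketch that targets an existing capacity reservation for all of a queue's compute resources (the queue name and reservation ID are placeholders):

```
SlurmQueues:
  - Name: queue1                                    # placeholder queue name
    CapacityReservationTarget:
      CapacityReservationId: cr-0123456789abcdef0   # placeholder reservation ID
```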

**`CapacityType` (**Optional**, `String`)**  
The type of the compute resources that the Slurm queue uses. Supported values are `ONDEMAND`, `SPOT`, or `CAPACITY_BLOCK`. The default value is `ONDEMAND`.  
If you set the `CapacityType` to `SPOT`, your account must have an `AWSServiceRoleForEC2Spot` service-linked role. You can use the following Amazon CLI command to create this role.  

```
$ aws iam create-service-linked-role --aws-service-name spot.amazonaws.com
```
For more information, see [Service-linked role for Spot Instance requests](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/spot-requests.html#service-linked-roles-spot-instance-requests) in the *Amazon EC2 User Guide for Linux Instances*.
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)

**`AllocationStrategy` (**Optional**, `String`)**  
Specify the allocation strategy for all the compute resources defined in [`Instances`](#yaml-Scheduling-SlurmQueues-ComputeResources-Instances).  
Valid values: `lowest-price` | `capacity-optimized` | `price-capacity-optimized` | `prioritized` | `capacity-optimized-prioritized`  
Default: `lowest-price`    
**`lowest-price`**  
+ If you use `CapacityType = ONDEMAND`, Amazon EC2 Fleet uses price to determine the order and launches the lowest price instances first.
+ If you use `CapacityType = SPOT`, Amazon EC2 Fleet launches instances from the lowest price Spot Instance pool that has available capacity. If a pool runs out of capacity before it fulfills your required capacity, Amazon EC2 Fleet fulfills your request by drawing from the next lowest price pool. Amazon EC2 Fleet might launch Spot Instances from several different pools.
+ If you set `CapacityType = CAPACITY_BLOCK`, there are no allocation strategies, so the `AllocationStrategy` parameter can't be configured.  
**`capacity-optimized`**  
+ If you set `CapacityType = ONDEMAND`, `capacity-optimized` isn't available.
+ If you set `CapacityType = SPOT`, Amazon EC2 Fleet launches instances from Spot Instance pools with optimal capacity for the number of instances to be launched.  
**`price-capacity-optimized`**  
+ If you set `CapacityType = ONDEMAND`, `price-capacity-optimized` isn't available.
+ If you set `CapacityType = SPOT`, Amazon EC2 Fleet identifies the pools with the highest capacity availability for the number of instances that are launching. This means that we will request Spot Instances from the pools that we believe have the lowest chance of interruption in the near term. Amazon EC2 Fleet then requests Spot Instances from the lowest priced of these pools.  
**`prioritized`**  
+ If you set `CapacityType = ONDEMAND`, Amazon EC2 Fleet honors the priority order that Amazon ParallelCluster applies to the LaunchTemplate overrides when multiple subnets are specified. Amazon ParallelCluster derives the override priorities in descending order from `SlurmQueues/Networking/SubnetIds`, with the first subnet ID having the highest priority and the last subnet ID having the lowest priority. 
+ If you set `CapacityType = SPOT`, `prioritized` isn't available.  
**`capacity-optimized-prioritized`**  
+ If you set `CapacityType = ONDEMAND`, `capacity-optimized-prioritized` isn't available.
+ If you set `CapacityType = SPOT`, Amazon EC2 Fleet optimizes for capacity first and then applies, on a best-effort basis, the priority order that Amazon ParallelCluster assigns to LaunchTemplate overrides. The priorities are derived by Amazon ParallelCluster in descending order from `SlurmQueues/Networking/SubnetIds`, with the first subnet ID having the highest priority and the last subnet ID having the lowest priority. All overrides that target the same subnet receive the same priority value.
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)  
`AllocationStrategy` is supported starting in Amazon ParallelCluster version 3.3.0.  
**New in 3.14.0**: `prioritized` (for On-Demand) and `capacity-optimized-prioritized` (for Spot).
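
For example, a minimal sketch of a Spot queue that uses the `price-capacity-optimized` strategy (the queue name is a placeholder):

```
SlurmQueues:
  - Name: spot-queue                             # placeholder queue name
    CapacityType: SPOT
    AllocationStrategy: price-capacity-optimized
```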

**`JobExclusiveAllocation` (**Optional**, `String`)**  
If set to `true`, the Slurm partition `OverSubscribe` flag is set to `EXCLUSIVE`. When `OverSubscribe`=`EXCLUSIVE`, jobs in the partition have exclusive access to all allocated nodes. For more information, see [EXCLUSIVE](https://slurm.schedmd.com/slurm.conf.html#OPT_EXCLUSIVE) in the Slurm documentation.  
Valid values: `true` | `false`  
Default: `false`  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)  
`JobExclusiveAllocation` is supported starting in Amazon ParallelCluster version 3.7.0.
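
For example, a minimal sketch of a queue whose jobs get exclusive access to their allocated nodes (the queue name is a placeholder):

```
SlurmQueues:
  - Name: exclusive-queue          # placeholder queue name
    JobExclusiveAllocation: true   # sets the Slurm partition OverSubscribe flag to EXCLUSIVE
```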

**`CustomSlurmSettings` (**Optional**, `Dict`)**  
Defines the custom Slurm partition (queue) configuration settings.  
Specifies a dictionary of custom Slurm configuration parameter key-value pairs that apply to queues (partitions).  
Each separate key-value pair, such as `Param1: Value1`, is added separately to the end of the Slurm partition configuration line in the format `Param1=Value1`.  
You can only specify Slurm configuration parameters that aren't deny-listed in `CustomSlurmSettings`. For information about deny-listed Slurm configuration parameters, see [Deny-listed Slurm configuration parameters for `CustomSlurmSettings`](slurm-configuration-settings-v3.md#slurm-configuration-denylists-v3).  
Amazon ParallelCluster only checks whether a parameter is in a deny list. Amazon ParallelCluster doesn't validate your custom Slurm configuration parameter syntax or semantics. It is your responsibility to validate your custom Slurm configuration parameters. Invalid custom Slurm configuration parameters can cause Slurm daemon failures that can lead to cluster create and update failures.  
For more information about how to specify custom Slurm configuration parameters with Amazon ParallelCluster, see [Slurm configuration customization](slurm-configuration-settings-v3.md).  
For more information about Slurm configuration parameters, see [slurm.conf](https://slurm.schedmd.com/slurm.conf.html) in the Slurm documentation.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)  
`CustomSlurmSettings` is supported starting with Amazon ParallelCluster version 3.6.0.
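
For example, a sketch that appends two slurm.conf partition parameters to the queue's partition line. The parameter names and values below are illustrative; Amazon ParallelCluster doesn't validate them, so verify them against the Slurm documentation yourself:

```
SlurmQueues:
  - Name: queue1                   # placeholder queue name
    CustomSlurmSettings:
      GraceTime: 120               # appended as GraceTime=120
      MaxNodes: 20                 # appended as MaxNodes=20
```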

**`Tags` (**Optional**, [String])**  
A list of tag key-value pairs. [`ComputeResource`](#yaml-Scheduling-SlurmQueues-ComputeResources-Tags) tags override duplicate tags specified in the [`Tags` section](Tags-v3.md) or in `SlurmQueues` / `Tags`.    
**`Key` (**Optional**, `String`)**  
The tag key.  
**`Value` (**Optional**, `String`)**  
The tag value.
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)
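
For example, a minimal sketch that tags all compute nodes launched for a queue (the queue name and tag values are placeholders):

```
SlurmQueues:
  - Name: queue1             # placeholder queue name
    Tags:
      - Key: project
        Value: genomics      # placeholder tag value
```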

**`HealthChecks` (**Optional**)**  
Specify compute node health checks on all compute resources in the queue.    
`Gpu` (**Optional**)  
Specify GPU health checks on all compute resources in a queue.  
Amazon ParallelCluster doesn't support `HealthChecks` / `Gpu` in nodes that use `alinux2` ARM operating systems. These platforms don't support the [NVIDIA Data Center GPU Manager (DCGM)](https://docs.nvidia.com/datacenter/dcgm/latest/user-guide/getting-started.html#supported-linux-distributions).  
Enabling GPU health checks when using instance types whose total GPU memory size is higher than 327680 MiB is discouraged.  
`Enabled` (**Optional**, `Boolean`)  
Whether Amazon ParallelCluster performs GPU health checks on compute nodes. The default is `false`.
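
For example, to turn on GPU health checks for every compute resource in a queue, a fragment like the following could be used (the queue name is illustrative):

```
SlurmQueues:
  - Name: gpu-queue
    HealthChecks:
      Gpu:
        Enabled: true
```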

**`Gpu` health check behavior**
+ If `Gpu` / `Enabled` is set to `true`, Amazon ParallelCluster performs GPU health checks on compute resources in the queue.
+ The `Gpu` health check performs GPU health checks on compute resources to prevent the submission of jobs on nodes with a degraded GPU.
+ If a compute node fails a `Gpu` health check, the compute node state changes to `DRAIN`. New jobs don't start on this node. Existing jobs run to completion. After all running jobs complete, the compute node terminates if it's a dynamic node, and it's replaced if it's a static node.
+ The duration of the `Gpu` health check depends on the selected instance type, the number of GPUs in the instance, the total GPU memory, and the number of `Gpu` health check targets (equivalent to the number of job GPU targets). For example, on a p4d.24xlarge, the typical duration is 3 minutes.
+ If the `Gpu` health check runs on an instance that's not supported, it exits and the job runs on the compute node. For example, if an instance doesn't have a GPU, or, if an instance has a GPU, but it isn't an NVIDIA GPU, the health check exits and the job runs on the compute node. Only NVIDIA GPUs are supported.
+ The `Gpu` health check uses the `dcgmi` tool to perform health checks on a node and takes the following steps: 

  When the `Gpu` health check begins in a node:

  1. It detects whether the `nvidia-dcgm` and `nvidia-fabricmanager` services are running.

  1. If these services aren't running, the `Gpu` health check starts them.

  1. It detects whether the persistence mode is enabled.

  1. If the persistence mode isn't enabled, the `Gpu` health check enables it.

  At the end of the health check, the `Gpu` health check restores these services and resources to their initial state.
+ If the job is assigned to a specific set of node GPUs, the `Gpu` health check runs only on that specific set. Otherwise, the `Gpu` health check runs on all GPUs in the node.
+ If a compute node receives 2 or more `Gpu` health check requests at the same time, only the first health check runs and the others are skipped. This is also the case for health checks that target node GPUs. You can check the log files for additional information regarding this situation.
+ The health check log for a specific compute node is available in the `/var/log/parallelcluster/slurm_health_check.log` file. The file is available in Amazon CloudWatch, in the cluster CloudWatch log group, where you can find:
  + Details on the action run by the `Gpu` health check, including enabling and disabling services and persistence mode.
  + The GPU identifier, serial ID, and the UUID.
  + The health check output.
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)  
`HealthChecks` is supported starting in Amazon ParallelCluster version 3.6.0.

#### `Networking`


**(Required)** Defines the networking configuration for the Slurm queue.

```
Networking:
  SubnetIds:
    - string
  AssignPublicIp: boolean
  SecurityGroups:
    - string
  AdditionalSecurityGroups:
    - string
  PlacementGroup:
    Enabled: boolean
    Id: string
    Name: string
  Proxy:
    HttpProxyAddress: string
```

[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)

##### `Networking` properties


**`SubnetIds` (**Required**, `[String]`)**  
The IDs of existing subnets that you provision the Slurm queue in.  
If you configure instance types in [`SlurmQueues`](#Scheduling-v3-SlurmQueues) / [`ComputeResources`](#Scheduling-v3-SlurmQueues-ComputeResources) / [`InstanceType`](#yaml-Scheduling-SlurmQueues-ComputeResources-InstanceType), you can only define one subnet.  
If you configure instance types in [`SlurmQueues`](#Scheduling-v3-SlurmQueues) / [`ComputeResources`](#Scheduling-v3-SlurmQueues-ComputeResources) / [`Instances`](#yaml-Scheduling-SlurmQueues-ComputeResources-Instances), you can define a single subnet or multiple subnets.  
If you use multiple subnets, all subnets defined for a queue must be in the same VPC, with each subnet in a separate Availability Zone (AZ).  
For example, suppose you define `subnet-1` and `subnet-2` for your queue.  
`subnet-1` and `subnet-2` can't both be in AZ-1.  
`subnet-1` can be in AZ-1 and `subnet-2` can be in AZ-2.  
If you configure only one instance type and want to use multiple subnets, define your instance type in `Instances` rather than `InstanceType`.  
For example, define `ComputeResources` / `Instances` / `InstanceType`=`instance.type` instead of `ComputeResources` / `InstanceType`=`instance.type`.  
Elastic Fabric Adapter (EFA) isn't supported across different Availability Zones.
The use of multiple Availability Zones might cause increases in storage networking latency and added inter-AZ data transfer costs. For example, this could occur when an instance accesses file storage that's located in a different AZ. For more information, see [Data Transfer within the same Amazon Web Services Region](https://aws.amazon.com/ec2/pricing/on-demand/#Data_Transfer_within_the_same_AWS_Region).  
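
For example, a queue that spans two Availability Zones might look like the following sketch. The subnet IDs are placeholders, and the instance type is defined in `Instances` (rather than `InstanceType`) so that multiple subnets are allowed:

```
SlurmQueues:
  - Name: multi-az-queue
    Networking:
      SubnetIds:
        - subnet-1234567890abcdef0   # subnet in AZ-1 (placeholder ID)
        - subnet-021345abcdef6789a   # subnet in AZ-2 (placeholder ID)
    ComputeResources:
      - Name: cr1
        Instances:
          - InstanceType: c5.xlarge
```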

**Cluster updates to change from the use of a single subnet to multiple subnets:**
+ Suppose the subnet definition of a cluster is defined with a single subnet and an Amazon ParallelCluster managed FSx for Lustre file system. Then, you can't update this cluster with an updated subnet ID definition directly. To make the cluster update, you must first change the managed file system to an external file system. For more information, see [Convert Amazon ParallelCluster managed storage to external storage](shared-storage-conversion-v3.md).
+ Suppose the subnet definition of a cluster is defined with a single subnet and an external Amazon EFS file system, and EFS mount targets don't exist for all of the AZs of the multiple subnets that you want to add. Then, you can't update this cluster with an updated subnet ID definition directly. To make the cluster update or to create a cluster, you must first create all of the mount targets for all of the AZs of the defined multiple subnets.

**Availability Zones and cluster capacity reservations defined in [CapacityReservationResourceGroupArn](#yaml-Scheduling-SlurmQueues-CapacityReservationResourceGroupArn):**
+ You can't create a cluster if there is no overlap between the set of instance types and availability zones covered by the defined capacity reservation resource group and the set of instance types and availability zones defined for the queue.
+ You can create a cluster if there is a partial overlap between the set of instance types and availability zones covered by the defined capacity reservation resource group and the set of instance types and availability zones defined for the queue. Amazon ParallelCluster sends a warning message about the partial overlap for this case.
+ For more information, see [Launch instances with On-Demand Capacity Reservations (ODCR)](launch-instances-odcr-v3.md).
Support for multiple Availability Zones was added in Amazon ParallelCluster version 3.4.0.
This warning applies to all 3.x.y Amazon ParallelCluster versions prior to version 3.3.1. Amazon ParallelCluster version 3.3.1 isn't impacted if this parameter is changed.  
For Amazon ParallelCluster 3 versions prior to version 3.3.1:  
If you change this parameter and update a cluster, the update creates a new managed FSx for Lustre file system and deletes the existing managed FSx for Lustre file system without preserving the existing data. This results in data loss. Before you proceed, make sure you back up the data from the existing FSx for Lustre file system if you want to preserve it. For more information, see [Working with backups](https://docs.amazonaws.cn/fsx/latest/LustreGuide/using-backups-fsx.html) in the *FSx for Lustre User Guide*.
If a new subnet value is added, [Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)  
If a subnet value is removed, [Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)

**`AssignPublicIp` (**Optional**, `String`)**  
Creates or assigns a public IP address to the nodes in the Slurm queue. Supported values are `true` and `false`. The subnet that you specify determines the default value. A subnet with public IPs defaults to assigning public IP addresses.  
If you define a p4d or hpc6id instance type, or another instance type that has multiple network interfaces or a network interface card, you must set [`HeadNode`](HeadNode-v3.md) / [`Networking`](HeadNode-v3.md#HeadNode-v3-Networking) / [`ElasticIp`](HeadNode-v3.md#yaml-HeadNode-Networking-ElasticIp) to `true` to provide public access. Amazon public IPs can only be assigned to instances launched with a single network interface. For this case, we recommend that you use a [NAT gateway](https://docs.amazonaws.cn/vpc/latest/userguide/vpc-nat-gateway.html) to provide public access to the cluster compute nodes. In this case, set `AssignPublicIp` to `false`. For more information on IP addresses, see [Assign a public IPv4 address during instance launch](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/using-instance-addressing.html#public-ip-addresses) in the *Amazon EC2 User Guide for Linux Instances*.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

**`SecurityGroups` (**Optional**, `[String]`)**  
A list of security groups to use for the Slurm queue. If no security groups are specified, Amazon ParallelCluster creates security groups for you.  
Verify that the security groups are configured correctly for your [SharedStorage](SharedStorage-v3.md) systems.  
This warning applies to all 3.*x*.*y* Amazon ParallelCluster versions prior to version 3.3.0. Amazon ParallelCluster version 3.3.0 isn't impacted if this parameter is changed.  
For Amazon ParallelCluster 3 versions prior to version 3.3.0:  
If you change this parameter and update a cluster, the update creates a new managed FSx for Lustre file system and deletes the existing managed FSx for Lustre file system without preserving the existing data. This results in data loss. Make sure to back up the data from the existing FSx for Lustre file system if you want to preserve it. For more information, see [Working with backups](https://docs.amazonaws.cn/fsx/latest/LustreGuide/using-backups-fsx.html) in the *FSx for Lustre User Guide*.
If you enable [Efa](#yaml-Scheduling-SlurmQueues-ComputeResources-Efa) for your compute instances, make sure that your EFA-enabled instances are members of a security group that allows all inbound and outbound traffic to itself.
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

**`AdditionalSecurityGroups` (**Optional**, `[String]`)**  
A list of additional security groups to use for the Slurm queue.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

**`PlacementGroup` (**Optional**)**  
Specifies the placement group settings for the Slurm queue.  

```
PlacementGroup:
  Enabled: boolean
  Id: string
  Name: string
```
[Update policy: All compute nodes must be stopped for a managed placement group deletion. The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-remove-placement-group-v3)    
**`Enabled` (**Optional**, `Boolean`)**  
Indicates whether a placement group is used for the Slurm queue. The default is `false`.  
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)  
**`Id` (**Optional**, `String`)**  
The placement group ID for an existing cluster placement group that the Slurm queue uses. Make sure to provide the placement group *ID* and *not the name*.  
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)  
**`Name` (**Optional**, `String`)**  
The placement group name for an existing cluster placement group that the Slurm queue uses. Make sure to provide the placement group *name* and *not the ID*.  
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)
+ If `PlacementGroup` / `Enabled` is set to `true`, without a `Name` or `Id` defined, each compute resource is assigned its own managed placement group, unless [`ComputeResources`](#Scheduling-v3-SlurmQueues-ComputeResources) / [`Networking`](#yaml-Scheduling-SlurmQueues-ComputeResources-Networking) / [`PlacementGroup`](#yaml-Scheduling-SlurmQueues-ComputeResources-Networking-PlacementGroup) is defined to override this setting.
+ Starting with Amazon ParallelCluster version 3.3.0, [`SlurmQueues`](#Scheduling-v3-SlurmQueues) / [`Networking`](#Scheduling-v3-SlurmQueues-Networking) / [`PlacementGroup`](#yaml-Scheduling-SlurmQueues-Networking-PlacementGroup) / [`Name`](#yaml-Scheduling-SlurmQueues-Networking-PlacementGroup-Name) was added as a preferred alternative to [`SlurmQueues`](#Scheduling-v3-SlurmQueues) / [`Networking`](#Scheduling-v3-SlurmQueues-Networking) / [`PlacementGroup`](#yaml-Scheduling-SlurmQueues-Networking-PlacementGroup) / [`Id`](#yaml-Scheduling-SlurmQueues-Networking-PlacementGroup-Id).

  [`PlacementGroup`](#yaml-Scheduling-SlurmQueues-Networking-PlacementGroup) / [`Id`](#yaml-Scheduling-SlurmQueues-Networking-PlacementGroup-Id) and [`PlacementGroup`](#yaml-Scheduling-SlurmQueues-Networking-PlacementGroup) / [`Name`](#yaml-Scheduling-SlurmQueues-Networking-PlacementGroup-Name) are equivalent. You can use either one.

   If you include both [`PlacementGroup`](#yaml-Scheduling-SlurmQueues-Networking-PlacementGroup) / [`Id`](#yaml-Scheduling-SlurmQueues-Networking-PlacementGroup-Id) and [`PlacementGroup`](#yaml-Scheduling-SlurmQueues-Networking-PlacementGroup) / [`Name`](#yaml-Scheduling-SlurmQueues-Networking-PlacementGroup-Name), Amazon ParallelCluster fails. You can only choose one or the other.

  You don't need to update your cluster to use [`PlacementGroup`](#yaml-Scheduling-SlurmQueues-Networking-PlacementGroup) / [`Name`](#yaml-Scheduling-SlurmQueues-Networking-PlacementGroup-Name).
+ When using a Capacity Block reservation, don't set a placement group constraint. Insufficient capacity errors can occur due to placement constraints outside of the reservation, even if the capacity reservation has remaining capacity.
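
As a sketch, a queue that uses an existing cluster placement group by name (the placement group name and subnet ID are illustrative):

```
SlurmQueues:
  - Name: queue1
    Networking:
      SubnetIds:
        - subnet-1234567890abcdef0   # placeholder ID
      PlacementGroup:
        Enabled: true
        Name: my-existing-placement-group
```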

**`Proxy` (**Optional**)**  
Specifies the proxy settings for the Slurm queue.  

```
Proxy:
  HttpProxyAddress: string
```
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)    
**`HttpProxyAddress` (**Optional**, `String`)**  
Defines an HTTP or HTTPS proxy server for the Slurm queue. Typically, it's `https://x.x.x.x:8080`.  
There's no default value.  
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)

#### `Image`


**(Optional)** Specifies the image to use for the Slurm queue. To use the same AMI for all nodes, use the [CustomAmi](Image-v3.md#yaml-Image-CustomAmi) setting in the [`Image` section](Image-v3.md).

```
Image:
  CustomAmi: string
```

[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)

##### `Image` Properties


**`CustomAmi` (**Optional**, `String`)**  
The AMI to use for the Slurm queue instead of the default AMIs. You can use the following pcluster CLI command to view a list of the default AMIs.  

```
pcluster list-official-images
```
The AMI must be based on the same operating system that's used by the head node.  
If the custom AMI requires additional permissions for its launch, you must add these permissions to the head node policy.  
For example, if a custom AMI has an encrypted snapshot associated with it, the following additional policies are required in the head node policies.    

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:DescribeKey",
                "kms:ReEncrypt*",
                "kms:CreateGrant",
                "kms:Decrypt"
            ],
            "Resource": [
                "arn:aws-cn:kms:us-east-1:111122223333:key/<AWS_KMS_KEY_ID>"
            ]
        }
    ]
}
```
To troubleshoot custom AMI validation warnings, see [Troubleshooting custom AMI issues](troubleshooting-v3-custom-amis.md).  
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)

#### `ComputeResources`


**(Required)** Defines the `ComputeResources` configuration for the Slurm queue.

**Note**  
Cluster size may change during an update. For more information, see [Cluster capacity size and update](slurm-workload-manager-v3.md).
New compute resources can be added to the cluster only if they are deployed in subnets that belong to CIDR blocks that exist when the cluster is created.

```
ComputeResources:
  - Name: string
    InstanceType: string
    Instances:
      - InstanceType: string    
    MinCount: integer
    MaxCount: integer
    DynamicNodePriority: integer
    StaticNodePriority: integer
    SpotPrice: float
    DisableSimultaneousMultithreading: boolean
    SchedulableMemory: integer
    HealthChecks:
      Gpu:    
        Enabled: boolean
    Efa:
      Enabled: boolean
      GdrSupport: boolean
    CapacityReservationTarget:
      CapacityReservationId: string
      CapacityReservationResourceGroupArn: string
    Networking:   
      PlacementGroup:
        Enabled: boolean
        Name: string
    CustomSlurmSettings: dict   
    Tags:
      - Key: string
        Value: string
    LaunchTemplateOverrides:
      LaunchTemplateId: string
      Version: string
```

[Update policy: For this list values setting, a new value can be added during an update or the compute fleet must be stopped when removing an existing value.](using-pcluster-update-cluster-v3.md#update-policy-list-values-v3)

##### `ComputeResources` properties


**`Name` (**Required**, `String`)**  
The name of the Slurm queue compute environment. The name can have up to 25 characters.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

**`InstanceType` (**Required**, `String`)**  
The instance type that's used in this Slurm compute resource. All of the instance types in a cluster must use the same processor architecture. Instances can use either the `x86_64` or `arm64` architecture.  
The cluster configuration must define either [InstanceType](#yaml-Scheduling-SlurmQueues-ComputeResources-InstanceType) or [Instances](#yaml-Scheduling-SlurmQueues-ComputeResources-Instances). If both are defined, Amazon ParallelCluster fails.  
When you define `InstanceType`, you can't define multiple subnets. If you configure only one instance type and want to use multiple subnets, define your instance type in `Instances` rather than in `InstanceType`. For more information, see [`Networking`](#Scheduling-v3-SlurmQueues-Networking) / [`SubnetIds`](#yaml-Scheduling-SlurmQueues-Networking-SubnetIds).  
If you define a p4d or hpc6id instance type, or another instance type that has multiple network interfaces or a network interface card, you must launch the compute instances in a private subnet as described in [Amazon ParallelCluster using two subnets](network-configuration-v3-two-subnets.md). Amazon public IPs can only be assigned to instances that are launched with a single network interface. For more information, see [Assign a public IPv4 address during instance launch](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/using-instance-addressing.html#public-ip-addresses) in the *Amazon EC2 User Guide for Linux Instances*.  
[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)

**`Instances` (**Required**)**  
Specifies the list of instance types for a compute resource. To specify the allocation strategy for the list of instance types, see [`AllocationStrategy`](#yaml-Scheduling-SlurmQueues-AllocationStrategy).  
The cluster configuration must define either [`InstanceType`](#yaml-Scheduling-SlurmQueues-ComputeResources-InstanceType) or [`Instances`](#yaml-Scheduling-SlurmQueues-ComputeResources-Instances). If both are defined, Amazon ParallelCluster fails.  
For more information, see [Multiple instance type allocation with Slurm](slurm-multiple-instance-allocation-v3.md).  

```
Instances:
  - InstanceType: string
```
Starting with Amazon ParallelCluster version 3.7.0, `EnableMemoryBasedScheduling` can be enabled if you configure multiple instance types in [Instances](#yaml-Scheduling-SlurmQueues-ComputeResources-Instances).  
For Amazon ParallelCluster versions 3.2.0 to 3.6.*x*, `EnableMemoryBasedScheduling` can't be enabled if you configure multiple instance types in [Instances](#yaml-Scheduling-SlurmQueues-ComputeResources-Instances).
[Update policy: For this list values setting, a new value can be added during an update or the compute fleet must be stopped when removing an existing value.](using-pcluster-update-cluster-v3.md#update-policy-list-values-v3)    
**`InstanceType` (**Required**, `String`)**  
The instance type to use in this Slurm compute resource. All of the instance types in a cluster must use the same processor architecture, either `x86_64` or `arm64`.  
The instance types listed in [`Instances`](#yaml-Scheduling-SlurmQueues-ComputeResources-Instances) must have:  
+ The same number of vCPUs, or, if [`DisableSimultaneousMultithreading`](#yaml-Scheduling-SlurmQueues-ComputeResources-DisableSimultaneousMultithreading) is set to `true`, the same number of cores.
+ The same number of accelerators of the same manufacturers.
+ EFA support, if [`Efa`](#yaml-Scheduling-SlurmQueues-ComputeResources-Efa) / [`Enabled`](#yaml-Scheduling-SlurmQueues-ComputeResources-Efa-Enabled) is set to `true`.
The instance types that are listed in [`Instances`](#yaml-Scheduling-SlurmQueues-ComputeResources-Instances) can have:  
+ Different amounts of memory.

  In this case, the minimum memory across the listed instance types is set as a consumable Slurm resource.
**Note**  
Starting with Amazon ParallelCluster version 3.7.0, `EnableMemoryBasedScheduling` can be enabled if you configure multiple instance types in [Instances](#yaml-Scheduling-SlurmQueues-ComputeResources-Instances).  
For Amazon ParallelCluster versions 3.2.0 to 3.6.*x*, `EnableMemoryBasedScheduling` can't be enabled if you configure multiple instance types in [Instances](#yaml-Scheduling-SlurmQueues-ComputeResources-Instances).
+ Different network cards.

  In this case, the number of network interfaces configured for the compute resource is defined by the instance type with the smallest number of network cards.
+ Different network bandwidth.
+ Different instance store size.
If you define a p4d or hpc6id instance type, or another instance type that has multiple network interfaces or a network interface card, you must launch the compute instances in a private subnet as described in [Amazon ParallelCluster using two subnets](network-configuration-v3-two-subnets.md). Amazon public IPs can only be assigned to instances launched with a single network interface. For more information, see [Assign a public IPv4 address during instance launch](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/using-instance-addressing.html#public-ip-addresses) in the *Amazon EC2 User Guide for Linux Instances*.  
[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)
`Instances` is supported starting with Amazon ParallelCluster version 3.3.0.
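
The constraints above can be illustrated with a minimal sketch: a compute resource that lists three instance types with the same vCPU count (the instance types and counts are chosen for illustration):

```
ComputeResources:
  - Name: multi-type
    Instances:
      - InstanceType: c5.xlarge    # 4 vCPUs
      - InstanceType: c5d.xlarge   # 4 vCPUs
      - InstanceType: c5n.xlarge   # 4 vCPUs
    MinCount: 0
    MaxCount: 20
```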

**`MinCount` (**Optional**, `Integer`)**  
The minimum number of instances that the Slurm compute resource uses. The default is 0.  
Cluster size may change during an update. For more information, see [Cluster capacity size and update](slurm-workload-manager-v3.md).  
[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)

**`MaxCount` (**Optional**, `Integer`)**  
The maximum number of instances that the Slurm compute resource uses. The default is 10.  
When you use `CapacityType = CAPACITY_BLOCK`, `MaxCount` must be equal to `MinCount` and greater than 0, because all of the instances that are part of the Capacity Block reservation are managed as static nodes.  
At cluster creation time, the head node waits for all the static nodes to be ready before signaling the success of cluster creation. However, when you use `CapacityType = CAPACITY_BLOCK`, the nodes that are part of the compute resources associated with Capacity Blocks aren't considered for this check. The cluster is created even if not all of the configured Capacity Blocks are active.  
Cluster size may change during an update. For more information, see [Cluster capacity size and update](slurm-workload-manager-v3.md).  
 

**`DynamicNodePriority` (**Optional**, `Integer`)**  
The priority of dynamic nodes in a queue compute resource. The priority maps to the Slurm node [https://slurm.schedmd.com/slurm.conf.html#OPT_Weight](https://slurm.schedmd.com/slurm.conf.html#OPT_Weight) configuration parameter for the compute resource dynamic nodes. The default value is `1000`.  
Slurm prioritizes nodes with the lowest `Weight` values first.  
The use of many different `Weight` values in a Slurm partition (queue) might slow down the rate of job scheduling in the queue.  
In Amazon ParallelCluster versions earlier than version 3.7.0, both static and dynamic nodes were assigned the same default weight of `1`. In this case, Slurm might prioritize idle dynamic nodes over idle static nodes due to the naming schema for static and dynamic nodes. When all else is equal, Slurm schedules nodes alphabetically by name.
`DynamicNodePriority` is added in Amazon ParallelCluster version 3.7.0.
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

**`StaticNodePriority` (**Optional**, `Integer`)**  
The priority of static nodes in a queue compute resource. The priority maps to the Slurm node [https://slurm.schedmd.com/slurm.conf.html#OPT_Weight](https://slurm.schedmd.com/slurm.conf.html#OPT_Weight) configuration parameter for the compute resource static nodes. The default value is `1`.  
Slurm prioritizes nodes with the lowest `Weight` values first.  
The use of many different `Weight` values in a Slurm partition (queue) might slow down the rate of job scheduling in the queue.
`StaticNodePriority` is added in Amazon ParallelCluster version 3.7.0.
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)
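
Because Slurm prefers lower `Weight` values, you can make static nodes fill before dynamic nodes by giving them a lower priority. A sketch (the name and values are illustrative):

```
ComputeResources:
  - Name: cr1
    InstanceType: c5.xlarge
    MinCount: 2                 # static nodes
    MaxCount: 10
    StaticNodePriority: 1       # lower Weight, so scheduled first
    DynamicNodePriority: 1000   # used only when static nodes are busy
```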

**`SpotPrice` (**Optional**, `Float`)**  
The maximum price that you're willing to pay for an Amazon EC2 Spot Instance before any instances are launched. The default value is the On-Demand price.  
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)

**`DisableSimultaneousMultithreading` (**Optional**, `Boolean`)**  
If `true`, multithreading on the nodes in the Slurm queue is disabled. The default value is `false`.  
Not all instance types can disable multithreading. For a list of instance types that support disabling multithreading, see [CPU cores and threads for each CPU core per instance type](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/instance-optimize-cpu.html#cpu-options-supported-instances-values) in the *Amazon EC2 User Guide*.   
[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)

**`SchedulableMemory` (**Optional**, `Integer`)**  
The amount of memory in MiB that's configured in the Slurm parameter `RealMemory` for the compute nodes of a compute resource. This value is the upper limit for the node memory available to jobs when [`SlurmSettings`](#Scheduling-v3-SlurmSettings) / [`EnableMemoryBasedScheduling`](#yaml-Scheduling-SlurmSettings-EnableMemoryBasedScheduling) is enabled. The default value is 95 percent of the memory that's listed in [Amazon EC2 Instance Types](https://www.amazonaws.cn/ec2/instance-types) and returned by the Amazon EC2 API [DescribeInstanceTypes](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_DescribeInstanceTypes.html). Make sure to convert values that are given in GiB to MiB.  
Supported values: `1-EC2Memory`  
`EC2Memory` is the memory (in MiB) that's listed in [Amazon EC2 Instance Types](https://www.amazonaws.cn/ec2/instance-types) and returned by the Amazon EC2 API [DescribeInstanceTypes](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_DescribeInstanceTypes.html). Make sure to convert values that are given in GiB to MiB.  
This option is most relevant when [`SlurmSettings`](#Scheduling-v3-SlurmSettings) / [`EnableMemoryBasedScheduling`](#yaml-Scheduling-SlurmSettings-EnableMemoryBasedScheduling) is enabled. For more information, see [Slurm memory-based scheduling](slurm-mem-based-scheduling-v3.md).  
`SchedulableMemory` is supported starting with Amazon ParallelCluster version 3.2.0.  
Starting with version 3.2.0, by default, Amazon ParallelCluster configures `RealMemory` for Slurm compute nodes to 95 percent of the memory that's returned by the Amazon EC2 API `DescribeInstanceTypes`. This configuration is independent of the value of `EnableMemoryBasedScheduling`.
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)
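
For example, to cap the memory available to jobs on each node below the default 95 percent, a fragment like the following could be used, assuming memory-based scheduling is enabled (the instance type and MiB value are illustrative):

```
Scheduling:
  Scheduler: slurm
  SlurmSettings:
    EnableMemoryBasedScheduling: true
  SlurmQueues:
    - Name: queue1
      ComputeResources:
        - Name: cr1
          InstanceType: r5.xlarge      # 32 GiB = 32768 MiB of instance memory
          SchedulableMemory: 30000     # MiB that Slurm can allocate to jobs per node
```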

**`HealthChecks` (**Optional**)**  
Specify health checks on a compute resource.    
`Gpu` (**Optional**)  
Specify GPU health checks on a compute resource.    
`Enabled` (**Optional**, `Boolean`)  
Whether Amazon ParallelCluster performs GPU health checks on a compute resource in a queue. The default is `false`.  
Amazon ParallelCluster doesn't support `HealthChecks` / `Gpu` in nodes that use `alinux2` ARM operating systems. These platforms don't support the [NVIDIA Data Center GPU Manager (DCGM)](https://docs.nvidia.com/datacenter/dcgm/latest/user-guide/getting-started.html#supported-linux-distributions).

**`Gpu` health check behavior**
+ If `Gpu` / `Enabled` is set to `true`, Amazon ParallelCluster performs GPU health checks on a compute resource.
+ The `Gpu` health check performs health checks on a compute resource to prevent the submission of jobs on nodes with a degraded GPU.
+ If a compute node fails a `Gpu` health check, the compute node state changes to `DRAIN`. New jobs don't start on this node. Existing jobs run to completion. After all running jobs complete, the compute node terminates if it's a dynamic node, and it's replaced if it's a static node.
+ The duration of the `Gpu` health check depends on the selected instance type, the number of GPUs in the instance, and the number of `Gpu` health check targets (equivalent to the number of job GPU targets). For an instance with 8 GPUs, the typical duration is less than 3 minutes.
+ If the `Gpu` health check runs on an instance that's not supported, it exits and the job runs on the compute node. For example, if an instance doesn't have a GPU, or if it has a GPU that isn't an NVIDIA GPU, the health check exits and the job runs on the compute node. Only NVIDIA GPUs are supported.
+ The `Gpu` health check uses the `dcgmi` tool to perform health checks on a node and takes the following steps: 

  When the `Gpu` health check begins in a node:

  1. It detects whether the `nvidia-dcgm` and `nvidia-fabricmanager` services are running.

  1. If these services aren't running, the `Gpu` health check starts them.

  1. It detects whether the persistence mode is enabled.

  1. If the persistence mode isn't enabled, the `Gpu` health check enables it.

  At the end of the health check, the `Gpu` health check restores these services and resources to their initial state.
+ If the job is assigned to a specific set of node GPUs, the `Gpu` health check runs only on that specific set. Otherwise, the `Gpu` health check runs on all GPUs in the node.
+ If a compute node receives 2 or more `Gpu` health check requests at the same time, only the first health check runs and the others are skipped. This is also the case for health checks targeting node GPUs. You can check the log files for additional information regarding this situation.
+ The health check log for a specific compute node is available in the `/var/log/parallelcluster/slurm_health_check.log` file. This file is available in Amazon CloudWatch, in the cluster CloudWatch log group, where you can find:
  + Details on the action run by the `Gpu` health check, including enabling and disabling services and persistence mode.
  + The GPU identifier, serial ID, and the UUID.
  + The health check output.
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)  
`HealthChecks` is supported starting in Amazon ParallelCluster version 3.6.0.
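
As an illustration, GPU health checks can be turned on for a single compute resource. The queue name, compute resource name, and instance type in this sketch are hypothetical:

```
SlurmQueues:
  - Name: gpu-queue
    ComputeResources:
      - Name: gpu-compute
        InstanceType: p4d.24xlarge   # illustrative NVIDIA GPU instance type
        HealthChecks:
          Gpu:
            Enabled: true
```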

**`Efa` (**Optional**)**  
Specifies the Elastic Fabric Adapter (EFA) settings for the nodes in the Slurm queue.  

```
Efa:
  Enabled: boolean
  GdrSupport: boolean
```
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)    
**`Enabled` (**Optional**, `Boolean`)**  
Specifies that Elastic Fabric Adapter (EFA) is enabled. To view the list of Amazon EC2 instances that support EFA, see [Supported instance types](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/efa.html#efa-instance-types) in the *Amazon EC2 User Guide for Linux Instances*. For more information, see [Elastic Fabric Adapter](efa-v3.md). We recommend that you use a cluster [`SlurmQueues`](#Scheduling-v3-SlurmQueues) / [`Networking`](#Scheduling-v3-SlurmQueues-Networking) / [`PlacementGroup`](#yaml-Scheduling-SlurmQueues-Networking-PlacementGroup) to minimize latencies between instances.  
The default value is `false`.  
Elastic Fabric Adapter (EFA) isn't supported over different availability zones. For more information, see [SubnetIds](#yaml-Scheduling-SlurmQueues-Networking-SubnetIds).
If you're defining a custom security group in [SecurityGroups](#yaml-Scheduling-SlurmQueues-Networking-SecurityGroups), make sure that your EFA-enabled instances are members of a security group that allows all inbound and outbound traffic to itself.
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)  
**`GdrSupport` (**Optional**, `Boolean`)**  
Starting with Amazon ParallelCluster version 3.0.2, this setting has no effect. Elastic Fabric Adapter (EFA) support for GPUDirect RDMA (remote direct memory access) is always enabled if it's supported by the instance type for the Slurm compute resource and the operating system.  
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)

**`CapacityReservationTarget`**  

```
CapacityReservationTarget:
   CapacityReservationId: string
   CapacityReservationResourceGroupArn: string
```
Specifies the on-demand capacity reservation to use for the compute resource.    
**`CapacityReservationId` (**Optional**, `String`)**  
The ID of the existing capacity reservation to target for the queue's compute resources. The ID can refer to an [ODCR](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/ec2-capacity-reservations.html) or a [Capacity Block for ML](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/ec2-capacity-blocks.html).  
When this parameter is specified at the compute resource level, `InstanceType` is optional because it's automatically retrieved from the reservation.  
**`CapacityReservationResourceGroupArn` (**Optional**, `String`)**  
Indicates the Amazon Resource Name (ARN) of the resource group that serves as the service linked group of capacity reservations for the compute resource. Amazon ParallelCluster identifies and uses the most appropriate capacity reservation from the group. The resource group must have at least one ODCR for each instance type that's listed for the compute resource. For more information, see [Launch instances with On-Demand Capacity Reservations (ODCR)](launch-instances-odcr-v3.md).  
+ If `PlacementGroup` is enabled in [`SlurmQueues`](#Scheduling-v3-SlurmQueues) / [`Networking`](#yaml-Scheduling-SlurmQueues-ComputeResources-Networking) or [`SlurmQueues`](#Scheduling-v3-SlurmQueues) / [`ComputeResources`](#Scheduling-v3-SlurmQueues-ComputeResources) / [`Networking`](#yaml-Scheduling-SlurmQueues-ComputeResources-Networking), Amazon ParallelCluster selects a resource group that targets the instance type and `PlacementGroup` for a compute resource if it exists.

  The `PlacementGroup` must target one of the instance types defined in [`ComputeResources`](#Scheduling-v3-SlurmQueues-ComputeResources).
+ If `PlacementGroup` isn't enabled in [`SlurmQueues`](#Scheduling-v3-SlurmQueues) / [`Networking`](#yaml-Scheduling-SlurmQueues-ComputeResources-Networking) or [`SlurmQueues`](#Scheduling-v3-SlurmQueues) / [`ComputeResources`](#Scheduling-v3-SlurmQueues-ComputeResources) / [`Networking`](#yaml-Scheduling-SlurmQueues-ComputeResources-Networking), Amazon ParallelCluster selects a resource group that targets only the instance type of a compute resource, if it exists.
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)  
`CapacityReservationTarget` is added with Amazon ParallelCluster version 3.3.0.
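
For example, a compute resource can target an existing capacity reservation by ID. Because the reservation defines the instance type, `InstanceType` can be omitted. The reservation ID in this sketch is a placeholder:

```
ComputeResources:
  - Name: compute-odcr
    CapacityReservationTarget:
      CapacityReservationId: cr-0123456789abcdef0   # placeholder reservation ID
```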

**`Networking`**  

```
Networking:
  PlacementGroup:
    Enabled: boolean
    Name: string
```
[Update policy: All compute nodes must be stopped for a managed placement group deletion. The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-remove-placement-group-v3)    
**`PlacementGroup` (**Optional**)**  
Specifies the placement group settings for the compute resource.    
**`Enabled` (**Optional**, `Boolean`)**  
Indicates whether a placement group is used for the compute resource.  
+ If set to `true`, without a `Name` defined, that compute resource is assigned its own managed placement group, regardless of the [`SlurmQueues`](#Scheduling-v3-SlurmQueues) / [`Networking`](#Scheduling-v3-SlurmQueues-Networking) / [`PlacementGroup`](#yaml-Scheduling-SlurmQueues-Networking-PlacementGroup) setting.
+ If set to `true`, with a `Name` defined, that compute resource is assigned the named placement group, regardless of `SlurmQueues` / `Networking` / `PlacementGroup` settings.
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)  
**`Name` (**Optional**, `String`)**  
The placement group name for an existing cluster placement group that's used for the compute resource.  
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)
+ If both `PlacementGroup` / `Enabled` and `Name` aren't set, their respective values default to the [`SlurmQueues`](#Scheduling-v3-SlurmQueues) / [`Networking`](#Scheduling-v3-SlurmQueues-Networking) / [`PlacementGroup`](#yaml-Scheduling-SlurmQueues-Networking-PlacementGroup) settings.
+ When you use a Capacity Block reservation, don't set a placement group constraint. Insufficient capacity errors can occur due to placement constraints outside of the reservation, even if the capacity reservation has remaining capacity.
+ `ComputeResources` / `Networking` / `PlacementGroup` is added with Amazon ParallelCluster version 3.3.0.
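
As a sketch, a compute resource can opt into its own managed placement group, or reference an existing cluster placement group by name. The names in this example are illustrative:

```
ComputeResources:
  - Name: compute1
    Networking:
      PlacementGroup:
        Enabled: true            # managed placement group for this compute resource
  - Name: compute2
    Networking:
      PlacementGroup:
        Enabled: true
        Name: my-existing-pg     # existing cluster placement group (illustrative)
```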

**`CustomSlurmSettings` (**Optional**, `Dict`)**  
Defines the custom Slurm node (compute resource) configuration settings.  
Specifies a dictionary of custom Slurm configuration parameter key-value pairs that apply to Slurm nodes (compute resources).  
Each separate key-value pair, such as `Param1: Value1`, is added separately to the end of the Slurm node configuration line in the format `Param1=Value1`.  
You can only specify Slurm configuration parameters that aren't deny-listed in `CustomSlurmSettings`. For information about deny-listed Slurm configuration parameters, see [Deny-listed Slurm configuration parameters for `CustomSlurmSettings`](slurm-configuration-settings-v3.md#slurm-configuration-denylists-v3).  
Amazon ParallelCluster only checks whether a parameter is in a deny list. Amazon ParallelCluster doesn't validate your custom Slurm configuration parameter syntax or semantics. It is your responsibility to validate your custom Slurm configuration parameters. Invalid custom Slurm configuration parameters can cause Slurm daemon failures that can lead to cluster create and update failures.  
For more information about how to specify custom Slurm configuration parameters with Amazon ParallelCluster, see [Slurm configuration customization](slurm-configuration-settings-v3.md).  
For more information about Slurm configuration parameters, see [slurm.conf](https://slurm.schedmd.com/slurm.conf.html) in the Slurm documentation.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)  
`CustomSlurmSettings` is supported starting with Amazon ParallelCluster version 3.6.0.
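
For instance, node-level Slurm parameters such as `Weight` and `Features` can be set per compute resource. The values in this sketch are illustrative, and Amazon ParallelCluster doesn't validate them:

```
ComputeResources:
  - Name: compute1
    InstanceType: c5.2xlarge       # illustrative instance type
    CustomSlurmSettings:
      Weight: 10                   # rendered as Weight=10 on the node configuration line
      Features: example-feature    # rendered as Features=example-feature
```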

**`Tags` (**Optional**, [String])**  
A list of tag key-value pairs. `ComputeResource` tags override duplicate tags specified in the [`Tags` section](Tags-v3.md) or [`SlurmQueues`](#yaml-Scheduling-SlurmQueues-Tags) / `Tags`.    
**`Key` (**Optional**, `String`)**  
The tag key.  
**`Value` (**Optional**, `String`)**  
The tag value.
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)
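
A compute resource tag list follows the usual key-value shape. The key and value here are illustrative:

```
ComputeResources:
  - Name: compute1
    Tags:
      - Key: team
        Value: research    # overrides a duplicate cluster-level or queue-level tag
```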

**`LaunchTemplateOverrides` (**Optional**)**  
Specifies a launch template to override the default launch template that Amazon ParallelCluster creates for the compute resource. The launch template must contain only network interface overrides. Amazon ParallelCluster validates the launch template and prevents overriding other parameters. For more information about how to use this override, see [Customize compute node network interfaces with launch template overrides](tutorial-network-customization-v3.md).  
`LaunchTemplateOverrides` is added with Amazon ParallelCluster version 3.15.0.  

```
LaunchTemplateOverrides:
  LaunchTemplateId: string
  Version: string
```  
**`LaunchTemplateId` (**Required**, `String`)**  
The ID of the launch template.  
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)  
**`Version` (**Required**, `String`)**  
The version number of the launch template.  
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)
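
For example, a compute resource can reference a pre-created launch template that contains only network interface overrides. The template ID and version here are placeholders:

```
ComputeResources:
  - Name: compute1
    LaunchTemplateOverrides:
      LaunchTemplateId: lt-0123456789abcdef0   # placeholder launch template ID
      Version: "1"
```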

#### `ComputeSettings`


**(Required)** Defines the `ComputeSettings` configuration for the Slurm queue.

##### `ComputeSettings` properties


Specifies the properties of `ComputeSettings` of the nodes in the Slurm queue.

```
ComputeSettings:
  LocalStorage:
    RootVolume:
      Size: integer
      Encrypted: boolean
      VolumeType: string
      Iops: integer
      Throughput: integer
    EphemeralVolume:
      MountDir: string
```

[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)

**`LocalStorage` (**Optional**)**  
Specifies the properties of `LocalStorage` of the nodes in the Slurm queue.  

```
LocalStorage:
  RootVolume:
    Size: integer
    Encrypted: boolean
    VolumeType: string
    Iops: integer
    Throughput: integer
  EphemeralVolume:
    MountDir: string
```
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)    
**`RootVolume` (**Optional**)**  
Specifies the details of the root volume of the nodes in the Slurm queue.  

```
RootVolume:
  Size: integer
  Encrypted: boolean
  VolumeType: string
  Iops: integer
  Throughput: integer
```
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)    
**`Size` (**Optional**, `Integer`)**  
Specifies the root volume size in gibibytes (GiB) for the nodes in the Slurm queue. The default size comes from the AMI. Using a different size requires that the AMI supports `growroot`.   
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)  
**`Encrypted` (**Optional**, `Boolean`)**  
If `true`, the root volume of the nodes in the Slurm queue are encrypted. The default value is `true`.  
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)  
**`VolumeType` (**Optional**, `String`)**  
Specifies the [Amazon EBS volume type](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) of the nodes in the Slurm queue. Supported values are `gp2`, `gp3`, `io1`, `io2`, `sc1`, `st1`, and `standard`. The default value is `gp3`.  
For more information, see [Amazon EBS volume types](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) in the *Amazon EC2 User Guide*.  
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)  
**`Iops` (**Optional**, `Integer`)**  
Defines the number of IOPS for `io1`, `io2`, and `gp3` type volumes.  
The default value, supported values, and `Iops` to `Size` ratio vary by `VolumeType` and `Size`.    
**`VolumeType` = `io1`**  
Default `Iops` = 100  
Supported values `Iops` = 100–64000 †  
Maximum `Iops` to `Size` ratio = 50 IOPS per GiB. 5000 IOPS requires a `Size` of at least 100 GiB.  
**`VolumeType` = `io2`**  
Default `Iops` = 100  
Supported values `Iops` = 100–64000 (256000 for `io2` Block Express volumes) †  
Maximum `Iops` to `Size` ratio = 500 IOPS per GiB. 5000 IOPS requires a `Size` of at least 10 GiB.  
**`VolumeType` = `gp3`**  
Default `Iops` = 3000  
Supported values `Iops` = 3000–16000 †  
Maximum `Iops` to `Size` ratio = 500 IOPS per GiB for volumes with IOPS greater than 3000.
† Maximum IOPS is guaranteed only on [Instances built on the Nitro System](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/instance-types.html#ec2-nitro-instances) that are also provisioned with more than 32,000 IOPS. Other instances can have up to 32,000 IOPS. Earlier `io1` volumes might not reach full performance unless you [modify the volume](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/ebs-modify-volume.html). `io2` Block Express volumes support `Iops` values up to 256000 on `R5b` instance types. For more information, see [`io2` Block Express volumes](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/ebs-volume-types.html#io2-block-express) in the *Amazon EC2 User Guide*.  
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)  
**`Throughput` (**Optional**, `Integer`)**  
Defines the throughput for `gp3` volume types, in MiB/s. This setting is valid only when `VolumeType` is `gp3`. The default value is `125`. Supported values: 125–1000 MiB/s.  
The ratio of `Throughput` to `Iops` can be no more than 0.25. The maximum throughput of 1000 MiB/s requires that the `Iops` setting is at least 4000.  
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)  
**`EphemeralVolume` (**Optional**)**  
Specifies the settings for the ephemeral volume. The ephemeral volume is created by combining all instance store volumes into a single logical volume formatted with the `ext4` file system. The default mount directory is `/scratch`. If the instance type doesn't have any instance store volumes, no ephemeral volume is created. For more information, see [Instance store volumes](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/InstanceStorage.html#instance-store-volumes) in the *Amazon EC2 User Guide*.  

```
EphemeralVolume:
  MountDir: string
```
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)    
**`MountDir` (**Optional**, `String`)**  
The mount directory for the ephemeral volume for each node in the Slurm queue.   
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)
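
Putting these properties together, a `ComputeSettings` section might look like the following. The sizes and mount path in this sketch are illustrative:

```
ComputeSettings:
  LocalStorage:
    RootVolume:
      Size: 100          # GiB; the AMI must support growroot for non-default sizes
      Encrypted: true
      VolumeType: gp3
      Iops: 3000
      Throughput: 125    # MiB/s; the Throughput to Iops ratio can be no more than 0.25
    EphemeralVolume:
      MountDir: /scratch
```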

#### `CustomActions`


**(Optional)** Specifies custom scripts to run on the nodes in the Slurm queue.

```
CustomActions:
  OnNodeStart:
    Sequence:
      - Script: string
        Args:
          - string
    Script: string
    Args:
      - string
  OnNodeConfigured:
    Sequence:
      - Script: string
        Args:
          - string
    Script: string
    Args:
      - string
```

[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)

##### `CustomActions` Properties


**`OnNodeStart` (**Optional**)**  
Specifies a sequence of scripts or a single script to run on the nodes in the Slurm queue before any node deployment bootstrap action is started. Amazon ParallelCluster doesn't support including both a single script and `Sequence` for the same custom action. For more information, see [Custom bootstrap actions](custom-bootstrap-actions-v3.md).    
**`Sequence` (**Optional**)**  
List of scripts to run.  
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)    
**`Script` (**Required**, `String`)**  
The file to use. The file path can start with `https://` or `s3://`.  
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)  
**`Args` (**Optional**, `[String]`)**  
The list of arguments to pass to the script.  
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)  
**`Script` (**Required**, `String`)**  
The file to use for a single script. The file path can start with `https://` or `s3://`.  
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)  
**`Args` (**Optional**, `[String]`)**  
The list of arguments to pass to the single script.  
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)

**`OnNodeConfigured` (**Optional**)**  
Specifies a sequence of scripts or a single script to run on the nodes in the Slurm queue after all of the node bootstrap actions are complete. Amazon ParallelCluster doesn't support including both a single script and `Sequence` for the same custom action. For more information, see [Custom bootstrap actions](custom-bootstrap-actions-v3.md).    
**`Sequence` (**Optional**)**  
List of scripts to run.  
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)    
**`Script` (**Required**, `String`)**  
The file to use. The file path can start with `https://` or `s3://`.  
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)  
**`Args` (**Optional**, `[String]`)**  
The list of arguments to pass to the script.  
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)  
**`Script` (**Required**, `String`)**  
The file to use for a single script. The file path can start with `https://` or `s3://`.  
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)  
**`Args` (**Optional**, `[String]`)**  
A list of arguments to pass to the single script.  
[Update policy: The compute fleet must be stopped or QueueUpdateStrategy must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)  
`Sequence` is added starting with Amazon ParallelCluster version 3.6.0. When you specify `Sequence`, you can list multiple scripts for a custom action. Amazon ParallelCluster continues to support configuring a custom action with a single script, without including `Sequence`.  
Amazon ParallelCluster doesn't support including both a single script and `Sequence` for the same custom action.
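
For example, a queue might run one script at node start and a sequence of scripts after node configuration. The script URLs in this sketch are placeholders:

```
CustomActions:
  OnNodeStart:
    Script: s3://amzn-s3-demo-bucket/on-node-start.sh    # placeholder S3 path
    Args:
      - arg1
  OnNodeConfigured:
    Sequence:
      - Script: s3://amzn-s3-demo-bucket/step1.sh        # placeholder S3 path
      - Script: s3://amzn-s3-demo-bucket/step2.sh        # placeholder S3 path
        Args:
          - arg1
```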

#### `Iam`


**(Optional)** Defines optional IAM settings for the Slurm queue.

```
Iam:
  S3Access:
    - BucketName: string
      EnableWriteAccess: boolean
      KeyName: string
  AdditionalIamPolicies:
    - Policy: string
  InstanceProfile: string
  InstanceRole: string
```

[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

##### `Iam` Properties


**`InstanceProfile` (**Optional**, `String`)**  
Specifies an instance profile to override the default instance role or instance profile for the Slurm queue. You cannot specify both `InstanceProfile` and `InstanceRole`. The format is `arn:${Partition}:iam::${Account}:instance-profile/${InstanceProfileName}`.  
If this is specified, the `S3Access` and `AdditionalIamPolicies` settings can't be specified.  
We recommend that you specify one or both of the `S3Access` and `AdditionalIamPolicies` settings because features added to Amazon ParallelCluster often require new permissions.  
[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)

**`InstanceRole` (**Optional**, `String`)**  
Specifies an instance role to override the default instance role or instance profile for the Slurm queue. You cannot specify both `InstanceProfile` and `InstanceRole`. The format is `arn:${Partition}:iam::${Account}:role/${RoleName}`.  
If this is specified, the `S3Access` and `AdditionalIamPolicies` settings can't be specified.  
We recommend that you specify one or both of the `S3Access` and `AdditionalIamPolicies` settings because features added to Amazon ParallelCluster often require new permissions.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

**`S3Access` (**Optional**)**  
Specifies a bucket for the Slurm queue. This is used to generate policies to grant the specified access to the bucket in the Slurm queue.  
If this is specified, the `InstanceProfile` and `InstanceRole` settings can't be specified.  
We recommend that you specify one or both of the `S3Access` and `AdditionalIamPolicies` settings because features added to Amazon ParallelCluster often require new permissions.  

```
S3Access:
  - BucketName: string
    EnableWriteAccess: boolean
    KeyName: string
```
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)    
**`BucketName` (**Required**, `String`)**  
The name of the bucket.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)  
**`KeyName` (**Optional**, `String`)**  
The key for the bucket. The default value is `*`.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)  
**`EnableWriteAccess` (**Optional**, `Boolean`)**  
Indicates whether write access is enabled for the bucket.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

**`AdditionalIamPolicies` (**Optional**)**  
Specifies a list of Amazon Resource Names (ARNs) of IAM policies for Amazon EC2. This list is attached to the root role used for the Slurm queue in addition to the permissions that are required by Amazon ParallelCluster.  
An IAM policy name and its ARN are different. Names can't be used.  
If this is specified, the `InstanceProfile` and `InstanceRole` settings can't be specified.  
We recommend that you use `AdditionalIamPolicies` because `AdditionalIamPolicies` are added to the permissions that Amazon ParallelCluster requires, and the `InstanceRole` must include all permissions required. The permissions required often change from release to release as features are added.  
There's no default value.  

```
AdditionalIamPolicies:
  - Policy: string
```
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)    
**`Policy` (**Required**, `[String]`)**  
List of IAM policies.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)
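
For example, rather than replacing the instance role, a queue can extend the default permissions with bucket access and an extra managed policy. The bucket name in this sketch is illustrative:

```
Iam:
  S3Access:
    - BucketName: amzn-s3-demo-bucket    # illustrative bucket name
      EnableWriteAccess: false
  AdditionalIamPolicies:
    - Policy: arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
```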

## `SlurmSettings`


**(Optional)** Defines the settings for Slurm that apply to the entire cluster.

```
SlurmSettings:
  ScaledownIdletime: integer
  QueueUpdateStrategy: string
  EnableMemoryBasedScheduling: boolean
  CustomSlurmSettings: [dict] 
  CustomSlurmSettingsIncludeFile: string
  Database:
    Uri: string
    UserName: string
    PasswordSecretArn: string
  ExternalSlurmdbd:
    Host: string
    Port: integer
  Dns:
    DisableManagedDns: boolean
    HostedZoneId: string
    UseEc2Hostnames: boolean
```

### `SlurmSettings` Properties


**`ScaledownIdletime` (**Optional**, `Integer`)**  
Defines the amount of idle time (in minutes) after which a Slurm node with no jobs is terminated.  
The default value is `10`.  
[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)

**`MungeKeySecretArn` (**Optional**, `String`)**  
The Amazon Resource Name (ARN) of the plaintext Amazon Secrets Manager secret that contains the base64-encoded munge key to be used in the Slurm cluster. This munge key is used to authenticate RPC calls between Slurm client commands and Slurm daemons acting as remote servers. If `MungeKeySecretArn` isn't provided, Amazon ParallelCluster generates a random munge key for the cluster.  
`MungeKeySecretArn` is supported starting with Amazon ParallelCluster version 3.8.0.
If `MungeKeySecretArn` is newly added to an existing cluster, Amazon ParallelCluster doesn't restore the previous munge key in the event of a rollback or when `MungeKeySecretArn` is later removed. Instead, a new random munge key is generated.
If the Amazon ParallelCluster user has permission to [DescribeSecret](https://docs.amazonaws.cn/secretsmanager/latest/apireference/API_DescribeSecret.html) on that specific secret resource, `MungeKeySecretArn` is validated. `MungeKeySecretArn` is valid if:  
+ The specified secret exists, and
+ The secret is plaintext and contains a valid base64-encoded string, and
+ The decoded binary munge key has a size between 256 and 8192 bits.
If the `pcluster` user IAM policy doesn't include `DescribeSecret`, `MungeKeySecretArn` isn't validated and a warning message is displayed. For more information, see [Base Amazon ParallelCluster `pcluster` user policy](iam-roles-in-parallelcluster-v3.md#iam-roles-in-parallelcluster-v3-base-user-policy).  
When you update MungeKeySecretArn, the compute fleet and all login nodes must be stopped.  
If the secret value in the secret ARN is modified while the ARN remains the same, the cluster won't automatically be updated with the new munge key. In order to use the secret ARN's new munge key, you must stop the compute fleet and login nodes then run the following command from the head node.  
`sudo /opt/parallelcluster/scripts/slurm/update_munge_key.sh`  
After you run the command, you can resume both the compute fleet and login nodes: the newly provisioned compute and login nodes will automatically start using the new munge key.  
To generate a base64-encoded custom munge key, you can use the [mungekey utility](https://github.com/dun/munge/wiki/Man-8-mungekey) distributed with the munge software and then encode it using the base64 utility generally available in your OS. Alternatively, you either use bash (please set the bs parameter between 32 and 1024)  
`dd if=/dev/random bs=128 count=1 2>/dev/null | base64 -w 0`  
or Python as follows:  

```
import base64
import os

# Key length in bytes: between 32 (256 bits) and 1024 (8192 bits)
key_length = 128

# Print the base64-encoded munge key
print(base64.b64encode(os.urandom(key_length)).decode("utf-8"))
```
[Update policy: The compute fleet and login nodes must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md)
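
As a sketch, a configuration that references a custom munge key secret might look like the following. The secret ARN is a placeholder; substitute the ARN of your own Secrets Manager secret:

```
Scheduling:
  Scheduler: slurm
  SlurmSettings:
    MungeKeySecretArn: arn:aws:secretsmanager:us-east-1:111122223333:secret:MungeKeySecret-a1b2c3
```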

**`QueueUpdateStrategy` (**Optional**, `String`)**  
Specifies the replacement strategy for the [`SlurmQueues`](#Scheduling-v3-SlurmQueues) section parameters that have the following update policy:  
[Update policy: The compute fleet must be stopped or `QueueUpdateStrategy` must be set for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-queue-update-strategy-v3)  
The `QueueUpdateStrategy` value is used only when a cluster update process starts.  
Valid values: `COMPUTE_FLEET_STOP` | `DRAIN` | `TERMINATE`  
Default value: `COMPUTE_FLEET_STOP`    
**`DRAIN`**  
Nodes in queues with changed parameter values are set to `DRAINING`. Nodes in this state don't accept new jobs and running jobs continue to completion.  
After a node becomes `idle` (`DRAINED`), a node is replaced if the node is static, and the node is terminated if the node is dynamic. Other nodes in other queues without changed parameter values aren't impacted.  
The time this strategy needs to replace all of the queue nodes with changed parameter values depends on the running workload.  
**`COMPUTE_FLEET_STOP`**  
The default value of the `QueueUpdateStrategy` parameter. With this setting, updating parameters under the [`SlurmQueues`](#Scheduling-v3-SlurmQueues) section requires you to [stop the compute fleet](pcluster.update-compute-fleet-v3.md) before you perform a cluster update:  

```
$ pcluster update-compute-fleet --status STOP_REQUESTED
```  
**`TERMINATE`**  
In queues with changed parameter values, running jobs are terminated and the nodes are powered down immediately.  
Static nodes are replaced and dynamic nodes are terminated.  
Other nodes in other queues without changed parameter values aren't impacted.
[Update policy: This setting is not analyzed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-ignored-v3)  
`QueueUpdateStrategy` is supported starting with Amazon ParallelCluster version 3.2.0.
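
For example, to let running jobs drain to completion before nodes with changed parameters are replaced, you might set the strategy as in the following sketch:

```
Scheduling:
  Scheduler: slurm
  SlurmSettings:
    QueueUpdateStrategy: DRAIN
```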

**`EnableMemoryBasedScheduling` (**Optional**, `Boolean`)**  
If `true`, memory-based scheduling is enabled in Slurm. For more information, see [`SlurmQueues`](#Scheduling-v3-SlurmQueues) / [`ComputeResources`](#Scheduling-v3-SlurmQueues-ComputeResources) / [`SchedulableMemory`](#yaml-Scheduling-SlurmQueues-ComputeResources-SchedulableMemory).  
The default value is `false`.  
Enabling memory-based scheduling impacts the way that the Slurm scheduler handles jobs and node allocation.  
For more information, see [Slurm memory-based scheduling](slurm-mem-based-scheduling-v3.md).
`EnableMemoryBasedScheduling` is supported starting with Amazon ParallelCluster version 3.2.0.
Starting with Amazon ParallelCluster version 3.7.0, `EnableMemoryBasedScheduling` can be enabled if you configure multiple instance types in [Instances](#yaml-Scheduling-SlurmQueues-ComputeResources-Instances).  
For Amazon ParallelCluster versions 3.2.0 to 3.6.*x*, `EnableMemoryBasedScheduling` can't be enabled if you configure multiple instance types in [Instances](#yaml-Scheduling-SlurmQueues-ComputeResources-Instances).
[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)

**`CustomSlurmSettings` (**Optional**, `[Dict]`)**  
Defines the custom Slurm settings that apply to the entire cluster.  
Specifies a list of Slurm configuration dictionaries of key-value pairs to be appended to the end of the `slurm.conf` file that Amazon ParallelCluster generates.  
Each dictionary in the list appears as a separate line added to the Slurm configuration file. You can specify either simple or complex parameters.  
Simple parameters consist of a single key-value pair, as shown in the following examples:  

```
 - Param1: 100
 - Param2: "SubParam1,SubParam2=SubValue2"
```
Example rendered in Slurm configuration:  

```
Param1=100
Param2=SubParam1,SubParam2=SubValue2
```
Complex Slurm configuration parameters consist of multiple space-separated key-value pairs, as shown in the next examples:  

```
 - NodeName: test-nodes[1-10]
   CPUs: 4
   RealMemory: 4196
   ... # other node settings
 - NodeSet: test-nodeset
   Nodes: test-nodes[1-10]
   ... # other nodeset settings
 - PartitionName: test-partition
   Nodes: test-nodeset
   ... # other partition settings
```
Example rendered in Slurm configuration:  

```
NodeName=test-nodes[1-10] CPUs=4 RealMemory=4196 ... # other node settings
NodeSet=test-nodeset Nodes=test-nodes[1-10] ... # other nodeset settings
PartitionName=test-partition Nodes=test-nodeset ... # other partition settings
```
Custom Slurm nodes must not contain the `-st-` or `-dy-` patterns in their names. These patterns are reserved for nodes managed by Amazon ParallelCluster.
If you specify custom Slurm configuration parameters in `CustomSlurmSettings`, you must not specify custom Slurm configuration parameters for `CustomSlurmSettingsIncludeFile`.  
You can only specify Slurm configuration parameters that aren't deny-listed in `CustomSlurmSettings`. For information about deny-listed Slurm configuration parameters, see [Deny-listed Slurm configuration parameters for `CustomSlurmSettings`](slurm-configuration-settings-v3.md#slurm-configuration-denylists-v3).  
Amazon ParallelCluster only checks whether a parameter is in a deny list. Amazon ParallelCluster doesn't validate your custom Slurm configuration parameter syntax or semantics. It is your responsibility to validate your custom Slurm configuration parameters. Invalid custom Slurm configuration parameters can cause Slurm daemon failures that can lead to cluster create and update failures.  
For more information about how to specify custom Slurm configuration parameters with Amazon ParallelCluster, see [Slurm configuration customization](slurm-configuration-settings-v3.md).  
For more information about Slurm configuration parameters, see [slurm.conf](https://slurm.schedmd.com/slurm.conf.html) in the Slurm documentation.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)  
`CustomSlurmSettings` is supported starting with Amazon ParallelCluster version 3.6.0.

**`CustomSlurmSettingsIncludeFile` (**Optional**, `String`)**  
Defines the custom Slurm settings that apply to the entire cluster.  
Specifies the custom Slurm file consisting of custom Slurm configuration parameters to be appended at the end of the `slurm.conf` file that Amazon ParallelCluster generates.  
You must include the path to the file. The path can start with `https://` or `s3://`.  
If you specify custom Slurm configuration parameters for `CustomSlurmSettingsIncludeFile`, you must not specify custom Slurm configuration parameters for `CustomSlurmSettings`.  
Custom Slurm nodes must not contain the `-st-` or `-dy-` patterns in their names. These patterns are reserved for nodes managed by Amazon ParallelCluster.
You can only specify Slurm configuration parameters that aren't deny-listed in `CustomSlurmSettingsIncludeFile`. For information about deny-listed Slurm configuration parameters, see [Deny-listed Slurm configuration parameters for `CustomSlurmSettings`](slurm-configuration-settings-v3.md#slurm-configuration-denylists-v3).  
Amazon ParallelCluster only checks whether a parameter is in a deny list. Amazon ParallelCluster doesn't validate your custom Slurm configuration parameter syntax or semantics. It is your responsibility to validate your custom Slurm configuration parameters. Invalid custom Slurm configuration parameters can cause Slurm daemon failures that can lead to cluster create and update failures.  
For more information about how to specify custom Slurm configuration parameters with Amazon ParallelCluster, see [Slurm configuration customization](slurm-configuration-settings-v3.md).  
For more information about Slurm configuration parameters, see [slurm.conf](https://slurm.schedmd.com/slurm.conf.html) in the Slurm documentation.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)  
`CustomSlurmSettingsIncludeFile` is supported starting with Amazon ParallelCluster version 3.6.0.
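
As a sketch, the include file can be referenced with an `https://` or `s3://` path; the bucket and file names below are placeholders:

```
Scheduling:
  Scheduler: slurm
  SlurmSettings:
    CustomSlurmSettingsIncludeFile: s3://amzn-s3-demo-bucket/custom_slurm_settings.conf
```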

### `Database`


**(Optional)** Defines the settings to enable Slurm Accounting on the cluster. For more information, see [Slurm accounting with Amazon ParallelCluster](slurm-accounting-v3.md).

```
Database:
   Uri: string
   UserName: string
   PasswordSecretArn: string
```

[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)

#### `Database` properties


**`Uri` (**Required**, `String`)**  
The address to the database server that's used as the backend for Slurm accounting. This URI must be formatted as `host:port` and must not contain a scheme, such as `mysql://`. The host can either be an IP address or a DNS name that's resolvable by the head node. If a port isn't provided, Amazon ParallelCluster uses the MySQL default port 3306.  
Amazon ParallelCluster bootstraps the Slurm accounting database to the cluster and must access the database.  
The database must be reachable before the following occurs:  
+ A cluster is created.
+ Slurm accounting is enabled with a cluster update.
[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)

**`UserName` (**Required**, `String`)**  
The identity that Slurm uses to connect to the database, write accounting logs, and perform queries. The user must have both read and write permissions on the database.  
[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)

**`PasswordSecretArn` (**Required**, `String`)**  
The Amazon Resource Name (ARN) of the Amazon Secrets Manager secret that contains the `UserName` plaintext password. This password is used together with `UserName` and Slurm accounting to authenticate on the database server.  
+ When you create a secret using the Amazon Secrets Manager console, be sure to select "Other type of secret", select plaintext, and only include the password text in the secret.
+ You cannot use the '#' character in the database password because Slurm doesn't support it in `slurmdbd.conf`.
+ For more information on how to use Amazon Secrets Manager to create a secret, see [Create an Amazon Secrets Manager Secret](https://docs.amazonaws.cn//secretsmanager/latest/userguide/create_secret).
If the user has the permission to [DescribeSecret](https://docs.amazonaws.cn/secretsmanager/latest/apireference/API_DescribeSecret.html), `PasswordSecretArn` is validated. `PasswordSecretArn` is valid if the specified secret exists. If the user IAM policy doesn't include `DescribeSecret`, `PasswordSecretArn` isn't validated and a warning message is displayed. For more information, see [Base Amazon ParallelCluster `pcluster` user policy](iam-roles-in-parallelcluster-v3.md#iam-roles-in-parallelcluster-v3-base-user-policy).  
When you update `PasswordSecretArn`, the compute fleet must be stopped. If the secret value changes, and the secret ARN doesn't change, the cluster isn't automatically updated with the new database password. To update the cluster for the new secret value, you must run the following command from within the head node after the compute fleet is stopped.  

```
$ sudo /opt/parallelcluster/scripts/slurm/update_slurm_database_password.sh
```
We recommend that you only change the database password when the compute fleet is stopped to avoid loss of accounting data.
[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)

**`DatabaseName` (**Optional**, `String`)**  
Name of the database on the database server (defined by the `Uri` parameter) to be used for Slurm accounting.  
The name of the database may contain lowercase letters, numbers, and underscores. The name may not be longer than 64 characters.  
This parameter maps to the `StorageLoc` parameter of [slurmdbd.conf](https://slurm.schedmd.com/slurmdbd.conf.html#OPT_StorageLoc).  
If `DatabaseName` isn't provided, ParallelCluster uses the name of the cluster to define a value for `StorageLoc`.  
Updating `DatabaseName` is allowed, with the following considerations:  
+ If a database with the name specified in `DatabaseName` doesn't yet exist on the database server, slurmdbd creates it. It's your responsibility to reconfigure the new database as needed, such as adding the accounting entities (clusters, accounts, users, associations, QOSs, and so on).
+ If a database with the name specified in `DatabaseName` already exists on the database server, slurmdbd uses it for the Slurm accounting functionality.
[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)

**Note**  
`Database` is added starting with release 3.3.0.
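
Putting the required properties together, a `Database` fragment might look like the following sketch. The database endpoint, user name, and secret ARN are placeholders for your own values:

```
Scheduling:
  Scheduler: slurm
  SlurmSettings:
    Database:
      Uri: slurm-accounting-db.example.us-east-1.rds.amazonaws.com:3306
      UserName: clusteradmin
      PasswordSecretArn: arn:aws:secretsmanager:us-east-1:111122223333:secret:SlurmDbPassword-a1b2c3
```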

### `ExternalSlurmdbd`


**(Optional)** Defines the settings to enable Slurm Accounting with an external slurmdbd server. For more information, see [Slurm accounting with Amazon ParallelCluster](slurm-accounting-v3.md).

```
ExternalSlurmdbd:
  Host: string
  Port: integer
```

#### `ExternalSlurmdbd` properties


**`Host` (**Required**, `String`)**  
The address to the external slurmdbd server for Slurm accounting. The host can either be an IP address or a DNS name that's resolvable by the head node.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

**`Port` (**Optional**, `Integer`)**  
The port that the slurmdbd service listens on. The default value is `6819`.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)
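
For example, a sketch pointing the cluster at an external slurmdbd server; the host name is a placeholder:

```
Scheduling:
  Scheduler: slurm
  SlurmSettings:
    ExternalSlurmdbd:
      Host: slurmdbd.example.internal
      Port: 6819
```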

### `Dns`


**(Optional)** Defines the DNS settings that apply to the entire cluster.

```
Dns:
  DisableManagedDns: boolean
  HostedZoneId: string
  UseEc2Hostnames: boolean
```

#### `Dns` properties


**`DisableManagedDns` (**Optional**, `Boolean`)**  
If `true`, the DNS entries for the cluster aren't created and Slurm node names aren't resolvable.  
By default, Amazon ParallelCluster creates a Route 53 hosted zone where nodes are registered when launched. The default value is `false`. If `DisableManagedDns` is set to `true`, the hosted zone isn't created by Amazon ParallelCluster.  
To learn how to use this setting to deploy clusters in subnets with no internet access, see [Amazon ParallelCluster in a single subnet with no internet access](aws-parallelcluster-in-a-single-public-subnet-no-internet-v3.md).  
A name resolution system is required for the cluster to operate properly. If `DisableManagedDns` is set to `true`, you must provide a name resolution system. To use the Amazon EC2 default DNS, set `UseEc2Hostnames` to `true`. Alternatively, configure your own DNS resolver and make sure that node names are registered when instances are launched. For example, you can do this by configuring [`CustomActions`](#Scheduling-v3-SlurmQueues-CustomActions) / [`OnNodeStart`](#yaml-Scheduling-SlurmQueues-CustomActions-OnNodeStart).
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

**`HostedZoneId` (**Optional**, `String`)**  
Defines a custom Route 53 hosted zone ID to use for DNS name resolution for the cluster. When provided, Amazon ParallelCluster registers cluster nodes in the specified hosted zone and doesn't create a managed hosted zone.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

**`UseEc2Hostnames` (**Optional**, `Boolean`)**  
If `true`, cluster compute nodes are configured with the default EC2 hostname. The Slurm `NodeHostName` is also updated with this information. The default is `false`.  
To learn how to use this setting to deploy clusters in subnets with no internet access, see [Amazon ParallelCluster in a single subnet with no internet access](aws-parallelcluster-in-a-single-public-subnet-no-internet-v3.md).  
**This note isn't relevant starting with Amazon ParallelCluster version 3.3.0.**  
For Amazon ParallelCluster supported versions prior to 3.3.0:  
When `UseEc2Hostnames` is set to `true`, the Slurm configuration file is set with the Amazon ParallelCluster `prolog` and `epilog` scripts:  
+ `prolog` runs to add nodes info to `/etc/hosts` on compute nodes when each job is allocated.
+ `epilog` runs to clean contents written by `prolog`.
To add custom `prolog` or `epilog` scripts, add them to the `/opt/slurm/etc/pcluster/prolog.d/` or `/opt/slurm/etc/pcluster/epilog.d/` folders respectively.
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)
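
For example, a sketch of a `Dns` configuration for a cluster in a subnet with no internet access, which disables the managed hosted zone and relies on the EC2 default DNS:

```
Scheduling:
  Scheduler: slurm
  SlurmSettings:
    Dns:
      DisableManagedDns: true
      UseEc2Hostnames: true
```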

# `SharedStorage` section


**(Optional)** The shared storage settings for the cluster.

Amazon ParallelCluster supports using [Amazon EBS](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/AmazonEBS.html), [FSx for ONTAP](https://docs.amazonaws.cn/fsx/latest/ONTAPGuide/what-is-fsx-ontap.html), and [FSx for OpenZFS](https://docs.amazonaws.cn/fsx/latest/OpenZFSGuide/what-is-fsx.html) shared storage volumes; [Amazon EFS](https://docs.amazonaws.cn/efs/latest/ug/whatisefs.html) and [FSx for Lustre](https://docs.amazonaws.cn/fsx/latest/LustreGuide/what-is.html) shared storage file systems; and [File Caches](https://docs.amazonaws.cn/fsx/latest/FileCacheGuide/what-is.html).

In the `SharedStorage` section, you can define either external or managed storage:
+ **External storage** refers to an existing volume or file system that you manage. Amazon ParallelCluster doesn't create or delete it.
+ **Amazon ParallelCluster managed storage** refers to a volume or file system that Amazon ParallelCluster creates and can delete.

For [shared storage quotas](shared-storage-quotas-v3.md) and more information about how to configure your shared storage, see [Shared storage](shared-storage-quotas-integration-v3.md) in *Using Amazon ParallelCluster*.

**Note**  
If Amazon Batch is used as a scheduler, FSx for Lustre is only available on the cluster head node.

```
SharedStorage:
  - MountDir: string
    Name: string
    StorageType: Ebs
    EbsSettings:
      VolumeType: string
      Iops: integer
      Size: integer
      Encrypted: boolean
      KmsKeyId: string
      SnapshotId: string
      Throughput: integer
      VolumeId: string
      DeletionPolicy: string
      Raid:
        Type: string
        NumberOfVolumes: integer
  - MountDir: string
    Name: string
    StorageType: Efs
    EfsSettings:
      Encrypted: boolean
      KmsKeyId: string
      EncryptionInTransit: boolean
      IamAuthorization: boolean
      PerformanceMode: string
      ThroughputMode: string
      ProvisionedThroughput: integer
      FileSystemId: string
      DeletionPolicy: string
      AccessPointId: string
  - MountDir: string
    Name: string
    StorageType: FsxLustre
    FsxLustreSettings:
      StorageCapacity: integer
      DeploymentType: string
      ImportedFileChunkSize: integer
      DataCompressionType: string
      ExportPath: string
      ImportPath: string
      WeeklyMaintenanceStartTime: string
      AutomaticBackupRetentionDays: integer
      CopyTagsToBackups: boolean
      DailyAutomaticBackupStartTime: string
      PerUnitStorageThroughput: integer
      BackupId: string
      KmsKeyId: string
      FileSystemId: string
      AutoImportPolicy: string
      DriveCacheType: string
      StorageType: string
      DeletionPolicy: string
      DataRepositoryAssociations:
      - Name: string
        BatchImportMetaDataOnCreate: boolean
        DataRepositoryPath: string
        FileSystemPath: string
        ImportedFileChunkSize: integer
        AutoExportPolicy: string
        AutoImportPolicy: string
  - MountDir: string
    Name: string
    StorageType: FsxOntap
    FsxOntapSettings:
      VolumeId: string
  - MountDir: string
    Name: string
    StorageType: FsxOpenZfs
    FsxOpenZfsSettings:
      VolumeId: string
  - MountDir: string
    Name: string
    StorageType: FileCache
    FileCacheSettings:
      FileCacheId: string
```

## `SharedStorage` update policies

+ For managed or external Amazon EBS, managed Amazon EFS, and managed FSx for Lustre storage, the update policy is [Update policy: For this list values setting, the compute fleet must be stopped or QueueUpdateStrategy must be set to add a new value; the compute fleet must be stopped when removing an existing value.](using-pcluster-update-cluster-v3.md#update-policy-update-cluster-v3)
+ For external Amazon EFS, FSx for Lustre, FSx for ONTAP, FSx for OpenZFS, and File Cache storage, the update policy is [Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

## `SharedStorage` properties


`MountDir` (**Required**, `String`)  
The path where the shared storage is mounted.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`Name` (**Required**, `String`)  
The name of the shared storage. You use this name when you update the settings.  
If you specify Amazon ParallelCluster managed shared storage, and you change the value for `Name`, the existing managed shared storage and data is deleted and new managed shared storage is created. Changing the value for `Name` with a cluster update is equivalent to replacing the existing managed shared storage with a new one. Make sure you back up your data before you change `Name` if you need to retain the data from the existing shared storage.
[Update policy: For this list values setting, the compute fleet must be stopped or QueueUpdateStrategy must be set to add a new value; the compute fleet must be stopped when removing an existing value.](using-pcluster-update-cluster-v3.md#update-policy-update-cluster-v3)

`StorageType` (**Required**, `String`)  
The type of the shared storage. Supported values are `Ebs`, `Efs`, `FsxLustre`, `FsxOntap`, `FsxOpenZfs`, and `FileCache`.  
For more information, see [`FsxLustreSettings`](#SharedStorage-v3-FsxLustreSettings), [`FsxOntapSettings`](#SharedStorage-v3-FsxOntapSettings), and [`FsxOpenZfsSettings`](#SharedStorage-v3-FsxOpenZfsSettings).  
If you use Amazon Batch as a scheduler, FSx for Lustre is only available on the cluster head node.
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)
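
Putting the required properties together, a minimal `SharedStorage` entry for managed Amazon EBS storage might look like the following sketch; the mount directory and name are illustrative:

```
SharedStorage:
  - MountDir: /shared
    Name: shared-ebs
    StorageType: Ebs
    EbsSettings:
      Size: 100
```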

## `EbsSettings`


**(Optional)** The settings for an Amazon EBS volume.

```
EbsSettings:
  VolumeType: string
  Iops: integer
  Size: integer
  Encrypted: boolean
  KmsKeyId: string
  SnapshotId: string
  VolumeId: string
  Throughput: integer
  DeletionPolicy: string
  Raid:
    Type: string
    NumberOfVolumes: integer
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

### `EbsSettings` properties


When the [DeletionPolicy](#yaml-SharedStorage-EbsSettings-DeletionPolicy) is set to `Delete`, a managed volume, with its data, is deleted if the cluster is deleted or if the volume is removed with a cluster update. 

For more information, see [Shared storage](shared-storage-quotas-integration-v3.md) in *Using Amazon ParallelCluster*.

`VolumeType` (**Optional**, `String`)  
Specifies the [Amazon EBS volume type](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/EBSVolumeTypes.html). Supported values are `gp2`, `gp3`, `io1`, `io2`, `sc1`, `st1`, and `standard`. The default value is `gp3`.  
For more information, see [Amazon EBS volume types](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) in the *Amazon EC2 User Guide*.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`Iops` (**Optional**, `Integer`)  
Defines the number of IOPS for `io1`, `io2`, and `gp3` type volumes.  
The default value, supported values, and maximum `Iops` to `Size` ratio vary by `VolumeType` and `Size`.    
`VolumeType` = `io1`  
Default `Iops` = 100  
Supported values `Iops` = 100–64000 †  
Maximum `Iops` to `Size` ratio = 50 IOPS for each GiB. 5000 IOPS requires a `Size` of at least 100 GiB.  
`VolumeType` = `io2`  
Default `Iops` = 100  
Supported values `Iops` = 100–64000 (256000 for `io2` Block Express volumes) †  
Maximum `Iops` to `Size` ratio = 500 IOPS for each GiB. 5000 IOPS requires a `Size` of at least 10 GiB.  
`VolumeType` = `gp3`  
Default `Iops` = 3000  
Supported values `Iops` = 3000–16000  
Maximum `Iops` to `Size` ratio = 500 IOPS for each GiB. 5000 IOPS requires a `Size` of at least 10 GiB.
† Maximum IOPS is guaranteed only on [Instances built on the Nitro System](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/instance-types.html#ec2-nitro-instances) provisioned with more than 32,000 IOPS. Other instances guarantee up to 32,000 IOPS. Unless you [modify the volume](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/ebs-modify-volume.html), earlier `io1` volumes might not reach full performance. `io2` Block Express volumes support `Iops` values up to 256000 on `R5b` instance types. For more information, see [`io2` Block Express volumes](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/ebs-volume-types.html#io2-block-express) in the *Amazon EC2 User Guide*.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)
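
For example, a `gp3` volume provisioned at 5000 IOPS and 100 GiB stays well within the 500 IOPS-per-GiB ratio; the values in this sketch are illustrative:

```
EbsSettings:
  VolumeType: gp3
  Size: 100
  Iops: 5000
  Throughput: 250
```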

`Size` (**Optional**, `Integer`)  
Specifies the volume size in gibibytes (GiB). The default value is 35.   
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`Encrypted` (**Optional**, `Boolean`)  
Specifies if the volume is encrypted. The default value is `true`.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`KmsKeyId` (**Optional**, `String`)  
Specifies a custom Amazon KMS key to use for encryption. This setting requires that the `Encrypted` setting is set to `true`.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`SnapshotId` (**Optional**, `String`)  
Specifies the Amazon EBS snapshot ID if you use a snapshot as the source for the volume.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`VolumeId` (**Optional**, `String`)  
Specifies the Amazon EBS volume ID. When this is specified for an `EbsSettings` instance, only the `MountDir` parameter can also be specified.  
The volume must be created in the same Availability Zone as the `HeadNode`.  
Support for multiple Availability Zones is added in Amazon ParallelCluster version 3.4.0.
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`Throughput` (**Optional**, `Integer`)  
The throughput, in MiB/s, to provision for a volume, with a maximum of 1,000 MiB/s.  
This setting is valid only when `VolumeType` is `gp3`. The supported range is 125 to 1000, with a default value of 125.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

`DeletionPolicy` (**Optional**, `String`)  
Specifies whether the volume should be retained, deleted, or snapshotted when the cluster is deleted or the volume is removed. The supported values are `Delete`, `Retain`, and `Snapshot`. The default value is `Delete`.  
When the [DeletionPolicy](#yaml-SharedStorage-EbsSettings-DeletionPolicy) is set to `Delete`, a managed volume, with its data, is deleted if the cluster is deleted or if the volume is removed with a cluster update.  
For more information, see [Shared storage](shared-storage-quotas-integration-v3.md).  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)  
`DeletionPolicy` is supported starting with Amazon ParallelCluster version 3.2.0.

### `Raid`


**(Optional)** Defines the configuration of a RAID volume.

```
Raid:
  Type: string
  NumberOfVolumes: integer
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

#### `Raid` properties


`Type` (**Required**, `String`)  
Defines the type of RAID array. Supported values are "0" (striped) and "1" (mirrored).  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`NumberOfVolumes` (**Optional**, `Integer`)  
Defines the number of Amazon EBS volumes to use to create the RAID array. The supported range of values is 2-5. The default value (when the `Raid` setting is defined) is 2.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)
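
For example, a sketch of a striped (RAID 0) array built from two `gp3` volumes; the values are illustrative:

```
EbsSettings:
  VolumeType: gp3
  Size: 100
  Raid:
    Type: "0"
    NumberOfVolumes: 2
```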

## `EfsSettings`


**(Optional)** The settings for an Amazon EFS file system.

```
EfsSettings:
  Encrypted: boolean
  KmsKeyId: string
  EncryptionInTransit: boolean
  IamAuthorization: boolean
  PerformanceMode: string
  ThroughputMode: string
  ProvisionedThroughput: integer
  FileSystemId: string
  DeletionPolicy: string
  AccessPointId: string
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

### `EfsSettings` properties


When the [DeletionPolicy](#yaml-SharedStorage-EfsSettings-DeletionPolicy) is set to `Delete`, a managed file system, with its data, is deleted if the cluster is deleted, or if the file system is removed with a cluster update.

For more information, see [Shared storage](shared-storage-quotas-integration-v3.md) in *Using Amazon ParallelCluster*.

`Encrypted` (**Optional**, `Boolean`)  
Specifies if the Amazon EFS file system is encrypted. The default value is `false`.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`KmsKeyId` (**Optional**, `String`)  
Specifies a custom Amazon KMS key to use for encryption. This setting requires that the `Encrypted` setting is set to `true`.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`EncryptionInTransit` (**Optional**, `Boolean`)  
If set to `true`, Amazon EFS file systems are mounted using Transport Layer Security (TLS). By default, this is set to `false`.  
If Amazon Batch is used as the scheduler, `EncryptionInTransit` isn't supported.  
`EncryptionInTransit` was added in Amazon ParallelCluster version 3.4.0.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`IamAuthorization` (**Optional**, `Boolean`)  
If set to `true`, Amazon EFS is authenticated by using the system's IAM identity. By default, this is set to `false`.  
If `IamAuthorization` is set to `true`, `EncryptionInTransit` must also be set to `true`.  
If Amazon Batch is used as the scheduler, `IamAuthorization` isn't supported.  
`IamAuthorization` was added in Amazon ParallelCluster version 3.4.0.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`PerformanceMode` (**Optional**, `String`)  
Specifies the performance mode of the Amazon EFS file system. Supported values are `generalPurpose` and `maxIO`. The default value is `generalPurpose`. For more information, see [Performance modes](https://docs.amazonaws.cn/efs/latest/ug/performance.html#performancemodes) in the *Amazon Elastic File System User Guide*.  
We recommend the `generalPurpose` performance mode for most file systems.  
File systems that use the `maxIO` performance mode can scale to higher levels of aggregate throughput and operations per second. However, there's a trade-off of slightly higher latencies for most file operations.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`ThroughputMode` (**Optional**, `String`)  
Specifies the throughput mode of the Amazon EFS file system. Supported values are `bursting` and `provisioned`. The default value is `bursting`. When `provisioned` is used, `ProvisionedThroughput` must be specified.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

`ProvisionedThroughput` (**Required** when `ThroughputMode` is `provisioned`, `Integer`)  
Defines the provisioned throughput of the Amazon EFS file system, measured in MiB/s. This corresponds to the [ProvisionedThroughputInMibps](https://docs.amazonaws.cn/efs/latest/ug/API_CreateFileSystem.html#efs-CreateFileSystem-response-ProvisionedThroughputInMibps) parameter in the *Amazon EFS API Reference*.  
If you use this parameter, you must set `ThroughputMode` to `provisioned`.  
The supported range is `1`-`1024`. To request a limit increase, contact Amazon Web Services Support.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)
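
For example, a sketch of an Amazon EFS shared storage entry that provisions 256 MiB/s of throughput might look like the following; the `MountDir` and `Name` values are illustrative:

```
SharedStorage:
  - MountDir: /shared-efs
    Name: provisioned-efs
    StorageType: Efs
    EfsSettings:
      ThroughputMode: provisioned
      ProvisionedThroughput: 256
```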

`FileSystemId` (**Optional**, `String`)  
Defines the Amazon EFS file system ID for an existing file system.  
If the cluster is configured to span multiple Availability Zones, you must define a file system mount target in each Availability Zone that's used by the cluster.  
When this is specified, only `MountDir` can be specified. No other `EfsSettings` can be specified.  

**If you set this option, the following must be true for the file systems that you define:**
+ The file systems have an existing mount target in each of the cluster's Availability Zones, with inbound and outbound NFS traffic allowed from the `HeadNode` and `ComputeNodes`. Multiple Availability Zones are configured in [Scheduling](Scheduling-v3.md) / [SlurmQueues](Scheduling-v3.md#Scheduling-v3-SlurmQueues) / [Networking](Scheduling-v3.md#Scheduling-v3-SlurmQueues-Networking) / [SubnetIds](Scheduling-v3.md#yaml-Scheduling-SlurmQueues-Networking-SubnetIds).

  

**To make sure traffic is allowed between the cluster and file system, you can do one of the following:**
  + Configure the security groups of the mount target to allow the traffic to and from the CIDR or prefix list of cluster subnets.
**Note**  
Amazon ParallelCluster validates that ports are open and that the CIDR or prefix list is configured. Amazon ParallelCluster doesn't validate the content of CIDR block or prefix list.
  + Set custom security groups for cluster nodes by using [`SlurmQueues`](Scheduling-v3.md#Scheduling-v3-SlurmQueues) / [`Networking`](Scheduling-v3.md#Scheduling-v3-SlurmQueues-Networking) / [`SecurityGroups`](Scheduling-v3.md#yaml-Scheduling-SlurmQueues-Networking-SecurityGroups) and [`HeadNode`](HeadNode-v3.md) / [`Networking`](HeadNode-v3.md#HeadNode-v3-Networking) / [`SecurityGroups`](HeadNode-v3.md#yaml-HeadNode-Networking-SecurityGroups). The custom security groups must be configured to allow traffic between the cluster and the file system.
**Note**  
If all cluster nodes use custom security groups, Amazon ParallelCluster only validates that the ports are open. Amazon ParallelCluster doesn't validate that the source and destination are properly configured.
EFS OneZone is only supported if all compute nodes and the head node are in the same Availability Zone. EFS OneZone can have only one mount target.
Support for multiple Availability Zones was added in Amazon ParallelCluster version 3.4.0.
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)
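
A sketch of mounting an existing Amazon EFS file system, assuming its mount targets and security groups are configured as described above; the `MountDir`, `Name`, and file system ID are placeholders:

```
SharedStorage:
  - MountDir: /shared-efs
    Name: existing-efs
    StorageType: Efs
    EfsSettings:
      FileSystemId: fs-1234567890abcdef0
```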

`DeletionPolicy` (**Optional**, `String`)  
Specifies whether the file system should be retained or deleted when the file system is removed from the cluster or the cluster is deleted. The supported values are `Delete` and `Retain`. The default value is `Delete`.  
When the [DeletionPolicy](#yaml-SharedStorage-EfsSettings-DeletionPolicy) is set to `Delete`, a managed file system, with its data, is deleted if the cluster is deleted, or if the file system is removed with a cluster update.  
For more information, see [Shared storage](shared-storage-quotas-integration-v3.md).  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)  
`DeletionPolicy` is supported starting with Amazon ParallelCluster version 3.3.0.

`AccessPointId` (**Optional**, `String`)  
If this option is specified, the file system entry point defined by the access point ID is mounted rather than the file system root.  
For more information, see [Shared storage](shared-storage-quotas-integration-v3.md).  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

## `FsxLustreSettings`


**Note**  
You must define `FsxLustreSettings` if `FsxLustre` is specified for [`StorageType`](#yaml-SharedStorage-StorageType).

**(Optional)** The settings for an FSx for Lustre file system.

```
FsxLustreSettings:
  StorageCapacity: integer
  DeploymentType: string
  ImportedFileChunkSize: integer
  DataCompressionType: string
  ExportPath: string
  ImportPath: string
  WeeklyMaintenanceStartTime: string
  AutomaticBackupRetentionDays: integer
  CopyTagsToBackups: boolean
  DailyAutomaticBackupStartTime: string
  PerUnitStorageThroughput: integer
  BackupId: string # BackupId cannot coexist with some of the fields
  KmsKeyId: string
  FileSystemId: string # FileSystemId cannot coexist with other fields
  AutoImportPolicy: string
  DriveCacheType: string
  StorageType: string
  DeletionPolicy: string
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

**Note**  
If Amazon Batch is used as a scheduler, FSx for Lustre is only available on the cluster head node.

### `FsxLustreSettings` properties


When the [DeletionPolicy](#yaml-SharedStorage-FsxLustreSettings-DeletionPolicy) is set to `Delete`, a managed file system, with its data, is deleted if the cluster is deleted, or if the file system is removed with a cluster update.

For more information, see [Shared storage](shared-storage-quotas-integration-v3.md).

`StorageCapacity` (**Required**, `Integer`)  
Sets the storage capacity of the FSx for Lustre file system, in GiB. `StorageCapacity` is required if you're creating a new file system. Do not include `StorageCapacity` if `BackupId` or `FileSystemId` is specified.  
+ For `SCRATCH_2`, `PERSISTENT_1`, and `PERSISTENT_2` deployment types, valid values are 1200 GiB, 2400 GiB, and increments of 2400 GiB.
+ For `SCRATCH_1` deployment type, valid values are 1200 GiB, 2400 GiB, and increments of 3600 GiB.
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`DeploymentType` (**Optional**, `String`)  
Specifies the deployment type of the FSx for Lustre file system. Supported values are `SCRATCH_1`, `SCRATCH_2`, `PERSISTENT_1`, and `PERSISTENT_2`. The default value is `SCRATCH_2`.  
Choose the `SCRATCH_1` and `SCRATCH_2` deployment types when you need temporary storage and shorter-term processing of data. The `SCRATCH_2` deployment type provides in-transit encryption of data and higher burst throughput capacity than `SCRATCH_1`.  
Choose the `PERSISTENT_1` deployment type for longer-term storage and for throughput-focused workloads that aren't latency sensitive. `PERSISTENT_1` supports encryption of data in transit. It's available in all Amazon Web Services Regions where FSx for Lustre is available.  
Choose the `PERSISTENT_2` deployment type for longer-term storage and for latency-sensitive workloads that require the highest levels of IOPS and throughput. `PERSISTENT_2` supports SSD storage and offers higher `PerUnitStorageThroughput` (up to 1000 MB/s/TiB). `PERSISTENT_2` is available in a limited number of Amazon Web Services Regions. For more information about deployment types and the list of Amazon Web Services Regions where `PERSISTENT_2` is available, see [File system deployment options for FSx for Lustre](https://docs.amazonaws.cn/fsx/latest/LustreGuide/using-fsx-lustre.html#lustre-deployment-types) in the *Amazon FSx for Lustre User Guide*.  
Encryption of data in transit is automatically enabled when you access `SCRATCH_2`, `PERSISTENT_1`, or `PERSISTENT_2` deployment type file systems from Amazon EC2 instances that support [this feature](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/data-protection.html).  
Encryption of data in transit for `SCRATCH_2`, `PERSISTENT_1`, and `PERSISTENT_2` deployment types is supported when accessed from supported instance types in supported Amazon Web Services Regions. For more information, see [Encrypting data in transit](https://docs.amazonaws.cn/fsx/latest/LustreGuide/encryption-in-transit-fsxl.html) in the *Amazon FSx for Lustre User Guide*.  
Support for the `PERSISTENT_2` deployment type was added with Amazon ParallelCluster version 3.2.0.
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`ImportedFileChunkSize` (**Optional**, `Integer`)  
For files that are imported from a data repository, this value determines the stripe count and maximum amount of data for each file (in MiB) that's stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system.  
The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.  
This parameter isn't supported for file systems that use the `PERSISTENT_2` deployment type. For instructions on how to configure data repository associations, see [Linking your file system to an S3 bucket](https://docs.amazonaws.cn/fsx/latest/LustreGuide/create-dra-linked-data-repo.html) in the *Amazon FSx for Lustre User Guide*.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`DataCompressionType` (**Optional**, `String`)  
Sets the data compression configuration for the FSx for Lustre file system. The supported value is `LZ4`. `LZ4` indicates that data compression is turned on with the LZ4 algorithm. When `DataCompressionType` isn't specified, data compression is turned off when the file system is created.  
For more information, see [Lustre data compression](https://docs.amazonaws.cn/fsx/latest/LustreGuide/data-compression.html).  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

`ExportPath` (**Optional**, `String`)  
The path in Amazon S3 where the root of your FSx for Lustre file system is exported. This setting is only supported when the `ImportPath` parameter is specified. The path must use the same Amazon S3 bucket as specified in `ImportPath`. You can provide an optional prefix to which new and changed data is to be exported from your FSx for Lustre file system. If an `ExportPath` value is not provided, FSx for Lustre sets a default export path, `s3://amzn-s3-demo-bucket/FSxLustre[creation-timestamp]`. The timestamp is in UTC format, for example `s3://amzn-s3-demo-bucket/FSxLustre20181105T222312Z`.  
The Amazon S3 export bucket must be the same as the import bucket specified by `ImportPath`. If you only specify a bucket name, such as `s3://amzn-s3-demo-bucket`, you get a 1:1 mapping of file system objects to Amazon S3 bucket objects. This mapping means that the input data in Amazon S3 is overwritten on export. If you provide a custom prefix in the export path, such as `s3://amzn-s3-demo-bucket/[custom-optional-prefix]`, FSx for Lustre exports the contents of your file system to that export prefix in the Amazon S3 bucket.  
This parameter isn't supported for file systems that use the `PERSISTENT_2` deployment type. Configure data repository associations as described in [Linking your file system to an S3 bucket](https://docs.amazonaws.cn/fsx/latest/LustreGuide/create-dra-linked-data-repo.html) in the *Amazon FSx for Lustre User Guide*.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`ImportPath` (**Optional**, `String`)  
The path to the Amazon S3 bucket (including the optional prefix) that you're using as the data repository for your FSx for Lustre file system. The root of your FSx for Lustre file system will be mapped to the root of the Amazon S3 bucket you select. An example is `s3://amzn-s3-demo-bucket/optional-prefix`. If you specify a prefix after the Amazon S3 bucket name, only object keys with that prefix are loaded into the file system.  
This parameter isn't supported for file systems that use the `PERSISTENT_2` deployment type. Configure data repository associations as described in [Linking your file system to an S3 bucket](https://docs.amazonaws.cn/fsx/latest/LustreGuide/create-dra-linked-data-repo.html) in the *Amazon FSx for Lustre User Guide*.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)
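
For deployment types other than `PERSISTENT_2`, a sketch that links a new file system to an S3 bucket might look like the following; the bucket prefixes and `Name` are illustrative:

```
SharedStorage:
  - MountDir: /fsx
    Name: fsx-scratch
    StorageType: FsxLustre
    FsxLustreSettings:
      StorageCapacity: 1200
      DeploymentType: SCRATCH_2
      ImportPath: s3://amzn-s3-demo-bucket/input-prefix
      ExportPath: s3://amzn-s3-demo-bucket/export-prefix
```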

`WeeklyMaintenanceStartTime` (**Optional**, `String`)  
The preferred start time to perform weekly maintenance, in the `"d:HH:MM"` format in the UTC+0 time zone. For this format, `d` is the weekday number from 1 through 7, beginning with Monday and ending with Sunday. Quotation marks are required for this field.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

`AutomaticBackupRetentionDays` (**Optional**, `Integer`)  
The number of days to retain automatic backups. Setting this to 0 disables automatic backups. The supported range is 0-90. The default is 0. This setting is only valid for use with `PERSISTENT_1` and `PERSISTENT_2` deployment types. For more information, see [Working with backups](https://docs.amazonaws.cn/fsx/latest/LustreGuide/using-backups-fsx.html) in the *Amazon FSx for Lustre User Guide*.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

`CopyTagsToBackups` (**Optional**, `Boolean`)  
If `true`, copy the tags for the FSx for Lustre file system to backups. This value defaults to `false`. If it's set to `true`, all tags for the file system are copied to all automatic and user-initiated backups where the user doesn't specify tags. If this value is `true`, and you specify one or more tags, only the specified tags are copied to backups. If you specify one or more tags when you create a user-initiated backup, no tags are copied from the file system, regardless of this value. This setting is only valid for use with `PERSISTENT_1` and `PERSISTENT_2` deployment types.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`DailyAutomaticBackupStartTime` (**Optional**, `String`)  
A recurring daily time, in the `HH:MM` format. `HH` is the zero-padded hour of the day (00-23). `MM` is the zero-padded minute of the hour (00-59). For example, `05:00` specifies 5 A.M. daily. This setting is only valid for use with `PERSISTENT_1` and `PERSISTENT_2` deployment types.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

`PerUnitStorageThroughput` (**Required for `PERSISTENT_1` and `PERSISTENT_2` deployment types**, `Integer`)  
Describes the amount of read and write throughput for each 1 tebibyte of storage, in MB/s/TiB. File system throughput capacity is calculated by multiplying ﬁle system storage capacity (TiB) by the `PerUnitStorageThroughput` (MB/s/TiB). For a 2.4 TiB ﬁle system, provisioning 50 MB/s/TiB of `PerUnitStorageThroughput` yields 120 MB/s of ﬁle system throughput. You pay for the amount of throughput that you provision. This corresponds to the [PerUnitStorageThroughput](https://docs.amazonaws.cn/AWSCloudFormation/latest/UserGuide/aws-properties-fsx-filesystem-lustreconfiguration.html#cfn-fsx-filesystem-lustreconfiguration-perunitstoragethroughput) property.  
Valid values:  
+ `PERSISTENT_1` SSD storage: 50, 100, 200 MB/s/TiB.
+ `PERSISTENT_1` HDD storage: 12, 40 MB/s/TiB.
+ `PERSISTENT_2` SSD storage: 125, 250, 500, 1000 MB/s/TiB.
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`BackupId` (**Optional**, `String`)  
Specifies the ID of the backup to use to restore the FSx for Lustre file system from an existing backup. When the `BackupId` setting is specified, the `AutoImportPolicy`, `DeploymentType`, `ExportPath`, `KmsKeyId`, `ImportPath`, `ImportedFileChunkSize`, `StorageCapacity`, and `PerUnitStorageThroughput` settings must not be specified. These settings are read from the backup. This corresponds to the [BackupId](https://docs.amazonaws.cn/AWSCloudFormation/latest/UserGuide/aws-resource-fsx-filesystem.html#cfn-fsx-filesystem-backupid) property.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`KmsKeyId` (**Optional**, `String`)  
The ID of the Amazon Key Management Service (Amazon KMS) key that's used to encrypt the data of persistent FSx for Lustre file systems at rest. If not specified, the FSx for Lustre managed key is used. The `SCRATCH_1` and `SCRATCH_2` FSx for Lustre file systems are always encrypted at rest using FSx for Lustre managed keys. For more information, see [Encrypt](https://docs.amazonaws.cn//kms/latest/APIReference/API_Encrypt.html) in the *Amazon Key Management Service API Reference*.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`FileSystemId` (**Optional**, `String`)  
Specifies the ID of an existing FSx for Lustre file system.  
If this option is specified, only the `MountDir` and `FileSystemId` settings in the `FsxLustreSettings` are used. All other settings in the `FsxLustreSettings` are ignored.  
If the Amazon Batch scheduler is used, FSx for Lustre is only available on the head node.  
The file system must be associated with a security group that allows inbound and outbound TCP traffic through ports 988, 1021, 1022, and 1023.  
Make sure that traffic is allowed between the cluster and the file system by doing one of the following:  
+ Configure the security groups of the file system to allow the traffic to and from the CIDR or prefix list of cluster subnets.
**Note**  
Amazon ParallelCluster validates that ports are open and that the CIDR or prefix list is configured. Amazon ParallelCluster doesn't validate the content of CIDR block or prefix list.
+ Set custom security groups for cluster nodes by using [`SlurmQueues`](Scheduling-v3.md#Scheduling-v3-SlurmQueues) / [`Networking`](Scheduling-v3.md#Scheduling-v3-SlurmQueues-Networking) / [`SecurityGroups`](Scheduling-v3.md#yaml-Scheduling-SlurmQueues-Networking-SecurityGroups) and [`HeadNode`](HeadNode-v3.md) / [`Networking`](HeadNode-v3.md#HeadNode-v3-Networking) / [`SecurityGroups`](HeadNode-v3.md#yaml-HeadNode-Networking-SecurityGroups). The custom security groups must be configured to allow traffic between the cluster and the file system.
**Note**  
If all cluster nodes use custom security groups, Amazon ParallelCluster only validates that the ports are open. Amazon ParallelCluster doesn't validate that the source and destination are properly configured.
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)
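
A sketch of attaching an existing FSx for Lustre file system, assuming its security group allows the ports listed above; the `MountDir`, `Name`, and file system ID are placeholders:

```
SharedStorage:
  - MountDir: /fsx
    Name: existing-fsx
    StorageType: FsxLustre
    FsxLustreSettings:
      FileSystemId: fs-0123456789abcdef0
```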

`AutoImportPolicy` (**Optional**, `String`)  
When you create your FSx for Lustre file system, your existing Amazon S3 objects appear as file and directory listings. Use this property to choose how FSx for Lustre keeps your file and directory listings up to date as you add or modify objects in your linked Amazon S3 bucket. `AutoImportPolicy` can have the following values:  
+  `NEW` - Automatic import is on. FSx for Lustre automatically imports directory listings of any new objects added to the linked Amazon S3 bucket that do not currently exist in the FSx for Lustre file system. 
+  `NEW_CHANGED` - Automatic import is on. FSx for Lustre automatically imports file and directory listings of any new objects added to the Amazon S3 bucket and any existing objects that are changed in the Amazon S3 bucket after you choose this option. 
+  `NEW_CHANGED_DELETED` - Automatic import is on. FSx for Lustre automatically imports file and directory listings of any new objects added to the Amazon S3 bucket, any existing objects that are changed in the Amazon S3 bucket, and any objects that were deleted in the Amazon S3 bucket after you choose this option.
**Note**  
Support for `NEW_CHANGED_DELETED` was added in Amazon ParallelCluster version 3.1.1.
If `AutoImportPolicy` isn't specified, automatic import is off. FSx for Lustre only updates file and directory listings from the linked Amazon S3 bucket when the file system is created. FSx for Lustre doesn't update file and directory listings for any new or changed objects after choosing this option.  
For more information, see [Automatically import updates from your S3 bucket](https://docs.amazonaws.cn/fsx/latest/LustreGuide/autoimport-data-repo.html) in the *Amazon FSx for Lustre User Guide*.  
This parameter isn't supported for file systems that use the `PERSISTENT_2` deployment type. For instructions on how to configure data repository associations, see [Linking your file system to an S3 bucket](https://docs.amazonaws.cn/fsx/latest/LustreGuide/create-dra-linked-data-repo.html) in the *Amazon FSx for Lustre User Guide*.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`DriveCacheType` (**Optional**, `String`)  
Specifies that the file system has an SSD drive cache. This can only be set if the `StorageType` setting is set to `HDD`, and the `DeploymentType` setting is set to `PERSISTENT_1`. This corresponds to the [DriveCacheType](https://docs.amazonaws.cn/AWSCloudFormation/latest/UserGuide/aws-properties-fsx-filesystem-lustreconfiguration.html#cfn-fsx-filesystem-lustreconfiguration-drivecachetype) property. For more information, see [FSx for Lustre deployment options](https://docs.amazonaws.cn/fsx/latest/LustreGuide/using-fsx-lustre.html) in the *Amazon FSx for Lustre User Guide*.  
The only valid value is `READ`. To disable the SSD drive cache, don’t specify the `DriveCacheType` setting.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`StorageType` (**Optional**, `String`)  
Sets the storage type for the FSx for Lustre file system that you're creating. Valid values are `SSD` and `HDD`.  
+ Set to `SSD` to use solid state drive storage.
+ Set to `HDD` to use hard disk drive storage. `HDD` is supported on `PERSISTENT` deployment types. 
The default value is `SSD`. For more information, see [ Storage Type Options](https://docs.amazonaws.cn/fsx/latest/WindowsGuide/optimize-fsx-costs.html#storage-type-options) in the *Amazon FSx for Windows User Guide* and [Multiple Storage Options](https://docs.amazonaws.cn/fsx/latest/LustreGuide/what-is.html#storage-options) in the *Amazon FSx for Lustre User Guide*.   
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`DeletionPolicy` (**Optional**, `String`)  
Specifies whether the file system should be retained or deleted when the file system is removed from the cluster or the cluster is deleted. The supported values are `Delete` and `Retain`. The default value is `Delete`.  
When the [DeletionPolicy](#yaml-SharedStorage-FsxLustreSettings-DeletionPolicy) is set to `Delete`, a managed file system, with its data, is deleted if the cluster is deleted, or if the file system is removed with a cluster update.  
For more information, see [Shared storage](shared-storage-quotas-integration-v3.md).  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)  
`DeletionPolicy` is supported starting with Amazon ParallelCluster version 3.3.0.

`DataRepositoryAssociations` (**Optional**, `List`)  
A list of data repository associations (DRAs), with up to 8 per file system.  
Each data repository association must have a unique Amazon FSx file system directory and a unique S3 bucket or prefix associated with it.  
You cannot use [ExportPath](#yaml-SharedStorage-FsxLustreSettings-ExportPath) and [ImportPath](#yaml-SharedStorage-FsxLustreSettings-ImportPath) in the `FsxLustreSettings` at the same time as DRAs.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

`Name` (**Required**, `String`)  
The name of the DRA. You use this name when you update the settings.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`BatchImportMetaDataOnCreate` (**Optional**, `Boolean`)  
A boolean flag indicating whether an import data repository task to import metadata should run after the data repository association is created. The task runs if this flag is set to `true`.  
Default value: `false`  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`DataRepositoryPath` (**Required**, `String`)  
The path to the Amazon S3 data repository that will be linked to the file system. The path can be an S3 bucket or prefix in the format `s3://amzn-s3-demo-bucket/myPrefix/`. This path specifies where in the S3 data repository files will be imported from or exported to.  
The path cannot overlap with the path of another DRA.  
Pattern: `^[^\u0000\u0085\u2028\u2029\r\n]{3,4357}$`  
Minimum: `3`  
Maximum: `4357`  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`FileSystemPath` (**Required**, `String`)  
A path on the Amazon FSx for Lustre file system that points to a high-level directory (such as `/ns1/`) or subdirectory (such as `/ns1/subdir/`) that will be mapped 1-1 with `DataRepositoryPath`. The leading forward slash in the name is required. Two data repository associations cannot have overlapping file system paths. For example, if a data repository is associated with file system path `/ns1/`, then you cannot link another data repository with file system path `/ns1/ns2`.  
This path specifies where in your file system files will be exported from or imported to. This file system directory can be linked to only one Amazon S3 bucket, and no other S3 bucket can be linked to the directory.  
The file system path cannot overlap with the path of another DRA.  
 If you specify only a forward slash (`/`) as the file system path, you can link only one data repository to the file system. You can only specify "`/`" as the file system path for the first data repository associated with a file system. 
Pattern: `^[^\u0000\u0085\u2028\u2029\r\n]{1,4096}$`  
Minimum: `1`  
Maximum: `4096`  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`ImportedFileChunkSize` (**Optional**, `Integer`)  
For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system or cache.  
The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.  
Minimum: `1`  
Maximum: `4096`  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

`AutoExportPolicy` (**Optional**, `Array of strings`)  
The list can contain one or more of the following values:  
+ `NEW` - New files and directories are automatically exported to the data repository as they are added to the file system.
+ `CHANGED` - Changes to files and directories on the file system are automatically exported to the data repository.
+ `DELETED` - Files and directories are automatically deleted on the data repository when they are deleted on the file system.
You can define any combination of event types for your `AutoExportPolicy`.  
Maximum: `3`  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

`AutoImportPolicy` (**Optional**, `Array of strings`)  
The list can contain one or more of the following values:  
+ `NEW` - Amazon FSx automatically imports metadata of files added to the linked S3 bucket that do not currently exist in the FSx file system.
+ `CHANGED` - Amazon FSx automatically updates file metadata and invalidates existing file content on the file system as files change in the data repository.
+ `DELETED` - Amazon FSx automatically deletes files on the file system as corresponding files are deleted in the data repository.
You can define any combination of event types for your `AutoImportPolicy`.  
Maximum: `3`  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)
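
Putting the DRA properties together, a sketch of a `PERSISTENT_2` file system with one data repository association might look like the following; the `Name`, paths, and bucket are illustrative:

```
SharedStorage:
  - MountDir: /fsx
    Name: fsx-persistent
    StorageType: FsxLustre
    FsxLustreSettings:
      StorageCapacity: 1200
      DeploymentType: PERSISTENT_2
      PerUnitStorageThroughput: 125
      DataRepositoryAssociations:
        - Name: dra-example
          DataRepositoryPath: s3://amzn-s3-demo-bucket/myPrefix/
          FileSystemPath: /ns1/
          BatchImportMetaDataOnCreate: true
          AutoExportPolicy:
            - NEW
            - CHANGED
            - DELETED
          AutoImportPolicy:
            - NEW
            - CHANGED
            - DELETED
```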

## `FsxOntapSettings`


**Note**  
You must define `FsxOntapSettings` if `FsxOntap` is specified for [`StorageType`](#yaml-SharedStorage-StorageType).

**(Optional)** The settings for an FSx for ONTAP file system.

```
FsxOntapSettings:
  VolumeId: string
```

### `FsxOntapSettings` properties


`VolumeId` (**Required**, `String`)  
Specifies the volume ID of the existing FSx for ONTAP file system.

**Note**  
If an Amazon Batch scheduler is used, FSx for ONTAP is only available on the head node.
If the FSx for ONTAP deployment type is `Multi-AZ`, make sure that the head node subnet's route table is properly configured.
Support for FSx for ONTAP was added in Amazon ParallelCluster version 3.2.0.
The file system must be associated with a security group that allows inbound and outbound TCP and UDP traffic through ports 111, 635, 2049, and 4046.

Make sure traffic is allowed between the cluster and file system by doing one of the following actions:
+ Configure the security groups of the file system to allow the traffic to and from the CIDR or prefix list of cluster subnets.
**Note**  
Amazon ParallelCluster validates that ports are open and that the CIDR or prefix list is configured. Amazon ParallelCluster doesn't validate the content of the CIDR block or prefix list.
+ Set custom security groups for cluster nodes by using [`SlurmQueues`](Scheduling-v3.md#Scheduling-v3-SlurmQueues) / [`Networking`](Scheduling-v3.md#Scheduling-v3-SlurmQueues-Networking) / [`SecurityGroups`](Scheduling-v3.md#yaml-Scheduling-SlurmQueues-Networking-SecurityGroups) and [`HeadNode`](HeadNode-v3.md) / [`Networking`](HeadNode-v3.md#HeadNode-v3-Networking) / [`SecurityGroups`](HeadNode-v3.md#yaml-HeadNode-Networking-SecurityGroups). The custom security groups must be configured to allow traffic between the cluster and the file system.
**Note**  
If all cluster nodes use custom security groups, Amazon ParallelCluster only validates that the ports are open. Amazon ParallelCluster doesn't validate that the source and destination are properly configured.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)
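As an illustration, an existing ONTAP volume is mounted through the [`SharedStorage` section](SharedStorage-v3.md); the mount directory, name, and volume ID below are placeholder values:

```
SharedStorage:
  - MountDir: /shared-ontap
    Name: ontap-storage
    StorageType: FsxOntap
    FsxOntapSettings:
      VolumeId: fsvol-0123456789abcdef0
```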

## `FsxOpenZfsSettings`


**Note**  
You must define `FsxOpenZfsSettings` if `FsxOpenZfs` is specified for [`StorageType`](#yaml-SharedStorage-StorageType).

**(Optional)** The settings for an FSx for OpenZFS file system.

```
FsxOpenZfsSettings:
  VolumeId: string
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

### `FsxOpenZfsSettings` properties


`VolumeId` (**Required**, `String`)  
Specifies the volume ID of the existing FSx for OpenZFS file system.

**Note**  
If an Amazon Batch scheduler is used, FSx for OpenZFS is only available on the head node.
Support for FSx for OpenZFS was added in Amazon ParallelCluster version 3.2.0.
The file system must be associated with a security group that allows inbound and outbound TCP and UDP traffic through ports 111, 2049, 20001, 20002, and 20003.

Make sure that traffic is allowed between the cluster and file system by doing one of the following:
+ Configure the security groups of the file system to allow the traffic to and from the CIDR or prefix list of cluster subnets.
**Note**  
Amazon ParallelCluster validates that ports are open and that the CIDR or prefix list is configured. Amazon ParallelCluster doesn't validate the content of the CIDR block or prefix list.
+ Set custom security groups for cluster nodes by using [`SlurmQueues`](Scheduling-v3.md#Scheduling-v3-SlurmQueues) / [`Networking`](Scheduling-v3.md#Scheduling-v3-SlurmQueues-Networking) / [`SecurityGroups`](Scheduling-v3.md#yaml-Scheduling-SlurmQueues-Networking-SecurityGroups) and [`HeadNode`](HeadNode-v3.md) / [`Networking`](HeadNode-v3.md#HeadNode-v3-Networking) / [`SecurityGroups`](HeadNode-v3.md#yaml-HeadNode-Networking-SecurityGroups). The custom security groups must be configured to allow traffic between the cluster and the file system.
**Note**  
If all cluster nodes use custom security groups, Amazon ParallelCluster only validates that the ports are open. Amazon ParallelCluster doesn't validate that the source and destination are properly configured.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)
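As an illustration, an existing OpenZFS volume is mounted through the [`SharedStorage` section](SharedStorage-v3.md); the mount directory, name, and volume ID below are placeholder values:

```
SharedStorage:
  - MountDir: /shared-zfs
    Name: openzfs-storage
    StorageType: FsxOpenZfs
    FsxOpenZfsSettings:
      VolumeId: fsvol-0123456789abcdef0
```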

## `FileCacheSettings`


**Note**  
You must define `FileCacheSettings` if `FileCache` is specified for [`StorageType`](#yaml-SharedStorage-StorageType).

**(Optional)** The settings for a File Cache.

```
FileCacheSettings:
  FileCacheId: string
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

### `FileCacheSettings` properties


`FileCacheId` (**Required**, `String`)  
Specifies the File Cache ID of an existing File Cache.

**Note**  
File Cache doesn't support Amazon Batch schedulers.
Support for File Cache was added in Amazon ParallelCluster version 3.7.0.
The file system must be associated with a security group that allows inbound and outbound TCP traffic through port 988.

Make sure that traffic is allowed between the cluster and file system by doing one of the following:
+ Configure the security groups of the File Cache to allow the traffic to and from the CIDR or prefix list of cluster subnets.
**Note**  
Amazon ParallelCluster validates that ports are open and that the CIDR or prefix list is configured. Amazon ParallelCluster doesn't validate the content of the CIDR block or prefix list.
+ Set custom security groups for cluster nodes by using [`SlurmQueues`](Scheduling-v3.md#Scheduling-v3-SlurmQueues) / [`Networking`](Scheduling-v3.md#Scheduling-v3-SlurmQueues-Networking) / [`SecurityGroups`](Scheduling-v3.md#yaml-Scheduling-SlurmQueues-Networking-SecurityGroups) and [`HeadNode`](HeadNode-v3.md) / [`Networking`](HeadNode-v3.md#HeadNode-v3-Networking) / [`SecurityGroups`](HeadNode-v3.md#yaml-HeadNode-Networking-SecurityGroups). The custom security groups must be configured to allow traffic between the cluster and the file system.
**Note**  
If all cluster nodes use custom security groups, Amazon ParallelCluster only validates that the ports are open. Amazon ParallelCluster doesn't validate that the source and destination are properly configured.

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)
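As an illustration, an existing File Cache is mounted through the [`SharedStorage` section](SharedStorage-v3.md); the mount directory, name, and cache ID below are placeholder values:

```
SharedStorage:
  - MountDir: /shared-cache
    Name: file-cache
    StorageType: FileCache
    FileCacheSettings:
      FileCacheId: fc-0123456789abcdef0
```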

# `Iam` section


**(Optional)** Specifies IAM properties for the cluster.

```
Iam:
  Roles:
    LambdaFunctionsRole: string
  PermissionsBoundary: string
  ResourcePrefix: string
```

[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

## `Iam` properties


`PermissionsBoundary` (**Optional**, `String`)  
The ARN of the IAM policy to use as permissions boundary for all roles created by Amazon ParallelCluster. For more information, see [Permissions boundaries for IAM entities](https://docs.amazonaws.cn/IAM/latest/UserGuide/access_policies_boundaries.html) in the *IAM User Guide*. The format is `arn:${Partition}:iam::${Account}:policy/${PolicyName}`.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

`Roles` (**Optional**)  
Specifies settings for the IAM roles used by the cluster.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)    
`LambdaFunctionsRole` (**Optional**, `String`)  
The ARN of the IAM role to use for Amazon Lambda. This overrides the default role attached to all Lambda functions backing Amazon CloudFormation custom resources. Lambda needs to be configured as the principal that's allowed to assume the role. This doesn't override the role of Lambda functions used for Amazon Batch. The format is `arn:${Partition}:iam::${Account}:role/${RoleName}`.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)
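As an illustrative sketch that combines these properties, the ARNs below are placeholders; substitute the partition, account ID, and resource names for your environment:

```
Iam:
  PermissionsBoundary: arn:aws:iam::123456789012:policy/clusterPermissionsBoundary
  Roles:
    LambdaFunctionsRole: arn:aws:iam::123456789012:role/clusterLambdaFunctionsRole
```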

`ResourcePrefix` (**Optional**)  
Specifies a path or name prefix for IAM resources that are created by Amazon ParallelCluster.  
The resource prefix must follow the [naming rules specified by IAM](https://docs.amazonaws.cn/IAM/latest/UserGuide/reference_identifiers.html):  
+ A name can contain up to 30 characters.
+ A name can only be a string with no slash (`/`) characters.
+ A path can be up to 512 characters.
+ A path must start and end with a slash (`/`). It can contain multiple slashes (`/`) between the start and end slashes (`/`).
+ You can combine the path and name `/path/name`.
Specify a name.  

```
Iam:
  ResourcePrefix: my-prefix
```
Specify a path.  

```
Iam:
  ResourcePrefix: /org/dept/team/project/user/
```
Specify a path and name.  

```
Iam:
  ResourcePrefix: /org/dept/team/project/user/my-prefix
```
If you specify `/my-prefix`, an error is returned.  

```
Iam:
  ResourcePrefix: /my-prefix
```
A configuration error is returned because a path must start and end with a slash (`/`), and a name by itself can't contain slashes (`/`).  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

# `LoginNodes` section


**Note**  
Support for `LoginNodes` was added in Amazon ParallelCluster version 3.7.0.

**(Optional)** Specifies the configuration for the login nodes pool.

```
LoginNodes:
  Pools:
    - Name: string
      Count: integer
      InstanceType: string
      GracetimePeriod: integer
      Image:
        CustomAmi: string
      Ssh:
        KeyName: string
        AllowedIps: string
      Networking:
        SubnetIds:
          - string
        SecurityGroups:
          - string
        AdditionalSecurityGroups:
          - string
      Dcv:
        Enabled: boolean
        Port: integer
        AllowedIps: string
      CustomActions:
        OnNodeStart:
          Sequence:
            - Script: string
              Args:
                - string
          Script: string
          Args:
            - string
        OnNodeConfigured:
          Sequence:
            - Script: string
              Args:
                - string
          Script: string
          Args:
            - string
        OnNodeUpdated:
          Sequence:
            - Script: string
              Args:
                - string
          Script: string
          Args:
            - string
      Iam:
        InstanceRole: string
        InstanceProfile: string
        AdditionalIamPolicies:
          - Policy: string
```

[Update policy: The login nodes in the cluster must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-update-login-node-cluster)

## `LoginNodes` properties


### `Pools` properties


Defines groups of login nodes that have the same resource configuration. Starting with Amazon ParallelCluster 3.11.0, up to 10 pools can be specified.

```
Pools:
  - Name: string
    Count: integer
    InstanceType: string
    GracetimePeriod: integer
    Image:
      CustomAmi: string
    Ssh:
      KeyName: string
      AllowedIps: string
    Networking:
      SubnetIds:
        - string
      SecurityGroups:
        - string
      AdditionalSecurityGroups:
        - string
    Dcv:
      Enabled: boolean
      Port: integer
      AllowedIps: string
    CustomActions:
      OnNodeStart:
        Sequence:
          - Script: string
            Args:
              - string
        Script: string
        Args:
          - string
      OnNodeConfigured:
        Sequence:
          - Script: string
            Args:
              - string
        Script: string
        Args:
          - string
      OnNodeUpdated:
        Sequence:
          - Script: string
            Args:
              - string
        Script: string
        Args:
          - string
    Iam:
      InstanceRole: string
      InstanceProfile: string
      AdditionalIamPolicies:
        - Policy: string
```

[Update policy: Login node pools can be added, but removing a pool requires all login nodes in the cluster are stopped.](using-pcluster-update-cluster-v3.md#update-policy-add-login-node-pools)
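For illustration, a minimal pool definition might look like the following; the pool name, instance type, count, and subnet ID are placeholder values, and optional settings such as `Ssh` and `Image` default to the values in the [`HeadNode` section](HeadNode-v3.md):

```
LoginNodes:
  Pools:
    - Name: login
      Count: 2
      InstanceType: c5.xlarge
      Networking:
        SubnetIds:
          - subnet-0123456789abcdef0
```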

`Name` (**Required** `String`)  
Specifies the name of the `LoginNodes` pool. This is used to tag the `LoginNodes` resources.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)   
Starting with Amazon ParallelCluster version 3.11.0, the update policy is: The login nodes in the pool must be stopped for this setting to be changed for an update.

`Count` (**Required** `Integer`)  
Specifies the number of login nodes to keep active.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

`InstanceType` (**Required** `String`)  
Specifies the Amazon EC2 instance type that's used for the login node. The architecture of the instance type must be the same as the architecture used for the Slurm `InstanceType` setting.  
[Update policy](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3): This setting can be changed if the login nodes pool is stopped.  
Starting with Amazon ParallelCluster version 3.11.0, the update policy is: The login nodes in the pool must be stopped for this setting to be changed for an update.

`GracetimePeriod` (**Optional** `Integer`)  
Specifies the minimum amount of time in minutes that elapse between the notification to the logged in user that a login node is to be decommissioned and the actual stop event. Valid values for `GracetimePeriod` are from 3 up to 120 minutes. The default is 10 minutes.  
The triggering event involves interactions between multiple Amazon services. Because of network latency and propagation of the information, the grace time period might last longer than expected.
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

`Image` (**Optional**)  
Defines the image configuration for the login nodes.  

```
Image:
  CustomAmi: String
```  
`CustomAmi` (**Optional** `String`)  
Specifies the custom AMI used to provision the login nodes. If not specified, the value defaults to the one specified in the [`HeadNode` section](HeadNode-v3.md).  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`Ssh` (**Optional**)  
Defines the `ssh` configuration for the login nodes.  

```
Ssh:
  KeyName: string
  AllowedIps: string
```
Starting with Amazon ParallelCluster version 3.11.0, the update policy is: The login nodes in the pool must be stopped for this setting to be changed for an update.  
`KeyName` (**Optional** `String`)  
Specifies the `ssh` key used to log in to the login nodes. If not specified, the value defaults to the one specified in the [`HeadNode` section](HeadNode-v3.md).  
[Update policy: The login nodes in the pool must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-update-login-node-pools)  
Deprecated – The configuration parameter `LoginNodes/Pools/Ssh/KeyName` has been deprecated, and it will be removed in future releases. The CLI now returns a warning message when it is used in the cluster configuration. See [ https://github.com/aws/aws-parallelcluster/issues/6811](https://github.com/aws/aws-parallelcluster/issues/6811) for details.  
`AllowedIps` (**Optional** `String`)  
Specifies the CIDR-formatted IP range or a prefix list ID for SSH connections to the login nodes in the pool. The default is the [AllowedIps](HeadNode-v3.md#yaml-HeadNode-Ssh-AllowedIps) setting defined in the [`HeadNode` section](HeadNode-v3.md), or `0.0.0.0/0` if that isn't specified.  
[Update policy: The login nodes in the pool must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-update-login-node-pools)  
Support for `AllowedIps` for login nodes was added in Amazon ParallelCluster version 3.11.0.

`Networking` (**Required**)  
Defines the networking configuration for the login nodes pool.  

```
Networking:
  SubnetIds:
    - string
  SecurityGroups:
    - string
  AdditionalSecurityGroups:
    - string
```
Starting with Amazon ParallelCluster version 3.11.0, the update policy is: The login nodes in the pool must be stopped for this setting to be changed for an update.  
`SubnetIds` (**Required** `[String]`)  
The ID of the existing subnet that you provision the login nodes pool in. You can define only one subnet.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)  
`SecurityGroups` (**Optional** `[String]`)  
A list of security groups to use for the login nodes pool. If no security groups are specified, Amazon ParallelCluster creates security groups for you.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)  
`AdditionalSecurityGroups` (**Optional** `[String]`)  
A list of additional security groups to use for the login nodes pool.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`Dcv` (**Optional**)  
Defines configuration settings for the NICE DCV server that runs on the [login nodes](#LoginNodes-v3). For more information, see [Connect to the head and login nodes through Amazon DCV](dcv-v3.md)  

```
Dcv:
  Enabled: boolean
  Port: integer
  AllowedIps: string
```
By default, the NICE DCV port set up by Amazon ParallelCluster is open to all IPv4 addresses. You can connect to a NICE DCV port only if you have the URL for the NICE DCV session, and you connect to the session within 30 seconds of when the URL is returned from `pcluster dcv-connect`. Use the `AllowedIps` setting to further restrict access to the NICE DCV port with a CIDR-formatted IP range, and use the `Port` setting to set a nonstandard port.
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)  
Support for DCV on login nodes was added in Amazon ParallelCluster version 3.11.0.  
`Enabled` (**Required** `Boolean`)  
Specifies whether NICE DCV is enabled on the login nodes in the pool. The default value is `false`.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)  
NICE DCV automatically generates a self-signed certificate that's used to secure traffic between the NICE DCV client and NICE DCV server that runs on the login node. To configure your own certificate, see [Amazon DCV HTTPS certificate](dcv-v3.md#dcv-v3-certificate).  
`Port` (**Optional** `Integer`)  
Specifies the port for NICE DCV. The default value is `8443`.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)  
`AllowedIps` (**Optional** `String`)  
Specifies the CIDR-formatted IP range for connections to NICE DCV. This setting is used only when Amazon ParallelCluster creates the security group. The default value is `0.0.0.0/0`, which allows access from any Internet address.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)
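For illustration, the following enables NICE DCV on the pool's login nodes and restricts access to a placeholder CIDR range:

```
Dcv:
  Enabled: true
  Port: 8443
  AllowedIps: 203.0.113.0/24
```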

`CustomActions` (**Optional**)  
Specifies the custom scripts to run on the login nodes.  

```
CustomActions:
  OnNodeStart:
    Sequence:
      - Script: string
        Args: 
          - string
    Script: string
    Args:
      - string
  OnNodeConfigured:
    Sequence:
      - Script: string
        Args:
          - string
    Script: string
    Args:
      - string
  OnNodeUpdated:
    Sequence:
      - Script: string
        Args:
          - string
    Script: string
    Args:
      - string
```
Support for custom actions on login nodes was added in Amazon ParallelCluster version 3.11.0.  
`OnNodeStart` (**Optional**)  
Specifies a single script or a sequence of scripts to run on the [login nodes](#LoginNodes-v3) before any node deployment bootstrap action is started. For more information, see [Custom bootstrap actions](custom-bootstrap-actions-v3.md).    
`Sequence` (**Optional**)  
List of scripts to run. Amazon ParallelCluster runs the scripts in the same order as they are listed in the configuration file, starting with the first.    
`Script` (**Required** `String`)  
Specifies the file to use. The file path can start with `https://` or `s3://`.  
`Args` (**Optional** `[String]`)  
List of arguments to pass to the script.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)  
`Script` (**Required** `String`)  
Specifies the file to use for a single script. The file path can start with `https://` or `s3://`.  
`Args` (**Optional** `[String]`)  
List of arguments to pass to the single script.  
`OnNodeConfigured` (**Optional**)  
Specifies a single script or a sequence of scripts to run on the [login nodes](#LoginNodes-v3) after the node bootstrap processes are complete. For more information, see [Custom bootstrap actions](custom-bootstrap-actions-v3.md).    
`Sequence` (**Optional**)  
List of scripts to run. Amazon ParallelCluster runs the scripts in the same order as they are listed in the configuration file, starting with the first.    
`Script` (**Required** `String`)  
Specifies the file to use. The file path can start with `https://` or `s3://`.  
`Args` (**Optional** `[String]`)  
List of arguments to pass to the script.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)  
`Script` (**Required** `String`)  
Specifies the file to use for a single script. The file path can start with `https://` or `s3://`.  
`Args` (**Optional** `[String]`)  
List of arguments to pass to the single script.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)  
`OnNodeUpdated` (**Optional**)  
Specifies a single script or a sequence of scripts to run after the head node update is completed and the scheduler and shared storage are aligned with the latest cluster configuration changes. For more information, see [Custom bootstrap actions](custom-bootstrap-actions-v3.md).    
`Sequence` (**Optional**)  
List of scripts to run. Amazon ParallelCluster runs the scripts in the same order as they are listed in the configuration file, starting with the first.    
`Script` (**Required** `String`)  
Specifies the file to use. The file path can start with `https://` or `s3://`.  
`Args` (**Optional** `[String]`)  
List of arguments to pass to the script.  
`Script` (**Required** `String`)  
Specifies the file to use for a single script. The file path can start with `https://` or `s3://`.  
`Args` (**Optional** `[String]`)  
List of arguments to pass to the single script.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)  
Amazon ParallelCluster doesn't support including both a single script and `Sequence` for the same custom action.
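For illustration, the following runs a sequence of two scripts after the login nodes finish bootstrapping; the bucket, URL, and script names are hypothetical placeholders:

```
CustomActions:
  OnNodeConfigured:
    Sequence:
      - Script: s3://amzn-s3-demo-bucket/scripts/install-packages.sh
        Args:
          - "--verbose"
      - Script: https://example.com/scripts/configure-env.sh
```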

`Iam` (**Optional**)  
Specifies either an instance role or an instance profile to use on the login nodes to override the default instance role or instance profile for the cluster.  

```
Iam:
  InstanceRole: string
  InstanceProfile: string
  AdditionalIamPolicies:
    - Policy: string
```
Starting with Amazon ParallelCluster version 3.11.0, the update policy is: The login nodes in the pool must be stopped for this setting to be changed for an update.  
`InstanceProfile` (**Optional** `String`)  
Specifies an instance profile to override the default login node instance profile. The format is `arn:${Partition}:iam::${Account}:instance-profile/${InstanceProfileName}`. If this is specified, the `InstanceRole` and `AdditionalIamPolicies` settings can't be specified.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)  
`InstanceRole` (**Optional** `String`)  
Specifies an instance role to override the default login node instance role. The format is `arn:${Partition}:iam::${Account}:role/${RoleName}`. If this is specified, the `InstanceProfile` and `AdditionalIamPolicies` settings can't be specified.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)  
`AdditionalIamPolicies` (**Optional**)  

```
AdditionalIamPolicies:
  - Policy: string
```
Specifies a list of Amazon Resource Names (ARNs) of IAM policies for Amazon EC2. This list is attached to the root role used for the login node in addition to the permissions that are required by Amazon ParallelCluster.  
An IAM policy name and its ARN are different. Names can't be used.  
If this is specified, the `InstanceProfile` and `InstanceRole` settings can't be specified. We recommend that you use `AdditionalIamPolicies` because `AdditionalIamPolicies` are added to the permissions that Amazon ParallelCluster requires, and the `InstanceRole` must include all required permissions. The required permissions often change from release to release as features are added.  
There's no default value.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)    
`Policy` (**Required**, `[String]`)  
An IAM policy Amazon Resource Name (ARN).  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)
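For illustration, the following attaches an Amazon managed policy to the login node role; adjust the partition in the ARN for your Region:

```
Iam:
  AdditionalIamPolicies:
    - Policy: arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
```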

# `Monitoring` section


**(Optional)** Specifies the monitoring settings for the cluster.

```
Monitoring:
  Logs:
    CloudWatch:
      Enabled: boolean
      RetentionInDays: integer
      DeletionPolicy: string
    Rotation:
      Enabled: boolean
  Dashboards:
    CloudWatch:
      Enabled: boolean
  DetailedMonitoring: boolean
  Alarms:
   Enabled: boolean
```

[Update policy: This setting is not analyzed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-ignored-v3)

## `Monitoring` properties


`Logs` (**Optional**)  
The log settings for the cluster.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)    
`CloudWatch` (**Optional**)  
The CloudWatch Logs settings for the cluster.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)    
`Enabled` (**Required**, `Boolean`)  
If `true`, cluster logs are streamed to CloudWatch Logs. The default value is `true`.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)  
`RetentionInDays` (**Optional**, `Integer`)  
The number of days to retain the log events in CloudWatch Logs. The default value is 180. The supported values are 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. A value of 0 uses the default CloudWatch Logs retention setting, which is to never expire.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)  
`DeletionPolicy` (**Optional**, `String`)  
Indicates whether to delete log events on CloudWatch Logs when the cluster is deleted. The possible values are `Delete` and `Retain`. The default value is `Retain`.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)  
`Rotation` (**Optional**)  
The log rotation settings for the cluster.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)    
`Enabled` (**Required**, `Boolean`)  
If `true`, log rotation is enabled. The default is `true`. When an Amazon ParallelCluster-configured log file reaches a certain size, it's rotated and a single backup is maintained. For more information, see [Amazon ParallelCluster configured log rotation](log-rotation-v3.md).  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`Dashboards` (**Optional**)  
The dashboard settings for the cluster.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)    
`CloudWatch` (**Optional**)  
The CloudWatch dashboard settings for the cluster.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)    
`Enabled` (**Required**, `Boolean`)  
If `true`, the CloudWatch dashboard is enabled. The default value is `true`.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

`DetailedMonitoring` (**Optional**, `Boolean`)  
If set to `true`, detailed monitoring is enabled for the compute fleet Amazon EC2 instances. When enabled, the Amazon EC2 console displays monitoring graphs for the instances at 1-minute intervals. There are added costs when this feature is enabled. The default is `false`.  
For more information, see [Enable or turn off detailed monitoring for your instances](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/using-cloudwatch-new.html) in the *Amazon EC2 User Guide for Linux Instances*.  
[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)  
`DetailedMonitoring` was added in Amazon ParallelCluster version 3.6.0.

`Alarms` (**Optional**)  
CloudWatch Alarms for the cluster.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)    
`Enabled` (**Optional**)  
If `true`, the CloudWatch alarms for the cluster are created. The default value is `true`.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)  
Starting with Amazon ParallelCluster version 3.8.0, the following alarms are created for the head node: Amazon EC2 health check, CPU, memory, and disk utilization, and a composite alarm that includes all the others.
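For illustration, the following combines these properties to stream logs with a 30-day retention, delete them with the cluster, and enable the dashboard; the retention value is just an example:

```
Monitoring:
  Logs:
    CloudWatch:
      Enabled: true
      RetentionInDays: 30
      DeletionPolicy: Delete
    Rotation:
      Enabled: true
  Dashboards:
    CloudWatch:
      Enabled: true
  DetailedMonitoring: false
```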

# `Tags` section


**(Optional), Array** Defines the tags that are used by Amazon CloudFormation and propagated to all the cluster resources. For more information, see [Amazon CloudFormation resource tag](https://docs.amazonaws.cn/AWSCloudFormation/latest/UserGuide/aws-properties-resource-tags.html) in the *Amazon CloudFormation User Guide*.

```
Tags:
  - Key: string
    Value: string
```

[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

## `Tags` properties


`Key` (**Required**, `String`)  
Defines the name of the tag.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

`Value` (**Required**, `String`)  
Defines the value of the tag.  
[Update policy: This setting can be changed during an update.](using-pcluster-update-cluster-v3.md#update-policy-setting-supported-v3)

**Note**  
Starting with Amazon ParallelCluster 3.15.0, tag updates are supported with the following limitations:  
+ EBS volume on the head node - Only the tags from when the cluster was created are retained; updating tags on this EBS volume is not supported.
+ Running nodes - Tag updates are not applied to running compute or login nodes.
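For illustration, the following defines two tags with placeholder keys and values:

```
Tags:
  - Key: Project
    Value: genomics-pipeline
  - Key: CostCenter
    Value: "1234"
```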

# `AdditionalPackages` section


**(Optional)** Used to identify additional packages to install.

```
AdditionalPackages:
  IntelSoftware:
    IntelHpcPlatform: boolean
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

## `IntelSoftware`


**(Optional)** Defines the configuration for Intel select solutions.

```
IntelSoftware:
  IntelHpcPlatform: boolean
```

[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

### `IntelSoftware` properties


`IntelHpcPlatform` (**Optional**, `Boolean`)  
If `true`, indicates that the [End user license agreement](https://software.intel.com/en-us/articles/end-user-license-agreement) for Intel Parallel Studio is accepted. This causes Intel Parallel Studio to be installed on the head node and shared with the compute nodes. This adds several minutes to the time it takes the head node to bootstrap.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)  
Starting with Amazon ParallelCluster version 3.10.0, the `IntelHpcPlatform` parameter is no longer supported.

# `DirectoryService` section


**Note**  
Support for `DirectoryService` was added in Amazon ParallelCluster version 3.1.1.

**(Optional)** The directory service settings for a cluster that supports multiple user access.

Amazon ParallelCluster manages permissions that support multiple user access to clusters with an Active Directory (AD) over Lightweight Directory Access Protocol (LDAP) supported by the [System Security Services Daemon (SSSD)](https://sssd.io/docs/introduction.html). For more information, see [ What is Amazon Directory Service?](https://docs.amazonaws.cn/directoryservice/latest/admin-guide/what_is.html) in the *Amazon Directory Service Administration Guide*.

We recommend that you use LDAP over TLS/SSL (LDAPS) to ensure that any potentially sensitive information is transmitted over encrypted channels.

```
DirectoryService:
  DomainName: string
  DomainAddr: string
  PasswordSecretArn: string
  DomainReadOnlyUser: string
  LdapTlsCaCert: string
  LdapTlsReqCert: string
  LdapAccessFilter: string
  GenerateSshKeysForUsers: boolean
  AdditionalSssdConfigs: dict
```

[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)

## `DirectoryService` properties


**Note**  
If you plan to use Amazon ParallelCluster in a single subnet with no internet access, see [Amazon ParallelCluster in a single subnet with no internet access](aws-parallelcluster-in-a-single-public-subnet-no-internet-v3.md) for additional requirements.

`DomainName` (**Required**, `String`)  
The Active Directory (AD) domain that you use for identity information.  
`DomainName` accepts both the Fully Qualified Domain Name (FQDN) and LDAP Distinguished Name (DN) formats.  
+ FQDN example: `corp.example.com`
+ LDAP DN example: `DC=corp,DC=example,DC=com`
This property corresponds to the sssd-ldap parameter that's called `ldap_search_base`.  
[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)

`DomainAddr` (**Required**, `String`)  
The URI or URIs that point to the AD domain controller that's used as the LDAP server. The URI corresponds to the SSSD-LDAP parameter that's called `ldap_uri`. The value can be a comma-separated string of URIs. To use LDAP, you must add `ldap://` to the beginning of each URI.  
Example values:  

```
ldap://192.0.2.0,ldap://203.0.113.0          # LDAP
ldaps://192.0.2.0,ldaps://203.0.113.0        # LDAPS without support for certificate verification
ldaps://abcdef01234567890.corp.example.com   # LDAPS with support for certificate verification
192.0.2.0,203.0.113.0                        # Amazon ParallelCluster uses LDAPS by default
```
If you use LDAPS with certificate verification, the URIs must be hostnames.  
If you use LDAPS without certificate verification or LDAP, URIs can be hostnames or IP addresses.  
Use LDAP over TLS/SSL (LDAPS) to avoid transmission of passwords and other sensitive information over unencrypted channels. If Amazon ParallelCluster doesn't find a protocol, it adds `ldaps://` to the beginning of each URI or hostname.  
[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)

`PasswordSecretArn` (**Required**, `String`)  
The Amazon Resource Name (ARN) of the Amazon Secrets Manager secret that contains the plaintext password for `DomainReadOnlyUser`. The content of the secret corresponds to the SSSD-LDAP parameter that's called `ldap_default_authtok`.  
When you use the Amazon Secrets Manager console to create the secret, be sure to select "Other type of secret", select plaintext, and include only the password text in the secret.  
For more information about how to use Amazon Secrets Manager to create a secret, see [Create an Amazon Secrets Manager secret](https://docs.amazonaws.cn//secretsmanager/latest/userguide/create_secret).  
The LDAP client uses the password to authenticate to the AD domain as a `DomainReadOnlyUser` when it requests identity information.  
If the user has the permission to [https://docs.amazonaws.cn/secretsmanager/latest/apireference/API_DescribeSecret.html](https://docs.amazonaws.cn/secretsmanager/latest/apireference/API_DescribeSecret.html), `PasswordSecretArn` is validated. `PasswordSecretArn` is valid if the specified secret exists. If the user IAM policy doesn't include `DescribeSecret`, `PasswordSecretArn` isn't validated and a warning message is displayed. For more information, see [Base Amazon ParallelCluster `pcluster` user policy](iam-roles-in-parallelcluster-v3.md#iam-roles-in-parallelcluster-v3-base-user-policy).  
When the value of the secret changes, the cluster *isn't* automatically updated. To update the cluster for the new secret value, you must stop the compute fleet with the [`pcluster update-compute-fleet`](pcluster.update-compute-fleet-v3.md) command and then run the following command from within the head node.  

```
$ sudo /opt/parallelcluster/scripts/directory_service/update_directory_service_password.sh
```
[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)

`DomainReadOnlyUser` (**Required**, `String`)  
The identity that's used to query the AD domain for identity information when authenticating cluster user logins. It corresponds to the SSSD-LDAP parameter that's called `ldap_default_bind_dn`. Use your AD identity information for this value.  
Specify the identity in the form required by the specific LDAP client that's on the node:  
+ MicrosoftAD:

  ```
  cn=ReadOnlyUser,ou=Users,ou=CORP,dc=corp,dc=example,dc=com
  ```
+ SimpleAD:

  ```
  cn=ReadOnlyUser,cn=Users,dc=corp,dc=example,dc=com
  ```
[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)

`LdapTlsCaCert` (**Optional**, `String`)  
The absolute path to a certificates bundle that contains the certificates for every certification authority in the certification chain that issued a certificate for the domain controllers. It corresponds to the SSSD-LDAP parameter that's called `ldap_tls_cacert`.  
A certificate bundle is a file that's composed of the concatenation of distinct certificates in PEM format, also known as DER Base64 format in Windows. It is used to verify the identity of the AD domain controller that acts as the LDAP server.  
Amazon ParallelCluster isn't responsible for initial placement of certificates onto nodes. As the cluster administrator, you can configure the certificate in the head node manually after the cluster is created or you can use a [bootstrap script](custom-bootstrap-actions-v3.md). Alternatively, you can use an Amazon Machine Image (AMI) that includes the certificate configured on the head node.  
[Simple AD](https://docs.amazonaws.cn/directoryservice/latest/admin-guide/directory_simple_ad.html) doesn't provide LDAPS support. To learn how to integrate a Simple AD directory with Amazon ParallelCluster, see [How to configure an LDAPS endpoint for Simple AD](https://amazonaws-china.com/blogs/security/how-to-configure-ldaps-endpoint-for-simple-ad/) in the *Amazon Security Blog*.  
[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)

`LdapTlsReqCert` (**Optional**, `String`)  
Specifies what checks to perform on server certificates in a TLS session. It corresponds to the SSSD-LDAP parameter that's called `ldap_tls_reqcert`.  
Valid values: `never`, `allow`, `try`, `demand`, and `hard`.  
`never`, `allow`, and `try` allow connections to proceed even if problems with certificates are found.  
`demand` and `hard` allow communication to continue only if no problems with certificates are found.  
If the cluster administrator uses a value that doesn't require the certificate validation to succeed, a warning message is returned to the administrator. For security reasons, we recommend that you don't disable certificate verification.  
The default value is `hard`.  
[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)

`LdapAccessFilter` (**Optional**, `String`)  
Specifies a filter to limit directory access to a subset of users. This property corresponds to the SSSD-LDAP parameter that's called `ldap_access_filter`. You can use it to limit queries to an AD that supports a large number of users.  
This filter can block user access to the cluster. However, it doesn't impact the discoverability of blocked users.  
If this property is set, the SSSD parameter `access_provider` is set to `ldap` internally by Amazon ParallelCluster and must not be modified by [`DirectoryService`](#DirectoryService-v3) / [`AdditionalSssdConfigs`](#yaml-DirectoryService-AdditionalSssdConfigs) settings.  
If this property is omitted and customized user access isn't specified in [`DirectoryService`](#DirectoryService-v3) / [`AdditionalSssdConfigs`](#yaml-DirectoryService-AdditionalSssdConfigs), all users in the directory can access the cluster.  
Examples:  

```
"!(cn=SomeUser*)"  # denies access to every user with an alias that starts with "SomeUser"
"(cn=SomeUser*)"   # allows access to every user with alias that starts with "SomeUser"
"memberOf=cn=TeamOne,ou=Users,ou=CORP,dc=corp,dc=example,dc=com" # allows access only to users in group "TeamOne".
```
[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)

`GenerateSshKeysForUsers` (**Optional**, `Boolean`)  
Defines whether Amazon ParallelCluster generates an SSH key for cluster users immediately after their initial authentication on the head node.  
If set to `true`, an SSH key is generated and saved to `USER_HOME_DIRECTORY/.ssh/id_rsa`, if it doesn't exist, for every user after their first authentication on the head node.  

For a user that has not yet been authenticated on the head node, first authentication can happen in the following cases:
+ The user logs in to the head node for the first time with their own password.
+ On the head node, a sudoer switches to the user for the first time: `su USERNAME`
+ On the head node, a sudoer runs a command as the user for the first time: `sudo -u USERNAME COMMAND`
Users can use the SSH key for subsequent logins to the cluster head node and compute nodes. With Amazon ParallelCluster, password logins to cluster compute nodes are disabled by design. If a user hasn't logged into the head node, SSH keys aren't generated and the user won't be able to log in to compute nodes.  
The default is `true`.  
[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)

`AdditionalSssdConfigs` (**Optional**, `Dict`)  
A dictionary of key-value pairs that contain SSSD parameters and values to write to the SSSD config file on cluster instances. For a full description of the SSSD configuration file, see the on-instance man pages for `SSSD` and related configuration files.  
The SSSD parameters and values must be compatible with Amazon ParallelCluster's SSSD configuration as described in the following list.  
+ `id_provider` is set to `ldap` internally by Amazon ParallelCluster and must not be modified.
+ `access_provider` is set to `ldap` internally by Amazon ParallelCluster when [`DirectoryService`](#DirectoryService-v3) / [`LdapAccessFilter`](#yaml-DirectoryService-LdapAccessFilter) is specified, and this setting must not be modified.

  If [`DirectoryService`](#DirectoryService-v3) / [`LdapAccessFilter`](#yaml-DirectoryService-LdapAccessFilter) is omitted, its `access_provider` specification is omitted also. For example, if you set `access_provider` to `simple` in [`AdditionalSssdConfigs`](#yaml-DirectoryService-AdditionalSssdConfigs), then [`DirectoryService`](#DirectoryService-v3) / [`LdapAccessFilter`](#yaml-DirectoryService-LdapAccessFilter) must not be specified.
The following configuration snippets are examples of valid configurations for `AdditionalSssdConfigs`.  
This example enables debug level for SSSD logs, restricts the search base to a specific organizational unit, and disables credentials caching.  

```
DirectoryService:
  ...
  AdditionalSssdConfigs:
    debug_level: "0xFFF0"
    ldap_search_base: OU=Users,OU=CORP,DC=corp,DC=example,DC=com
    cache_credentials: False
```
This example specifies the configuration of an SSSD [https://www.mankier.com/5/sssd-simple](https://www.mankier.com/5/sssd-simple) `access_provider`. Users from the `EngineeringTeam` are provided access to the directory. [`DirectoryService`](#DirectoryService-v3) / [`LdapAccessFilter`](#yaml-DirectoryService-LdapAccessFilter) must not be set in this case.  

```
DirectoryService:
  ...
  AdditionalSssdConfigs:
    access_provider: simple
    simple_allow_groups: EngineeringTeam
```
[Update policy: The compute fleet must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-fleet-v3)
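Putting the properties above together, an illustrative `DirectoryService` section for a hypothetical `corp.example.com` domain might look like the following; the domain name, ARN, distinguished name, and certificate path are all placeholders:

```
DirectoryService:
  DomainName: corp.example.com
  DomainAddr: ldaps://corp.example.com
  PasswordSecretArn: arn:aws:secretsmanager:us-east-2:123456789012:secret:ADSecretPassword-abcdef
  DomainReadOnlyUser: cn=ReadOnlyUser,ou=Users,ou=CORP,dc=corp,dc=example,dc=com
  LdapTlsCaCert: /etc/openldap/certs/domain-certificate.crt
  LdapTlsReqCert: hard
  GenerateSshKeysForUsers: true
```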

# `DeploymentSettings` section


**Note**  
`DeploymentSettings` is added starting with Amazon ParallelCluster version 3.4.0.

**(Optional)** Specifies the deployment settings configuration.

```
DeploymentSettings:
  LambdaFunctionsVpcConfig:
    SecurityGroupIds:
      - string
    SubnetIds:
      - string
  DisableSudoAccessForDefaultUser: Boolean
  DefaultUserHome: string # 'Shared' or 'Local'
```

## `DeploymentSettings` properties


### `LambdaFunctionsVpcConfig`


**(Optional)** Specifies the Amazon Lambda functions VPC configurations. For more information, see [Amazon Lambda VPC configuration in Amazon ParallelCluster](lambda-vpc-v3.md).

```
LambdaFunctionsVpcConfig:
  SecurityGroupIds:
    - string
  SubnetIds:
    - string
```

#### `LambdaFunctionsVpcConfig` properties


`SecurityGroupIds` (**Required**, `[String]`)  
The list of Amazon VPC security group IDs that are attached to the Lambda functions.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`SubnetIds` (**Required**, `[String]`)  
The list of subnet IDs that are attached to the Lambda functions.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

**Note**  
The subnets and security groups must be in the same VPC.
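For illustration, a `DeploymentSettings` section that places the Lambda functions in a specific VPC might look like the following; the security group and subnet IDs are placeholders:

```
DeploymentSettings:
  LambdaFunctionsVpcConfig:
    SecurityGroupIds:
      - sg-0123456789abcdef0
    SubnetIds:
      - subnet-0123456789abcdef0
```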

### `DisableSudoAccessForDefaultUser`


**Note**  
This configuration option is only supported with Slurm clusters.

(**Optional**, `Boolean`) If `true`, the sudo privileges of the default user are disabled. This applies to all the nodes in the cluster.

```
# DeploymentSettings section in the configuration YAML (applies to the head node, compute fleet, and login nodes)
DeploymentSettings:
  DisableSudoAccessForDefaultUser: True
```

To update the value of `DisableSudoAccessForDefaultUser`, you must stop the compute fleet and all login nodes.

[Update policy: The compute fleet and login nodes must be stopped for this setting to be changed for an update.](using-pcluster-update-cluster-v3.md#update-policy-compute-login-v3)

### `DefaultUserHome`


When set to `Shared`, the cluster uses the default setup and shares the default user's home directory across the cluster at `/home/<default user>`.

When set to `Local`, the head node, login nodes, and compute nodes each have a separate local home directory for the default user, stored at `/local/home/<default user>`.
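For example, to keep the default user's home directory local to each node rather than shared, you could set:

```
DeploymentSettings:
  DefaultUserHome: Local
```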

# Build image configuration files


Amazon ParallelCluster version 3 uses YAML 1.1 files for build image configuration parameters. Please confirm that indentation is correct to reduce configuration errors. For more information, see the YAML 1.1 spec at [https://yaml.org/spec/1.1/](https://yaml.org/spec/1.1/).

These configuration files are used to define how your custom Amazon ParallelCluster AMIs are built using EC2 Image Builder. Custom AMI building processes are triggered using the [`pcluster build-image`](pcluster.build-image-v3.md) command. For some example configuration files, see [https://github.com/aws/aws-parallelcluster/tree/release-3.0/cli/tests/pcluster/schemas/test_imagebuilder_schema/test_imagebuilder_schema](https://github.com/aws/aws-parallelcluster/tree/release-3.0/cli/tests/pcluster/schemas/test_imagebuilder_schema/test_imagebuilder_schema).

**Topics**
+ [

## Build image configuration file properties
](#build-image-v3.properties)
+ [

# `Build` section
](Build-v3.md)
+ [

# `Image` section
](build-Image-v3.md)
+ [

# `DeploymentSettings` section
](DeploymentSettings-build-image-v3.md)

## Build image configuration file properties


`Region` (**Optional**, `String`)  
Specifies the Amazon Web Services Region for the `build-image` operation. For example, `us-east-2`.

`CustomS3Bucket` (**Optional**, `String`)  
Specifies the name of an Amazon S3 bucket that is created in your Amazon account to store resources that are used by the custom AMI build process and to export logs. When you specify a custom bucket, the resources used by the image build are stored in that bucket. Amazon ParallelCluster maintains one Amazon S3 bucket in each Amazon Region that you create clusters in. By default, these Amazon S3 buckets are named `parallelcluster-hash-v1-DO-NOT-DELETE`.

# `Build` section


**(Required)** Specifies the configuration used to build the image.

```
Build:
  Imds:
    ImdsSupport: string
  InstanceType: string
  SubnetId: string
  ParentImage: string
  Iam:
    InstanceRole: string
    InstanceProfile: string
    CleanupLambdaRole: string
    AdditionalIamPolicies:
      - Policy: string
    PermissionsBoundary: string
  Components:
    - Type: string
      Value: string
  Tags:
    - Key: string
      Value: string
  SecurityGroupIds:
    - string
  UpdateOsPackages:
    Enabled: boolean
  Installation:
    NvidiaSoftware: 
      Enabled: boolean
    LustreClient:
      Enabled: boolean
```

## `Build` properties


`InstanceType` (**Required**, `String`)  
Specifies the instance type for the instance used to build the image.

`SubnetId` (**Optional**, `String`)  
Specifies the ID of an existing subnet in which to provision the instance to build the image. The provided subnet requires internet access. Note that you might need to [Modify the IP addressing attributes of your subnet](https://docs.amazonaws.cn/vpc/latest/userguide/subnet-public-ip.html) if the build fails.  
`pcluster build-image` uses the default VPC. If the default VPC has been deleted, perhaps by using Amazon Control Tower or Amazon Landing Zone, then the subnet ID must be specified.  
When you specify `SubnetId`, we recommend that you also specify the `SecurityGroupIds` property. If you omit `SecurityGroupIds`, Amazon ParallelCluster uses default security groups or relies on the default behavior within the specified subnet. Specifying both provides these advantages:  
+ Granular control: Explicitly defining both ensures that the instances launched during the image build process are placed in the correct subnet and have the precise network access that your build components and any required services need (such as access to Amazon S3 for build scripts).
+ Security best practices: Defining appropriate security groups helps restrict network access to only the necessary ports and services, which enhances the security of your build environment.
+ Avoiding potential issues: Relying solely on defaults might result in security groups that are too open or too restrictive, which can cause problems during the build process.

`ParentImage` (**Required**, `String`)  
Specifies the base image. The parent image can be either a non-Amazon ParallelCluster AMI or an official Amazon ParallelCluster AMI for the same version. You can't use an Amazon ParallelCluster official or custom AMI from a different version of Amazon ParallelCluster. The format must be either the ARN of an image, `arn:Partition:imagebuilder:Region:Account:image/ImageName/ImageVersion`, or an AMI ID, `ami-12345678`.

`SecurityGroupIds` (**Optional**, `[String]`)  
Specifies the list of security group IDs for the image.
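As an illustration, a minimal `Build` section might look like the following; the instance type, AMI ID, subnet ID, and security group ID are placeholders:

```
Build:
  InstanceType: c5.xlarge
  ParentImage: ami-0123456789abcdef0
  SubnetId: subnet-0123456789abcdef0
  SecurityGroupIds:
    - sg-0123456789abcdef0
```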

### `Imds`


#### `Imds` properties


**(Optional)** Specifies the Amazon EC2 ImageBuilder build and test instance metadata service (IMDS) settings.

```
Imds:
  ImdsSupport: string
```

`ImdsSupport` (**Optional**, `String`)  
Specifies which IMDS versions are supported in the Amazon EC2 ImageBuilder build and test instances. Supported values are `v2.0` and `v1.0`. The default value is `v2.0`.  
If `ImdsSupport` is set to `v1.0`, both IMDSv1 and IMDSv2 are supported.  
If `ImdsSupport` is set to `v2.0`, only IMDSv2 is supported.  
For more information, see [Use IMDSv2](https://docs.amazonaws.cn/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html) in the *Amazon EC2 User Guide for Linux instances*.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)  
Starting with Amazon ParallelCluster version 3.7.0, the `ImdsSupport` default value is `v2.0`. We recommend that you set `ImdsSupport` to `v2.0` and replace IMDSv1 with IMDSv2 in your custom actions calls.  
Support for [`Imds`](#Build-v3-Imds) / [`ImdsSupport`](#yaml-build-image-Build-Imds-ImdsSupport) is added with Amazon ParallelCluster version 3.3.0.

### `Iam`


#### `Iam` properties


(**Optional**) Specifies the IAM resources for the image build.

```
Iam:
  InstanceRole: string
  InstanceProfile: string
  CleanupLambdaRole: string
  AdditionalIamPolicies:
    - Policy: string
  PermissionsBoundary: string
```

`InstanceProfile` (**Optional**, `String`)  
Specifies an instance profile to override the default instance profile for the EC2 Image Builder instance. `InstanceProfile`, `InstanceRole`, and `AdditionalIamPolicies` cannot be specified together. The format is `arn:Partition:iam::Account:instance-profile/InstanceProfileName`.

`InstanceRole` (**Optional**, `String`)  
Specifies an instance role to override the default instance role for the EC2 Image Builder instance. `InstanceProfile`, `InstanceRole`, and `AdditionalIamPolicies` cannot be specified together. The format is `arn:Partition:iam::Account:role/RoleName`.

`CleanupLambdaRole` (**Optional**, `String`)  
The ARN of the IAM role to use for the Amazon Lambda function that backs the Amazon CloudFormation custom resource that removes build artifacts on build completion. Lambda needs to be configured as the principal allowed to assume the role. The format is `arn:Partition:iam::Account:role/RoleName`.

`AdditionalIamPolicies` (**Optional**)  
Specifies additional IAM policies to attach to the EC2 Image Builder instance used to produce the custom AMI.  

```
AdditionalIamPolicies:
  - Policy: string
```  
`Policy` (**Optional**, `[String]`)  
List of IAM policies. The format is `arn:Partition:iam::Account:policy/PolicyName`.

`PermissionsBoundary` (**Optional**, `String`)  
The ARN of the IAM policy to use as the permissions boundary for all roles created by Amazon ParallelCluster. For more information about IAM permissions boundaries, see [Permissions boundaries for IAM entities](https://docs.amazonaws.cn/IAM/latest/UserGuide/access_policies_boundaries.html) in the *IAM User Guide*. The format is `arn:Partition:iam::Account:policy/PolicyName`.
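For example, to grant the build instance Systems Manager access without replacing the default instance role, you can attach a managed policy through `AdditionalIamPolicies` (the ARN shown is the AWS managed `AmazonSSMManagedInstanceCore` policy):

```
Iam:
  AdditionalIamPolicies:
    - Policy: arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
```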

### `Components`


#### `Components` properties


(**Optional**) Specifies Amazon EC2 ImageBuilder components to use during the AMI build process in addition to the ones provided by default by Amazon ParallelCluster. Such components can be used to customize the AMI build process. For more information, see [Amazon ParallelCluster AMI customization](custom-ami-v3.md).

```
Components:
  - Type: string
    Value: string
```

`Type` (**Optional**, `String`)  
Specifies the type of the type-value pair for the component. Type can be `arn` or `script`.

`Value` (**Optional**, `String`)  
Specifies the value of the type-value pair for the component. When the type is `arn`, this is the ARN of an EC2 Image Builder component. When the type is `script`, this is the HTTPS or Amazon S3 URL that points to the script to use when you create the EC2 Image Builder component.
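For illustration, a `Components` list that combines an existing EC2 Image Builder component with a custom script might look like the following; the component ARN and S3 URL are placeholders:

```
Components:
  - Type: arn
    Value: arn:aws:imagebuilder:us-east-2:123456789012:component/my-component/1.0.0
  - Type: script
    Value: s3://amzn-s3-demo-bucket/scripts/install-tools.sh
```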

### `Tags`


#### `Tags` properties


(**Optional**) Specifies the list of tags to be set in the resources used to build the AMI.

```
Tags:
  - Key: string
    Value: string
```

`Key` (**Optional**, `String`)  
Defines the name of the tag.

`Value` (**Optional**, `String`)  
Defines the value of the tag.

### `UpdateOsPackages`


#### `UpdateOsPackages` properties


(**Optional**) Specifies whether the operating system is updated before installing the Amazon ParallelCluster software stack.

```
UpdateOsPackages:
  Enabled: boolean
```

`Enabled` (**Optional**, `Boolean`)  
If `true`, the OS is updated and rebooted before installing the Amazon ParallelCluster software. The default is `false`.  
When `UpdateOsPackages` is enabled, all available OS packages are updated, including the kernel. It is your responsibility to verify that the update is compatible with the AMI dependencies that aren't included in the update.  
For example, suppose you want to build an AMI for Amazon ParallelCluster version X.0 that's shipped with kernel version Y.0 and some component version Z.0. Suppose the available update includes updated kernel version Y.1 without updates to component Z.0. Before you enable `UpdateOsPackages`, it's your responsibility to verify that component Z.0 supports kernel Y.1.

### `Installation`


#### `Installation` properties


**(Optional)** Specifies additional software to be installed on the image.

```
Installation:
  NvidiaSoftware: 
    Enabled: boolean
  LustreClient:
    Enabled: boolean
```

`NvidiaSoftware` properties (**Optional**)  
Specifies the Nvidia Software to be installed.  

```
NvidiaSoftware: 
    Enabled: boolean
```  
`Enabled` (**Optional**, `boolean`)  
If `true`, the Nvidia GPU driver and CUDA will be installed. The default is `false`.

`LustreClient` properties (**Optional**)  
Specifies whether the Amazon FSx for Lustre client will be installed.  

```
LustreClient:
    Enabled: boolean
```  
`Enabled` (**Optional**, `boolean`)  
If `true`, the Lustre client will be installed. The default is `true`.

# `Image` section


**(Optional)** Defines the image properties for the image build.

```
Image:
  Name: string
  RootVolume:
    Size: integer
    Encrypted: boolean
    KmsKeyId: string
  Tags:
    - Key: string
      Value: string
```

## `Image` properties


`Name` (**Optional**, `String`)  
Specifies the name of the AMI. If not specified, the name used when calling the [`pcluster build-image`](pcluster.build-image-v3.md) command is used.

### `Tags`


#### `Tags` properties


(**Optional**) Specifies key-value pairs for the image.

```
Tags:
  - Key: string
    Value: string
```

`Key` (**Optional**, `String`)  
Defines the name of the tag.

`Value` (**Optional**, `String`)  
Defines the value of the tag.

### `RootVolume`


#### `RootVolume` properties


(**Optional**) Specifies properties of the root volume for the image.

```
RootVolume:
  Size: integer
  Encrypted: boolean
  KmsKeyId: string
```

`Size` (**Optional**, `Integer`)  
Specifies the size of the root volume for the image, in GiB. The default size is the size of the [`ParentImage`](Build-v3.md#yaml-build-image-Build-ParentImage) plus 27 GiB.

`Encrypted` (**Optional**, `Boolean`)  
Specifies if the volume is encrypted. The default value is `false`.

`KmsKeyId` (**Optional**, `String`)  
Specifies the ARN of the Amazon KMS key used to encrypt the volume. The format is `arn:Partition:kms:Region:Account:key/KeyId`.
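Combining the properties above, an illustrative `Image` section with an encrypted 65 GiB root volume might look like the following; the image name and KMS key ARN are placeholders:

```
Image:
  Name: custom-alinux2-ami
  RootVolume:
    Size: 65
    Encrypted: true
    KmsKeyId: arn:aws:kms:us-east-2:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab
```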

# `DeploymentSettings` section


**Note**  
`DeploymentSettings` is added starting with Amazon ParallelCluster version 3.4.0.

**(Optional)** Specifies the deployment settings configuration.

```
DeploymentSettings:
  LambdaFunctionsVpcConfig:
    SecurityGroupIds:
      - string
    SubnetIds:
      - string
```

## `DeploymentSettings` properties


### `LambdaFunctionsVpcConfig`


**(Optional)** Specifies the Amazon Lambda functions VPC configurations. For more information, see [Amazon Lambda VPC configuration in Amazon ParallelCluster](lambda-vpc-v3.md).

```
LambdaFunctionsVpcConfig:
  SecurityGroupIds:
    - string
  SubnetIds:
    - string
```

#### `LambdaFunctionsVpcConfig` properties


`SecurityGroupIds` (**Required**, `[String]`)  
The list of Amazon VPC security group IDs that are attached to the Lambda functions.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

`SubnetIds` (**Required**, `[String]`)  
The list of subnet IDs that are attached to the Lambda functions.  
[Update policy: If this setting is changed, the update is not allowed.](using-pcluster-update-cluster-v3.md#update-policy-fail-v3)

**Note**  
The subnets and security groups must be in the same VPC.