Settings and prerequisites
The cluster setup relies on parameters, such as the SID and system number, that are unique to your setup. It is useful to determine these values in advance, using the following examples and guidance.
Topics
Define reference parameters for setup
The cluster setup relies on the following parameters.
Topics
Global Amazon parameters
Name | Parameter | Example |
---|---|---|
Amazon Web Services account ID | <account_id> | 123456789100 |
Amazon Web Services Region | <region_id> | us-east-1 |
- Amazon Web Services account – For more details, see Your Amazon Web Services account ID and its alias.
- Amazon Web Services Region – For more details, see Describe your Regions.
Amazon EC2 instance parameters
Name | Parameter | Primary example | Secondary example |
---|---|---|---|
Amazon EC2 instance ID | <instance_id> | i-xxxxinstidforhost1 | i-xxxxinstidforhost2 |
Hostname | <hostname> | slxhost01 | slxhost02 |
Host IP | <host_ip> | 10.1.10.1 | 10.1.20.1 |
Host additional IP | <host_additional_ip> | 10.1.10.2 | 10.1.20.2 |
Configured subnet | <subnet_id> | subnet-xxxxxxxxxxsubnet1 | subnet-xxxxxxxxxxsubnet2 |
- Hostname – Hostnames must comply with the SAP requirements outlined in SAP Note 611361 - Hostnames of SAP ABAP Platform servers (requires SAP portal access). Run the following command on your instances to retrieve the hostname.
hostname
- Amazon EC2 instance ID – Run the following command (IMDSv2 compatible) on your instances to retrieve the instance ID from the instance metadata.
/usr/bin/curl --noproxy '*' -w "\n" -s -H "X-aws-ec2-metadata-token: $(curl --noproxy '*' -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")" http://169.254.169.254/latest/meta-data/instance-id
For more details, see Retrieve instance metadata and Instance identity documents.
SAP and Pacemaker resource parameters
Name | Parameter | Example |
---|---|---|
SID | <SID> or <sid> | SLX |
ASCS Alias | <ascs_virt_hostname> | slxascs |
ASCS System Number | <ascs_sys_nr> | 00 |
ASCS Overlay IP | <ascs_oip> | 172.16.30.5 |
ASCS NFS Mount Point | <ascs_nfs_mount_point> | /SLX_ASCS00 |
ERS Alias | <ers_virt_hostname> | slxers |
ERS System Number | <ers_sys_nr> | 10 |
ERS Overlay IP | <ers_oip> | 172.16.30.6 |
ERS NFS Mount Point | <ers_nfs_mount_point> | /SLX_ERS10 |
ENSA Type | <ensa_type> | ENSA2 |
VPC Route Tables | <rtb_id> | rtb-xxxxxroutetable1 |
Sapmnt NFS ID or CNAME | <sapmnt_nfs_id> | fs-xxxxxxxxxxxxxefs1 |
- SAP details – SAP parameters, including the SID and instance numbers, must follow the guidance and limitations of SAP and Software Provisioning Manager. Refer to SAP Note 1979280 - Reserved SAP System Identifiers (SAPSID) with Software Provisioning Manager for more details. Post-installation, use the following command to find the details of the instances running on a host.
sudo /usr/sap/hostctrl/exe/saphostctrl -function ListInstances
- Overlay IP – This value is defined by you. For more information, see Overlay IP.
- NFS mount points – This value is defined by you. Consider which systems are going to share an NFS file system (Amazon EFS or Amazon FSx), and ensure that your naming standards allow it.
SLES cluster parameters
Name | Parameter | Example |
---|---|---|
Cluster user | cluster_user | hacluster |
Cluster password | cluster_password | |
Cluster tag | cluster_tag | pacemaker |
Amazon CLI cluster profile | aws_cli_cluster_profile | cluster |
Cluster connector | cluster_connector | sap-suse-cluster-connector |
Amazon EC2 instance settings
Amazon EC2 instance settings can be applied using Infrastructure as Code, or manually using the Amazon Command Line Interface or the Amazon Web Services Management Console. We recommend Infrastructure as Code automation to reduce manual steps and ensure consistency.
Topics
Create IAM roles and policies
In addition to the permissions required for standard SAP operations, two IAM policies are required for the cluster to control Amazon resources on ASCS. These policies must be assigned to your Amazon EC2 instances using an IAM role. This enables the Amazon EC2 instances, and therefore the cluster, to call Amazon Web Services services.
Create these policies with least-privilege permissions, granting access to only the specific resources that are required within the cluster. For multiple clusters, you need to create multiple policies.
For more information, see IAM roles for Amazon EC2.
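The policies described in the following sections can be created and attached with the Amazon Web Services Management Console or the Amazon CLI. The following is a minimal CLI sketch, assuming the policy document has been saved locally as stonith-policy.json and that the cluster role is named slx-cluster-role (both names are placeholders for illustration).
# Create a customer managed policy from a local JSON document (hypothetical file name)
aws iam create-policy --policy-name slx-cluster-stonith-policy --policy-document file://stonith-policy.json
# Attach the policy to the IAM role used by the cluster instances (hypothetical role name)
aws iam attach-role-policy --role-name slx-cluster-role --policy-arn arn:aws:iam::<account_id>:policy/slx-cluster-stonith-policy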
STONITH policy
The SLES STONITH resource agent (external/ec2) requires permission to start and stop both nodes of the cluster. Create a policy as shown in the following example. Attach this policy to the IAM role assigned to both Amazon EC2 instances in the cluster.
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DescribeInstances", "ec2:DescribeTags" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "ec2:StartInstances", "ec2:StopInstances" ], "Resource": [ "arn:aws:ec2:<region>:<account_id>:instance/<instance_id_1>", "arn:aws:ec2:<region>:<account_id>:instance/<instance_id_2>" ] } ] }
Amazon Overlay IP policy
The SLES Overlay IP resource agent (aws-vpc-move-ip) requires permission to modify a routing entry in route tables. Create a policy as shown in the following example. Attach this policy to the IAM role assigned to both Amazon EC2 instances in the cluster.
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "ec2:ReplaceRoute", "Resource": [ "arn:aws:ec2:<region>:<account_id>:route-table/<rtb_id_1>", "arn:aws:ec2:<region>:<account_id>:route-table/<rtb_id_2>" ] }, { "Effect": "Allow", "Action": "ec2:DescribeRouteTables", "Resource": "*" } ] }
Note
If you are using a Shared VPC, see Shared VPC – optional.
Assign IAM role
The two cluster resource IAM policies must be assigned to an IAM role associated with your Amazon EC2 instances. If an IAM role is not associated with your instances, create a new IAM role for cluster operations. To assign the role, go to https://console.amazonaws.cn/ec2/.
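If you manage the association with the Amazon CLI instead of the console, a sketch could look like the following, assuming an instance profile named slx-cluster-instance-profile that contains the cluster role (the profile name is a placeholder).
# Associate the instance profile containing the cluster IAM role with each cluster node (hypothetical profile name)
aws ec2 associate-iam-instance-profile --instance-id i-xxxxinstidforhost1 --iam-instance-profile Name=slx-cluster-instance-profile
aws ec2 associate-iam-instance-profile --instance-id i-xxxxinstidforhost2 --iam-instance-profile Name=slx-cluster-instance-profile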
Modify security groups for cluster communication
A security group controls the traffic that is allowed to reach and leave the resources that it is associated with. For more information, see Control traffic to your Amazon resources using security groups.
In addition to the standard ports required to access SAP and administrative functions, the following rules must be applied to the security groups assigned to both Amazon EC2 instances in the cluster.
Inbound
Source | Protocol | Port range | Description |
---|---|---|---|
The security group ID (its own resource ID) | UDP | 5405 | Allows UDP traffic between cluster resources for corosync communication |
Bastion host security group or CIDR range for administration | TCP | 7630 | Optional. Used for the SLES Hawk2 interface for monitoring and administration using a web interface. For more information, see Configuring and Managing Cluster Resources with Hawk2 |
Note
Note the use of the UDP protocol.
If you are running a local firewall, such as iptables, ensure that communication on the preceding ports is allowed between the two Amazon EC2 instances.
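As a sketch, the corosync rule could also be added with the Amazon CLI, assuming the security group assigned to both cluster nodes has the ID sg-xxxxclustersg (a placeholder).
# Allow corosync traffic on UDP 5405 from the security group to itself (hypothetical group ID)
aws ec2 authorize-security-group-ingress --group-id sg-xxxxclustersg --protocol udp --port 5405 --source-group sg-xxxxclustersg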
Disable source/destination check
Amazon EC2 instances perform source/destination checks by default, requiring that an instance is either the source or the destination of any traffic it sends or receives.
In the pacemaker cluster, the source/destination check must be disabled on both instances because they receive traffic addressed to the overlay IP. You can disable the check using the Amazon CLI or the Amazon Web Services Management Console.
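A minimal sketch of the Amazon CLI variant, run once per instance with suitable administrative credentials, could look like this.
# Disable the source/destination check on both cluster instances
aws ec2 modify-instance-attribute --instance-id i-xxxxinstidforhost1 --no-source-dest-check
aws ec2 modify-instance-attribute --instance-id i-xxxxinstidforhost2 --no-source-dest-check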
Review automatic recovery and stop protection
After a failure, cluster-controlled operations must be resumed in a coordinated way. This helps ensure that the cause of failure is known and addressed, and the status of the cluster is as expected. For example, verifying that there are no pending fencing actions.
This can be achieved by not enabling pacemaker to run as a service at the operating system level, or by avoiding automatic restarts after hardware failure.
If you want to control the restarts resulting from hardware failure, disable simplified automatic recovery and do not configure Amazon CloudWatch action-based recovery for Amazon EC2 instances that are part of a pacemaker cluster. To disable simplified automatic recovery via the Amazon CLI, run the following commands for both Amazon EC2 instances in the cluster.
Note
Modifying instance maintenance options will require admin privileges not covered by the IAM instance roles defined for operations of the cluster.
aws ec2 modify-instance-maintenance-options --instance-id i-xxxxinstidforhost1 --auto-recovery disabled
aws ec2 modify-instance-maintenance-options --instance-id i-xxxxinstidforhost2 --auto-recovery disabled
To ensure that STONITH actions can be executed, stop protection must be disabled for Amazon EC2 instances that are part of a pacemaker cluster. If the default settings have been modified, use the following commands on both instances to disable stop protection via the Amazon CLI.
Note
Modifying instance attributes will require admin privileges not covered by the IAM instance roles defined for operations of the cluster.
aws ec2 modify-instance-attribute --instance-id i-xxxxinstidforhost1 --no-disable-api-stop
aws ec2 modify-instance-attribute --instance-id i-xxxxinstidforhost2 --no-disable-api-stop
Create Amazon EC2 resource tags used by Amazon EC2 STONITH agent
The Amazon EC2 STONITH agent uses Amazon resource tags to identify Amazon EC2 instances. Create a tag for the primary and secondary Amazon EC2 instances via the Amazon Web Services Management Console or the Amazon CLI. For more information, see Tagging your Amazon resources.
Use the same tag key on both instances, and set the value to the local hostname returned by the hostname command. For example, a configuration with the values defined in Amazon EC2 instance parameters would require the tags shown in the following table.
Amazon EC2 | Key example | Value example |
---|---|---|
Instance 1 | pacemaker | slxhost01 |
Instance 2 | pacemaker | slxhost02 |
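As a sketch, the tags could be created with the Amazon CLI as follows, using the key and values from the preceding table.
# Tag each instance with the pacemaker key and its local hostname as the value
aws ec2 create-tags --resources i-xxxxinstidforhost1 --tags Key=pacemaker,Value=slxhost01
aws ec2 create-tags --resources i-xxxxinstidforhost2 --tags Key=pacemaker,Value=slxhost02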
You can run the following command locally to validate the tag values and IAM permissions to describe the tags.
aws ec2 describe-tags --filters "Name=resource-id,Values=<instance_id>" "Name=key,Values=<pacemaker_tag>" --region=<region> --output=text | cut -f5
Operating system prerequisites
This section covers the following topics.
Topics
Root access
Verify root access on both cluster nodes. The majority of the setup commands in this document are performed with the root user. Assume that commands should be run as root unless explicitly stated otherwise.
Install missing operating system packages
This is applicable to both cluster nodes. You must install any missing operating system packages.
The following packages and their dependencies are required for the pacemaker setup. Depending on your baseline image, for example, SLES for SAP, these packages may already be installed.
aws-cli chrony cluster-glue corosync crmsh dstat fence-agents ha-cluster-bootstrap iotop pacemaker patterns-ha-ha_sles resource-agents rsyslog sap-suse-cluster-connector sapstartsrv-resource-agents (simple-mount only)
We highly recommend installing the following additional packages for troubleshooting.
zypper-lifecycle-plugin supportutils yast2-support supportutils-plugin-suse-public-cloud supportutils-plugin-ha-sap
Important
Ensure that you have installed the newer version sap-suse-cluster-connector (with dashes), and not the older version sap_suse_cluster_connector, which uses underscores.
Use the following command to check packages and versions.
for package in aws-cli chrony cluster-glue corosync crmsh dstat fence-agents ha-cluster-bootstrap iotop pacemaker patterns-ha-ha_sles resource-agents rsyslog sap-suse-cluster-connector sapstartsrv-resource-agents zypper-lifecycle-plugin supportutils yast2-support supportutils-plugin-suse-public-cloud supportutils-plugin-ha-sap; do
  echo "Checking if ${package} is installed..."
  RPM_RC=$(rpm -q ${package} --quiet; echo $?)
  if [ ${RPM_RC} -ne 0 ]; then
    echo "  ${package} is missing and needs to be installed"
  fi
done
If a package is not installed, and you are unable to install it using zypper, it may be because the SUSE Linux Enterprise High Availability extension is not available as a repository in your chosen image. You can verify the availability of the extension using the following command.
zypper repos
To install or update a package or packages with confirmation, use the following command.
zypper install <package_name(s)>
Update and check operating system versions
You must update and confirm versions across nodes. Apply all the latest patches to your operating system versions. This ensures that bugs are addressed and new features are available.
You can apply the patches individually or use the zypper update command. A clean reboot is recommended prior to setting up a cluster.
zypper update
reboot
Compare the operating system package versions on the two cluster nodes and ensure that the versions match on both nodes.
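One way to compare the versions, assuming you can copy a file between the nodes, is a simple sketch like the following.
# On each node, capture the installed packages and versions
rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}\n' | sort > /tmp/packages_$(hostname).txt
# After copying both files to one node, compare them
diff /tmp/packages_slxhost01.txt /tmp/packages_slxhost02.txt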
System logging
This is applicable to both cluster nodes. We recommend using the rsyslogd daemon for logging. It is the default configuration in the cluster. Verify that the rsyslog package is installed on both cluster nodes.
logd is a subsystem that logs additional information coming from the STONITH agent. Enable and start it using the following commands.
systemctl enable --now logd
systemctl status logd
Time synchronization services
This is applicable to both cluster nodes. Time synchronization is important for cluster operation. Ensure that the chrony rpm is installed, and configure appropriate time servers in the configuration file.
You can use the Amazon Time Sync Service that is available on any instance running in a VPC. It does not require internet access. To ensure consistency in the handling of leap seconds, don't mix the Amazon Time Sync Service with any other ntp time sync servers or pools.
Create or check the /etc/chrony.d/ec2.conf file to define the server.
# Amazon EC2 time source config
server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4
Start the chronyd.service using the following command.
systemctl enable --now chronyd.service
systemctl status chronyd
For more information, see Set the time for your Linux instance.
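To confirm that the instance is synchronizing against the Amazon Time Sync Service address, you can run a quick check such as the following.
# 169.254.169.123 should be listed as the selected time source
chronyc sources -v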
Amazon CLI profile
This is applicable to both cluster nodes. The cluster resource agents use Amazon Command Line Interface (Amazon CLI). You need to create an Amazon CLI profile for the root account on both instances.
You can either edit the config file in /root/.aws manually, or use the aws configure Amazon CLI command.
You can skip providing the information for the access and secret access keys. The permissions are provided through IAM roles attached to Amazon EC2 instances.
# aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: <region_id>
Default output format [None]:
The profile name is configurable. The name chosen in this example is cluster – it is used in Create Amazon EC2 resource tags used by Amazon EC2 STONITH agent. The Amazon Web Services Region must be the default Amazon Web Services Region of the instance.
# aws configure --profile cluster
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: <region_id>
Default output format [None]:
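As a quick check that the profile works and that the instance role is being used, any read-only call permitted by the role can be run, for example:
# Verify that the cluster profile can call Amazon Web Services APIs through the instance role
aws sts get-caller-identity --profile cluster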
Pacemaker proxy settings
This is applicable to both cluster nodes. If your Amazon EC2 instance has been configured to access the internet and/or Amazon Web Services Cloud through proxy servers, then you need to replicate the settings in the pacemaker configuration. For more information, see Use an HTTP proxy.
Add the following lines to /etc/sysconfig/pacemaker.
http_proxy=http://<proxyhost>:<proxyport>
https_proxy=http://<proxyhost>:<proxyport>
no_proxy=127.0.0.1,localhost,169.254.169.254,fd00:ec2::254
Modify proxyhost and proxyport to match your settings. Ensure that you exempt the address used to access the instance metadata. Configure no_proxy to include the IP address of the instance metadata service – 169.254.169.254 (IPv4) and fd00:ec2::254 (IPv6). This address does not vary.
IP and hostname resolution prerequisites
This section covers the following topics.
Topics
Primary and secondary IP addresses
This is applicable to both cluster nodes. We recommend defining a redundant communication channel (a second ring) in corosync for SUSE clusters. The cluster nodes can use the second ring to communicate in case of underlying network disruptions.
Create the redundant communication channel by adding a secondary IP address on both nodes. These IPs are only used in the cluster configuration. They provide the same fault tolerance as a secondary Elastic Network Interface (ENI). For more information, see Assign a secondary private IPv4 address.
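If you add the secondary IP address through the Amazon CLI, a sketch could look like the following, where eni-xxxxprimaryeni1 is a placeholder for the node's primary network interface ID. The address must then also be configured at the operating system level or picked up by your network configuration tooling.
# Assign one additional private IPv4 address to the node's primary network interface (hypothetical ENI ID)
aws ec2 assign-private-ip-addresses --network-interface-id eni-xxxxprimaryeni1 --secondary-private-ip-address-count 1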
When configured correctly, the following command returns two IPs from the same subnet on both the primary and secondary nodes.
ip -o -f inet addr show eth0 | awk -F " |/" '{print $7}'
These IP addresses are required for ring0_addr and ring1_addr in corosync.conf.
Add initial VPC route table entries for overlay IPs
You need to add initial route table entries for overlay IPs. For more information on overlay IP, see Overlay IP.
Add entries to the VPC route table or tables associated with the subnets of your Amazon EC2 instances for the cluster. The entries for destination (overlay IP CIDR) and target (Amazon EC2 instance or ENI) must be added manually for ASCS and ERS. This ensures that the cluster resource has a route to modify. It also supports installing SAP using the virtual hostnames associated with the overlay IPs before the cluster is configured.
Modify or add a route to a route table using the Amazon Web Services Management Console
1. Open the Amazon VPC console at https://console.amazonaws.cn/vpc/.
2. In the navigation pane, choose Route Tables, and select the route table associated with the subnets where your instances have been deployed.
3. Choose Actions, Edit routes.
4. To add a route, choose Add route. You must choose Add route twice to add two routes, one for ASCS and another for ERS.
5. Add your chosen overlay IP address CIDR and the instance ID of your primary instance for ASCS. See the following example.
Destination | Target |
---|---|
172.16.30.5/32 | i-xxxxinstidforhost1 |
6. Add your chosen overlay IP address CIDR and the instance ID of your secondary instance for ERS. See the following example.
Destination | Target |
---|---|
172.16.30.6/32 | i-xxxxinstidforhost2 |
7. Choose Save changes.
Your route table now has two entries, one for ASCS and one for ERS, in addition to the standard routes. The selected instance IDs resolve to the corresponding primary Elastic Network Interface (ENI).
The preceding steps can also be performed programmatically. We suggest performing them with administrative privileges, instead of instance-based privileges, to preserve least privilege; the CreateRoute API permission isn't necessary for ongoing operations.
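For reference, a one-time programmatic version of the initial route creation, run with administrative credentials rather than the cluster profile, might look like the following sketch.
# Create the initial overlay IP routes for ASCS and ERS (run once with administrative privileges)
aws ec2 create-route --route-table-id rtb-xxxxxroutetable1 --destination-cidr-block 172.16.30.5/32 --instance-id i-xxxxinstidforhost1
aws ec2 create-route --route-table-id rtb-xxxxxroutetable1 --destination-cidr-block 172.16.30.6/32 --instance-id i-xxxxinstidforhost2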
Run the following command as a dry run on both nodes to confirm that the instances have the necessary permissions.
aws ec2 replace-route --route-table-id rtb-xxxxxroutetable1 --destination-cidr-block 172.16.30.5/32 --instance-id i-xxxxinstidforhost1 --dry-run --profile <aws_cli_cluster_profile>
Add overlay IPs to host IP configuration
You must configure the overlay IP as an additional IP address on the standard interface to enable SAP install. This action is managed by the cluster IP resource. However, to install SAP using the correct IP addresses prior to having the cluster configuration in place, you need to add these entries manually.
If you reboot the instance during setup, the assignment is lost and must be re-added.
See the following examples. You must update the commands with your chosen IP addresses.
On EC2 instance 1, where you are installing ASCS, add the overlay IP allocated for ASCS.
ip addr add 172.16.30.5/32 dev eth0
On EC2 instance 2, where you are installing ERS, add the overlay IP allocated for ERS.
ip addr add 172.16.30.6/32 dev eth0
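On each node, you can confirm that the overlay IP has been added with the following check.
# The overlay IP should be listed as an additional address on eth0
ip -o addr show dev eth0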
Hostname resolution
This is applicable to both cluster nodes. You must ensure that both instances can resolve all hostnames in use. Add the hostnames for the cluster nodes to the /etc/hosts file on both cluster nodes. This ensures that hostnames for cluster nodes can be resolved even in case of DNS issues. See the following example.
# cat /etc/hosts
10.1.10.1 slxhost01.example.com slxhost01
10.1.20.1 slxhost02.example.com slxhost02
172.16.30.5 slxascs.example.com slxascs
172.16.30.6 slxers.example.com slxers
In this example, the secondary IPs used for the second cluster ring are not mentioned. They are only used in the cluster configuration. You can allocate virtual hostnames for administration and identification purposes.
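A quick resolution check on both nodes, using the example entries above, could look like the following.
# Each name should resolve to the IP address defined in /etc/hosts
for host in slxhost01 slxhost02 slxascs slxers; do getent hosts ${host}; done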
Important
The overlay IP is out of VPC range, and cannot be reached from locations not associated with the route table, including on-premises.
File system prerequisites
This section covers the following topics.
Topics
Shared file systems
Amazon Elastic File System
We recommend sharing a single Amazon EFS or FSx for ONTAP file system across multiple SIDs within an account.
The file system's DNS name is the simplest mounting option. The DNS name automatically resolves to the mount target's IP address in the Availability Zone of the connecting Amazon EC2 instance. You can also create an alias to help with identifying the purpose of the shared file system. We have used <nfs.fqdn> in this document. The following are some examples.
- file-system-id.efs.aws-region.amazonaws.com
- svm-id.fs-id.fsx.aws-region.amazonaws.com
- qas_sapmnt_share.example.com
Note
Review the enableDnsHostnames and enableDnsSupport DNS attributes for your VPC. For more information, see View and update DNS attributes for your VPC.
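The attributes can be checked with the Amazon CLI, for example as follows, where vpc-xxxxclustervpc is a placeholder for your VPC ID.
# Both attributes should return "Value": true for DNS-based mounting to work
aws ec2 describe-vpc-attribute --vpc-id vpc-xxxxclustervpc --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-xxxxclustervpc --attribute enableDnsHostnames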
Create file systems
The following shared file systems are covered in this document.
Unique NFS Location (example) | File system location |
---|---|
SLX_sapmnt | /sapmnt/SLX |
SLX_ASCS00 | /usr/sap/SLX/ASCS00 |
SLX_ERS10 | /usr/sap/SLX/ERS10 |
For more information, see SAP Documentation – SAP System Directories on UNIX
The following options can differ depending on how you architect and operate your systems.
- When using the simple-mount architecture, you can share at the /usr/sap/<SID> level. There is no requirement to manage these file systems with the cluster. In this document, these file systems are separate. This simplifies the migration, and ensures that the recommendation to run application server executables from a local copy is followed in case you are co-hosting the ASCS/ERS with an application server.
- /usr/sap/trans is not listed as a required file system for ASCS. You can add this to your setup.
- A shared home directory has not been included. This enables you to log on locally as the <sid>adm user in the event of NFS issues. A shared home directory can be a suitable option if your administrators have root access.
Using the NFS ID created in the previous step, temporarily mount the root directory of the NFS with the following command. /mnt is available by default; it can also be substituted with another temporary location.
mount <nfs.fqdn>:/ /mnt
Create the directories using the following command.
mkdir -p /mnt/SLX_sapmnt
mkdir -p /mnt/SLX_ASCS00
mkdir -p /mnt/SLX_ERS10
Change the ownership or permissions to ensure that the installation user <sid>adm can write to the directories. If <sid>adm is going to be created by the installer, temporarily change the permissions to 777, as shown in the following command.
chmod 777 /mnt/SLX_sapmnt /mnt/SLX_ASCS00 /mnt/SLX_ERS10
Unmount the temporary mount using the following command.
umount /mnt
Update /etc/fstab
This is applicable to both cluster nodes. /etc/fstab is a configuration table containing the details required for mounting and unmounting file systems to a host.
Create the directories for the required mount points (permanent or cluster controlled), using the following commands.
mkdir -p /sapmnt
mkdir -p /usr/sap/SLX/ASCS00
mkdir -p /usr/sap/SLX/ERS10
Add the file systems not managed by the cluster to /etc/fstab.
For both simple-mount and classic architectures, prepare and append an entry for the sapmnt file system to /etc/fstab.
<nfs.fqdn>:/SLX_sapmnt /sapmnt nfs nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 0 0
Simple-mount only – prepare and append entries for the ASCS and ERS file systems to /etc/fstab.
<nfs.fqdn>:/SLX_ASCS00 /usr/sap/SLX/ASCS00 nfs nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 0 0
<nfs.fqdn>:/SLX_ERS10 /usr/sap/SLX/ERS10 nfs nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 0 0
Important
Review the mount options to ensure that they match with your operating system, NFS file system type, and latest recommendations from SAP.
Use the following command to mount the file systems defined in /etc/fstab.
mount -a
Use the following command to check that the required file systems are available.
df -h
Temporarily mount ASCS and ERS directories for installation (classic only)
This is only applicable to the classic architecture. The simple-mount architecture has these directories permanently available in /etc/fstab.
You must temporarily mount ASCS and ERS directories for installation.
Use the following command on the instance where you plan to install ASCS.
mount <nfs.fqdn>:/SLX_ASCS00 /usr/sap/SLX/ASCS00
Use the following command on the instance where you plan to install ERS.
mount <nfs.fqdn>:/SLX_ERS10 /usr/sap/SLX/ERS10
Shared VPC – optional
Amazon VPC sharing enables you to share subnets with other Amazon Web Services accounts within the same organization in Amazon Organizations. Amazon EC2 instances can be deployed using the subnets of the shared Amazon VPC.
In the pacemaker cluster, the aws-vpc-move-ip resource agent has been enhanced to support a shared VPC setup while maintaining backward compatibility with existing features.
The following checks and changes are required. We refer to the Amazon Web Services account that owns the Amazon VPC as the sharing VPC account, and to the consumer account where the cluster nodes are going to be deployed as the cluster account.
This section covers the following topics.
Minimum version requirements
The latest version of the aws-vpc-move-ip agent shipped with SLES 15 SP3 supports the shared VPC setup by default. The following are the minimum versions required to support a shared VPC setup:
- SLES 12 SP5 - resource-agents-4.3.018.a7fb5035-3.79.1.x86_64
- SLES 15 SP2 - resource-agents-4.4.0+git57.70549516-3.30.1.x86_64
- SLES 15 SP3 - resource-agents-4.8.0+git30.d0077df0-8.5.1
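You can check the version installed on both nodes with a simple query such as the following.
# The installed version must be at or above the minimum listed for your service pack
rpm -q resource-agents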
IAM roles and policies
Using the overlay IP agent with a shared Amazon VPC requires a different set of IAM permissions to be granted on both Amazon Web Services accounts (sharing VPC account and cluster account).
Sharing VPC account
In the sharing VPC account, create an IAM role to delegate permissions to the EC2 instances that will be part of the cluster. During IAM role creation, select "Another Amazon Web Services account" as the type of trusted entity, and enter the Amazon Web Services account ID where the EC2 instances will be deployed and running.
After the IAM role has been created, create the following IAM policy in the sharing VPC account, and attach it to the IAM role. Add or remove route table entries as needed.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": "ec2:ReplaceRoute", "Resource": [ "arn:aws:ec2:<region>:<sharing_vpc_account_id>:route_table/<rtb_id_1>", "arn:aws:ec2:<region>:<sharing_vpc_account_id>:route_table/<rtb_id_2>" ] }, { "Sid": "VisualEditor1", "Effect": "Allow", "Action": "ec2:DescribeRouteTables", "Resource": "*" } ] }
Next, move to the "Trust relationships" tab of the IAM role, and ensure that the Amazon Web Services account you entered while creating the role has been added correctly.
Cluster account
In the cluster account, create the following IAM policies, and attach them to an IAM role. This is the IAM role that is going to be attached to the EC2 instances.
Amazon STS policy
{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "arn:aws:iam::<sharing_vpc_account_id>:role/<sharing _vpc-account-cluster-role>" } ] }
STONITH policy
{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "ec2:StartInstances", "ec2:StopInstances" ], "Resource": [ "arn:aws:ec2:<region>:<cluster_account_id>:instance/<instance_id_1>", "arn:aws:ec2:<region>:<cluster_account_id>:instance/<instance_id_2>" ] }, { "Sid": "VisualEditor1", "Effect": "Allow", "Action": "ec2:DescribeInstances", "Resource": "*" } ] }
Shared VPC cluster resources
The cluster resource agent aws-vpc-move-ip also uses a different configuration syntax. When configuring the aws-vpc-move-ip resource agent, the following new parameters must be used:
- lookup_type=NetworkInterfaceId
- routing_table_role="arn:aws:iam::<account_id>:role/<VPC-Account-Cluster-Role>"
The following IP resource for ASCS needs to be created.
crm configure primitive rsc_ip_SLX_ASCS00 ocf:heartbeat:aws-vpc-move-ip \
  params ip=172.16.30.5 routing_table=rtb-xxxxxroutetable1 interface=eth0 profile=cluster \
  lookup_type=NetworkInterfaceId \
  routing_table_role="arn:aws:iam::<sharing_vpc_account_id>:role/<sharing_vpc_account_cluster_role>" \
  op start interval=0 timeout=180s \
  op stop interval=0 timeout=180s \
  op monitor interval=20s timeout=40s
The following IP resource for ERS needs to be created.
crm configure primitive rsc_ip_SLX_ERS10 ocf:heartbeat:aws-vpc-move-ip \
  params ip=172.16.30.6 routing_table=rtb-xxxxxroutetable1 interface=eth0 profile=cluster \
  lookup_type=NetworkInterfaceId \
  routing_table_role="arn:aws:iam::<sharing_vpc_account_id>:role/<sharing_vpc_account_cluster_role>" \
  op start interval=0 timeout=180s \
  op stop interval=0 timeout=180s \
  op monitor interval=20s timeout=40s
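After both resources have been created, you can verify their configuration and state with the cluster shell, for example:
# Show the configured IP resources and the overall cluster status
crm configure show rsc_ip_SLX_ASCS00 rsc_ip_SLX_ERS10
crm status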