Shut down the primary node
Description – shut down the primary node where ASCS is running, to verify that the cluster fails ASCS over to the secondary node.
Run node – primary node where ASCS is running.
Run steps
Log in to the Amazon Web Services Management Console and stop the ASCS instance.
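The same stop can be issued from the AWS CLI instead of the console. This is a sketch only; the instance ID below is a placeholder that must be replaced with the actual EC2 instance ID of the primary (ASCS) node.

```shell
# Placeholder instance ID for the primary (ASCS) node; substitute your own.
INSTANCE_ID="i-0123456789abcdef0"

# Stop the instance (equivalent to stopping it in the AWS console).
aws ec2 stop-instances --instance-ids "$INSTANCE_ID"

# Block until EC2 reports the instance in the "stopped" state.
aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID"
```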
Expected result – the cluster detects the failure of the ASCS node, fences it via the STONITH resource if necessary, and starts the ASCS resource group on the secondary node (node 2), where ERS is already running:
hahost02:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: hahost02 (version 2.0.xxxxxx) - partition with quorum
  * Last updated:
  * Last change: by root via crm_resource on hahost02
  * 2 nodes configured
  * 7 resource instances configured

Node List:
  * Online: [ hahost02 ]
  * OFFLINE: [ hahost01 ]

Full List of Resources:
  * res_AWS_STONITH (stonith:external/ec2): Started hahost02
  * Resource Group: grp_HA1_ASCS00:
    * rsc_IP_HA1_ASCS00 (ocf::suse:aws-vpc-move-ip): Started hahost02
    * rsc_FS_HA1_ASCS00 (ocf::heartbeat:Filesystem): Started hahost02
    * rsc_SAP_HA1_ASCS00 (ocf::heartbeat:SAPInstance): Started hahost02
  * Resource Group: grp_HA1_ERS10:
    * rsc_IP_HA1_ERS10 (ocf::suse:aws-vpc-move-ip): Started hahost02
    * rsc_FS_HA1_ERS10 (ocf::heartbeat:Filesystem): Started hahost02
    * rsc_SAP_HA1_ERS10 (ocf::heartbeat:SAPInstance): Started hahost02
Recovery procedure
Log in to the Amazon Web Services Management Console and start the ASCS instance.
Once node 1 is up and has rejoined the cluster, the cluster moves the ERS resource group back to node 1, so that ASCS and ERS again run on separate nodes.
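The recovery step can likewise be scripted with the AWS CLI. Again a sketch: the instance ID is a placeholder, and the final command assumes you can reach one of the cluster nodes by hostname to confirm resource placement with crm status.

```shell
# Placeholder: the EC2 instance ID of the stopped ASCS node.
INSTANCE_ID="i-0123456789abcdef0"

# Start the instance and wait until EC2 reports it running.
aws ec2 start-instances --instance-ids "$INSTANCE_ID"
aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"

# After node 1 rejoins the cluster, verify that ERS has moved back.
ssh hahost01 crm status
```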