Crash the primary database on node 2 - SAP HANA on Amazon


Description — Simulate a complete breakdown of the primary database system.

Run node — The primary SAP HANA database node (on node 2).

Run steps:

  • Crash the primary database system (on node 2) by running the following command as <sid>adm.

    [root@sechana ~]# su - hdbadm
    hdbadm@sechana:/usr/sap/HDB/HDB00> HDB kill -9
    hdbenv.sh: Hostname sechana defined in $SAP_RETRIEVAL_PATH=/usr/sap/HDB/HDB00/sechana differs from host name defined on command line.
    hdbenv.sh: Error: Instance not found for host -9
    killing HDB processes:
    kill -9 30751 /usr/sap/HDB/HDB00/sechana/trace/hdb.sapHDB_HDB00 -d -nw -f /usr/sap/HDB/HDB00/sechana/daemon.ini pf=/usr/sap/HDB/SYS/profile/HDB_HDB00_sechana
    kill -9 30899 hdbnameserver
    kill -9 31166 hdbcompileserver
    kill -9 31168 hdbpreprocessor
    kill -9 31209 hdbindexserver -port 30003
    kill -9 31211 hdbxsengine -port 30007
    kill -9 31721 hdbwebdispatcher
    kill orphan HDB processes:
    kill -9 30899 [hdbnameserver] <defunct>
    kill -9 31209 [hdbindexserver] <defunct>
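After HDB kill -9 returns, you can confirm that the SAP HANA processes are really gone before watching the cluster react. The helper below is a minimal sketch, not part of this guide; it assumes the process names shown in the kill output above and counts surviving HDB server processes in ps output read from standard input.

```shell
# Hypothetical helper (an assumption, not from the guide): count the core
# SAP HANA server processes still present in `ps` output read from stdin.
# Prints 0 once HDB kill -9 has taken all of them down.
count_hdb_procs() {
  # grep -c prints the match count; `|| true` masks grep's exit status 1
  # when the count is 0, so the function always succeeds.
  grep -c -E '^(hdbnameserver|hdbindexserver|hdbcompileserver|hdbpreprocessor|hdbxsengine|hdbwebdispatcher)$' || true
}

# Example use on the crashed node:
#   ps -u hdbadm -o comm= | count_hdb_procs
```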

Expected result:

  • The cluster detects the stopped primary SAP HANA database (on node 2) and promotes the secondary SAP HANA database (on node 1) to take over as primary.

    [root@sechana ~]# pcs status
    Cluster name: rhelhanaha
    Stack: corosync
    Current DC: prihana (version 1.1.19-8.el7_6.5-c3c624ea3d) - partition with quorum
    Last updated: Tue Nov 10 18:13:35 2020
    Last change: Tue Nov 10 18:12:51 2020 by hacluster via crmd on sechana

    2 nodes configured
    6 resources configured

    Online: [ prihana sechana ]

    Full list of resources:

     clusterfence   (stonith:fence_aws):    Started prihana
     Clone Set: SAPHanaTopology_HDB_00-clone [SAPHanaTopology_HDB_00]
         Started: [ prihana sechana ]
     Master/Slave Set: SAPHana_HDB_00-master [SAPHana_HDB_00]
         Masters: [ prihana ]
         Slaves: [ sechana ]
     hana-oip       (ocf::heartbeat:aws-vpc-move-ip):       Started prihana

    Failed Actions:
    * SAPHana_HDB_00_monitor_59000 on sechana 'master (failed)' (9): call=41, status=complete, exitreason='',
        last-rc-change='Tue Nov 10 18:03:49 2020', queued=0ms, exec=0ms

    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled
  • The overlay IP address is migrated to the new primary (on node 1).

  • Because AUTOMATED_REGISTER is set to true, the cluster restarts the failed SAP HANA database and registers it against the new primary.
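The takeover shown in the pcs status output can also be checked from a script rather than by eye. The following helper is a minimal sketch and not part of this guide; it assumes POSIX sed and the Masters line format produced by pcs status on this cluster.

```shell
# Hypothetical helper (an assumption, not from the guide): print the
# node(s) listed on the "Masters:" line of pcs status text read from
# stdin, so a test script can confirm which node is primary.
current_master() {
  sed -n 's/.*Masters: \[ \(.*\) \]/\1/p'
}

# Example: pcs status | current_master
# After a successful takeover in this scenario, the printed node
# should be prihana.
```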

Recovery procedure:

  • Clean up the cluster “failed actions” on node 2 as root.

    [root@prihana ~]# pcs resource cleanup SAPHana_HDB_00 --node sechana
  • After the resource cleanup, run pcs status again and verify that the cluster “failed actions” are cleared.
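The post-cleanup check can be scripted as well. The helper below is a minimal sketch, not part of this guide; it assumes that pcs status prints a "Failed Actions" section only while failed actions remain.

```shell
# Hypothetical helper (an assumption, not from the guide): succeed only
# when the pcs status text read from stdin contains no "Failed Actions"
# section, i.e. the cleanup completed.
no_failed_actions() {
  ! grep -q 'Failed Actions'
}

# Example: pcs status | no_failed_actions && echo "cleanup complete"
```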