Stop the SAP HANA database on the primary node

Description — Stop the primary SAP HANA database during normal cluster operation.

Run node — Primary SAP HANA database node

Run steps:

  • Stop the primary SAP HANA database gracefully as <sid>adm (hdbadm in this example).

    [root@prihana ~] su - hdbadm
    hdbadm@prihana:/usr/sap/HDB/HDB00> HDB stop
    hdbdaemon will wait maximal 300 seconds for NewDB services finishing.
    Stopping instance using: /usr/sap/HDB/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 00 -function Stop 400
    
    12.11.2020 11:39:19
    Stop
    OK
    Waiting for stopped instance using: /usr/sap/HDB/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 00 -function WaitforStopped 600 2
    
    12.11.2020 11:39:51
    WaitforStopped
    OK
    hdbdaemon is stopped.
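
    To confirm that the database is fully stopped before you check the cluster state (an optional check beyond this guide, reusing instance number 00 from above), query the process list as <sid>adm; all services should report GRAY/Stopped:

    hdbadm@prihana:/usr/sap/HDB/HDB00> sapcontrol -nr 00 -function GetProcessList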

Expected result:

  • The cluster detects the stopped primary SAP HANA database (on node 1) and promotes the secondary SAP HANA database (on node 2) to take over as the new primary.

    [root@prihana ~] pcs status
    Cluster name: rhelhanaha
    Stack: corosync
    Current DC: sechana (version 1.1.19-8.el7_6.5-c3c624ea3d) - partition with quorum
    Last updated: Tue Nov 10 17:58:19 2020
    Last change: Tue Nov 10 17:57:41 2020 by root via crm_attribute on sechana
    
    2 nodes configured
    6 resources configured
    
    Online: [ prihana sechana ]
    
    Full list of resources:
    
     clusterfence   (stonith:fence_aws):    Started prihana
     Clone Set: SAPHanaTopology_HDB_00-clone [SAPHanaTopology_HDB_00]
         Started: [ prihana sechana ]
     Master/Slave Set: SAPHana_HDB_00-master [SAPHana_HDB_00]
         Masters: [ sechana ]
         Slaves: [ prihana ]
     hana-oip       (ocf::heartbeat:aws-vpc-move-ip):       Started sechana
    
    Failed Actions:
    * SAPHana_HDB_00_monitor_59000 on prihana 'master (failed)' (9): call=31, status=complete, exitreason='',
        last-rc-change='Tue Nov 10 17:56:52 2020', queued=0ms, exec=0ms
    
    
    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled
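
    To watch the takeover as it happens instead of re-running pcs status, one option (not part of the original test procedure, assuming the standard Pacemaker tooling is installed) is crm_mon, which refreshes continuously and, with -A, also shows the node attributes that the SAPHana agent maintains:

    [root@prihana ~] crm_mon -A -r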
  • The overlay IP address is migrated to the new primary (on node 2).

    [root@sechana ~] ip addr show
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
        link/ether 0e:ef:dd:3c:bf:1b brd ff:ff:ff:ff:ff:ff
        inet xx.xx.xx.xx/24 brd 11.0.2.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet xx.xx.xx.xx/32 scope global eth0:1
           valid_lft forever preferred_lft forever
        inet 192.168.10.16/32 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::cef:ddff:fe3c:bf1b/64 scope link
           valid_lft forever preferred_lft forever
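
    The overlay IP is implemented as a VPC route table entry that the aws-vpc-move-ip agent repoints to the elastic network interface of the new primary node. As an optional cross-check (assuming the AWS CLI is installed and the instance profile permits ec2:DescribeRouteTables), confirm that the route for the overlay IP now targets node 2:

    [root@sechana ~] aws ec2 describe-route-tables \
        --filters "Name=route.destination-cidr-block,Values=192.168.10.16/32" \
        --query "RouteTables[].Routes[?DestinationCidrBlock=='192.168.10.16/32'][]"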
  • Because AUTOMATED_REGISTER is set to true, the cluster restarts the failed SAP HANA database on node 1 and registers it against the new primary. Validate the status of the restarted database on node 1 using the following command:

    hdbadm@prihana:/usr/sap/HDB/HDB00> sapcontrol -nr 00 -function GetProcessList
    
    10.11.2020 17:59:49
    GetProcessList
    OK
    name, description, dispstatus, textstatus, starttime, elapsedtime, pid
    hdbdaemon, HDB Daemon, GREEN, Running, 2020 11 10 17:58:47, 0:01:02, 25979
    hdbcompileserver, HDB Compileserver, GREEN, Running, 2020 11 10 17:58:52, 0:00:57, 26152
    hdbindexserver, HDB Indexserver-HDB, GREEN, Running, 2020 11 10 17:58:53, 0:00:56, 26201
    hdbnameserver, HDB Nameserver, GREEN, Running, 2020 11 10 17:58:48, 0:01:01, 25997
    hdbpreprocessor, HDB Preprocessor, GREEN, Running, 2020 11 10 17:58:52, 0:00:57, 26155
    hdbwebdispatcher, HDB Web Dispatcher, GREEN, Running, 2020 11 10 17:59:02, 0:00:47, 27100
    hdbxsengine, HDB XSEngine-HDB, GREEN, Running, 2020 11 10 17:58:53, 0:00:56, 26204
    hdbadm@prihana:/usr/sap/HDB/HDB00>
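
    To confirm that node 1 was re-registered as the system replication secondary (an additional check beyond this guide), query the replication state as <sid>adm; the output should show node 1 operating in a secondary replication mode, registered against the new primary on node 2:

    hdbadm@prihana:/usr/sap/HDB/HDB00> hdbnsutil -sr_state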

Recovery procedure:

  • Clean up the cluster "failed actions" on node 1 as root using the following command:

    [root@prihana ~] pcs resource cleanup SAPHana_HDB_00 --node prihana
  • After you run the cleanup command, "failed actions" messages should disappear from the cluster status.

    [root@prihana ~] pcs status
    Cluster name: rhelhanaha
    Stack: corosync
    Current DC: sechana (version 1.1.19-8.el7_6.5-c3c624ea3d) - partition with quorum
    Last updated: Tue Nov 10 18:01:02 2020
    Last change: Tue Nov 10 18:00:45 2020 by root via crm_attribute on sechana
    
    2 nodes configured
    6 resources configured
    
    Online: [ prihana sechana ]
    
    Full list of resources:
    
     clusterfence   (stonith:fence_aws):    Started prihana
     Clone Set: SAPHanaTopology_HDB_00-clone [SAPHanaTopology_HDB_00]
         Started: [ prihana sechana ]
     Master/Slave Set: SAPHana_HDB_00-master [SAPHana_HDB_00]
         Masters: [ sechana ]
         Slaves: [ prihana ]
     hana-oip       (ocf::heartbeat:aws-vpc-move-ip):       Started sechana
    
    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled
    [root@prihana ~]
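
  • Optionally, verify that the cleanup also reset the fail count for the resource on node 1 (a supplemental check, not part of the original procedure):

    [root@prihana ~] pcs resource failcount show SAPHana_HDB_00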