Stop the SAP HANA database on the secondary node

Description – Stop the primary SAP HANA database (on node 2) during normal cluster operation.

Run node – Primary SAP HANA database node (on node 2)

Run steps

  • Stop the SAP HANA database gracefully as <sid>adm on node 2.

    [root@sechana ~] su - hdbadm
    hdbadm@sechana:/usr/sap/HDB/HDB00> HDB stop
    hdbdaemon will wait maximal 300 seconds for NewDB services finishing.
    Stopping instance using: /usr/sap/HDB/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr
    00 -function Stop 400
    
    12.11.2020 11:45:21
    Stop
    OK
    Waiting for stopped instance using: /usr/sap/HDB/SYS/exe/hdb/sapcontrol
    -prot NI_HTTP -nr 00 -function WaitforStopped 600 2
    
    
    12.11.2020 11:45:53
    WaitforStopped
    OK
    hdbdaemon is stopped.
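
    The HDB stop wrapper calls sapcontrol internally, as the log above shows. If you prefer, the same graceful stop can be issued directly with sapcontrol; this is an equivalent, optional variant of this step:

    hdbadm@sechana:/usr/sap/HDB/HDB00> sapcontrol -nr 00 -function Stop 400
    hdbadm@sechana:/usr/sap/HDB/HDB00> sapcontrol -nr 00 -function WaitforStopped 600 2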

Expected result

  • The cluster detects the stopped primary SAP HANA database (on node 2) and promotes the secondary SAP HANA database (on node 1) to take over as primary.

    [root@sechana ~] pcs status
    Cluster name: rhelhanaha
    Stack: corosync
    Current DC: sechana (version 1.1.19-8.el7_6.5-c3c624ea3d) - partition with quorum
    Last updated: Tue Nov 10 18:04:01 2020
    Last change: Tue Nov 10 18:04:00 2020 by root via crm_attribute on prihana
    
    2 nodes configured
    6 resources configured
    
    Online: [ prihana sechana ]
    
    Full list of resources:
    
     clusterfence   (stonith:fence_aws):    Started prihana
     Clone Set: SAPHanaTopology_HDB_00-clone [SAPHanaTopology_HDB_00]
         Started: [ prihana sechana ]
     Master/Slave Set: SAPHana_HDB_00-master [SAPHana_HDB_00]
         SAPHana_HDB_00     (ocf::heartbeat:SAPHana):       Promoting prihana
         Slaves: [ sechana ]
     hana-oip       (ocf::heartbeat:aws-vpc-move-ip):       Started prihana
    
    Failed Actions:
    * SAPHana_HDB_00_monitor_59000 on sechana 'master (failed)' (9): call=41,
    status=complete, exitreason='',
        last-rc-change='Tue Nov 10 18:03:49 2020', queued=0ms, exec=0ms
    
    
    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled
    [root@sechana ~]
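
    You can optionally confirm the takeover at the database level as well by querying the system replication state on node 1 as <sid>adm. This check is supplementary and not part of the original test output:

    [root@prihana ~] su - hdbadm
    hdbadm@prihana:/usr/sap/HDB/HDB00> hdbnsutil -sr_state
    # The state output should now report this site (prihana) as primary (mode: primary).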
  • The overlay IP address is migrated to the new primary (on node 1).

    [root@prihana ~] ip addr show
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
        link/ether 0a:38:1c:ce:b4:3d brd ff:ff:ff:ff:ff:ff
        inet xx.xx.xx.xx/24 brd 11.0.1.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet xx.xx.xx.xx/32 scope global eth0:1
           valid_lft forever preferred_lft forever
        inet 192.168.10.16/32 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::838:1cff:fece:b43d/64 scope link
           valid_lft forever preferred_lft forever
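
    Because the overlay IP is implemented as a VPC route table entry managed by the aws-vpc-move-ip resource, you can also verify the route from the AWS CLI. The following is only a sketch: rtb-xxxxxxxx is a placeholder for the route table used by your cluster, and 192.168.10.16/32 is the overlay IP from this example:

    [root@prihana ~] aws ec2 describe-route-tables --route-table-ids rtb-xxxxxxxx \
        --query "RouteTables[].Routes[?DestinationCidrBlock=='192.168.10.16/32']"
    # The returned route should now point to the instance or elastic network interface of node 1.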
  • With AUTOMATED_REGISTER set to true, the cluster restarts the failed SAP HANA database and registers it against the new primary database.

    Check the status of the secondary with the following command:

    hdbadm@sechana:/usr/sap/HDB/HDB00> sapcontrol -nr 00 -function GetProcessList
    
    10.11.2020 18:08:47
    GetProcessList
    OK
    name, description, dispstatus, textstatus, starttime, elapsedtime, pid
    hdbdaemon, HDB Daemon, GREEN, Running, 2020 11 10 18:05:44, 0:03:03, 6601
    hdbcompileserver, HDB Compileserver, GREEN, Running, 2020 11 10 18:05:48, 0:02:59, 6725
    hdbindexserver, HDB Indexserver-HDB, GREEN, Running, 2020 11 10 18:05:49, 0:02:58, 6828
    hdbnameserver, HDB Nameserver, GREEN, Running, 2020 11 10 18:05:44, 0:03:03, 6619
    hdbpreprocessor, HDB Preprocessor, GREEN, Running, 2020 11 10 18:05:48, 0:02:59, 6730
    hdbwebdispatcher, HDB Web Dispatcher, GREEN, Running, 2020 11 10 18:05:58, 0:02:49, 7797
    hdbxsengine, HDB XSEngine-HDB, GREEN, Running, 2020 11 10 18:05:49, 0:02:58, 6831
    hdbadm@sechana:/usr/sap/HDB/HDB00>
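
    If you are unsure how AUTOMATED_REGISTER is set in your cluster, you can inspect the SAPHana resource definition. On RHEL 7 (pcs 0.9, as used in this example) the command looks like the following; newer pcs releases use pcs resource config instead:

    [root@sechana ~] pcs resource show SAPHana_HDB_00
    # Look for AUTOMATED_REGISTER=true among the resource attributes.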

Recovery procedure

  • Clean up the "Failed Actions" on node 2 as the root user, using the following command:

    [root@sechana ~] pcs resource cleanup SAPHana_HDB_00 --node sechana
  • After the resource cleanup, make sure the "Failed Actions" are cleared from the cluster status.

    [root@sechana ~] pcs status
    Cluster name: rhelhanaha
    Stack: corosync
    Current DC: sechana (version 1.1.19-8.el7_6.5-c3c624ea3d) - partition with quorum
    Last updated: Tue Nov 10 18:13:35 2020
    Last change: Tue Nov 10 18:12:51 2020 by hacluster via crmd on sechana
    
    2 nodes configured
    6 resources configured
    
    Online: [ prihana sechana ]
    
    Full list of resources:
    
     clusterfence   (stonith:fence_aws):    Started prihana
     Clone Set: SAPHanaTopology_HDB_00-clone [SAPHanaTopology_HDB_00]
         Started: [ prihana sechana ]
     Master/Slave Set: SAPHana_HDB_00-master [SAPHana_HDB_00]
         Masters: [ prihana ]
         Slaves: [ sechana ]
     hana-oip       (ocf::heartbeat:aws-vpc-move-ip):       Started prihana
    
    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled
    [root@sechana ~]