Test 2: Stop the SAP HANA database on the secondary node

Description: Stop the primary SAP HANA database (on node 2) during normal cluster operation.

Run node: Primary SAP HANA database node (on node 2)

Run steps

  • Stop the SAP HANA database gracefully as <sid>adm on node 2. (An optional check of the replication state beforehand is sketched after this step.)

    sechana:~ # su - hdbadm
    hdbadm@sechana:/usr/sap/HDB/HDB00> HDB stop
    hdbdaemon will wait maximal 300 seconds for NewDB services finishing.
    Stopping instance using: /usr/sap/HDB/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 00 -function Stop 400

    12.11.2020 11:45:21
    Stop
    OK
    Waiting for stopped instance using: /usr/sap/HDB/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 00 -function WaitforStopped 600 2

    12.11.2020 11:45:53
    WaitforStopped
    OK
    hdbdaemon is stopped.
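
Before stopping the database, you can optionally confirm that system replication is in sync, so that the cluster has a valid takeover target. A minimal pre-check sketch, assuming the same hdbadm user as in this example (hdbnsutil is SAP HANA's standard replication administration tool; the exact output depends on your replication setup):

    sechana:~ # su - hdbadm
    # Print this node's system replication state; before the test it should
    # report mode "primary" with the secondary (node 1) connected and in sync.
    hdbadm@sechana:/usr/sap/HDB/HDB00> hdbnsutil -sr_state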

Expected output

  • The cluster detects that the primary SAP HANA database (on node 2) has stopped and promotes the secondary SAP HANA database (on node 1) to primary.

    [root@sechana ~]# pcs status
    Cluster name: rhelhanaha
    Stack: corosync
    Current DC: sechana (version 1.1.19-8.el7_6.5-c3c624ea3d) - partition with quorum
    Last updated: Tue Nov 10 18:04:01 2020
    Last change: Tue Nov 10 18:04:00 2020 by root via crm_attribute on prihana

    2 nodes configured
    6 resources configured

    Online: [ prihana sechana ]

    Full list of resources:

     clusterfence   (stonith:fence_aws):    Started prihana
     Clone Set: SAPHanaTopology_HDB_00-clone [SAPHanaTopology_HDB_00]
         Started: [ prihana sechana ]
     Master/Slave Set: SAPHana_HDB_00-master [SAPHana_HDB_00]
         SAPHana_HDB_00     (ocf::heartbeat:SAPHana):       Promoting prihana
         Slaves: [ sechana ]
     hana-oip       (ocf::heartbeat:aws-vpc-move-ip):       Started prihana

    Failed Actions:
    * SAPHana_HDB_00_monitor_59000 on sechana 'master (failed)' (9): call=41, status=complete, exitreason='',
        last-rc-change='Tue Nov 10 18:03:49 2020', queued=0ms, exec=0ms

    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled
    [root@sechana ~]#
  • The overlay IP address is migrated to the new primary node (node 1). (A sketch for verifying the underlying VPC route entry follows this list.)

    prihana:~ # ip addr show
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
        link/ether 0a:38:1c:ce:b4:3d brd ff:ff:ff:ff:ff:ff
        inet xx.xx.xx.xx/24 brd 11.0.1.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet xx.xx.xx.xx/32 scope global eth0:1
           valid_lft forever preferred_lft forever
        inet 192.168.10.16/32 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::838:1cff:fece:b43d/64 scope link
           valid_lft forever preferred_lft forever
  • With AUTOMATED_REGISTER set to true, the cluster restarts the failed SAP HANA database and registers it against the new primary database. (A sketch for inspecting this parameter also follows this list.)

    Check the status of the secondary database with the following command:

    sapcontrol -nr 00 -function GetProcessList
    hdbadm@sechana:/usr/sap/HDB/HDB00> sapcontrol -nr 00 -function GetProcessList

    10.11.2020 18:08:47
    GetProcessList
    OK
    name, description, dispstatus, textstatus, starttime, elapsedtime, pid
    hdbdaemon, HDB Daemon, GREEN, Running, 2020 11 10 18:05:44, 0:03:03, 6601
    hdbcompileserver, HDB Compileserver, GREEN, Running, 2020 11 10 18:05:48, 0:02:59, 6725
    hdbindexserver, HDB Indexserver-HDB, GREEN, Running, 2020 11 10 18:05:49, 0:02:58, 6828
    hdbnameserver, HDB Nameserver, GREEN, Running, 2020 11 10 18:05:44, 0:03:03, 6619
    hdbpreprocessor, HDB Preprocessor, GREEN, Running, 2020 11 10 18:05:48, 0:02:59, 6730
    hdbwebdispatcher, HDB Web Dispatcher, GREEN, Running, 2020 11 10 18:05:58, 0:02:49, 7797
    hdbxsengine, HDB XSEngine-HDB, GREEN, Running, 2020 11 10 18:05:49, 0:02:58, 6831
    hdbadm@sechana:/usr/sap/HDB/HDB00>
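
The aws-vpc-move-ip agent shown above implements the overlay IP by repointing a VPC route-table entry, so the move can also be confirmed at the VPC level. A minimal sketch with the AWS CLI, assuming the overlay IP 192.168.10.16/32 from the example output and credentials that can read the route tables:

    # Look up the route-table entry for the overlay IP; after the failover its
    # target network interface should belong to the new primary node (prihana).
    aws ec2 describe-route-tables \
        --filters "Name=route.destination-cidr-block,Values=192.168.10.16/32" \
        --query "RouteTables[].Routes[?DestinationCidrBlock=='192.168.10.16/32']"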
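
AUTOMATED_REGISTER is an instance attribute of the SAPHana resource. A sketch for inspecting it, and for enabling it if your recovery policy allows automatic re-registration, using the same pcs 0.9 syntax as the rest of this guide (run as root on either cluster node):

    # Show the SAPHana resource configuration, including AUTOMATED_REGISTER
    pcs resource show SAPHana_HDB_00

    # Enable automatic re-registration of a failed former primary
    pcs resource update SAPHana_HDB_00 AUTOMATED_REGISTER=true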

Recovery procedure

  • Clean up the cluster "failed actions" as root using the following command:

    pcs resource cleanup SAPHana_HDB_00 --node sechana
    [root@sechana ~]# pcs resource cleanup SAPHana_HDB_00 --node sechana
    Cleaned up SAPHana_HDB_00:0 on sechana
    Cleaned up SAPHana_HDB_00:1 on sechana
    Waiting for 1 replies from the CRMd. OK
  • After the resource cleanup, make sure the cluster "failed actions" have been cleared. (An optional replication sync check is sketched after this list.)

    [root@sechana ~]# pcs status
    Cluster name: rhelhanaha
    Stack: corosync
    Current DC: sechana (version 1.1.19-8.el7_6.5-c3c624ea3d) - partition with quorum
    Last updated: Tue Nov 10 18:13:35 2020
    Last change: Tue Nov 10 18:12:51 2020 by hacluster via crmd on sechana

    2 nodes configured
    6 resources configured

    Online: [ prihana sechana ]

    Full list of resources:

     clusterfence   (stonith:fence_aws):    Started prihana
     Clone Set: SAPHanaTopology_HDB_00-clone [SAPHanaTopology_HDB_00]
         Started: [ prihana sechana ]
     Master/Slave Set: SAPHana_HDB_00-master [SAPHana_HDB_00]
         Masters: [ prihana ]
         Slaves: [ sechana ]
     hana-oip       (ocf::heartbeat:aws-vpc-move-ip):       Started prihana

    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled
    [root@sechana ~]#
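
Optionally, confirm that node 2 has re-registered and is back in sync by inspecting the node attributes that the SAPHana agent maintains. A sketch using the standard Pacemaker crm_mon tool; the attribute names follow the agent's hana_<sid>_* convention, so for the HDB SID used in this guide look for hana_hdb_sync_state:

    # One-shot cluster status including node attributes; sechana should show
    # hana_hdb_sync_state="SOK" once replication is fully in sync again.
    crm_mon -A1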