Testing the cluster - SAP HANA on Amazon Web Services


Testing the cluster

After the cluster setup is complete, run the tests below to validate the configuration. Run the tests in sequence.
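
Before running the first test, it can help to confirm a clean starting state. The following is a minimal sketch of such a check; the `cluster_healthy` helper and the node names are illustrative assumptions based on this example cluster, not part of the official procedure.

```shell
#!/usr/bin/env bash
# cluster_healthy: inspect captured `pcs status` output and report whether
# both example nodes are online and no failed actions are listed.
# Illustrative helper only; adjust the node names for your cluster.
cluster_healthy() {
  local status="$1"
  grep -q 'Online: \[ prihana sechana \]' <<<"$status" || return 1
  grep -q 'Failed Actions:' <<<"$status" && return 1
  return 0
}

# Hypothetical live usage (as root):
#   cluster_healthy "$(pcs status)" && echo "ready for testing"
```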

Test 1: Stop the SAP HANA database on the primary node

Description: Stop the primary SAP HANA database during normal cluster operation.

Run node: Primary SAP HANA database node

Run steps

  • Stop the primary SAP HANA database gracefully as <sid>adm.

    prihana:~ # su - hdbadm
    hdbadm@prihana:/usr/sap/HDB/HDB00> HDB stop
    hdbdaemon will wait maximal 300 seconds for NewDB services finishing.
    Stopping instance using: /usr/sap/HDB/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 00 -function Stop 400

    12.11.2020 11:39:19
    Stop
    OK
    Waiting for stopped instance using: /usr/sap/HDB/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 00 -function WaitforStopped 600 2

    12.11.2020 11:39:51
    WaitforStopped
    OK
    hdbdaemon is stopped.

Expected output

  • The cluster detects the stopped primary SAP HANA database (on node 1) and promotes the secondary SAP HANA database (on node 2) to take over as primary.

    [root@prihana ~]# pcs status
    Cluster name: rhelhanaha
    Stack: corosync
    Current DC: sechana (version 1.1.19-8.el7_6.5-c3c624ea3d) - partition with quorum
    Last updated: Tue Nov 10 17:58:19 2020
    Last change: Tue Nov 10 17:57:41 2020 by root via crm_attribute on sechana

    2 nodes configured
    6 resources configured

    Online: [ prihana sechana ]

    Full list of resources:

     clusterfence   (stonith:fence_aws):    Started prihana
     Clone Set: SAPHanaTopology_HDB_00-clone [SAPHanaTopology_HDB_00]
         Started: [ prihana sechana ]
     Master/Slave Set: SAPHana_HDB_00-master [SAPHana_HDB_00]
         Masters: [ sechana ]
         Slaves: [ prihana ]
     hana-oip       (ocf::heartbeat:aws-vpc-move-ip):       Started sechana

    Failed Actions:
    * SAPHana_HDB_00_monitor_59000 on prihana 'master (failed)' (9): call=31, status=complete, exitreason='',
        last-rc-change='Tue Nov 10 17:56:52 2020', queued=0ms, exec=0ms

    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled
    [root@prihana ~]#
  • The overlay IP address is migrated to the new primary node (node 2).

    sechana:~ # ip addr show
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
        link/ether 0e:ef:dd:3c:bf:1b brd ff:ff:ff:ff:ff:ff
        inet xx.xx.xx.xx/24 brd 11.0.2.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet xx.xx.xx.xx/32 scope global eth0:1
           valid_lft forever preferred_lft forever
        inet 192.168.10.16/32 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::cef:ddff:fe3c:bf1b/64 scope link
           valid_lft forever preferred_lft forever
  • Because AUTOMATED_REGISTER is set to true, the cluster restarts the failed SAP HANA database and registers it with the new primary database. Verify the status of the primary SAP HANA database with the following command:

    sapcontrol -nr 00 -function GetProcessList
    hdbadm@prihana:/usr/sap/HDB/HDB00> sapcontrol -nr 00 -function GetProcessList

    10.11.2020 17:59:49
    GetProcessList
    OK
    name, description, dispstatus, textstatus, starttime, elapsedtime, pid
    hdbdaemon, HDB Daemon, GREEN, Running, 2020 11 10 17:58:47, 0:01:02, 25979
    hdbcompileserver, HDB Compileserver, GREEN, Running, 2020 11 10 17:58:52, 0:00:57, 26152
    hdbindexserver, HDB Indexserver-HDB, GREEN, Running, 2020 11 10 17:58:53, 0:00:56, 26201
    hdbnameserver, HDB Nameserver, GREEN, Running, 2020 11 10 17:58:48, 0:01:01, 25997
    hdbpreprocessor, HDB Preprocessor, GREEN, Running, 2020 11 10 17:58:52, 0:00:57, 26155
    hdbwebdispatcher, HDB Web Dispatcher, GREEN, Running, 2020 11 10 17:59:02, 0:00:47, 27100
    hdbxsengine, HDB XSEngine-HDB, GREEN, Running, 2020 11 10 17:58:53, 0:00:56, 26204
    hdbadm@prihana:/usr/sap/HDB/HDB00>
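
The promotion shown in the expected output can also be checked programmatically. A minimal sketch, assuming output in the `pcs status` format above; the `current_master` parser is an illustrative helper, not a pcs feature.

```shell
#!/usr/bin/env bash
# current_master: print the node name inside "Masters: [ ... ]" from
# captured `pcs status` output. Illustrative parser for this example layout.
current_master() {
  sed -n 's/^[[:space:]]*Masters: \[ \(.*\) \]$/\1/p' <<<"$1"
}

# Hypothetical live usage (as root):
#   [ "$(current_master "$(pcs status)")" = "sechana" ] && echo "failover complete"
```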

Recovery procedure

  • Clear the cluster "failed actions" as the root user with the following command:

    pcs resource cleanup SAPHana_HDB_00 --node prihana
    [root@prihana ~]# pcs resource cleanup SAPHana_HDB_00 --node prihana
    Cleaned up SAPHana_HDB_00:0 on prihana
    Cleaned up SAPHana_HDB_00:1 on prihana
    Waiting for 1 replies from the CRMd. OK
    [root@prihana ~]#
  • After you run the cleanup command, the "failed actions" messages should disappear from the cluster status output.

    [root@prihana ~]# pcs status
    Cluster name: rhelhanaha
    Stack: corosync
    Current DC: sechana (version 1.1.19-8.el7_6.5-c3c624ea3d) - partition with quorum
    Last updated: Tue Nov 10 18:01:02 2020
    Last change: Tue Nov 10 18:00:45 2020 by root via crm_attribute on sechana

    2 nodes configured
    6 resources configured

    Online: [ prihana sechana ]

    Full list of resources:

     clusterfence   (stonith:fence_aws):    Started prihana
     Clone Set: SAPHanaTopology_HDB_00-clone [SAPHanaTopology_HDB_00]
         Started: [ prihana sechana ]
     Master/Slave Set: SAPHana_HDB_00-master [SAPHana_HDB_00]
         Masters: [ sechana ]
         Slaves: [ prihana ]
     hana-oip       (ocf::heartbeat:aws-vpc-move-ip):       Started sechana

    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled
    [root@prihana ~]#
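
The recovery step above lends itself to automation: run the cleanup, then poll until the "Failed Actions" section disappears from the status output. A minimal sketch, assuming the resource and node names from this example; `wait_clean` is an illustrative helper and the 1-second poll interval is an arbitrary choice.

```shell
#!/usr/bin/env bash
# wait_clean: run the given status command repeatedly until its output no
# longer contains a "Failed Actions:" section, or until attempts run out.
wait_clean() {
  local attempts="$1"; shift
  local i
  for ((i = 0; i < attempts; i++)); do
    if ! "$@" | grep -q 'Failed Actions:'; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Hypothetical live usage (as root):
#   pcs resource cleanup SAPHana_HDB_00 --node prihana
#   wait_clean 30 pcs status && echo "cluster status is clean"
```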