
Resource cleanup activities

  • You can run the command "crm resource cleanup rsc_SAPHana_<SID>_HDB<Instance Number> <hostname>" to clean up any failed actions, as shown in the following example. A sketch for verifying the cleanup follows this list.

    prihana:~ # crm status
    Stack: corosync
    Current DC: prihana (version 1.1.18+20180430.b12c320f5-3.24.1-b12c320f5) - partition with quorum
    Last updated: Thu Nov 12 12:41:07 2020
    Last change: Thu Nov 12 12:40:44 2020 by root via crm_attribute on prihana

    2 nodes configured
    6 resources configured

    Online: [ prihana sechana ]

    Full list of resources:

     res_AWS_STONITH     (stonith:external/ec2):        Started prihana
     res_AWS_IP          (ocf::suse:aws-vpc-move-ip):   Started sechana
     Clone Set: cln_SAPHanaTopology_HDB_HDB00 [rsc_SAPHanaTopology_HDB_HDB00]
         Started: [ prihana sechana ]
     Master/Slave Set: msl_SAPHana_HDB_HDB00 [rsc_SAPHana_HDB_HDB00]
         Masters: [ sechana ]
         Slaves: [ prihana ]

    Failed Actions:
    * rsc_SAPHana_HDB_HDB00_monitor_61000 on prihana 'not running' (7): call=35, status=complete, exitreason='',
        last-rc-change='Thu Nov 12 12:39:49 2020', queued=0ms, exec=0ms

    prihana:~ # crm resource cleanup rsc_SAPHana_HDB_HDB00 prihana
    Cleaned up rsc_SAPHana_HDB_HDB00:0 on prihana
    Cleaned up rsc_SAPHana_HDB_HDB00:1 on prihana
    Waiting for 1 replies from the CRMd. OK
    prihana:~ #
  • When you manually migrate resources from one node to another, location constraints are added to the crm configuration. You can find these constraints with the command "crm configure show", as shown in the following example. A sketch for listing only these constraints follows this list.

    prihana:~ # crm configure show
    node 1: prihana \
        attributes lpa_hdb_lpt=30 hana_hdb_vhost=prihana hana_hdb_site=PRI hana_hdb_srmode=sync hana_hdb_remoteHost=sechana hana_hdb_op_mode=logreplay
    node 2: sechana \
        attributes lpa_hdb_lpt=1605184953 hana_hdb_vhost=sechana hana_hdb_site=SEC hana_hdb_srmode=sync hana_hdb_remoteHost=prihana hana_hdb_op_mode=logreplay
    primitive res_AWS_IP ocf:suse:aws-vpc-move-ip \
        params ip=192.168.10.16 routing_table=rtb-06ca3aca4c58bd17d interface=eth0 profile=cluster \
        op start interval=0 timeout=180 \
        op stop interval=0 timeout=180 \
        op monitor interval=60 timeout=60 \
        meta target-role=Started
    primitive res_AWS_STONITH stonith:external/ec2 \
        op start interval=0 timeout=180 \
        op stop interval=0 timeout=180 \
        op monitor interval=120 timeout=60 \
        meta target-role=Started \
        params tag=pacemaker profile=cluster
    primitive rsc_SAPHanaTopology_HDB_HDB00 ocf:suse:SAPHanaTopology \
        operations $id=rsc_sap2_HDB_HDB00-operations \
        op monitor interval=10 timeout=600 \
        op start interval=0 timeout=600 \
        op stop interval=0 timeout=300 \
        params SID=HDB InstanceNumber=00
    primitive rsc_SAPHana_HDB_HDB00 ocf:suse:SAPHana \
        operations $id=rsc_sap_HDB_HDB00-operations \
        op start interval=0 timeout=3600 \
        op stop interval=0 timeout=3600 \
        op promote interval=0 timeout=3600 \
        op monitor interval=60 role=Master timeout=700 \
        op monitor interval=61 role=Slave timeout=700 \
        params SID=HDB InstanceNumber=00 PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=true HANA_CALL_TIMEOUT=60
    ms msl_SAPHana_HDB_HDB00 rsc_SAPHana_HDB_HDB00 \
        meta clone-max=2 clone-node-max=1 interleave=true
    clone cln_SAPHanaTopology_HDB_HDB00 rsc_SAPHanaTopology_HDB_HDB00 \
        meta clone-node-max=1 interleave=true
    location cli-prefer-rsc_SAPHana_HDB_HDB00 rsc_SAPHana_HDB_HDB00 role=Started inf: sechana
    colocation col_IP_Primary 2000: res_AWS_IP:Started msl_SAPHana_HDB_HDB00:Master
    order ord_SAPHana 2000: cln_SAPHanaTopology_HDB_HDB00 msl_SAPHana_HDB_HDB00
    property SAPHanaSR: \
        hana_hdb_site_srHook_SEC=PRIM \
        hana_hdb_site_srHook_PRI=SOK
    property cib-bootstrap-options: \
        stonith-enabled=true \
        stonith-action=off \
        stonith-timeout=600s \
        have-watchdog=false \
        dc-version="1.1.18+20180430.b12c320f5-3.24.1-b12c320f5" \
        cluster-infrastructure=corosync \
        last-lrm-refresh=1605184909
    rsc_defaults rsc-options: \
        resource-stickiness=1000 \
        migration-threshold=5000
    op_defaults op-options: \
        timeout=600
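After the cleanup completes, confirm that the failed action no longer appears in the cluster status before you continue. The following is a minimal sketch that reuses the node and resource names from the preceding example; substitute your own values. If the cleanup succeeded, the filter returns no output:

prihana:~ # crm status | grep -A 2 "Failed Actions"
prihana:~ #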
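In the preceding output, the entry that begins with "location cli-prefer-" is the constraint left behind by the manual migration. As a sketch, assuming the cli-prefer/cli-ban naming that crmsh applies to migration constraints, you can list only these entries by filtering the configuration:

prihana:~ # crm configure show | grep -E 'cli-(prefer|ban)'
location cli-prefer-rsc_SAPHana_HDB_HDB00 rsc_SAPHana_HDB_HDB00 role=Started inf: sechana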

You must clear these location constraints with the following command before you perform any further cluster actions:

prihana:~ # crm resource clear rsc_SAPHana_HDB_HDB00
INFO: Removed migration constraints for rsc_SAPHana_HDB_HDB00
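
To verify that the constraint has been removed, display the configuration again; the "location cli-prefer-..." entry should no longer be present. A minimal sketch, reusing the filter from the earlier example (no output means no migration constraints remain):

prihana:~ # crm configure show | grep -E 'cli-(prefer|ban)'
prihana:~ #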