Administration - SAP HANA on Amazon
Services or capabilities described in Amazon Web Services documentation might vary by Region. To see the differences applicable to the China Regions, see Getting Started with Amazon Web Services in China (PDF).

Administration

This section provides guidance on common administrative tasks required to operate an SAP HANA system, including information about starting, stopping, and cloning systems.

Starting and Stopping EC2 Instances Running SAP HANA Hosts

At any time, you can stop one or multiple SAP HANA hosts. Before stopping the EC2 instance of an SAP HANA host, first stop SAP HANA on that instance.

When you resume the instance, it will automatically start with the same IP address, network, and storage configuration as before. You also have the option of using the EC2 Scheduler to schedule starts and stops of your EC2 instances. The EC2 Scheduler relies on the native shutdown and start-up mechanisms of the operating system. These native mechanisms will invoke the orderly shutdown and startup of your SAP HANA instance. Here is an architectural diagram of how the EC2 Scheduler works:
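As a sketch of this orderly sequence, a manual stop and later start might look like the following. The instance ID, SID, and instance number are placeholders, not values from this guide, and the exact HANA stop command depends on your installation:

```shell
# Stop SAP HANA first (run as the <sid>adm user on the instance).
sapcontrol -nr 00 -function StopSystem HDB

# Then stop the EC2 instance from any machine with the Amazon CLI configured.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0

# Later, start the instance again; it keeps its private IP, network,
# and storage configuration, after which SAP HANA can be started.
aws ec2 start-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-running --instance-ids i-0123456789abcdef0
```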


Figure 1: EC2 Scheduler

Tagging SAP Resources on Amazon

Tagging your SAP resources on Amazon can significantly simplify identification, security, manageability, and billing of those resources. You can tag your resources using the Amazon Management Console or by using the create-tags functionality of the Amazon Command Line Interface (Amazon CLI). This table lists some example tag names and tag values:

Tag name       Tag value
Name           SAP server’s virtual (host) name
Environment    SAP server’s landscape role; for example: SBX, DEV, QAT, STG, PRD
Application    SAP solution or product; for example: ECC, CRM, BW, PI, SCM, SRM, EP
Owner          SAP point of contact
Service level  Known uptime and downtime schedule
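The tags above can be applied with the create-tags functionality mentioned earlier. A minimal sketch, assuming an example instance ID and illustrative tag values:

```shell
# Tag an SAP HANA instance with the example tags from the table above.
# The instance ID and all tag values are placeholders.
aws ec2 create-tags --resources i-0123456789abcdef0 --tags \
  Key=Name,Value=saphanaprd01 \
  Key=Environment,Value=PRD \
  Key=Application,Value=ECC \
  Key=Owner,Value="SAP Basis Team" \
  "Key=Service level,Value=24x7"
```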

After you have tagged your resources, you can apply specific security restrictions such as access control, based on the tag values. Here is an example of such a policy from the Amazon Security blog:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "LaunchEC2Instances",
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "ec2:RunInstances"
      ],
      "Resource": ["*"]
    },
    {
      "Sid": "AllowActionsIfYouAreTheOwner",
      "Effect": "Allow",
      "Action": [
        "ec2:StopInstances",
        "ec2:StartInstances",
        "ec2:RebootInstances",
        "ec2:TerminateInstances"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/PrincipalId": "${aws:userid}"
        }
      },
      "Resource": ["*"]
    }
  ]
}

The Amazon Identity and Access Management (IAM) policy allows only specific permissions based on the tag value. In this scenario, the current user ID must match the tag value in order for the user to be granted permissions. For more information on tagging, see the Amazon documentation and Amazon blog.

Monitoring

You can use various Amazon, SAP, and third-party solutions to monitor your SAP workloads. Here are some of the core Amazon monitoring services:

  • Amazon CloudWatch – CloudWatch is a monitoring service for Amazon resources. It’s critical for SAP workloads where it’s used to collect resource utilization logs and to create alarms to automatically react to changes in Amazon resources.

  • Amazon CloudTrail – CloudTrail keeps track of all API calls made within your Amazon account. It records details about each call, such as the caller identity and the time of the call, and is useful for auditing and tracking changes to your SAP resources.

Configuring CloudWatch detailed monitoring for SAP resources is mandatory for getting Amazon and SAP support. You can use native Amazon monitoring services in a complementary fashion with the SAP Solution Manager. You can find third-party monitoring tools in Amazon Marketplace.
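As a sketch, detailed monitoring can be enabled per instance with the Amazon CLI, and an alarm added on top of it. The instance ID, alarm name, and thresholds below are illustrative assumptions, not values from this guide:

```shell
# Enable CloudWatch detailed (1-minute) monitoring for an SAP HANA instance.
aws ec2 monitor-instances --instance-ids i-0123456789abcdef0

# Example alarm on sustained high CPU utilization; the name, period,
# and threshold are placeholders to adapt to your service levels.
aws cloudwatch put-metric-alarm --alarm-name hana-prd-high-cpu \
  --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average --period 300 --evaluation-periods 3 \
  --threshold 90 --comparison-operator GreaterThanThreshold
```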

Automation

Amazon offers multiple options for programmatically scripting your resources to operate or scale them in a predictable and repeatable manner. You can use Amazon CloudFormation to automate and operate SAP systems on Amazon. Here are some examples for automating your SAP environment on Amazon:

Area                       Activities                                              Amazon services
Infrastructure deployment  Provision new SAP environment; SAP system cloning       Amazon CloudFormation, Amazon CLI
Capacity management        Automate scale-up/scale-out of SAP application servers  Amazon Lambda, Amazon CloudFormation
Operations                 SAP backup automation (see the backup example);         Amazon CloudWatch, Amazon Systems Manager
                           performance monitoring and visualization
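For the infrastructure-deployment row, a minimal sketch of launching a new SAP environment with Amazon CloudFormation from the CLI is shown below. The stack name, template file, and parameter names are hypothetical:

```shell
# Provision a new SAP environment from a CloudFormation template.
# sap-hana.yaml and the InstanceType parameter are assumed examples.
aws cloudformation create-stack --stack-name sap-hana-dev \
  --template-body file://sap-hana.yaml \
  --parameters ParameterKey=InstanceType,ParameterValue=r5.8xlarge \
  --capabilities CAPABILITY_IAM

# Wait until provisioning completes before continuing with SAP installation steps.
aws cloudformation wait stack-create-complete --stack-name sap-hana-dev
```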

Patching

There are two ways for you to patch your SAP HANA database, with options for minimizing cost and/or downtime. With Amazon, you can provision additional servers as needed to minimize downtime for patching in a cost-effective manner. You can also minimize risks by creating on-demand copies of your existing production SAP HANA databases for lifelike production readiness testing.

This table summarizes the tradeoffs of the two patching methods:

Patching method: Patch an existing server

  Benefits:
    • No costs for additional on-demand instances
    • Lowest levels of relative complexity and setup tasks involved

  Tradeoffs:
    • Need to patch the existing operating system and database
    • Longest downtime for the existing server and database

  Technologies available:
    • Native OS patching tools
    • Patch Manager
    • Native SAP HANA patching tools

Patching method: Provision and patch a new server

  Benefits:
    • Leverage latest AMIs (only the database patch is required)
    • Shortest downtime for the existing server and database
    • Option to patch and test the operating system and database separately or together

  Tradeoffs:
    • More costs for additional on-demand instances
    • More complexity and setup tasks involved

  Technologies available:
    • Amazon Machine Image (AMI)
    • Amazon CLI
    • Amazon CloudFormation
    • SAP HANA System Replication
    • SAP HANA System Cloning
    • SAP HANA backups
    • SAP Notes:
      1984882 - Using HANA System Replication for Hardware Exchange with minimum/zero downtime
      1913302 - HANA: Suspend DB connections for short maintenance tasks

The first method (patch an existing server) involves patching the operating system (OS) and database (DB) components of your SAP HANA server. The goal of this method is to minimize any additional server costs and to avoid any tasks needed to set up additional systems or tests. This method may be most appropriate if you have a well-defined patching process and are satisfied with your current downtime and costs. With this method you must use the correct operating system (OS) update process and tools for your Linux distribution. See this SUSE blog and Red Hat FAQ, or check each vendor’s documentation for their specific processes and procedures.

In addition to the patching tools provided by our Linux partners, Amazon offers a free-of-charge patching service called Patch Manager. Patch Manager is an automated tool that helps you simplify your OS patching process. You can scan your EC2 instances for missing patches and automatically install them, select the timing for patch rollouts, control instance reboots, and perform many other tasks. You can also define auto-approval rules for patches with an added ability to block or allow specific patches, control how the patches are deployed on the target instances (for example, stop services before applying the patch), and schedule the automatic rollout through maintenance windows.
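A minimal sketch of a Patch Manager scan run against tagged SAP instances follows. The tag values are examples; the document name follows the patch-baseline run document in Systems Manager (named AWS-RunPatchBaseline in most Regions — verify the name in your Region, since this guide uses the Amazon- prefix for run documents elsewhere):

```shell
# Scan all instances tagged Environment=DEV for missing patches,
# without installing anything (Operation=Scan).
aws ssm send-command --document-name "AWS-RunPatchBaseline" \
  --targets Key=tag:Environment,Values=DEV \
  --parameters Operation=Scan
```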

The second method (provision and patch a new server) involves provisioning a new EC2 instance that will receive a copy of your source system and database. The goal of the method is to minimize downtime, minimize risks (by having production data and executing production-like testing), and have repeatable processes. This method may be most appropriate if you are looking for higher degrees of automation to enable these goals and are comfortable with the trade-offs. This method is more complex and has many more options to fit your requirements. Certain options are not exclusive and can be used together. For example, your Amazon CloudFormation template can include the latest Amazon Machine Images (AMIs), which you can then use to automate the provisioning, setup, and configuration of a new SAP HANA server.

For more information, see Automated patching.

Backup and Recovery

This section provides an overview of the Amazon services used in the backup and recovery of SAP HANA systems and provides an example backup and recovery scenario. This guide does not include detailed instructions on how to execute database backups using native HANA backup and recovery features or third-party backup tools. Please refer to the standard OS, SAP, and SAP HANA documentation or the documentation provided by backup software vendors. In addition, backup schedules, frequency, and retention periods might vary with your system type and business requirements. See the following standard SAP documentation for guidance on these topics.

Note

For a discussion of both general and advanced backup and recovery concepts for SAP systems on Amazon, see the SAP on Amazon Backup and Recovery Guide.

SAP Note  Description
1642148   FAQ: SAP HANA Database Backup & Recovery
1821207   Determining required recovery files
1869119   Checking backups using hdbbackupcheck
1873247   Checking recoverability with hdbbackupdiag --check
1651055   Scheduling SAP HANA Database Backups in Linux
2484177   Scheduling backups for multi-tenant SAP HANA Cockpit 2.0

Creating an Image of an SAP HANA System

You can use the Amazon Web Services Management Console or the command line to create your own AMI based on an existing instance. For more information, see the Amazon documentation. You can use an AMI of your SAP HANA instance for the following purposes:

  • To create a full offline system backup (of the OS, /usr/sap, HANA shared, backup, data, and log files) – AMIs are automatically saved in multiple Availability Zones within the same Amazon Region.

  • To move a HANA system from one Amazon Region to another – You can create an image of an existing EC2 instance and move it to another Amazon Region by following the instructions in the Amazon documentation. When the AMI has been copied to the target Amazon Region, you can launch the new instance there.

  • To clone an SAP HANA system – You can create an AMI of an existing SAP HANA system to create an exact clone of the system. See the next section for additional information.
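The AMI-creation and cross-Region copy steps above can be sketched from the command line as follows. The instance ID, AMI ID, Regions, and names are placeholders:

```shell
# Create an AMI of an SAP HANA instance that has been brought to a
# consistent state (see the Tip below about stopping SAP HANA first).
aws ec2 create-image --instance-id i-0123456789abcdef0 \
  --name "hana-prd-image-example" --description "Offline HANA system backup"

# Copy the resulting AMI to a target Region, then launch the new
# instance there.
aws ec2 copy-image --source-region us-east-1 \
  --source-image-id ami-0abc1234def567890 \
  --region us-west-2 --name "hana-prd-image-example"
```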

Note

See Restoring SAP HANA Backups and Snapshots later in this whitepaper to view the recommended restoration steps for production environments.

Tip

The SAP HANA system should be in a consistent state before you create an AMI. To do this, either stop the SAP HANA instance before creating the AMI or follow the instructions in SAP Note 1703435.

Amazon Services and Components for Backup Solutions

Amazon provides a number of services and options for storage and backup, including Amazon Simple Storage Service (Amazon S3), Amazon Identity and Access Management (IAM), and S3 Glacier.

Amazon S3

Amazon S3 is the center of any SAP backup and recovery solution on Amazon. It provides a highly durable storage infrastructure designed for mission-critical and primary data storage. It is designed to provide 99.999999999% durability and 99.99% availability over a given year. See the Amazon S3 documentation for detailed instructions on how to create and configure an S3 bucket to store your SAP HANA backup files.

IAM

With IAM, you can securely control access to Amazon services and resources for your users. You can create and manage Amazon users and groups and use permissions to grant user access to Amazon resources. You can create roles in IAM and manage permissions to control which operations can be performed by the entity, or Amazon service, that assumes the role. You can also define which entity is allowed to assume the role.

During the deployment process, Amazon CloudFormation creates an IAM role that allows access to get objects from and put objects into Amazon S3. That role is assigned at launch time to each EC2 instance that hosts an SAP HANA master or worker node.


Figure 2: IAM role example

To follow the principle of least privilege, permissions for this role are limited to the actions that are required for backup and recovery:

{
  "Statement": [
    {
      "Resource": "arn:aws:s3:::<your-s3-bucket-name>/*",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket",
        "s3:Get*",
        "s3:List*"
      ],
      "Effect": "Allow"
    },
    {
      "Resource": "*",
      "Action": [
        "s3:List*",
        "ec2:Describe*",
        "ec2:AttachNetworkInterface",
        "ec2:AttachVolume",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:RunInstances",
        "ec2:StartInstances"
      ],
      "Effect": "Allow"
    }
  ]
}

To add functions later, you can use the Amazon Web Services Management Console to modify the IAM role.

S3 Glacier

S3 Glacier is an extremely low-cost service that provides secure and durable storage for data archiving and backup. S3 Glacier is optimized for data that is infrequently accessed and provides multiple options such as expedited, standard, and bulk methods for data retrieval. With standard and bulk retrievals, data is available in 3-5 hours or 5-12 hours, respectively.

However, with expedited retrieval, S3 Glacier provides you with an option to retrieve data in 3-5 minutes, which can be ideal for occasional urgent requests. With S3 Glacier, you can reliably store large or small amounts of data for as little as $0.01 per gigabyte per month, a significant savings compared to on-premises solutions. You can use lifecycle policies, as explained in the Amazon S3 Developer Guide, to push SAP HANA backups to S3 Glacier for long-term archiving.
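A sketch of such a lifecycle policy, applied with the Amazon CLI, is shown below. The bucket name, prefix, and transition/expiration periods are illustrative assumptions to adapt to your retention requirements:

```shell
# Transition SAP HANA backup objects under the bkps/ prefix to S3 Glacier
# after 30 days and expire them after one year (all values are examples).
aws s3api put-bucket-lifecycle-configuration --bucket <your-s3-bucket-name> \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "archive-hana-backups",
      "Filter": {"Prefix": "bkps/"},
      "Status": "Enabled",
      "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
      "Expiration": {"Days": 365}
    }]
  }'
```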

Backup Destination

The primary difference between backing up SAP systems on Amazon compared with traditional on-premises infrastructure is the backup destination. Tape is the typical backup destination used with on-premises infrastructure. On Amazon, backups are stored in Amazon S3. Amazon S3 has many benefits over tape, including the ability to automatically store backups offsite from the source system, since data in Amazon S3 is replicated across multiple facilities within the Amazon Region.

SAP HANA systems provisioned with Amazon Launch Wizard for SAP are configured with a set of EBS volumes to be used as an initial local backup destination. HANA backups are first stored on these local EBS volumes and then copied to Amazon S3 for long-term storage.

You can use SAP HANA Studio, SQL commands, or the DBA Cockpit to start or schedule SAP HANA data backups. Log backups are written automatically unless disabled. The /backup file system is configured as part of the deployment process.
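As a command-line sketch of starting such a data backup, assuming an hdbuserstore key named BACKUP (the same assumption made by the backup script later in this section):

```shell
# Start an SAP HANA data backup into the /backup file system.
# The BACKUP key and <SID> path are assumptions; adjust to your system.
hdbsql -U BACKUP "BACKUP DATA USING FILE ('/backup/data/<SID>/MANUAL')"
```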


Figure 3: SAP HANA file system layout

The SAP HANA global.ini configuration file has been customized for database backups to go directly to /backup/data/<SID>, while automatic log archival files go to /backup/log/<SID>.

[persistence]
basepath_shared = no
savepoint_intervals = 300
basepath_datavolumes = /hana/data/<SID>
basepath_logvolumes = /hana/log/<SID>
basepath_databackup = /backup/data/<SID>
basepath_logbackup = /backup/log/<SID>

Some third-party backup tools like Commvault, NetBackup, and IBM Tivoli Storage Manager (IBM TSM) are integrated with Amazon S3 capabilities and can be used to trigger and save SAP HANA backups directly into Amazon S3 without needing to store the backups on EBS volumes first.

Amazon CLI

The Amazon Command Line Interface (Amazon CLI), which is a unified tool to manage Amazon services, is installed as part of the base image. Using various commands, you can control multiple Amazon services from the command line directly and automate them through scripts. Access to your S3 bucket is available through the IAM role assigned to the instance (as discussed earlier). Using the Amazon CLI commands for Amazon S3, you can list the contents of the previously created bucket, back up files, and restore files, as explained in the Amazon CLI documentation.

imdbmaster:/backup # aws s3 ls --region=us-east-1 s3://node2-hana-s3bucket-gcynh5v2nqs3
Bucket: node2-hana-s3bucket-gcynh5v2nqs3
Prefix:
      LastWriteTime     Length Name
      -------------     ------ ----

Backup Example

Here are the steps you can take for a typical backup task:

  1. In the SAP HANA Backup Editor, choose Open Backup Wizard. You can also open the Backup Wizard by right-clicking the system that you want to back up and choosing Back Up.

    1. Select the destination type File. This will back up the database to files in the specified file system.

    2. Specify the backup destination (/backup/data/<SID>) and the backup prefix.

      
      Figure 4: SAP HANA backup example

    3. Choose Next and then Finish. A confirmation message will appear when the backup is complete.

    4. Verify that the backup files are available at the OS level. The next step is to push or synchronize the backup files from the /backup file system to Amazon S3 by using the aws s3 sync command.

      imdbmaster:/ # aws s3 sync backup s3://node2-hana-s3bucket-gcynh5v2nqs3 --region=us-east-1
  2. Use the Amazon Web Services Management Console to verify that the files have been pushed to Amazon S3. You can also use the aws s3 ls command shown previously in the Amazon Command Line Interface section.

    
    Figure 5: Amazon S3 bucket contents after backup

Tip

The aws s3 sync command will only upload new files that don’t exist in Amazon S3. Use a periodically scheduled cron job to sync, and then delete files that have been uploaded. See SAP Note 1651055 for scheduling periodic backup jobs in Linux, and extend the supplied scripts with aws s3 sync commands.
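A sketch of such a cron entry is shown below; the schedule, bucket name, and Region are illustrative placeholders:

```shell
# Example root crontab entry: sync new backup files to Amazon S3 hourly.
# Bucket name and Region are placeholders; pair this with a cleanup step
# that deletes local files already uploaded.
0 * * * * /usr/local/bin/aws s3 sync /backup s3://<your-s3-bucket-name>/backup --region us-east-1
```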

Scheduling and Executing Backups Remotely

You can use the Amazon Systems Manager Run Command, along with Amazon CloudWatch Events, to schedule backups of your SAP HANA system remotely without the need to log in to the EC2 instances. You can also use cron or any other instance-level scheduling mechanism.

The Systems Manager Run Command lets you remotely and securely manage the configuration of your managed instances. A managed instance is any EC2 instance or on-premises machine in your hybrid environment that has been configured for Systems Manager. The Run Command enables you to automate common administrative tasks and perform ad hoc configuration changes at scale. You can use the Run Command from the Amazon EC2 console, the Amazon CLI, Windows PowerShell, or the Amazon SDKs.

Systems Manager Prerequisites

Systems Manager has the following prerequisites.

Supported operating system (Linux)

Instances must run a supported version of Linux.

64-bit and 32-bit systems:

  • Amazon Linux 2014.09, 2014.03 or later

  • Ubuntu Server 16.04 LTS, 14.04 LTS, or 12.04 LTS

  • Red Hat Enterprise Linux (RHEL) 6.5 or later

  • CentOS 6.3 or later

64-bit systems only:

  • Amazon Linux 2015.09, 2015.03 or later

  • Red Hat Enterprise Linux (RHEL) 7.x or later

  • CentOS 7.1 or later

  • SUSE Linux Enterprise Server (SLES) 12 or higher

For the latest information about supported operating systems, see the Amazon Systems Manager documentation.

Roles for Systems Manager

Systems Manager requires an IAM role for instances that will process commands and a separate role for users who are executing commands. Both roles require permission policies that enable them to communicate with the Systems Manager API. You can choose to use Systems Manager managed policies or you can create your own roles and specify permissions. For more information, see Configuring Security Roles for Systems Manager in the Amazon documentation.

If you are configuring on-premises servers or virtual machines (VMs) that you want to configure using Systems Manager, you must also configure an IAM service role. For more information, see Create an IAM Service Role in the Amazon documentation.

SSM Agent (EC2 Linux instances)

Amazon Systems Manager Agent (SSM Agent) processes Systems Manager requests and configures your machine as specified in the request. You must download and install SSM Agent to your EC2 Linux instances. For more information, see Installing SSM Agent on Linux in the Amazon documentation.

To schedule remote backups, follow these high-level steps:

  1. Install and configure SSM Agent on the EC2 instance. For detailed installation steps, see the Amazon Systems Manager documentation.

  2. Provide SSM access to the EC2 instance role that is assigned to the SAP HANA instance. For detailed information on how to assign SSM access to a role, see the Amazon Systems Manager documentation.

  3. Create an SAP HANA backup script. You can use the following sample script as a starting point and modify it to meet your requirements.

    #!/bin/sh
    set -x
    S3Bucket_Name=<Name of the S3 bucket where backup files will be copied>
    TIMESTAMP=$(date +\%F\_%H\%M)
    exec 1>/backup/data/${SAPSYSTEMNAME}/${TIMESTAMP}_backup_log.out 2>&1
    echo "Starting to take backup of Hana Database and Upload the backup files to S3"
    echo "Backup Timestamp for $SAPSYSTEMNAME is $TIMESTAMP"
    BACKUP_PREFIX=${SAPSYSTEMNAME}_${TIMESTAMP}
    echo $BACKUP_PREFIX
    # source HANA environment
    source $DIR_INSTANCE/hdbenv.sh
    # execute command with user key
    hdbsql -U BACKUP "backup data using file ('$BACKUP_PREFIX')"
    echo "HANA Backup is completed"
    echo "Continue with copying the backup files in to S3"
    echo $BACKUP_PREFIX
    sudo -u root /usr/local/bin/aws s3 cp --recursive /backup/data/${SAPSYSTEMNAME}/ s3://${S3Bucket_Name}/bkps/${SAPSYSTEMNAME}/data/ --exclude "*" --include "${BACKUP_PREFIX}*"
    echo "Copying HANA Database log files in to S3"
    sudo -u root /usr/local/bin/aws s3 sync /backup/log/${SAPSYSTEMNAME}/ s3://${S3Bucket_Name}/bkps/${SAPSYSTEMNAME}/log/ --exclude "*" --include "log_backup*"
    sudo -u root /usr/local/bin/aws s3 cp /backup/data/${SAPSYSTEMNAME}/${TIMESTAMP}_backup_log.out s3://${S3Bucket_Name}/bkps/${SAPSYSTEMNAME}
    Note

    This script assumes that hdbuserstore has a key named BACKUP.

  4. Test a one-time backup by executing an ssm command directly.

    Note

    For this command to execute successfully, you will have to enable <sid>adm login using sudo.

    aws ssm send-command --instance-ids <HANA master instance ID> \
      --document-name Amazon-RunShellScript \
      --parameters commands="sudo -u <HANA_SID>adm TIMESTAMP=$(date +\%F\_%H\%M) SAPSYSTEMNAME=<HANA_SID> DIR_INSTANCE=/hana/shared/${SAPSYSTEMNAME}/HDB00 -i /usr/sap/HDB/HDB00/hana_backup.sh"
  5. Using CloudWatch Events, you can schedule backups remotely at any desired frequency. Navigate to the CloudWatch Events page and create a rule.


Figure 6: Creating Amazon CloudWatch Events rules

When configuring the rule:

  1. Choose Schedule.

  2. Select SSM Run Command as the target.

  3. Select Amazon-RunShellScript (Linux) as the document type.

  4. Choose InstanceIds or Tags as the target key.

  5. Choose Constant under Configure Parameters, and type the run command.

Restoring SAP HANA Backups and Snapshots

Restoring SAP Backups

To restore your SAP HANA database from a backup, perform the following steps:

  1. If the backup files are not already available in the /backup file system but are in Amazon S3, restore the files from Amazon S3 by using the aws s3 cp command. This command has the following syntax:

    aws s3 cp --region <region> <s3-bucket/path> <local destination> --recursive --include "<backup-prefix>*"

    For example:

    imdbmaster:/backup/data/YYZ # aws --region us-east-1 s3 cp s3://node2-hana-s3bucket-gcynh5v2nqs3/data/YYZ . --recursive --include COMPLETE*
  2. Recover the SAP HANA database by using the Recovery Wizard as outlined in the SAP HANA Administration Guide. Specify File as the destination type and enter the correct backup prefix.

    
    Figure 7: Restore example

  3. When the recovery is complete, you can resume normal operations and clean up backup files from the /backup/<SID>/* directories.

Restoring EBS Snapshots

To restore EBS snapshots, perform the following steps:

  1. Create a new volume from the snapshot:

    aws ec2 create-volume --region us-west-2 --availability-zone us-west-2a --snapshot-id snap-1234abc123a12345a --volume-type gp2
  2. Attach the newly created volume to your EC2 host:

    aws ec2 attach-volume --region=us-west-2 --volume-id vol-4567c123e45678dd9 --instance-id i-03add123456789012 --device /dev/sdf
  3. Mount the logical volume associated with SAP HANA data on the host:

    mount /dev/sdf /hana/data
  4. Start your SAP HANA instance.

Note

For large mission-critical systems, we highly recommend that you execute the volume initialization command on the database data and log volumes after restoring the AMI but before starting the database. Executing the volume initialization command will help you avoid extensive wait times before the database is available. Here is the sample fio command that you can use:

sudo fio --filename=/dev/xvdf --rw=read --bs=128K --iodepth=32 --ioengine=libaio --direct=1 --name=volume-initialize

For more information about initializing Amazon EBS volumes, see the Amazon documentation.

Restoring AMI Snapshots

You can restore your SAP HANA AMI snapshots through the Amazon Web Services Management Console. Open the Amazon EC2 console, and choose AMIs in the navigation pane.

Choose the AMI that you want to restore, expand Actions, and then choose Launch.


Figure 8: Restoring an AMI snapshot