Rick Andersen

Protect Microsoft® Hyper-V Private Cloud Management Cluster with Hitachi Application Protector

Discussion created by Rick Andersen on Jan 23, 2014
Latest reply on Jun 10, 2014 by Herman Rutten

 

The purpose of this technical posting is to show how organizations can use Hitachi Application Protector to protect the Microsoft Hyper-V® Private Cloud management cluster. The Microsoft Private Cloud leverages the Microsoft System Center family, which provides a comprehensive set of tools to manage the private cloud. The components of Microsoft System Center are database-driven applications.

SQL Server® 2012 Enterprise Edition provides the highly available, well-performing database platform that is critical to the overall management of the private cloud environment. Maintaining high availability for the private cloud requires that the System Center databases be protected.

Hitachi Application Protector can protect your SQL data in the management cluster from the following scenarios:

§  Accidental deletion of data

§  Logical database file corruption

§  Physical file corruption caused by disk or hardware failure

Table 1. Test Case

| Test Case | Pass/Fail Criteria | Result |
| --- | --- | --- |
| SQL database protection from the following: accidental deletion of data; logical database file corruption; physical file corruption caused by disk or hardware failure | Full database recovery from snapshot | Successful |

 

Hitachi Application Protector allows you to do the following:

§  Schedule or perform backups of your SQL Server databases on an as-needed basis

§  Restore SQL Server databases rapidly in the event of accidental data deletion or data corruption

Hitachi Application Protector leverages Hitachi Thin Image snapshot technology and Microsoft Volume Shadow Copy Service (VSS) to provide the following application-consistent data protection:

§  Snapshot-based, application-consistent backup, recovery, and data protection software

§  Off-loaded host server backup overhead through storage snapshots

§  Disk-to-disk backup and recovery management, leveraging the Microsoft Volume Shadow Copy Service infrastructure and Hitachi file clone software

§  Scheduling of single, daily, weekly, or monthly snapshots

This document provides the following:

§  A proof point of basic functionality of this solution

§  High level technical reference for considering this solution

§  High level reference of the use case implementation

This document does not cover the following:

§  Performance measurement

§  Sizing information

§  Best practices

§  Implementation details

For implementation details, contact your Hitachi Data Systems representative.

Note — Testing of this configuration was done in a lab environment. Many things affect production environments beyond prediction or duplication in a lab environment. Follow the recommended practice of conducting proof-of-concept testing in an isolated, non-production environment that otherwise matches your production environment before implementing this solution in production.

Use Case Overview

Microsoft SQL Server is a critical part of Microsoft Private Cloud management, and ensuring data protection is one of the highest priorities of the database administrator's job.

Hitachi Application Protector provides protection for the private cloud and its associated databases. You also have the ability to rapidly recover from database failure to minimize the loss of productivity that is associated with a service outage.

Tested Components

These hardware and software components were used in the Hitachi Application Protector testing environment.

Table 2. Hardware Components

| Hardware | Description | Version | Quantity |
| --- | --- | --- | --- |
| Hitachi Unified Storage VM | Dual controllers; 2 × 8 Gb/sec Fibre Channel ports used; 166 GB cache memory; 8 × 600 GB SAS disks | 73-02-00-00/01 | 1 |
| Hitachi Compute Blade 500 chassis | 8-blade chassis; 2 management modules; 6 cooling fan modules; 4 power supply modules; 2 Brocade 5460 Fibre Channel switch modules; 2 Brocade 10 GbE DCB switch modules | SVP: A0135-D-6829 | 1 |
| 520HB1 server blade | Half blade; 2 × 8-core Intel Xeon E5-2680 processors, 2.70 GHz; 160 GB RAM; Emulex 10 GbE CNA onboard network adapter; Hitachi 8 Gb/sec Fibre Channel mezzanine card | BMC: 01-56 | 2 |

Table 3. Software Components

| Software | Version |
| --- | --- |
| Hitachi Application Protector | 2.7.0.7 |
| Hitachi Dynamic Provisioning | Microcode dependent |
| Hitachi Thin Image | Licensed on Hitachi Unified Storage VM |
| Command Control Interface | CCI-01-30-03 |
| Hitachi VSS Hardware Provider | 4.9.1 |
| Microsoft Windows Server® | 2012 Standard Edition |
| Microsoft SQL Server | 2012 Enterprise Edition |

High Level Test Infrastructure

The testing of Hitachi Application Protector functionality used the following:

§  A Hitachi Compute Blade 500 chassis with two half-size server blades for the Microsoft Private Cloud management cluster, which also contains the Microsoft SQL Server 2012 cluster

§  Hitachi Unified Storage VM for the SAN storage

Figure 1 illustrates the high-level test infrastructure.

Figure 1. High-level test infrastructure (Hyper-V HAPRO diagram)

Test Result

This section presents the test result and success criteria for this testing.

Table 4. Test Results

| Test Case | Description | Result |
| --- | --- | --- |
| Backup and restore | This test case performed a backup of the SQL database and validated the ability to successfully restore the database in the event of data deletion or corruption of the SQL data or log files. Expected result: successfully back up the SQL database; successfully restore the SQL database | Passed |

Using Hitachi Application Protector

Hitachi Application Protector uses a snapshot technology based on the storage system. It leverages Volume Shadow Copy Service from Microsoft to give you application-consistent data protection.

Application Protector understands how and where the primary database file and its supporting files (such as log files) are stored. As a result, Application Protector ensures the tracking and backup of all application-related data changes at the requested point in time to guarantee recovery.

The storage system-based snapshot technology creates backup images quickly. The technology uses storage space efficiently to maintain changes to the database and log files.

Snapshot technology simplifies recovery because the protected data appears in a mirrored file system, just like the original. Right-click to recover the database and related files.

To learn more about Hitachi Application Protector, visit the Hitachi Data Systems website.

Using Hitachi Application Protector in a Microsoft SQL Server Clustered Environment

Using Hitachi Application Protector in a Microsoft SQL Server clustered environment requires manual steps to fail over the Hitachi Application Protector metadata from one node to the other in the event of a SQL node failure.

Hitachi Application Protector metadata falls into the following categories, based on location:

§  On-system metadata, such as WMI DB, Task Scheduler, and registry keys

§  On-disk metadata, such as on-disk objects, logs, and configuration

When using on-disk metadata, the storage location is set through the configurable Hitachi Application Protector metadata directory. Set this location to one of the following:

§  Non-shared volume

     §  This can be shared as a CIFS share during manual failover.

§  Shared non-cluster volume

§  Cluster resource volume

The following sections describe each option.

Non-Shared Volume

When hosting the metadata on a currently active, non-clustered, non-shared location on a given node, Hitachi Application Protector runs on that node.

Configure a separate, currently passive clustered node for Hitachi Application Protector to run.

Fail Over (Move Metadata to the New Active Node)

In case of a fail over, perform the following steps:

1.     Copy the metadata from the volume of the old active node (Node 1) to the volume of the new active node (Node 2).

Example: When the metadata path on Node 1 is shared as \\Node1\HAPRO\Metadata, type the following:

robocopy \\Node1\HAPRO\Metadata\ H:\HAPRO\Metadata

2.     Run the HAPRO_SYNC command to sync the on-disk metadata into the on-system metadata on the new active node (Node 2):

HAPRO_SYNC -sync system
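The two manual steps above can be wrapped in a small script. The following Python sketch only builds the command lines (a dry run) so the sequence can be reviewed before execution; the share and destination paths are the example values from this section, and the wrapper function itself is a hypothetical convenience, not part of Hitachi Application Protector.

```python
# Hypothetical helper that assembles the two failover commands described above:
# 1. robocopy the on-disk metadata from the old active node's share
# 2. HAPRO_SYNC to rebuild the on-system metadata on the new active node

def build_failover_commands(old_node_share, new_node_path):
    """Return the command lines for a manual metadata failover."""
    copy_cmd = ["robocopy", old_node_share, new_node_path]
    sync_cmd = ["HAPRO_SYNC", "-sync", "system"]
    return [copy_cmd, sync_cmd]

if __name__ == "__main__":
    # Example values from this section (Node 1 -> Node 2)
    for cmd in build_failover_commands(r"\\Node1\HAPRO\Metadata",
                                       r"H:\HAPRO\Metadata"):
        print(" ".join(cmd))
        # In a real failover, each command would be run on the new active
        # node, for example with subprocess.run(cmd, check=True).
```

Building the commands first, rather than executing them inline, makes it easy to log or review exactly what a failover would do before running it.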

Shared Non-Cluster Location

When using a non-clustered shared volume to store metadata, the shared volume may appear as a different drive letter on each system.

Note — Configure different metadata paths for Hitachi Application Protector on each cluster node, as shown in Table 5.

Table 5. Valid Configuration Example

| Server | Shared Volume | Drive Letter | Metadata Path |
| --- | --- | --- | --- |
| Node 1 | Vol-1 | H: | H:\HAPRO\Server1\ |
| Node 2 | Vol-1 | G: | G:\HAPRO\Server2\ |

 

Table 6 shows an example of an invalid configuration.

Table 6. Invalid Configuration Example

| Server | Shared Volume | Drive Letter | Metadata Path | Comment |
| --- | --- | --- | --- | --- |
| Node 1 | Vol-1 | H: | H:\HAPRO\Metadata\ | Invalid configuration, as it conflicts with Node 2 (same location) |
| Node 2 | Vol-1 | G: | G:\HAPRO\Metadata\ | Invalid configuration, as it conflicts with Node 1 (same location) |
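The difference between the valid configuration in Table 5 and the invalid one in Table 6 comes down to whether two nodes resolve to the same directory on the shared volume, regardless of the drive letter each node uses. This illustrative Python sketch (not a Hitachi tool) captures that check:

```python
# Illustrative check: on a shared non-cluster volume, each node must use a
# distinct metadata directory, regardless of the drive letter it mounts.

def find_conflicts(config):
    """config maps node name -> (shared volume, path relative to the volume).
    Returns pairs of nodes whose metadata paths collide."""
    seen = {}
    conflicts = []
    for node, (volume, rel_path) in config.items():
        key = (volume, rel_path.lower())   # drive letter deliberately ignored
        if key in seen:
            conflicts.append((seen[key], node))
        else:
            seen[key] = node
    return conflicts

# Valid configuration (Table 5): distinct directories per node
valid = {"Node 1": ("Vol-1", r"HAPRO\Server1"),
         "Node 2": ("Vol-1", r"HAPRO\Server2")}

# Invalid configuration (Table 6): both nodes point at the same directory
invalid = {"Node 1": ("Vol-1", r"HAPRO\Metadata"),
           "Node 2": ("Vol-1", r"HAPRO\Metadata")}

print(find_conflicts(valid))    # -> []
print(find_conflicts(invalid))  # -> [('Node 1', 'Node 2')]
```

The check keys on the volume plus the path relative to it, because the drive letter can legitimately differ between nodes while the underlying directory is the same.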

 

Fail Over

In case of a fail over, perform the following step.

§  If Node 2 is the new active node, and it just took over from Node 1, then run the following command to sync on-system metadata using on-disk metadata:

HAPRO_SYNC -sync system

Cluster Resource Location

When using cluster resources, the disks are visible only to the active node. Switch the cluster resource to the new active node during failover.

The volume is visible under the same drive letter on all of the cluster nodes. In this case, configure the Hitachi Application Protector metadata path as in the case of Shared Non-Cluster Location. It can be the same path on all the nodes. Table 7 provides an example configuration:

Table 7. Cluster Resource Location Example

| Server | Shared Volume | Drive Letter | Metadata Path |
| --- | --- | --- | --- |
| Node 1 | Vol-1 | H: | H:\HAPRO\Metadata\ |
| Node 2 | Vol-1 | H: | H:\HAPRO\Metadata\ |

Fail Over

In case of a fail over, perform the following step:

§  If Node 2 is the new active node, and it just took over from Node 1, then run the following command to sync on-system metadata using on-disk metadata:

HAPRO_SYNC -sync system

The following notes apply to all syncing of Hitachi Application Protector metadata:

§  The HAPRO_SYNC command syncs the on-system metadata using the on-disk metadata that was recently copied with robocopy. The snapshot metadata path need not be specified explicitly, assuming the path conforms to the configured metadata path. During this operation, HAPRO_SYNC reads the on-disk metadata from the configured snapshot metadata path.

§  If you additionally need to check the consistency of the on-system metadata, combine the {-check|-c} flag with the -sync flag, and run the following:

HAPRO_SYNC -sync system -check

§  If you wish to perform only on-system metadata consistency checking, which is recommended after running HAPRO_SYNC -s on the system, run the following:

HAPRO_SYNC {-check|-c}

§  If you need to copy Hitachi Application Protector on-disk metadata from a source location to a destination location, use this command:

HAPRO_SYNC {-replicate|-r} "source,destination"
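The HAPRO_SYNC invocations in this section follow a small, regular pattern. The Python helper below simply assembles the argument list for each documented operation; the helper itself is a hypothetical convenience, and only the flags shown in this section are assumed to exist.

```python
# Hypothetical builder for the HAPRO_SYNC invocations described in this section.

def hapro_sync_args(operation, source=None, destination=None, check=False):
    """Return the HAPRO_SYNC argument list for one documented operation."""
    if operation == "sync":
        # Rebuild on-system metadata from on-disk metadata
        args = ["HAPRO_SYNC", "-sync", "system"]
        if check:
            args.append("-check")   # also verify on-system consistency
        return args
    if operation == "check":
        # Consistency check only (recommended after a sync)
        return ["HAPRO_SYNC", "-check"]
    if operation == "replicate":
        # Copy on-disk metadata from source to destination
        return ["HAPRO_SYNC", "-replicate", f"{source},{destination}"]
    raise ValueError(f"unknown operation: {operation}")

print(" ".join(hapro_sync_args("sync", check=True)))
# -> HAPRO_SYNC -sync system -check
```

Centralizing the argument construction in one place keeps a failover script from scattering slightly different HAPRO_SYNC spellings across the codebase.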

General Statements

These general statements apply to using Hitachi Application Protector:

§  Although Hitachi Application Protector metadata can be hosted on a cluster resource, this may make manual metadata management a complex task.

§  Hitachi Data Systems recommends hosting metadata privately on a non-shared location, as Hitachi Application Protector is not yet fully cluster-aware.

§  As the command line interface can be used to copy the metadata, it should be possible to script the metadata movement successfully in case of a fail over. Hitachi Data Systems does not provide support for such scripts currently, but does support the correct functioning of the command line interface.
