
VMware protection in Hitachi Data Instance Director - Part 2: Protection

Blog Post created by John Wesley on Jan 23, 2019

In this blog post I want to talk about the kinds of VMware protection that HDID can provide, and cover the situations in which you may want to use one or the other (or even both!).

 

The HDID VMware protection options broadly fall into two areas:

  • Software based protection
  • Hardware based protection

 

Software based protection

 

There are two methods of providing software protection for a Virtual Machine, depending upon whether or not you prefer to install an HDID client (agent) on every Virtual Machine that requires protection. This decision may be forced by how the Virtual Machine disks are attached; see the last section of this blog post for more details.

 

All the Software based protection solutions make use of the HDID Repository. For those new to HDID, here's a quick rundown of what the Repository is and what it can do, taken from the HDID User Guide:

 

The Repository is a multi-functional object secondary storage system that can simultaneously perform multiple types of storage operations.

 

Unlike legacy storage systems (which inefficiently create copies of data despite trying to de-duplicate it and require separate stores for backup, archiving and CDP), the Repository has the ability to provide backup, archiving and continuous data protection (CDP) in the same store. This enables the repository to employ several data reduction techniques, delivering massive storage cost savings. With HDID, if a policy targets backup, archiving, and CDP to the same repository store, then the data is stored only once. The data also is only sent once, reducing the amount of data that is moved around the system.

 

The TL;DR of this is that the HDID Repository is incremental forever, without a rehydration or performance penalty during recovery.
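To make the "store once" idea concrete, here is a minimal, purely illustrative Python sketch of a content-addressed store. It is not how HDID implements its Repository; it just shows why data referenced by multiple policies (backup, archive, CDP) only needs to be written once, and why a restore can read chunks back directly without rehydrating a synthetic full.

```python
# Conceptual sketch only -- not HDID's actual implementation.
# Chunks are addressed by their content hash, so a block referenced by
# backup, archive and CDP policies is written to the store a single time.
import hashlib

class SingleInstanceStore:
    def __init__(self):
        self.chunks = {}       # hash -> chunk data (stored once)
        self.catalogues = {}   # policy name -> list of chunk-hash lists

    def ingest(self, policy, data, chunk_size=4096):
        refs = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            key = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(key, chunk)   # duplicate chunks are not stored again
            refs.append(key)
        self.catalogues.setdefault(policy, []).append(refs)

    def restore(self, policy, version=-1):
        # Chunks are read back directly -- no rehydration step.
        return b"".join(self.chunks[k] for k in self.catalogues[policy][version])

store = SingleInstanceStore()
payload = b"example virtual machine data" * 1000
store.ingest("backup", payload)
store.ingest("archive", payload)   # identical data adds no extra chunk storage
assert store.restore("backup") == payload
```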

 

Creating a Repository is very simple; it can be found in the Storage section:

 

repo_node.PNG

 

To create a new Repository node, only a directory in which to store data is required:

 

repo_node_dir.PNG

 

Note

It is important to consider carefully where to place the Repository in order to maximize performance and minimize network traffic. For instance, if the Repository node and the VMware ESX Agentless node share the same Proxy, then the VMware SAN Transport Mode could be used.

 

Using an HDID Agent

 

Installing an HDID Agent on a Virtual Machine allows it to be attached to the HDID master node; it will be represented on the Nodes screen as an Operating System node (OS Host). This node will then behave just like a non-virtualized OS host, and can be protected using Path based classifications rather than VMware classifications. The data is still stored in the Repository, but is less VMware specific, reducing the available restore options. As such, I won't explicitly cover how to set up the Policy and Dataflow when using the HDID agent in this blog series.

 

Using VMware vStorage API for Data Protection (VADP)

 

In order to provide software protection of Virtual Machines without the installation of an agent, HDID uses the VDDK (Virtual Disk Development Kit) provided by VMware to read data from VMDK files and store it in the HDID Repository. This method is capable of supporting Datastores on array based storage as well as vSAN, and connects to vSphere over the network using vCenter as the endpoint.
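For readers who want to see what this agentless, vCenter-centric view looks like outside of the HDID UI, the following pyVmomi sketch connects to vCenter and lists the VMDK files backing a VM's virtual disks. It is only an illustration of the underlying vSphere API surface, not HDID code; the host name, credentials and VM name are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- substitute your own vCenter and credentials.
ctx = ssl._create_unverified_context()   # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ctx)
try:
    # Enumerate all Virtual Machines via a container view rooted at the top folder.
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.VirtualMachine], True)
    try:
        for vm in view.view:
            if vm.name != "example-vm":          # placeholder VM name
                continue
            for device in vm.config.hardware.device:
                if isinstance(device, vim.vm.device.VirtualDisk):
                    # e.g. "Hard disk 1  [Datastore1] example-vm/example-vm.vmdk"
                    print(device.deviceInfo.label, device.backing.fileName)
    finally:
        view.DestroyView()
finally:
    Disconnect(si)
```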

 

Creating a Policy for software based protection requires use of the "Backup" operation:

 

vadp_operation.PNG

 

This operation has the familiar set of attributes:

 

vadp_operation_options.PNG

 

Now that the nodes have been created and the Policy has been defined, the Dataflow can be constructed:

 

vadp_dataflow.PNG

 

Note

The HDID rules compiler will generate a warning explaining that data cannot be automatically validated for VMware nodes. This can be safely ignored.

 

Once distributed and triggered, backups appear in the Repository section of the Restore screen:

 

restore_screen.PNG

 

Like so

 

repository_restore_screen.PNG

 

Clicking on one of these records shows what has been protected:

 

repository_restore_screen_snap.PNG

 

From here, various restore operations can be performed, which I will cover in a future blog post.

 

 

Hardware based protection

 

Hardware based protection only applies to VMware Datastores backed by Hitachi Block Storage. Each VMware Datastore is resolved to a Logical Unit (LUN), which in turn can become part of a Snapshot or Replication within the storage.
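As an illustration of that Datastore-to-LUN resolution, here is another hedged pyVmomi sketch (not HDID code, reusing a vCenter connection like the one in the earlier example): each extent of a VMFS datastore names the SCSI disk backing it, which for Hitachi Block Storage is typically an naa.* identifier that the array can then snapshot or replicate. The datastore name below is a placeholder.

```python
from pyVmomi import vim

def datastore_backing_luns(si, datastore_name):
    """Return the SCSI disk identifiers (e.g. naa.*) backing a VMFS datastore."""
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.Datastore], True)
    try:
        for ds in view.view:
            if ds.name != datastore_name:
                continue
            info = ds.info
            if isinstance(info, vim.host.VmfsDatastoreInfo):
                return [extent.diskName for extent in info.vmfs.extent]
        return []
    finally:
        view.DestroyView()

# Example (placeholder datastore name), assuming 'si' from the earlier sketch:
# print(datastore_backing_luns(si, "Datastore1"))
```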

 

Whilst this can be achieved manually, by using only Block Storage nodes and manually specifying LUNs, it is significantly simpler and more flexible to use the VMware ESX Application node to leverage the selection methods described in my previous post.

 

We've already covered setting the classification for the Policy, so it's just the operation that we need to select

 

vadp_operation.PNG

 

From this point we can either choose Snapshot (a local differential copy of the data) or Replication (a local or remote copy of the data). Replications can be continuous, always mirroring changes, or batch, only mirroring changes when triggered.

 

Either option leads to the familiar looking operation attributes screen

 

snapshot_operation_options.PNG

 

replication_operation_options.PNG

 

Implementing the Snapshot Policy on a Dataflow is very straightforward; it is done directly on the VMware ESX Agentless node:

 

snapshot_dataflow.PNG

 

Once the Snapshot type and Pool have been defined, this Dataflow can be compiled and distributed.

 

For a Replication, a destination node is required. In this example, a local and a remote Replication are used to create a Dataflow that is compatible with HDID's SRM support.

 

replication_dataflow.PNG

 

Once triggered, we can once again use the Restore screen to see our backups, this time selecting Hitachi Block:

 

restore_screen.PNG

 

The screen shows Snapshots and Replications

 

block_restore_screen.PNG

 

Clicking on one of these records shows the contents in a slightly different layout to the Repository backup

 

block_restore_screen_snap.PNG

 

Once again, from here various restore operations can be performed that will be discussed in a future blog post.

 

Using Software and Hardware protection together

 

Whilst I have shown the Software and Hardware protection options separately in this blog post, this is only done for clarity. These solutions are fully compatible and are often used together in order to make a more cost-effective backup solution that meets the protection objectives.

 

What's the right protection for me?

 

The answer to this question is always "it depends", but to help you make an informed choice I've listed the available protection methods against the requirements that should be considered:

 

| Requirement              | Hardware Protection    | Software Protection (VADP)       | Software Protection (Agent)      |
|--------------------------|------------------------|----------------------------------|----------------------------------|
| Recovery Points          | Frequent               | Mid/Low                          | Low                              |
| Impact to Systems        | Low                    | High / Mid (if using SAN)        | Very High                        |
| Retention                | Days-Weeks             | Days-Years                       | Days-Years                       |
| Media Cost               | High (primary storage) | Low (offline / nearline storage) | Low (offline / nearline storage) |
| Offsite Recovery capable | Yes                    | Yes                              | Yes                              |
| Setup Time               | Low                    | Low                              | Requires agent on every VM       |

 

What can't be protected?

 

The table below explains the current level of protection provided by each method

 

|                              | Hardware Protection | Software Protection (VADP) | Software Protection (Agent) |
|------------------------------|---------------------|----------------------------|-----------------------------|
| Hitachi Hardware Support     | Yes                 | Yes                        | Yes                         |
| Non-Hitachi Hardware Support | No                  | Yes                        | Yes                         |
| vRDM                         | No                  | Yes                        | Yes                         |
| pRDM                         | No                  | No                         | Yes                         |

 

Thanks for reading, next time we'll cover the various restore options for our backups.
