
Replication with Ops Center Protector on 25Gb iSCSI

By Karan Patani posted 10-16-2023 15:41

  


 


With the introduction of Ops Center Protector v7.6.0, several replication features, including Snapshots, Shadow Image (SI), True Copy (TC), and Universal Replicator (UR), are now supported with iSCSI over a 25Gb connection.

 

Our objectives are as follows:

·       To test all the replication features using Ops Center Protector v7.6.0 with iSCSI over 25Gb.

·       To conduct performance testing for each replication feature with several iterations.

Introduction

The Ops Center Protector software provides protection, recovery, and retention capabilities. With an easy-to-use whiteboard-style user interface, you can easily create new policies and data flows to automate and simplify data management.

Test Methodology

·       Ops Center Protector is set up on two virtual machine (VM) servers using a stand-alone installer: one is the Ops Center Protector Master VM and the other is the Ops Center Protector client ISM. The Master VM acts as a central hub that distributes the policies used for a specific data flow and provides the UI, whereas the ISM VM controls Hitachi Block storage devices using Hitachi Command Control Interface (CCI).

·       We are using the Hitachi Virtual Storage Platform E590 (VSP E590) storage system for iSCSI testing over a 25Gb connection. Each VSP E590 storage system is connected to a Linux host using a 25Gb iSCSI connection for data transfer, and the replication ports of the two VSP E590 storage systems are connected over 32Gb FC links.

·       The following resources were discovered in the Ops Center Protector UI:

o   Two VSP E590 storage systems were discovered as block storage devices by specifying the Controller IP address and credentials.

o   Two Linux servers were discovered as block hosts.

·       We created new policies and data flows for replications, such as Snapshots, SI, TC, and UR.

·       We performed replication feature tests using two sets of volumes replicated from the VSP E590 storage system to the Linux server (1 x 128GB and 100 x 50GB volumes).

·       We ran 15 iterations of each replication test and measured the average performance achieved.

Environment Configuration

The test environment layout is as follows:


 

The system configuration for each component is as follows:

 

Software Components

| Software | Software Version | OS Version |
|---|---|---|
| VMware vCenter | VCSA v8.0.0.10000 | - |
| Protector Master VM | v7.6.0 | RHEL v7.6 |
| Protector ISM VM | v7.6.0 | RHEL v7.6 |
| Hitachi CCI | v01-68-03/01 | - |

Server Components

| Server | CPU | Memory | OS Version |
|---|---|---|---|
| 2 x HA820 (RHEL Data server) | 24 CPU(s) x Intel(R) Xeon(R) Silver 4310 CPU @ 2.10GHz | 128GB | RHEL v8.2 |
| DS220 (ESXi server) | 72 CPU(s) x Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz | 128GB | ESXi v7.0.2, 17630552 |

Hitachi Subsystem

| Storage System | VSP E590 |
|---|---|
| Microcode | 93-07-01-40/00 (SVOS 9.8.6) |
| Drives | SSD |
| PG | 2 x RAID 6 (6D+2P) |
| DP Pool | 2 x HDP |

Server Configuration

To configure the server, complete the following steps:

 

1.     Configure the same number of subnets as the number of iSCSI ports on the host.

| Server | NIC | MAC Address | NIC IP |
|---|---|---|---|
| Host 1 | eno3 | 93:40:9c:1f:20:12 | 192.168.50.101 |
| Host 1 | eno4 | 93:40:9c:1f:20:13 | 192.168.60.102 |
 

2.     Change the MTU value to 9000 in the /etc/sysconfig/network-scripts/ifcfg-ensxxx file.
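As a minimal sketch, an ifcfg file for one of the 25Gb ports might look like the following. The interface name and IP address are taken from the eno3 row of the table in step 1; all other values are illustrative and should be adjusted for your environment.

```shell
# /etc/sysconfig/network-scripts/ifcfg-eno3 (illustrative example)
TYPE=Ethernet
BOOTPROTO=static
NAME=eno3
DEVICE=eno3
ONBOOT=yes
IPADDR=192.168.50.101  # from the subnet plan in step 1
PREFIX=24              # one subnet per iSCSI port
MTU=9000               # jumbo frames; must match the switch and storage ports
```

After editing the file, reload and reactivate the connection (for example, `nmcli connection reload` followed by `nmcli connection up eno3`) for the MTU change to take effect.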


3.     If you are using multipathing, ensure that it is configured.
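On RHEL, multipathing is provided by the device-mapper-multipath package. A minimal /etc/multipath.conf along these lines enables it with common defaults (the values shown are illustrative, not the exact configuration used in the test):

```
# /etc/multipath.conf -- minimal illustrative example
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
```

Running `mpathconf --enable --with_multipathd y` generates a default configuration and starts the daemon; once the LUNs are presented, `multipath -ll` shows the path topology.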


4.     Verify that the port speed is set to 25Gb/s.
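Assuming the eno3/eno4 interface names from step 1, the negotiated link speed and the active MTU can be checked from the host, for example:

```shell
# Check the negotiated link speed (expect "Speed: 25000Mb/s" on a 25Gb port)
ethtool eno3 | grep -i speed
# Confirm that the jumbo-frame MTU from step 2 is active
ip link show eno3 | grep -o 'mtu [0-9]*'
```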


5.     Add the following iSCSI parameters in the /etc/sysctl.conf file.
Note that net.ipv4.tcp_window_scaling enables TCP window scaling, which allows the receive window to grow to a maximum of 1GB, and net.core.rmem_max defines the maximum receive buffer size.
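The exact values are environment-specific, but a sketch of the kind of parameters this step adds to /etc/sysctl.conf (the buffer sizes shown are illustrative, not the values used in the test) is:

```
# /etc/sysctl.conf -- illustrative iSCSI tuning
# Allow the TCP window to scale beyond 64KB (up to a 1GB maximum)
net.ipv4.tcp_window_scaling = 1
# Maximum receive and send socket buffer sizes, in bytes
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# min/default/max TCP receive and send buffers, in bytes
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
```

Apply the settings with `sysctl -p`.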


6.     Set the iSCSI parameters in the /etc/iscsi/iscsid.conf file.

 

The following iSCSI parameters are used in the /etc/iscsi/iscsid.conf file:
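The values below are illustrative examples of open-iscsi settings that are typically tuned for higher-throughput links; they are not the exact values used in this test.

```
# /etc/iscsi/iscsid.conf -- illustrative excerpt
# Seconds to wait before failing I/O on a lost path
node.session.timeo.replacement_timeout = 120
# Send a NOP-Out ping to the target every 5 seconds; fail it after 5 seconds
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
# Outstanding commands per session and queue depth per LUN
node.session.cmds_max = 2048
node.session.queue_depth = 128
```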

Switch Configuration

Configure the settings as shown in the following figure for network segregation and optimum performance for all the switch ports where the server and storage are connected.


Storage Configuration

1.     To be in sync with the switch and the iSCSI Initiator, ensure that all the iSCSI Target ports on the storage system are configured as follows:

 


 

2.     When the configuration is complete, discover the iSCSI targets on the server.


 

3.     When discovered, verify the nodes and then log in to them.
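Steps 2 and 3 can be sketched with open-iscsi's iscsiadm tool. The storage port address 192.168.50.11 below is an example on the same subnet as eno3; substitute your storage system's iSCSI target port IPs.

```shell
# Discover targets advertised by one storage iSCSI port (IP is an example)
iscsiadm -m discovery -t sendtargets -p 192.168.50.11:3260
# List the discovered nodes
iscsiadm -m node
# Log in to all discovered targets (or pass -p/-T to log in to a specific one)
iscsiadm -m node --login
```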


 

While setting up the iSCSI environment over 25Gb, ensure that the end-to-end configuration and the MTU settings are in sync across the server, switch, and storage. Also ensure that the TCP window scaling and flow control settings are in place along with the MTU settings; this improves the overall performance of a 25Gb setup.

 

Replication using Ops Center Protector UI

For information on Hitachi Ops Center Protector, visit: https://www.hitachivantara.com/en-us/products/storage-software/data-protection-cyber-resiliency/ops-center-protector.html

 

For information on how to configure Ops Center Protector, visit: https://knowledge.hitachivantara.com/Documents/Management_Software/Ops_Center/10.9.x/Protector

 

Test Cases – Operation times for different backup functions

Hitachi Ops Center Protector performance was validated using Hitachi Storage Virtualization Operating System (SVOS) 9.8.6 with 25Gb iSCSI cards on a VSP E590 storage system. This was the first time we tested using the iSCSI protocol at 25Gb speed.

 

Operation times include:

  • Block storage-based snapshot (Hitachi Thin Image)
  • Hitachi block LDEV replication with Hitachi Shadow Image (SI)
  • Hitachi block LDEV replication with Hitachi True Copy (TC)
  • Hitachi block LDEV replication with Hitachi Universal Replicator (UR)

 

Data collection: 

·       Response times were collected from the Jobs pane in the Ops Center Protector UI.

·       The test was repeated multiple times, and the average run time was calculated from 15 iterations.

·       Each test was run using an isolated storage system in an identical environment (pools, RAID level, resource groups, and so on).

·       For each operation, two variations were run. The first variation provisioned one volume of 128 GB, and the second variation provisioned 100 volumes of 50GB each. 

·       Volumes were created and assigned; however, no data was added.

 

Operational performance:

| Test Item | Time (mm:ss) |
|---|---|
| Block storage-based snapshot (HTI) (1 x 128GB volume) | 00:13 |
| Block storage-based snapshot (HTI) (100 x 50GB volumes) | 07:28 |
| Replicate Hitachi block LDEV with Shadow Image (1 x 128GB volume) | 00:42 |
| Replicate Hitachi block LDEV with Shadow Image (100 x 50GB volumes) | 09:20 |
| Replicate Hitachi block LDEV with TC (1 x 128GB volume) | 00:44 |
| Replicate Hitachi block LDEV with TC (100 x 50GB volumes) | 07:32 |
| Replicate Hitachi block LDEV with UR (1 x 128GB volume) | 01:00 |
| Replicate Hitachi block LDEV with UR (100 x 50GB volumes) | 08:10 |

 


 

Limitations for the 25Gb setup

The limitations for the 25Gb setup are as follows:

·       iSCSI storage port direction can only be set as Target.

·       Remote connections (MCU/RCU) are currently only supported with an FC or 10Gb iSCSI setup.

Summary

The iSCSI setup handled the varied requests from Protector for the different types of replication pairs. There was no unusual spike in CPU or memory usage on the server, and both local and remote replication worked without any issues. Protector required no special configuration beyond the 25Gb iSCSI configuration on the hardware.

 

Because this was the first time that we ran the tests, the results can be used as a benchmark for future tests in the 25Gb environment.

#replication #iSCSI #DisasterRecovery
