Introduction
This blog shows how to implement Veritas InfoScale 8.0.2 on virtual machines hosted on ESXi servers, with remote replication provided by the Global-active device (GAD) feature, which creates and maintains synchronous remote copies of data volumes between the primary and secondary storage systems.
Block Diagram
The following diagrams show the test environment of Veritas InfoScale 8.0.2 on a virtual machine with GAD.

Figure 1: Test environment

Figure 2: Detailed test environment
Hardware Requirements
The following table lists the hardware specifications used in the validation:
For more information, see the hardware compatibility list for Veritas InfoScale Foundation, Availability, Storage, and Enterprise.
Software Requirements
The following table lists the software specifications used in the validation:

Note: Dynamic Multi-Pathing (DMP) does not perform multipathing within the VM. However, DMP is an integral part of the data path in Veritas InfoScale products and cannot be disabled.
For more information, see Veritas InfoScale™ Virtualization Guide - Linux on ESXi.
GAD Storage Configuration
Ports CL3-H and CL4-H on the primary storage system are used to create host groups for the P-VOLs of the GAD pairs.
· Host Group CL3-H-1: Create a host group with ID 1, set host_mode to 0x21 and host_mode_opt to 2, 13, 22, 54, 63, 114. Assign a 100GB LUN designated for VMDK to serve as the boot LUN for the VM NODE-1.
· Host Group CL3-H-2: Similarly, create a host group with ID 2, set host_mode to 0x21 and host_mode_opt to 2, 13, 22, 54, 63, 114. Assign a 100GB LUN designated for VMDK to serve as the boot LUN for the VM NODE-2.
· Host Group CL4-H-1: Create a host group with ID 1, set host_mode to 00 and host_mode_opt to 2, 13, 22, 25, 40, 68. Assign 10GB LUNs for Raw Device Mapping (RDM) to provide direct storage access to the VM NODE-1.
· Host Group CL4-H-2: Similarly, create a host group with ID 2, set host_mode to 00 and host_mode_opt to 2, 13, 22, 25, 40, 68. Assign 10GB LUNs for Raw Device Mapping (RDM) to provide direct storage access to the VM NODE-2.
Ports CL1-B and CL2-B on the secondary storage system are used to create host groups for the S-VOLs of the GAD pairs.
· Host Group CL1-B-1: Create a host group with ID 1, set host_mode to 0x21 and host_mode_opt to 2, 13, 22, 54, 63, 114. Assign a 100GB LUN designated for VMDK to serve as the boot LUN for the VM NODE-1.
· Host Group CL1-B-2: Similarly, create a host group with ID 2, set host_mode to 0x21 and host_mode_opt to 2, 13, 22, 54, 63, 114. Assign a 100GB LUN designated for VMDK to serve as the boot LUN for the VM NODE-2.
· Host Group CL2-B-1: Create a host group with ID 1, set host_mode to 00 and host_mode_opt to 2, 13, 22, 25, 40, 68. Assign 10GB LUNs for Raw Device Mapping (RDM) to provide direct storage access to the VM NODE-1.
· Host Group CL2-B-2: Similarly, create a host group with ID 2, set host_mode to 00 and host_mode_opt to 2, 13, 22, 25, 40, 68. Assign 10GB LUNs for Raw Device Mapping (RDM) to provide direct storage access to the VM NODE-2.

Note: All storage ports and host group IDs mentioned here are for illustrative purposes in this test environment.
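For reference, these host groups can also be created from the CCI command line with raidcom. The following is a minimal sketch for host group CL3-H-1 on the primary storage system, assuming a HORCM instance 0; the host group name NODE-1-VMDK and LDEV ID 0x1000 are placeholders for this illustration, and the host_mode mnemonic for 0x21 (VMWARE_EX here) should be confirmed against the CCI reference for your microcode:
# raidcom add host_grp -port CL3-H-1 -host_grp_name NODE-1-VMDK -I0
# raidcom modify host_grp -port CL3-H-1 -host_mode VMWARE_EX -host_mode_opt 2 13 22 54 63 114 -I0
# raidcom add lun -port CL3-H-1 -ldev_id 0x1000 -I0
The same pattern applies to the remaining host groups, substituting the port, host mode, host mode options, and LDEV IDs listed above.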
Remote Paths and Quorum Paths for GAD
· Storage ports CL3-C and CL4-C from the primary storage system and ports CL5-C and CL6-C from the secondary storage system are used to connect the remote paths.
· Storage port CL2-H from the primary storage system, port CL7-A from the secondary storage system, and ports CL3-E and CL4-E from the quorum storage system are used to connect the quorum paths.
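With CCI, the remote paths and the quorum disk can be registered along the following lines. This is a sketch only: the remote serial number 512345, model ID R900, path group ID 0, quorum ID 0, quorum LDEV 0x2045, and the HORCM instance number are placeholders, and the quorum LDEV must already be mapped from the quorum storage system as an external volume:
# raidcom add rcu -cu_free 512345 R900 0 -mcu_port CL3-C -rcu_port CL5-C -I0
# raidcom add rcu_path -cu_free 512345 R900 0 -mcu_port CL4-C -rcu_port CL6-C -I0
# raidcom add quorum -quorum_id 0 -ldev_id 0x2045 -I0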
The Command Control Interface (CCI) is used to configure GAD. The following GAD configuration output was captured from the test environment:
- Remote paths information from both primary and secondary storage:

- External storage paths information from both primary and secondary storage:

- Quorum information from both primary and secondary storage:

- Virtual Storage Machine (VSM) information:

- Host group information from both primary and secondary storage:

- GAD Pair information:

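The outputs listed above can be collected with CCI commands along these lines (a sketch; the HORCM instance numbers and the device group name gad_dg are placeholders, and the same commands are run against the secondary storage system with its own instance number):
# raidcom get rcu -I0
# raidcom get path -I0
# raidcom get quorum -quorum_id 0 -I0
# raidcom get resource -key opt -I0
# raidcom get host_grp -port CL3-H -I0
# pairdisplay -g gad_dg -fxce -IH0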
For more information on configuring Global-active device (GAD) and the Command Control Interface (CCI), see the Global-active Device and Command Control Interface Installation and Configuration Guide.
VMware, Guest OS, and InfoScale Configuration
To configure VMware and the guest OS, complete the following steps:
- Install VMware ESXi 7.0 U3 on the local disks of both ESXi hosts.
- Set ATS Heartbeat=OFF with the following esxcli command:
# esxcli system settings advanced set -i 0 -o /VMFS3/UseATSForHBOnVMFS5
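To confirm that the change took effect, the current value can be read back with the list subcommand of the same esxcli namespace; the Int Value field should report 0:
# esxcli system settings advanced list -o /VMFS3/UseATSForHBOnVMFS5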
- Create a VMFS datastore from ESXi host1 for NODE-1 using the 100GB LUN that is mapped to the host groups CL3-H-1 (primary storage) and CL1-B-1 (secondary storage).
Note: This datastore is designated for installing the guest OS.
- Create a VMFS datastore from ESXi host2 for NODE-2 using the 100GB LUN that is mapped to the host groups CL3-H-2 (primary storage) and CL1-B-2 (secondary storage).
Note: This datastore is designated for installing the guest OS.
- Create VMs and install the guest OS (Red Hat Enterprise Linux 9.3) on both ESXi host1 and ESXi host2.
- Assign all 10GB LUNs that are mapped to the host groups CL4-H-1 and CL4-H-2 (primary storage) as well as CL2-B-1 and CL2-B-2 (secondary storage) to both RHEL guests as physical compatibility mode RDMs (RDM-P).
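For reference, a physical compatibility mode RDM mapping file for one of these LUNs can be created with vmkfstools. This is a sketch in which the device identifier naa.xxxx, the datastore name, and the mapping file path are placeholders:
# vmkfstools -z /vmfs/devices/disks/naa.xxxx /vmfs/volumes/datastore1/NODE-1/rdm-lun1.vmdk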
- Connect two private interconnect links and one public link for the InfoScale cluster, then configure the port groups and vSwitches accordingly to provide network connectivity for the VMs.
- Install InfoScale 8.0.2 on a dedicated physical server, then configure a single-node VCS cluster as the Coordination Point (CP) server for fencing, as shown in the following:
- Install InfoScale 8.0.2 on both VMs (NODE-1 and NODE-2) and configure InfoScale Enterprise, as shown in the following:
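Once both nodes are configured, it is worth verifying from each guest that the LUNs, paths, fencing, and cluster membership are healthy. A minimal sketch using standard InfoScale commands:
# vxdisk -o alldgs list (confirm the RDM LUNs are visible to Volume Manager)
# vxdmpadm getsubpaths (check the path state reported by DMP)
# vxfenadm -d (verify the I/O fencing mode and cluster membership)
# hastatus -sum (confirm both nodes have joined the VCS cluster)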
For more information about installing and configuring InfoScale 8.0.2, see the InfoScale 8.0.2 documentation.