In vSphere 7, VMware added support for SCSI-3 Persistent Reservations (SCSI-3 PR) at the virtual disk (VMDK) level, so you can now deploy a Windows Server Failover Cluster (WSFC) using clustered (shared) VMDKs. Why is this relevant? You can now migrate away from, and remove, the RDMs that were created in your environment to support failover clustering, bringing those Windows VMs under VMware's unified and simplified management of virtual disks. This article shows how to set up shared VMDKs for WSFC VMs with the Hitachi VSP E990.
Clustered VMDKs on VMFS Prerequisites
- Your array must support ATS, SCSI-3 PR type Write Exclusive-All Registrant (WEAR).
- Only supported with arrays using Fibre Channel (FC) for connectivity.
- Only VMFS6 datastores.
- Storage devices must be claimed by NMP. ESXi does not support third-party plug-ins (MPPs) in clustered virtual disk configurations.
- VMDKs must be Eager Zeroed Thick (EZT) Provisioned.
- Clustered VMDKs must be attached to a virtual SCSI controller with bus sharing set to “physical.”
- A DRS anti-affinity rule is required to ensure that the VMs acting as WSFC nodes run on separate hosts.
- Increase the WSFC parameter "QuorumArbitrationTimeMax" to 60.
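The last prerequisite is typically applied with PowerShell on one of the cluster nodes once the WSFC exists. As a minimal sketch, the helper below just composes that one-liner (the `QuorumArbitrationTimeMax` property is real; the helper itself is ours for illustration):

```python
def quorum_timeout_command(seconds: int = 60) -> str:
    """Compose the PowerShell one-liner that raises the WSFC
    QuorumArbitrationTimeMax parameter (run it on a cluster node)."""
    if seconds < 1:
        raise ValueError("timeout must be a positive number of seconds")
    return f"(Get-Cluster).QuorumArbitrationTimeMax = {seconds}"

# For the value recommended above:
print(quorum_timeout_command())  # -> (Get-Cluster).QuorumArbitrationTimeMax = 60
```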
<Update March 2021>
Note: all supported VSP systems should have HMO22 enabled for this feature. For VSP G1x00 arrays only, you must also enable HMO72 in the respective host group(s):
- HMO22… PGR + Mode Sense response changed to SPC-3 operation (Good response)
- HMO72… PGR Type7,8 support
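The prerequisites above boil down to a short checklist. The helper below is a hypothetical sketch (the function and the dictionary keys are ours, not a VMware API) that flags violations in a simple description of a planned configuration:

```python
def check_clustered_vmdk_prereqs(cfg: dict) -> list:
    """Return a list of violated clustered-VMDK prerequisites for a
    WSFC configuration; an empty list means the checklist passes.
    The cfg keys are illustrative, not a VMware API."""
    problems = []
    if cfg.get("transport") != "FC":
        problems.append("array connectivity must be Fibre Channel")
    if cfg.get("vmfs_version") != 6:
        problems.append("datastore must be VMFS6")
    if cfg.get("multipath_plugin") != "NMP":
        problems.append("devices must be claimed by NMP (no third-party MPPs)")
    if cfg.get("provisioning") != "eagerzeroedthick":
        problems.append("VMDKs must be Eager Zeroed Thick")
    if cfg.get("scsi_bus_sharing") != "physical":
        problems.append("virtual SCSI controller bus sharing must be physical")
    return problems

good = {"transport": "FC", "vmfs_version": 6, "multipath_plugin": "NMP",
        "provisioning": "eagerzeroedthick", "scsi_bus_sharing": "physical"}
print(check_clustered_vmdk_prereqs(good))  # -> []
```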
The links below provide more information about clustered VMDKs.
The following lists the validation environment.
- Hitachi VSP E990
- Host Mode: 21 VMware Extension with HMO 54, 63, 114
- 2x Hitachi DS120 Compute nodes (ESXi 7 hosts)
- VMware vSphere 7.0
- 2x Windows Server 2016 VM
- Joined to an Active Directory Domain
- Installed Failover Cluster feature
Setup Clustered VMDKs for WSFC VMs
Select the target datastore, go to the Configure tab, and enable Clustered VMDK. The Clustered VMDK option becomes available only after all the ESXi hosts in the cluster have been updated to vSphere 7. In this example, the datastore named E990-1 is presented by the VSP E990.
On the 1st Windows Server 2016 VM, create a new SCSI controller and two hard disks (one quorum disk and one data disk) from the E990-1 datastore. (Note: if the VM has existing snapshots, creating a new SCSI controller may fail. Delete all VM snapshots and try again.)
For the new SCSI controller, select LSI Logic SAS or VMware Paravirtual (recommended), with SCSI Bus Sharing set to Physical.
For the new hard disks, select Disk Provisioning: Thick Provision Eager Zeroed and Virtual Device Node: the new SCSI controller. (Note: do NOT use the Sharing: Multi-writer option; we have seen customers get tripped up by this.)
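If you prefer the ESXi shell to the vSphere Client for disk creation, `vmkfstools` can create Eager Zeroed Thick VMDKs directly on the datastore. The sketch below only composes the command (datastore path, disk name, and size are example values); run the result on an ESXi host:

```python
def ezt_vmdk_command(datastore: str, name: str, size: str) -> list:
    """Build the vmkfstools command that creates an Eager Zeroed Thick
    VMDK on a VMFS datastore (to be run on an ESXi host)."""
    return ["vmkfstools", "-c", size, "-d", "eagerzeroedthick",
            f"/vmfs/volumes/{datastore}/{name}.vmdk"]

# Example: a 1 GB quorum disk on the E990-1 datastore
print(" ".join(ezt_vmdk_command("E990-1", "quorum", "1G")))
# -> vmkfstools -c 1G -d eagerzeroedthick /vmfs/volumes/E990-1/quorum.vmdk
```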
On the 2nd Windows Server 2016 VM, add a new SCSI controller and add the disks using the Existing Hard Disk option.
Navigate to the 1st Windows Server VM's VMDKs.
After selecting the existing hard disks, set Virtual Device Node to the new SCSI controller.
Log in to the 1st VM and create new partitions on the new disks. The same newly formatted disks should also be visible on the 2nd VM.
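Initializing the new disks inside the guest can also be scripted with `diskpart`. The sketch below emits a diskpart script for one disk (the disk number, drive letter, and label are example values; save the output to a file and run `diskpart /s file.txt` on the 1st VM):

```python
def diskpart_script(disk_number: int, letter: str, label: str) -> str:
    """Emit a diskpart script that onlines a new disk, creates an NTFS
    partition, and assigns a drive letter (run via diskpart /s)."""
    return "\n".join([
        f"select disk {disk_number}",
        "online disk",
        "attributes disk clear readonly",
        "create partition primary",
        f"format fs=ntfs quick label={label}",
        f"assign letter={letter}",
    ])

# Example: initialize disk 1 as the quorum disk Q:
print(diskpart_script(1, "Q", "Quorum"))
```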
On the 1st VM, go to Server Manager > Tools > Failover Cluster Manager. Click Validate Configuration.
On Validate a Configuration Wizard, select 2 VMs for WSFC.
On the Test Selection page, select Storage only since we are focusing only on the VSP storage validation here.
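The same storage-only validation can be run from PowerShell with the Failover Clustering `Test-Cluster` cmdlet. The helper below composes that invocation (the node names are example values; adjust them to your VMs):

```python
def storage_validation_command(nodes: list) -> str:
    """Compose the PowerShell Test-Cluster invocation that runs only
    the storage validation tests against the given WSFC nodes."""
    return f'Test-Cluster -Node {",".join(nodes)} -Include "Storage"'

# Example with two hypothetical node names:
print(storage_validation_command(["wsfc-vm1", "wsfc-vm2"]))
# -> Test-Cluster -Node wsfc-vm1,wsfc-vm2 -Include "Storage"
```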
Once the test is done, you can open the validation report. All tests should pass except the Storage Spaces test; this is expected for clustered VMDKs.
The link below lists the vSphere WSFC Setup limitations.
Now that the storage is validated, you can proceed to create the failover cluster.
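Cluster creation can be done in Failover Cluster Manager or with the PowerShell `New-Cluster` cmdlet. The sketch below composes that command (the cluster name, node names, and static address are placeholder examples, not values from this environment):

```python
def new_cluster_command(name: str, nodes: list, static_address: str) -> str:
    """Compose the PowerShell New-Cluster command that creates the WSFC
    from the validated nodes (all argument values are placeholders)."""
    return (f"New-Cluster -Name {name} -Node {','.join(nodes)} "
            f"-StaticAddress {static_address}")

# Example with hypothetical names and address:
print(new_cluster_command("wsfc01", ["wsfc-vm1", "wsfc-vm2"], "192.168.1.50"))
# -> New-Cluster -Name wsfc01 -Node wsfc-vm1,wsfc-vm2 -StaticAddress 192.168.1.50
```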
For more information about setting up WSFC VM, see the link below.
Additional Validation with RDM (and Migration to Clustered VMDK)
Migration from RDM to a shared VMDK was also validated; follow the link here.
We also validated WSFC storage with RDMs. Setting up RDM storage for WSFC VMs is almost the same as above, except that on the 1st VM you select RDM Disk for the new hard disks. With RDM, all of the storage validation tests should pass without warnings.