Automating NVMe/TCP Setup Between VMware ESXi and Hitachi VSP One Block with Ansible

By Vinod Subramaniam
Setting up NVMe over TCP between a VMware ESXi host and a Hitachi Vantara VSP One Block 28 storage array involves a lot of steps — creating storage volumes, configuring subsystems, setting up networking, and connecting everything together. Doing this manually is tedious and error-prone.
 
This post walks through a set of six Ansible playbooks that automate the entire process end-to-end.

What You're Building


The goal is simple: connect an ESXi host to a VSP One Block 28 over NVMe/TCP using two 100Gbps NICs for multipath connectivity. The automation handles both the storage side and the host side.
ESXi Host                                VSP One Block 28
┌────────────────────────┐               ┌─────────────────────────┐
│  vmnic2 (100G)         │── NVMe/TCP ── │  CL1-D (192.168.52.31)  │
│    └─ vmk1             │               │                         │
│       192.168.52.36    │               │  NVM Subsystem          │
│                        │               │   └─ 10 Namespaces      │
│  vmnic3 (100G)         │── NVMe/TCP ── │  CL2-D (192.168.52.32)  │
│    └─ vmk2             │               │                         │
│       192.168.52.37    │               │                         │
└────────────────────────┘               └─────────────────────────┘

Prerequisites

  • Ansible 2.14+
  • Python 3.9+ with PyVmomi (pip install PyVmomi)
  • Two Ansible collections:
    ansible-galaxy collection install hitachivantara.vspone_block
    ansible-galaxy collection install community.vmware
  • SSH access to the ESXi host
  • ESXi 7.0 U3 or later, or ESXi 8.0+

The Six Playbooks

The setup is split into two phases: storage configuration (steps 1–3) and ESXi host configuration (steps 4–6).
 

Phase 1 — Storage Side

Step 1: Create LDEVs (01_create_ldevs.yml)

 
This playbook creates logical devices (LDEVs) on the VSP One Block. These become the NVMe namespaces that ESXi will see as disks. It checks if each LDEV already exists before creating it, so it's safe to re-run.
ansible-playbook 01_create_ldevs.yml --ask-vault-pass
By default, it creates 10 LDEVs of 10GB each with compression and deduplication enabled. You control this in storage_vars.yml:
 
ldev_start_id: 5888
ldev_count: 10
ldev_size: "10GB"
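
The creation loop likely looks something like the sketch below. The module name and argument layout here are assumptions — check the hitachivantara.vspone_block collection documentation for the exact module and parameters in your version; only the range-driven loop and the vars files reflect what the post describes.

```yaml
# Sketch only: module and parameter names below are assumptions,
# not the verified API of the hitachivantara.vspone_block collection.
- name: Create LDEVs for NVMe namespaces
  hosts: localhost
  gather_facts: false
  vars_files:
    - vars/storage_vars.yml
    - ansible_vault_vars/ansible_vault_storage_var.yml
  tasks:
    - name: Create one LDEV per namespace (skipped when the ID already exists)
      hitachivantara.vspone_block.hv_ldev:   # hypothetical module name
        connection_info: "{{ connection_info }}"   # assumed var from the vault file
        spec:
          ldev_id: "{{ item }}"
          size: "{{ ldev_size }}"
      loop: "{{ range(ldev_start_id, ldev_start_id + ldev_count) | list }}"
```

With the defaults above, the loop produces LDEV IDs 5888 through 5897.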

Step 2: Create the NVM Subsystem (02_create_nvm_subsystem.yml)

 
This creates an NVM Subsystem on the storage array with two NVMe/TCP ports, registers the ESXi host's NQN, and sets the host mode to VMWARE_EX. The playbook also auto-generates the subsystem NQN and saves it to a file so later playbooks can use it — no manual copy-paste needed.
ansible-playbook 02_create_nvm_subsystem.yml --ask-vault-pass
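The "save the NQN to a file" handoff can be done with plain Ansible built-ins. The NQN format and filename below are illustrative placeholders — the real playbook derives the NQN from the array's serial number in whatever scheme the VSP One uses:

```yaml
- name: Persist the generated subsystem NQN for later playbooks
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Build the subsystem NQN from the array serial number
      ansible.builtin.set_fact:
        # Illustrative format only -- substitute your array's actual NQN scheme.
        subsystem_nqn: "nqn.2014-08.com.example:vsp-{{ storage_serial }}:subsys-1"
      vars:
        storage_serial: "12345"   # placeholder

    - name: Save the NQN to a file that steps 3 and 6 read back
      ansible.builtin.copy:
        content: "{{ subsystem_nqn }}\n"
        dest: generated_nqn.txt   # assumed filename
```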

Step 3: Add Namespaces and Paths (03_add_namespaces_and_paths.yml)

 
Maps each LDEV as a namespace in the subsystem and creates access paths to the ESXi host NQN. After this step, the storage side is fully configured.
ansible-playbook 03_add_namespaces_and_paths.yml --ask-vault-pass
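Conceptually this is another loop over the same LDEV range, attaching each LDEV as a namespace and pathing it to the host NQN. The module and parameters in this sketch are assumptions, shown only to make the shape of the step concrete:

```yaml
# Sketch only: module and parameter names are assumptions -- consult the
# hitachivantara.vspone_block collection docs for the real interface.
- name: Map LDEVs as namespaces and add host paths
  hosts: localhost
  gather_facts: false
  vars_files:
    - vars/storage_vars.yml
  tasks:
    - name: Attach each LDEV to the subsystem and path it to the ESXi NQN
      hitachivantara.vspone_block.hv_nvm_subsystems:   # hypothetical module name
        connection_info: "{{ connection_info }}"
        spec:
          nvm_subsystem_name: "{{ subsystem_name }}"
          namespace_ldev_id: "{{ item }}"
          host_nqn: "{{ esxi_host_nqn }}"
      loop: "{{ range(ldev_start_id, ldev_start_id + ldev_count) | list }}"
```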

Phase 2 — ESXi Side

Step 4: Configure Networking (04_configure_esxi_networking.yml)

 
Sets up the ESXi network stack for NVMe/TCP. It creates one vSwitch per 100G NIC, adds portgroups, creates VMkernel adapters with static IPs, and tags them for NVMe/TCP traffic. MTU is set to 9000 (jumbo frames) for best performance.
ansible-playbook 04_configure_esxi_networking.yml --ask-vault-pass
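The vSwitch half of this step maps cleanly onto community.vmware modules. A condensed sketch, connecting straight to the ESXi host (vSwitch names and variable names are illustrative):

```yaml
- name: ESXi networking for NVMe/TCP (names and IPs illustrative)
  hosts: localhost
  gather_facts: false
  tasks:
    - name: One vSwitch per 100G uplink, jumbo frames
      community.vmware.vmware_vswitch:
        hostname: "{{ esxi_host }}"        # the ESXi host itself, not vCenter
        username: "{{ esxi_user }}"
        password: "{{ esxi_password }}"
        validate_certs: false
        switch: "vSwitch-NVMe-{{ item.n }}"
        nics: ["{{ item.nic }}"]
        mtu: 9000
      loop:
        - { n: 1, nic: vmnic2 }
        - { n: 2, nic: vmnic3 }
```

Portgroups and VMkernel adapters follow the same pattern with community.vmware.vmware_portgroup and community.vmware.vmware_vmkernel; the NVMe/TCP tag itself is applied over SSH with `esxcli network ip interface tag add -i vmk1 -t NVMeTCP`, since the vmkernel module has no service flag for it.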

Step 5: Enable the NVMe/TCP Adapter (05_add_nvme_tcp_adapter.yml)

 
Enables the software NVMe over TCP adapter on both 100G NICs via esxcli. These commands run over SSH with ansible.builtin.raw, since ESXi doesn't ship the Python runtime that standard Ansible modules require.
ansible-playbook 05_add_nvme_tcp_adapter.yml --ask-vault-pass
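At its core this is one esxcli command per uplink. A minimal sketch, assuming the ESXi host is in inventory under a group called esxi with SSH enabled:

```yaml
- name: Enable software NVMe/TCP adapters on the ESXi host
  hosts: esxi          # assumed inventory group; SSH access required
  gather_facts: false  # raw mode -- no Python on ESXi
  tasks:
    - name: Create one NVMe/TCP storage adapter per 100G uplink
      ansible.builtin.raw: "esxcli nvme fabrics enable --protocol TCP --device {{ item }}"
      loop: [vmnic2, vmnic3]
```

Each enable produces a new vmhba device, which step 6 then uses to connect to the storage ports.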

Step 6: Connect to Storage Controllers (06_add_nvme_tcp_controllers.yml)

 
The final step. Connects ESXi to the VSP One Block by creating two NVMe/TCP target controllers — one per storage port — giving you active-active multipath. It automatically picks up the subsystem NQN that was generated in step 2. The playbook is idempotent: re-running it when already connected won't cause errors.
ansible-playbook 06_add_nvme_tcp_controllers.yml --ask-vault-pass
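The connection itself is one `esxcli nvme fabrics connect` per storage port. A sketch of the idea — the vmhba names below are illustrative (they depend on what step 5 created on your host), and the NQN file name assumes the handoff from step 2:

```yaml
- name: Connect NVMe/TCP controllers (one per storage port)
  hosts: esxi          # assumed inventory group; raw over SSH
  gather_facts: false
  vars:
    subsystem_nqn: "{{ lookup('file', 'generated_nqn.txt') }}"   # assumed filename from step 2
  tasks:
    - name: Connect each software NVMe/TCP adapter to its VSP One port
      ansible.builtin.raw: >-
        esxcli nvme fabrics connect
        --adapter {{ item.hba }}
        --ip-address {{ item.ip }}
        --port-number 4420
        --subsystem-nqn {{ subsystem_nqn }}
      loop:
        - { hba: vmhba65, ip: 192.168.52.31 }   # illustrative adapter names
        - { hba: vmhba66, ip: 192.168.52.32 }
```

Afterwards, `esxcli nvme controller list` on the host should show two connected controllers, one per path.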

Run It All at Once

If you want to skip running each playbook individually, a master playbook runs all six steps in sequence:
ansible-playbook site.yml --ask-vault-pass
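A master playbook like this is typically just a chain of imports, something along these lines:

```yaml
# site.yml -- runs all six steps in order
- import_playbook: 01_create_ldevs.yml
- import_playbook: 02_create_nvm_subsystem.yml
- import_playbook: 03_add_namespaces_and_paths.yml
- import_playbook: 04_configure_esxi_networking.yml
- import_playbook: 05_add_nvme_tcp_adapter.yml
- import_playbook: 06_add_nvme_tcp_controllers.yml
```

Because each playbook is idempotent, a failed run can simply be restarted from the top.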

Configuration Files

All variables live in two files under vars:
 
  • storage_vars.yml — Pool ID, LDEV count/size, subsystem name, storage ports, host NQN
  • esxi_vars.yml — NIC names, vSwitch names, VMkernel IPs, storage target IPs
Credentials are stored in Ansible Vault-encrypted files under ansible_vault_vars. Encrypt them before first use:
ansible-vault encrypt ansible_vault_vars/ansible_vault_storage_var.yml
ansible-vault encrypt ansible_vault_vars/ansible_vault_esxi_var.yml
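Before encryption, each vault file is an ordinary YAML vars file. The variable names below are illustrative — match whatever the playbooks actually reference:

```yaml
# ansible_vault_vars/ansible_vault_storage_var.yml (plaintext, before encrypting)
# Variable names are examples only -- use the names the playbooks expect.
storage_address: 192.168.10.50
storage_user: maintenance
storage_password: "changeme"
```

Once encrypted, the file is still referenced the same way in vars_files; --ask-vault-pass (or a vault password file) decrypts it at runtime.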

Key Design Decisions


One vSwitch per NIC — This ensures proper NVMe/TCP port binding and clean multipath.
Idempotent playbooks — Every playbook checks current state before making changes. Safe to re-run.
Auto-generated NQN — The subsystem NQN is constructed automatically from the storage serial number, eliminating a common source of manual errors.
SSH for ESXi commands — Since ESXi lacks Python, the host-side playbooks use ansible.builtin.raw over SSH for esxcli commands.

Wrapping Up


These six playbooks take what would be a lengthy manual process and turn it into a single command. The storage and host configurations are cleanly separated, variables are centralized, and everything is idempotent. Whether you run it step-by-step or end-to-end, the result is NVMe/TCP connectivity with multipath between ESXi and VSP One Block 28.

The GitHub Repository 

https://github.com/visubramaniam/visa_demo.git


#VSPOneBlock
