HDPS VMware Backup Transport Modes: NBD vs HotAdd vs SAN
A practical guide to choosing the right data path for faster, safer VMware backups and restores with Hitachi Data Protection Suite.
When a VMware backup “feels slow,” the bottleneck is often not your backup target or your dedupe ratio - it is the path your backup infrastructure uses to read virtual disk blocks out of vSphere. In VMware image-level protection, that path is controlled by the transport mode. The same HDPS storage policy can perform very differently depending on whether the data is being pulled over the LAN (NBD), read locally inside the cluster (HotAdd), or streamed directly from the SAN (Direct SAN).
In this post we’ll explain how each transport mode works, what it changes (and what it doesn’t), and how HDPS (powered by Commvault) orchestrates backups and restores in each mode - with simple diagrams you can reuse in your own documentation or internal runbooks.
How HDPS protects VMware VMs (the 30,000-foot view)
HDPS uses VMware’s vStorage APIs for Data Protection (VADP) to perform image-level backups. At a high level, there are two planes to keep in mind:
· Control plane: HDPS communicates with vCenter to create and manage VM snapshots, discover disks, and coordinate backup/restore workflows.
· Data plane: An HDPS access node (a VSA proxy and/or MediaAgent, depending on your design) reads blocks from the snapshot and writes the protected data to your backup storage target (for example: Hitachi storage, object storage, cloud, or another supported target).
Transport mode is a data-plane decision: it changes how the access node reads (and during restore, writes) VMware blocks. It does not change where you store the backup copy.
What transport mode changes (and what it doesn’t)
The transport mode controls how data is read from (and written back to) VMware datastores during backup and restore. It does not change your backup target (for example, storing backups on Hitachi VSP One Object via an HDPS storage policy); it changes the source data path from vSphere to the HDPS access node.
Transport mode has direct impact on:
· Backup and restore throughput (how many GB/hour you can realistically move).
· ESXi host impact (CPU and storage stack overhead during reads/writes).
· Network utilization (management/LAN traffic vs SAN traffic).
· Operational complexity (what you must deploy, zone, secure, and maintain).
Speeds and feeds: turning link speed into backup-window math
A quick reminder from the field: network speeds are measured in bits per second, while storage and backup sizes are usually measured in bytes. Real throughput is lower than theoretical due to protocol overhead, packet headers, retries, and encryption.
A simple rule of thumb for low-latency networks is to plan around ~70% of theoretical throughput for sustained data movement.
| Link speed | Theoretical (GB/sec) | Theoretical (GB/hour) | Rule-of-thumb @ 70% (GB/hour) |
|------------|----------------------|-----------------------|-------------------------------|
| 1 GbE      | 0.125                | 450                   | 315                           |
| 10 GbE     | 1.25                 | 4,500                 | 3,150                         |
These numbers are not a guarantee - they are a sanity check. In real backups, proxy CPU, source storage latency, snapshot consolidation behavior, deduplication/compression, encryption, and concurrency settings can all be limiting factors.
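To make the arithmetic concrete, here is a minimal Python sketch of the same rule-of-thumb calculation. The 70% efficiency factor is the planning heuristic above, not a measured value, and the 2 TB VM in the usage example is hypothetical.

```python
def usable_gb_per_hour(link_gbps: float, efficiency: float = 0.70) -> float:
    """Convert a link speed in gigabits/sec into a rule-of-thumb GB/hour figure."""
    gb_per_sec = link_gbps / 8          # bits -> bytes
    return gb_per_sec * 3600 * efficiency

def backup_window_hours(data_gb: float, link_gbps: float) -> float:
    """Estimate how long a full read of data_gb takes over a given link."""
    return data_gb / usable_gb_per_hour(link_gbps)

print(round(usable_gb_per_hour(10)), "GB/hour")          # ~3150 usable on 10 GbE
print(round(backup_window_hours(2048, 10), 2), "hours")  # a 2 TB VM: ~0.65 hours
```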
How an HDPS VMware backup works (VADP in 7 steps)
1. HDPS communicates with vCenter to request a snapshot of the VM.
2. vCenter instructs the ESXi host to create the snapshot.
3. If application-consistent protection is enabled, the guest is quiesced (when supported) via VMware Tools and VSS/pre-freeze scripting.
4. The ESXi host creates a differencing (delta) disk so the base VMDK can be read consistently.
5. Changed Block Tracking (CBT) metadata is collected for incremental backups (when enabled and supported).
6. The HDPS access node reads blocks from the snapshot using the selected transport mode and streams them into the HDPS data pipeline.
7. After the job completes, HDPS signals completion and vSphere consolidates the snapshot (merging the delta back into the base VMDK).
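If you want to see the control-plane side of this workflow in code, the sketch below uses the open-source pyVmomi SDK to create a quiesced snapshot and ask CBT which disk areas to read, then clean up. It is an illustration of steps 1-5 and 7 under stated assumptions (placeholder vCenter address, credentials, and VM name; CBT already enabled on the VM), not how HDPS is implemented internally.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use verified certificates in production
si = SmartConnect(host="vcenter.example.com", user="backup-svc@vsphere.local",
                  pwd="********", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app-vm-01")  # placeholder VM name

# Steps 1-4: request a quiesced snapshot (no memory) so the base VMDK can be read consistently
task = vm.CreateSnapshot_Task(name="image-backup", description="transport-mode demo",
                              memory=False, quiesce=True)
WaitForTask(task)
snapshot = task.info.result

# Step 5: ask CBT which areas of the first disk (deviceKey 2000) need to be read.
# changeId="*" returns all allocated blocks; a saved changeId returns only changed blocks.
changes = vm.QueryChangedDiskAreas(snapshot=snapshot, deviceKey=2000,
                                   startOffset=0, changeId="*")
for area in changes.changedArea:
    print(f"offset={area.start} length={area.length}")

# Step 7: after the data has been read, remove the snapshot and consolidate the delta
WaitForTask(snapshot.RemoveSnapshot_Task(removeChildren=False, consolidate=True))
Disconnect(si)
```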
Transport modes deep dive
1) NBD (Network Block Device)
NBD is the most common baseline transport because it’s broadly compatible and easy to enable. The tradeoff is performance: all VM data must traverse the ESXi network stack and your IP network on its way to the HDPS access node.
Figure 1. NBD transport - data is read from ESXi over the IP network to the HDPS access node.
How it works:
The access node (MediaAgent and/or VSA proxy) pulls VM disk blocks over the ESXi host's network file copy (NFC) channel. All data flows over IP between ESXi and the access node.
How HDPS runs backups in NBD mode:
· HDPS triggers a VM snapshot via vCenter and uses CBT to identify changed blocks for incremental jobs (when available).
· The access node reads the snapshot blocks using NBD (or encrypted NBDSSL) over the VMkernel/IP network.
· Data is processed in the access node (for example: deduplication/compression and optional encryption) and written to the configured storage policy target.
· HDPS completes the job and the snapshot is consolidated in vSphere.
Benefits:
· Simplest to deploy: no Fibre Channel/iSCSI zoning, masking, or storage presentation to backup servers.
· Generally fast enough for most VMs under roughly 250 GB, whose backups typically complete in under an hour.
· Works across most datastore types and cluster designs (including environments where you only have IP connectivity).
· Predictable operational model - a great starting point for standardization.
Considerations and limitations:
· Often the slowest mode due to host/network overhead and shared bandwidth with other management or VM traffic.
· If you enable NBDSSL for encryption-in-flight, expect additional CPU overhead on the proxy and/or ESXi hosts.
· Performance depends heavily on network design (10GbE vs 1GbE, VLAN isolation, and end-to-end latency).
HDPS tip: If you standardize on NBD, treat the backup network as production infrastructure - dedicate bandwidth where possible and scale out with multiple access nodes rather than pushing a single proxy to 100%.
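To put that tip into numbers, here is an illustrative Python sketch that spreads a set of VMs across several access nodes and estimates the NBD backup window per node, assuming each node has its own 10 GbE link at the ~70% rule of thumb. The VM sizes, node counts, and round-robin placement are made up for the example; they are not an HDPS scheduling algorithm.

```python
from itertools import cycle

USABLE_GB_PER_HOUR = 10 / 8 * 3600 * 0.70   # ~3,150 GB/hour per 10 GbE access node

def plan_nbd_window(vm_sizes_gb, node_count):
    """Round-robin VMs (largest first) across access nodes; return each node's window in hours."""
    load = [0.0] * node_count
    for size, node in zip(sorted(vm_sizes_gb, reverse=True), cycle(range(node_count))):
        load[node] += size
    return [round(gb / USABLE_GB_PER_HOUR, 2) for gb in load]

vms = [2048, 1024, 512, 512, 256, 128, 128]   # hypothetical full-backup sizes in GB
print(plan_nbd_window(vms, node_count=1))      # one proxy:  ~[1.46] hours
print(plan_nbd_window(vms, node_count=2))      # two proxies: ~[0.93, 0.53] hours
```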
2) HotAdd (VMware Hot Add)
HotAdd uses a virtual proxy inside the ESXi cluster. During protection operations, VMDKs are temporarily attached (hot-added) to the proxy VM, so data can be read through the ESXi storage stack without sending the read traffic over the LAN.
Figure 2. HotAdd transport - VMDKs are hot-added to a proxy VM for local read access.
How it works:
A VSA proxy VM is deployed in the VMware environment. For each job, the VM disks being protected are hot-added to the proxy so they appear as local disks. The proxy reads blocks locally and then sends protected data to the backup target.
How HDPS runs backups in HotAdd mode:
· HDPS triggers a VM snapshot via vCenter (and uses CBT for incremental jobs when available).
· The VSA proxy VM hot-adds the snapshot disks and reads blocks through the ESXi storage stack.
· At job completion, disks are detached and the snapshot is consolidated.
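For readers who want to see what "hot-adding" a disk looks like at the vSphere API level, the pyVmomi fragment below attaches an existing VMDK to a proxy VM with a reconfigure task. It is a bare sketch of the mechanism HDPS automates: the datastore path, controller key, and unit number are placeholder values, and no error handling or detach logic is shown.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

def hot_add_existing_disk(proxy_vm, vmdk_path, controller_key=1000, unit_number=2):
    """Attach an existing VMDK to the proxy VM (the core of HotAdd transport)."""
    disk = vim.vm.device.VirtualDisk()
    disk.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
        fileName=vmdk_path,                     # e.g. "[datastore1] app-vm-01/app-vm-01-000001.vmdk"
        diskMode="independent_nonpersistent",   # never write back to the source disk
    )
    disk.controllerKey = controller_key         # key of an existing SCSI controller on the proxy
    disk.unitNumber = unit_number               # free SCSI slot on that controller
    disk.key = -1                               # temporary key for a device being added

    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        device=disk,
    )
    spec = vim.vm.ConfigSpec(deviceChange=[change])
    WaitForTask(proxy_vm.ReconfigVM_Task(spec=spec))
```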
Benefits:
· Typically faster than NBD because the read path stays inside the cluster storage stack.
· All-virtual design: no need to zone a physical host into the SAN fabric.
· Scales well by deploying multiple proxies close to the workloads (rather than oversizing a single proxy). Many smaller proxies can be spread across multiple ESXi hosts.
· Because the data is already on the proxy, client-side processing (for example: deduplication/compression) can reduce the amount of data sent over the LAN to the backup target by as much as 95%. This reduction in traffic typically removes the need for a dedicated backup network.
Considerations and limitations:
· Proxy placement matters: cross-host HotAdd can add overhead if the proxy and source VM are not co-located or do not share the same datastores.
· HotAdd relies on correct virtual hardware and permissions (SCSI controller choices such as PVSCSI, disk attach/detach rights, etc.).
· There are practical scale limits per proxy (for example, SCSI device node limits). Load balance across multiple proxies to avoid job failures or fallbacks.
· Some VM/disk types (encrypted VMs, certain RDM configurations, vVol scenarios) may restrict HotAdd depending on vSphere version and support matrix.
3) Direct SAN transport
Direct SAN transport uses a physical HDPS MediaAgent that has Fibre Channel or iSCSI access to the VMFS LUNs. The access node reads (and can write during restore) blocks directly from the storage, bypassing the ESXi LAN data path.
Figure 3. Direct SAN transport - a physical MediaAgent reads VMFS data directly over FC/iSCSI.
How it works:
A physical MediaAgent is zoned/masked to the same VMFS LUNs used by the ESXi hosts. HDPS reads blocks directly over the SAN fabric. This minimizes ESXi host CPU impact and avoids consuming backup bandwidth on the management network.
How HDPS runs backups in Direct SAN mode:
· HDPS triggers a VM snapshot via vCenter (and uses CBT for incrementals when available).
· The physical MediaAgent reads snapshot blocks directly from the VMFS LUN(s) over FC/iSCSI.
· Data is processed and written to the configured backup target (disk, object storage, cloud, etc.).
· On completion, the snapshot is consolidated in vSphere.
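A common question when setting up Direct SAN is "which LUNs does the MediaAgent actually need to see?" One way to answer it is to list the backing devices (canonical naa.* names) of each VMFS datastore and compare that list with the disks visible to the MediaAgent operating system. The pyVmomi sketch below illustrates that cross-check; it assumes a connection object `si` like the one in the earlier snapshot example.

```python
from pyVmomi import vim

def vmfs_extent_devices(si):
    """Map each VMFS datastore to the canonical device names (naa.*) backing it."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True
    )
    extents = {}
    for ds in view.view:
        info = ds.info
        if isinstance(info, vim.host.VmfsDatastoreInfo):  # skip NFS/vSAN/vVol datastores
            extents[ds.name] = [e.diskName for e in info.vmfs.extent]
    return extents

# Compare these naa.* identifiers with the LUNs the MediaAgent OS reports (for example,
# multipath output) to confirm zoning/masking is complete - and that nothing extra
# has been presented by mistake.
for name, devices in vmfs_extent_devices(si).items():
    print(name, devices)
```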
Benefits:
· Often the highest throughput option in VMFS environments; excellent for tight backup and restore windows.
· Reduced ESXi host impact because the data path bypasses the ESXi network stack.
· Supports high concurrency when paired with multipath designs and multiple LUNs.
Considerations and limitations:
· Requires FC/iSCSI fabric access, zoning/masking, and multipath configuration - more moving parts operationally.
· Generally applicable to block datastores (VMFS). It is not used for NFS-only VMware datastores and is not supported for vSAN datastores.
· Operational safety matters: presented LUNs should remain offline/read-only to the OS to prevent accidental formatting or writes.
· Restore behavior can vary by disk type; for some scenarios, LAN-based restores can be faster than SAN-based restores depending on thin provisioning and zeroing behavior.
HDPS tip: Treat SAN mode as a performance tool for specific use cases (very large VMs or aggressive windows). Keep zoning/masking and change control tight, and document exactly which LUNs are presented to backup infrastructure.
How to choose the right transport mode (practical guidance)
In HDPS you can typically leave transport mode on Auto and let the software choose the best available path for each job based on the proxy type and datastore accessibility.
Typical Auto selection logic (simplified):
· If the datastore is accessible to a physical access node, HDPS can use Direct SAN mode.
· If the datastore is accessible to the ESXi host running a virtual proxy, HDPS can use HotAdd mode.
· Otherwise, HDPS uses NBD/NBDSSL over the LAN.
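Expressed as code, that simplified decision tree looks like the sketch below. This is an illustration only; the real Auto selection in HDPS evaluates more conditions (datastore type, encryption settings, fallback behavior) than these few flags.

```python
def pick_transport(proxy_is_physical: bool,
                   san_access_to_datastore: bool,
                   proxy_host_sees_datastore: bool,
                   encrypt_in_flight: bool = False) -> str:
    """Simplified version of the Auto transport selection described above."""
    if proxy_is_physical and san_access_to_datastore:
        return "san"
    if not proxy_is_physical and proxy_host_sees_datastore:
        return "hotadd"
    return "nbdssl" if encrypt_in_flight else "nbd"

print(pick_transport(True, True, False))           # san
print(pick_transport(False, False, True))          # hotadd
print(pick_transport(False, False, False, True))   # nbdssl
```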
A simple decision approach:
· Start with NBD for broad compatibility and the lowest operational overhead.
· Move to HotAdd when you need better performance but want to stay all-virtual (or when SAN is not an option, such as vSAN).
· Use Direct SAN when you have VMFS on FC/iSCSI, a physical MediaAgent, and a clear need for maximum throughput - and you can support the added operational complexity.
Quick comparison
| Mode | Read path | Requires | Best for | Watch outs |
|------|-----------|----------|----------|------------|
| NBD | ESXi -> IP -> access node/MA | IP network; VMkernel NFC enabled | Simple, compatible deployments | LAN contention; host stack overhead; encryption CPU |
| HotAdd | Datastore -> ESXi stack -> proxy VM | Proxy VM; shared datastores; HotAdd permissions | Faster reads without physical SAN host | Proxy placement; SCSI limits; special VM types |
| Direct SAN | Storage -> FC/iSCSI -> physical MA | Physical MA; zoning/masking; VMFS | Highest throughput for big VMFS workloads | More complexity; not for NFS/vSAN; LUN safety |
Field-tested conclusion: performance is great, but simplicity wins long-term
Across real-world environments, the pattern is consistent: the faster the method, the more work and complexity it takes to set up and keep running smoothly.
Practical guidance that holds up over time:
· NBD is the easiest and works well for most customers and most VMs.
· HotAdd is often the next step when you need better performance for larger VMs without adding SAN complexity.
· Direct SAN is a strong option for very large VMs and aggressive windows in VMFS environments - but it is not where most teams should start.
· Don’t design a technical adventure. A backup methodology should be easy to set up, easy to operate, and easy to troubleshoot.
Key takeaways
· Transport mode changes the source data path out of vSphere - not the backup target.
· NBD is your compatibility baseline; try this first.
· HotAdd is a strong all-virtual performance play, but proxy placement and sizing matter.
· Direct SAN can deliver excellent throughput for VMFS on FC/iSCSI, at the cost of higher operational complexity.
· In HDPS, start simple, measure, and only add complexity when the business need is clear.
#DataProtection