Converged & Hyperconverged Infrastructure


Dealing with VMware Datastore space management on VSP Storage - part 1

By Paul Morrissey posted 03-12-2020 03:14

I recently encountered a cluster of customers dealing with datastore space management issues. They were evaluating/engineering an architecture around VMware vSphere vVols but wanted to spring-clean their existing VMFS datastores first. In this part 1, I'll address the vCenter datastore alarms/alerts they were dealing with. In part 2, I'll cover the steps required to verify automatic unmap on VMFS6 datastores, down to the unmap I/Os, and in part 3, the UNMAP PowerCLI automation we provided for their existing VMFS5 datastores. In part 4, I'll also share some insights on how customers are dynamically reclaiming storage as they expand their vVols footprint.

So first, the alarms/alerts on VMFS datastore usage. Interestingly, this first issue was blocking their efforts to actually test VMware vSphere vVols on Hitachi VSP Storage. As a VMware admin, you may be familiar with the “Datastore usage on disk” and “Thin-provisioned volume capacity threshold exceeded” alerts.
Thin-provisioned volume capacity threshold exceeded: monitors whether the thin-provisioning threshold on the storage array has been exceeded for the volumes backing the datastore.

Some of these customers run their VMFS datastores hot (>80%), and these alerts/alarms were triggering. Worse, depending on the vSphere version/update, the latter (thin-provisioning) alert would prevent them from deploying new VMs on those datastores, with the dreaded “The operation is not allowed in the current state of the datastore” error.

While I suspected they had not been running UNMAP operations very frequently, they did want to continue running at a higher utilization level. Until now, they had been hacking the vCenter DB to temporarily clear the alerts and disabling the Hitachi VASA Provider, thereby losing the value it provides for VMFS (automated tagging, etc.), never mind that it is a key building block for enabling VMware vVols. I did 'gently' remind them that these types of daily/weekly datastore space management issues go away with a vVols datastore and its logical container, but we had to get them to a happy spot with their VMFS datastores first.

Fortunately, the answer to these VMFS alerts/alarms is simple, albeit not well documented. These thresholds are set by the VASA Provider. When you deploy the Hitachi Storage (VASA) Provider into the environment, you get the option to change these threshold values. Below is the procedure:
  1. Acknowledge and Reset to green existing alarms (on 6.7, go to Datastore/Monitor/Triggered Alarms)
  2. SSH into Hitachi VASA Provider appliance
  3. cd  /usr/local/hitachivp-b/tomcat/webapps/VasaProvider/META-INF/
  4. Create a backup of the properties file with cp
  5. Edit the properties file with vi
  6. Search for the alarm level settings
  7. Change the default yellow (65) and red (80) alarm thresholds to the desired values (suggest no higher than 90/95)
  8. Restart the VP service from the VASA UI (or reboot the VM)
  9. Highly recommended: run an UNMAP operation to free up disk space. Automated PowerShell scripts are available.
  10. Deploy a vVols datastore and make your life even simpler.
Example from file:
# Determines the alarm level for a yellow alarm.
# This value must be a value between 0 and 100.
# Default value is 65.

# Determines the alarm level for a red alarm.
# This value must be a value between 0 and 100.
# Default value is 80.
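The backup-and-edit steps (4–7) can be sketched as a quick shell sequence. This is a minimal local illustration only: the file name `sample.properties` and the key names `alarm.level.yellow` / `alarm.level.red` are placeholders, since the actual property names live in the appliance's properties file under the META-INF path shown in step 3.

```shell
# Minimal local sketch of steps 4-7: back up, then raise the alarm thresholds.
# NOTE: the file and key names below are placeholders; on the appliance, apply
# the same pattern to the real properties file under .../VasaProvider/META-INF/.

# Create a stand-in properties file with the documented defaults (65/80).
cat > sample.properties <<'EOF'
alarm.level.yellow=65
alarm.level.red=80
EOF

# Step 4: keep a backup before touching anything.
cp sample.properties sample.properties.bak

# Steps 5-7: bump yellow to 85 and red to 90 in place (vi works just as well).
sed -i 's/^alarm.level.yellow=.*/alarm.level.yellow=85/' sample.properties
sed -i 's/^alarm.level.red=.*/alarm.level.red=90/' sample.properties

cat sample.properties
```

Remember that the new thresholds only take effect after the VP service restart in step 8.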

I will provide an update on what we provided them around automated UNMAP for VMFS5, plus wisdom learned from vVols customers, in subsequent blogs. Hopefully Google searches will grab this, avoid any more vCenter DB hacking, and let customers continue to avail of the benefits of the Hitachi VASA Provider to enable efficient vVols and VMFS environments.
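As a preview before that post: on a single host, a space reclaim on a VMFS5 datastore can be kicked off manually from an ESXi shell with `esxcli storage vmfs unmap`. A tiny sketch that just assembles the command; the datastore label `MyVMFS5-DS` and the 200-block reclaim unit are illustrative placeholders:

```shell
# Build the manual per-datastore UNMAP command for an ESXi shell.
# 'MyVMFS5-DS' and the 200-block reclaim unit are illustrative placeholders.
build_unmap_cmd() {
  local label="$1"
  local blocks="${2:-200}"   # VMFS blocks reclaimed per iteration
  printf 'esxcli storage vmfs unmap -l %s -n %s\n' "$label" "$blocks"
}

build_unmap_cmd "MyVMFS5-DS" 200
# → esxcli storage vmfs unmap -l MyVMFS5-DS -n 200
```

The PowerCLI automation we handed the customer simply runs this same `esxcli` namespace per datastore via `Get-EsxCli`; details in part 3.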

TIP: For Storage admins and VMware admins
Customer admins use our VASA UI (https://VP-appliance:50001) so they can instantly see the association between datastores and LUNs and associated objects such as host groups. This VASA interface also displays vVols with their associated storage resources and VM storage policies. It is a good tool for clear conversations between teams, effectively enabling each other.
For VMFS: (screenshot)

For vVols: (screenshot)

On to Part 2 in the blog series.



05-04-2022 12:28

Very detailed. Thanks