
Paul Morrissey's Blog


A somewhat misleading or bemusing title, but it captures the essence of what we have been working on diligently with VMware over the last few months as they built VMware SRM 8.2, which officially went public today. One of the major focus areas for SRM 8.2 was to move VMware SRM away from Windows to a Linux appliance-based deployment model. This enables a cloud-friendly deployment model and also provides an option for enterprises that adopt the appliance model as their default security posture.

 

 

With this initiative, vendors like Hitachi Vantara who have built Site Recovery Manager adapters had to undertake a project to create a Docker container-based SRA to support this new deployment model.

 

We have many joint enterprise customers who were actively interested in seeing this project come to completion. They rely on SRM for automated failover and failback, but even more so for the auditability of DR processes that it provides, an important attribute of its non-disruptive testing. I see many customers now in two-datacenter (2DC) deployments and a growing base of three-datacenter (3DC) deployments.

 

Back to SRM 8.2. Technically, SRM is now delivered as an appliance (using VMware's Photon OS, a hardened Linux distribution). It's deployed like any other appliance, using an OVF with a configuration template. The SRA adapter is then deployed by selecting the Hitachi Docker image. You then use the HDID UI/API to drag, drop and create the replication pairs for existing datastore volumes. Note that you can configure vCenter to automatically create this pair relationship for all new datastores using a vCenter tag which HDID auto-detects, which is pretty neat. Back in SRM, it's the same process from then on that you are used to: discover array-replicated datastores, create protection groups, put protection groups into recovery plans and test the recovery plans.

 

Beyond the deployment model, one of the other benefits for both customers and ourselves is the ease with which we can package updates, so lifecycle management of both the SRM appliance and the SRA container becomes a simple experience.

 

This SRA (v3.0.3) is compatible with Hitachi Data Instance Director (HDID) version 6.7.1 or higher and interoperable with all current Hitachi storage and converged systems. The Hitachi SRA package also supports previous Windows-based SRM deployments, as we package both the Docker container and the Windows version together. Check VMware compatibility for Hitachi and SRM here. Note: this is a premium SRA we have made available in addition to our traditional standard storage SRA, to truly simplify the DR protection experience when deployed in conjunction with our HDID data protection and copy data management software.

 

 

Download Hitachi SRA v3.0.3, implement HDID and start your HTML5-based VMware SRM appliance journey with the only 100% data availability Hitachi storage and converged infrastructure. My marketing colleague, Rich Vining, released his blog on this topic, which has relevant links to the VMware announcements, at https://community.hitachivantara.com/people/rvining/blog/2019/05/09/hdid-supports-vmware-srm-82-on-day-1

 

Stay tuned for more in this area as we work on the next related DR initiative built on this foundation... VVol...

I often receive questions about programmatic options for managing Hitachi storage infrastructure as customers move to higher degrees of automation in their environments. We have many options in this area, including CM REST APIs, UCP Advisor, the VMware vRealize Orchestrator connector and the Hitachi snap-in adapter for Microsoft PowerShell. One that we probably should highlight more is the Hitachi Infrastructure Adapter for PowerShell.

 

powershell_rm.png

The Hitachi snap-in for PowerShell allows administrators to extend PowerShell/PowerCLI capabilities in VMware, Microsoft and other environments with a set of cmdlets for discovering and managing Hitachi storage, which can be included in scripts to accomplish a range of daily infrastructure tasks. This covers block storage and NAS tasks such as creating LUNs, CIFS shares, snapshots, clones and datastores, and modifying host groups. It can also manage multi-site operations between arrays, such as enabling replication (TrueCopy, Universal Replicator or global-active device) on specific LUNs using the PowerShell remoting feature, which we support. The scripter can filter, sort and group the storage information by piping the output of one Hitachi infrastructure cmdlet to other cmdlets.

 

We now have over 100 cmdlets covering key operations that administrators may want to automate (I pasted the list below for search engines). Simply add the snap-in, run Add-StorageDevice, and you are ready to execute cmdlets against the storage.
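To give a feel for the flow, here is a minimal sketch. The cmdlet names come from the listing below, but the Add-StorageDevice parameters and the LU property names are illustrative only (and whether you load it with Add-PSSnapin or Import-Module depends on your install), so check the bundled admin guide for the documented syntax.

Add-PSSnapin Hitachi.Storage.Management.Powershell2.Admin        # load the Hitachi snap-in
Add-StorageDevice -StorageSystemIP 192.0.2.50 -UserID maintenance -Password '********'   # register the array (illustrative parameters)
Get-StorageDevice                                                # confirm the array is registered
Get-LU | Sort-Object Capacity -Descending | Select-Object -First 10   # e.g. the ten largest LUNs (property name illustrative)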

 

Go to support.hitachivantara.com and download the free PowerShell adapter (I had it placed in the VMware adapter section for convenience) and review the admin guide in the package for the complete list of cmdlets. We are on version 1.1, so it's always great to hear feedback on cmdlets we should add going forward.

 

 

 

PS C:\Program Files\Hitachi\SystemCenter\HiPowerShellInfrastructure> Get-Command -Module Hitachi.storage.management.powershell2.admin
CommandType Name Version Source
----------- ---- ------- ------
Cmdlet Add-CIFSShareAccessAuthentication
Cmdlet Add-FileServer
Cmdlet Add-HostGroupToResourceGroup
Cmdlet Add-HostgroupWWN
Cmdlet Add-iSCSIInitiator
Cmdlet Add-iSCSILU
Cmdlet Add-iSCSITargetToResourceGroup
Cmdlet Add-LUToResourceGroup
Cmdlet Add-PortToResourceGroup
Cmdlet Add-QuorumDisk
Cmdlet Add-StorageDevice
Cmdlet Allow-SystemDriveAccess
Cmdlet Clone-FSDirectory
Cmdlet Create-CciConfiguration
Cmdlet Create-CIFSShare
Cmdlet Create-Clone
Cmdlet Create-EnterpriseSnapshot
Cmdlet Create-EVS
Cmdlet Create-FileClone
Cmdlet Create-Filesystem
Cmdlet Create-FSDirectory
Cmdlet Create-FSiSCSILogicalUnit
Cmdlet Create-FSiSCSITarget
Cmdlet Create-FSSnapshot
Cmdlet Create-FSStoragePool
Cmdlet Create-FSVirtualVolume
Cmdlet Create-FSVirtualVolumeQuota
Cmdlet Create-GADPair
Cmdlet Create-Hostgroup
Cmdlet Create-iSCSITarget
Cmdlet Create-Journal
Cmdlet Create-LU
Cmdlet Create-ModularSnapshot
Cmdlet Create-MultiSiteTrueCopy
Cmdlet Create-MultiSiteUniversalReplicatorPair
Cmdlet Create-NFSExport
Cmdlet Create-RemoteClone
Cmdlet Create-SnapOnSnapPair
Cmdlet Create-Snapshot
Cmdlet Create-StoragePool
Cmdlet CreateVirtualBox-ResourceGroup
Cmdlet Create-VVOL
Cmdlet Delete-CIFSShare
Cmdlet Delete-CIFSShareAccessAuthentication
Cmdlet Delete-Clone
Cmdlet Delete-EVS
Cmdlet Delete-Filesystem
Cmdlet Delete-FSDirectory
Cmdlet Delete-FSiSCSILogicalUnit
Cmdlet Delete-FSiSCSITarget
Cmdlet Delete-FSSnapshot
Cmdlet Delete-FSStoragePool
Cmdlet Delete-FSVirtualVolume
Cmdlet Delete-FSVirtualVolumeQuota
Cmdlet Delete-GADPair
Cmdlet Delete-Hostgroup
Cmdlet Delete-iSCSITarget
Cmdlet Delete-Journal
Cmdlet Delete-LU
Cmdlet Delete-NFSExport
Cmdlet Delete-RemoteClone
Cmdlet Delete-ReplicationPairGroup
Cmdlet Delete-ResourceGroup
Cmdlet Delete-Snapshot
Cmdlet Delete-StoragePool
Cmdlet Deny-SystemDriveAccess
Cmdlet Disable-EVS
Cmdlet Edit-HostgroupHostMode
Cmdlet Enable-CommandDevice
Cmdlet Enable-EVS
Cmdlet Expand-Filesystem
Cmdlet Expand-FSStoragePool
Cmdlet Expand-Journal
Cmdlet Expand-LU
Cmdlet Expand-StoragePool
Cmdlet Format-Filesystem
Cmdlet Get-CIFSShare
Cmdlet Get-CIFSShareAccessAuthentication
Cmdlet Get-Clone
Cmdlet Get-Controller
Cmdlet Get-Drive
Cmdlet Get-EVS
Cmdlet Get-FileServer
Cmdlet Get-FileServerNodes
Cmdlet Get-Filesystem
Cmdlet Get-FreeConsistencyGroup
Cmdlet Get-FreeLU
Cmdlet Get-FSiSCSIAggregateGroups
Cmdlet Get-FSiSCSIDomainName
Cmdlet Get-FSiSCSILogicalUnit
Cmdlet Get-FSiSCSITarget
Cmdlet Get-FSSnapshot
Cmdlet Get-FSStoragePool
Cmdlet Get-FSVirtualVolume
Cmdlet Get-FSVirtualVolumeQuota
Cmdlet Get-GADPair
Cmdlet Get-HitachiDisk
Cmdlet Get-Hostgroup
Cmdlet Get-iSCSITarget
Cmdlet Get-Journal
Cmdlet Get-LinkAggregation
Cmdlet Get-LU
Cmdlet Get-LUPerformance
Cmdlet Get-NFSExport
Cmdlet Get-Port
Cmdlet Get-PortLoginWWNs
Cmdlet Get-PortPerformance
Cmdlet Get-QuorumDisk
Cmdlet Get-RemoteClone
Cmdlet Get-ReplicationPairGroup
Cmdlet Get-ResourceGroup
Cmdlet Get-Snapshot
Cmdlet Get-StorageDevice
Cmdlet Get-StoragePool
Cmdlet Get-SystemDrive
Cmdlet Get-Version
Cmdlet Get-VirtualStorageDevice
Cmdlet Lock-ResourceGroup
Cmdlet Map-FloatingVVol
Cmdlet Map-VirtualLU
Cmdlet Modify-CIFSShare
Cmdlet Modify-FSiSCSILogicalUnit
Cmdlet Modify-FSiSCSITarget
Cmdlet Modify-FSVirtualVolume
Cmdlet Modify-FSVirtualVolumeQuota
Cmdlet Modify-NFSExport
Cmdlet Modify-SysLock
Cmdlet Mount-CIFSShare
Cmdlet Mount-Filesystem
Cmdlet Mount-FSiSCSILogicalUnit
Cmdlet Mount-LU
Cmdlet Present-LU
Cmdlet Present-LUAsNASDrive
Cmdlet Remove-FileServer
Cmdlet Remove-HostGroupFromResourceGroup
Cmdlet Remove-HostgroupWWN
Cmdlet Remove-iSCSIInitiator
Cmdlet Remove-iSCSILU
Cmdlet Remove-iSCSITargetFromResourceGroup
Cmdlet Remove-LUFromResourceGroup
Cmdlet Remove-PortFromResourceGroup
Cmdlet Remove-QuorumDisk
Cmdlet Rename-Filesystem
Cmdlet Reserve-LUForGAD
Cmdlet Restore-Filesystem
Cmdlet Restore-ReplicationPairGroup
Cmdlet Resync-Clone
Cmdlet Resync-GADPair
Cmdlet Resync-RemoteClone
Cmdlet Resync-ReplicationPairGroup
Cmdlet Resync-Snapshot
Cmdlet Set-FSiSCSIDomainName
Cmdlet Set-HDPPoolThreshold
Cmdlet Shrink-Journal
Cmdlet Shrink-StoragePool
Cmdlet Split-Clone
Cmdlet Split-GADPair
Cmdlet Split-RemoteClone
Cmdlet Split-ReplicationPairGroup
Cmdlet Split-Snapshot
Cmdlet UnLock-ResourceGroup
Cmdlet Unmap-FloatingVVol
Cmdlet Unmap-VirtualLU
Cmdlet Unmount-CIFSShare
Cmdlet Unmount-Filesystem
Cmdlet Unmount-FSiSCSILogicalUnit
Cmdlet Unmount-LU
Cmdlet Unpresent-LU
Cmdlet Unpresent-LUAsNASDrive

A lengthy title, but the following is long overdue for our customers. In the most recent releases of vSphere 6.7 and vSphere 6.5, VMware now includes default multipathing claim rules for Hitachi VSP storage.

 

As a refresher, most customers previously had to manually add the following SATP rules when configuring multipathing (specifically the path selection (PSP) and path failover (SATP) rules for Hitachi devices) on every ESXi host. Now these rules are included out of the box in vSphere 6.7 U1 and vSphere 6.5 P03 (GA today, Nov 30th 2018) builds or later. This further reduces the time to production when deploying new vSphere ESXi hosts/clusters connected to Hitachi storage or as part of a Hitachi UCP converged offering. The rules handle devices configured with or without ALUA, with ALUA typically being used for active-active (GAD) configurations.

 

The following are the recommended rules, which are now baked into vSphere builds:

 

esxcli storage nmp satp rule add -V HITACHI  -P VMW_PSP_RR -s VMW_SATP_ALUA -c tpgs_on -e "Hitachi VSP Storage with ALUA enabled"

esxcli storage nmp satp rule add --satp "VMW_SATP_DEFAULT_AA" -V HITACHI -P "VMW_PSP_RR" -e "Hitachi VSP Storage"

 

We kept the IO Operations Limit at the default of 1000, as every site has some uniqueness. This may differ from other vendors' recommendations (some recommend 1), but that low a value would defeat the sequential-detection handling within the VSP array and you might lose some performance from increased random port behavior. More tests in this area will follow, but based on some initial informal testing I wouldn't go below a value of 20 if you do want to tweak it in configurations with a small number of LUNs and a high number of paths.
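If you do decide to experiment with a lower limit on specific Hitachi devices, here is a hedged PowerCLI sketch of one way to set it (it assumes an existing Connect-VIServer session; the host name is illustrative and the device ID is the example LUN shown further below). Running the equivalent esxcli command directly on the host works just as well.

$esxcli = Get-EsxCli -VMHost (Get-VMHost 'esx01.lab.local') -V2
$rrArgs = $esxcli.storage.nmp.psp.roundrobin.deviceconfig.set.CreateArgs()
$rrArgs.device = 'naa.60060e800727200000302720000002aa'   # the datastore LUN to tune
$rrArgs.type   = 'iops'
$rrArgs.iops   = 20                                        # per the note above, I wouldn't go lower than ~20
$esxcli.storage.nmp.psp.roundrobin.deviceconfig.set.Invoke($rrArgs)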

 

Here is the typical output you should see on a fresh install. Note the rules are now "system" and no longer "user".

-------------------------------------------------------------------------------------------------------------------------------

[root@localhost:~] esxcli storage nmp satp rule list |grep -i Hitachi

VMW_SATP_ALUA                        HITACHI   OPEN-V   system      tpgs_on                  VMW_PSP_RR            Hitachi VSP Storage with ALUA enabled                              

VMW_SATP_DEFAULT_AA          HITACHI                     system      inq_data[128]={0x44 0x46 0x30 0x30}  VMW_PSP_RR                                                                                

VMW_SATP_DEFAULT_AA          HITACHI   OPEN-V    system      tpgs_off                 VMW_PSP_RR             Hitachi VSP Storage                                                      

VMW_SATP_DEFAULT_AA          HITACHI                     system           

 

If you want to check which SATP/PSP is claimed by a device (datastore LUN), use the following command:

------------------------------------------------------------------------------------------------------------------------------------------------

[root@localhost:~] esxcli storage nmp device list

naa.60060e800727200000302720000002aa

   Device Display Name: HITACHI Fibre Channel Disk (naa.60060e800727200000302720000002aa)

   Storage Array Type: VMW_SATP_ALUA

   Storage Array Type Device Config: {implicit_support=on; explicit_support=off; explicit_allow=on; alua_followover=on; action_OnRetryErrors=off; {TPG_id=7,TPG_state=AO}{TPG_id=4101,TPG_state=AO}}

  Path Selection Policy: VMW_PSP_RR

   Path Selection Policy Device Config: {policy=rr,iops=1000,bytes=10485760,useANO=0; lastPathIndex=1: NumIOsPending=0,numBytesPending=0}

   Path Selection Policy Device Custom Config:

   Working Paths: vmhba3:C0:T0:L0, vmhba2:C0:T0:L0

 

If you want to check the paths: [root@localhost:~] esxcli storage core path list

   Runtime Name: vmhba3:C0:T0:L1

   Device Display Name: HITACHI Fibre Channel Disk (naa.60060e800727200000302720000002ab)

   Adapter: vmhba3

   Channel: 0

   Target: 0

   LUN: 1

<truncated>

   ----------------------------------------------------------------------------------------------------------------------------------------------
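As a complement to running esxcli host by host, here is a hedged PowerCLI one-liner (cluster name is illustrative, and it assumes an existing Connect-VIServer session) to spot-check which multipathing policy the Hitachi LUNs were claimed with across a whole cluster:

Get-Cluster 'Prod-Cluster' | Get-VMHost | Get-ScsiLun -LunType disk |
    Where-Object { $_.Vendor -eq 'HITACHI' } |
    Select-Object VMHost, CanonicalName, MultipathPolicy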

So this is another step to improve time to value and out-of-the-box best-practice reliability in your VMware ecosystems with Hitachi infrastructure. As a parting thought, and a teaser for an upcoming blog: did you know the latest release of UCP Advisor can perform that ESXi deployment from scratch (on supported compute) to take this time to value to the next level?

 

[update] If you want more information on claim rules, I found the following VMware online documents useful:

https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.storage.doc/GUID-9B19EF2E-DA5A-43D2-B41F-8E7C112D2E00.html

 

https://docs.vmware.com/en/VMware-vSphere/6.5/rn/esxi650-201811002.html

We recently rolled out some updates across our VMware ecosystem integration portfolio, and I thought I would use the opportunity to refresh our Hitachi customers and prospects on the integration possibilities we provide. In addition to updating six of the existing integrations, we also announced three new integrations for VMware, two of them supported by Hitachi Data Instance Director (HDID), our backup and copy data management software.

 

  • Hitachi Infrastructure Management Pack for VMware vRealize Operations v2.0 - New
  • Hitachi Storage Connector for VMware vRealize Orchestrator v1.5
  • Hitachi Storage Content Pack for VMware vRealize Log Insight v1.3
  • Hitachi Storage Provider for VMware vCenter (VASA), v3.5.3
  • Hitachi Infrastructure Adapter for Microsoft® Windows PowerShell v1.1
  • Hitachi Storage Plug-in for VMware vCenter v3.9
  • Hitachi Data Instance Director (HDID) connector for VMware vRealize Orchestrator (vRO) - New
  • Hitachi Data Instance Director adapter for VMware Site Recovery Manager (SRM) - New

 

To recap, the intent and focus of these integrations is to continue to give customers a single-pane-of-glass experience with the vCenter / vRealize / vCloud management stack while ensuring visibility, service access and automation control over Hitachi data center infrastructure and software.

ecosystem integrations.png

The new Infrastructure Management Pack for VMware vRealize Operations extends our original storage management pack and introduces initial support for our compute/converged offering, Hitachi Unified Compute Platform (UCP), while continuing to support Hitachi Virtual Storage Platform (VSP) storage in generic compute-VMware environments. In this version, we included visibility into Hitachi compute resources, including alerts for blade-based compute systems. We leverage vRealize Log Insight to gather SNMP alerts and syslog, which are then selectively passed to vROPS. We also extended the storage monitoring aspects by delivering smarter, more intuitive dashboards for storage capacity management. It also addresses processor utilization in VMware environments, giving proactive awareness and guidance on addressing any allocation or utilization imbalances across the many storage processors (MPUs) in VSP enterprise storage systems.

 

From a vRealize Orchestrator perspective, which is becoming a key component of our customers' automation and self-service infrastructure delivery journey, we added 20 new vRO workflows to our storage connector based on customer feedback. These include enhanced end-to-end vSphere cluster datastore provisioning, reclaiming unused LUNs, and a storage UNMAP reclamation workflow to automate UNMAP/zero-page reclaim for VMFS5 datastores. (Customers using VMFS6 or VVol datastores get the advantage of vSphere's automated UNMAP.)
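For reference, a manual UNMAP pass against a single VMFS5 datastore looks roughly like the PowerCLI sketch below (host, datastore name and reclaim unit are illustrative); the vRO workflow automates this class of operation so you don't have to run it host by host and datastore by datastore.

$esxcli = Get-EsxCli -VMHost (Get-VMHost 'esx01.lab.local') -V2
$unmapArgs = $esxcli.storage.vmfs.unmap.CreateArgs()
$unmapArgs.volumelabel = 'Tier1-VMFS5-DS01'   # the VMFS5 datastore to reclaim
$unmapArgs.reclaimunit = 200                  # VMFS blocks reclaimed per iteration
$esxcli.storage.vmfs.unmap.Invoke($unmapArgs)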

 

We also introduced additional vRO connector workflows focused on backup and copy data management services packaged as part of our HDID software.

vro-hdid2.png

These workflows are natively exposed in vCenter via a right-click on VMs, and the real power of these workflows becomes apparent when they are exposed in vRealize Automation as custom actions on provisioned VMs. The workflows focus on enabling customers to take and use space-efficient storage backups for pinpoint recovery or test/dev/analytics use cases. Application owners and admins, after requesting and getting their particular app service provisioned from vRealize Automation or vCenter, can now take advantage of custom actions to take and access point-in-time backups and space-efficient clones of their VMs. They can selectively create new test/dev copies of that VM data from any time in the past for analytics purposes. Even better is the ability for, say, a SQL admin to access and mount previous versions of their database into the same production VM instance (via the mount-VMDK workflow) to do data comparisons or surgical data extraction when required.

 

mount VMDK's.png

 

The updated Hitachi Storage (VASA) Provider for VMware addresses customer requests and security improvements, and has been extensively tested with vRealize Automation and vRealize Orchestrator to provide catalog services with access to storage infrastructure policy selection (aka SPBM). The example below shows how our Hitachi Enterprise Cloud exposes access to infrastructure and backup requirements as part of a LAMP stack provisioning request. In a nutshell, we have effectively integrated the latest releases of vRA with the vRO SPBM plugin accessing the Hitachi VASA Provider for both VVol and VMFS customers. Pretty cool. I will get one of my colleagues to publish that effort back to GitHub shortly so others in the community can take advantage of it.

 

vra and SPBM.png

 

On the VMware Site Recovery Manager front, we are introducing an SRM adapter (SRA v3.0) based on HDID while continuing to maintain our v2.x SRA for environments without HDID. The HDID software adds tremendous usability and operational time improvements in large-scale multi-datacenter environments, or environments with generalist admins who prefer a tag, drag and drop approach to backup/DR configuration. Enabling datastore replication is as simple as placing a tag on a new datastore, and HDID auto-magically creates the pair relationship between datacenters. The SRA supports all mainstream SRM versions (6.1, 6.5, 8.1), and you can review the broader HDID benefits as extolled by Rich in his recent HDID blogs.

 

I will devote a separate blog to the other integrations we enhanced, including the vCenter plug-in, PowerShell adapter and Log Insight content pack, which I didn't get to in this post.

 

We are constantly delivering on our strategy to provide a premier experience for customers in the VMware ecosystem, and we encourage our customer and partner ecosystem to take advantage of these integrations as you step along the journey to delivering self-service and simplified operations as part of a private/hybrid cloud environment.

 

Download Links

Hitachi Vantara Support Portal; VMware adapter section

or VMware Marketplace subspace for Hitachi Vantara solutions

 

Informational links

Hitachi Vantara website for VMware Ecosystem Integrations
Hitachi Data Instance Director

hitachi_vmware.png
We recently rolled out some software upgrades for our VMware ecosystem integrations and certifications, which are generally available now or will be within the next 45 days. The specific product updates this month are the Storage (VASA) Provider, the vRealize Operations (vROPS) management pack and the Storage Plug-in for vCenter. This is in addition to updates to our flagship UCP Advisor software, which provides provisioning and lifecycle management for infrastructure and resources from within vCenter. As always, the intent is to continue to empower IT roles who leverage the VMware vCenter/vRealize management stack for operations and automation with natively integrated access to services, capabilities and data from Hitachi infrastructure. I've outlined some of the high-level advancements in the respective integrations below.

 

Note: all Hitachi integrations are now posted on the VMware Marketplace for customers and partners to download.

 

Storage (VASA) Provider v3.5.0/v3.5.1

As a refresher, the Hitachi VASA Provider software integration enables storage-capability-aware, policy-based management for VMFS/VVol while also enabling a VVol deployment.

 

One of the major new features in the 3.5 release is automating QoS or similar actions when storage policies are changed on a VM or datastore, in order to bring it into compliance with the new policy. We focused our initial policy compliance efforts on our Hitachi Active Flash and data tiering (HDT) pooling technology, which are used quite frequently in VMware environments. A pool consists of multiple tiers from both internal and external storage (for example, FMD and SSD within a Hitachi VSP, with a third tier being a virtualized external third-party flash array from Pure/EMC etc.). This pooling technology automatically moves or pins data blocks between tiers based on data access rules, but there are cases where finer-grained control for application owners and VM admins is beneficial, whether it relates to an expected change in application usage behavior or to cost controls.

 

With this release of the VASA Provider, when an administrator applies a new policy to a VM/VMDK or datastore, the Hitachi VASA Provider will initiate storage changes to bring that object into compliance. One example of this is tiered data placement within our pool. If certain VMs/VMDKs or datastores are set with a "Tier 3 Latency and Tier 3 IOPS" policy capability within vCenter, the system will automatically move those blocks to the lower tier, freeing up higher tiers for net new applications or that high-performing database that is growing in size. Similarly, for an application that now requires "Tier 1 Latency and Tier 1 IOPS", the VM admin (or app owner) simply applies that policy, and the Hitachi VASA software will invoke actions to pin and promote all of its blocks to the highest-performing tier. We have made this capability available for both VMFS datastores (taking advantage of custom tiering policies) and VVol datastores. VVols obviously allow finer-grained control at the VM/VMDK level given their object-based implementation. This was the additional motivation to make a level of Hitachi infrastructure resource control accessible to application owners (not just VM admins) through API/vRO/vRA catalog services.

 

hdt+tiers-combo.png
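To make the trigger concrete, here is a hedged PowerCLI sketch of a VM admin re-assigning the "Tier 1 Latency and Tier 1 IOPS" policy mentioned above to a VM's disks (the VM name is illustrative; it assumes the PowerCLI SPBM cmdlets and an existing vCenter connection). The Hitachi VASA Provider then carries out the pin/promote actions to bring those objects into compliance.

$policy = Get-SpbmStoragePolicy -Name 'Tier 1 Latency and Tier 1 IOPS'
Get-VM 'sql-prod-01' | Get-HardDisk |
    Get-SpbmEntityConfiguration |
    Set-SpbmEntityConfiguration -StoragePolicy $policy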

This release also includes support for vSphere 6.7, environments with multiple SSO domains, and configurations without an external service processor (SVP). It is also worth noting that the latest additions to the Hitachi VSP all-flash storage platform, powered by SVOS RF, now officially support up to an 8x increase in the number of vSphere Virtual Volumes (VVols). So whether you are using Hitachi storage, UCP converged or a Hitachi Enterprise Cloud environment, download the free virtual appliance and take advantage of it.

 

vROPS Management Pack v1.8

In this updated release of the Hitachi Storage Management Pack, we have introduced a new troubleshooting storage dashboard to identify and remediate potential risks and issues quickly within vROPS. This dashboard walks operations teams through 12 key questions and metrics that, based on past support experience, help reach quick resolution of potential VM-storage root-cause issues. It covers key health areas such as cache write-pending levels, I/O port utilization, latency and storage processor busy metrics, with a fully correlated view over the selected timeline.

 

troubleshooting.png

 

We have also made improvements in the capacity savings dashboard so customers can easily visualize space savings with deduplication and compression deployed on their all-flash Hitachi storage and UCP converged systems. Administrators can now visually see answers to questions related to deduplication ratio, data reduction savings and the capacity trend if current space efficiency rates continue.

 

capacity+savings.png

 

This release also supports vROPS 6.7 and VMware's vRealize Suite Lifecycle Manager (vRSLCM) for automated management pack updates. This allows customers who leverage vRSLCM to automatically receive notification of partner management pack updates and have those updates downloaded directly to their system from VMware's (VSX) marketplace.

 

Storage Plugin v3.8

While UCP Advisor is morphing into the flagship management integration for all Hitachi infrastructure, we continue to evolve the storage plug-in for vCenter. This release includes substantially improved datastore provisioning times on our recently announced next-generation VSP storage platforms using the latest API integration. It also fully supports provisioning against deduplication- and compression-enabled storage pools.

 

Other related Hitachi-VMware updates

From a VMware certification point of view:

  • Site Recovery Manager (SRM): Hitachi SRA v2.3.1 is now certified with VMware's SRM 8.1 release.
  • VMware vSphere Metro Storage Cluster (vMSC): Hitachi VSP platforms are certified to support vMSC configurations with iSCSI connectivity on vSphere 6.5, with vSphere 6.7 in process. Check the KB article for updates.

 

Stay tuned as we continue to evolve these and other VMware ecosystem integrations.

 

Referenced Links

  • Hitachi VSP All Flash Storage Platform: a flash powered storage platform offering 100% data availability solution for VMware
  • Hitachi Unified Compute Platform (UCP) CI: a factory-built and tested package of compute, storage and networking that, when combined with the VMware vSphere virtualization platform, creates the best foundation for apps, cloud and business.
  • Hitachi UCP HC: an all-in-one hyperconverged system that combines the strength of Hitachi UCP with VMware vSphere, vCenter and vSAN for simplicity and IT agility.
  • Hitachi UCP Advisor: automated management and orchestration software that provides single-pane-of-glass visibility across compute, network and storage. Tight integration with VMware vCenter means organizations don’t have to learn new systems, buy additional software or undertake manual, time-consuming integration.
  • Hitachi Enterprise Cloud (HEC): a pre-engineered solution for VMware vRealize that provides a public cloud experience with private cloud security and enterprise-class service levels.

With the recent announcement of our VMware Cloud Foundation (VCF) powered UCP RS system to deliver a hybrid cloud reality (check Dinesh's blog here for details), one of the interesting questions from early prospects is for advice or guidance on how others are managing a hybrid private environment that consists of a traditional VMFS environment (and lately VVol) as they bring VMware vSAN-based architectures into their environments. The outcome they want is a pool of resources accessible to the various lines of business or application teams, providing different characteristics while giving those consumers some level of intuitive control over where their assets will run so they can meet their intended SLAs.

 

Given the topic of Hitachi UCP RS and its VCF foundation, Amazon services come to mind.
ucp rs and vcf.png

Here are some Amazon EBS storage options to give a perspective on why this will be important in your VMware-powered private hybrid cloud designs. Each separate EBS volume can be configured as EBS General Purpose (SSD), Provisioned IOPS (SSD), Throughput Optimized (HDD) or Cold (HDD) as needed. Amazon has stated that some of the best price/performance-balanced workloads on EC2 take advantage of different volume types on a single EC2 instance. For example, they see Cassandra using General Purpose (SSD) volumes for data but Throughput Optimized (HDD) volumes for logs, or Hadoop using General Purpose (SSD) volumes for both data and logs. This level of differentiation is the first step in providing tiers of service to consumers of cloud resources.

               Source: AWS Storage Options

 

But again, performance is just one layer. There are many characteristics when it comes to SLAs. Take the "availability" characteristic. As you may know, because an EBS volume is created in a particular availability zone, the volume becomes unavailable if that availability zone itself becomes unavailable. Resources aren't replicated across regions unless you do so specifically. Again, that might be an important characteristic for an app service being rolled out. (To be fair to AWS, they recommend creating snapshots, as snapshots of volumes are available across all of the availability zones within a region.)

 

This is an area I've put some cycles into with the team when we defined the requirements for the latest release of our Hitachi VASA Provider (VP), version 3.4, to operationally enhance the right consumption of resources for vSAN, VMFS and/or VVol. Building on the VVol/SPBM program, we took advantage of some of the storage container concepts and the latest tagging capabilities in vSphere 6.x to provide a better experience. With the latest Hitachi VP software, VMFS datastores (which may be adding datastore resources to an existing VCF-based vSAN deployment or to a separate traditional VMFS environment) are automatically tagged in vCenter with their specific SLA, including cost characteristics. Click to enlarge the GIF below to get a perspective on how the new VP WebUI (and API) provides the facility to assign capabilities to infrastructure resources, including automated vCenter tagging of VMFS datastores, while allowing vSAN datastore(s) to be similarly tagged with appropriate category capabilities. The end result is a much more intuitive description of the resource capabilities available across vSAN, VMFS and VVol.

 

WebUI and tags.gif
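For those who prefer to think in API terms, the automated tagging amounts to something like the following manual PowerCLI equivalent (category, tag and datastore names are illustrative; the VP software creates and assigns these for you):

New-TagCategory -Name 'Hitachi Availability' -EntityType Datastore -Cardinality Single
New-Tag -Name 'Tier 1 Availability' -Category 'Hitachi Availability'
New-TagAssignment -Tag (Get-Tag 'Tier 1 Availability') -Entity (Get-Datastore 'GAD-VMFS-DS01')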

With this automated tagging of capabilities on existing and new datastores, vSphere policies can now be much richer and more descriptive to consumers. Click to enlarge the animated GIF below as it rolls through a typical vSphere policy, in this case a policy describing "Tier 1 Performance and DR Availability" with rule sets for VMFS, VVol and vSAN within the same policy. In my lab environment, this policy, with its Tier 1 performance, Tier 2 availability and lowest-cost capability, found matching storage on all three entities, allowing the consumer to pick one of the choices.

 

Policies with tags.gif

 

The VMFS datastore highlighted below was configured to provide the highest level of availability and performance (a GAD multi-datacenter active-active replicated LDEV using accelerated flash on an F1500 with data-at-rest encryption), and the VP software automatically tagged the corresponding datastore with the following capabilities: Tier 1 availability and performance, encryption, and a cost between 750 and 1000 units. This datastore would be a match when app owners or admins select the "Tier 1 Performance, Encrypted and Active-Active Availability" policy, which in my lab environment ruled out vSAN and VVol as potential targets.

 

 

Take the Apache Cassandra application example from Amazon, which I wanted to deploy on the VCF-powered UCP RS system. During provisioning, I assigned the appropriate application-owner-understandable policy to each of the disks: the high-performance, lower-capacity data disks for the Cassandra VM landed on the vSAN datastore, while the log disk, 10x the size, landed on the iSCSI VMFS datastore. I didn't consume unnecessary storage from my all-flash vSAN, as the VMFS datastore (and VVol datastore) was a suitable match for the characteristics of the log data in this example. There is so much more that can be exploited when you consider how easily these capabilities can be extended and expressed for other infrastructure resources.

 

 

In summary, when it comes to provisioning resources, whether from the vSphere Client or from vRealize Automation with its SPBM awareness, these richer policies are selectable to ensure appropriate resources are chosen at the VM level, or indeed the VMDK level. Taking a leaf out of Amazon's book with EC2, this is the type of resource variability and ease of consumption needed to run a sustainable cloud environment meeting diverse needs across many application services as you update and modernize your infrastructure.

 

Check out the live demonstration of VCF powered UCP RS and Hitachi VASA (VP) Software at #VMworld 2017

On we roll to the final part 3 blog on UCP Advisor v1.2. I jest with the part "tree", as I was reminded that those unaccustomed to the Irish accent don't know we tend to drop the "h", which sends my kids into hysterics when I say "3".

At least UCP Advisor is generating the right sort of hysterics...
tt.png

In part 2, I covered the essential networking management, some of the day 0-90 administration aspects and the integrated data protection management features. I am going to conclude by covering firmware management, some of the bare-metal support and integrated operational analysis with the Log Insight content pack, and close with some of the cloud automation integration aspects using PowerShell and vRO.

 

When it comes to software and firmware management, it's one of those areas that normally infuses dread into administrators. We all share the same apprehension when deciding to update a frequently relied-upon software package on our tablet or streaming TV after a poor previous experience. The planning and execution required to successfully navigate software upgrades in an enterprise environment of inter-dependent infrastructure components running production workloads, with the mandate to keep the business online, takes it to another time-consuming level. Part of the issue is the passage of time between upgrades, requiring generalist admins to re-research and figure out the esoteric operations for each individual component based on the latest best practices and software versions. With UCP Advisor, we tackled this by ensuring the software can notify administrators when a suitable collection of firmware is available and give them a simple menu to select where and what should be deployed. This includes BMC and BIOS firmware upgrades on the compute nodes supporting hyper-converged UCP HC and converged UCP 2000, and also on the network switches providing FC and Ethernet services. UCP Advisor applies the software updates carefully across the range of infrastructure components while ensuring no impact to the services UCP is serving. It ensures, for example, that firmware updates are completed on side A of the FC fabric/spine-leaf infrastructure before doing side B, and that each compute node (or set of nodes) is back and operational before progressing to the next. In version 1.2, you can optionally add capacity before doing upgrades to ensure sufficient headroom for workloads during the upgrade cycle. Click the visual below to get some indication of the administration process.

firmware mgmt.gif

 

I mentioned bare metal. Essentially, UCP Advisor is not restricted to providing infrastructure services to ESXi hosts. With the right credentials, UCP Advisor exposes physical infrastructure management, so an Advisor administrator could automate the creation and host group presentation of iSCSI LUNs to a Red Hat physical node. Other areas covered include storage pools and a subset of replication pair management.

bm.gif

 

When it comes to operational insight, VMware's Log Insight (vRLI) provides an intuitive real-time log analysis toolset to deliver answers and insights into actual or potential operational problems related to systems and services on the UCP platform. UCP Advisor comes with an integrated content pack which provides the connection to logs from compute, network and storage. In a previous blog, I covered some examples of how a version of this content pack can be leveraged, including automatic alerts on unknown intrusions into the infrastructure management domain from suspect IP addresses, based purely on real-time log analysis. There is huge potential to exploit this for many different use cases. Click the visual to get a perspective on this. I'm really interested to hear how you are exploiting the Log Insight content packs.

loginsight.gif

 

Finally, UCP and UCP Advisor can be integrated into cloud automation management toolsets and processes such as vRealize Automation or similar products like Hitachi Enterprise Cloud. UCP Advisor comes with extended PowerCLI cmdlets and vRealize Orchestrator (vRO) workflows for the majority of the tasks I've mentioned previously. There is a mixture of foundational and higher-level workflows provided, whether to allow service-catalog-based creation and deployment of ESXi/datastore infrastructure resources or to allow self-service VM recovery for tenant users.

vro.gif

 

To borrow from my "tree" opening, I highly recommend evaluating the infrastructure automation simplicity that UCP Advisor (download) brings to your virtualization and private cloud infrastructure projects, so you can manage the forest and not have to climb each and every tree. Stay tuned, as the team is busy working on the next-gen UCP Advisor with even more automation brilliance.

OK, time for part 2. I'm back online after a few days zip-lining and mountain biking through redwood trees and along train tracks in northern California. As I was contemplating part 2, the biking time reminded me that the end game of infrastructure automation software is not too different. You want to spend your best quality bike time on the downhill, adrenaline-inducing, sweeping single track through the trees rather than on the mundane paved path up the mountain. In other words, let infrastructure work for you with automation rather than you tediously working the infrastructure, so you get a better ROI from your quality time.

IMG_20170630_103003958.jpg

 

In part 1 of this series, I started to peel back some of the well-known UCP Advisor features that our customers use when deploying our infrastructure automation software, while sharing some of the updates we made in the most recent UCP Advisor v1.2 release. In this blog, I want to touch on aspects of networking management, day 0 + day 90 administration, and cool integrated data protection features.

 

So, on to networking. I covered automating all the aspects of deploying storage datastores and compute ESXi hosts in the previous post, and I wanted to complete the third leg, the important networking management aspect. From an IP networking perspective, the two key aspects, I believe, are VLAN management and topology views. When you update the VLANs on your distributed virtual switches, UCP Advisor provides an automated facility to synchronize those VLANs to the top-of-rack and/or spine switches that make up your networking fabric. It also provides connectivity information so you can quickly determine the physical connectivity topology between ESXi hosts and the IP infrastructure. You can visualize some of this by clicking on the animated GIF below. Of course, firmware upgrade management, which I'll chat about in part 3, is included for the networking switches.

 

network2.gif

 

Circling back to day 0 type operations from an administration perspective, most environments do or will end up with multiple appliances, whether that's 30 satellite offices each with local needs or a datacenter with multiple UCP appliance pods for application, security and/or multi-tenancy requirements. UCP Advisor has a distributed model to manage multiple appliances from a single vCenter, including enhanced linked mode configurations (vSphere 6.5 is newly supported in the 1.2 release). Each appliance or logical configuration has a dedicated control VM (a small Win2k16-based CVM), which means scalability is limited only by the maximum number of ESXi hosts a vCenter can manage, 1,000 at last check. Each appliance or logical system can be quickly onboarded using a CSV configuration describing the appliance, and new infrastructure elements (e.g. adding a new chassis of compute on day 89) can be onboarded using the UI. The administration tab also covers aspects like setting the schedule for automated backup of infrastructure configuration components, specifically the IP network and FC device configurations.

 

admin.gif

 

Speaking of data protection, UCP Advisor provides integrated VM- and datastore-level operational backup and recovery capabilities whenever HDID software and its V2I component are detected as deployed. This is accessible through the data management services tab. With data protection moving to a snap-and-replicate model versus traditional backup, to meet both scalability and fast self-service recovery requirements, I think this is an important inclusion. The ability to have every newly deployed VM automatically protected, and to do full or granular recovery of VM data at the drop of a hat, is key when your users need it, especially if it's a multi-TB VM and time is money. The GIF visual shows some aspects of this, and there are more details on the VMware protection options in a previous blog I wrote a while back. For vSAN-based UCP HC, HDID offers VADP-based backup as well.

 

v2i.gif

In part 3, I'll freewheel home and close out by covering automated firmware management, physical workflow capabilities for bare-metal or custom infrastructure needs, and some of the vRO and PowerShell integrations available to further automate your cloud deployment with HDS UCP and UCP Advisor. Feel free to drop a comment or question on any aspect, or on what you would like me to cover in more detail.

We recently rolled out the latest release of UCP Advisor, v1.2, our flagship infrastructure automation software for converged, hyper-converged and standalone storage. In a previous blog, I included a longish voice-over video which rolled through the various features, but I thought I would take the opportunity to peel back the features in shorter bites while also referencing the latest value features introduced in version 1.2.

 

An essential element in converged automation is simplifying the operations and deployment of ESXi hosts, datastores and virtual to physical VLAN synchronization actions. These entities are what UCP Advisor calls virtual/logical resources. <Click animated GIF for visual>

vw.gif

Take the all-important datastore management, which traditionally involves multiple admin groups and many days to complete service tickets. UCP Advisor provides an intuitive interface and workflows for VMFS/NFS datastore creation and hides all the creation complexities: validation of FC zoning across multiple SAN switches, checking that the WWPNs of the ESXi host(s) are in active zones and storage host groups, performing storage LUN creation/masking, and finally attachment to the ESXi cluster, all in a single-click operation. Provisioning times are now under a minute. With the v1.2 release, we provide full end-to-end workflow support for iSCSI and NFS datastores as well.
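For context on what that single click replaces on the vSphere side alone, here is a hedged PowerCLI sketch of just the final rescan-and-mount step (cluster, datastore name and device ID are illustrative); UCP Advisor also handles the SAN zoning, host group and LUN masking work that comes before it.

$vmhosts = Get-Cluster 'Prod-Cluster' | Get-VMHost
$vmhosts | Get-VMHostStorage -RescanAllHba | Out-Null        # rescan so every host sees the new LUN
New-Datastore -VMHost $vmhosts[0] -Name 'UCP-VMFS-DS01' -Vmfs -Path 'naa.60060e800727200000302720000002aa'
$vmhosts | Get-VMHostStorage -RescanVmfs | Out-Null          # remaining hosts pick up the new VMFS volume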

 

But we have taken this a step further and also generate unique vCenter tags for the storage capabilities of the just-created VMFS datastore(s) using the associated HDS VASA Provider software (v3.4). Now the characteristics of that datastore (performance, availability, cost, encryption, etc.) are tagged and available for vSphere administrators to exploit in the vCenter policy-based management framework for provisioning operations, whether from vCenter or higher-level cloud automation. The vCenter tags also enable admins to quickly find all related objects, for example all datastores that match Tier 1 IOPS performance and provide data-at-rest encryption. Pretty cool SPBM for VMFS. <Click animated GIF for full visual>

ds-prov.gif
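As a quick illustration of that last point, a consumer or script can filter on those tags directly; a hedged PowerCLI example (tag names are illustrative) that lists datastores matching both capabilities:

Get-Datastore -Tag 'Tier 1 IOPS Performance' |
    Where-Object { (Get-TagAssignment -Entity $_).Tag.Name -contains 'Data At Rest Encryption' }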

As referenced earlier, UCP Advisor supports vSAN-based hyperconverged systems like UCP HC (with updated support for vSAN 6.6 in v1.2), converged infrastructure like UCP 2000 that uses compute and external storage, and a mode called Logical UCP which can manage flexible configurations including standalone storage. For vSAN-based UCP HC, UCP Advisor provides visibility into the health and capacity of the vSAN compute nodes' SSDs/HDDs that form the cache and capacity tiers of vSAN datastores, as well as visibility into unallocated devices. It also provides access to compute inventory, topology and operations such as boot order, power and LID operations, and most importantly firmware management, which I'll cover in a subsequent blog in this series. <Click on animated GIF for full visual>

 

vsan.gif

 

Speaking of ESXi compute nodes, UCP Advisor can also deploy new or unallocated ESXi compute nodes into ESXi clusters running on UCP 2000. It will surface the unallocated compute nodes in the UCP 2000 configuration (which are SAN-boot ESXi nodes), check and update the firmware of the node(s) to match the cluster, verify that the WWPNs on the new host are correctly configured in active SAN zones and, after deployment, ensure all existing VMFS and NFS datastores in the cluster are available and presented to the new node(s). Again, this dramatically reduces the time to use for new compute resources added to the environment, providing the turnaround times now expected in the age of public cloud computing.

<Click on animated GIF for full visual>

 

deploy server.gif

 

In part 2 of this series, I will cover aspects of networking management, onboarding administration, topology views and integrated data protection, with more in part 3.

I recently published a short video blog, Let's hear it - Introduction to UCP Advisor, which introduced new converged and hyper-converged infrastructure automation and delivery software from HDS. There was some great feedback, but as expected folks asked for more technical detail and an opportunity to see the product in action. With that, here is a 20+ minute video I put together which walks through the product, including compute, storage, network, data protection and advanced infrastructure management capabilities. As mentioned in the previous blog, the intent is to put infrastructure tasks within reach of the efficient fingertips of administrators to enable them to accelerate and manage the delivery of VM-based application services on that dynamic infrastructure.

 

 

Reminder: You can view the video on YouTube by selecting the icon at the bottom, but make sure the quality setting is 720p if it starts looking blurry.

 

<updated Video based on version 1.2 released in June 2017, here is link to 1.2 related blog>

With the announcement that UCP Advisor went GA today, I wanted to expand on the initial blog, Boom! - Rolling out UCP Advisor for Converged and Hyper-Converged - Infrastructure Orchestration Software, where we introduced this new offering for infrastructure orchestration for the UCP family, initially focusing on our rack-server-based converged and hyperconverged solutions. We are working on video snippets showing the solution in operation, but while we process those, here is a 10-minute verbal introduction to the use cases and features that UCP Advisor provides as part of our infrastructure automation strategy.

 

 

[update] Link to product demo Let's See the Demo - UCP Advisor in Action

 

https://www.hds.com/en-us/pdf/datasheet/hitachi-datasheet-ucp-advisor.pdf

More info @ http://www.hds.com/go/vmware

This week we rolled out the new Hitachi Management Automation Strategy to deliver modern infrastructure management software that enables customers to simplify and automate the delivery of infrastructure and application services supporting current and next-gen applications. As customers transform their IT architectures to enable a higher level of IT agility and support the business as part of digital transformation projects, the ability to automate infrastructure so IT can focus resources on business innovation becomes a key pillar of that strategy. As part of the announcement, we added the new Unified Compute Platform (UCP) Advisor to the growing portfolio of automation-focused software.

 

In a nutshell, UCP Advisor delivers simplified, smart converged management for individual and multiple UCP 2000 (rack-scale, rack-server-based converged) and UCP HC (hyper-converged) systems. With the recently expanded UCP system options now supporting deployments of 2-128 nodes with the UCP 2000 converged all-flash/hybrid systems, or 2-64 nodes with our UCP HC systems, the ability to automate the lifecycle of this infrastructure to support VMs becomes paramount.

 

So what's covered when we say "lifecycle of that infrastructure"? For the first release, we put the initial emphasis on the VMware ecosystem, so I'll use some examples from it, but some of the services are equally applicable to other ecosystems and/or roadmap items.

 

Essentially, when it comes to converged infrastructure supporting a virtualization or private cloud environment, there are three essential areas that are top of mind in most customer conversations:

  • Managing the lifecycle of software/firmware upgrades and the expansion of that infrastructure in a reliable, repeatable fashion consistent with change management best practices
  • Enabling administrators to manage and automate the delivery of infrastructure resources to support VM and application workloads
  • Monitoring and remediating that infrastructure with familiar toolsets.

 

I will cover some of these aspects in greater detail in subsequent blogs, but first some screenshots to convey what to expect from UCP Advisor when it becomes generally available in the coming weeks.

 

First, the deployment environment for converged/hyper-converged UCP systems varies and can often involve deployments in the datacenter plus multiple individual systems in remote branch offices. UCP Advisor can manage disparate UCP systems through vSphere, taking advantage of vCenter enhanced linked mode where necessary.

UCPA_multi-systems.png

 

UCP Advisor provides visibility to the infrastructure resources (compute, network and storage) of UCP systems and provides specific information and actions against that underlying infrastructure from within vSphere.

 

UCPA_summary_scrn.png

UCPA-networktopology-cropped.png

 

In addition to physical resource management actions, there is a set of virtual/logical resource actions exposed via the UI, PowerShell or API*. Some examples are deploying brand-new ESXi nodes from the infrastructure pool ready to consume VMs, provisioning datastores (FC, NFS or iSCSI) end to end, or synchronizing VLANs from distributed switches to uplink top-of-rack switches. These types of services are essential infrastructure services for customers layering on cloud architectures such as vRealize Automation (see Hitachi Enterprise Cloud) to enable a dynamic environment.

 

UCPA_summary_scrn with VR menu_cropped.png

UCPA-datastore provisioning.png

UCPA server.png

 

One of the harder challenges for infrastructure automation software is managing the firmware upgrade path of the underlying components that comprise a converged or hyper-converged system. UCP Advisor provides the necessary workflows to accomplish this directly from vSphere, ensuring availability of services as components across compute, network and storage are upgraded.

 

UCPA-firmware-cropped.png

 

We've also integrated data management services, starting with VM- or datastore-level hardware snapshot protection through optional (recommended) integration with Hitachi Data Instance Director virtual services, enabling administrators to cover that essential aspect of the infrastructure lifecycle. With the ability to back up and recover multi-TB VMs in real time, customers can reduce the risk of business impact from unforeseen events. Stay tuned for more in this area, as we'll exploit these hardware snapshots of production data to offer additional data management services supporting DevOps use cases.

 

UCPA-data protection-cropped2.png

So, a short introduction, but in subsequent blogs I'll expand on additional aspects of UCP Advisor, including its native vRealize Orchestrator integration for distributed automated operations, its vRealize Log Insight integration with the ability to analyze infrastructure logs for security intrusions, and the other value features we are enabling to deliver on our automation strategy and fast-track our customers toward the operational needs of their next-gen cloud architectures.

 

[update]: Follow-on Blogs

Let's hear it - Introduction to UCP Advisor

Let's See the Demo - UCP Advisor in Action

That's quite a long title, but it probably best describes an upcoming VMUG presentation I'm tagged to present on Nov 9th at the 2015 VMUG Virtual Event 3.0. I wanted to give you a sneak peek at some of the topic areas I'll cover so you can maximize your time and get involved in the session (or defer to another VMUG session, heaven forbid), plus an opportunity to prepare any questions you may have in this area.


The plot of the presentation is Software-Defined, Reliably Delivered High Availability / Disaster Recovery across Tier 1 Applications. It's really about guidelines and suggestions to avoid being part of this statistic, especially for your Tier 1 virtualized workloads: how can you effectively implement a solution that delivers operational recovery, DR and business continuity for those critical services?

factoid.png

operdrbc.png

 

It's focused on Tier 1 virtualized workloads, which typically run on enterprise storage platforms or, increasingly, on converged platforms such as Vblock, FlexPod or the newer-generation HDS UCP converged systems.

converged systems.png

I'll give some guidelines based on customer experiences (be it converged or straight-up storage platforms) on the types of solutions you should be looking at to effectively achieve operational recovery and DR scenarios, and spend some time on the new possibilities with stretched ESXi clusters, stretched storage and the holy grail of 3DC.

 

 

hdid local-remote snaps.png
3dc.png

 

I highly recommend registering for the 2015 VMUG Virtual Event 3.0. It has some great speakers and features live and recorded breakout sessions, online chat, virtual booths and downloadable resources. As they say, "the VMUG Virtual Event 3.0 promises to educate and entertain". Stop by our HDS session at 2:35 CT on Nov 9th on Software-Defined, Reliably Delivered High Availability / Disaster Recovery across Tier 1 Applications.

 

Blog posts (title / author / created):

Let's See the Demo - UCP Advisor in Action (Paul Morrissey, November 28, 2016)

Let's hear it - Introduction to UCP Advisor (Paul Morrissey, November 22, 2016)

Boom!- Rolling out UCP Advisor for Converged and Hyper-Converged - Infrastructure Orchestration Software (Paul Morrissey, October 11, 2016)

VMUG Virtual Event - Software–Defined & Reliably Delivered High Availability / Disaster Recovery across Tier 1 Applications (Paul Morrissey, November 5, 2015)

Latest V Integration - Hitachi Content Packs for VMware vRealize Log Insight – Compute and Storage (Paul Morrissey, October 14, 2015)

Automating VM Snapshot Backups and Restoring 1TB VMs in seconds with Hitachi Virtual Infrastructure Integrator - Part 2 (Paul Morrissey, June 30, 2015)

Hitachi and VMware Virtual Volumes - Part 3 (Paul Morrissey, March 26, 2015)

Hitachi and VMware Virtual Volumes - Part 2 (Paul Morrissey, March 18, 2015)

Hitachi and VMware Virtual Volumes - Part 1 (Paul Morrissey, March 16, 2015)

Shining a light on Hitachi Storage Adapter for VMware vC Ops (hmm vROPS) (Paul Morrissey, November 7, 2014)

Keeping VM and storage admins excited - updates from Hitachi Storage at VMworld Barcelona (Paul Morrissey, October 15, 2014)

Automating VM Snapshot Backups and Recovering 1TB VMs in seconds with Hitachi Virtual Infrastructure Integrator (Paul Morrissey, October 12, 2014)

Fearless: Hitachi, NFS and VMware; Scaling from 200 to 15,000 VMs with HNAS (Paul Morrissey, August 27, 2014)

Global Active "Datastore" with VSP G1000 GAD is now VMware Metro Storage Cluster certified (block and file) (Paul Morrissey, August 25, 2014)

No Worry Desktops, Save Money - VMware Mirage with Hitachi Data Systems (Paul Morrissey, August 20, 2014)

VMware Virtual Volumes (VVol) with Hitachi Storage and Converged Infrastructure (Paul Morrissey, July 27, 2014)

I am going to have to update our graphic on the rich set of VMware integrations we provide for both our Hitachi converged and storage platforms. The latest addition: the release of Hitachi Content Packs for VMware vRealize Log Insight (vRLI).

VM integrations.png

 

As you may know, VMware vRealize Log Insight delivers real-time log management for VMware environments, with machine-learning-based intelligent grouping, high-performance search and troubleshooting across physical, virtual and cloud environments. vRLI imports and analyzes logs to provide real-time answers to problems related to systems, services and applications, and to derive important insights. These logs are collected, analyzed through search filters and presented via dashboards that users can customize to their specific needs. Infrastructure logs are extremely rich in content that can be exploited to optimize and secure the environment.

 

Our content packs provide prebuilt dashboards and enable Log Insight to receive logs from Hitachi storage, compute and converged platforms. In short, admins configure each platform's syslog settings to point to the Log Insight IP so that it receives the logs. From there, you can import the Hitachi content packs, which will categorize all of your storage/server events onto Log Insight's widget dashboards. The objective of our content packs is to present a specific set of events in a layout that administrators and architects can easily understand and act on.
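To make that forwarding step concrete, here is a minimal Python sketch that sends a test event to a Log Insight syslog listener. The appliance address (192.0.2.10) and the message contents are hypothetical; in practice you would point each array or converged platform's own syslog settings at the same target rather than scripting it.

# Minimal sketch: send a test syslog message toward a vRealize Log Insight
# syslog listener. The host below is a placeholder; port 514/UDP is the
# standard syslog port that Log Insight listens on by default.
import logging
import logging.handlers

LOG_INSIGHT_HOST = "192.0.2.10"   # hypothetical vRLI appliance IP
SYSLOG_PORT = 514                 # standard syslog port

logger = logging.getLogger("hitachi.infra.test")
logger.setLevel(logging.INFO)

# SysLogHandler ships the record over UDP in syslog format.
handler = logging.handlers.SysLogHandler(address=(LOG_INSIGHT_HOST, SYSLOG_PORT))
handler.setFormatter(logging.Formatter("%(name)s: %(message)s"))
logger.addHandler(handler)

# A test event that should then show up in the interactive analytics view.
logger.info("authentication test event: LoginResult=Failure Userid=demo_user")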

 

Let's walk through a typical scenario with our new content pack.

 

Case: You want to use vRLI to proactively provide up-to-the-minute updates when the following security events occur:

 

  • Multiple failed login attempts into infrastructure resources
  • Malicious activity

 

You would like to see how many failed login attempts occur when a user tries to access an infrastructure resource, such as a storage array or converged platform, and then determine what that user previously accessed, i.e., is there malicious activity occurring in the infrastructure? These failed login attempts may simply be a user who forgot their password or kept entering it incorrectly, but they could also be a rogue external user forcing entry into the infrastructure. You can dig deeper and search on what users are actually doing once they log in and access infrastructure resources.

 

Let's step through it. The main dashboard highlights all the different events you can track: both the typical vCenter infrastructure resources plus the Hitachi infrastructure resources once the content pack is loaded.

main-dashboard.png

 

First, within the query field, you can do a search for "authentication" (i.e., we want vRLI to scan through all the Hitachi infrastructure logs and find the word "authentication").

1_event_Search.jpg

We now see all those log entries. You can start setting parameters to narrow down to a specific search, including the option to set an exact time frame. In this case I set the field to 7 days.

2_time_range.jpg
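For teams that prefer scripting, the same search can also be run against Log Insight's REST API rather than the UI. The sketch below is only an illustration under assumptions: the appliance address and credentials are hypothetical, and it assumes the /api/v1/sessions and /api/v1/events endpoints on port 9543 with a "text CONTAINS authentication" constraint limited to the last 7 days. Verify the exact constraint syntax against the API reference for your vRLI version before relying on it.

# Illustrative sketch: query Log Insight's REST API for events containing
# "authentication" over the last 7 days. Host, credentials and constraint
# syntax are assumptions; check your vRLI version's API documentation.
import requests

VRLI = "https://192.0.2.10:9543"    # hypothetical vRLI appliance
AUTH = {"username": "admin", "password": "changeme", "provider": "Local"}

# 1. Open a session and grab the bearer token.
session = requests.post(f"{VRLI}/api/v1/sessions", json=AUTH, verify=False)  # lab only: self-signed cert
token = session.json()["sessionId"]
headers = {"Authorization": f"Bearer {token}"}

# 2. Query events: text CONTAINS authentication, within the last 7 days.
seven_days_ms = 7 * 24 * 60 * 60 * 1000
constraints = f"text/CONTAINS authentication/timestamp/LAST {seven_days_ms}"
resp = requests.get(f"{VRLI}/api/v1/events/{constraints}",
                    headers=headers, params={"limit": 100}, verify=False)

for event in resp.json().get("events", []):
    print(event.get("timestamp"), event.get("text"))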

 

The screenshot below is a consolidated view. You get this by selecting the "Event Types" tab (boxed in orange on the left side); it combines all the related occurrences into separate event types rather than listing each event one by one. The VM admin now sees every log event related to "authentication" in their environment in graphical form, in the bar chart above the search query, which shows the number of "authentication" events in the time frame specified. The Fields section (highlighted in orange on the right side) displays the common categories in each event: fields such as appname, LoginResult and Userid are typical in a log entry containing the word "authentication". These categories are what make up the data on each dashboard widget.


3_Event_Type.jpg
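Conceptually, this grouping treats log messages that differ only in their variable fields (user names, IP addresses, numbers) as one event type. The toy sketch below is not vRLI's machine-learning algorithm, just a simple way to get a similar effect on exported log lines by masking the variable parts and counting the resulting templates; the sample lines and field names are invented for illustration.

# Toy illustration of event-type grouping: mask variable fields so that
# messages differing only in user, IP or number collapse into one template.
import re
from collections import Counter

lines = [
    "authentication failed for Userid=alice from 10.0.0.5",
    "authentication failed for Userid=bob from 10.0.0.9",
    "authentication succeeded for Userid=carol from 10.0.0.7",
]

def to_template(line: str) -> str:
    line = re.sub(r"Userid=\w+", "Userid=<user>", line)     # mask user ids
    line = re.sub(r"\d+\.\d+\.\d+\.\d+", "<ip>", line)      # mask IP addresses
    return re.sub(r"\b\d+\b", "<num>", line)                 # mask bare numbers

event_types = Counter(to_template(line) for line in lines)
for template, count in event_types.most_common():
    print(f"{count:3d}  {template}")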


With this real-time information on userids exhibiting unusual authentication events, we can dig deeper into the logs and create a search filter on what those users were actually doing when they logged in and accessed infrastructure resources, such as storage. The screenshot below is an example of a log summary dashboard showing users who were issuing delete commands under the category of storage configuration access. You can quickly determine visually which users have a high frequency of these activities in the time frame specified; the bar chart color-codes the different users who logged on. One nice option is that you can save this specific event type as a filter for future use.

5_Deleteprovisioning.jpg
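If you export those log entries (or pull them via the API as sketched earlier), the same per-user view is easy to reproduce offline. The snippet below is only a generic illustration of the analysis the dashboard performs: the Userid= field, the "delete" keyword and the sample lines are assumptions for the example, not a documented Hitachi log schema.

# Generic illustration: count delete-related storage configuration events
# per user from exported log lines, mirroring the dashboard's bar chart.
import re
from collections import Counter

sample_lines = [
    "2015-10-01T10:02:11 appname=storage-array Userid=alice op=delete ldev 00:4F",
    "2015-10-01T10:05:42 appname=storage-array Userid=bob op=create ldev 00:50",
    "2015-10-01T11:17:03 appname=storage-array Userid=alice op=delete ldev 00:51",
]

user_pattern = re.compile(r"Userid=(\w+)")
deletes_per_user = Counter()

for line in sample_lines:
    if "delete" in line.lower():
        match = user_pattern.search(line)
        if match:
            deletes_per_user[match.group(1)] += 1

# Highest-frequency users first.
for user, count in deletes_per_user.most_common():
    print(f"{user}: {count} delete events")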


Final Thoughts

There are many benefits to using vRealize Log Insight. Ideally, logs can and will be used proactively and in real time to provide early detection of security events, but there are many more possibilities for identifying infrastructure optimizations or brownouts. An effective log management tool should not require a whole team of dedicated resources and should not add high cost to the environment. vRLI solves many of these problems with an easy-to-use dashboard and search query, and Hitachi has augmented it with our content pack. This is another great integration that adds value for HDS customers alongside our other integrations.

 

The Hitachi Storage Content Pack for VMware vRealize Log Insight is a no-charge adapter and can be downloaded from portal.HDS.com or the VMware Solution Exchange.

 

For additional information on vRLI please visit:

http://www.vmware.com/products/vrealize-log-insight/

 

Need more information? Reach out to me or submit a request at http://www.hds.com/get-more-information/ , using "Content Packs for Log Insight" in the subject line.

 

I'd like to acknowledge the contributions of Andrew Robles, who worked closely with engineering on this blog post.