On 14th July, Hitachi Data Systems will be presenting a private webinar to VMware vExperts for 2015. Anyone who is a current vExpert will have received the invite.
The title is:
HDS Unified Compute Platform Director
The ultimate endpoint for vRealize Automation suite in the Software Defined Datacenter
I am lucky enough to have the mic for much of this presentation. Thanks to the brilliant VMware vExpert Community Manager Corey Romero for giving us the opportunity to present.
Now don't get me wrong: that doesn't mean we think we should keep our technology private. But given the nature of the subject matter, I believe it will resonate strongly with this audience. You can expect to hear a great deal more about this from a number of quarters as the private cloud marketplace matures, and we will be expanding the media through which we engage with customers and the market to be less formal and to make Subject Matter Experts more readily available.
Suffice it to say it will cover an area I am working a lot with today: the continuing growth of private cloud projects within our installed base, not just with VMware but with Microsoft too.
In that area I believe the way our converged systems (and other systems for that matter) were designed offers major advantages over the competition for customers.
I'm not talking about hardware, although I think it's fair to say that everyone in this industry accepts that few can engineer better than Hitachi.
No... this is entirely about software, and has been for some time.
Some commentators at hyper-converged (hardware) vendors like Nutanix talk about vendors publishing x million IOPS as part of some kind of macho competition. That's just FUD.
I just look at it like this:
Hitachi VSP GXXXX are the fastest, most reliable, most trusted and best engineered systems there are. HDS walks the walk where many vendors just talk the talk. If you can't walk the walk then you're not really in a position to criticise.
However, in the real world, if you talk to any HDS engineer in the field, they will tell you they don't spend all day talking about how many IOPS we support versus competitors, because they know customers don't care. Customers want to deal with a vendor who can accurately size the footprint needed to run their workloads.
It's the same with the UCP Director story for VMware vRA...
What am I talking about?
When we talk about Unified Compute Platform Director, most people probably think we mean hardware.
We have a product SKU called UCP, which is a reference architecture, like a vBlock or a FlexPod. That's fine: you can have that if you want and do what you like with it. However, UCP Director is more than that! It's a reference architecture with a dose of HDS secret sauce added, and we call that combination UCP Director. The Director part is a software orchestration layer that lives within vCenter (and System Center).
So you can provision, operate and troubleshoot all the elements of the Hitachi converged stack right there.
To relate this to the Infrastructure qualities, this is about:
- Massively reducing manageability overhead
- Increasing availability
- Increasing performance
- Improving recoverability
- Improving security
UCP Director was designed from Day One with the following (far-seeing) attributes:
- Feature parity between vSphere vCenter and Microsoft System Center for full orchestration of storage, networking and servers, as well as centralised monitoring and management.
- Where other converged stacks may resemble a loose reference architecture, UCP Director is a coherent software solution.
- Feature parity between UI (vCenter) and CLI
  - Day 1 design feature.
- Feature parity between UI and API on all platforms
  - Day 1 design feature.
- Support for multiple different network platforms (Brocade and Cisco) to offer customers choice.
  - Day 1 design feature.
- Support for new or existing HDS storage to offer investment protection for customers.
  - This is also thanks to the ability to virtualise existing storage for performance isolation and control, using resource groups and software/hardware partitioning.
UCP Director ships with a full software suite installed by HDS and HDS partners that includes vCenter, SRM, Hitachi Command Suite, Compute Systems Manager, Tuning Manager, Windows Deployment Server, WSUS. So you can manage the entire stack inside vCenter and in effect it can be seen as an appliance (if that's how you like to consume it).
The difference between this and raw hyper-converged systems could be seen as the ability to tune all the knobs consistently, without having to prescribe different server/flash configurations to meet varying requirements.
Here are some of the form factors you can have:
Endpoints and the Cloud
So when we talk about vRA and endpoints, UCP Director can be shipped in form factors from 2 to 128 blades with any storage combination you can think of. When you add capacity (blades, storage), Director continues to monitor it and pass telemetry directly into the vCenter database, as well as passing all events and tasks to the vCenter logs. So there is nothing to do. It just works!
Director orchestrates everything regardless of form factor or hypervisor, and you can drive all of it via an API. That is what allows consumption via a Service Catalog/Portal using solutions such as vRA.
That's why it matters so much for vRA. Let me give you an example of what I mean. With two REST calls to the UCP Director API:
- Call 1: Create an ESXi HA-cluster (with server hardware profile, ESXi image and host profile) using a service template, a UCP Director construct.
- Call 2: Attach storage to all nodes (LUN creation, masking, mapping, zoning based on workload, rescanning for VMFS, datastore creation, etc.)
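To make the two-call workflow concrete, here is a minimal Python sketch of what an orchestration layer (vRO, vRA, or a plain script) might send. The base URL, endpoint paths and payload field names are hypothetical, invented for illustration; the real UCP Director API documentation defines the actual resources and schemas.

```python
import json

# Hypothetical base URL for illustration only -- consult the UCP Director
# REST API documentation for the real paths and payload schemas.
BASE_URL = "https://ucp-director.example.com/api"


def build_create_cluster_request(cluster_name, service_template):
    """Call 1: create an ESXi HA-cluster from a service template
    (the UCP Director construct bundling the server hardware profile,
    ESXi image and host profile)."""
    return {
        "method": "POST",
        "url": f"{BASE_URL}/clusters",
        "body": {"name": cluster_name, "serviceTemplate": service_template},
    }


def build_attach_storage_request(cluster_name, capacity_gb):
    """Call 2: attach storage to every node in the cluster. Behind this
    single call the platform handles LUN creation, masking, mapping,
    zoning, the VMFS rescan and datastore creation."""
    return {
        "method": "POST",
        "url": f"{BASE_URL}/clusters/{cluster_name}/datastores",
        "body": {"capacityGb": capacity_gb},
    }


if __name__ == "__main__":
    # The whole provisioning workflow reduces to these two requests.
    for req in (
        build_create_cluster_request("prod-cluster-01", "gold-esxi-template"),
        build_attach_storage_request("prod-cluster-01", 2048),
    ):
        print(req["method"], req["url"])
        print(json.dumps(req["body"], indent=2))
```

The point is not the specific field names but the shape of the interaction: two coarse-grained calls replace what would otherwise be dozens of manual steps across server, SAN and storage teams.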
What about vRealize Orchestrator?
For our part, we are offering choice to facilitate the holy grail for "IT as a Service Provider": separating provisioning from consumption, massively reducing MTTR, and ending the terrible historical separation between managing virtual machines and managing the underlying infrastructure.
We have already had compute and storage plugins for vRO (and previously vCO), but why operate independently on servers, storage and networking when you can manage them all via the UCP API?
So customers can decide how they use it. The fact that these APIs are Day 1 design features will be the reason many customers choose HDS for their vRA implementations over other vendors.
If you want to enable the cloud, give yourself the option to orchestrate the entire stack without having to build your own tools. For that there is no better solution for vRA than UCP Director. And if you think you have to go hyper-converged for a one-click upgrade, how does the two-call API scenario above sound?
Come talk to us later this year at VMworld or at your local VMUG, where you'll see us popping up more and more. Until then...