Valentin Hamburger

Unlock the power of UCP Director - Create vRA cloud services - Part 2

Blog Post created by Valentin Hamburger on Oct 13, 2015

Let's get down to business

In Part 1 we learned what UCP Director and VMware's vRealize can do together. Now let's get our hands dirty and actually
create a service for vRA leveraging UCP Director, shall we?

Step 1 – Preparation

[Image: Workflow overview.png]

  • First of all, we should get ourselves a copy of the UCP Director API reference guide. You can find it here
  • Make yourself familiar with vRealize Orchestrator (formerly known as vCenter Orchestrator); here is a fabulous blog from the
    vco team (strongly recommended read)
  • Keep a JavaScript reference at hand for the latest and greatest methods you can't remember

(Yes – vRO uses JavaScript to create what it calls “scripted tasks”)

  • Add UCP Director as a REST host to vRealize Orchestrator. This can be done by running the “Add a REST host” workflow in Orchestrator.
    You will find it here, as well as other useful vRO documentation.
  • A REST browser plugin. I personally favor the RESTClient add-on for Firefox, but any will do (needed for testing your REST calls).



OK – once you are prepared, let's start with the fun part – creating the workflow. First we should think about what we actually
need to do to make it work. I like to put together a short plan of what to achieve:

  1. Create a volume in the storage system
  2. Attach the volume to all ESXi host systems in a given cluster (incl. FC zoning and mapping)
  3. Format and mount the data store to all ESXi host systems in a given cluster


Let's get started by looking at the functions the UCP API guide provides. On page 58 you will find a method called
“Create and attach volume to cluster”. This function takes care of creating the volume, configuring the zoning, mapping the
volume and formatting it with VMFS. Then it attaches it to the
cluster we specify. So all you need to do is call this
UCP Director REST API function.


The REST call


How to issue the REST API call:

POST https://ucpmanagement.your.domain/api/clusters/domain-c1234/createandattachvolume HTTP/1.1
Content-Type: application/json; charset=utf-8


Request Body:

{
  "PoolId": "1",
  "VolumeSizeInBytes": 214748364800,
  "ShouldFormat": true,
  "StorageSystemId": "93040480",
  "StorageSystemPortIds": null,
  "VolumeName": "OurNewDS"
}



With this POST method we need to specify the parameters in the request body as shown above. Also, the “Content-Type” and
“Accept” headers should always be set to “application/json”.
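As a quick sketch of how such a request body could be assembled in a vRO scripted task before handing it to the REST plug-in (the variable names are my own, not part of the API):

```javascript
// Build the request body for "createandattachvolume" as a JavaScript
// object, then serialize it to the JSON string the REST call expects.
// The property names come from the API guide; the values are the
// example values used above.
var body = {
    "PoolId": "1",
    "VolumeSizeInBytes": 214748364800,   // 200 GB
    "ShouldFormat": true,
    "StorageSystemId": "93040480",
    "StorageSystemPortIds": null,        // null = use the default ports
    "VolumeName": "OurNewDS"
};

// The REST request wants the content as a plain JSON string.
var content = JSON.stringify(body);
```

In vRO you would then pass this string as the content of the POST operation against the UCP Director REST host.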


So far so good, we know now which inputs we need to call the function:

  1. The pool ID in which the volume should reside
  2. The volume size in bytes
  3. Whether it should be formatted with VMFS or not (if set to false, it will not be available as a data store to the cluster after creation,
     but it will still be mapped to the cluster. This might be useful for RDMs)
  4. The storage system ID
  5. The storage port IDs (can be left out – the system will use the default ports)
  6. The volume name


The user interaction: I decided the user only needs to choose three options

  1. The UCP Director to issue the call to
  2. The cluster to add the volume to
  3. The size of the data store in Gigabytes
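Since the user enters the size in gigabytes while the API expects bytes, the workflow has to convert the value; a one-line scripted task is enough (sizeInGB is simply the input parameter name I chose):

```javascript
// Convert the user-supplied size from gigabytes to bytes.
// 1 GB = 1024 * 1024 * 1024 bytes.
var sizeInGB = 200;
var volumeSizeInBytes = sizeInGB * 1024 * 1024 * 1024;
// 200 GB -> 214748364800 bytes, the value used in the request body above.
```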


Step 2 – Realize

OK – let's get the workflow bit done. This is actually done with vRealize Orchestrator. After you have created the workflow,
it can easily be made available in vRealize Automation through the “Extended Services” functionality (often referred to as “XaaS”).

If you are brand new to vRealize Orchestrator, I recommend this blog post on how to create your first workflow.

I will guide you through the scripting blocks and elements that are actually needed – but it might be worthwhile to spend a few minutes on how to use vRO itself.


Workflow parameters

Ok, first create a new workflow in vRO and give it a descriptive name, like "Create and Attach Volume to Cluster".

Now we need to specify the workflow inputs. This is done by clicking the "Inputs" tab of our newly created workflow:

[Image: WF inputs.png]

Please make sure to choose the matching type for each input (string, REST:RESTHost, etc.). I will explain their meaning within the next steps
while creating the workflow. Inputs are gathered automatically when the workflow is started – vRO will create an input window asking for
data for each parameter.


Also, you can pre-create the workflow attributes. You will find these on the “General” tab. Attributes are used to pass information between
workflow elements. You can either pre-create them or create them on the go as you need them.

This is how it looks in our workflow:

[Image: WF attributes.png]

Hint: always choose "Increase version" when you save your work. It makes your life easier if you need to
revert back to an earlier version for some reason. If you skip it, there is no "revert" available since you only have one version.



Let's get it on!

For safety reasons and to prevent errors, we will first check whether the specified data store name is free (remember: data store names must
be unique in vCenter). If it is not, the workflow will abort with the error: "Datastore name is already taken!"


In vRO this can simply be done by adding a “Custom Decision” and an “Action Item”.

The action is called “getAllDatastoresMatchingRegexp”.

All we need to provide is the data store name, and it comes back with an array of data stores that match.

The data store name is received from an input string variable we create called DSName.

Bind the DSName variable to the action item's regexp parameter on the icon's “IN” tab, and that's it.
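One thing to keep in mind: the action treats DSName as a regular expression, so a plain name may also match longer names that contain it. If you want an exact match only, it may be worth anchoring the pattern first – a minimal sketch in plain JavaScript:

```javascript
// Anchor the user-supplied name so the pattern matches the whole
// datastore name only, not substrings of longer names.
var dsName = "OurNewDS";
var pattern = "^" + dsName + "$";

// Plain-JavaScript illustration of what the anchored pattern matches:
var re = new RegExp(pattern);
re.test("OurNewDS");    // true  - exact match
re.test("OurNewDS-02"); // false - would have matched without the anchors
```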


The custom decision is a scriptable element which returns either true for the “success branch” or false for the “failure branch”.
In our case we simply add the following lines to the decision:

if (dsexists.length > 0) {

      throw "Datastore name is already taken!";

} else {

      return true;

}

The variable dsexists is returned by the former action and contains all data stores matching the given name. If this array is empty,
the workflow will continue.
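Outside of vRO, the same decision logic can be exercised with a plain JavaScript stand-in (checkDatastoreNameFree is a hypothetical helper of mine; matchingDatastores plays the role of dsexists):

```javascript
// Stand-in for the custom decision: matchingDatastores mirrors dsexists,
// the array returned by getAllDatastoresMatchingRegexp.
function checkDatastoreNameFree(matchingDatastores) {
    if (matchingDatastores.length > 0) {
        // A non-empty array means the name is in use - abort.
        throw "Datastore name is already taken!";
    }
    return true; // success branch - the workflow continues
}

checkDatastoreNameFree([]); // returns true - the name is free
```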


This concludes part 2 of this series. We now have a plan of what to do, a new workflow including all necessary parameters, and a first
"sanity check" before our action starts.


>> Stay tuned for part 3 where we will script the actual REST calls and bring all this together.


To see all this in action - take a look at the HDS booth P302 at VMworld and ask for UCP Cloud!