Test Drive of Hitachi Platform (PF) RestAPI ( example scripts attached )


    Posted 06-20-2023 13:02


    ##################################################################
    Goals of my Hitachi RestAPI "Test Drive"
    ##################################################################


    - Learn and evaluate the Hitachi SVOS RestAPI ( PF Rest ) that is
      built into newer Hitachi block-storage appliances ( F/G and E series )

    - Build a simple proof-of-concept ( POC ) storage provisioning tool that
      allows the busy storage administrator to:

         1. Simply and precisely specify the desired appliance configuration
            or configuration change by creating and editing clear, simple
            plain-text JSON files.

         2. Run a process that automates the details needed to implement the
            configuration, as specified by the simple JSON files,
            on the block-storage appliance ( array ).

    - Use the POC tool to configure my new SVP-less E590.
      Why would I want to do that? Because I was disappointed by the
      lack of ease and power provided by the available tools
      ( Ops Center and Storage-Advisor-Embedded ).

      It has become my view that the best appliance management tools are both simple
      ( to install, use and maintain ) AND powerful ( they allow 100% of the appliance's
      functionality to be controlled in a manner which is easy, efficient and safe ).
      I found Ops Center to be big, complicated, opaque and not very good at making
      my job easier. Storage-Advisor-Embedded lacks essential basic functionality.


    ##################################################################
    Overview of results
    ##################################################################


    - While not commercial-grade code, my proof-of-concept
      ( POC ) storage provisioning tool works as intended:
       - The human ( me ) provides a high-level specification of the desired configuration
       - The scripts automate the implementation details of this configuration


    - The HOST-GROUP-TRUNK and HOST-COLLECTION ideas used in the POC are
      commercial-grade ideas because, unlike SERVER objects ( in Ops Center and
      Storage-Advisor-Embedded ), they facilitate both automation and flexibility,
      and they work better with the array's underlying Host-Group based scheme.


    - The built-in RestAPI is quite good. ( I'm impressed ... nice work, Hitachi! )

    ##################################################################
    Important Terms
    ##################################################################


       REAL CONFIGURATION STRUCTURE: 

       A REAL CONFIGURATION STRUCTURE is an entity that is "built-in" to
       Hitachi block storage appliances ( arrays ). For example:
       DP-LDEVs, Basic/Pool-LDEVs and HOST-GROUPS are REAL CONFIGURATION STRUCTURES.


       SYNTHETIC CONFIGURATION STRUCTURE:  

       A SYNTHETIC CONFIGURATION STRUCTURE is a configuration data structure that is NOT
       ( currently ) "built-in" to Hitachi block storage appliances ( arrays ).
       SYNTHETIC CONFIGURATION STRUCTUREs exist only in high-level tools:
            - Storage Advisor Embedded
            - Ops Center Administrator
            - my RestAPI scripts.

       Hitachi Ops Center "Server Objects" are an example of a SYNTHETIC CONFIGURATION STRUCTURE.
       The intention of "Server Objects" was to hide the granular details of Host-Group management.
       However, "Server Objects" were too simplistic and removed the important
       LDEV pathing/provisioning flexibility that direct HOST-GROUP management provided.
       Hitachi made two attempts to address this weakness: (A) The "rat's-nest" interface,
       which allows MacGyver to *manually* "rewire" the actual host-groups by hand ( hours of
       error-prone fun when you have 8 CHB ports and 16 VMware hosts ); (B) A new tabular
       host-group control. The bottom line is that you either use SERVER objects "as is" and
       lose key flexibility, or you use the clunky, error-prone work-around tools and forgo
       any automation. Not a great choice!

       In my RestAPI Python scripts, I define 2 new SYNTHETIC CONFIGURATION STRUCTUREs:
       The HOST-GROUP-TRUNK and the HOST-COLLECTION. These items allow management of
       the granular port-level details of array native host-groups to be fully automated
       ( reducing operator burden and human errors ) while at the same time providing the same
       flexible LDEV path assignments as direct "by-hand" SVP-StorageNavigator HOST-GROUP management.

     

       HOST-GROUP ---------------------------------------------------------------:

       A HOST-GROUP is a REAL CONFIGURATION STRUCTURE that has these characteristics:

         [] Associates ONE array port with a Host-Group-Number.
            A Host-Group's unique ID is officially: ( portId, Host-Group-Number )

         [] Holds Client-OS / mission-specific parameters, specified by host-mode and host-mode-options

         [] Is associated with a single client-host port of one or multiple client-hosts.
            For example: associates array port "CL3-A" with the B-fabric ports of
            VMware hosts ENGESX01 and ENGESX02

         [] Allows a unit of array storage capacity ( a DP-LDEV ) to be presented via ONE path
            to one or more client hosts.

       HOST-GROUP-TRUNK ---------------------------------------------------------:

       A HOST-GROUP-TRUNK is a SYNTHETIC CONFIGURATION STRUCTURE that corresponds to a
       set of Host-Groups. Not just any set of Host-Groups, but a fully redundant set.
       By "fully redundant" I mean controller and fabric redundant.
       In the first generation of my scripts, a HOST-GROUP-TRUNK corresponds to a set of 4 Host-Groups.

       HOST-COLLECTION ----------------------------------------------------------:

       A HOST-COLLECTION is a SYNTHETIC CONFIGURATION STRUCTURE that contains key information
       about a set of client-hosts. The set of hosts can contain one host or
       multiple related hosts ( a cluster ).
       HOST-COLLECTIONs are a replacement for both SERVER and SERVER-GROUP objects.

       GENERAL-NOTEs ----------------------------------------------------------:

       The HOST-COLLECTION ( along with the Array Port Layout Map, explained below ) is used by
       the POC automation scripts to automatically create and manage HOST-GROUPs. Each of the
       auto-created HOST-GROUPs will have a HOST-GROUP-TRUNK tag.

       The HOST-GROUP-TRUNKs ( actually the HOST-GROUP-TRUNK tags ) facilitate
       flexible and automated assignment of LDEV paths, so that:

          - All LDEVs are auto-presented with fully redundant paths
          - We utilize all array port resources
          - We control and limit the total number of presentation paths
            to avoid hitting O/S total path count limits
          - All physical hosts in a VMware cluster will see a given DP-LDEV
            via the same HOST-GROUP-TRUNK ( aka the same set of HOST-GROUPs )
              - All hosts in a VMware cluster will be "connected to"
                all of the cluster's HOST-GROUP-TRUNKs
              - Each LDEV is only presented via one HOST-GROUP-TRUNK
              - If you have 10 LDEVs and there are 2 HOST-GROUP-TRUNKs
                associated with a VMware cluster, 5 LDEVs will be presented
                via one HOST-GROUP-TRUNK and 5 LDEVs via the other
                ( see the sketch below )
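
       To make that distribution rule concrete, here is a minimal Python sketch
       ( illustrative only; the attached scripts may implement this differently )
       of spreading LDEVs round-robin across a cluster's HOST-GROUP-TRUNKs:

       # Spread LDEV IDs evenly ( round-robin ) across the trunks that are
       # associated with one host-collection ( e.g. a VMware cluster ).
       def assign_ldevs_to_trunks(ldev_ids, trunk_ids):
           """Return {ldevId: hostGroupTrunkId} with an even round-robin spread."""
           return {ldev: trunk_ids[i % len(trunk_ids)]
                   for i, ldev in enumerate(ldev_ids)}

       # 10 LDEVs over 2 trunks -> 5 LDEVs per trunk, as described above
       plan = assign_ldevs_to_trunks(range(1024, 1034), ["t1", "t2"])
       for ldev, trunk in plan.items():
           print(f"LDEV {ldev} -> trunk {trunk}")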

    ##################################################################
    HOST-COLLECTION (file: engesx-hostcol.json )
    ##################################################################

    When the storage administrator needs to provide storage
    to a new host or a new set of hosts ( a cluster ), the storage administrator
    creates a new HOST-COLLECTION. The HOST-COLLECTION is specified
    via a JSON text file. A sample host-collection file is shown below.

    The "trunkSetType" and "automationType" parameters are intended to
    control the behavior of the script automation. In my current simple
    ( but very functional ) POC automation scripts, I don't make much use
    of these parameters. I simply make sure that the "trunkSetType" parameters
    in the HOST-COLLECTION and the ARRAY-PORT-LAYOUT-MAP match.
    The reason I defined these parameters is to help handle special hosts ( like HNAS ).

    HOST-COLLECTION files are intended to be array-independent and usable
    for automated management across multiple arrays.
    Routine tasks, such as adding new hosts to, or removing old hosts from,
    host-groups on multiple arrays, should be easy, efficient and free
    from human error.


    The purpose of the hostGroupNumMap section of the HOST-COLLECTION file
    is to account for the fact that the hostGroupNumbers associated with a
    host-collection will often not be the same on all the arrays providing
    storage to the HOST-COLLECTION.
    The hostGroupNumMap section is key to multi-array management.


    The JOBACTION flag for a given hostName controls what action will
    be applied to that host's WWNs when the HG-WWN management application script runs
    ( a sketch of this dispatch appears after the example file below ).
    Possible values for JOBACTION:

       non :  Do nothing
       add :  Add the host's WWNs to the relevant host-groups
       del :  Delete the host's WWNs from the relevant host-groups

    Under HOSTS, the A-fabric and B-fabric host HBA WWPNs
    are given the simple labels "A" and "B", rather
    than more descriptive labels. Doing this matches
    the Array Port Layout Map's sanFabricId values and makes
    the code simpler and more reliable.

    In the future, HOST-COLLECTION files will be created via a script
    which processes the host WWPN data acquired by the host's sys-admin.
    Doing this will eliminate WWN typo errors.


    example HOST-COLLECTION: (file: engesx-hostcol.json )

    {
       "hostCollectionName": "engesx",
       "hostMode": "VMWARE_EX",
       "hostModeOptions": [54,63,114],
       "trunkSetType": "general",
       "automationType": "general",
       "hostGroupNumMap":
       [
          {"serialNumber": 987654, "hostGroupNumber": 1},
          {"serialNumber": 101010, "hostGroupNumber": 52}
       ],
       "JOBACTIONS": {"doNothing": "non", "addWwnToHg": "add", "delWwnFromHg": "del"},
       "HOSTS":
       [
          {
             "hostName": "engesx01",
             "A": "51402ec012cffb3a",
             "B": "51402ec012cffa80",
             "JOBACTION": "non"
          },
          {
             "hostName": "engesx02",
             "A": "51402ec012cffc3a",
             "B": "51402ec012cfffb4",
             "JOBACTION": "non"
          },
          {
             "hostName": "engesx03",
             "A": "51402ec012cffcc0",
             "B": "51402ec012cffb4c",
             "JOBACTION": "non"
          }
       ]
    }
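
    For illustration, here is a minimal sketch of how an application script
    might read this file and decide, per host, what to do. The function names
    are mine, for illustration only; the attached APP-hg-WWN-mgr.py may be
    organized differently:

    import json

    def load_host_collection(path):
        with open(path, "r") as f:
            return json.load(f)

    def plan_wwn_jobs(hostcol):
        """Yield ( action, hostName, fabricId, wwn ) for hosts not marked 'non'."""
        for host in hostcol["HOSTS"]:
            action = host["JOBACTION"]
            if action == "non":           # "non" means do nothing
                continue
            for fabric_id in ("A", "B"):  # labels match sanFabricId in the CHB map
                yield action, host["hostName"], fabric_id, host[fabric_id]

    hostcol = load_host_collection("engesx-hostcol.json")
    for job in plan_wwn_jobs(hostcol):
        print(job)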


    ##################################################################
    Array Port Layout Map ( aka "CHB MAP" ) hdsfx987654-chbmap.json
    ##################################################################

    After the array is purchased and delivered, the storage
    administrator reviews the array's Controller / CHB / Port
    layout and creates the Array Port Layout Map as
    a JSON text file ( arrayName-chbmap.json ).

    Each fully redundant set of CHB ports is assigned a
    "hostGroupTrunkId".

    The Array Port Layout Map ( aka CHB MAP ) describes
    the layout of the array and its connections to the SAN.
    It does not contain any host information.

    In the next version of my scripts, the Array Port Layout Map
    will include array port WWNs so that SAN switch ALIAS and
    Peer Zone creation can be automated.


    {
       "serialNumber": 987654,
       "arrayName": "hdsfx987654",
       "trunkSetType": "general",
       "hostGroupTrunks":
       [

          {
             "hostGroupTrunkId": "t1",
             "sanFabricPortSets":
             [
                { "sanFabricId": "A", "arrayPortRecList": [ {"arrayCtlId": 1, "portId": "CL1-A"}, {"arrayCtlId": 2, "portId": "CL2-A"} ]},
                { "sanFabricId": "B", "arrayPortRecList": [ {"arrayCtlId": 1, "portId": "CL3-A"}, {"arrayCtlId": 2, "portId": "CL4-A"} ]}
             ]
          },

          {
             "hostGroupTrunkId": "t2",
             "sanFabricPortSets":
             [
                { "sanFabricId": "A", "arrayPortRecList": [ {"arrayCtlId": 1, "portId": "CL5-A"}, {"arrayCtlId": 2, "portId": "CL6-A"} ]},
                { "sanFabricId": "B", "arrayPortRecList": [ {"arrayCtlId": 1, "portId": "CL7-A"}, {"arrayCtlId": 2, "portId": "CL8-A"} ]}
             ]
          }

       ]
    }
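
    To show how the CHB map and a HOST-COLLECTION's hostGroupNumMap work
    together, here is a sketch ( function names are mine, not from the zip )
    that resolves one trunk into its concrete ( portId, hostGroupNumber )
    host-group IDs for one array:

    import json

    def host_groups_for_trunk(chbmap, hostcol, trunk_id):
        serial = chbmap["serialNumber"]
        # Which host-group number does this collection use on this array?
        hg_num = next(m["hostGroupNumber"]
                      for m in hostcol["hostGroupNumMap"]
                      if m["serialNumber"] == serial)
        trunk = next(t for t in chbmap["hostGroupTrunks"]
                     if t["hostGroupTrunkId"] == trunk_id)
        return [(rec["portId"], hg_num)
                for fps in trunk["sanFabricPortSets"]
                for rec in fps["arrayPortRecList"]]

    with open("hdsfx987654-chbmap.json") as f:
        chbmap = json.load(f)
    with open("engesx-hostcol.json") as f:
        hostcol = json.load(f)

    # For trunk "t1" this prints the four redundant host-group IDs:
    # [('CL1-A', 1), ('CL2-A', 1), ('CL3-A', 1), ('CL4-A', 1)]
    print(host_groups_for_trunk(chbmap, hostcol, "t1"))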

    ##################################################################
    DP LDEV JSON FILE  hdsfx987654-dpldevs.json
    ##################################################################

    The purpose of the DP LDEV JSON FILE is:

    A. describe the properties of each DP LDEV
    B. specify the HOST-COLLECTION that each DP LDEV will
       be presented to
    C. specify which fully redundant trunk will be used
       to present the DP-LDEV to a HOST-COLLECTION
    D. specify other items ( for example : how much an LDEV should
       be expanded by )

    For this POC project, the DP LDEV JSON FILE
    is not guaranteed to be a representation of the array's current state.
    For example: after an LDEV expansion operation, the DP LDEV JSON FILE's
    "byteFormatCapacity" value will not be updated.
    For the POC, the DP LDEV JSON FILE is more of a "task director"
    than a holder of state. 

    {
       "serialNumber": 987654,
       "arrayName": "hdsfx987654",
       "JOBACTIONS": {"DoNothing":"non","AddNewLDEV":"add","DeleteOldLDEV":"del","ExpandLDEVbyX":"exp"},
       "dpLdevs":
       [


          {
             "JOBACTION": "non",
             "CoreParams": {
                              "ldevId": 1025,
                              "poolId": 0,
                              "byteFormatCapacity": "2T",
                              "dataReductionMode": "disabled",
                              "isParallelExecutionEnabled": false
                           },
             "PathParams": {
                              "hostCollectionName": "engesx",
                              "hostGroupTrunkId": "t2",
                              "lun": 1
                           },
             "MiscParams": {
                              "additionalByteFormatCapacity": "1K"
                           }
          },

          {
             "JOBACTION": "non",
             "CoreParams": {
                              "ldevId": 1024,
                              "poolId": 0,
                              "byteFormatCapacity": "2T",
                              "dataReductionMode": "disabled",
                              "isParallelExecutionEnabled": false
                           },
             "PathParams": {
                              "hostCollectionName": "engesx",
                              "hostGroupTrunkId": "t1",
                              "lun": 0
                           },
             "MiscParams": {
                              "additionalByteFormatCapacity": "1K"
                           }
          }


       ]
    }
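
    In "task director" style, an application script can simply walk the
    dpLdevs list and dispatch on each JOBACTION. A minimal sketch follows;
    the handler functions below are placeholders, NOT the code from the
    attached APP-dpLdev-add.py:

    import json

    def create_and_present_ldev(core, path_params):   # placeholder handler
        print(f"add LDEV {core['ldevId']} via trunk {path_params['hostGroupTrunkId']}")

    def expand_ldev(ldev_id, extra_capacity):         # placeholder handler
        print(f"exp LDEV {ldev_id} by {extra_capacity}")

    def delete_ldev(ldev_id):                         # placeholder handler
        print(f"del LDEV {ldev_id}")

    def run_dpldev_jobs(path, log_only=True):
        with open(path) as f:
            cfg = json.load(f)
        for ldev in cfg["dpLdevs"]:
            action = ldev["JOBACTION"]
            core = ldev["CoreParams"]
            if action == "non":      # "non" -> leave this LDEV alone
                continue
            if log_only:             # dry run: log only, do not touch the array
                print(f"DRY RUN: would '{action}' LDEV {core['ldevId']}")
            elif action == "add":
                create_and_present_ldev(core, ldev["PathParams"])
            elif action == "exp":
                expand_ldev(core["ldevId"],
                            ldev["MiscParams"]["additionalByteFormatCapacity"])
            elif action == "del":
                delete_ldev(core["ldevId"])

    run_dpldev_jobs("hdsfx987654-dpldevs.json")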

    ##################################################################
    Script Stack "Architecture"
    ##################################################################

    The POC "Code Stack" is:

       [Task-Director-Batch-Files]
       [Application-modules]
       [single-purpose-mini-modules]
       [HitRestBase]


    The script HitRestBase.py contains common code.
    It takes care of the RestAPI calls to the array
    using the stock Python urllib ( *** no 3rd-party Python
    packages are used anywhere in the stack ... only the
    latest stock Python 3 distro is needed ... nothing else *** ).
    HitRestBase.py also takes care of
    Hitachi RestAPI session and lock management.
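
    For flavor, here is a minimal sketch of a session-based call using only
    the stock urllib, in the spirit of HitRestBase.py ( this is NOT the
    attached code; the URL pattern follows the Hitachi REST API documentation,
    so verify it against your array model and firmware ):

    import base64
    import json
    import ssl
    import urllib.request

    # Arrays commonly present self-signed certificates; for a lab test drive
    # we skip verification ( do not do this in production ).
    CTX = ssl.create_default_context()
    CTX.check_hostname = False
    CTX.verify_mode = ssl.CERT_NONE

    # base_url is something like:
    #   https://<ctl-ip>/ConfigurationManager/v1/objects/storages/<storageDeviceId>
    # ( check the REST API manual for your array )

    def open_session(base_url, user, password):
        """POST <base_url>/sessions with Basic auth; return (token, sessionId)."""
        req = urllib.request.Request(base_url + "/sessions", data=b"", method="POST")
        cred = base64.b64encode(f"{user}:{password}".encode()).decode()
        req.add_header("Authorization", "Basic " + cred)
        req.add_header("Content-Type", "application/json")
        with urllib.request.urlopen(req, context=CTX) as resp:
            body = json.loads(resp.read())
        return body["token"], body["sessionId"]

    def get_json(base_url, token, path):
        """GET an object collection using the session token."""
        req = urllib.request.Request(base_url + path)
        req.add_header("Authorization", "Session " + token)
        with urllib.request.urlopen(req, context=CTX) as resp:
            return json.loads(resp.read())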

    The very small single-purpose-mini-modules
    ( DpLdevMgr.py, GetArrayInfo.py, HgEdit.py, HGlist.py, hgWwn.py )
    contain task-specific code which calls HitRestBase.py to perform
    a simple task ( for example: create a single host-group or create a single DP-LDEV ).

    Both HitRestBase.py and the single-purpose-mini-modules are used by
    application-level scripts which read the JSON files ( described earlier )
    and accomplish the automation goals of the POC project.

       APP-dpLdev-add.py : DP-LDEV creation, expansion, presentation
       APP-hg-create.py :  Create host-groups
       APP-hg-WWN-mgr.py:  Add and remove WWNs from host-groups

    Each APP module has a corresponding Task-Director-Batch-File
    ( JOB-RUN-app-dpLdev-add.bat, JOB-RUN-app-hg-create.bat, JOB-RUN-app-hg-wwn-mgr.bat ).
    These batch files specify which array and host-collection JSON files will be used by the
    application module. Task-Director-Batch-Files also have some safety features
    that facilitate performing "dry runs" ( log-only runs )
    before actually modifying an array.
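
    One plausible way ( illustrative only; the variable names are mine, and the
    attached batch files may use a different mechanism ) for an APP module to
    pick up what its Task-Director batch file sets:

    # The batch file might do something like:
    #   set ARRAY_NAME=hdsfx987654
    #   set HOST_COLLECTION=engesx
    #   set LOG_ONLY=YES
    #   py -3 APP-hg-create.py
    import os

    ARRAY_NAME = os.environ.get("ARRAY_NAME", "hdsfx987654")
    HOST_COLLECTION = os.environ.get("HOST_COLLECTION", "engesx")
    LOG_ONLY = os.environ.get("LOG_ONLY", "YES").upper() == "YES"  # dry run by default

    if LOG_ONLY:
        print(f"DRY RUN: no changes will be made to array {ARRAY_NAME}")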

    The utility module clsLogger.py
    writes detailed info to
    C:\Users\smith\Documents\ArrayMgrv1\arrays\ARRAYNAME\job-logs


    ##################################################################
    Test driving the scripts and learning how they work
    ##################################################################


    Unzip the POC file-set ( ArrayMgrv1.zip ), attached to this post,
    to your Documents directory on your Windows PC.
    After doing this ( if your username was "smith" ), you will have this
    new subdirectory: C:\Users\smith\Documents\ArrayMgrv1

    Edit the JSON files in the "host-collections" and "arrays" subdirectories
    to reflect your test array and your test host collection.

    Rename this directory to match the array name specified in the .json files:
    C:\Users\smith\Documents\ArrayMgrv1\arrays\hdsfx987654
    Also rename these JSON files to reflect the name of your test array:
      - hdsfx987654-chbmap.json
      - hdsfx987654-corecfg.json
      - hdsfx987654-dpldevs.json
    Just replace the prefix string "hdsfx987654" with the name of your test array.
    The array file prefix string, the array sub-directory name and the arrayName in the
    JSON files must match!

    Rename this directory to match the host-collection name specified in the .json file:
    C:\Users\smith\Documents\ArrayMgrv1\host-collections\engesx
    Also rename the file engesx-hostcol.json to reflect the name of
    your host collection. The host-collection file prefix string,
    the host-collection sub-directory name and the hostCollectionName in the
    JSON file must match! ( A small consistency-check sketch follows. )
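
    A small consistency-check idea ( not included in the zip ) to catch
    array-naming mismatches before running any APP module; the same idea
    applies to the host-collection directory and file:

    import json
    from pathlib import Path

    def check_array_naming(array_dir):
        array_dir = Path(array_dir)
        name = array_dir.name                    # e.g. "hdsfx987654"
        for suffix in ("-chbmap.json", "-corecfg.json", "-dpldevs.json"):
            f = array_dir / (name + suffix)
            if not f.exists():
                raise FileNotFoundError(f"expected file: {f}")
            data = json.loads(f.read_text())
            if data.get("arrayName", name) != name:
                raise ValueError(f"{f}: arrayName does not match {name!r}")
        print(f"naming OK for array {name}")

    check_array_naming(r"C:\Users\smith\Documents\ArrayMgrv1\arrays\hdsfx987654")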


    Edit the variables in the JOB-RUN bat files located in the ArrayMgrv1 directory.

    Install the latest Python 3.x distribution. ( default install )

    If your workstation has only Python 3.x installed, then edit the JOB-RUN bat files
    and replace the string "py -3" with the string "python".
    ( My Windows workstation has both Python 2 and Python 3, so I need to use the "py launcher";
      that may not be the case with your workstation. )


    For the purposes of learning and unit-testing, both HitRestBase.py and
    the single-purpose-mini-modules can be run directly from the command line.
    I recommend doing this before experimenting with the "APP" level modules.

    Run the base module in "unit-test mode":
      with the py launcher: py -3 HitRestBase.py
      or
      without the py launcher: python HitRestBase.py
    Follow the prompts, then review the log output under
    C:\Users\smith\Documents\ArrayMgrv1\arrays\ARRAYNAME\job-logs

    If you have questions, just post them to this thread.

    Thanks

    Andy Romero



    ------------------------------
    Andrew Romero
    Storage Administrator
    Fermi National Accelerator Laboratory
    ------------------------------

    Attachment(s)

    ArrayMgrv1.zip ( 38 KB )