
Configuration of VSP One Block High End with Oracle Solaris 11.4 (SPARC)

By Anup Paul

  

Introduction: The Hitachi VSP One Block High End (BHE) is the flagship of Hitachi’s enterprise storage portfolio, engineered to meet the demands of mission-critical workloads and AI-driven environments. It is an all-flash NVMe storage platform that provides high-performance, highly reliable, and scalable access for mission-critical enterprise workloads, with intelligent management through Hitachi Ops Center and VSP 360 for operational efficiency.

Oracle Solaris 11.4 on SPARC servers provides a proven enterprise operating environment with integrated virtualization and robust security. Native multipathing (MPxIO), fault management, and workload isolation further enhance reliability and performance.

Together, VSP One BHE and Solaris 11.4 deliver a powerful foundation for enterprises seeking high availability, secure data management, and optimized performance. This document outlines the steps required to configure VSP One BHE with Solaris 11.4 (SPARC).

Environment:

VSP One BHE is configured with Oracle Solaris 11.4 (SPARC) using the following components:

·       VSP One BHE

·       A management host (Red Hat Enterprise Linux release 8.4) for Command Control Interface (CCI), version 01-87-00/06

·       Server SPARC T8-1

·       HBA Oracle 7335902 (Oracle Storage Dual-Port 32 Gb Fibre Channel)

·       Switch Brocade G710 (64G ports) (Firmware v9.2.2a)

·       Switch Cisco MDS 9148V (64G ports) (Firmware v9.4(3b))

 

 

Diagram Layout:

The following image shows the basic setup diagram of VSP One BHE with Oracle Solaris 11.4 (SPARC).

Figure 1: Block diagram of VSP One BHE with Oracle Solaris 11.4 (SPARC) Server

Configuration steps:

1.      Install Solaris 11.4 (SPARC) on the SPARC T8-1 server through the Oracle ILOM web interface, or, if an Automated Installer (AI) service for Solaris 11.4 (SPARC) is configured on another server, install using AI.

 

a)      Using the ILOM web interface: Type the management port IP address in a browser and log in. Then mount the ISO image and install the OS. For details, see the following document: https://docs.oracle.com/cd/E79179_01/html/E80507/z400237a1422399.html#scrolltoc

 

b)      Using the Automated Installer: If an AI service is not already configured on another server, you can set one up by following the Oracle document https://docs.oracle.com/cd/E26502_01/html/E28980/gkfaa.html.
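For reference, creating the AI service from the Solaris 11.4 (SPARC) AI ISO looks roughly like this (a sketch; the ISO path is a placeholder, while the service name and image directory match those used in this document):

installadm create-service -n S11UP4SPARC -s /export/isos/sol-11_4-ai-sparc.iso -d /export/auto_install/solaris11_4-sparc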

 

Then you can install Solaris 11.4 (SPARC) with the AI service using the following procedure:

 

The following output shows the Automated Installer service configured on another SPARC server:

 

root@T2000-3:~# installadm list

Service Name  Status Arch  Type Alias Aliases Clients Profiles Manifests

------------  ------ ----  ---- ----- ------- ------- -------- ---------

S11UP4SPARC   on     sparc iso  no    0       7       1        2

default-i386  on     i386  iso  yes   0       0       0        1

default-sparc on     sparc iso  yes   0       0       0        1

s11-sparc     on     sparc iso  no    0       1       0        1

root@T2000-3:~#

 

Then register the MAC address of the client server (where you want to install the Solaris OS) with the S11UP4SPARC service as follows:

 

root@T2000-3:~# installadm create-client -e XX:XX:XX:XX:35:CA -n S11UP4SPARC

Created Client: ' XX:XX:XX:XX:35:CA'

root@T2000-3:~#

 

[You can find the MAC address with the “banner” OBP command on the SPARC client server.]

 

Verify after successful registration:

root@T2000-3:~# installadm list -c -n S11UP4SPARC

Service Name Client Address    Arch  Secure Custom Args Custom Grub

------------ --------------    ----  ------ ----------- -----------

S11UP4SPARC  00:10:E0:3D:F7:3E sparc no     no          no

             XX:XX:XX:XX:35:CA sparc no     no          no

root@T2000-3:~#

 

Set the OBP environment variable on your SPARC client server as follows:

 

setenv network-boot-arguments host-ip=<client server IP>,router-ip=<client server default gateway>,subnet-mask=<client server subnet>,file=http://<AI server IP>:5555/cgi-bin/wanboot-cgi

 

[http://<AI server IP>:5555/cgi-bin/wanboot-cgi is the URL used by the Solaris automated installation server to provide a boot file to a client system. The wanboot-cgi program at this address is used by the SPARC client to start a network-based installation from the AI server. A web server runs on port 5555, and configuring the client’s network boot parameters to point to this URL ties the two together.]

 

A sample is shown here for reference:

{0} ok setenv network-boot-arguments host-ip=172.XX.XX.XX,router-ip=172.XX.XX.X,subnet-mask=255.XXX.XXX.X,file=http://172.XX.XX.XXX:5555/cgi-bin/wanboot-cgi

 

After changing OBP variables, perform reset-all to ensure they take effect:

 

{0} ok reset-all

NOTICE: Entering OpenBoot.

NOTICE: Fetching Guest MD.

NOTICE: Starting slave cpus.

NOTICE: Initializing LDCs.

NOTICE: Probing PCI devices.

NOTICE: Finished PCI probing.

NOTICE: Probing USB devices.

NOTICE: Finished USB probing.

 

SPARC T8-1, No Keyboard

Copyright (c) 1998, 2019, Oracle and/or its affiliates. All rights reserved.

OpenBoot 4.43.3, 64.0000 GB memory installed, Serial #XXXXXXXXX.

Ethernet address X:XX:XX:XX:35:ca, Host ID: XXXXXXXX.

 

 

 

{0} ok

 

Start the OS installation as follows and complete the process:

 

{0} ok boot net

Boot device: /pci@300/pci@1/network@0  File and args:

1G link up

<time unavailable> wanboot info: WAN boot messages->console

<time unavailable> wanboot info: configuring /pci@300/pci@1/network@0

 

1G link up

<time unavailable> wanboot info: http://XXX.XX.XX.XXX:5555/cgi-bin/wanboot-cgi

<time unavailable> wanboot progress: wanbootfs: Read 368 of 368 kB (100%)

<time unavailable> wanboot info: wanbootfs: Download complete

 wanboot progress: miniroot: Read 408100 of 408100 kB (100%)

 wanboot info: miniroot: Download complete

SunOS Release 5.11 Version 11.4.0.15.0 64-bit

Copyright (c) 1983, 2018, Oracle and/or its affiliates. All rights reserved.

Remounting root read/write

Probing for device nodes ...

NOTICE: emlxs0: Physical link is functional.

NOTICE: emlxs1: Physical link is functional.

Preparing network image for use.

Downloading http://<AI Server host name>:5555/export/auto_install/solaris11_4-sparc/solaris.zlib

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

100  183M  100  183M    0     0  62.6M      0  0:00:02  0:00:02 --:--:-- 62.6M

Downloading http://<AI Server host name>:5555/export/auto_install/solaris11_4-sparc/solarismisc.zlib

..

..

Done mounting image

Configuring devices.

Hostname: solaris

Welcome to the Oracle Solaris installation menu

 

        1  Install Oracle Solaris

        2  Install Additional Drivers

        3  Shell

        4  Terminal type (currently xterm)

        5  Reboot

 

[Select 1 Install Oracle Solaris, provide all the required details, and complete the installation.]

 

2.      After the OS installation, update the kernel to the latest available OS patch level. Here we used Patch 38101862: ORACLE SOLARIS 11.4.83.195.1 IPS Repository (SPARC/X86 (64-bit)) and Patch 28488987: ORACLE SOLARIS 11.4 IPS REPOSITORY (SPARC/X86 (64-BIT)) for the kernel update.

 

Download both patches from My Oracle Support for the Oracle Solaris on SPARC (64-bit) platform and copy them to the AI server used previously. Then unzip them and export the extracted file systems over NFS.
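On the AI server, the extraction and NFS export can look roughly like this (a sketch; the zip file names and /export paths are placeholders):

unzip <Patch 28488987 zip file> -d /export/repo1

unzip <Patch 38101862 zip file> -d /export/repo2

share -F nfs -o ro /export/repo1

share -F nfs -o ro /export/repo2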

 

3.      Mount both NFS-shared file systems on the client server as follows:

 

Create two directories, /repo1 and /repo2, and mount the NFS-shared file systems:

root@T8-2:~# mount -F nfs <AI server IP>:/<Patch 28488987 extract location> /repo1

root@T8-2:~# mount -F nfs <AI server IP>:/<Patch 38101862 extract location> /repo2

root@T8-2:~# pkg set-publisher -g file:///repo1/ solaris

root@T8-2:~# pkg set-publisher -g file:///repo2/ solaris

root@T8-2:~# pkg publisher

PUBLISHER                   TYPE     STATUS P LOCATION

solaris                     origin   online F file:///repo1/

solaris                     origin   online F file:///repo2/

root@T8-2:~#
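Optionally, confirm that each repository is readable before updating (pkgrepo is part of the standard IPS tooling):

root@T8-2:~# pkgrepo info -s file:///repo1/

root@T8-2:~# pkgrepo info -s file:///repo2/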

 

Check the current kernel level as follows:

root@T8-2:~# uname -a

SunOS T8-2 5.11 11.4.0.15.0 sun4v sparc sun4v
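You can also cross-check the installed SRU through the 'entire' incorporation:

root@T8-2:~# pkg list entire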

 

Perform the kernel update as follows:

root@T8-2:~# pkg update --accept

------------------------------------------------------------

Package: pkg://solaris/release/notices@11.4-11.4.83.0.1.195.1:20250626T054833Z

License: lic_OTN

 

 

            Packages to remove:  94

           Packages to install: 152

            Packages to update: 357

           Mediators to change:   6

       Create boot environment: Yes

Create backup boot environment:  No

 

DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED

Completed                            603/603   22417/22417  428.2/428.2      --

 

PHASE                                          ITEMS

Removing old actions                     21290/21290

Installing new actions                   24920/24920

Updating modified actions                13742/13742

Updating package state database                 Done

Updating package cache                       451/451

Updating image state                            Done

Creating fast lookup database                   Done

Updating package cache                           1/1

 

A clone of solaris exists and has been updated and activated.

On the next boot the Boot Environment be://rpool/solaris-1 will be

mounted on '/'.  Reboot when ready to switch to this updated BE.

 

Updating package cache                           1/1

root@T8-2:~#

 

Then reboot the client server and check the current kernel level:

 

root@T8-2:~# uname -a

SunOS T8-2 5.11 11.4.83.195.1 sun4v sparc sun4v non-virtualized
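Optionally, confirm that the new boot environment (solaris-1, from the update output above) is now active:

root@T8-2:~# beadm list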

 

4.      Shut down the client server, install the required Oracle 7335902 adapter, cable the connections as per the diagram, power the server on, and check the adapter status as follows:

 

root@T8-2:~# fcinfo hba-port

HBA Port WWN: 100000109b21c076

        Port Mode: Initiator

        Port ID: c50380

        OS Device Name: /dev/cfg/c11

        Manufacturer: Emulex

        Model: 7115461

        Firmware Version: 7115461 12.8.542.44

        FCode/BIOS Version: Boot:12.8.542.44 Fcode:4.10a09

        Serial Number: 4925382+1733000013

        Driver Name: emlxs

        Driver Version: 3.3.3.1 (2025.04.11.12.00)

        Type: N-port

        State: online

        Supported Speeds: 8Gb 16Gb 32Gb

        Current Speed: 32Gb

        Node WWN: 200000109b21c076

        NPIV disabled

HBA Port WWN: 100000109b21c075

        Port Mode: Initiator

        Port ID: 10200

        OS Device Name: /dev/cfg/c12

        Manufacturer: Emulex

        Model: 7115461

        Firmware Version: 7115461 12.8.542.44

        FCode/BIOS Version: Boot:12.8.542.44 Fcode:4.10a09

        Serial Number: 4925382+1733000013

        Driver Name: emlxs

        Driver Version: 3.3.3.1 (2025.04.11.12.00)

        Type: N-port

        State: online

        Supported Speeds: 8Gb 16Gb 32Gb

        Current Speed: 32Gb

        Node WWN: 200000109b21c075

        NPIV disabled

 

You can also find the adapter information using the “prtdiag -v” command:

 

======================================== IO Devices =======================================

Slot +            Bus   Name +                            Model      Max Speed  Cur Speed

Status            Type  Path                                         /Width     /Width

-------------------------------------------------------------------------------------------

/SYS/MB/PCIE6     PCIE  SUNW,emlxs-pciex10df,e300         7115461    8.0GT/x8   8.0GT/x8

                        /pci@301/pci@1/SUNW,emlxs@0

/SYS/MB/PCIE6     PCIE  SUNW,emlxs-pciex10df,e300         7115461    8.0GT/x8   8.0GT/x8

                        /pci@301/pci@1/SUNW,emlxs@0,1

 

5.      Create the necessary zoning on the switch between the adapter and storage ports (a sample zoning sketch follows the dump_map outputs below). You can then see the status of both adapter ports using the following Solaris command:

root@T8-2:~# luxadm -e port

/devices/pci@301/pci@1/SUNW,emlxs@0/fp@0,0:devctl                  CONNECTED

/devices/pci@301/pci@1/SUNW,emlxs@0,1/fp@0,0:devctl                CONNECTED

 

You can check the zoning with the following command:

root@T8-2:~# luxadm -e dump_map /devices/pci@301/pci@1/SUNW,emlxs@0/fp@0,0:devctl

Pos  Port_ID Hard_Addr Port WWN         Node WWN         Type

0    10600   0         50060e8034118040 50060e8034118040 0x0  (Disk device)

1    10200   0         100000109b21c075 200000109b21c075 0x1f (Unknown Type,Host Bus Adapter)

root@T8-2:~# luxadm -e dump_map /devices/pci@301/pci@1/SUNW,emlxs@0,1/fp@0,0:devctl

Pos  Port_ID Hard_Addr Port WWN         Node WWN         Type

0    c503a1  0         50060e8034118050 50060e8034118050 0x0  (Disk device)

1    c50380  0         100000109b21c076 200000109b21c076 0x1f (Unknown Type,Host Bus Adapter)
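For reference, a minimal zoning sketch in Brocade FOS syntax, pairing each HBA port with the storage port it sees in the dump_map output above (zone and configuration names are illustrative; on the Cisco MDS the equivalent is done with zone and zoneset commands in configuration mode):

zonecreate "t8_c075_vsp_cl5a", "10:00:00:10:9b:21:c0:75; 50:06:0e:80:34:11:80:40"

zonecreate "t8_c076_vsp_cl6a", "10:00:00:10:9b:21:c0:76; 50:06:0e:80:34:11:80:50"

cfgcreate "sol_vsp_cfg", "t8_c075_vsp_cl5a; t8_c076_vsp_cl6a"

cfgsave

cfgenable "sol_vsp_cfg"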

 

6.      Configure the VSP One Block High End storage using CCI by creating a HORCM configuration file. This is a sample HORCM file created to configure the Hitachi VSP One BHE:

 

[root@cciserver ~]# cat /etc/horcm7010.conf

HORCM_MON

#ip address    service       poll(10ms)  timeout(10ms)

HXX.XX.XX.XX      horcm7010     1000        3000

 

HORCM_CMD

\\.\IPCMD-XXX.XX.XX.XX-31001 \\.\IPCMD-XXX.XX.XX.XX-31002

[root@cciserver ~]#

 

For the detailed HORCM configuration procedure, see the following document: https://docs.hitachivantara.com/v/u/en-us/command-control-interface/01-78-03/mk-90rd7009
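Before running raidcom commands, start the HORCM instance and log in to the storage system (a minimal sketch; the username and password are placeholders):

[root@cciserver ~]# horcmstart.sh 7010

[root@cciserver ~]# raidcom -login <username> <password> -IH7010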

 

Create a pool on the storage system using raidcom commands:

 

a)      Check the available parity groups:

raidcom get parity_grp -fx -IH7010

 

b)      Create a basic LDEV and format it:

raidcom add ldev -ldev_id 00:32 -parity_grp_id 1-3 -capacity 200G -IH7010

raidcom initialize ldev -ldev_id 00:32 -operation fmt -IH7010

 

c)      Create the pool:

raidcom add dp_pool -pool_name VSP_Pool -ldev_id 00:32 -IH7010

 

d)      Create volumes from the created pool:

raidcom add ldev -pool 1 -ldev_id 00:52 -capacity 10G -capacity_saving deduplication_compression -drs -request_id auto -IH7010

LDEV ID 00:53 is created the same way.
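You can verify the newly created volumes with an optional check using the same raidcom syntax:

raidcom get ldev -ldev_id 00:52 -fx -IH7010

raidcom get ldev -ldev_id 00:53 -fx -IH7010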

 

e)      Configure a host group for the Solaris OS:

raidcom get host_grp -port CL5-A -IH7010

raidcom add host_grp -port CL5-A-1 -host_grp_name VSP-SOL -IH7010

raidcom modify host_grp -port CL5-A-1 VSP-SOL -host_mode 9 -host_mode_opt 7 -IH7010

[Here, 09 is used as the host mode and 7 as the host mode option.]

For details, see the following document:

https://docs.hitachivantara.com/r/en-us/svos/9.6.0/mk-98rd9015/managing-logical-volumes/configuring-hosts/host-modes-for-host-groups

 

raidcom add hba_wwn -port CL5-A-1 -hba_wwn 100000109b21c075 -IH7010

Now verify that the Solaris host group was created properly:

[root@ilab-cci ~]# raidcom get host_grp -port CL5-A -IH7010

PORT   GID  GROUP_NAME                       Serial# HMD          HMO_BITs

CL5-A    0  5A-G00                            970016 LINUX/IRIX

CL5-A    1  VSP-SOL                           970016 SOLARIS      7

[root@ilab-cci ~]#

raidcom add lun -port CL5-A-1 -lun_id 0 -ldev_id 00:52 -IH7010

raidcom add lun -port CL5-A-1 -lun_id 1 -ldev_id 00:53 -IH7010

 

Now verify that the LUNs are mapped properly:

[root@ilab-cci ~]# raidcom get lun -port CL5-A-1 -fx -IH7010

PORT   GID  HMD            LUN  NUM     LDEV  CM    Serial#  HMO_BITs

CL5-A    1  SOLARIS          0    1       52   -     970016  7

CL5-A    1  SOLARIS          1    1       53   -     970016  7

[root@ilab-cci ~]#

 

Map the LUNs in the same way to the other host group created on port CL6-A.
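A sketch of the CL6-A side, assuming the same host group name and the second HBA port WWN from the fcinfo output in step 4:

raidcom add host_grp -port CL6-A-1 -host_grp_name VSP-SOL -IH7010

raidcom modify host_grp -port CL6-A-1 VSP-SOL -host_mode 9 -host_mode_opt 7 -IH7010

raidcom add hba_wwn -port CL6-A-1 -hba_wwn 100000109b21c076 -IH7010

raidcom add lun -port CL6-A-1 -lun_id 0 -ldev_id 00:52 -IH7010

raidcom add lun -port CL6-A-1 -lun_id 1 -ldev_id 00:53 -IH7010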

 

f)      Scan the storage LUNs in the Solaris OS:

Scan the newly added disks using “devfsadm -c disk”. If you are still not able to see the disks, identify the controller ports in the OS using the “cfgadm -al” command, then configure the controller again as follows:

 

root@T8-2:~# cfgadm -al

Ap_Id                          Type         Receptacle   Occupant     Condition

c12                            fc-fabric    connected    configured   unknown

c12::50060e8034118040          disk         connected    configured   unknown

[50060e8034118040 is the storage port WWN.]

root@T8-2:~# cfgadm -c configure c12

Now check the storage LUN multipathing status with the following command:

root@T8-2:~# mpathadm list lu

        /dev/rdsk/c0t60060E80341180000091118000000053d0s2

                Total Path Count: 2

                Operational Path Count: 2

        /dev/rdsk/c0t60060E80341180000091118000000052d0s2

                Total Path Count: 2

                Operational Path Count: 2

 

Oracle Solaris I/O multipathing is enabled by default on SPARC-based and x86-based systems. If it is not enabled, you can run “stmsboot -e” to enable it.
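For per-path details of a specific LUN, you can additionally run mpathadm show lu against a device name from the listing above:

root@T8-2:~# mpathadm show lu /dev/rdsk/c0t60060E80341180000091118000000053d0s2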

 

g)      Format both disks and create a file system on each, as follows:

root@T8-2:~# format

Searching for disks...done

 

c0t60060E80341180000091118000000053d0: configured with capacity of 10.00GB

c0t60060E80341180000091118000000052d0: configured with capacity of 10.00GB

 

AVAILABLE DISK SELECTIONS:

       2. c0t60060E80341180000091118000000053d0 <HITACHI-OPEN-V      -SUN-A001-10.00GB>

          /scsi_vhci/disk@g60060e80341180000091118000000053

       3. c0t60060E80341180000091118000000052d0 <HITACHI-OPEN-V      -SUN-A001-10.00GB>

          /scsi_vhci/disk@g60060e80341180000091118000000052

Specify disk (enter its number): 2

selecting c0t60060E80341180000091118000000053d0 <HITACHI-OPEN-V      -SUN-A001 cyl 2728 alt 2 hd 15 sec 512>

c0t60060E80341180000091118000000053d0: configured with capacity of 9.99GB

[disk formatted]

Disk not labeled.  Label it now? y.

 

Check the current partition table:

partition> p

Current partition table (default):

Total disk cylinders available: 2728 + 2 (reserved cylinders)

 

Part      Tag    Flag     Cylinders        Size            Blocks

  0       root    wm       0 -   34      131.25MB    (35/0/0)     268800

  1       swap    wu      35 -   69      131.25MB    (35/0/0)     268800

  2     backup    wu       0 - 2727        9.99GB    (2728/0/0) 20951040

  3 unassigned    wm       0               0         (0/0/0)           0

….

  6        usr    wm      70 - 2727        9.73GB    (2658/0/0) 20413440

  7 unassigned    wm       0               0         (0/0/0)           0

 

partition>

 

Partition 2 always represents the entire disk and should not be changed. Create a file system on partition 6:

newfs /dev/rdsk/c0t60060E80341180000091118000000053d0s6

 

h)      Create a mount point (for example, mkdir /fs1), then mount the file system and start I/O:

mount /dev/dsk/c0t60060E80341180000091118000000053d0s6 /fs1

 

root@T8-2:~# df -h /fs1

Filesystem             Size   Used  Available Capacity  Mounted on

/dev/dsk/c0t60060E80341180000091118000000053d0s6

                      9.59G  9.75M      9.48G     1%    /fs1

root@T8-2:~#
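To make the mount persistent across reboots, you can optionally add an entry to /etc/vfstab (a sketch; verify the fsck pass and mount options for your environment):

/dev/dsk/c0t60060E80341180000091118000000053d0s6  /dev/rdsk/c0t60060E80341180000091118000000053d0s6  /fs1  ufs  2  yes  -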

 

Conclusion: The integration of Hitachi VSP One Block High End (BHE) with Oracle Solaris 11.4 (SPARC) delivers a resilient, high‑performance storage foundation that aligns with enterprise standards for availability, scalability, and auditability. By standardizing the configuration steps across array provisioning, SAN zoning, and Solaris MPxIO enablement, this framework ensures consistent outcomes, seamless failover, and transparent audit records.


#VSPOneBlockHighEnd