What are your thoughts on HUS VM when you compare VSP vs HUS VM?
They are different levels of product but with the same software functionality. In summary, you can treat the HUS VM as a mini VSP.
How about this: does a half-populated VSP equal a fully loaded HUS VM?
In terms of load, processing, latency, replication... do you think HUS VM can compete with VSP? Does it perform almost equally, or less?
In general terms -
HUS VM - positioned as entry-level enterprise storage
VSP - enterprise storage
The following technical overview should be helpful.
HUS VM is a single product model offered in 4 main configurations:
1. The new storage virtualization controller alone, diskless, for managing existing external storage
2. The new controller with internal storage, with a choice of 2.5” or 3.5” hard drives and solid state drives
3. The new controller with a file option in addition to internal storage, as a unified architecture
4. The new controller in a high-density all-flash configuration, to be released in the future
This also combines the microcode and controller architecture of the VSP with the modular form factor and disk storage of the HUS 100 family.
From the VSP, the microcode offers the same enterprise functionality it supplies for open systems (no mainframe support).
The controller, although smaller in package, follows the shared-resource design of the VSP, with clustered processor/cache modules that share a global cache.
As such, it is a new dual node, Hierarchical Star Network shared resource controller.
It is not a stripped-down VSP, nor an HUS 150 with new firmware.
Like the HUS 100 family, the controller is a modular form factor designed to fit in any standard 19” cabinet using Hitachi-provided mounting rails.
The disk drives and corresponding trays are also shared, although their firmware is unique; thus they are not interchangeable between the HUS 100 family and this new design.
An HUS VM system will have one rack with controller and drive trays plus up to two optional drive tray expansion racks.
The controller chassis contains the HiStar based controller (block module) and file modules.
There are two types of Disk Boxes (DBs): a disk tray (DKA, 24 x 2.5” drive slots) and a dense disk drawer (DENSE, 48 x 3.5” drive slots). The 2U 12 x LFF HDD tray will be offered soon after GA.
Each rack can hold up to 16 SFF disk trays, or 6 LFF dense drawers (5 in the controller rack). The use of the heavy dense trays is limited to the 26U level unless special rack installation measures are taken to stabilize them.
The tray and dense drawer are carried over from the HUS 100 modular line, but they operate with different internal firmware.
Up to 1152 drives are supported, including up to 128 SSDs.
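As a quick sanity check, the tray and rack figures above line up with that maximum. The short sketch below is illustrative only (the constant names are mine, not official terms), and exact supported combinations should be confirmed in the configuration guide:

```python
# Back-of-envelope check of the 1,152-drive maximum, using the SFF tray and
# rack figures quoted above (illustrative only, not an official sizing tool).

SFF_TRAYS_PER_RACK = 16    # up to 16 x 2.5" trays per rack
MAX_RACKS = 3              # controller rack + up to two expansion racks
SLOTS_PER_SFF_TRAY = 24    # 24 x 2.5" drive slots per tray

max_sff_drives = SFF_TRAYS_PER_RACK * MAX_RACKS * SLOTS_PER_SFF_TRAY
print(max_sff_drives)      # 16 * 3 * 24 = 1152, matching the stated maximum
```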
As these are standard racks, one may also install other equipment such as HNAS or server blades.
The Controller Module (CTL) is a logical grouping of redundant processing components (modules) as follows:
There are two and only two CTLs per subsystem (CTL0 and CTL1)
Each controller (CTL) contains:
One Main Module (1 x ASIC, up to 8 x Cache DIMMs, Flash memory module for Cache backup, Battery backup for Flash)
One Microprocessor Module (1 x 8 core CPU and CPU cache)
Diskless Configuration: max 6 x Host I/O Interface Modules (for 8Gbps FC, up to 24 x FE ports per CTL)
Internal Disk Configuration: max 4 x Host I/O Interface Modules and max 2 x Backend I/O Interface Modules per CTL. 8Gbps FC example: up to 16 x host ports per CTL and up to 4 x SAS 6Gbps backend ports per CTL (up to 16 SAS WL per CTL); see the port-count sketch below.
One Switching Power Supply and Fan
Service Processor (SVP): only a single SVP is supported on HUS VM
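To make the module and port arithmetic above concrete, here is a rough sketch of how the per-CTL numbers add up. It is an illustrative calculation only (variable names are mine); actual supported combinations should be checked against the configuration guide:

```python
# Rough port-count arithmetic for the two main configurations, derived from
# the module counts above (illustrative only; not an official config rule).

FC_PORTS_PER_FE_MODULE = 4        # each host I/O (FE) module: 4 x 8Gbps FC ports
SAS_WIDE_PORTS_PER_BE_MODULE = 2  # each backend (BE) module: 2 SAS wide ports
SAS_LINKS_PER_WIDE_PORT = 4       # 4 x 6Gbps full-duplex links per wide port
CTLS = 2                          # always exactly two controllers (CTL0, CTL1)

# Diskless (virtualization-only) configuration: up to 6 FE modules per CTL
diskless_fc_per_ctl = 6 * FC_PORTS_PER_FE_MODULE             # 24 FC ports per CTL
diskless_fc_total = diskless_fc_per_ctl * CTLS                # 48 FC ports per system

# Internal disk configuration: up to 4 FE + 2 BE modules per CTL
internal_fc_per_ctl = 4 * FC_PORTS_PER_FE_MODULE              # 16 host ports per CTL
sas_wide_ports_per_ctl = 2 * SAS_WIDE_PORTS_PER_BE_MODULE     # 4 SAS wide ports per CTL
sas_links_per_ctl = sas_wide_ports_per_ctl * SAS_LINKS_PER_WIDE_PORT  # 16 x 6Gbps SAS links per CTL

print(diskless_fc_total, internal_fc_per_ctl, sas_links_per_ctl)  # 48 16 16
```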
Hitachi Unified Storage VM is based on a new block storage system powered by its storage virtualization controller.
This virtual storage system consists of a single controller chassis, available with no drives, containing the control logic, processors, memory and interfaces to the drive chassis and host servers.
Its dual-node, shared-resource architecture creates a redundant configuration in which the storage system can continue operation should a component failure occur.
Main components can be added, removed and replaced while the storage system is in operation, without shutting it down.
The microcode can also be upgraded without shutting down the storage system.
A service processor mounted in the controller chassis monitors the running condition of the storage system. Connecting the service processor to a Hitachi service center enables remote maintenance.
The Front-end connectivity modules provide the fibre channel ports for connections to hosts, external storage, or remote copy links (Hitachi Universal Replicator, TrueCopy Sync).
These modules are installed into module slots on the rear of the MAIN blade assemblies in the controller.
There is one type of module, which has four 8Gbps FC ports that can auto‐negotiate down to 4Gbps or 2Gbps rates depending on what the host port requires.
This is the same module as used on the HUS 100 family of midrange arrays, except that the port SFP is exchangeable to allow for either short wave or long wave operation.
For a diskless system dedicated to virtualization use, the four BE modules can be replaced by four more FE modules to provide a total of 48 x 8Gbps FC ports.
The Back‐end drive controller modules provide the SAS links used to attach to the disks in a set of disk trays.
These modules are installed into specific slots on the rear of the MAIN blade assemblies in the controller.
There is one type of module, and it includes two SAS Wide ports on the rear panel, to which a pair of SAS Wide cables running to the first row of four disk trays (DB-00 to DB-03) are connected.
There are four 6Gbps full duplex SAS links per port, with two ports per module. This is the same module and cable as used on the HUS 100 family of midrange arrays.
The battery for data saving is installed on each MAIN blade board.
If a power failure lasts more than 20 milliseconds, the storage system uses battery power to back up the cache memory data and the storage system configuration data onto the cache flash memory.
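That behaviour can be pictured roughly as follows. This is a purely conceptual sketch, not the actual firmware; every class and function name here is hypothetical:

```python
# Purely conceptual sketch of the cache-protection behaviour described above.
# This is NOT the actual firmware; every name here is hypothetical.

POWER_LOSS_THRESHOLD_MS = 20          # outages shorter than this are ridden through


class CacheFlashMemory:
    """Stand-in for the cache flash module on a MAIN blade."""
    def __init__(self):
        self.saved = {}

    def write(self, label, data):
        self.saved[label] = data


def on_power_loss(outage_ms, cache_data, config_data, flash):
    """If the outage exceeds 20 ms, battery power carries the blade long
    enough to destage cache contents and configuration data to flash."""
    if outage_ms <= POWER_LOSS_THRESHOLD_MS:
        return "ride-through"                     # brief dip, nothing to do
    flash.write("cache", cache_data)              # runs on battery power
    flash.write("config", config_data)
    return "destaged-to-flash"


flash = CacheFlashMemory()
print(on_power_loss(5, b"dirty cache", b"config", flash))    # ride-through
print(on_power_loss(150, b"dirty cache", b"config", flash))  # destaged-to-flash
```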
Hitachi Unified Storage VM file module features a hybrid core architecture.
This architecture employs the best properties of FPGA-based design to optimize data movement and supports high-performance, multicore processors for efficient data management functions.
Both classes of activity work at full speed without an impact on each other. They can handle a number of simultaneous workloads, such as serving email to thousands of users and hosting large-scale online transaction processing (OLTP) applications, while maintaining high performance.
They also provide high IOPS performance and utilize built-in 10 gigabit Ethernet (GbE) and 1GbE for high-throughput NAS and iSCSI networking connectivity.
Up to 4 nodes in a single cluster meet demands for scalable storage with greater access, capacity and performance.
Storage can be added at any time to meet new application or business needs, or consolidate disparate storage, all with a single point of management, and no downtime.
These systems offer a total usable capacity of 4PB to 8PB under a single namespace, all of which is easily managed from a central management interface.
Inside each 24‐disk tray there is a pair of SXP “expander” switches.
These may be viewed as two 32 port SAS switches attached to the two SAS wide cables coming from the controllers, or from the next tray in the stack in the direction of the controllers.
The two expanders cross connect the dual‐ported disks to all eight of the active 6Gbps SAS links that pass through each tray.
Any of the 2.5” SAS or SSD drives may be intermixed in these trays. The drives are arranged as one row of 24 slots for drive canisters.
All drives within each tray are dual attached, with one full duplex drive port going to each switch.
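A toy model of that dual-path attachment may help picture why losing a single expander or cable path does not take any drives offline. This is illustrative only; the names are hypothetical:

```python
# Toy model of the dual-path drive attachment (illustrative only): every
# dual-ported drive in a 24-slot tray has one port on each of the tray's two
# SAS expanders, so all drives stay reachable if one expander/path is lost.

SLOTS_PER_TRAY = 24

# drive slot -> expanders it is wired to (always both, one port to each)
drive_ports = {slot: {"expander-0", "expander-1"} for slot in range(SLOTS_PER_TRAY)}

def reachable(healthy_expanders):
    """Drives still reachable through at least one healthy expander."""
    return [slot for slot, ports in drive_ports.items() if ports & healthy_expanders]

print(len(reachable({"expander-0", "expander-1"})))  # 24: both paths healthy
print(len(reachable({"expander-1"})))                # 24: expander-0 lost, all drives still reachable
```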
HiStar based controller:
The controller design is a “compacted logical” implementation of the HiStar design as used in the VSP, but it uses completely different hardware with some faster, newer generation components.
So in general it could be seen as a smaller VSP that provides a lot of I/O power as well as most of the features of a VSP.
The MAIN blades have plug‐in modules that provide the FC connectivity to hosts (or to external storage or HUR links) as well as the SAS connectivity to the disks. They also contain the system cache and a local flash memory for backing up the contents of cache.
There is a custom HM ASIC at the heart of each MAIN blade, this being an integration of many separate elements within the VSP design.
The connectivity within each MAIN blade, as well as the external connections between the two MAIN blades and to each MP blade, is provided by PCI Express 3.0 links. No user data passes between the MAIN and MP blades; these links are used for command transfers and shared (control) memory writes.
There are small Front-end connectivity modules (FE modules) and Back-end drive controller modules (BE modules), both parts shared with the HUS 100 family, that plug into sockets on the MAIN blades.
The MP Blades each contain a 2.1GHz Intel Xeon 8‐core processor, Local Memory (8GB of RAM), and a pair of flash modules for holding the system software.
HDLM: the BOS bundle includes VMware licenses for HDLM as well as HDLM Advanced. There is no limit on the number of HDLM licenses, which is different from the VSP BOS bundle, which only includes 50 HDLM licenses for non-VMware hosts.
We've seen a lot of interest in the HUS VM as an enterprise-class storage platform. While it won't scale the same as a VSP (think number of drives, ports and processors), it has very comparable performance.
Also, the lower frame-based licensing cap at 80TB makes it attractive for smaller shops.
Do we see any performance issues when it crosses that 80TB line?
This is such a broad question that no one would be able to help you on a forum. Such questions need some assessment. Your HDS pre-sales rep or partner will be able to easily help you spec the configuration if you provide him or her some basic information and a list of requirements.
I encourage you to go through the SPC-1 report that was published for HUS-VM to be able to understand if HUS-VM fits your performance needs.
For a rough idea, if you insist: I have one customer with 100 TB of internal capacity and 84 TB of virtualized capacity, and the MPs are at around 30% utilization, cache pending is less than 20%, with a max of 10,000 IOPS, so the system can handle much more.
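If you want a very naive back-of-envelope from those numbers, assuming IOPS scale roughly linearly with MP utilization (which real workloads will not do) and taking an arbitrary 70% MP ceiling as a comfort limit:

```python
# Very naive back-of-envelope from the figures above. Assumes IOPS scale
# roughly linearly with MP utilization (real workloads will not) and uses an
# arbitrary 70% MP ceiling. Use the SPC-1 report and a proper HDS sizing
# exercise for any real decision.

observed_iops = 10_000
observed_mp_utilization = 0.30   # MPs ~30% busy at that load
mp_ceiling = 0.70                # arbitrary comfort limit, my assumption

rough_iops_at_ceiling = observed_iops * (mp_ceiling / observed_mp_utilization)
print(round(rough_iops_at_ceiling))  # ~23333 IOPS before MPs reach ~70%
```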
I would also recommend you read the Architecture guides to judge which product meets your scalability requirements.
HUS VM is not VSP. If your company goes over 100 TB and you have lots of hosts to connect, you will start feeling the need for the additional processors that VSP can offer. Licensing is a huge benefit on HUS VM, but ultimately, if you need large capacity (over 100 TB/150 TB), you are safer with a VSP, or else you will need a second HUS VM.
I really like the HUS VM. Lots of clever innovation, and good performance. It depends on what your needs are: how far you need to scale within one array, how much copy, etc. That is the information needed to see which array better meets your needs.
The HUS VM is a great addition to any enterprise storage infrastructure. I am replacing one huge USP V with two HUS VMs. Lower TCO (floor space, FTE management, feature-rich).
I would say one HUS VM is like a half-populated VSP, so you have to think of it that way when laying out the applications you will run on it.
The performance I have seen with it has been so good that my 4Gbps core has become the bottleneck (I need some '14 budget DCX upgrades; luckily I have an 8Gbps edge fabric to use temporarily).
Your cost per TB will be way lower and your feature set will be way higher.