Darin Meyer

Combatting HCP Architecture FUD

Blog post created by Darin Meyer on Jul 11, 2018

It was recently brought to my attention that competitors are referencing our whitepaper, "Hitachi Content Platform Architecture Fundamentals," and spreading FUD about the resiliency of HCP. Below are two statements competitors are attempting to leverage against us, with a rebuttal for each:


  • Page 21, Flash Optimized Option: ‘G series… configured in a RAID-1 mirror’, while for S-nodes we recommend erasure coding. The competitor claims that because HCP is not 100% erasure coded, a G-node failure puts data at risk.
    • This claim conflates two unrelated topics: the Flash Optimized Option and user storage data protection. The Flash Optimized Option refers to a pair of mirrored SSDs used to accelerate database performance of the content-addressable hash index. No user data resides on the mirrored SSDs, and there is no recommendation or assertion that RAID-1 be used to protect user data. A minimal HCP system consists of four G-nodes, each with a RAID-6 group (4D+2P) for user storage. The cluster operates at DPL2, ensuring two copies of all data and metadata exist in the cluster on two separate nodes. The loss of a single G-node will not impact the data or access to the data.
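The DPL2 behavior described above can be sketched in a few lines. This is a simplified model, not HCP code: node names and the placement scheme are hypothetical, but it illustrates why two copies on two distinct nodes survive any single-node loss.

```python
# Minimal sketch (hypothetical model, not HCP code) of DPL2 resiliency:
# every object keeps two copies on two distinct G-nodes, so at least one
# copy survives the failure of any single node in the cluster.
from itertools import combinations

nodes = ["G1", "G2", "G3", "G4"]  # minimal four-G-node cluster

# Hypothetical placement: each object's two DPL2 copies land on a distinct node pair
objects = {f"obj{i}": pair for i, pair in enumerate(combinations(nodes, 2))}

def all_objects_accessible(failed_node):
    """True if every object still has a surviving copy after one node fails."""
    return all(any(n != failed_node for n in pair) for pair in objects.values())

# Losing any single G-node never makes an object unreachable
assert all(all_objects_accessible(n) for n in nodes)
print("DPL2: every object survives any single-node failure")
```

RAID-6 (4D+2P) adds a second, independent layer below this: within each node's storage group, any two drives can fail without data loss.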


  • Page 22, Storage Nodes (S10, S30): ‘all HCP S nodes feature two independent x86 controllers…’. The competitor claims that if both controllers of a critical S node fail, the whole HCP system will be out of service.
    • In a single-S-node system, losing both controllers on that S node would mean losing access to that data. But if we were replicating to a second site, or running a geo-distributed erasure-coded topology, the loss of a single S node would not result in loss of data or loss of data access for the HCP system. Generally speaking, dual redundant components are the standard for enterprise data centers and are consistent with anything offered by our competitors.
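The value of dual redundant controllers can be shown with back-of-envelope availability math. This is generic reliability arithmetic, not HCP-specific data; the per-controller availability figure is an assumption for illustration only.

```python
# Back-of-envelope sketch (generic availability math, assumed numbers, not
# HCP specifics): an S node with dual independent controllers is down only
# when BOTH controllers are down at once, squaring the downtime fraction.
controller_availability = 0.999  # assumed availability of one controller

single_downtime = 1 - controller_availability      # one controller: 1e-3
dual_downtime = single_downtime ** 2               # both at once:   1e-6

print(f"single-controller downtime fraction: {single_downtime:.1e}")
print(f"dual-controller downtime fraction:   {dual_downtime:.1e}")
```

Site replication then covers the residual case: even if an entire S node is lost, the remote copy keeps the data accessible, which is the scenario the rebuttal above describes.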