
Once a file is saved to the Hitachi Content Platform (HCP), it is protected, compressed, single-instanced, encrypted, and replicated to another HCP repository, eliminating the need to back up the file. If a file is accessed and modified, it is versioned, and both versions are retained by HCP.

 

There are a number of ways that files can be saved to HCP to eliminate backup and facilitate retrieval.

 

[Image: hu-093015-1.png]

 

Applications that support industry-standard HTTP REST, Amazon S3, Symantec OST, WebDAV, CIFS, and NFS can store data, create and view directories, view and retrieve objects or files and their metadata, modify metadata, and delete objects. Objects added through any protocol are immediately accessible through every other supported protocol. These protocols can be used to access the data with a web browser, HCP client tools, third-party applications, Microsoft® Windows® Explorer, or native Windows or UNIX tools. HCP also allows special-purpose access to the repository through SMTP to support email journaling. Because HCP supports S3, any application that supports S3, including NetBackup, Commvault, Avere, and Nice, can save to HCP and eliminate backup.
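
To illustrate the S3-compatible path, here is a minimal sketch in Python using boto3 to write and then read back an object against an HCP namespace. The endpoint URL, credentials, bucket (namespace) name, object key, and metadata are hypothetical placeholders, and details of metadata handling may differ on a real HCP tenant.

```python
# Minimal sketch: storing and retrieving an object on HCP through its
# S3-compatible interface. Endpoint, credentials, and names are hypothetical.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://ns1.tenant1.hcp.example.com",  # assumed HCP namespace endpoint
    aws_access_key_id="HCP_ACCESS_KEY",                  # placeholder credentials
    aws_secret_access_key="HCP_SECRET_KEY",
)

# Store a file as an object, attaching a piece of custom metadata.
with open("invoice-2015-09.pdf", "rb") as f:
    s3.put_object(
        Bucket="finance",                     # an HCP namespace exposed as a bucket
        Key="invoices/2015/09/invoice.pdf",
        Body=f,
        Metadata={"department": "accounts-payable"},
    )

# Retrieve the object and its metadata later; the same object would also be
# reachable through the other supported protocols.
resp = s3.get_object(Bucket="finance", Key="invoices/2015/09/invoice.pdf")
print(resp["Metadata"], len(resp["Body"].read()), "bytes")
```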

 

Hitachi’s HNAS high-performance filer can automatically tier files to HCP. A common practice is to retain the 10% of files that are active on HNAS and tier the rest to HCP, which does not need to be backed up. Only the 10% of active files on HNAS need to be backed up or replicated for data protection.

 

HCP Anywhere is a secure file sync-and-share solution with mobile access to enterprise content. Files are retained in HCP, and links are used to share them. Mobile access to data in Microsoft® SharePoint; NAS devices, including Hitachi NAS Platform, NetApp, and Microsoft Windows Servers®; and file-sync-and-share capabilities are provided in a single solution.

 

Hitachi Data Ingestor (HDI) is software that runs on an x86 server or VM and acts like a familiar NAS device to users and applications. Files that are written to HDI are replicated over HTTPS to HCP. When the files stored locally exceed a capacity threshold, they are stubbed out to HCP, and the result is a bottomless filer that does not require any backup. HDI can be configured remotely by HCP Anywhere, which makes it ideal for remote and branch office administrators.
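
The capacity-threshold behavior described above can be pictured with the conceptual sketch below. It is not HDI code, just an illustration of stub-based tiering in which least-recently-accessed files that already have a copy on HCP are replaced locally by small stub markers; the paths, the 90% threshold, and the replication check are assumptions.

```python
# Conceptual sketch of capacity-threshold stubbing (not actual HDI code).
# When local usage crosses a threshold, the least-recently-accessed files
# that already have a copy on HCP are replaced by tiny stub markers; a later
# read of a stub would be satisfied by recalling the object from HCP.
import shutil
from pathlib import Path

CACHE_ROOT = Path("/export/share")   # hypothetical local NAS share
USAGE_THRESHOLD = 0.90               # start stubbing once the share is 90% full


def replicated_to_hcp(path: Path) -> bool:
    """Placeholder: verify that the file already has a copy in the HCP namespace."""
    return True


def over_threshold(root: Path) -> bool:
    usage = shutil.disk_usage(root)
    return usage.used / usage.total > USAGE_THRESHOLD


def stub_out_cold_files(root: Path) -> None:
    # Consider the least-recently-accessed files first.
    candidates = sorted(
        (p for p in root.rglob("*") if p.is_file() and not p.name.endswith(".stub")),
        key=lambda p: p.stat().st_atime,
    )
    for path in candidates:
        if not over_threshold(root):
            break
        if replicated_to_hcp(path):
            # Replace the local data with a small pointer to the HCP copy.
            stub = path.parent / (path.name + ".stub")
            stub.write_text(f"hcp://tenant/namespace/{path.relative_to(root)}\n")
            path.unlink()


if over_threshold(CACHE_ROOT):
    stub_out_cold_files(CACHE_ROOT)
```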

 

Backup-Free Data Protection and Content Preservation

Hitachi Content Platform is a truly backup-free platform: HCP protects content without the need for backup. It uses sophisticated data preservation technologies such as configurable data and metadata protection levels, object versioning and change tracking, and multisite replication with seamless application failover. HCP includes a variety of features designed to protect the integrity, privacy, availability, and security of stored data. HCP is part of a larger portfolio of solutions that includes Hitachi Data Ingestor for elastic, backup-free file services and Hitachi Content Platform Anywhere for mobile access to enterprise data with synchronization and sharing of files and folders across a wide range of user devices.

 

Economic Data Protection With HCP

With single instancing and versioning, HCP only needs to keep one copy of the data plus a replica for data protection, rather than multiple backup copies in which 90% of the data is identical from copy to copy. HCP can also reduce the cost of storage through the use of an HCP S10 storage node, which uses economical, high-capacity commodity disk drives protected by an erasure code that can sustain 6 failures out of 26 disks without any data loss and can rebuild data without waiting for the failed disks to be replaced. With the S3 interface, HCP can also push encrypted data to a public cloud, where storage costs are even lower. Since HCP manages the encryption key, it doesn’t matter where the data physically resides in the cloud. HCP can automate the retention, tiering, and shredding of data to eliminate the cost of orphaned and rogue copies of data.
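
As a rough illustration of the economics, the sketch below compares the raw capacity needed to hold 100 TB of unique data under a traditional scheme (a primary copy plus several full backup copies) with a single-instanced copy protected by a 20+6 erasure code and one remote replica. The 20+6 layout is assumed here only because it is consistent with the "6 out of 26 disks" figure above, and the backup-copy count is a hypothetical example, not a measured result.

```python
# Back-of-the-envelope comparison (illustrative numbers only).
unique_data_tb = 100.0

# Traditional approach: primary copy plus several full backup copies,
# most of whose content is identical between copies.
backup_copies = 3
traditional_raw_tb = unique_data_tb * (1 + backup_copies)

# Erasure-coded approach: one single-instanced copy protected by an assumed
# 20+6 erasure code (26 drives per group, any 6 may fail), plus one replica
# at a second site protected the same way.
data_fragments, parity_fragments = 20, 6
ec_overhead = (data_fragments + parity_fragments) / data_fragments  # 1.3x
sites = 2
erasure_coded_raw_tb = unique_data_tb * ec_overhead * sites

print(f"Traditional (1 primary + {backup_copies} backups): {traditional_raw_tb:.0f} TB raw")
print(f"Erasure-coded + remote replica ({sites} sites):    {erasure_coded_raw_tb:.0f} TB raw")
# Prints 400 TB raw for the traditional scheme vs. 260 TB raw here.
```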

 

Data protection with HCP can prove to be the most economical way to protect file and object data across the enterprise, from edge to core.

What’s New In SAP S/4HANA

 

In the past two years, SAP Business Suite customers have been porting their applications to HANA and optimizing their code to gain significant performance improvements in business processes and reporting. This is known as SAP Business Suite powered by SAP HANA. Now, instead of just porting existing applications to run on SAP HANA, SAP has developed a completely new Business Suite 4 SAP HANA (S/4HANA) that is built to take full advantage of SAP HANA’s in-memory technology: http://discover.sap.com/S4HANA.

 

A major achievement is the ability to reintegrate ERP, CRM, SRM, SCM, and PLM in one system, saving hardware costs, operational costs, and time. S/4HANA has a 10x smaller data footprint compared to a best-in-class business suite on traditional databases, with 7 times faster throughput and 1,800 times faster analytics and reporting.

 

[Image: hu-9-23-15.png]

 

The main business benefit is simplification. S/4 eliminates the scattered information and data duplication that create different versions of the truth, which complicates decisions and makes it difficult to bring good ideas to market quickly and profitably. Business processes that were built on long-running batch processes can now be done in real time. Complex data models that used to limit advancements in technology are now in the past.

 

 

UCP: The Ideal Infrastructure for SAP S/4HANA

 

SAP S/4HANA drives massive IT simplification. As applications and data models adapt to this simpler model, your infrastructure needs to be ready for SAP S/4HANA. You need a simple, high-performance, scalable, resilient infrastructure from an experienced vendor that partners closely with SAP.

 

Simple: Nothing is simpler than a converged solution that is preconfigured and certified for SAP HANA. Hitachi’s Unified Compute Platform (UCP) integrates enterprise-class storage systems, compute blades, and networking components to simplify deployment and help ensure predictable results.

 

High Performance: UCP is a tightly integrated enterprise data center solution that delivers a 5x performance increase over competitive offerings. The Hitachi blade servers use the latest Intel processors with Hitachi SMP technology to increase internal memory and processing power. The storage systems are optimized for flash, which matters for in-memory database performance that is gated by external log writes and persistent storage access.

 

Scalable: You need the ability to grow incrementally from small to very large-scale SAP systems while using the same infrastructure. You can easily expand processing power with symmetric multiprocessing (SMP) within the Hitachi Compute Blade family. Unique LPAR technology simplifies dev/test and Business Suite migration. Unlike other converged solutions that are sold in T-shirt sizes, upgrades are simplified by adding blades and/or drives instead of replacing the entire compute and storage appliance.

 

Resilient: Ensure that your applications stay up and running with enterprise-class server and storage systems that provide robust data-protection capabilities, including N+M cold standby, in addition to synchronous and asynchronous disaster recovery.

 

Experience: Hitachi Data Systems has partnered with SAP for over 21 years and is an SAP global service and technology partner. SAP HANA is one of the leading use cases for UCP implementations. Earlier this year Hitachi Data Systems acquired oXya, a provider of cloud infrastructure services based on Hitachi technology that specializes in hybrid cloud deployments of SAP workloads, including SAP HANA.

 

Partnership: In January 2014 Hitachi Data Systems signed a global OEM agreement that included SAP HANA® delivered by HDS. This collaboration agreement focused on providing future technology innovation and combined sales and marketing activities for customers worldwide. With this agreement, SAP and HDS will extend their integration in areas such as cloud computing, the SAP® Real-Time Data Platform, and high-performance enterprise computing. Hitachi plans to build on its vision for social innovation, the Internet of Things, and big data expertise with SAP HANA.

 

For more information on UCP for SAP S/4HANA, see our white paper: [White Paper] Simplify Business: SAP S/4HANA and Converged Infrastructure.

The Commonwealth Scientific and Industrial Research Organisation (CSIRO) of Australia is doing research to understand the behavior of bees and the stress factors that affect them. To do this they have turned to wearable technology: they have selected an RFID tag from Hitachi Chemical, which they glue onto thousands of bees to track their movements in and out of the hives, along with environmental sensors to measure temperature, humidity, and solar radiation.

 

[Image: hu-092115-1.png]

 

“This tiny technology allows researchers to analyse the effects of stress factors including disease, pesticides, air pollution, water contamination, diet, and extreme weather on the movements of bees and their ability to pollinate,” said Professor Paulo de Souza, CSIRO science leader. “We're also investigating what key factors, or combination of factors, lead to bee deaths en masse.”

 

Honey bees are essential for the pollination of about one third of the food we eat, including fruit, vegetables, oils, seeds, and nuts, and their health and ability to pollinate our crops are under serious threat. Since 2006, bee populations have been decreasing rapidly on a global basis. This is known as Honeybee Colony Collapse Disorder. Australia is one of the few countries where this has not happened, so it is important for this study to establish a control group for research in other countries.

 

The RFID tag is a tiny 2.5mm x 2.5mm chip that can be used in many IoT applications for tracking documents, packages, apparel, and tools, as well as living organisms like honeybees. The tag also has a booster antenna that can extend the read range to 4 meters. This is another example of Hitachi’s Social Innovation strategy in partnership with leading research organizations. CSIRO has partnered with Intel to capture the data from the RFID transponder and load it into the cloud, where it can be accessed by the international research community to solve the growing problem of Honeybee Colony Collapse Disorder: http://www.csiro.au/en/News/News-releases/2015/Honey-Bee-Health.

Last week I took a few days of vacation with my wife and daughter in Boston. One afternoon we went to visit the Harvard campus in Cambridge. While looking for a restroom, I stumbled into the Science building, where I found the first IBM computer, the Mark I, the IBM Automatic Sequence Controlled Calculator (ASCC), sitting in the triangle core of the building. I was thrilled to see the first mainframe computer ever built.

 

It was conceived by Harvard professor Dr. Howard Aiken and was encased in a steel frame 51 feet long, 8 feet high, and about 2 feet deep, built by IBM engineers in Endicott, N.Y. It was delivered to Harvard in August of 1944, in time to help with the calculations to develop the atom bomb. It was the world's largest electromechanical calculator at the time. It was electromechanical in that the basic calculating units had to be synchronized mechanically, run by a 50-foot shaft driven by a five-horsepower electric motor.

 

[Image: hu-091115-1.png]


An interesting side note in the display was this picture of an actual log entry of the first computer bug.

 

[Image: hu-091115-2.png]

 

A description of the Mark I’s compute capability, extracted from Wikipedia (https://en.wikipedia.org/wiki/Harvard_Mark_I), is presented below:

“The Mark I had 60 sets of 24 switches for manual data entry and could store 72 numbers, each 23 decimal digits long. It could do three additions or subtractions in a second. A multiplication took six seconds, a division took 15.3 seconds, and a logarithm or a trigonometric function took over one minute.

 

The Mark I read its instructions from a 24-channel punched paper tape and executed the current instruction and then read in the next one. It had no conditional branch instruction. This meant that complex programs had to be physically long. A loop was accomplished by joining the end of the paper tape containing the program back to the beginning of the tape (literally creating a loop). This separation of data and instructions is known as the Harvard architecture (although the exact nature of this separation that makes a machine Harvard, rather than Von Neumann, has been obscured with the passage of time, see Modified Harvard architecture).”

 

By the 1950s IBM had replaced the electromechanical computers with the first generation of electronic computers using vacuum tubes. In the 1960s the second generation was introduced, using transistors and magnetic core memories. In the late 1960s, third-generation computers introduced integrated circuits.

 

In comparison, storage has relied on mechanical, electromagnetic devices for over 50 years and is only now transitioning to solid-state devices. Flash is just the first generation of non-volatile memory devices. I believe that we will see an acceleration in non-volatile memory technology in the coming years. At the same time, however, I expect mechanical, electromagnetic devices like tape and disk to stay around for some time to come.

Enterprise customers require storage systems that can consolidate a heterogeneous mix of OS platforms and applications and still deliver consistently high performance and nonstop, always-on operations with the highest level of failover automation. Midrange storage arrays cannot deliver on these requirements due to their active/passive dual-controller architecture and their limited support for remote replication. There are application use cases for lower-cost midrange arrays where heterogeneous workloads, nonstop operations, and remote replication are not required. Adding flash drives to a midrange storage array will certainly increase performance, but it will not convert the array into an enterprise storage array. When referring to an all-flash array (AFA), you need to distinguish between midrange and enterprise AFAs.

 

Hitachi Enterprise Flash Arrays

Hitachi Data Systems builds a family of enterprise arrays, from a low-cost 2U VSP G200 that can virtualize external arrays with no internal disks, to a VSP G1000 that can support 255 PB of internal and external storage. Our VSP G200 to G1000 enterprise arrays have an internal switch architecture and a global cache that enable I/O load balancing, with partitioning for protection against the effect of noisy neighbors. The same code that runs in all the VSPs has been optimized for the latencies of flash, just as other flash array vendors have done. Unlike other flash array vendors, however, Hitachi also optimized the code in the flash module, which increases endurance and performance by removing the flash housekeeping impact from the I/O path in the device. As a result, an all-flash G1000 was able to max out the SPC-1 benchmark and set a record of 2 million IOPS with less than 1 ms response time using only 64 flash devices. These SPC-1 performance results are outstanding, but what do they mean in real enterprise environments?

 

At the recent 2015 Flash Summit in Santa Clara, Walter Amsler, our senior director of global technology planning, presented the results of the COOP Group with an all-flash G1000.

 

VSP G1000 All Flash Array at COOP

The COOP Group is the leading retail company in Switzerland, with total sales revenue of $30B in 2014. Their core business runs on SAP, with SAP HANA for real-time business intelligence. Their platforms include IBM P-Series servers with AIX, VMware, Solaris, Linux, and more. With thousands of outlets and production sites, they depend on long-distance (120 km) asynchronous remote replication for disaster recovery.

 

COOP installed an all-flash G1000 with 134 x 3.2TB flash modules (432 TB raw capacity) and array-based asynchronous replication. The G1000 used Hitachi Universal Replicator for asynchronous replication over 120 km for any server, OS, and application. This removed the software and hardware costs that would have been required for host-based replication.

[Image: hu-090415-1.png]

 

With asynchronous replication they maintained an average response time well below 1 ms for the entire storage subsystem, with a daily sequential throughput peak of 6 GB/second. Even in a highly consolidated environment with a variety of operating systems (AIX, VMware, Linux, Solaris) and a diversity of applications (BI, ERP, analytics), their consistently low response times indicated subsystem scalability without impact to low-latency performance.

 

The all-flash G1000 coped with substantial latent demand and eliminated hours of elapsed time for critical applications. Response times were reduced by a factor of 10. Faster shopping cart checkout times on their web shopping application improved online customer satisfaction. Their daily consolidation of distribution center data now completes on time, and they can run more analytics to enable better planning and decision-making. With SAP HANA they can do real-time stock adjustments.

 

 

Enterprise All Flash or Hybrid Arrays – Your Choice

If you have enterprise requirements, choose an enterprise array first, before you decide on all flash or hybrid flash. The difference between an all-flash enterprise array and a hybrid enterprise array is that with all flash, all of your data gets flash performance whether it needs it or not. On the one hand, I have heard a customer say that he prefers an all-flash solution even if it costs more, because it simply eliminates all his performance problems and he doesn't have to worry about those 2:00 AM calls about slow response times. On the other hand, another customer told me that he bought an all-flash array and ended up with an all-flash JBOD, because most of his application data was no longer active.

 

With Hitachi enterprise flash arrays you have a choice, and you can non-disruptively change the mix when your workload changes. The smaller models of the VSP also lower the entry cost for enterprise flash arrays. Since our enterprise arrays can also virtualize external storage, you can attach your existing non-flash, flash, or hybrid arrays and enjoy all the latest enterprise capabilities of a Hitachi array.