

 

 

Update: Please view the webinar discussing the top 10 IT trends that I see for 2016. This piece features insights from Greg Knieriemen, our Technical Evangelist, and Adrian Deluca, our Asia Pacific CTO. View the recorded webcast here:

https://www.hds.com/webtech/?commid=179961

 

I am sure that Greg and Adrian will add their own perspectives on these trends. I would also like to hear your views. As you will see, I am expecting a major transformation to happen in IT and in the vendor community. For background on this post, please see my previous posts:

 

Background for 2016 Trends

 

Are CIOs Becoming Extinct?

 

 

A GREATER FOCUS ON APPLICATIONS AND ANALYTICS

 

1. IT Skills Undergo Transformation

To meet the challenges of IT transformation, IT must offload the grunt work that ties their staff to infrastructure management and operations and start to develop specialist skills in areas such as cloud enablement, analytics, DevOps, mobile, and business solutions. This transformation of IT skills will involve a change in culture and will require the commitment of both business and IT leaders.

 

2. DevOps Adoption Accelerates Application Delivery
DevOps is a software development methodology in which operations and development engineers work together throughout the application lifecycle, resulting in high IT performance. Companies with high IT performance are twice as likely to exceed their profitability, market share, and productivity goals.

 

3. Data Warehouses Transition Into Data Lakes

Big data analytics involves processing large amounts of heterogeneous data derived from multiple sources and across multiple knowledge domains. Data lakes enable this by bringing together data sources in their original state, which can then be analyzed by applications that are brought to the data. Data lakes must also be able to incorporate existing data warehouses to leverage the investments that have already been made.
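To make the idea concrete, here is a minimal sketch in Python of bringing the analysis to raw data. The sources, field names, and the use of pandas are all illustrative assumptions; a production data lake would typically sit on HDFS or object storage with an engine like Spark, but the principle of keeping sources in their original form and joining them at analysis time is the same.

```python
# A toy data lake: heterogeneous sources kept in their original form,
# analyzed in place. File contents and field names are hypothetical.
import io
import pandas as pd

# Two "raw" sources landed in the lake without upfront transformation.
csv_source = io.StringIO("customer_id,purchase\n1,29.99\n2,14.50\n")
json_source = [{"customer_id": 1, "region": "West"},
               {"customer_id": 2, "region": "East"}]

purchases = pd.read_csv(csv_source)    # e.g., point-of-sale extracts
customers = pd.DataFrame(json_source)  # e.g., a CRM export

# The application joins across knowledge domains at analysis time,
# rather than forcing a single schema at load time as a warehouse would.
print(purchases.merge(customers, on="customer_id"))
```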

 

4. IT Takes Control of Provisioning Analytics Platforms

Business leaders will look to IT to make investments in analytics platforms, acknowledging that IT has a better understanding of security, data privacy, integration, and the service-level requirements of the business. This will reverse the shadow IT trend of business units acquiring their own analytics platforms and tools and creating their own data silos.

 

INFRASTRUCTURE TECHNOLOGIES DRIVE EFFICIENCIES

 

5. Converged Solutions Replace Reference Architectures

Instead of providing reference architectures detailing best practices for application enablement, vendors will begin to deliver these best practices as templates implemented through converged solutions. Converged infrastructure offers a more evolved platform for deriving greater cost efficiencies and time savings by allowing IT resources to be managed more cohesively.

 

6. In-memory Databases Gain Traction

The move to in-memory databases will gather momentum as faster reporting and analysis deliver a clear competitive advantage in today’s real-time business environment. Developments such as the consolidation of SAP’s business suite onto the HANA in-memory database with S/4 HANA, and the emergence of converged solutions and cloud service providers, will help simplify IT and facilitate this migration.
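For readers who want to see the basic idea, here is a minimal sketch using the in-memory mode of SQLite from Python's standard library. This is purely illustrative; SAP HANA is a far more capable columnar in-memory platform, but the principle of removing disk I/O from the query path is the same.

```python
# A minimal in-memory database: the entire store lives in RAM,
# so queries never wait on disk. Illustrative only (not HANA).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("APAC", 120.0), ("EMEA", 95.5), ("APAC", 80.0)])

# Reporting and analysis run directly against memory-resident data.
for region, total in conn.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region"):
    print(region, total)
```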

 

7. Flash Devices Begin To Replace High Performance Disks

The availability of multi-terabyte flash devices will enable flash to compete with high-performance 15K RPM disk drives on a capacity-cost basis. As a result, the majority of storage systems delivered in 2016 will contain a percentage of flash to boost response times and reduce the cost of managing storage performance.

 

IT LEADERSHIP DRIVES INNOVATION

 

8. Businesses Prepare For Next Gen Cloud

According to a study by The Economist, some of the best practices that will help business leaders make the most of their cloud opportunities include improving supplier selection; choosing the right cloud service for the right task; making better use of integrators to connect cloud services to existing IT infrastructure; and considering factors such as cloud’s potential to improve business operations and boost employee efficiency.

 

9. IT Infrastructure Companies Will Be Disrupted

As IT begins to focus more on application delivery, analytics and the Internet of Things, pure-play infrastructure companies will try to cope with declining revenues by splitting off parts of their business, acquiring new infrastructure companies, or merging with other infrastructure companies to drive economies of scale. However, in the longer term, they will have to be able to integrate IT with operational technology to deliver solutions around the Internet of Things that matter, in areas such as public safety, transportation, health, and life sciences.

 

10. IT Plays Leadership Role In The 3rd Platform

IT will play a more proactive role in leading businesses through the transformation driven by social, mobile, analytics and cloud, collectively known as the 3rd Platform. Contrary to the view that IT no longer plays a dominant role in driving enterprise technology spending, we believe that the compelling value of IT lies in its ability to implement 3rd Platform technologies in accordance with corporate requirements for security, data protection, availability and collaboration. If IT does not step up to this leadership role, the result will be silos of information and duplication of processes that will inhibit business growth.



Last week I presented to a customer in our Executive Briefing Center. I had expected to meet the CIO of this company, but instead the leader of this contingent was the VP of Director Services and the IT director who reported to him. We begin each briefing with the "Voice of the Customer," where the customer tells us about their environment, their direction, and what they expect to get out of our briefing.

 

During this briefing, the director of services explained that they were about to do a conversion from Oracle to SAP, so their CIO called in a consultant from a well-known technology research firm to help them plan this conversion. During the course of the study, the consultant determined that the CIO's position did not add any value. Within two weeks the CIO position was eliminated, and the CIO was let go. The VP of Director Services and the IT director now report to the VP of Finance. I was surprised at the recommendation by this analyst, since his firm had recommended the CIO position in the early days.

 

I checked with the Hitachi account executive to see if he had heard of this happening in other accounts. He said that in the past year he knew of two other companies in his region where the CIO position was eliminated and IT now reports to finance. I don't know whether this is a trend, but it is disturbing to me, because this is a time of major transitions when the CIO position is more important than ever.

 

Many of us agree with this definition of the role of the Chief Information Officer, found in Wikipedia: https://en.wikipedia.org/wiki/Chief_information_officer

 

CIOs form a key part of any business that utilizes technology. In recent times, it has been identified that an understanding of just business or IT is deficient.[1] CIOs are needed for the management of IT resources as well as the "planning of ICT including policy and practice development, planning, budgeting, resourcing and training".[2] In addition to this, CIOs are becoming increasingly important in calculating how to increase profits via the use of ICT frameworks, as well as the vital role of reducing expenditure and limiting damage by setting up controls and planning for possible disasters. … In this way, CIOs are needed to decrease the gulf between roles carried out by both IT professionals and non-IT professionals in businesses in order to set up effective and working relationships.

 

I can understand if a particular CIO is let go if he does not fill the role of a CIO as defined above. If he focuses on IT as a cost center and his goal is to reduce infrastructure cost without increasing business value, then he should turn IT over to the finance department.

 

Next week I will be discussing the trends that I see for 2016 with Greg Knieriemen, our HDS technical evangelist and strategist, and Adrian Deluca, our CTO for Asia Pacific. I will be getting their perspectives on these trends and other trends that they are seeing. A big part of what I see for 2016 will depend on the role of the Chief Information Officer.

 

Please mark your calendar for December 2 at 9 a.m. Pacific Standard Time for this webinar on IT Trends for 2016 and what we think of the importance of the CIO: https://www.hds.com/webtech/?commid=179961

 

Until then, stay safe and treasure your family and friends as we enjoy the Thanksgiving holiday in the US.


 

This is the time of year when analysts start to publish their predictions for 2016. IDC has already kicked off the season with their predictions at http://bit.ly/1SGQqMs. Their predictions are focused on digital transformation and are very similar to my thoughts, which I will be publishing on December 2 in a webcast with Greg Knieriemen, a Technical Evangelist for Hitachi Data Systems and popular host of the independent Speaking in Tech podcast (http://speakingintech.com). Joining us will be Adrian Deluca, the CTO for Asia Pacific, to give some local perspective.

 

Background

Businesses today have to contend with a new wave of innovative technology start-ups that are able to move quickly to capitalize on changes in the external environment. In my previous post on $10B startups, I mentioned the disruption caused by Airbnb in the hospitality business. This is a startup company that was started in 2008 and now has a $25.5B valuation, compared to a large hospitality corporation like Marriott, which was started in 1927 and has a current valuation of $20B. Airbnb is a software company, and its IT is in the Amazon cloud. It connects people who are looking for lodging with people who can provide lodging for a fee over the Internet.

 

There are two major disruptions happening here. The first and most obvious is that software and services businesses that use the Internet and cloud are more agile than brick-and-mortar businesses. The second disruption is on the customer side. The consumer who used to book a standard room at a hotel is now a "prosumer," who is empowered to book whatever, wherever, and at any fee that he chooses. Are there many people who want to find lodging this way? The meteoric rise in Airbnb's valuation would suggest there are.

 

To compete, traditional businesses will have to change their operating model to become more agile and connect with customers who are more sophisticated and empowered. Businesses will need to look to IT as the information technology experts for innovation and competitive advantage. In line with this, 2016 will see businesses shift their IT focus from infrastructure to application enablement, with more of the IT budget going to application development, analytics, and big data. This sets the stage for some of the key IT trends that we see emerging in 2016.

 

The value of traditional IT can be thought of as a triangle with more than 50% of the value and focus on infrastructure. In this new business environment, we need to turn that triangle upside down, and focus on the value that we bring to the end users through application development and analytics.

[Figure: the IT value triangle, inverted to put applications and analytics on top]

 

Infrastructure is still very important. In fact, it is still at the base and is the foundation. In this new positioning, infrastructure can be viewed as the tip of the spear that facilitates the penetration of development platforms and applications. However, infrastructure must take less of IT's time, effort, and budget. This can be done through virtualization, automation, software-defined technologies, and cloud.

 

Please mark your calendar for December 2 at 9 a.m. Pacific Standard Time for this webinar on IT Trends for 2016: https://www.hds.com/webtech/?commid=179961

Today Hitachi Data Systems is announcing the new all-flash Hitachi Virtual Storage Platform (VSP) F series and enhanced models of the Hitachi VSP G series storage offerings, featuring the next-generation Hitachi flash module with inline data compression (FMD DC2) and enhanced Hitachi Automation Director and Data Center Analytics tools for improved response times and greater effective capacity.

 

My colleague Mike Nalls will be providing more details on the all-flash VSP F series, and Bob Madaio will cover our overall strategy and direction for flash. In this post I will cover the enhancements in our new flash module device, the FMD DC2, which includes data compression and is positioned to displace performance disk drives.

 

The FMD DC2 Architecture

Flash drives require a lot of software and a lot of processing power for mapping pages to blocks, wear leveling, extended ECC, data refresh, housekeeping, and other management tasks, which can limit performance, degrade durability, and limit the capacity of the flash device. To support these processing requirements, the FMD from Hitachi Data Systems is built with a quad-core processor, with 8 lanes of PCIe out the front and integrated flash controller logic that supports 32 paths to the flash array. Having direct access to the engineering resources of Hitachi Ltd., Hitachi Data Systems is able to deliver patented new technology in this next-generation FMD, which sets it further apart from competitive flash vendors and displaces the performance disk market with lower-cost, higher-performance enterprise flash.


[Diagram: FMD DC2 architecture]

 

FMD DC2 Displaces High Performance Disk

This new-generation FMD DC2 flash device doubles the raw capacity of our previous-generation FMD from 3.2 TB to 6.4 TB and, with the use of compression, provides 4x more effective capacity for a lower TCO without penalties in performance or scalability. This higher-capacity FMD DC2 is 28% lower than 15K RPM disk drives on a relative cost-per-bit, street-price comparison, and as much as 64% lower with a 2:1 compression ratio. The lower price is the result of the higher 6.4 TB capacity and lower five-year support, power, cooling, and floor space costs. The FMD DC2 with 2:1 compression is even lower than entry 10K RPM disk drives. The FMD DC2 essentially displaces 15K RPM disks and, depending on the actual compression ratios, can even undercut 10K RPM disks!
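The two figures are consistent with each other, as this back-of-the-envelope check shows (the normalized prices are assumptions for illustration; only the 28% figure and the 2:1 ratio come from the comparison above):

```python
# Back-of-the-envelope check of the cost claims, normalized to a
# 15K RPM disk street price of 1.00 per bit (assumed baseline).
disk_cost_per_bit = 1.00
fmd_cost_per_bit = 1.00 - 0.28      # FMD DC2: 28% lower per raw bit

compression_ratio = 2.0             # 2:1 inline compression
effective_cost = fmd_cost_per_bit / compression_ratio  # cost per usable bit

print(f"raw flash vs. disk:        {1 - fmd_cost_per_bit:.0%} lower")  # 28%
print(f"compressed flash vs. disk: {1 - effective_cost:.0%} lower")    # 64%
```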

 

[Graph: relative cost per bit of the FMD DC2 versus 15K and 10K RPM disk drives]

 

Designed for Performance.

The processing power and multipathing architecture of the FMD DC2 not only enable us to double the capacity of our previous FMD, and double it again with compression, but also add functions that increase performance. The FMD DC2 increases read IOPS by 50% while maintaining less than 1 ms response time, even at PB scale, with the new VSP F and enhanced G series storage controllers.

 

 

Our tests have shown:

[Figure: FMD DC2 test results]

 

By virtualizing the flash capacity, the FMD manages exactly how and where data is stored so that both read and write I/O are executed as fast as possible. The FMD also offloads tasks from the VSP storage controller to the flash devices. This distributes data service tasks, reducing system overhead and preserving more processing power on the VSP for tasks like replication. In the first FMD we offloaded inline write avoidance, which increased write performance.

 

The FMD DC2 features a new "always on" inline compression offload engine, which eliminates the performance impact of compression/decompression that other vendors experience by doing this in the storage controller. It is a very-large-scale integration (VLSI) engine that enables lossless compression based on a derivative of the LZ77 sliding-window algorithm. The encoding is designed for high performance, using a small memory footprint. Its parallel processing delivers real-time compression/decompression using a systolic-array, content-addressable memory architecture, which ensures higher throughput, better efficiency, and none of the lag that can create response time spikes.

The compression engine delivers data reduction efficiency similar to the software compression algorithms found in other all-flash arrays. The difference is that the FMD engine performs at 10 times the speed of other implementations and runs on the FMDs, not in the storage controller. This eliminates the overhead associated with typical software implementations on Intel cores and/or journal-based file systems. Because compression in the storage controller may impact flash performance, users try to manage which data streams are compressed and which are not in order to maximize performance and capacity. This is an added level of administration that the FMD DC2 does not require. Since compression and decompression are done in the FMD, all the benefits of compressed capacity are delivered with no impact on performance. Just turn it on and forget it.
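As a rough illustration of what an LZ77-family lossless compressor does, here is a sketch using Python's zlib (DEFLATE is itself an LZ77 derivative). This is only an analogy for the algorithm class; the FMD DC2 implements its own engine in silicon, and the sample data and ratio here are assumptions:

```python
# Illustrating LZ77-family lossless compression with zlib (DEFLATE).
# The FMD DC2's engine is a hardware implementation, not zlib.
import zlib

# Repetitive, structured data (logs, database pages) compresses well.
data = b"2016-01-01 12:00:01 GET /index.html 200\n" * 1000

compressed = zlib.compress(data)
assert zlib.decompress(compressed) == data   # lossless round trip

print(f"{len(data)} -> {len(compressed)} bytes "
      f"({len(data) / len(compressed):.0f}:1 ratio)")
```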

 

The FMD also maintains sustained write performance by preserving reserved capacity for background tasks like garbage collection and wear leveling. The FMD eliminates the "write cliff," where housekeeping tasks like garbage collection block write I/O, by taking these tasks out of the I/O path with its multipath architecture.

 

The FMD has a patented method to accelerate reads by allowing the system to read directly from the flash device and bypass the storage controller cache. This also minimizes the system overhead that can result in longer latencies.

 

Another patent improves read/write performance by providing multiple, parallel connections between the FMD controller and flash modules. This allows the controller to access a separate flash module if one is busy.

 

Designed for Durability.

One of the arguments against flash has been the durability of flash versus disk drives. Disk drives can be overwritten almost indefinitely (10¹⁵ overwrites), while flash cells wear out (10³ overwrites) and eventually lose their ability to retain electrons. However, with proper management, flash drives can be more durable than disk drives.

 

The FMD exploits the differences in the failure modes of flash versus magnetic disk. Disks record data on physical blocks that are formatted on the disk. If the blocks get damaged through events such as head crashes, the whole disk must be replaced. With flash, data is recorded on virtual pages, which can be relocated to another physical page if the original page is no longer usable. The FMD provides 25% extra capacity for spares, so the FMD does not need to be replaced until all the spare pages are used up.
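A toy model may help picture this. The sketch below is a deliberate simplification (the FMD's real mapping logic is proprietary), but it shows how a virtual-to-physical page map plus a 25% spare pool lets the device survive page failures without replacement:

```python
# Toy model of virtual page remapping with a 25% spare pool.
# Purely illustrative; not the FMD's actual implementation.
class FlashModule:
    def __init__(self, pages: int, spare_ratio: float = 0.25):
        self.page_map = {v: v for v in range(pages)}   # virtual -> physical
        self.spares = list(range(pages, pages + int(pages * spare_ratio)))

    def retire_page(self, virtual_page: int) -> bool:
        """Remap a failed physical page to a spare. Returns False once
        the spare pool is exhausted and the module must be replaced."""
        if not self.spares:
            return False
        self.page_map[virtual_page] = self.spares.pop()
        return True

fmd = FlashModule(pages=1000)                 # plus 250 spare pages
print(fmd.retire_page(42), fmd.page_map[42])  # True 1249 (a spare page)
```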

 

Every FMD is designed to monitor flash storage for issues so that they can be resolved quickly, preventing long-term outages and data loss. Since flash technology can only sustain a specified number of write/format cycles, a key design consideration is to reduce the number of unnecessary writes.

 

Compression helps by reducing the number of blocks that need to be written.

 

The FMD has a block write avoidance feature that can recognize any data stream of all 0s or all 1s in real time and remap that data with a pointer. This not only eliminates writes but also effectively increases the overprovisioned space for spares.
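Conceptually, the check is simple, as in this hypothetical sketch (the real feature runs in the FMD's hardware data path, in real time, and the block size here is an assumption):

```python
# Sketch of block write avoidance: all-0s and all-1s blocks are
# recorded as a marker/pointer instead of consuming a flash write.
BLOCK = 4096  # assumed block size for illustration

def classify(block: bytes):
    """Return a marker for uniform blocks, or None if a real write is needed."""
    if block == b"\x00" * BLOCK:
        return "ALL_ZEROS"
    if block == b"\xff" * BLOCK:
        return "ALL_ONES"
    return None  # ordinary data: write it to a physical page

print(classify(b"\x00" * BLOCK))  # ALL_ZEROS -> no flash write needed
```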

 

ECC (error correction code) has been extended to preserve the integrity of data writes and correct up to 59 bits per 2 KB of compressed data, which exceeds the MLC (multi-level cell) spec of 40 bits per 1.1 KB of data. This ensures that even if a bit error is discovered, it can be easily corrected. This correction enhances the ability to monitor the degradation of pages and avoids premature page rewrites. (There are actually two levels of ECC. The first level is done per transaction and provides end-to-end error checking through a custom eight-byte DIF (Data Integrity Field) appended to write I/O recorded on the FMD DC2. This ensures that what was sent to the FMD is what was received.)
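To see how parity bits can both detect and repair an error, here is a classic single-error-correcting Hamming(7,4) code in Python. This is only a teaching example; the FMD DC2's ECC is a much stronger multi-bit code (59 correctable bits per 2 KB), not Hamming:

```python
# Hamming(7,4): 4 data bits + 3 parity bits; corrects any single-bit error.
# Teaching example only; real flash ECC corrects many bits per codeword.
def encode(d3, d5, d6, d7):
    """Data at positions 3,5,6,7; parity at 1,2,4 (1-indexed)."""
    return [d3 ^ d5 ^ d7, d3 ^ d6 ^ d7, d3, d5 ^ d6 ^ d7, d5, d6, d7]

def correct(code):
    """The recomputed parity (syndrome) gives the 1-indexed error position."""
    c = [0] + list(code)
    syndrome = ((c[1] ^ c[3] ^ c[5] ^ c[7])
                + 2 * (c[2] ^ c[3] ^ c[6] ^ c[7])
                + 4 * (c[4] ^ c[5] ^ c[6] ^ c[7]))
    if syndrome:
        c[syndrome] ^= 1          # flip the corrupted bit back
    return c[1:]

word = encode(1, 0, 1, 1)
word[4] ^= 1                      # inject a single bit error (position 5)
assert correct(word) == encode(1, 0, 1, 1)   # detected and repaired
```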

 

The FMD manages wear leveling at both the page and block level so that leveling occurs across a much broader capacity – reducing failures and overhead.

 

Virtualizing the flash capacity enables control of exactly where data is stored, for better wear leveling and to mask normal cell failures. Virtualization of flash capacity also allows us to prevent write inefficiencies by coalescing writes and allocating pages and blocks depending on need.

 

If an FMD reaches 95% of its specified write endurance, a service information message (SIM) is issued to prompt the replacement of the FMD. A service processor (SVP) or a report from Hitachi Storage Navigator can confirm the write capacity of the FMD. FMDs carry a five-year warranty, and they remain covered for as long as the customer pays for system maintenance.

 

The Benefits of the FMD Are Only Part of the Story

I have pointed out in my previous blog posts (Flash investments for the long term) that the most cost-effective enterprise flash array can only be realized by combining intelligence in the flash device controller with intelligence in the storage controller.

 

To realize the complete user benefit of the FMD DC2, you need to consider the enhancements we have made in the VSP F and G storage systems and the enhanced Hitachi Automation Director and Data Center Analytics tools.

For more information, you can start by downloading our announcement letter and reading the following blog posts on the HDS Community:

The Move to Flash: The Time is Now

Finding the most reliable all-flash array

Value of flash storage is about performance vs cost (and it always has been)

 

For technical details on our next generation FMD DC2 and additional features in our VSP F and G series storage systems, please download the latest Hitachi Accelerated Flash white paper: Hitachi Accelerated Flash: An Innovative Approach to Solid-State Storage.

I recently saw this chart of the valuations of the top ten startups, and I was floored!


[Chart: valuations of the top ten startups]


Leading the list is Uber Technologies, a transportation networking company that develops and operates the Uber mobile app. This app allows consumers with smartphones to book a ride with Uber drivers who use their own cars. Uber was founded in 2009, and the first app was available in June 2010. It started in San Francisco and is now in 58 countries and over 300 cities.

 

Xiaomi, Inc. is a privately owned Chinese electronics company that is currently the third- or fourth-largest smartphone company in the world. It has been selling Android-based smartphones nearly at cost since 2011. It sells phones in some of the most populous countries in the world, like China, India, and Indonesia, as well as Malaysia, Singapore, the Philippines, and Brazil. The company uses hardware sales merely as a means of delivering software and services, and considers itself an Internet and software company. It sells through its online store and markets through social networking. It has a "Design as you Build" process that can incorporate customers' requests in a matter of weeks.

 

Airbnb was started in 2008 by two people who initially rented out three air mattresses in their living room, including homemade breakfast, so that they could afford the rent on their San Francisco loft. Initially called AirBed and Breakfast, they changed the name to Airbnb and created a website for people to list, find, and rent lodgings in over 34,000 cities in 190 countries. Earlier this year, when the United States began to ease the embargo on Cuba, Airbnb was up and booking lodgings in Cuba by April, servicing the huge pent-up demand for expats, business people, and tourists to travel there. How long will it take for traditional hotels to be in Cuba? What if the traditional hospitality businesses built hotels in Cuba, and then the political situation in the United States changed and the embargo was re-established? These businesses would have millions of dollars of capital assets locked up in Cuba, while Airbnb would simply close down its website.

 

The list goes on, with many of these ten startups being in existence for less than eight years. The exceptions are Palantir (2004) and SpaceX (2002). But even these exceptions are young for companies with valuations of 10 billion USD or more. How do they do this?

 

Many of these are Internet software companies that are selling to a new type of customer. Analysts are beginning to coin the term "prosumers" as opposed to "consumers" (Forbes article on "prosumers" here). In the case of the hospitality business, a "consumer" books a room at a hotel and pays the standard rate for a standard deluxe room. A "prosumer" goes to Airbnb and has a choice of lodgings, from a house, a flat, or a spare bedroom to an air mattress in a living room, a choice of locations, and a choice of rates. He can proactively choose his lodgings.

 

Are there many customers who want to do this? If you compare the valuation of an established hospitality company like Marriott, founded in 1927, at $20B, with Airbnb, founded in 2008, at $25.5B, the answer seems to be yes.

 

Get ready for a brave new world!

Over the last few years, I have been writing a series of blog posts on IT trends for the coming new year. I will be beginning my series for 2016 next month. One of the things I predicted last year was the displacement of solid-state drives (SSDs) designed for commodity markets by flash modules that are enhanced with the processing power required to address enterprise storage requirements for performance, durability, and increased capacity. Unlike disk drives, flash drives require a lot of software and a lot of processing power for mapping blocks to pages, wear leveling, extended ECC, data refresh, housekeeping, and other management tasks. No matter what you do in the storage controller, most of these functions have to be done at the flash device level. This year we are beginning to see more specialized flash devices as the demand for larger-capacity and, consequently, lower-cost flash devices increases.

 

Having direct access to the hardware engineering resources of Hitachi Ltd., Hitachi Data Systems (HDS) delivered a specialized flash module in 2012 with a capacity of 1.6 TB, when SSDs were still limited to 400 and 600 GB. Unlike other vendors, who used standard SSDs designed for use with commodity systems like PCs, Hitachi Data Systems developed its own flash module designed specifically for the enterprise storage market, with a quad-core processor that had enough power to support a usable flash capacity of 1.6 TB and deliver four times the sustained throughput of the solid-state drives of that generation, which were based on multi-level cell (MLC) technology. A year later, in 2013, HDS began delivering 3.2 TB flash modules, and as of the end of 2Q 2015 we have shipped over 210 PB of flash capacity. While we are not currently in the all-flash array category, which Gartner has defined as flash arrays that cannot support tiering to hard disks, we have many customers that have all-flash G1000 storage arrays. Our shipments of flash capacity show that we are the leader in enterprise flash deployment. We also support SSDs, as shown in the following chart.

 

[Chart: Hitachi flash module and SSD capacity options]

It is harder for smaller companies, and even large companies like EMC, to make the investments to build or customize their flash modules for enterprise storage. That is the primary handicap of all-flash array vendors that use commodity SSDs: their ability to differentiate their flash offering is limited to what they can do in the software of the storage controller. The only way that they can communicate with the flash module is through a SAS or SATA interface. Since Hitachi builds the storage controller as well as the flash device controller, we can offload functions to the device and initiate actions in the device that are not possible through standard interface commands. (See my previous post on data eradication for flash devices, which we initiate and report back through our storage controller: What Do You Do With Your Old Flash Drives?) Think of what we could do with other functions in the flash controller, like compression: we would not have the overhead of compression/decompression in the storage controller, and we would be able to keep it on all the time without impacting performance.

 

Even on the storage controller side, the AFA vendors lack the capability to provide enterprise functions like disaster recovery or dual-active availability. The much-anticipated Pure Storage IPO debuted in October but ended the day down 5.8% from its IPO price of $17. Pure Storage uses commodity SSDs, but it is not profitable, since it addresses only a subset of the enterprise market. Its stock did jump past its IPO price recently as a ripple effect from the Dell buyout of EMC. Analysts attribute this increase to the theory that emerging storage vendors like Pure will be acquisition targets for larger companies. If the goal of an emerging storage company is to be acquired, how much investment will it be making in long-term development of enterprise storage capabilities?

 

As we go to larger-capacity flash devices, whether by adding more MLC DIMMs, triple-level cell (TLC), or 3D NAND technology, we will need more management and functions in the flash device, and that means specialized flash devices as well as enhancements to enterprise storage controllers. The new larger-capacity flash drives will be comparable to 15K high-performance disk drives on a capacity-cost basis and will eliminate the need for these types of disk drives. As we increase the capacity of flash devices, costs will decrease and flash will displace more and more of the disk drive market. The end for disk drives is in sight.

 

Flash itself will give way to more durable, higher-performance nonvolatile technologies in the future. Success will depend on the ability to develop new media controllers, as well as storage controllers, that can integrate these technologies to address future business requirements. Hitachi Data Systems plans to be here for the long term and will continue to invest in the research and development of current and future systems.