Reviewing 60 years in the Storage industry

By Hubert Yoshida posted 04-30-2020 08:06

  



It is hard to believe that it has been 60 short years since I first started working with computers, and this is my last day working in this industry as a full-time employee. Since I have been a corporate blogger for Hitachi Data Systems and Hitachi Vantara for over 15 years, I would like to indulge myself with a look back on my career in the IT industry over those 60 years. I feel very much like Forrest Gump, in that I have been a witness to many of the historic events in storage history since the 1960s. It has been a fantastic life, and, like a box of chocolates, I never knew what I was going to get next.

 

I began working with computers during my university days at UC Berkeley in 1960. I was a Math/Physics major, and I worked for Dr. Luis Alvarez as one of his lab rats at the Berkeley Radiation Lab on Bevatron experiments with K-mesons. There I worked at night running his data reduction on an IBM 704 computer: loading the programs from punched cards, setting the configurations through octal keys, dialing in the tape drives as we progressed through the data, and then mounting and printing the output tapes on a 1401 computer. I was the human operating system and job scheduler.

 

One day, President John F. Kennedy came to speak at the Radiation Lab, and I was inspired by his exhortation to accept responsibility and take action to make the world a better place. At that time Berkeley was in turmoil, with massive political demonstrations protesting nuclear testing and the hard line against Cuba. As a reaction to everything that was going on, I decided to leave the university and enlist in the U.S. Marine Corps. I spent 5 years in the Marine Corps, was eventually commissioned, and had the rare honor of leading Marines in combat as a rifle platoon commander in Vietnam. When I left the Marine Corps, I saw an ad in the Los Angeles Times that IBM was hiring. Since I had worked with IBM computers in the past, I applied for an interview. IBM was looking for "Systems Engineers" to launch a new computer system called the System/360 that could execute tens of thousands of instructions per second. The first thing they asked me about in the interview was something called a DASD (Direct Access Storage Device). I was shocked. All I knew about storage was punched cards and tape. I felt like Rip Van Winkle, falling asleep for 5 years and waking up to find that everything I thought I knew about computers had changed. I have often wondered if there would be another 5 years in which the world of computers would undergo such major changes.

 

I was assigned to the IBM team that supported North American Rockwell, which developed the Apollo capsule, and I was there during the moon landing in 1969. President Kennedy's vision of landing a man on the moon and returning him to Earth within the decade was accomplished. It is amazing to think that this was done in less than 10 years with less compute power than we now have in an iPhone.

 

The IT industry has been amazing. During my work life I saw the cost of storage capacity go from $2M/GB to essentially zero, or less than $0.02/GB. The increasing density of storage has far exceeded the density of transistors on a microchip (Moore's law), which doubled every two years; Kryder's law saw the density of storage doubling every 13 months. The amazing thing was that this doubling of density year after year occurred in what has been mostly a mechanical device. I worked for the IBM storage division in San Jose during the 1970s and participated in the development of the mainframe storage devices as a performance manager, and in the development of data management software. It was an exciting place to be, as we saw the development of the Winchester drive, the first HDA (Head Disk Assembly), cached storage controllers, FBA (Fixed Block Architecture) for non-mainframe storage, and the thin-film head, which enabled even denser recordings.
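
As a back-of-the-envelope illustration of that decline, here is a quick calculation using the rounded figures above (a sketch, not precise history):

```python
import math

# Rough check on the cost decline described above, using the
# rounded figures from the text (not precise historical data).
start_cost = 2_000_000.0   # dollars per GB, early in my career
end_cost = 0.02            # dollars per GB, roughly today

factor = start_cost / end_cost
doublings = math.log2(factor)            # halvings of cost per GB
print(f"Cost per GB fell by a factor of {factor:,.0f}")
print(f"That is about {doublings:.1f} halvings")

# Kryder's law pace: one doubling of density every 13 months
print(f"At 13 months per doubling: ~{doublings * 13 / 12:.0f} years")
# Moore's law pace for comparison: one doubling every 24 months
print(f"At 24 months per doubling: ~{doublings * 24 / 12:.0f} years")
```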

 

I did a rotation through different staff jobs at IBM White Plains and Mount Pleasant and met some of the legends of IBM management during the early 1980s. When I returned to San Jose, I had an opportunity for an assignment in IBM Japan. The last project I worked on in Japan was to develop an OEM business attaching IBM storage to Fujitsu and Hitachi mainframe operating systems. There I learned a lot about Fujitsu and Hitachi storage and mainframe systems. On the occasions that I did meet Hitachi engineers, I was very impressed with their professionalism, and I was also impressed with the quality of their storage systems.

 

When I returned to San Jose, we were in the midst of implementing RAID. The term "RAID" was coined by David Patterson, Garth Gibson, and Randy Katz at the University of California, Berkeley in 1987. In their June 1988 paper "A Case for Redundant Arrays of Inexpensive Disks (RAID)", they argued that the top-performing mainframe disk drives of the time could be beaten on performance by an array of the inexpensive drives that had been developed for the growing personal computer market. Although failures would rise in proportion to the number of drives, by configuring for redundancy, the reliability of an array could far exceed that of any large single drive. I had occasion to work with Garth Gibson when we were on the Scientific Advisory Board for the Data Institute for the Government of Singapore. In one of our conversations I asked him why he and the others had not patented RAID when they published that paper. Garth replied that it was better for the industry: a patent would have limited the rapid adoption of RAID technology. (I had also worked with Ken Ouchi at IBM, who, I learned later, had patented what became RAID 4 back in 1977.)
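
The paper's reliability argument is easy to see with a simplified version of its mean-time-to-failure estimate. The numbers below are made up for illustration; they are not from the 1988 paper:

```python
# Simplified RAID reliability estimate in the spirit of the 1988
# paper. All numbers are made up for illustration.
mttf_disk = 30_000.0   # hours: mean time to failure of one drive
n_data = 10            # data drives in the array
mttr = 24.0            # hours to replace and rebuild a failed drive

# Striping without redundancy: any single failure loses data.
mttf_stripe = mttf_disk / n_data

# Add one parity drive: data is lost only if a second drive fails
# during the repair window of the first.
n_total = n_data + 1
mttf_raid = mttf_disk**2 / (n_total * (n_total - 1) * mttr)

print(f"One drive:               {mttf_disk:>11,.0f} hours")
print(f"{n_data} drives, no redundancy: {mttf_stripe:>8,.0f} hours")
print(f"{n_total} drives with parity:   {mttf_raid:>10,.0f} hours")
```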

 

While everyone was excited about the increasing densities of disk drives, those densities were creating a challenge for mainframe storage systems. Mainframes used CKD (Count Key Data) devices, which were processed by channel processors that executed channel programs while the devices were running. This required each count, key, and data field to be separated by timing gaps so that commands could be executed during the gaps. As disk densities increased, this became a problem, because the gaps were wasting more and more of the areal density. The problem was solved by emulating the CKD format in the cached controllers and mapping it onto the FBA devices that were being used for non-mainframe systems. This eliminated the need for gaps on the physical media and enabled storage systems to support both mainframe and non-mainframe requirements on the same FBA devices, the only difference being the host connections: mainframes required ESCON/FICON, while non-mainframe systems used SCSI. This was the first implementation of storage virtualization through the storage controller, an approach Hitachi has continued to develop, today mapping the latest storage enhancements, such as NVMe, across legacy and third-party storage systems.
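
To make the idea concrete, here is a toy sketch of CKD emulation over fixed blocks. Everything in it is hypothetical and heavily simplified; real controller microcode is vastly more involved:

```python
# Toy sketch of CKD-to-FBA emulation: a mainframe record address
# (cylinder, head, record) is resolved through an in-controller map
# to a run of fixed-size blocks, so no inter-field gaps are needed
# on the physical media. Names and numbers are hypothetical.

BLOCK_SIZE = 512  # bytes per fixed block on the back-end FBA device

class CkdEmulator:
    def __init__(self):
        # (cylinder, head, record) -> (first block, record length)
        self.directory = {}
        self.next_block = 0

    def write_record(self, cyl, head, rec, data: bytes):
        """Store a CKD record as a run of fixed blocks."""
        n_blocks = -(-len(data) // BLOCK_SIZE)  # ceiling division
        self.directory[(cyl, head, rec)] = (self.next_block, len(data))
        self.next_block += n_blocks             # allocate on the FBA side

    def locate_record(self, cyl, head, rec):
        """Resolve a channel-program search to an FBA block address."""
        return self.directory[(cyl, head, rec)]

emu = CkdEmulator()
emu.write_record(100, 3, 1, b"x" * 4096)   # one 4 KB record
print(emu.locate_record(100, 3, 1))        # -> (0, 4096)
```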

 

About that time, I decided to work as a contractor and ended up working for IBM in the UK as the business manager for RS/6000 storage with a new storage protocol called SSA (Serial Storage Architecture). SSA provided serial attachment for open systems storage, replacing the parallel SCSI cable. At that time Fibre Channel (FC) was just being introduced, but SSA was working while FC was not. Although SSA was gaining success, IBM decided to switch to FC, and I found myself looking for another job. That was when a friend of mine invited me to interview at Hitachi Data Systems for a job as a product manager for open systems storage.

 

When I joined Hitachi Data Systems, I was the loneliest person in the company, since at the time it was selling very expensive Hitachi mainframes with large enterprise mainframe storage. No one, especially the sales teams, wanted to talk to me about those low-cost, open systems, direct-attach SCSI storage devices. Fortunately, I met Kumar Malavalli, the founder of a small startup company that had just announced something called the "Silkworm" FC switch, which had a switch fabric of 1 Gbps Fibre Channel links. This was a huge improvement over the then-current "fast" SCSI-2, which provided 10 MB/s and required huge 50-pin connectors versus a single fibre connector. More importantly, it freed open systems servers from the limitations of direct-attached storage and enabled multiple open systems servers to access a common pool of storage on larger enterprise-class storage arrays. This startup company was Brocade, and Silkworm was their first generation of FC fabric switches.

 

Since Hitachi Data Systems had just announced the 7700 scalable storage array with an internal switch architecture, we saw the synergy of combining the Brocade Silkworm connectivity fabric with the scalable switch architecture of the Hitachi 7700, and we became early partners with Brocade. In those early days, many people thought that the lower-cost FC hub products from Gadzoox and Vixel would be preferred by customers over the expensive Brocade switches. But coming from a mainframe legacy, we were familiar with the scalability advantages of ESCON director switches, and we committed to FC switch technology and Storage Area Networks (SANs) as the way to bring enterprise storage functionality to open systems servers.

 

As the mainframe market declined in the 1990s, we found it more difficult to compete in that market, since we had to include the IBM operating system with every Hitachi mainframe that we sold. In 1999 we changed our strategy to focus on the storage market for mainframe and open systems SAN, exiting the mainframe systems market. Our storage switch architecture combined with Brocade switches gave us a competitive advantage that made the revenue transition from mainframes to storage systems fast and painless. Because of our switch architecture we were able to replace 6 to 8 competitive storage arrays with one 7700 array and increase our share of the growing SAN market. Suddenly open systems storage was cool, and I was no longer the loneliest person in Hitachi Data Systems.

 

The other amazing thing about this industry is how the data has been exploding. I cannot really comprehend the amount of data in an exabyte, let alone a zettabyte. Last year IDC published their Global DataSphere report and projected that the world's data will grow from about 40 zettabytes in 2019 to 175 zettabytes by 2025. At that rate of growth, we would reach a yottabyte in the early 2030s! How much data is one zettabyte? One analyst said that a zettabyte is equivalent to the sum of all the grains of sand on all the beaches in the world!
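
As a rough check on that extrapolation, using only the two IDC figures above and assuming the growth rate stays constant:

```python
# Extrapolate the IDC figures quoted above: ~40 ZB in 2019 growing
# to a projected 175 ZB in 2025, assumed to continue at the same rate.
growth = (175 / 40) ** (1 / 6)     # implied annual growth, 2019-2025
print(f"Implied growth rate: {growth - 1:.1%} per year")

zb, year = 175.0, 2025
while zb < 1000:                   # 1 yottabyte = 1,000 zettabytes
    zb *= growth
    year += 1
print(f"At that pace, ~1 YB around {year} ({zb:,.0f} ZB)")
```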

 

Another IDC report says that worldwide shipments of storage capacity will total only 21.9 zettabytes by 2025. That leaves a shortfall of roughly 153 zettabytes! In 2019 there were about 40 zettabytes of data, and the cumulative installed storage capacity was less than 2 zettabytes; a lot of that data is simply not stored. By 2025, 153 zettabytes of data will be lost. The storage analyst Jon Toigo referred to this as the Zettabyte Apocalypse. The only way we will be able to capture that amount of data will be to go to something extremely dense, such as DNA storage, where 215 petabytes can be stored in a single gram of DNA. Unlike the moonshot, which went from zero to landing a man on the moon in less than 10 years, developing DNA storage over the next 10 years should not be as daunting a task. Scientists have been reading and writing DNA for decades. DNA formats are fixed, and DNA can last hundreds of thousands of years if kept in a cool, dry place. In 2019 the World Economic Forum identified DNA storage as one of its top 10 emerging technologies, and Microsoft, Intel, and Micron have all invested in it. The future of storage may be in the hands of microbiologists rather than electrical engineers.
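
Putting those figures together (the arithmetic below simply combines the numbers cited above):

```python
# Combine the figures cited above: the projected shortfall, and what
# it would weigh at the quoted DNA density of ~215 PB per gram.
data_2025_zb = 175.0     # zettabytes of data projected for 2025 (IDC)
shipped_zb = 21.9        # zettabytes of capacity shipped by 2025 (IDC)

shortfall_zb = data_2025_zb - shipped_zb
print(f"Shortfall: {shortfall_zb:.1f} ZB")           # ~153 ZB

PB_PER_ZB = 1_000_000    # 1 zettabyte = 1,000,000 petabytes
grams = shortfall_zb * PB_PER_ZB / 215
print(f"As DNA: {grams:,.0f} g (~{grams / 1000:,.0f} kg)")
```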

 

I have been fortunate to work for two great companies, IBM and Hitachi, both great technology companies with similar cultures. People define the culture of a company, and I have been very fortunate to work with some amazing people. I have been able to meet some of the great leaders of this industry, like Jack Harker at IBM, who was on the team that developed the first disk storage system, and Dr. Naoya Takahashi, who I consider to be the father of Hitachi's enterprise storage systems with their unique switch architecture and virtualization capabilities. At Hitachi, I have had the opportunity to work with marketing, sales, and customers in nearly all the geos, and to visit and experience so many things that I never would have dreamed about: hiking China's Great Wall, taking a Finnish sauna in the dead of winter, walking in the steps of Jesus on the Via Dolorosa in Jerusalem, and revisiting the battlefield of Operation Utah with a Viet Cong survivor whom I fought against over 50 years ago. I am blessed with so many great memories from the experiences that my work made possible.

 

There are so many people that I would like to thank for making this all possible. Many have already passed on, and there are many more that I have lost contact with over the years. I am looking forward to retirement and hopefully reconnecting with many of them. I am fortunate, and especially honored, to be given an Emeritus title, so that I can still be connected to the Hitachi family. Thank you for all your support and friendship over the years. I will continue to post blogs on this community site from time to time.

 


#Blog
#Hu'sPlace