
Dude - don't tell me all my big applications are 'legacy'!

Blog post by Francois Zimmermann, May 4, 2017


Not a week goes by without an article comparing "web-native applications" with "legacy applications". The implication is that all innovation and competitive edge will come from well-behaved, scale-out, containerized apps. But how true is this?


In the next one to two years, companies will invest in the following scale-up, big-iron, 'fragile', non-cloud-ready technologies for specific use cases where they believe they can gain a substantial business advantage:

  • Storage Class Memory to support in-memory computing - We already have customers who deploy technologies like SAP HANA to reduce the time it takes to roll up their forecasts and stock positions from days to minutes. With the introduction of Storage Class Memory in the Skylake timeframe, ever-larger data sets will be able to take advantage of the extreme flexibility of in-memory computing, and businesses will leverage this to interrogate and model market data and improve business instrumentation.
  • Specialized hardware for Artificial Intelligence - Many companies will start to look at deep learning technologies as a way of optimizing complex business problems and automating processes to drive competitive advantage. Machine learning algorithms can run on general-purpose infrastructure, but learning speed for large data sets is typically constrained by the bandwidth between processing nodes. Rather than trying to overcome these constraints by changing the algorithms, it will be faster to deploy specialist hardware that is optimized for running neural networks (e.g. Intel Lake Crest).
  • NVMe and alternatives to Ethernet for low-latency apps - Algorithmic trading and low-latency transactional workloads will look to alternatives to commodity interconnects to provide the sort of marginal gains they need to maintain their advantage.
  • FPGA acceleration - When Intel bought Altera, we started to talk about ways to move 'beyond Moore's Law' for certain types of workload that need to crunch a large number of parallel data streams at wire speed. For example, we believe this will be particularly relevant for use cases like stream analytics: How do I efficiently aggregate and sort data from a set of continuous data streams from IoT, market or web sources? How can I sift through all that data in flight so that I retain only what is useful and quickly identify items of interest among the noise? How can I raise events and actions against these in real time? (A simple software sketch of this in-flight filter-and-aggregate pattern follows this list.)
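
To make the stream-analytics questions above more concrete, here is a minimal sketch of the in-flight filter-and-aggregate pattern in plain Python. It illustrates the logic only, not an FPGA offload; the sensor feed, window size and outlier threshold are invented for the example.

```python
import random
import statistics
from collections import deque

def sensor_stream(n=1000):
    """Illustrative stand-in for a continuous IoT, market or web feed."""
    for i in range(n):
        yield {"source": "sensor-1", "seq": i, "value": random.gauss(100, 5)}

def filter_and_aggregate(stream, window=50, threshold=3.0):
    """Keep a rolling window of recent values, discard the 'noise',
    and raise an event when a reading deviates strongly from the recent mean."""
    recent = deque(maxlen=window)
    for reading in stream:
        recent.append(reading["value"])
        if len(recent) < window:
            continue  # not enough history yet to judge what is interesting
        mean = statistics.fmean(recent)
        stdev = statistics.pstdev(recent)
        if stdev and abs(reading["value"] - mean) > threshold * stdev:
            # Only the items of interest are retained and acted on
            yield {"event": "outlier", "seq": reading["seq"],
                   "value": round(reading["value"], 2),
                   "window_mean": round(mean, 2)}

for event in filter_and_aggregate(sensor_stream()):
    print(event)
```

An FPGA-based implementation would apply the same filter-and-aggregate logic directly to the incoming streams at wire speed, rather than in a CPU loop after the data has landed.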


In my last post I spoke about the need to integrate the management of Mode One and Mode Two environments - this is required to keep running existing core workloads AND to enable the delivery organization to make the transition to DevOps practices. Now we have another dimension: I can deliver innovation to my business by enabling rapid software development AND by enabling rapid adoption of specific "hardware assist" technologies that deliver a compelling competitive edge.


In order to solve both of these problem sets we have started to speak about the need to move beyond the Software Defined Data Center to a Programmable Data Center. This new paradigm aims to solve the problem of how you can consume both specialized hardware acceleration (for cutting-edge or scale-up workloads) and commodity infrastructure services (for well-behaved, scale-out, cloud-native apps). When physical infrastructure services can be programmed as easily as virtual services, you can provide a real innovation platform: one that enables you to rapidly adopt these difficult, cutting-edge technologies ahead of your competitors and gain a real market advantage.
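
To illustrate the idea (not any specific product), here is a hypothetical sketch of what it could look like to request commodity and hardware-assisted capacity through the same programmable interface; the ProgrammableDC class, the WorkloadRequest fields and the backend names are all invented for this example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WorkloadRequest:
    """One declarative request covering both commodity and
    hardware-accelerated capacity (all fields are illustrative)."""
    name: str
    cpus: int = 4
    memory_gb: int = 16
    accelerator: Optional[str] = None   # e.g. "fpga", "neural-asic"
    storage_class_memory_gb: int = 0

class ProgrammableDC:
    """Hypothetical control plane: the same call path provisions
    commodity virtual machines and bare-metal nodes with hardware assist."""
    def provision(self, req: WorkloadRequest) -> dict:
        needs_physical = bool(req.accelerator) or req.storage_class_memory_gb > 0
        backend = "bare-metal" if needs_physical else "virtual"
        return {"workload": req.name, "backend": backend,
                "accelerator": req.accelerator}

dc = ProgrammableDC()
print(dc.provision(WorkloadRequest("web-frontend")))                        # commodity scale-out
print(dc.provision(WorkloadRequest("stream-filter", accelerator="fpga")))   # hardware assist
```

The point of the sketch is the single call path: the consumer declares what the workload needs, and the platform decides whether commodity virtual infrastructure or specialized physical infrastructure backs it.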
