
By Gary Chen, IDC Research Manager, Cloud and Virtualization System Software

Sponsored by HDS

 

Over the past decade, virtualization has grown from a lab experiment to the standard way to deploy servers today. While virtualized servers have become the majority, there are still many bare metal servers in operation. The toughest workloads to virtualize are high-performance mission-critical Tier 1 applications, and many customers are just starting to tackle these apps. There have been many advances over the years that make virtualization more ready than ever to take on the toughest applications.

From a pure performance point of view, the hypervisor has seen numerous optimizations and takes advantage of all the latest virtualization acceleration features in today's CPUs, which reduces the overhead of virtualization. Also, the rest of the virtualization software ecosystem has grown and matured tremendously. Core virtualization packages encompass features such as monitoring, management, storage, and networking, while third-party tools have all become virtualization-aware, making managing a complex virtualized application more reliable than ever.
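As a concrete illustration of what those CPU acceleration features look like from the operating system's side, here is a minimal sketch, assuming a Linux host, that checks whether the processor advertises the hardware virtualization flags hypervisors rely on (Intel VT-x appears as "vmx" in /proc/cpuinfo, AMD-V as "svm"):

```python
# Minimal sketch (Linux only): check whether the CPU advertises the hardware
# virtualization features a hypervisor can accelerate with.
def cpu_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
if "vmx" in flags:
    print("Intel VT-x present")
elif "svm" in flags:
    print("AMD-V present")
else:
    print("No hardware virtualization extensions advertised")

# Second-level address translation (Intel EPT shows up as "ept" on many CPUs)
# further reduces memory-virtualization overhead.
if "ept" in flags:
    print("Extended Page Tables (EPT) advertised")
```

When these flags are absent, a hypervisor has to fall back on slower software techniques such as binary translation, which is where much of the historical overhead of virtualization came from.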

Hardware has also been tuned for virtualization as these workloads became prevalent on servers. Besides the virtualization acceleration features found in the silicon, we've also seen core counts scale out and RAM sizes increase, allowing customers to pack more virtual machines (VMs) onto a server and also run very large VMs. Beyond just the server, the rise of converged infrastructure now integrates storage and networking. In addition to the convenience of deployment and easier management for customers, this model also allows for testing, certification, and performance tuning across the multiple hardware components, capabilities that are particularly critical when virtualizing Tier 1 applications.

All these factors have made the virtualization of Tier 1 apps possible today, allowing customers to maintain required levels of availability and performance while reaping the many agility and cost-saving benefits of virtualization. A recently released whitepaper by IDC highlights the virtualization of Tier 1 apps on converged infrastructure, and features a case study of a customer that was able to virtualize SAP applications, a complex Tier 1 application used by many customers. Today, even the newest SAP HANA application, which is an in-memory, high-transaction-rate database, is approved for production use in virtualized environments. SAP HANA would be considered a "torture test" application by any measure, and yet today is able to be virtualized, attesting to how far virtualization technology has advanced — the limits of which continue to be expanded.

For more information, please read the recent IDC White Paper, "Virtualizing Tier 1 Applications on Converged Infrastructure."



By Jim Fister, Intel Corporation


I have a friend…

 

Okay, at this point you're already thinking that I'm kidding with you. All of us Intel automatons don't have lives or friends; we just relentlessly execute to Moore's Law and develop complex PowerPoint decks that highlight the feats of bit-twiddling that make things run incredibly fast. Well, hey, we do have lives, and we do make friends. I'll leave it to your imagination how we find the time.

So anyway, I have this friend.  He’s a guy I hired a couple years back (“AHA!” you say…) as an intern, probably one of the better performance-oriented coders I’ve ever met.  The stuff he does with digital image processing is incredible. He came on full time to work for Intel for another friend of mine, and then he got caught by, “the thing.”

 

Big data called.

 

Today he’s the CTO of his own drone company.  They fly over farm fields taking high-resolution, high-spectrum photos. They then mesh all the images together and do special image processing using a large pile of cloud-based servers to look for water problems, or insect infestations, or the like.  All of this data is stored and digitally farmed over time as the analog farmers fix the crops using his data.
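As a hedged sketch of the kind of image math involved (not the company's actual pipeline), one widely used way to flag crop stress from multi-band aerial photos is the Normalized Difference Vegetation Index, NDVI = (NIR - Red) / (NIR + Red); the band arrays below are hypothetical stand-ins for stitched drone imagery:

```python
# Illustrative only: healthy vegetation reflects strongly in near-infrared,
# so low-NDVI patches are candidates for water or pest problems.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    denom[denom == 0] = 1e-9          # avoid division by zero on empty pixels
    return (nir - red) / denom

# Hypothetical 8-bit band rasters standing in for one stitched field tile
rng = np.random.default_rng(0)
nir_band = rng.integers(0, 256, size=(512, 512))
red_band = rng.integers(0, 256, size=(512, 512))

index = ndvi(nir_band, red_band)
stressed = index < 0.2                # threshold is illustrative; tune per crop
print(f"{stressed.mean():.1%} of pixels flagged for inspection")
```

Run that across thousands of tiles and you can see why the processing lands on a large pile of cloud servers rather than on the drone itself.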

So if you’re following along, a former Intel guy is making little robot airplanes, and there are massively parallel systems somewhere in the ether making crop growing more efficient. Life is glorious.  Big data is glorious.  Heck yea.

 

If you’re less prone to molding your own Kevlar airplanes and programming flight systems to match camera shutter speeds, then Big Data can still be for you. The biggest enterprises across the world still benefit from data analytics, and it's possible to do your own digital farming on your own fertile fields. Let’s face it, if anything is growing faster than Moore’s law, it’s digital data. And as all that data comes in, there’s a strong need to use it before it gets stale.

 

And, well… that’s a problem too.  Those crops the farmers are pulling likely took half a year to grow and have weeks of usable harvest time before they turn into so much primeval goo.  Not so with your enterprise data.  You’re getting it fast every minute of the day in the global economy, and about half of it probably isn’t worth a whole lot after a few hours, if not a few seconds.  Decisions have to be made on harvesting your best stuff before a human really has time to add anything positive to the equation.  So I guess even Intel processors need some friends, too.

 

Take SAP, who decided years ago that database architectures fundamentally had to change to keep up with the demands of modern data analytics. That was about the same time that Intel was starting to conceptualize a new high-end system architecture. So after a little bit of crop rotation, we jointly harvested some pretty cool stuff. SAP created HANA, a database architecture that runs totally in memory using scores of parallel processing threads. And Intel pulled up a nice crop of Xeon® E7 v2 processors that provided a significant number of processing threads and memory where the data could rest.
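To make the in-memory, column-oriented idea concrete, here is a toy sketch, in plain Python rather than anything HANA-specific, contrasting a row-at-a-time scan with a columnar scan over data that is already resident in memory; the dataset and query are invented for illustration:

```python
# Toy illustration (not SAP HANA code): a columnar scan touches one contiguous
# in-memory array per attribute, which vectorizes well, while a row-at-a-time
# loop pays per-record overhead on every pass.
import time
import numpy as np

n = 1_000_000
# Column-oriented layout: one array per attribute, all resident in memory
amounts = np.random.default_rng(1).uniform(1, 500, size=n)
regions = np.random.default_rng(2).integers(0, 4, size=n)

# Row-oriented equivalent: a list of per-record tuples
rows = list(zip(regions.tolist(), amounts.tolist()))

t0 = time.perf_counter()
total_row = sum(amount for region, amount in rows if region == 2)
t1 = time.perf_counter()
total_col = amounts[regions == 2].sum()
t2 = time.perf_counter()

print(f"row-at-a-time: {total_row:,.0f} in {t1 - t0:.4f}s")
print(f"columnar scan: {total_col:,.0f} in {t2 - t1:.4f}s")
```

The gap widens as the data grows and as more cores are thrown at the scan, which is exactly the design point an in-memory database on a many-core, large-memory server is built around.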

 

Even that wasn’t enough. We needed a solid friend like Hitachi to produce the Unified Compute Platform (UCP), a massively parallel, robust, reliable, and manageable system. The system has significant features like embedded logical partitioning (LPAR) that can simplify the delineations between production and test. It utilizes the latest, hot-off-the-press Xeon E7 v3 processor in a symmetric multi-processing (SMP) configuration along with the SAP engine to give any enterprise its own equivalent to a flying drone with a digital window into fields of data. Who knew that a big server could be so nimble in the air, at least in the abstract?

 

Enterprise is glorious. Big data is still glorious.

 

Heck yea.

 

The point is that traditional business processing doesn’t work anymore. Sure, you can run your business like you always do, but the pace of innovation is just way too fast these days. When you have seconds to make a decision, a traditional database is minutes away from the answer. With SAP HANA and the SAP S/4HANA Business Suite, a Hitachi UCP system can change the way that you fundamentally do business. If your crops are the first ones to market, you get the advantage over everyone else. A lot of the algorithms being used are fine for the business; it’s the pace that needs to change. Where tradition meets innovation, that’s the transition that easily moves a company into the era of Big Data Analytics.

 

Intel and SAP will keep providing the base tools for innovation, and Hitachi is there to build the vehicle for getting your decisions to the market first.  With friends in the field like that, I think you’ll be pretty happy to plant that next round of data where it can grow and thrive.

 

Jim Fister grew up playing in the dirt in Ohio to the point where his mother despaired of ever keeping him clean.  He spends his time these days kicking up the clods around Big Data Analytics and the Internet of Things, unless he’s somewhere in the mountains of Oregon getting fresh air.