In the mid-’90s I used to hang out at Bennett’s Lane, a popular jazz club tucked away in one of Melbourne’s quaint laneways, listening to the cream of the city’s jazz scene. The club has sadly since closed, but what used to intrigue me the most was how seemingly uncoordinated musicians playing eclectic instruments produced such wonderfully orchestrated, dulcet tones. Mastering jazz is one of the toughest feats for a musician: despite its improvisation, it’s actually based on principles of following common chord tones and tensions.
So what does this have to do with IT? I’ll get to that…
Fast forward to 2015: the noise around containers has reached fever pitch over the past six months. As companies like Docker and CoreOS battle to sell their competing visions to the technology faithful, infrastructure vendors are scurrying to demonstrate that they are hanging out with the newest cool kids on the block. Along with all this hype there has been an equal amount of confusion in comparing and contrasting containers with virtualization. Depending on which side of IT you come from, separating the two can be difficult, but I found an article that does a good job of it.
Religious wars are endemic to enterprise tech, and some reach legendary status (who can forget the browser wars of the ’90s?). But in the real world of business, the CIOs I speak to generally care about two things: how do we optimize our existing IT environment? And at the same time, how do we ensure we are innovating to meet customers’ ever-increasing expectations?
The first inevitably deals with managing cost by modernizing infrastructure and automating processes, whilst still balancing organizational compliance and risk. The second is very different: it’s all about responding to rapidly changing market pressures, new competitors and escalating user demand, so the aim of the game is assembling services quickly from a variety of places.
In amongst all of this, the other force CIOs are contending with is how best to leverage SMAC (Social, Mobility, Analytics, Cloud). These nexus forces offer the opportunity not only to better align cost to revenue, but to rewire companies and give them a competitive edge in disruptive business conditions. To truly capitalize on this, IT needs to elevate its value within the business, which is not such an easy thing to do.
Gartner coined the term Bimodal IT to describe the modern organizational model of IT with two discrete sets of operating principles. Somehow, many people have inferred that the first mode deals exclusively with enterprise applications whilst the second deals only with new-generation cloud-native applications. Furthermore, when it comes to building infrastructure for them, many vendors have taken the approach that the two should be siloed. I question this and ask: why?
This great blog post from Simon Wardley draws an interesting analogy between this movement and pioneers, settlers and town planners. I find myself agreeing with him when he describes “work taken from the pioneers and turned into mature products before the town planners can turn this into industrialised commodities or utility services. Without this middle component then yes you cover the two extremes (e.g. agile vs six sigma) but new things built never progress or evolve.”
With the attributes of Mode 1 and Mode 2 understood, why must they be applied mutually exclusively? Why can’t enterprise applications introduce agile development practices to augment the reliable quality-assurance processes of building software? When it comes to governance, why can’t we reduce the component-level iterations of continuous-delivery practices by incorporating the better understanding of holistic requirements offered by the waterfall model? And given the interdependency and complexity involved in delivering the ideal user experience, shouldn’t we harmonize the teams responsible for delivering IT by cross-skilling both sides? Surely the aim should be to build Cross Modal IT?
As a strategy, building discrete information infrastructure and service-delivery capability for these two modes of IT, with diverging tools, processes and skills, won’t achieve the CIO’s imperative of optimization; more than that, it’s just plain dumb. Not only would doing so throw the data center back to the spaghetti central of yesteryear, it is actually counterproductive to the goal of gaining agility across all facets of IT. Traditional applications that are still critical to business functions need to innovate beyond simple re-platforming onto new infrastructure and evolve to leverage new services. Likewise, cloud-native applications need to do more than spin up features quickly; they need to be reliable and trustworthy.
So what are some examples of how we can leverage the best of both worlds to create a better-optimized yet innovative IT practice?
For enterprise applications, running test/dev in, or bursting into, an off-premises, consumption-based public cloud can fast-track deployment as well as significantly reduce costs by better aligning spend (opex versus capex) to demand.
This is why HDS introduced cloud-tiering capabilities into HNAS and HCP, and integrated UCP with VMware’s vCloud last year, giving organizations the ability to extend public cloud services to traditional enterprise applications for things like disaster recovery, backup and archiving. This not only modernizes the environment, but can enable new things like analytics to be performed more easily.
Likewise, the ability to run cloud-native applications with full portability within the safe confines of a private cloud, or where pay-per-use cloud pricing can be quite prohibitive, is another big benefit. Here in Asia Pacific, tight regulation of industries like banking, demarcated political and economic zones, and bandwidth restrictions have made it increasingly clear that there is no one universal cloud across the region. Furthermore, after years of sustained price cuts from public cloud providers, last month we saw the first round of price hikes, prompting organizations to re-evaluate their costs and risks. Organizations don’t just want choice, they need it, which is why so many CIOs are opting for hybrid-cloud strategies.
Last month HDS announced support for Google’s Kubernetes on the Unified Compute Platform, giving developers not only the ability to mobilize their applications and microservices on their cloud of choice, but also a single toolset to abstract it. Coupled with tools like UCP Director, which offers the same visibility, orchestration and automation benefits for the underlying infrastructure, developers can use the same APIs to consume it as a service without changing code. More than this, they can extend many of these benefits to enterprise applications too.
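To make the portability point concrete: a Kubernetes workload is described in a declarative manifest that runs unchanged whether the cluster sits in a public cloud or on private infrastructure. A minimal sketch of such a manifest follows; the names, labels and image are illustrative, not anything specific to UCP.

```yaml
# Illustrative Kubernetes pod definition. The same file can be submitted
# to any conformant Kubernetes cluster, on-premises or in a public cloud.
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend        # hypothetical workload name
  labels:
    app: web
spec:
  containers:
  - name: web
    image: nginx:1.9        # any registry-hosted container image
    ports:
    - containerPort: 80
```

Submitted with the standard tooling (e.g. `kubectl create -f pod.yaml`), the scheduler places the container on whatever cluster the developer is pointed at; moving clouds means repointing the toolset, not rewriting the application.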
Although the integration of infrastructure and tooling doesn’t address everything (fusing agile and traditional service-delivery practices is arguably the toughest nut to crack), HDS’ vision is to help organizations move toward Cross Modal IT. By making sure those chord tones and tensions in the IT foundation work harmoniously together from the outset, the equally important goals of optimization and innovation can be accomplished in concert.
Can anyone recommend a good jazz club?