Over the course of 23 years (ish) in IT leadership, I have had the privilege of leading teams of highly skilled technologists in the art and science of designing, building and operating dozens of large-scale IT systems. Not all of them have been verifiable successes (just a few), and not all of them complete failures (fewer than a few), but a healthy scoop of all potential academic grades. And of course over time, with evolving skillsets, different morale-enhancing techniques, and enhanced technological offerings, you tend to get better. It’s the age-old trope of experience and expertise gaining value over time.
Throughout all those experiences, one phrase emerged as a design standard: “the most effective solution is not the most elaborate”. It’s true for all aspects of the IT program:
- The most effective design is not the most elaborate
- The most effective code is not the most elaborate
- The most effective workflow is not the most elaborate
- The most effective user experience is not the most elaborate
- The most effective security platform is not the most elaborate
That’s not to say that all complex problems can be solved with “a couple of business rules” or a “free download”. Nor is it intended to suggest that the business problem doesn’t need 23 interoperating applications across a complex IT platform consisting of hybrid cloud deployments (it may in fact require exactly that).
The intention of the design philosophy is simply to appreciate what it means to use, operate and change a large-scale IT system over time. Consider the next person in line who needs to deal with the practical aspects of the system and make it better as business needs change. For example:
- The next programmer who needs to add logic for a new product being sold should be able to readily grasp the coding structure and underlying logic platform without obtaining academic credentials
- The new business user being hired should be able to navigate the order processing system without a week’s worth of training or over-the-shoulder tutoring from their supervisor
- Adding new potential threat detection to the security platform should be a two-click event, not a wholesale redesign of the virtual demilitarized zones
- Shifting workloads to a new data centre, or to the cloud should be as simple as invoking an API, not a multi-stage "wave" of P2V moves
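To make that last point concrete, here is a minimal sketch of what "migration as a single API call" might look like. The endpoint, payload fields and workload names are all illustrative assumptions for this post, not any real provider's API:

```python
import json

# Hypothetical sketch: a workload move expressed as one declarative request.
# The endpoint URL and every field below are assumptions for illustration.
MIGRATION_ENDPOINT = "https://cloud.example.com/v1/workloads/migrate"

def build_migration_request(workload_id: str, target: str) -> dict:
    """Assemble a single declarative migration request for one workload."""
    return {
        "workload_id": workload_id,
        "target_location": target,        # e.g. "eu-west-dc2" or "public-cloud"
        "mode": "live",                   # move without downtime
        "preserve_network_policy": True,  # carry security rules along, not flaws
    }

if __name__ == "__main__":
    request = build_migration_request("order-processing-01", "public-cloud")
    print(json.dumps(request, indent=2))
    # One POST of this payload replaces a multi-stage P2V "wave":
    # requests.post(MIGRATION_ENDPOINT, json=request)
```

The point of the sketch is the shape, not the vendor: the entire move is one self-describing request, rather than a project plan of physical-to-virtual conversion stages.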
The irony, of course, is that creating SIMPLICITY from an overly complex problem is actually quite an intellectual task, taking far more design skill than simply “lifting and shifting” equipment to a new data centre with the same security flaws, or recompiling outdated code to run on a different operating system.
Let’s take this to the next level, with IT infrastructure as the example.
Start with new business goals and operating practices, especially as organizations implement various Digital Transformation initiatives:
- The business needs to invest in and divest of products, services and lines of business quickly, as it transforms digitally and explores new and groundbreaking business models
- The CFO wants all expenses, including IT, to be a direct result of how each operating unit earns revenue
- The rate of change in the organization increases exponentially. That will include new client experiences, new data collected and new data products produced, and jumping headfirst into the physical world, augmenting current digital offerings with IoT integration and interaction
In turn, IT infrastructure design dramatically shifts:
- The complexity of scale-up models, which assume ONLY growth in workloads/applications/transactions, will need to be redesigned into the simplicity of scale-out models, which assume workloads will be created AND destroyed (scaled up and down) constantly
- The complexity of assembled IT systems (storage, compute, networking and security), which assume individual discipline-based teams create and control standardized infrastructure, will be redesigned into the simplicity of converged IT systems, which assume workload/application centricity in infrastructure management activities
- The complexity of CAPEX-only acquisition models, which force singular financial mechanisms of depreciation and amortization, usually with high upfront cash outlays, will be replaced with the simplicity of diverse consumption models, which assume the economic principles of the individual workloads should determine: 1) what you buy, 2) where you install it, 3) who manages it, and 4) how you pay for it
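The scale-out assumption in the first point can be sketched in a few lines: capacity follows demand in both directions, so instances are destroyed as readily as they are created. The per-instance capacity figure and thresholds below are assumptions chosen for the example, not recommendations:

```python
import math

# Illustrative scale-out sizing sketch. REQUESTS_PER_INSTANCE is an
# assumed capacity figure for this example, not a benchmark.
REQUESTS_PER_INSTANCE = 500
MIN_INSTANCES = 1  # never scale to zero in this sketch

def desired_instances(current_load: int) -> int:
    """Size the fleet to current load: instances come AND go as demand moves."""
    return max(MIN_INSTANCES, math.ceil(current_load / REQUESTS_PER_INSTANCE))

if __name__ == "__main__":
    # Morning ramp-up, midday peak, evening ramp-down:
    for load in (200, 2600, 400):
        print(load, "requests/s ->", desired_instances(load), "instances")
```

A scale-up design would only ever grow that number; the simplicity here is that one small function expresses both directions, and the infrastructure underneath is expected to honour it.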
An exceptional example of simplicity of design comes from VMware, a preeminent partner of Hitachi, which released VMware vSAN 6.6 this week: http://www.vmware.com/company/news/releases/vmw-newsfeed.VMware-Accelerates-Customers-Data-Center-Modernization-Efforts-with-New-Release-of-Industry-Leading-vSAN.2155779.html. Hyperconverged infrastructure (HCI) platforms, including the VMware solution and software-defined infrastructure platforms such as Hitachi’s, enable customers to simplify IT operations and increase performance while lowering their upfront and ongoing costs compared to traditional storage. These platforms provide simplicity of scale-out (scale up/down with ReadyNodes), simplicity of converged IT systems (native security and software-defined infrastructure), and simplicity of diverse consumption models (hybrid clouds and lower TCO).
In all fairness, there is a beauty in IT complexity. It ensures long-tail IT employment. It ensures life-long IT dependency on original designers. It obscures the true costs of operating IT infrastructure.
I’m less sure that’s the actual goal of the CEO however.