Gana Chandrasekaran: The Bottleneck of VDI – Scaling Systems Without a Forklift Upgrade
For years, organizations deploying VDI for the first time have found that the most common issue is inadequate performance from shared storage. An organization looking to deploy VDI today can choose among all-flash arrays, deduplication offerings, hyper-converged (scale-out) systems, and even local storage aggregation products, all of which can not only increase VDI performance but also lower cost. The per-desktop price point is endlessly debated, but the real cost ultimately depends on the application workloads.
Customers, especially in the banking sector, are looking to run hundreds or thousands of virtual desktops. Choosing the right storage system up front spares you worries from deployment through the end of the life cycle. There are two architectural options to decide between: scale-up and scale-out. The problem with scale-up systems is that they grow only vertically: you add resources such as processors and memory to a single system, and you remain limited by that system's storage capacity and performance ceiling. Scale-out systems, by contrast, are designed to grow by adding nodes, including additional racks of storage and storage array expansions, and are well suited to mission-critical applications.
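The distinction above can be made concrete with a quick sketch. This is an illustrative model, not vendor data: the per-node and ceiling IOPS figures below are assumptions chosen only to show the shape of the two growth curves.

```python
# Illustrative sketch: scale-up hits a controller ceiling, scale-out grows linearly.
# All numbers are assumptions for illustration, not measured or vendor-specified values.

SCALE_UP_MAX_IOPS = 100_000      # assumed ceiling of a single controller pair
BASE_IOPS = 20_000               # assumed IOPS of the starting configuration

def scale_up_iops(upgrades: int) -> int:
    """Adding CPU/RAM to one system helps until the controller ceiling is hit."""
    return min(BASE_IOPS * (1 + upgrades), SCALE_UP_MAX_IOPS)

def scale_out_iops(nodes: int, per_node_iops: int = BASE_IOPS) -> int:
    """Each added node brings its own controllers, cache, and disks."""
    return nodes * per_node_iops

for step in range(1, 9):
    print(f"step {step}: scale-up {scale_up_iops(step):>7,} IOPS, "
          f"scale-out {scale_out_iops(step):>7,} IOPS")
```

After the fourth upgrade the scale-up curve flattens at the assumed ceiling, while the scale-out curve keeps climbing with each added node.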
Let me give you two examples:
Customer A wants to deploy a VDI solution with 1,000+ virtual desktop users. This is not a small VDI deployment, and scale-up systems may struggle here because the architecture cannot scale performance beyond a point with respect to data capacity and processing cycles. VDI is built around a gold (master) image, which every desktop reads from during a boot storm. There are two important factors to consider in any VDI installation: protection of that gold image and performance reliability when users log in. The gold image must be protected because if it is ever corrupted, the entire VDI solution is rendered useless. The second factor is the number of users: depending on the timetable of when users log in and log out, there is a performance burst the VDI solution must absorb. With 1,000 users logging into the system at the same time, performance can degrade badly depending on the architecture of the solution, and end users care about nothing so much as response and performance.
What happens when 1,000 persistent virtual desktops, all hosted on the same physical scale-up system, are told to download and install a 100 MB patch? Disk contention is a certainty: one thousand instances of a software delivery agent spring to life, churning against the vCPUs assigned to the virtual desktops. The patch is retrieved. The patch is expanded. The patch is installed. Post-installation cleanup occurs. The performance problem no longer stops at the disk; it very much exists everywhere in the system.
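A rough calculation shows why the patch storm hurts. The write-amplification factor here is an assumption (the patch payload touches disk once on download, again on expansion, again on installation); the point is the order of magnitude, not the exact figure:

```python
# Hedged sketch: aggregate data movement when 1,000 desktops each install a 100 MB patch.
# WRITE_AMPLIFICATION is an assumption: download + expand + install each write the payload.

PATCH_MB = 100
DESKTOPS = 1000
WRITE_AMPLIFICATION = 3

def total_writes_gb(desktops: int = DESKTOPS) -> float:
    """Total data written across all desktops, in GB."""
    return desktops * PATCH_MB * WRITE_AMPLIFICATION / 1024

def required_throughput_mb_s(window_minutes: int, desktops: int = DESKTOPS) -> float:
    """Sustained write throughput needed to finish inside the patch window."""
    total_mb = desktops * PATCH_MB * WRITE_AMPLIFICATION
    return total_mb / (window_minutes * 60)

print(f"{total_writes_gb():.0f} GB of writes")                     # ~293 GB
print(f"{required_throughput_mb_s(30):.0f} MB/s for a 30-min window")  # ~167 MB/s
```

Under these assumptions, a single patch cycle pushes roughly 300 GB of writes through the shared storage; squeezed into a 30-minute window, that is sustained random-write pressure a single scale-up array must absorb on top of normal desktop activity.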
Why an HDS Converged System (UCP)?
Scalability. Depending on the design, the UCP 4000e or the UCP 4000 solution can scale as the customer's workload grows. We no longer need to build siloed application infrastructure islands; we can build end-to-end multi-tenant architectures that share application resources depending on how we architect the infrastructure: essentially, a pay-as-you-grow model. With the UCP 4000e, the solution scales only to a point (16 CB 500 servers and/or 2 CB 500 chassis), and storage can scale to either the HUS 130 or the HUS VM. With the UCP 4000, we can scale without any forklift upgrades.