
Cloud Migrations & Pricing Models

By Joe Horstmann posted 08-30-2019 16:09

  
Migrating workloads to public cloud platforms, or planning the deployment of new workloads, requires not only technical engineering but also financial modeling to fully realize the inherent benefits of cloud computing. When evaluating an existing environment I'm often looking at re-hosting or re-platforming applications; leveraging re-installation with cloud APIs and infrastructure as code to further automate deployments is also very important.
 
Unlike the public cloud providers, where we don't focus as much on the underlying infrastructure, our on-premise VM environments are typically built with additional capacity for growth and scale over some period of time. Having a known capacity with sunk costs encourages consumption of the resources already purchased, and in some cases leads to over-provisioned VM guests. The capacity is available and you own it, so what's the problem? Well... when you shift to a public cloud you pay for resources consumed at the instance or VM guest level. You don't want instances to be larger than needed, or to be running when not in use. If you simply re-host and migrate the existing workload as-is, or continue to run applications in the same operational model as your on-premise environments, you may end up paying more to run the same workload.
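To make that point concrete, here is a rough back-of-the-envelope sketch in Python. The hourly rates are placeholder numbers chosen purely for illustration, not actual AWS prices; the point is that both instance size and hours of operation drive the monthly bill.

```python
# Back-of-the-envelope comparison: lift-and-shift as-is vs. right-sized and scheduled.
# The hourly rates below are illustrative placeholders only; pull current prices
# from the provider's pricing pages or API before making real decisions.
HOURS_PER_MONTH = 730

oversized_rate = 0.384   # e.g. a 2xlarge-class Linux instance (illustrative rate)
rightsized_rate = 0.192  # e.g. the xlarge-class size one step down (illustrative rate)

# As-is migration: the larger instance running 24x7.
as_is = oversized_rate * HOURS_PER_MONTH

# Right-sized instance, only running 12 hours a day on ~22 business days.
optimized = rightsized_rate * (12 * 22)

print(f"As-is monthly cost:      ${as_is:,.2f}")
print(f"Right-sized + scheduled: ${optimized:,.2f}")
```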
 
There are a number of commercial tools and services, like Cloudamize, CloudCheckr, CloudScape, and Hitachi's own Insight Discovery Analytics, that can be used to assess environments, map inter-dependencies, measure resource usage, and make recommendations for scaling up or down and for instance type assignments. Yet inevitably, before committing to more elaborate tooling, everyone asks the same question... How much will it cost to run my current environment?

There is nothing wrong with the question, and most of the time we just want to know if we are in the ballpark. We just need to keep in mind it's a starting point, not a destination. Having experience with one or more providers helps. In my case I started with AWS. For quite some time AWS has made available a Simple Monthly Calculator that can be used to estimate service pricing. This is a good place to start; however, in some cases there is a need for something more scalable that can process hundreds, and in some cases thousands, of VMs of different shapes and sizes very quickly to get that ballpark estimate. To help answer this question I began researching different ways to solve the problem.
 
Using Python and the updated AWS Pricing API seemed reasonable enough. The AWS Pricing API enables fine-grained price queries using the AWS SDKs... Perfect, just what I need, but how do I use it? After a few days of reading, more coffee, and coding, around this time last year I ended up writing a 'Simple' Python script to get the job done, and I recently updated it this year to improve performance. The instructions for setup, use, and future changes are located on GitHub using this link: GitHub mutineer612/aws-ec2-pricing
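To give a feel for what the Pricing API looks like from Python, here is a minimal sketch using boto3. It is not the script from the repository, just an illustration of the kind of get_products query it builds on; the instance type, location, and filter values are assumptions you would adapt to your own environment.

```python
# Minimal sketch (not the repository script): look up an On-Demand hourly rate
# for a single EC2 instance type via the AWS Pricing API.
import json
import boto3

# The Pricing API is only served from a small set of regions; us-east-1 is one.
pricing = boto3.client("pricing", region_name="us-east-1")

def on_demand_hourly_rate(instance_type, location="US East (N. Virginia)"):
    """Return the Linux, shared-tenancy On-Demand USD hourly rate for one instance type."""
    response = pricing.get_products(
        ServiceCode="AmazonEC2",
        Filters=[
            {"Type": "TERM_MATCH", "Field": "instanceType", "Value": instance_type},
            {"Type": "TERM_MATCH", "Field": "location", "Value": location},
            {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
            {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
            {"Type": "TERM_MATCH", "Field": "preInstalledSw", "Value": "NA"},
            {"Type": "TERM_MATCH", "Field": "capacitystatus", "Value": "Used"},
        ],
        MaxResults=1,
    )
    # Each PriceList entry is a JSON document describing the product and its terms.
    product = json.loads(response["PriceList"][0])
    on_demand_terms = product["terms"]["OnDemand"]
    price_dimension = next(iter(next(iter(on_demand_terms.values()))["priceDimensions"].values()))
    return float(price_dimension["pricePerUnit"]["USD"])

print(f"m5.xlarge: ${on_demand_hourly_rate('m5.xlarge'):.4f}/hr")
```

Multiplying the returned hourly rate by expected run hours for each VM in an inventory is essentially the ballpark estimate described above, repeated at scale.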
 
This repository and script are intended to provide an introduction and a jump start for working with the AWS Pricing API, and to complement existing tools you may be using. When working with more complex environments or other cloud providers, commercial tooling and services can be a great combination.
 
A few things to keep in mind as you explore public cloud pricing. There is no single tool or process that will meet every requirement. Understanding the individual workloads running on VM guests and physical hosts in your on-premise environment is key to making good decisions about how to re-deploy. Mapping application inter-dependencies is essential for migrations and for establishing potential move groups. It's OK to ask for help and engage partners like Hitachi to be successful!

#Blog
#Multi-CloudAcceleration
#ThoughtLeadership