Workload Optimization: Is It a Must-have for Cloud Computing?

Cloud computing hasn’t changed the nature of computing – it has only changed how resources are provisioned and managed. That’s important to remember, because workloads in the cloud are very similar to those in traditional computing infrastructures. To get the most out of your investment in cloud services, or in your own physical IT infrastructure, you need to understand how to optimize workloads.

Workload Categorization


A typical computing workload involves four basic resources: computation, memory, networking, and storage. Almost all applications use all four, but rarely in a balanced way.

Now let’s quickly review the essential categories of application workloads:

  • CPU-intensive workloads. These applications include scientific computation with heavy number crunching, encryption and decryption, compression and decompression, and so forth;
  • Memory-intensive workloads. These applications include in-memory caching servers, in-memory database servers, and so forth;
  • Network-intensive workloads. These are typically Web servers, network load balancers, and so forth;
  • Storage-intensive workloads. These applications typically involve file serving, data mining, and so forth.
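
To make these categories concrete, here is a minimal Python sketch of tagging a workload by its dominant resource demand. The WorkloadProfile fields, the 0.0–1.0 normalization, and the sample numbers are my own assumptions for illustration, not output from any particular monitoring tool.

    from dataclasses import dataclass

    @dataclass
    class WorkloadProfile:
        """Hypothetical resource profile; each value is utilization normalized to 0.0-1.0."""
        cpu: float
        memory: float
        network: float
        storage_io: float

    def dominant_resource(p: WorkloadProfile) -> str:
        """Return the resource dimension this workload stresses the most."""
        demands = {
            "cpu": p.cpu,
            "memory": p.memory,
            "network": p.network,
            "storage": p.storage_io,
        }
        return max(demands, key=demands.get)

    # An in-memory cache is memory-bound; a file server is storage-bound.
    cache = WorkloadProfile(cpu=0.2, memory=0.9, network=0.3, storage_io=0.1)
    print(dominant_resource(cache))  # -> memory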

What is the problem?

Although cloud computing is supposed to provide unlimited capacity, each individual workload still has to run on a single server or a cluster of machines, most of which are virtualized.

With the unbalanced nature of individual workloads in mind, the last thing you want is the same category of workloads running on the same set of physical servers, which have limited resources. For example, you don’t want all of your file server virtual machines on one physical server, competing for storage I/O while leaving CPU cycles largely idle. That creates resource contention on one hand and resource waste on the other.
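
A quick back-of-the-envelope calculation with made-up numbers shows how lopsided that gets (host capacity normalized to 1.0 per dimension):

    # Hypothetical per-VM demand for a file-server VM, as a fraction of one host's capacity.
    file_server_vm = {"cpu": 0.10, "storage_io": 0.60}

    # Stack four identical file-server VMs on the same host:
    stacked = {dim: 4 * value for dim, value in file_server_vm.items()}
    print(stacked)  # {'cpu': 0.4, 'storage_io': 2.4} -- storage I/O oversubscribed, CPU mostly idle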

Another important aspect is timing. Workloads with identical resource patterns can still share hardware efficiently if their peaks are spread out over time, because the combined load stays balanced.
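
As a tiny illustration with hypothetical numbers, two batch jobs with the same demand curve but offset peaks never add up to more than one host’s capacity:

    # Hourly CPU demand (fraction of one host) for two hypothetical batch jobs
    # with identical shapes, offset by twelve hours.
    job_a = [0.8 if 0 <= h < 6 else 0.1 for h in range(24)]    # peaks overnight
    job_b = [0.8 if 12 <= h < 18 else 0.1 for h in range(24)]  # peaks midday

    combined = [a + b for a, b in zip(job_a, job_b)]
    print(max(combined))  # 0.9 -- the peaks never coincide, so one host can carry both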

So, what is the solution?

If you already have an infrastructure in place, you can mix the different types of workloads together. In this way you get balanced utilization of the physical resources: CPU, memory, networking, and storage. More importantly, you can host more workloads on the same investment in servers and system software.
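
As a rough sketch of what “mixing” could look like in practice, here is a simple worst-fit placement heuristic in Python. The Host class, the normalized capacity of 1.0 per dimension, and the host names are assumptions made up for illustration; a production scheduler is of course far more sophisticated.

    from typing import Dict, List

    class Host:
        """Hypothetical physical host; capacity is normalized to 1.0 per dimension."""
        DIMENSIONS = ("cpu", "memory", "network", "storage")

        def __init__(self, name: str):
            self.name = name
            self.used: Dict[str, float] = {d: 0.0 for d in self.DIMENSIONS}
            self.vms: List[str] = []

        def fits(self, demand: Dict[str, float]) -> bool:
            return all(self.used[d] + demand.get(d, 0.0) <= 1.0 for d in self.DIMENSIONS)

        def place(self, vm: str, demand: Dict[str, float]) -> None:
            for d in self.DIMENSIONS:
                self.used[d] += demand.get(d, 0.0)
            self.vms.append(vm)

    def place_vm(hosts: List[Host], vm: str, demand: Dict[str, float]) -> Host:
        """Put the VM on the host whose hottest dimension stays lowest after placement."""
        candidates = [h for h in hosts if h.fits(demand)]
        if not candidates:
            raise RuntimeError("no host has spare capacity for this workload")
        best = min(candidates,
                   key=lambda h: max(h.used[d] + demand.get(d, 0.0) for d in Host.DIMENSIONS))
        best.place(vm, demand)
        return best

    # Example: two hosts, one already loaded with a storage-heavy VM.
    hosts = [Host("esx-01"), Host("esx-02")]
    hosts[0].place("filer-1", {"storage": 0.6, "cpu": 0.1})
    chosen = place_vm(hosts, "filer-2", {"storage": 0.6, "cpu": 0.1})
    print(chosen.name)  # esx-02 -- the second storage-heavy VM avoids the loaded host

The worst-fit choice deliberately keeps every host’s hottest dimension low, so CPU-bound, memory-bound, network-bound, and storage-bound VMs naturally end up side by side.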

This is not a big deal for cloud users, who still get the resources promised by typical cloud service level agreements (SLAs). For service providers it is a big deal: a higher ratio of applications to physical investment directly affects the margins of the business. Balanced workload placement can therefore give a provider a competitive edge over other service providers.

This is much easier said than done; implementation is everything. You have to collect enough information on the workload patterns of all the applications, and then calculate the best distribution of those applications across your hosts. Based on what you find, you can re-allocate existing applications using live migration technologies such as vMotion, Storage vMotion, and so on.
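
Here is a sketch of just the decision step, using a hypothetical data model in which monitoring has already produced per-host and per-VM utilization numbers; the actual moves would be live migrations issued through vMotion or Storage vMotion, which this sketch does not perform.

    from typing import Dict, Optional, Tuple

    Utilization = Dict[str, float]  # per-dimension utilization, 0.0-1.0 of host capacity

    def pick_migration(hosts: Dict[str, Utilization],
                       vms: Dict[str, Tuple[str, Utilization]]) -> Optional[Tuple[str, str, str]]:
        """Propose one live migration (vm, source_host, target_host) to relieve the
        most contended host, or None if nothing can be moved.
        hosts: host name -> measured utilization per dimension
        vms:   vm name -> (current host, that VM's contribution per dimension)"""
        # Source: the host whose hottest dimension is highest in the cluster.
        src = max(hosts, key=lambda h: max(hosts[h].values()))
        hot = max(hosts[src], key=hosts[src].get)

        # Candidate VM: the one on the source contributing most to the hot dimension.
        on_src = {v: load for v, (h, load) in vms.items() if h == src}
        if not on_src:
            return None
        vm = max(on_src, key=lambda v: on_src[v].get(hot, 0.0))
        demand = on_src[vm]

        # Target: the host with the most headroom in the hot dimension that still fits the VM.
        def fits(h: str) -> bool:
            return all(hosts[h].get(d, 0.0) + demand.get(d, 0.0) <= 1.0 for d in demand)

        candidates = [h for h in hosts if h != src and fits(h)]
        if not candidates:
            return None
        dst = min(candidates, key=lambda h: hosts[h].get(hot, 0.0))
        return vm, src, dst

    # Example with two hosts: esx-01 is saturated on storage I/O, esx-02 is mostly idle.
    hosts = {
        "esx-01": {"cpu": 0.3, "storage": 0.95},
        "esx-02": {"cpu": 0.2, "storage": 0.10},
    }
    vms = {
        "filer-1": ("esx-01", {"cpu": 0.1, "storage": 0.45}),
        "filer-2": ("esx-01", {"cpu": 0.1, "storage": 0.45}),
        "web-1":   ("esx-02", {"cpu": 0.2, "storage": 0.10}),
    }
    print(pick_migration(hosts, vms))  # -> ('filer-1', 'esx-01', 'esx-02')

Running such a pass repeatedly, until it stops proposing useful moves, gives a crude approximation of the re-balancing described above.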

The algorithm described above is a simple one, and it does not take into account other factors such as workload distribution over time, multi-tenant isolation, security and compliance, application criticality, or system backup. To make workload optimization work well in real life, you have to think through all of these factors as well.

With the workload optimization system in place, you can then consult it for every new provisioning request. And when the workloads are no longer balanced, you can recalculate and redistribute them for the best utilization. Ta da!

What’s Next?

Now, what if you haven’t set up your infrastructure yet? No problem – in fact, you’ll have even more flexibility. I’ll show you how to do this successfully in my next blog post.



