Archive

Archive for the ‘Cloud Computing’ Category

InterCloud vs. Internet: What’s Missing in Cloud Computing?

August 18th, 2010 3 comments

As more and more clouds go live, it's time to think about how they will need to interconnect and interact. “InterCloud” is a new term coined for cloud computing, by analogy with “Internet” for networking.

Vint Cerf, the “father” of the Internet, said recently that the cloud is much like networking in 1973 when computer networks couldn’t connect or interact. He called for open standards for cloud computing so that InterCloud can become a reality.

It’s hard to design standards when people are still trying to reach a consensus on defining what a cloud is in the first place! The good news is that as an industry we went through a similar process for the Internet. So we can learn from that experience.

The idea is simple: look at the basic building blocks we have for the Internet and think about their equivalents for the InterCloud. Believe it or not, the InterCloud and the Internet share many common characteristics. The following table summarizes some of these.

Vertically Complete Systems: Next Big Trend?

August 9th, 2010 2 comments

IBM recently announced a reorganization of its software and hardware business units. The previously separate units were merged into one – the Systems and Software Group, led by former software chief Steve Mills.

You may recall that IBM did not have a dedicated software group until Lou Gerstner created one 15 years ago to centralize all the software businesses into one business unit. This unit has been IBM’s most profitable business. Before that, IBM offered all its software as add-ons to systems like the System/390 and AS/400.

Now can we expect IBM to offer hardware systems as add-ons to their software solutions?

Although companies constantly reorganize to streamline business execution, this reorganization points to a big trend in the IT industry: computer vendors are striving to own vertically complete stacks, from hardware all the way up to business applications.

A Big Cloud Challenge: Cross Stack Portability

August 4th, 2010 No comments

When you think of portability in cloud computing, you think of how to move application code, data, and workloads. These are mostly horizontal movements within the same level of the software stack – from one IaaS to another, or from one PaaS to another.

There is a more interesting and potentially very important movement that I would describe as “cross-stack” portability. Today we don’t see cross-stack portability unless we rewrite the application, which is not what I cover here (although it could be a good business opportunity for companies to explore). Rather, I am talking about how to move an application built on a PaaS to an IaaS vendor or even to a private cloud. I call it cross-stack because the application is moved up or down to a different level in the software stack.

In this blog, I’ll focus on portability without code change. I’ll discuss three conversions: from PaaS to IaaS, SaaS to IaaS, and IaaS to PaaS. Mathematically we can have other forms of conversions – say from IaaS to SaaS – but those examples are either not that interesting or not that practical. So I won’t cover them here.

From PaaS to IaaS

Cloud Architecture Design: Should it be Top-Down or Bottom-Up?

July 26th, 2010 2 comments

In my last blog, I discussed how to optimize workloads across the cloud. This is based on the assumption that you already have an existing infrastructure. What if you don’t have an existing cloud infrastructure but would like to design one from scratch? Here is what you should be thinking about to get the most from your new cloud.

But first, let’s take a look at another type of infrastructure – say, a road. When you design a new road, you have to collect data such as population densities in the area, people’s working schedules, what types of vehicles will run on the road, and so on. With that information, you can decide how many lanes you want, what kind of road surface is required, and so on. You don’t just make up the design specification from scratch and lay down an eight-lane freeway everywhere.

The same process applies to designing cloud infrastructure. Unfortunately, that is not what we often see today.

Top-down approach

In my previous blog, I said infrastructure is a means and the application is the end. We need to drive cloud architecture design from the application perspective. This is what I call the top-down approach.

Workload Optimization: Is It a Must-have for Cloud Computing?

July 21st, 2010 1 comment

Cloud computing hasn’t changed the nature of computing – it just changed provisioning and management. That’s important to remember because workloads in the cloud are very similar to what we see in traditional computing infrastructures. To get the most out of your investment in cloud services, or in your own physical IT infrastructure, you need to understand how to optimize workloads.

Workload Categorization

Typical computing workloads involve four basic parts: computation, memory, networking, and storage. Almost all applications exercise all four, but rarely in balanced proportions.
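As a quick illustration, here is a minimal sketch (mine, not from this post) of how you might characterize a workload by whichever of the four parts dominates its resource profile. The profile numbers and the 1.5x threshold are arbitrary assumptions, for illustration only.

```python
# Minimal sketch: classify a workload by whichever of the four basic parts
# dominates its resource profile. Numbers and threshold are illustrative only.
from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    cpu: float      # normalized 0..1 utilization of computation
    memory: float   # normalized 0..1 utilization of memory
    network: float  # normalized 0..1 utilization of networking
    storage: float  # normalized 0..1 utilization of storage I/O

def dominant_part(p: WorkloadProfile) -> str:
    """Return the dominant part of the workload, or 'balanced' if none stands out."""
    parts = {"compute": p.cpu, "memory": p.memory,
             "network": p.network, "storage": p.storage}
    name, value = max(parts.items(), key=lambda kv: kv[1])
    runner_up = max(v for k, v in parts.items() if k != name)
    return name if value > 1.5 * runner_up else "balanced"

# Example: a typical web front end is network-heavy and light on storage.
print(dominant_part(WorkloadProfile(cpu=0.3, memory=0.2, network=0.8, storage=0.1)))  # "network"
```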

Now let’s quickly review the essential categories of application workloads:

When to Use Cloud? Example Use Cases

July 9th, 2010 5 comments

In my last post, I discussed when not to use cloud services. Basically, you should avoid the cloud for your organization’s core-competency IT systems. Remember, cloud computing is not a silver bullet for everything.

Today I want to share stories from the other side: when you should use cloud services. As a rule of thumb, use cloud services for your non-core-competency IT systems. But what are the typical non-core-competency systems?

There could be many cases in which you can use cloud services. Let me go through some of them by sharing customer experiences:

Outsourcing projects. If something is outsourced, most likely you don’t consider it a core competency of your business. You can then leverage the full benefits that public cloud services bring. You can easily set up a workspace that is accessible to both your employees and contractors, and it’s more secure than opening up your own infrastructure to your contractors.

When NOT to Use Cloud?

July 7th, 2010 3 comments

During the July 4th long weekend, I got the chance to read the book “Delivering Happiness” by Tony Hsieh. It’s a great book, full of ideas and lessons he learned from LinkExchange and Zappos.

So, how does this relate to cloud computing?

Here’s what Tony wrote…

“It was a valuable lesson. We learned that we should never outsource our core competency. As an e-commerce company, we should have considered warehousing to be our core competency from the beginning. Outsourcing that to a third party and trusting that they would care about our customers as much as we would was one of our biggest mistakes. If we hadn’t reacted quickly, it would have eventually destroyed Zappos.”

In this paragraph, Tony summarized the lesson learned from contracting eLogistics for inventory services in Kentucky, an arrangement that turned out to be a mess and almost killed Zappos when cash flow became a big issue.

From a business perspective, cloud services are not much different from those inventory services. Both are about outsourcing. The high-tech nature of the cloud doesn’t change the business nature of cloud services. What happened to Zappos could potentially happen to any cloud customer.

System Provisioning in Cloud Computing: From Theory to Tooling (part 2)

July 1st, 2010 No comments

Application Provisioning

With the right system configuration in place, it’s time to install the applications. So why not use the same tools we used for the OS and middleware? Do we need yet another set of tools?

It depends. You can use the same tools you used for middleware to install some applications – to the OS, middleware looks like just another application. The difference lies in whether your application is stable enough and whether you need to customize it per node. Tools like Puppet work well for stable applications that are deployed the same way across all nodes. If your application is still a work in progress and you need the flexibility to tweak it, you need more specialized application provisioning tools.

The big technical difference between application and middleware provisioning tools is that application tools push the application to the nodes and remotely change anything as needed. The process is procedural.

The middleware provisioning tools normally have agents on the nodes to pull the software based on the prescribed configuration files. The process is declarative.

Beyond the “push” and “pull” difference, application provisioning tools can also manage the lifecycles of applications (sometimes called services) distributed across different nodes with a single command or line of code. Given their remote command dispatching framework, application provisioning tools can do almost anything. If there has to be a limitation, it’s your imagination.
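To make the push/pull contrast concrete, here is a rough sketch in Python of the two models. It is my own illustration, not taken from any particular tool, and the hostnames, paths, and package names are hypothetical.

```python
# Rough sketch of the two provisioning models. Hostnames, paths, and packages
# below are placeholders for illustration only.
import subprocess

# Push / procedural: a controller dispatches commands to each node over SSH,
# telling it step by step what to do (the application-provisioning style).
def push_deploy(nodes, artifact, dest="/opt/myapp"):
    for node in nodes:
        subprocess.run(["scp", artifact, f"{node}:{dest}/app.tar.gz"], check=True)
        subprocess.run(
            ["ssh", node, f"tar -xzf {dest}/app.tar.gz -C {dest} && {dest}/bin/restart"],
            check=True,
        )

# Pull / declarative: an agent on each node reads a desired state and converges
# toward it (the middleware-provisioning style); the operator edits only the state.
DESIRED_STATE = {"packages": ["nginx"], "service": {"name": "nginx", "running": True}}

def converge(state):
    for pkg in state["packages"]:
        subprocess.run(["apt-get", "install", "-y", pkg], check=True)
    if state["service"]["running"]:
        subprocess.run(["service", state["service"]["name"], "start"], check=True)

# Example (push): push_deploy(["web1.example.com", "web2.example.com"], "app.tar.gz")
```

The push script runs to completion as a sequence of steps, while the pull agent can be re-run at any time and should converge to the same declared state, which is why the declarative style suits stable, uniform deployments.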

So if you develop applications by yourself, you most likely need application provisioning tools.

Let’s see what tools are out there.

System Provisioning in Cloud Computing: From Theory to Tooling (part 1)

June 30th, 2010 No comments

Cloud computing is an evolutionary technology because it doesn’t change the computing stack at all. It simply distributes the stack between service providers and users. In some sense, it is not as impactful as virtualization, which introduced a new hypervisor layer in the computing stack and fundamentally changed people’s perception of computing with virtual machines.

But if you look closely at the latest IaaS clouds, they do leverage virtualization as a way to deploy systems effectively and efficiently. Inside a virtual machine, the computing stack remains the same as before: from OS to middleware to application.

Keep in mind that the application is the end, while the OS and middleware are the means. Customers care about applications more than the underlying infrastructure. As long as the infrastructure can support the applications, whatever the infrastructure might be is technically fine. The question then shifts to economics: whatever is most cost-effective wins in infrastructure. That’s why Linux gains more share in the cloud than in traditional IT shops.

To reach the end, you have to employ the means. In an IaaS cloud, you have to install the underlying OS and middleware before you can run your application. With a PaaS cloud, you can skip that and focus on application provisioning.

OS Provisioning

Remember, the software stack inside a virtual machine doesn’t change. It needs the OS, middleware, and application installed and configured before the application can work.

What Cloud Standards Are There and Coming?

June 25th, 2010 No comments

Designing standards is like playing the stock market: timing is critical, if not everything. Too early, and a standard may stifle innovation; too late, and it may never win over de facto standards.

With the popularity of cloud computing, many standards organizations have started working on open standards at different layers and from different aspects. How do you find these standards and works in progress?

In his talk at the SDForum Cloud SIG on June 22nd, DMTF president Winston Bumpus (@wjbumpus) shared the web site http://cloud-standards.org. It lists the cloud-related standards organizations and their work.

The organizations include:

Decomposition and Challenges in Parallel Programming: Is It Useful for Cloud Computing?

June 23rd, 2010 No comments

A recent article from Dr. Dobb’s introduced Fundamental Concepts of Parallel Programming. Richard Gerber and Andrew Binstock, authors of Programming with Hyper-Threading Technology, discussed three different forms of decomposition for multi-threading:

  1. Functional decomposition. It’s one of the most common ways to achieve parallel execution. Using this approach, individual tasks are catalogued. If two of them can run concurrently, they are scheduled to do so by the developer.
  2. Producer/Consumer. It’s a form of functional decomposition in which one thread’s output is the input to a second. Can be hard to avoid, but frequently detrimental to performance.
  3. Data decomposition, a.k.a. “data level parallelism.” It breaks down tasks by the data they work on, rather than by nature of the task. Programs that are broken down via data decomposition generally have many threads performing the same work, just on different data items.

To make the three forms easy to understand, the authors used gardening as an analogy, where threads map to gardeners. For example, functional decomposition in gardening is having one gardener mow the lawn while the other weeds. I find this analogy very intuitive and easy to follow. Even if you don’t know multi-threading, you can figure it out from the gardening analogy.
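As a tiny, self-contained illustration of the third form (my example, not from the article), here every worker thread performs the same “weeding” task on a different slice of the data:

```python
# Data decomposition: the same work applied to different data items in parallel.
from concurrent.futures import ThreadPoolExecutor

def weed(patch):
    """The same task applied to each data item (one garden patch per call)."""
    return sum(x * x for x in patch)  # stand-in for the real per-item work

garden = [list(range(i, i + 1000)) for i in range(0, 4000, 1000)]  # four patches

with ThreadPoolExecutor(max_workers=4) as gardeners:
    results = list(gardeners.map(weed, garden))

print(sum(results))
```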

The challenges of working with multi-threading are:

The Cloud of 2002 and Earlier: More Than a History

June 9th, 2010 1 comment

I read the book Who Says Elephants Can’t Dance by former IBM CEO Lou Gerstner several years ago. For people who don’t know the author, Lou Gerstner became IBM’s CEO in 1993, when the company was on its way to losing $16 billion. The book is his insider story of IBM’s historic turnaround. Unlike many books by top executives, it was actually written by the author himself.

The book is just great, full of insightful observations and thoughts. So when I saw it in the library a few weeks ago, I borrowed it again. This time I found something new, something I hadn’t paid enough attention to the first time: Lou actually used the buzzword “cloud” in his 2002 book. Let’s see what he had to say about the cloud:

It had to be in one of these early discussions with Dennie that I was introduced to “the cloud” – a graphic much loved and used on IBM charts showing how networks were going to change computing, communications, and all manner of business and human interaction. The cloud would be shown in the middle. To one side there would be little icons representing people using PCs, cell phones, and other kinds of network-connected devices. On the other side of the cloud were businesses, governments, universities, and institutions also connected to the network. The idea was that the cloud – the network – would enable and support incredible amounts of communications and transactions among people and businesses and institutions.

Comment: The meaning of the cloud here is limited to networking, quite different from what the term means today. Networking is still important in the new cloud, because connectivity is a must for accessing cloud services.


Standardizing On Oracle is IT Cure? Testimonial for Cloud Computing

June 7th, 2010 No comments

In the May 3rd issue of InformationWeek, Bob Evans wrote an article, “Oracle’s Phillips: Standardizing On Oracle Is IT Cure.” I am sure most IT companies won’t agree with it, even though Oracle is now a full-stack company after grabbing Sun Microsystems not long ago. The big players probably want to claim the same for themselves; for example, that standardizing on IBM is the IT cure.

Digging further into the article, we can find some interesting arguments by Phillips:

What CIOs are struggling with right now is trying to find a way to get the opportunity and ability to manage the entire stack with a single management tool that’s predictive about how that stack’s going to behave, how the change management around it is more prescriptive and planned, and where they really know how to upgrade and patch the entire stack.

All the dependencies between these layers – the middleware, database, storage, software, systems — they’re all related but unpredictable. And that’s the cycle they’re trying to get out of it — all that need to constantly provision and manage — it’s a huge cost, and it’s kinda boring and takes lots of people to do it, and it’s risky.


Why Google Needs VMware?

May 20th, 2010 1 comment

If you think Google is a superman who doesn’t need anyone, think twice. Yesterday at its Google I/O developer conference, it announced Google App Engine for Business. Notable features include centralized administration, a 99.9% uptime SLA, and heightened security. It also announced a partnership with VMware on cloud portability.

Why does Google need VMware?

In short, it’s about the enterprise. As the “for business” in the name suggests, the new service targets enterprises, which are not really Google’s strength.


Building Do-It-Yourself PaaS: My VMworld Session Proposals

May 19th, 2010 No comments

Most people who are interested in VMworld already know that public voting on session proposals is now open until the 26th. If you would like to hear about specific topics, it’s high time to cast your votes.

For each track, all the presentation proposals are listed together on one page. To quickly locate a particular proposal, you can use your browser’s find feature. Once you log in, I suggest browsing all the proposals and voting for those you find useful. Casting a vote takes just two mouse clicks: one to vote and one to close the confirmation message box.

BigDog: Next Big Thing After Cloud?

May 14th, 2010 No comments

This week I attended an exciting seminar by Marc Raibert, a former MIT professor who founded Boston Dynamics in 1992 as a spin-off from MIT. The company develops a quadruped robot called BigDog, among other innovative robots: PETMAN, an anthropomorphic robot for testing equipment; RISE, a robot that climbs vertical surfaces; SquishBot, a shape-changing chemical robot that moves through tight spaces; and others.

BigDog is different from other robots in that it’s designed to operate on rough terrain – rocky, muddy, sandy, and snowy surfaces. It can walk, trot, jog, climb a slope, follow a person, and even dance. Marc showed several cool videos, some of which are also available on YouTube.

As you may know, robots have many real-world use cases. For example, they can carry weapons on battlefields or haul heavy supplies when exploring wild areas. In daily life, a robot can act as a housemaid that handles chores and watches your kids; it can even replace the Segway.

Real-time Communication Cloud: Can You Take Advantage of It?

May 13th, 2010 1 comment

Like it or not, many technologies in the IT industry carry a new “cloud” tag these days. Tonight I came across yet another one at an SDForum Emerging Technology SIG meeting: a great presentation, “Tropo & Moho: Disrupting telco with simple cloud-based communications,” by Jason Goecke, VP of Innovation at Voxeo Labs.

The company was started in Silicon Valley in the late 1990s and almost went belly-up in the Internet bubble. It then relocated to Orlando, FL, and slowly became profitable. Now it’s emerging again with some cool technologies, mainly its communication cloud service.

Their cloud service is quite different from what most cloud companies offer in that it helps developers build voice and IM-related applications. To do that, the computing cloud has to connect to the telephone system and be able to handle voice in real time. This sets a high bar for most start-ups. If we have to place the service into one of IaaS, PaaS, or SaaS, it fits in PaaS, where the platform is for real-time communication applications.

Top Ten Things a CIO Should Know About VMware vCloud

May 7th, 2010 1 comment

Since the term “vCloud” was made public at VMworld 2008 in Las Vegas, VMware has been working hard to define and implement its vCloud vision and strategies.

In 2009, VMware announced vCloud Express with service provider partners such as Terremark. VMware also submitted its vCloud API spec to the DMTF so that the industry could benefit from standardized management APIs. VMware also acquired SpringSource in 2009, an acquisition that attracted a lot of attention, scrutiny, and questions.

Earlier this year VMware acquired Zimbra, the leading provider of SaaS collaboration software, and subsequently bought RabbitMQ. Both are now part of the VMware SpringSource portfolio. Last week, VMware and Salesforce.com announced vmforce.com, a joint venture targeting the enterprise PaaS cloud. Yesterday VMware announced its (pending) acquisition of GemStone.

With these acquisitions and announcements, the company’s strategy is clearer than ever. Looking back, VMware has been building a cloud product and service portfolio under the vCloud umbrella, and some previously misunderstood acquisitions now appear well aligned with the vCloud vision and strategy.

vCloud is not the only game in the industry, but VMware is well on its way. Given its deep roots in enterprise data center virtualization, no one can ignore VMware’s potential in cloud computing.

To help enterprises better understand vCloud, I offer ten things you should know:

vCloud API Spec 0.9: What’s New?

May 4th, 2010 2 comments

Some of you may have noticed that VMware released vCloud API Spec version 0.9 last week. The 9-page document describes all the functions and the corresponding REST syntax of version 0.9. Better than I had expected, it highlights the changes from version 0.8, so if you have read the previous version, you can just scan for the changes using the keywords CHANGED, NEW, and REMOVED.
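For readers who have not used a RESTful cloud API before, here is a hedged sketch of what a call against a vCloud-style endpoint might look like. The base URL, resource path, and credentials are placeholders of my own, not paths from the 0.9 spec; consult the spec for the actual syntax.

```python
# Illustrative only: the host, path, and credentials below are hypothetical and
# do not come from the vCloud API spec; check the spec for the real resources.
import requests

BASE = "https://vcloud.example.com/api"   # hypothetical endpoint
session = requests.Session()
session.headers.update({"Accept": "application/xml"})  # the API exchanges XML documents

# Fetch a (hypothetical) organization listing over plain HTTPS + REST.
resp = session.get(f"{BASE}/v0.9/org", auth=("user@example-org", "password"))
resp.raise_for_status()
print(resp.text)
```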

The vCloud API includes the following categories of functions.

Basic functions

What Does DevOps Mean for Cloud Professionals?

April 29th, 2010 No comments

I heard about DevOps a while back but didn’t really look into it. My initial understanding was that the roles of developer and system administrator would merge into one called “devops.” Last week, I attended a DevOps meetup in Palo Alto and got the chance to learn about DevOps from others.

The hosting organization even wrote a good blog post defining what DevOps is. According to the post:

DevOps is, in many ways, an umbrella concept that refers to anything that smoothes out the interaction between development and operations. However, the ideas behind DevOps run much deeper than that.

So DevOps is more of a movement than a merging of two roles. The basic idea behind DevOps is to break down the wall between development and operations.

Traditionally, developers ship products that are then run by operators at other companies. In this new age, where much of software is delivered as a service, developers run their software directly. When there is a problem, they must fix it right away. That is why engineers at Google are required to rotate through on-call support. As more companies ship software as services, it’s natural that more engineers will wear both hats. The DevOps concept is not really new, but the terminology is.