VMworld 2016 is coming in about two weeks. Although I’ve attended every single VMworld after 2007, this is my first time as an exhibitor, to be exact, as a new innovator at the show. If you follow my blog and Twitter, you probably know of DoubleCloud, the company I founded, and the cool products and technologies we’ve been working on. This is the first year we’re promoting our products at the show. Please come to see our product demos, or simply stop by and say hi. Our booth is 841#4.
As some of you may know, I just left VCE last Friday. It was a tough decision, as I very much enjoyed working with my colleagues there during the last two years, and the company continues to grow rapidly. But building my own business is something I had always dreamed about, and I am glad I finally put the dream into action.
Last week was pretty exciting with VMworld 2013 in San Francisco. I sat through the keynotes and talked to many friends from VMware and the partner community who showed up at the Solutions Exchange, where I spent most of my time. On Thursday I had a bit of time to attend a few breakout sessions.
In the first-day keynote, VMware CEO Pat Gelsinger laid out three imperatives for VMware: 1) Virtualization extended to ALL of IT; 2) Management gives way to automation; 3) Compatible hybrid cloud is ubiquitous. The keynote centered on these three imperatives.
As discussed in my previous post, libvirt is an open source project for managing hypervisors. With the increasing popularity of OpenStack, it’s important to get familiar with KVM as an alternative virtualization platform to commercial products like vSphere and Hyper-V.
To use KVM, you don’t have to install OpenStack – you can install KVM as a standalone product as described in my previous post. In that respect, it’s pretty much like VMware Player or Workstation. In terms of maturity, KVM is pretty solid and well ahead of OpenStack, which has also been improving quickly since last year, with many commercial vendors jumping in.
While working with OpenStack on both VMware virtual machines (with no virtualization instruction set exposed) and physical machines, I found that virtual machine instances could be deployed seamlessly on both. On a machine that does not expose the virtualization instruction set, KVM falls back to QEMU silently. That is why I could try out OpenStack on virtual machines before my hardware was ready. Because both KVM and QEMU support the same libvirt APIs, you would not notice any difference using command-line tools like virsh or the Virtual Machine Manager. That is the beauty of standard APIs with different implementations, similar to the standard vSphere APIs that are implemented by both vCenter and ESXi.
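One quick way to tell which case you are in is to look for the hardware virtualization CPU flags (vmx on Intel, svm on AMD). Below is a minimal Python sketch, assuming the Linux /proc/cpuinfo format; the helper name is mine, not part of any libvirt tooling:

```python
def virt_extension(cpuinfo_text):
    """Return 'vmx' (Intel VT-x), 'svm' (AMD-V), or None if the CPU
    exposes no hardware virtualization flag -- the case where KVM
    silently falls back to pure QEMU emulation."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if "vmx" in flags:
                return "vmx"
            if "svm" in flags:
                return "svm"
    return None

# A VMware guest without nested virtualization typically shows no vmx/svm flag:
guest = "flags\t\t: fpu vme de pse tsc msr pae sse sse2"
host = "flags\t\t: fpu vme de pse tsc msr pae sse sse2 vmx ept"
print(virt_extension(guest))  # None -> KVM falls back to QEMU
print(virt_extension(host))   # vmx  -> hardware-accelerated KVM
```

On a real machine you would pass in the contents of /proc/cpuinfo, for example `virt_extension(open("/proc/cpuinfo").read())`.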
After installing OpenStack, I got KVM/QEMU installed as a by-product. To get familiar with the functionality, I played with the Virtual Machine Manager and the virsh command line. Comparing them with the libvirt API, I found they are pretty similar, which makes them a good starting point before jumping into the APIs. In fact, virsh itself is implemented on top of the libvirt APIs.
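The correspondence is easy to see in virsh’s `list` output, whose State column maps directly onto the numeric domain states defined by the libvirt API. The sketch below hard-codes that enum and a made-up table renderer for illustration; in real code you would get the state from a libvirt binding (e.g. the libvirt-python `virDomain` object) rather than pass it in by hand:

```python
# libvirt's documented virDomainState enum values.
DOMAIN_STATES = {
    0: "no state",
    1: "running",
    2: "blocked",
    3: "paused",
    4: "in shutdown",
    5: "shut off",
    6: "crashed",
    7: "pmsuspended",
}

def render_row(dom_id, name, state_code):
    """Format one line roughly the way virsh's 'list' command does."""
    return " %-5s %-20s %s" % (dom_id, name, DOMAIN_STATES.get(state_code, "unknown"))

print(" Id    Name                 State")
print(render_row(1, "ubuntu-test", 1))   # a running domain
print(render_row("-", "fedora-vm", 5))   # a defined but shut-off domain
```

The domain names here are hypothetical; the point is only that virsh is a thin presentation layer over the same state model the API exposes.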
During the Microsoft Management Summit last month, I had an interesting chat with Rakesh Malhotra, the VP of Product at Apprenda. It made me think more about two important technologies: virtualization and PaaS. As we know, virtualization is almost a must for IaaS. Will the same be true for PaaS?
Pure PaaS or PaaS over IaaS
Software-defined networking is the new buzzword for network centralization, also known as OpenFlow or network virtualization. The idea is to centralize control in a server (or a cluster of servers) called the controller.
With VMware’s acquisition of Nicira, software-defined networking has caught many eyeballs in the community. From there, VMware extended it into a new vision called the software-defined datacenter, which covers three elements: compute, network, and storage.
I flew to Vegas this week for Microsoft Management Summit 2013, which happens to be in the same hotel (Mandalay Bay) as VMware Partner Exchange one and a half months ago. The organization and activities of both conferences are pretty similar: keynotes, breakout sessions, and hands-on labs (HOL). It’s pretty exciting to learn new technologies and meet new people.
Hands On Labs
After VMware touted the new term “software-defined data center,” I suddenly saw many vendors at VMworld claiming they support it. Days ago I read a news story about Joe Tucci, the CEO of VMware’s parent company EMC, explaining what “software-defined data center” means.
As I mentioned in a previous article, Hadoop is at a similar stage to where virtualization was 10 years ago – the technology is mostly ready for wider adoption. There were certain secret sauces behind virtualization’s stellar success, especially VMware’s in the enterprise space. Here I examine some of these success factors that the Hadoop community could learn from.
Strive for an Out-of-the-Box Experience
I just did an interview with Ricky Ribeiro, the online content manager of BizTech Magazine. It was published last week as part of its Q&A series on must-read IT blogs. In response to Ricky’s great questions, I shared thoughts on a broad range of topics, including blogging, cloud computing, and technical innovation in general.
The following is part of the article. For full coverage, please check it out here, where you can also find links to interviews with other top IT bloggers.
Weeks ago, ThoughtWorks published a new issue of its Technology Radar, compiled by its senior tech leaders. It has done a great job of tracking the latest technology and market trends since 2010 (for archives, scroll to the bottom of this page).
In my previous post “Physical is New Virtual,” I mentioned that I would talk about when you need virtualization and when you don’t. This topic could be a little controversial, as we in the virtualization community all assume that virtualization is the way to go, which is true in general.
There are, however, use cases in which virtualization doesn’t make much sense. In the following, I will detail some of these use cases and explain why. Like everything else, virtualization doesn’t fit all.
Today I read an interesting article, “The Efficiency Paradox,” in the latest Businessweek. It reviews the book The Conundrum: How Scientific Innovation, Increased Efficiency, and Good Intentions Can Make Our Energy and Climate Problems Worse by David Owen. I haven’t read the book, but I got its main idea from the article.
While installing and configuring vCloud Director recently, I kept thinking about how to simplify it by removing unnecessary concepts and steps. To be fair, vCloud Director as of version 1.5 does a decent job of providing a high-level abstraction for cloud infrastructure. Still, it can be significantly improved, just like every other new technology. Note that I pick vCloud Director as an example for the following discussion simply because VMware is the leader in the virtualization space and what it does has ripple effects on other vendors.
I went to the EMC office in Milford, MA, last week for a five-day training class on Vblock administration. As you may know, VCE Vblock is the industry’s first and leading converged infrastructure, combining compute, network, and storage from industry leaders. For compute, it uses Cisco UCS. If you have followed my blog, you should know that I have blogged about the UCS emulator and its XML management APIs.
After finishing my reflection on my 2011 predictions, it’s time to make my predictions for 2012, as today is the last day of 2011.
1. The virtualization war between VMware and Microsoft will heat up. The trigger will be Hyper-V 3.0, which is expected to ship in mid-2012 with Windows Server 8. According to many people, the 3.0 release will bring it on par with, or even ahead of, the latest VMware hypervisor.
While checking out the exhibitions at CloudExpo weeks ago, I learned about Red Hat Enterprise Virtualization (RHEV) 3.0. Given my interest in virtualization APIs, I started to look into its management APIs. Unsurprisingly these days, it’s a REST API.
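As with most REST APIs, a client just issues HTTP requests and parses the responses, which RHEV returns as XML. Here is a minimal Python sketch of the parsing side; the sample payload below is made up for illustration and only follows the general shape of a RHEV-style VM collection (real code would fetch it over HTTPS with authentication):

```python
import xml.etree.ElementTree as ET

# Made-up document in the general shape of a RHEV-style /api/vms response.
sample = """\
<vms>
  <vm id="42"><name>web01</name><status><state>up</state></status></vm>
  <vm id="43"><name>db01</name><status><state>down</state></status></vm>
</vms>"""

def vm_states(xml_text):
    """Return a {vm name: state} mapping from a vms collection document."""
    root = ET.fromstring(xml_text)
    return {vm.findtext("name"): vm.findtext("status/state")
            for vm in root.findall("vm")}

print(vm_states(sample))  # {'web01': 'up', 'db01': 'down'}
```

The appeal of a REST design is exactly this: any language with an HTTP client and an XML parser can be a management client, with no vendor SDK required.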
As cloud computing gains momentum, more mega data centers are being constructed or planned. You can find cool videos on how companies like Google and Microsoft build and run their state-of-the-art data centers.
In these data centers, servers, storage, and switches are packed and wired inside containers at the factory before being shipped to a data center. After hooking up power, networking, and cooling, a container of servers is ready to go. These advances have