Author Archive

XML APIs to Manage Cisco Nexus 1000V

September 30th, 2012 5 comments

If you’ve been following my blog, you may remember that I wrote Cisco Nexus 1000V in VMware vSphere API about half a year ago. The Cisco Nexus 1000V actually has another API based on XML. Interestingly, it’s implemented over SSH rather than HTTP or HTTPS.

The Nexus 1000V API follows two IETF standards: RFC 4741, NETCONF Configuration Protocol, and RFC 4742, Using the NETCONF Configuration Protocol over Secure SHell (SSH). The first one is pretty long at close to 100 pages, but fortunately Wikipedia has a much shorter introduction. RFC 4742 is just eight pages and easy to browse through.
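To make the protocol concrete, here is a rough Java sketch of opening a NETCONF session over SSH with the JSch library. The host name, credentials, and subsystem name are assumptions I made for illustration (RFC 4742 defines the subsystem name as "netconf," while NX-OS devices commonly expose the XML agent as "xmlagent"), so treat it as a starting point rather than a recipe from the Nexus 1000V documentation.

    import com.jcraft.jsch.ChannelSubsystem;
    import com.jcraft.jsch.JSch;
    import com.jcraft.jsch.Session;

    public class NetconfHello {
        public static void main(String[] args) throws Exception {
            JSch jsch = new JSch();
            // Placeholder host and credentials -- point these at your own VSM.
            Session session = jsch.getSession("admin", "vsm.example.com", 22);
            session.setPassword("password");
            session.setConfig("StrictHostKeyChecking", "no");
            session.connect();

            // RFC 4742 defines NETCONF as an SSH subsystem named "netconf";
            // NX-OS commonly exposes its XML agent as "xmlagent" (an assumption here).
            ChannelSubsystem channel = (ChannelSubsystem) session.openChannel("subsystem");
            channel.setSubsystem("xmlagent");
            java.io.OutputStream out = channel.getOutputStream();
            java.io.InputStream in = channel.getInputStream();
            channel.connect();

            // Both sides exchange <hello> messages; "]]>]]>" is the message
            // delimiter defined in RFC 4742.
            String hello = "<?xml version=\"1.0\"?>"
                + "<hello xmlns=\"urn:ietf:params:xml:ns:netconf:base:1.0\">"
                + "<capabilities><capability>urn:ietf:params:netconf:base:1.0</capability></capabilities>"
                + "</hello>]]>]]>";
            out.write(hello.getBytes("UTF-8"));
            out.flush();

            // Read and print the server's hello, which lists its capabilities.
            byte[] buf = new byte[8192];
            int n = in.read(buf);
            System.out.println(new String(buf, 0, n, "UTF-8"));

            channel.disconnect();
            session.disconnect();
        }
    }

Once the hello messages are exchanged, each rpc request and reply is framed the same way and terminated with the "]]>]]>" delimiter from RFC 4742.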

Categories: Virtualization Tags: ,

Hadoop File System Commands

September 26th, 2012 3 comments

I took a Hadoop developer training course during the week of September 10. Hadoop is not totally new to me, as I’ve tried the HelloWorld sample and the Serengeti project. Still, it was nice to get away from my daily job and go through a series of lectures and hands-on labs in a training setting. Believe it or not, I felt more tired after a day of training than after a typical working day. This post doesn’t offer much that’s new; it’s mainly a reference for the commands when I need them later.
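Since the post is really a command reference and this blog leans Java, here is a small sketch of how a few of those shell operations (mkdir, put, ls) look through Hadoop’s Java FileSystem API. The paths below are placeholders I made up for illustration, not examples from the training.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsBasics {
        public static void main(String[] args) throws Exception {
            // The namenode address is normally picked up from core-site.xml on the classpath.
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Roughly: hadoop fs -mkdir /user/demo/input
            fs.mkdirs(new Path("/user/demo/input"));

            // Roughly: hadoop fs -put localfile.txt /user/demo/input
            fs.copyFromLocalFile(new Path("localfile.txt"), new Path("/user/demo/input"));

            // Roughly: hadoop fs -ls /user/demo/input
            for (FileStatus status : fs.listStatus(new Path("/user/demo/input"))) {
                System.out.println(status.getPath() + "\t" + status.getLen());
            }

            fs.close();
        }
    }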

Categories: Big Data Tags: , ,

Announcing Public Beta of VI Java API 5.1 Supporting vSphere 5.1

September 23rd, 2012 6 comments

After VMware released vSphere 5.1 on the night of September 10, I finally got a chance to look at the new vSphere API, including the API reference and, more importantly for me, the WSDL files.

I was relieved to find out that there weren’t many changes. Not a single managed object was added in the vSphere 5.1 API, meaning a lot less work than I expected for the vijava API to support the latest vSphere 5.1.
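For anyone who wants to try the beta, here is a minimal sketch of connecting to a vSphere 5.1 server with the vijava API. The server URL and credentials are placeholders, and I’m assuming the familiar ServiceInstance entry point rather than anything new in 5.1.

    import java.net.URL;
    import com.vmware.vim25.mo.ServiceInstance;

    public class HelloVSphere51 {
        public static void main(String[] args) throws Exception {
            // Placeholder URL and credentials -- replace with your own vCenter or ESXi host.
            ServiceInstance si = new ServiceInstance(
                new URL("https://vcenter.example.com/sdk"), "administrator", "password", true);

            // Print the API version reported by the server ("5.1" on a vSphere 5.1 setup).
            System.out.println("API version: " + si.getAboutInfo().getApiVersion());

            si.getServerConnection().logout();
        }
    }

The last constructor argument tells the API to ignore the server certificate, which is handy in a lab but not something you want in production.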

Categories: vSphere API Tags: ,

Converged Infrastructure and Object Oriented Programming

September 10th, 2012 1 comment

At first sight, these two technologies are totally different, and you wouldn’t normally talk about them together. But looking closely at the philosophies behind them, I find they are surprisingly similar, and I hope you’ll agree with me after reading through this article.

A Quick Overview

Before getting into the detailed analysis, let’s take a quick look at the concepts and histories of both technologies.

Behind vRAM – What’s VMware’s Deepest Fear?

September 5th, 2012 4 comments

vRAM was the licensing model VMware used in vSphere 5.0. It basically limits the amount of virtual memory, as opposed to physical memory, allowed per license. When first announced last year, it created a lot of angry customers overnight, even though VMware estimated that the scheme wouldn’t affect most existing customers. Later on, VMware doubled the amount of virtual memory per license and added a cap, and insisted on rolling out the modified license model despite strong objections from customers.

My First Try of Hadoop Azure

August 27th, 2012 No comments

During breaks in my vacation last week, I tried the Technology Preview for the Apache Hadoop-based Service on Windows Azure. The service is not yet publicly available and requires Microsoft approval. Here is the link that I used to file my application. It took several days for me to get the email with the invitation code. Sorry that I cannot include the code here. :-)

Your Cloud, My Cloud, or Our Cloud? Rethinking VMware Public Cloud Strategy

August 19th, 2012 2 comments

About two weeks ago, CRN published an article about VMware’s Zephyr project. According to the article, VMware plans to launch a public IaaS cloud to compete with Amazon EC2, Microsoft Azure, and, more directly, with existing VMware vCloud service providers. The reason for the move: “because none of its service provider partners are moving fast enough. Look at the adoption rate of vCloud Director with service providers — it is non-existent.”

Categories: Cloud Computing Tags: ,

Big Data: How Big is Big?

August 15th, 2012 1 comment

I came across a video on YouTube this past weekend: Big Ideas: How Big is Big Data. Although it comes with several mentions of EMC, it’s very well prepared and presented with whiteboarding, so it’s worthwhile to share here.

Some of the key points made in the video include:

  • The growth is accelerating. By 2020, there will be 50x more data than today.
Categories: Big Data Tags:

Why VMware Needs A New Direction

August 12th, 2012 2 comments

In my last article, I analyzed the real motivation behind VMware’s recent move to acquire Nicira. In this article, I am going to review VMware’s past strategy and predict its long-term strategy. In short, VMware’s past growth strategy was “vertical,” and its future growth strategy should be “horizontal.”

Past Strategy Review

What Are Cisco’s Options to VMware’s Nicira Deal?

August 5th, 2012 No comments

VMware’s acquisition of Nicira posed a big risk to Cisco’s future control of the networking market. The risk was in fact there from day one of VMware ESX with virtual switches, and later distributed virtual switches, which reduced the need for customers to buy physical gear from Cisco because virtual machines use “free” virtual ports. For communication between physical servers, customers still need Cisco and other vendors, though the volume is not as high as it otherwise would be. That is why Cisco quickly came up with its own distributed virtual switch, the Nexus 1000V, to stay relevant in the virtualization market.

What VMware Didn’t Tell You About Nicira Deal

July 29th, 2012 1 comment

This past Monday, VMware announced it would buy Nicira for $1.26 billion. Congratulations to many of my former VMware colleagues who joined Nicira and will soon return to VMware.

Overall, this deal aligns well with VMware’s newly found vision of the software-defined data center. You must have read many similar explanations and comments from various sources, including this one from VMware CTO Steve Herrod and this one by Nicira cofounder and CTO Martin Casado.

Hadoop vs. Tomcat

July 25th, 2012 3 comments

In my previous article, I talked about three different ways enterprises use Hadoop. Thinking about it a bit more, you may have realized that the three usage patterns are very similar to how we use Tomcat. In this article, I will compare the two for commonalities and differences.

First of all, both Hadoop and Tomcat are Java-based open source projects from the Apache Software Foundation and are therefore distributed under the same Apache license. As a result, you can freely use Hadoop in the same way you have used Tomcat as far as license compliance is concerned.

Categories: Big Data Tags: , ,

Will Enterprise Hardware Be Hot Again?

July 23rd, 2012 No comments

BusinessWeek recently published an article, “In Silicon Valley, Hardware is Hot Again.” Almost all the big names sell hardware now: Microsoft, Google, and of course Apple. Apple’s stellar success with the iPhone and iPad disrupted the conventional wisdom that software carries higher margins than hardware. Also, Apple’s combined hardware-and-software devices pose a real risk to Microsoft and Google. To be exact, the hardware in the article title should really be software-bundled hardware. That is why Google and Microsoft had to get into the hardware business, competing directly against Apple.

VMware Serengeti: A Perfect Match of Hadoop and vSphere

July 19th, 2012 No comments

During the Hadoop Summit 2012 last month, I learned about the release of the open source (Apache-licensed) Serengeti project from VMware. The week after, I downloaded the OVA file from the VMware site and gave it a first try in a development environment, after browsing through the user guide, which describes a fairly easy process for getting a Hadoop cluster running on vSphere.

Categories: Big Data, Virtualization Tags: ,

Hack Workspace in Netbeans IDE

July 18th, 2012 7 comments

As a long-time Eclipse user, I like its workspace concept and the ease of switching workspaces, among many other things. A workspace provides a simple yet powerful way to isolate groups of projects under different folders, so you’re not distracted by other, unrelated projects.

This feature is, however, not available in NetBeans IDE, which is not a big deal most of the time. By default, NetBeans IDE creates a folder under the current user’s home directory as follows (yours could be different):

Three Ways Enterprises Can Use Hadoop

July 17th, 2012 No comments

Hadoop has recently gained a lot of attention from enterprises; just think about the rapid growth in attendance at the Hadoop Summit. There are many different ways to leverage Hadoop in an enterprise, but in general there are three major usage patterns, as detailed below.

As a Framework

This is what Hadoop was initially intended for, and it continues to be one of the major approaches in the short term. It means that an enterprise needs to invest in custom application development, which normally costs more than off-the-shelf applications.
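To give a feel for what building on Hadoop as a framework involves, here is a minimal word-count-style job skeleton against the org.apache.hadoop.mapreduce API. The class names and the input/output paths are placeholders for illustration, not code from this post.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Map: emit (word, 1) for every word in an input line.
        public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                for (String token : value.toString().split("\\s+")) {
                    if (!token.isEmpty()) {
                        word.set(token);
                        context.write(word, ONE);
                    }
                }
            }
        }

        // Reduce: sum up the counts for each word.
        public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }

        // Driver: wire the mapper and reducer into a job and submit it to the cluster.
        public static void main(String[] args) throws Exception {
            Job job = new Job(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(SumReducer.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Even a job this trivial needs a mapper, a reducer, and a driver to wire them together, which hints at why custom development on Hadoop costs more than buying an off-the-shelf application.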

Categories: Big Data Tags: ,

What Hadoop Community Can Learn From VMware Virtualization

July 14th, 2012 No comments

As I mentioned in a previous article, Hadoop is at a similar stage to where virtualization was 10 years ago – the technology is mostly ready for wider adoption. There were certain secret sauces behind virtualization’s stellar success, especially VMware’s in the enterprise space. Here I examine some of these success factors that the Hadoop community could learn from.

Strive For Out Of Box Experience

Categories: Big Data Tags: ,

Is MapReduce A Major Step Backwards?

July 9th, 2012 3 comments

While learning Hadoop, I was wondering whether the MapReduce processing model can handle all the Big Data challenges. David DeWitt and Michael Stonebraker went a step further by arguing in their blog article that MapReduce is a major step backwards. I found it a very good read but don’t necessarily agree with the authors. It’s always good to know different opinions and the contexts they come from. I also found that the authors wrote the best introduction to MapReduce in just a few short paragraphs. I quote them at the end, so read on.

Categories: Big Data Tags: ,

MapReduce: The Theory Behind Hadoop

July 3rd, 2012 No comments

As most of us know, Hadoop is a Java implementation of the MapReduce processing model, which originated at Google with Jeffrey Dean and Sanjay Ghemawat. After studying Hadoop and attending several related events (Hadoop Summit, Hadoop for Enterprise by the Churchill Club), I felt I should dig deeper by reading the original paper.

The paper is titled “MapReduce: Simplified Data Processing on Large Clusters.” Unlike most research papers I’ve read before, it’s written in plain English and is fairly easy to read and follow. I find it really worthwhile and strongly recommend spending an hour to read through it.

Categories: Big Data Tags: ,

Review Board Virtual Machine for Code Review: The Missing Manual

July 2nd, 2012 No comments

Code review is important for the quality of a software product. It used to be a meeting activity in which a small group of engineers walks through changes and gives the author feedback. This is highly effective but not flexible enough, especially when code changes frequently.