The Linux Letter: Virtualization Is Complicating Simplicity

The latest "hot topic" in the IT world is server virtualization. From all of the hoopla, you'd think that this is some new and groundbreaking concept. But as most of us IT old-timers know, IBM has been doing this for years: first on its mainframe computers and more recently on p5s and i5s, whose POWER CPUs can be carved into fractional CPUs. Perhaps your own i5 is already divided into one or more i5/OS or Linux partitions.

Despite the huge volume of press it's currently enjoying, server virtualization (which hereinafter I'll refer to as simply "virtualization") isn't particularly new on Intel-class machines, either. I've been using various incarnations of it on my Linux machines for many years: first with DOSEMU, an early DOS emulator; then with Win4Lin, which let me run Win9x instances under Linux; and finally with my current favorite, VMware (about which I wrote a couple of months ago).

We all know the obvious advantages of virtualization: lower energy costs for power and cooling, lower hardware and hardware-maintenance costs, and lower physical space requirements. If you spend additional cash for enterprise-level virtualization, you get a whole other realm of nice features: the ability to move a virtual machine (VM) between physical machines while the VM is running, the ability to assign nominal "horsepower" to specific VMs and let the virtualization environment dynamically tune things for you, and high-availability computing, in which the virtualization engine detects when a VM dies and automatically restarts it or assigns its workload to another VM. All of these features have been explained numerous times in the deluge of virtualization articles and marketing brochures, so I won't rehash them here. Suffice it to say that, with the exception of Intel and AMD finally manufacturing CPUs equipped to provide proper VM isolation (POWER and other CPUs had this first), virtualization is a relatively old technology that's at last becoming mainstream.

So what's driving the current virtualization frenzy? I can't say for sure, but I can make an educated guess.

Naked Virtualization

If you strip away all of the fancy features of virtualization, you'll arrive at the realization that a VM is, for all intents and purposes, a computer. That it's built of software instead of hardware is irrelevant for the majority of applications (the exception being applications with specific or exotic hardware needs). If you think about a VM in this way, it seems logical that it should make little difference to operating system or application vendors where their product runs. Yet this doesn't appear to be the case. The novelty of virtualization has given vendors new avenues for marketing and market manipulation, and they're taking full advantage of it. The upside for the consumer is that it has also created opportunities to go bargain shopping, but there is a dark side....

Bargain Hunting

While the management features of virtualization are nice, many of us are simply happy to utilize the otherwise-wasted CPU capacity inherent in modern hardware. By creating VMs, it's possible to consolidate many servers onto one physical piece of hardware, saving the purchase and energy costs associated with the now-superfluous machines. What virtualization doesn't do in this scenario is lower the cost of the software that ran on each machine.

The same problem is encountered when you decide to split a single server into a group of them to isolate services. Let's say you have an email server that offers SMTP access for mail distribution, scans email for viruses, differentiates spam from ham, and provides Web-based access to the email. I use Linux for just such a box, and it performs well doing all of these functions simultaneously. It would be interesting, though, to place each function onto a different server instance (I'll call a functioning system an "instance" regardless of whether it's hosted on real or virtual hardware) so that each instance performs only one task. That way, the OS on a given instance could be tuned for its assigned task. The only problem is that, done the old-fashioned way, each instance would cost you additional cash for the OS. That issue seems to be fading, though, as vendors alter their licensing terms to adapt to virtual environments.

Take Red Hat, for example. The software Red Hat provides is, for the most part, open source and freely available for anyone's use. The company's income is partially derived from the subscription service you purchase to keep your Red Hat Enterprise Linux (RHEL) systems updated and current. The old subscription model dictated that you pay a subscription fee for every instance of RHEL you ran, regardless of where it ran.

With the release of the newest RHEL incarnation, RHEL 5, the rules have changed. Red Hat Enterprise Linux 5 comes in two versions for server use, the standard Enterprise version and Advanced Server, both of which have integrated support for Xen, an open-source virtualization product. Enterprise lets you run up to four virtual machines (your choice of OSes within them) on the host system, while Advanced Server has no hard limit save for the hardware resources of the physical host. The new subscription scheme is simple: you may update any and all virtual machines running on a physical host under the subscription you purchased for that physical host. The caveat is that you must run the same OS and version on the virtual machines as on the physical one, which in my shop is what I'd typically do anyway.

In short, this means that I can break up the services on my email server into individual virtual instances on one physical host (perhaps recycling the email server hardware), yet not incur any additional licensing or subscription costs. This is huge! And it makes sense, too. All i5 users are conditioned to processor-capacity licensing (the higher the "P" value, the higher the price for the same software), and Red Hat's new model fits that mold well. You may have three or four virtual servers running on a physical server, but their aggregate capacity will never exceed the capacity of the host system, so why charge more? (For completeness, RHEL pricing is not based on server capacity the way the i5's is, but the analogy holds.)
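To make the per-service split concrete: under RHEL 5's Xen, each guest is described by a small configuration file, conventionally kept under /etc/xen. As a rough sketch—the guest name, memory size, disk image path, and MAC address here are illustrative assumptions, not anything from this article—an SMTP-only instance might be defined like this:

```
# /etc/xen/smtp -- hypothetical Xen guest for the SMTP role
name       = "smtp"
memory     = 512                  # MB of RAM for this guest
vcpus      = 1                    # one virtual CPU for a mail relay
disk       = ['tap:aio:/var/lib/xen/images/smtp.img,xvda,w']
vif        = ['mac=00:16:3e:00:00:01,bridge=xenbr0']
bootloader = "/usr/bin/pygrub"    # boot the kernel installed inside the guest
on_reboot  = 'restart'
on_crash   = 'restart'            # restart the guest if it dies
```

Similar files for the virus-scanning, spam-filtering, and webmail roles would complete the split, with each guest started via `xm create <name>`.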

I don't doubt that, as more and more companies avail themselves of virtualization, other software vendors will follow Red Hat and the rest of the early adopters.

The Dark Side

There always has to be a dark side, and of course virtualization has one. Earlier, I posited that a VM should logically appear to the software running on it as just another computer, without regard to the components from which that computer was built. In my opinion, it shouldn't matter one whit to a vendor that I'm going to run a piece of software on a virtual machine instead of a physical one. The Redmond giant disagrees with me and has created a convoluted licensing scheme that makes it difficult to legally run its software virtualized. Let me rephrase that: the company is making it difficult to run its software on virtualization software other than its own, in its typical "wait for us, we're the leader" fashion. You can read about this in a white paper on the VMware site. So if you're an MS acolyte and want to use its product, you get to wait for [fill in some lengthy period of time] until the company gets its virtualization software to do what most other virtualization engines can do now. The whole point of the MS licensing scheme, again in my opinion, is to stall the market until MS can catch up. It's manipulation, plain and simple. While Microsoft is the most obvious example of this behavior, I'm sure there are other vendors out there doing the same...or soon will be.

An Already Overcomplicated World

Right now is a great time to be in the IT world. Server virtualization is just one of many great technologies that make it easier and cheaper than ever to provision and deploy a server. Virtualization is more than hype; it's one of the rare technologies that actually delivers what it promises. For the most part, there's little downside for the consumer, save for the unnecessary complication it adds to the already overcomplicated world of software licensing.

If you're in IT management, it behooves you to hunt for the bargains now being made available to the marketplace. Vendors are always looking for industry buzz on which to piggyback their products, and server virtualization is the buzz du jour. If you find a piece of software that you think might be a good candidate, be sure to read the fine print to ensure that you're using the software as the vendor deigns to allow. I'm just glad that the majority of the software I run is open source and GPLed; I can skip right to the installation and configuration sections of the manual.

Barry L. Kline is a consultant and has been developing software on various DEC and IBM midrange platforms for over 23 years. Barry discovered Linux back in the days when it was necessary to download diskette images and source code from the Internet. Since then, he has installed Linux on hundreds of machines, where it functions as servers and workstations in iSeries and Windows networks. He co-authored the book Understanding Linux Web Hosting with Don Denoncourt.
