August 9, 2003



Clustering has long been used to ensure high availability and redundancy. Software products that make a single computer run multiple OSs and behave as a number of virtual servers, however, are more recent. In this article, we examine the concepts and issues related to such deployments. We will use the term ‘virtual server’ to denote each virtual computer system within the single physical entity, to avoid confusing the term ‘virtual machine’ with other entities of the same name.

Server virtualization of the kind described above is implemented on a single high-end server running a server OS (the Host OS).

The virtualization software runs just above this OS layer. Multiple copies (services) of this software are started, and each can be configured with different hardware specs, such as CPU type, amount of RAM and amount of hard-disk space. While the RAM allotted can exceed what is physically present (the excess is simulated using swap space), the total hard-disk space allotted must be less than what is physically available. Each such virtual server can run its own OS (Guest OS), irrespective of what you run as the Host OS. This means the Host OS can be Windows 2003 Server while a Guest OS is Linux Advanced Server.
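To make the per-guest hardware spec concrete, here is a minimal sketch of what such a configuration might look like, loosely modelled on a VMware-style .vmx file. The file name, disk image and values here are hypothetical; each product has its own format.

```ini
# guest1.vmx -- hypothetical per-guest hardware specification
# 512 MB of RAM presented to the guest; one virtual CPU
memsize = "512"
numvcpus = "1"
# backing file for the virtual hard disk
ide0:0.fileName = "guest1.vmdk"
# hint to the virtualization layer about the Guest OS
guestOS = "linux"
```

From the Host OS's point of view, this entire guest is just another running process plus a few large files on disk.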

Each virtual server can have its own set of (virtual) hardware, including sound, video and network adapters. The Guest OS and its applications use this hardware as they would the real thing. The greatest server-side advantage is virtual network adapters: they let the virtual servers communicate with one another over these interfaces just as they would have done had they been individual physical systems.
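As one sketch of such virtual networking, the free QEMU emulator (assumed here purely for illustration; flags follow current QEMU, and the disk-image names are hypothetical) can join two guests over a socket-backed virtual NIC, so they talk as if cabled together:

```shell
# Guest 1: listens on a local socket that acts as the virtual wire
qemu-system-x86_64 -m 512 -hda guest1.img \
    -netdev socket,id=vnet0,listen=:1234 -device e1000,netdev=vnet0

# Guest 2: connects its virtual NIC to the same wire
qemu-system-x86_64 -m 512 -hda guest2.img \
    -netdev socket,id=vnet0,connect=127.0.0.1:1234 -device e1000,netdev=vnet0
```

Inside the guests, these e1000 adapters look like ordinary Ethernet cards and can be assigned addresses with the Guest OS's usual tools.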

So, you can run applications such as databases, file servers and services (such as DNS and DHCP) on the same physical system, within different virtual servers.

There is only one technological bottleneck: the resources of the physical system utilized in such a deployment. To avoid negating the positive effects, the physical server needs the highest possible configuration, with a super-fast (Gigabit or FireWire) connection to the external environment. Most virtual servers available today simulate 10/100 Mbps connections between the contained deployments.

The raw advantage derived here is, of course, the consolidation of the big ‘irons’ (the server hardware). There are, however, licensing issues that require each virtual server to be treated on par with a separate physical server. That is, if you have only one license for a particular piece of software, you cannot install it on more than one of the virtual servers, even though they share one physical machine. User groups worldwide are working with software vendors to come out with alternate licensing policies for virtual-server deployments. Until then, software cost savings will be nil in the licensed arena. You can, of course, select equivalent license-free software instead; much of it offers equal, if not better, performance and other advantages.

Troubleshooting a runaway server is quite easy. There are no irons to reboot and no real hardware to remove, test and replace. Simply shut down the particular virtual server (perhaps even with an end-task from the Host OS) and change the virtual configuration. Usually, this can be done on the fly for hot-swap and ‘external’ PnP hardware. This matters because some OSs have a hard time working with particular brands of hardware. For example, versions of Windows from 2000 onwards need perfectly working hardware that is on their HCL to function properly and, sometimes, even that is rejected. In a traditional deployment, the only way around this is to go out and purchase hardware that works.

Software keeps getting bigger and more resource-hungry, needing more RAM, faster CPUs and more hard-disk space. This usually means upgrading your hardware along with the software. With virtual server and client technologies, this is no longer necessary. Simply spend that extra money on one super high-end system and run multiple virtual systems inside it, each configured to its particular needs. And, of course, you can replicate as many copies of the virtualized hardware as you want from what is plugged into the real box.

There are some disadvantages, too. In a non-virtual implementation, if a box crashes, only that box’s functions are affected. You could troubleshoot it, possibly replace faulty components and bring it back online. In large-scale deployments, a backup box of the same configuration, with data synchronized up to the last backup, would be available to plug in during the interim. Now, imagine what would happen if all those boxes were actually virtual servers. If a virtual server crashes, only that ‘system’ crashes. If, however, the Host OS crashes, or a hardware component on the real iron fails, the whole box goes down. Troubleshooting such setups would take some doing. And if you want to ensure failover for such implementations, you would need to duplicate the entire box so that you can plumb it into the network during a disaster.

Sujay V Sarma
