Virtualization and server consolidation are the hottest buzzwords in the
industry today. The interesting thing is that virtualization used to be one of
the techniques for doing server consolidation. Today, virtualization can be used
for more than just server consolidation. It can be used for disaster recovery
and business continuity, testing and training, securing desktop environments,
and much more. Similarly, virtualization isn't the only technique for doing
server consolidation. Here, we'll look at the technologies in each area in
more detail.
Let's look at server consolidation first. The most common definition of
server consolidation is that it reduces the number of servers that you have in
your data center and other offices by moving the Operating Systems and
applications that were running on them onto fewer, more powerful servers. This
is done with the objective of reducing administrative and management costs of
your existing fleet of servers. But that's not the only definition for it.
According to Gartner, server consolidation can be divided into three types:
logical consolidation, physical consolidation and rationalized consolidation.
Some vendors categorize server consolidation differently. IBM, for instance,
divides it into centralization, physical consolidation, data integration, and
application integration. In all cases, the
objective is to simplify your server infrastructure. Here, we'll go by Gartner's
definition.
Logical consolidation
This is about what most IT managers dread the most: documentation. How well
documented are the servers in your organization? Do you know which server is
located where, its configuration, how well it is utilized, and what applications
it is running? If you do, then you've already achieved one part of logical
consolidation. The other part is to understand how to use this information
effectively to plan your next server upgrade or purchase better, or improve
troubleshooting.
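To make this concrete, here's a minimal sketch in Python of the kind of inventory record that logical consolidation depends on. The field names and the 20 percent threshold are illustrative assumptions, not part of any standard or tool:

```python
from dataclasses import dataclass

@dataclass
class ServerRecord:
    """One documented server: the facts logical consolidation depends on."""
    name: str
    location: str            # which site and rack the box sits in
    cpus: int
    ram_gb: int
    avg_utilization: float   # 0.0-1.0, taken from monitoring data
    applications: list

def consolidation_candidates(inventory, threshold=0.20):
    """Flag servers running below an assumed 20% utilization mark."""
    return [s for s in inventory if s.avg_utilization < threshold]

inventory = [
    ServerRecord("mail01", "HQ basement", 2, 4, 0.12, ["sendmail"]),
    ServerRecord("db01", "HQ basement", 4, 8, 0.65, ["oracle"]),
]
for s in consolidation_candidates(inventory):
    print(f"{s.name} at {s.location}: {s.avg_utilization:.0%} utilized")
```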
Physical consolidation
This is nothing but reducing the number of physical server sites that you have
by moving servers from those locations into fewer ones. By doing this,
you can save not only the cost of maintaining so many servers, but you also save
on real estate costs.
Rationalized consolidation
This is the most commonly known part of server consolidation, and the most
complex. It dramatically improves server utilization, but you must also have
failover support, as doing this is like putting all your eggs in one basket. In
this technique, you either put multiple applications onto a single Operating
System running on one server, or you run multiple Operating System instances on
a single server and then install applications on top of each OS instance.
Gartner calls the former technique workload consolidation and the latter
partitioning. Workload consolidation needs a robust scheduler that can
balance the hardware resources amongst different applications. There are two
ways of doing workload scheduling: processor binding and software-based
resource allocation. In the former, different applications are tied to
different processors in a multi-processor machine. IBM's AIX Operating System,
for instance, can force processor affinity through its 'bindprocessor' command.
IBM defines processor affinity as the probability of dispatching a thread to a
processor that was previously executing it. HP offers similar control through
its HP-UX 11i OS. In software-based resource allocation, the OS
itself tries to allocate the hardware resources to the applications.
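AIX's bindprocessor command isn't available on other platforms, but Linux exposes the same idea of processor binding through CPU affinity. Here's a small sketch of that Linux analogue in Python (os.sched_setaffinity is Linux-only); it stands in for, and is not, the AIX command itself:

```python
import os

# Processor binding, Linux-style: restrict this process to CPUs 0 and 1,
# so the scheduler dispatches its threads only on those processors.
pid = os.getpid()
print("Allowed CPUs before:", sorted(os.sched_getaffinity(pid)))

os.sched_setaffinity(pid, {0, 1})
print("Allowed CPUs after: ", sorted(os.sched_getaffinity(pid)))
```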
Benefits
- Improve server utilization
- Better server provisioning
- Reduce the number of servers
- Save on real estate requirements in the data center
- Reduce energy requirements for power and cooling of servers
- Reduce overall cost of managing servers
Server Consolidation using Partitioning
Partitioning is another way of doing server consolidation. You can do it at the
hardware level, which basically means allotting processor(s) and memory to
specific applications. This technique is a fairly old one and has been around in
mainframes and the UNIX world for a long time. It's only done on multi-CPU
servers, and physical hardware is used to create partitions on the server. Each
partition can thus have one or more CPUs, dedicated memory and I/O components.
These are electronically isolated and therefore are completely safe from each
other. Even if one partition goes down, the other partitions remain unaffected.
Another form of hardware partitioning can be achieved with blade servers.
They offer electronic isolation, and each server blade contains its own CPUs,
memory, and hard drive. The good news about blade servers is that their costs
have come down significantly.
The second type of partitioning is logical partitioning. This adds firmware to
the hardware to do the partitioning. IBM, for instance, adds a firmware
hypervisor function in its pSeries servers that handles virtual memory
management, mediates debug register and memory access, and provides virtual tty
support. The benefit of logical partitioning is that you can alter the
resources assigned to virtual partitions without stopping the OS. The last form
of partitioning technology is called
software partitioning. This allows one Operating System to run several guest
Operating Systems on top of it. In other words, there's a hypervisor on top of
the host OS on the server, which allows multiple instances of an OS to run on
top of it. In case you haven't figured it out already, this technique is also
known as virtualization. Today, virtualization is one of the industry's most
talked-about technologies and has moved far beyond being a mere technique for
server consolidation. Let's dig deeper into it.
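Before we do, here's a small taste of what software partitioning looks like from the management side. The sketch below uses the libvirt Python bindings against a local QEMU/KVM host; libvirt and KVM are our own assumptions here, chosen only because they're freely available, and aren't tied to any product discussed in this article:

```python
import libvirt  # pip install libvirt-python; assumes a local QEMU/KVM host

# Connect to the hypervisor on this machine and list the guest OS
# instances it is hosting -- each guest is one software partition.
conn = libvirt.open("qemu:///system")
for dom in conn.listAllDomains():
    state, _reason = dom.state()
    running = state == libvirt.VIR_DOMAIN_RUNNING
    print(f"{dom.name()}: {'running' if running else 'not running'}")
conn.close()
```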
Understanding Virtualization
Many of the techniques we've discussed in server consolidation are a form of
virtualization, and they've existed in the UNIX world for ages. What's
relatively new is the introduction of virtualization technology on the x86
platform.
This is good, because it's precisely in this segment that server proliferation
has happened. Most organizations have tons of x86-based servers, the result of
which is high administrative overheads, operating costs, and challenges
in managing the server fleet, etc. In fact, there are many studies which
indicate that most servers in enterprises are not utilized beyond 15-20% of
their actual capacity. This wastes a lot of compute power, and a lot of
precious energy that powers up servers and keeps them cool. Virtualization can
improve server utilization by deploying more than one Operating System and its
applications on a single server. This not only achieves server consolidation,
but also improves resource utilization, reduces the real estate requirement in a data
center, and lowers energy requirements. Moreover, virtualization technology on
the x86 platform is also available for desktops.
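The back-of-the-envelope arithmetic behind these savings is worth seeing once. The numbers below are purely illustrative, built on the 15-20% utilization figure quoted above and an assumed 60% safe ceiling for a consolidated host:

```python
import math

servers = 10               # existing lightly loaded x86 servers
avg_utilization = 0.15     # each runs at ~15% of capacity (per the studies)
target_utilization = 0.60  # assumed safe ceiling for a consolidated host

total_work = servers * avg_utilization  # 1.5 servers' worth of real work
hosts_needed = math.ceil(total_work / target_utilization)
print(f"{servers} servers consolidate onto {hosts_needed} virtualized hosts")
# -> 10 servers consolidate onto 3 virtualized hosts
```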
Types of virtualization
The oldest type of virtualization is emulation, which is running an OS that's
meant to be run on one platform on another platform. The one that's gained
popularity today is native virtualization, which allows you to run multiple
Operating Systems on the same platform. The difference here is that it doesn't
let you run Operating Systems built for some other CPU. Another form of
virtualization is called paravirtualization, which also lets you run multiple
Operating Systems on the same system, with the difference that it's tuned to
provide better interaction with the CPU, memory, and other I/O devices. It can
also interact directly with Intel's VT and AMD's AMD-V technologies to give
better performance.
VMware Infrastructure 3
The beauty of this product is that it can help you create a production-class virtualization environment for your data center. You can move virtual machines from one server to another on the fly, back them up for disaster recovery, restart them if they crash, and much more. So apart from giving you the well-known benefits of virtualization, it also gives you reliability. Specifically, the product has three components: ESX Server, an Infrastructure Management server, and a Licensing server. ESX Server has to be installed on every server on which you want to set up virtual machines. The Infrastructure Management and Licensing servers install on another server and are used to control the licensing and management of all the virtual machines. There's also a separate virtual client that can access individual virtual machines from a web browser. The Infrastructure Management server can have the following add-ons to manage your virtualized data center:
- VMotion: Moves a running virtual machine instantaneously from one ESX server to another, without affecting the running applications.
- VMware HA: VMware High Availability provides fail-over protection. It restarts virtual machines almost instantly, without human intervention, on a different physical server within the same resource pool.
- VMware DRS: VMware Distributed Resource Scheduler works with VMware HA to continuously monitor utilization across resource pools and intelligently allocate available resources among virtual machines based on pre-defined rules.
- VMware Consolidated Backup: Performs full and incremental backups of virtual machines and creates full image backups for DR.
- VMware Import Utility: A free beta utility to migrate Microsoft Virtual Server and Virtual PC images to ESX Server.
Hardware Assisted Virtualization
The dark side of software virtualization on the x86 platform is that performance
takes a hit as the number of virtual machines increases. To tackle this, Intel
and AMD are coming out with virtualization technology in their processors.
Intel calls it Intel VT, while AMD calls it AMD-V (codenamed Pacifica).
Intel VT
This was officially launched in 2005 and was available in most Pentium 4 6x2,
Pentium D 9x0, Xeon 3xxx/5xxx/7xxx, Core Duo and Core 2 Duo processors. The
technology lets a single CPU act as if it were multiple CPUs working in
parallel. It adds extra instructions that are used in virtual machine
emulation. These instructions reduce the communication between virtual machines
and the actual hardware, which was earlier mediated entirely by the VMM
(virtual machine monitor). This eliminates overhead and improves performance.
AMD-V
AMD introduced its virtualization extension to its 64-bit architecture in
August 2006. This technology uses Direct Connect Architecture, which provides a
balanced mechanism to support virtualization software in managing, partitioning,
and securing I/O devices. This is done for improved performance and less
implementation complexity in providing I/O in virtual environments. It helps
eliminate bottlenecks inherent in FSB architectures and gives high-throughput
responsiveness and scalability for your applications to improve overall system
efficiency. Right now, this technology is available in AMD Opteron processors.
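On Linux, you can check whether a processor carries either extension by looking for the 'vmx' (Intel VT) or 'svm' (AMD-V) flag in /proc/cpuinfo. A quick sketch:

```python
# Check /proc/cpuinfo (Linux) for the hardware virtualization flags:
# 'vmx' marks Intel VT, 'svm' marks AMD-V.
with open("/proc/cpuinfo") as f:
    flags = set()
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

if "vmx" in flags:
    print("Intel VT available")
elif "svm" in flags:
    print("AMD-V available")
else:
    print("No hardware virtualization support detected")
```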
Virtual Appliances
This is the latest feather in virtualization technology's cap. Simply put, a
virtual appliance is a combination of an OS and an application. Instead of
distributing apps alone, ISVs can bundle an optimized OS with them. The virtual appliance
would then simply be dropped on top of a virtual infrastructure from some
vendor, and would be ready to run.
This would save IT managers the trouble of first installing an OS on hardware
and then installing and customizing the application. They can instead focus
straightaway on configuring and customizing the application as per their
organization's needs. It's a fairly powerful concept, and one worth watching
out for.
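To see just how little is left to do once an appliance image arrives, here's a hedged sketch that boots a hypothetical appliance disk under QEMU. The image name and memory size are placeholders, and QEMU merely stands in for whatever virtual infrastructure the ISV actually targets:

```python
import subprocess

# Hypothetical appliance: an ISV-supplied disk image with the OS and
# application pre-installed. QEMU is a stand-in hypervisor for illustration.
appliance_image = "crm-appliance.qcow2"

subprocess.run([
    "qemu-system-x86_64",
    "-m", "512",              # memory for the guest, in MB
    "-hda", appliance_image,  # boot straight off the appliance disk
])
```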