IaaS (Infrastructure as a Service) could well become a hot buzzword next year or
in the near future. IaaS lets users share and use infrastructure or hardware
(such as processing power, storage, interconnects, etc) as a service, both on
premise and over the Internet.
This is similar to SaaS, where you consume software in the same manner. Two
technologies look very promising in this respect, and we talk about them in
this article. Let's start with cloud computing.
Cloud Computing: The story so far
This year, India and the world have witnessed a plethora of vendors adopting
cloud computing as a technology. These vendors were essentially SaaS or managed
service providers. A look at my mailbox for just the last week turned up at
least six press releases about companies going the cloud way, either as clients
or as service providers.
The benefits are well known to us. But let us clarify one thing: the cloud, as
a technology, is nothing new.
We still consider it a future tech today because of its recent and massive
adoption in India and worldwide, which is changing the way the SaaS industry
works. Moreover, its impact on the industry is considerable.
So, you must be wondering what's going to happen to this technology next year.
Lots more adoption, and if our prediction is correct, a lot of in-house or
on-premise cloud deployment is going to be the big new development.
On-premise cloud
To understand in-house cloud computing, we first have to understand the
architecture of cloud computing. For those who use cloud computing as a
service, it is a technology that lets the service provider deliver an
application in a manner not restricted by resources, one that can be scaled up
or down depending on need and paid for accordingly.
But making this work requires a lot of technologies, such as virtualization,
clustering, and job scheduling and management. The point here is that if your
organization already uses virtualization on very high-end servers to provision
server instances to your users, you can go ahead and adopt the technologies
that fuel cloud computing, and make your life much easier by completely
automating the provisioning and scaling-up job.
Consider a scenario where a bank has consolidated some 500 servers into 60
very high-end hardware boxes and is provisioning and managing resources for
the virtual server instances. Today this task is done manually. But if the
bank could automate the entire provisioning process, the overall management
hassle of such a deployment would go down tremendously. Additionally, as the
entire setup is unified into a single pool of resources using cloud-like
technologies, or what some people call EC (Elastic Computing) technology, the
granularity with which servers can be provisioned also increases manifold.
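The pooled, automated provisioning described above can be sketched in a few lines of code. This is a minimal illustration only; the class and method names are hypothetical and not tied to Eucalyptus, Enomalism, vCloud, or any other real product:

```python
# Sketch of an elastic resource pool: consolidated servers become one pool
# of cores, and virtual server instances are carved out of it, grown, shrunk
# and released automatically instead of by hand. All names are hypothetical.

class ResourcePool:
    """A single unified pool of CPU cores drawn from consolidated servers."""

    def __init__(self, total_cores):
        self.total_cores = total_cores
        self.instances = {}  # instance name -> cores currently allocated

    def free_cores(self):
        return self.total_cores - sum(self.instances.values())

    def provision(self, name, cores):
        """Carve a new virtual server out of the pool, if capacity allows."""
        if cores > self.free_cores():
            raise RuntimeError("pool exhausted")
        self.instances[name] = cores

    def scale(self, name, cores):
        """Grow or shrink a running instance: the 'elastic' part."""
        extra = cores - self.instances[name]
        if extra > self.free_cores():
            raise RuntimeError("pool exhausted")
        self.instances[name] = cores

    def release(self, name):
        """Return an instance's cores to the pool."""
        del self.instances[name]
```

Because every instance draws from one pool rather than from a fixed physical box, an administrator (or an automated policy) can hand out capacity at whatever granularity the workload needs, which is exactly the manual task the bank scenario above would automate.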
[Figure: The interface of an open source cloud computing application that can be deployed on premise and used for better provisioning of servers.]
You might ask why we think this technology could be so impactful next year.
Well, personally I am very convinced about its adoption in enterprises, as I
can see direct benefits that large enterprises can harness from it. The fact
that the number of companies coming out with such products for commodity usage
is increasing day by day also strengthens my belief.
The commoditization of cloud computing has started, in a small way though,
with some open source products that have made it possible for enterprises and
test labs to deploy their own clouds and see the possible benefits. Two such
products are Enomalism (http://www.enomaly.com/) and Eucalyptus
(http://www.eucalyptus.com/), which provide an elastic computing platform for
enterprises. And we are proud to say that we were the first and only Indian
magazine to talk about both applications, along with a guided deployment
experience for the same. You can go through some of those articles at
http://tinyurl.com/ygmeug9. Now things are changing in two ways. First, the
open source products we just talked about are going beyond being free,
test-class applications to becoming enterprise-ready products with support;
both Enomalism and Eucalyptus today have a range of enterprise-class products
that are not available for free and come with support. Second, companies like
VMware are also getting into this business and bringing new products to this
segment: VMware vCloud is an application that lets you build a private cloud
or federate it on-demand with partner-hosted public clouds.
CERN Grids
With the LHC (Large Hadron Collider) becoming a hot topic among the masses,
the CERN grid is also becoming a topic of interest for many. Both terms are
pretty well known in the IT industry: CERN for the original development of the
World Wide Web, and a grid as a loosely coupled cluster of computers/servers
that creates a processing pool. So what's so great about the CERN grid, and
why might it be a path breaker in the coming future?
To understand this, you first have to understand how a grid works. In simple
terms, a grid is similar to an HPC (high performance computing) cluster, which
uses multiple networked machines to share their processing power. The
difference is that in a grid, the interconnect is loosely coupled: the nodes
are connected over a medium like the Internet, which is comparatively slower
than an HPC interconnect and does not provide prompt acknowledgements. For
this reason, job handling in grids is more complicated than in HPC. A very
old and simple example of a grid is SETI@home, which lets people join a grid
to share or donate their processing power over the Internet.
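The job-handling difficulty described above can be made concrete with a small sketch: because a loosely coupled interconnect gives no prompt acknowledgements, a grid scheduler must time out silent nodes and hand their work units to someone else. The names below are hypothetical; real middleware such as BOINC (which runs SETI@home) is far more elaborate:

```python
# Sketch of grid-style job handling over a slow, loosely coupled link.
# Work units are assigned to volunteer nodes; if a node stays silent past
# a timeout, its unit is reclaimed for reassignment. Names are hypothetical.
import time

class GridScheduler:
    def __init__(self, timeout_seconds):
        self.timeout = timeout_seconds
        self.pending = {}   # work-unit id -> (node, time assigned)
        self.done = set()

    def assign(self, unit_id, node, now=None):
        """Send a work unit to a node; no fast acknowledgement expected."""
        self.pending[unit_id] = (node, now if now is not None else time.time())

    def acknowledge(self, unit_id):
        """A result finally arrived over the slow link: unit is finished."""
        self.pending.pop(unit_id, None)
        self.done.add(unit_id)

    def reap_stale(self, now=None):
        """Reclaim units whose nodes stayed silent past the timeout."""
        now = now if now is not None else time.time()
        stale = [u for u, (_, t) in self.pending.items()
                 if now - t > self.timeout]
        for u in stale:
            del self.pending[u]
        return stale  # these go back into the queue for another node
```

On an HPC interconnect this machinery is largely unnecessary, since nodes respond within microseconds; on the Internet, timeouts and reassignment are what keep the grid making progress.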
But the speed of the Internet is very low in comparison, and this limits these
grids. The Internet runs over interconnects primarily deployed for telecom
needs, where bandwidth requirements were not that high.
So, to make grids more responsive, CERN is working on a parallel Internet that
connects only over fibre, is around 10,000 times faster than today's
broadband, and will be used primarily for sharing processing power and other
resources. The need for this grid came from the LHC's requirement to churn out
around one GB of data every second.
Right now, somewhere around 100,000 computers across 11 centers around the
world are connected to the grid to process this data. But if we go by
tradition, sooner rather than later this grid will replace today's Internet
with a much faster and richer experience, one that will also be used for IaaS
or infrastructure sharing (sharing processing power, storage, etc). So we
ought to keep an eye on its development.