July 4, 2009

Cadence designs CAD software for chip and PCB design, and runs a huge number of regressions on its products before releasing them to customers to ensure quality. This workload is served by a centralized server-farm compute infrastructure that lets the company innovate continuously. Before consolidation, the entire compute server infrastructure was scattered across 12 small computer rooms in three buildings on the Noida campus, and a tremendous amount of effort was required to maintain and manage it, which drove the move to consolidate everything into one large datacenter. The inefficiencies lay not just in maintaining the rooms but also in cooling, effective utilization, and the cost of providing even minimum n+1 resiliency in each room. Availability, scalability and performance were the main guiding principles of the design. The new datacenter has delivered 100% uptime since May 2007, and its energy efficiency (DCiE, the reciprocal of PUE, or power usage effectiveness) is around 81%, corresponding to a PUE of roughly 1.23. As far as scalability is concerned, the modular approach of the power backup solution meant the company did not have to commission the entire UPS backup capacity at the outset. As datacenter utilization goes up, it can add more UPS modules, reducing upfront capital cost and the wastage of unused resources.
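The 81% efficiency figure is a DCiE-style ratio (IT equipment power divided by total facility power), and PUE is simply its reciprocal. As a quick sanity check on the conversion (a minimal sketch, not Cadence's own tooling):

```python
# DCiE (Data Center infrastructure Efficiency) is the fraction of total
# facility power that actually reaches the IT equipment.
# PUE (Power Usage Effectiveness) is its reciprocal: total / IT power.

def dcie_to_pue(dcie: float) -> float:
    """Convert a DCiE fraction (0-1) to the equivalent PUE."""
    return 1.0 / dcie

def pue_to_dcie(pue: float) -> float:
    """Convert a PUE value (>= 1.0) back to a DCiE fraction."""
    return 1.0 / pue

# The article's ~81% efficiency corresponds to a PUE of about 1.23.
print(round(dcie_to_pue(0.81), 2))  # 1.23
```

An ideal facility, where all power reaches the IT load, would have DCiE = 100% and PUE = 1.0; typical datacenters of that era ran well above PUE 1.5, which is what makes 81% (PUE ~1.23) a strong number.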

Ashwin P. Rao
Group Director — IT

Q What were the key business challenges faced while implementing this
project?

Cadence’s Noida datacenter is the heart of the production environment for the
EDA tools being developed and supported. The entire product R&D and support
organization is highly dependent on the availability of this infrastructure.
Consolidating the infrastructure while guaranteeing higher uptime and
operational efficiency was a key business priority, which the design and
implementation team happily took on as a challenge.

Q What were the key technical challenges faced while implementing this
project?

Ensuring uptime: This was one of the major challenges facing the design
team. They had to ensure that the project, once implemented, would give 100%
uptime to the R&D production environment.

Energy efficiency: Saving energy while creating this datacenter was
always at the top of our minds. We wanted to save energy and make a visible
green impact as part of this project.

Scalability: Modularity was one of the key mantras. Each and every
critical piece of the datacenter has been designed keeping this basic
guideline in mind. This was critical to ensure that we do not unnecessarily
block our capital.

Cooling is also handled very efficiently using a hot-and-cold-aisle design,
which reduces the short-circuiting between the cold air thrown by the ACs and
the hot air generated by the servers. This has significantly reduced the
cooling requirement: the company can set the ACs at a higher temperature
(saving power and cost) while still deriving the desired amount of cooling.

Company Scenario
Before Deployment
  • Before the deployment, there were 12 different server rooms with 1,350
    servers spread across three locations at Cadence Noida, resulting in
    management difficulty and resource wastage
After Deployment
  • Centralized management saved money on power, equipment and space.
Implementation Partner

Anil Munjal, PCI

The overall datacenter is very well planned, with redundancy for everything
from IT components to UPSs (two redundant banks of UPS systems feeding the
datacenter through separate electrical distribution systems segregated by a
concrete wall), gensets, local transformers, etc. There is also a facility to
monitor power consumption at each rack in real time, and static transfer
switches are used for servers with a single PSU. Surge suppressors are fitted
at every row-level distribution, and a dedicated grounding system using
chemical earthing pits has been provided for the datacenter. The planning has
gone to the extent of not allowing a single copper wire to connect the
datacenter to the network, so that no short circuit or earthing leak can
propagate from the production network into the datacenter. To achieve this,
the production network is connected to the datacenter over fibre lines. Most
servers in the datacenter can now be remotely accessed through a cheaper,
home-grown web-based management console built on IPMI and related
technologies.

For securing the premises, they have used CCTV surveillance along with
biometric and smart-card based access control. One very innovative thing they
have done is set up a separate room for staging and repair work, so that
service providers don’t have to enter the datacenter for any reason. The staff
brings devices to the staging room, gets them checked and fixed, and takes
them back. Similarly, for telecom lines and equipment, there is a separate
room where all the lines and devices terminate. Any telecom service provider
who needs to fix something can operate from that room itself, reducing the
need for outsider access to the core datacenter. Let’s now look at the green
initiatives Cadence has taken, the results of which won it the prize in the
green category of BIT.

As discussed earlier, the hot-and-cold-aisle containment and the consolidation
itself turned out to be the two major sources of electricity savings. After
the consolidation, the company grew the number of servers in its grid from
1,000 to 1,350, an increase of 35%, while at the same time reducing power
consumption from 11,51,064 kWh to 6,93,792 kWh, a saving of around 40% worth
about Rs 20 lakh per annum. And not only this: while building the new
datacenter they kept recycling very much in mind, reusing some 50,000 bricks,
65,000 kg of steel and 30 cu ft of wood. They also replanted the 40 trees that
stood in the area where the new datacenter was built.
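The quoted figures hang together arithmetically. A quick check (consumption and server counts are from the article; the per-unit electricity tariff is inferred from the Rs 20 lakh figure, not stated in the article):

```python
# Annual power consumption before and after consolidation (from the article,
# Indian-style figures 11,51,064 and 6,93,792 kWh).
before_kwh = 1_151_064
after_kwh = 693_792
servers_before, servers_after = 1000, 1350

growth_pct = (servers_after - servers_before) / servers_before * 100
saving_pct = (before_kwh - after_kwh) / before_kwh * 100
print(f"server growth: {growth_pct:.0f}%")   # 35%
print(f"power saving: {saving_pct:.0f}%")    # 40%

# Rs 20 lakh = Rs 2,000,000 saved on 457,272 kWh implies a tariff of
# roughly Rs 4.37/kWh -- an inference, not a figure from the article.
tariff = 2_000_000 / (before_kwh - after_kwh)
print(f"implied tariff: Rs {tariff:.2f}/kWh")
```

So the grid absorbed 35% more servers while consuming 40% less energy, which is consistent with consolidation plus aisle containment rather than either measure alone.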
