Over the last three years, the term 'server room' has all but vanished. Most
traditional server rooms have grown in size, complexity, capacity, reliability,
features and cost. In a word, they have evolved into data centers. But with
this increase in complexity, managing them efficiently has become a challenge.
Fortunately, several technologies have emerged to make the job easier.
Remote Management
The biggest change in data centers is that they no longer require people to be
physically present. Remote-management technologies have become so popular that
you can perform just about every management task remotely. Even when you
outsource your data center management to a third party, they do most of it
remotely. While there are many ways of doing remote management, two
technologies deserve mention:
Remote Desktop Protocol (RDP): RDP has been around since the days of NT 4, in
the form of Terminal Services, but it gained mass popularity when RDP features
were integrated into Windows XP. It is now widely used for remotely accessing
desktops. The protocol consumes very little bandwidth by sending only keyboard,
video and mouse signals over the network, giving you the feel of using the
machine locally. Windows Server 2003 includes tools to access hundreds of RDP
machines over the network simultaneously, and RDP clients exist for Linux and
the Mac as well, making it truly platform independent.
IPKVM: KVM switches are old and popular, but traditional KVMs have a
limitation: you have to be physically present in the data center to use them.
With IP-based KVMs, you can access multiple servers from anywhere in the world
over the Internet. They allow multiple computers to be controlled remotely
across a LAN or WAN using TCP/IP, through either a web browser or a specially
designed viewer. This is better than using RDP, because the software component
is removed from the server side entirely, so you can access any OS over IPKVM.
In fact, you can go down to the BIOS level and configure machines remotely.
Other benefits include failover safety and manageability.
DCML
Data Center Markup Language (DCML) is an XML-based language used to make
different components of the data center talk to each other. Suppose you have a
virtualization layer running on your consolidated data center and you need to
provision different servers for different tasks, decommissioning each one after
its task is accomplished. Whenever you create a new virtual server, all its
data is immediately sent to the data center management and monitoring console:
the machine's IP address, the applications running on it, its configuration and
specs, and the contact person. Without any human intervention, the management
and monitoring console can then add this server to its list and start
monitoring it. The challenge here is sending that information from the virtual
server to the monitoring server, and this is what the DCML standards tackle.
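The flow described above can be sketched in a few lines of code. Note that the
element names below are invented for illustration and are not the official
DCML schema; the point is simply that an XML record of a freshly provisioned
server can be parsed automatically by a monitoring console.

```python
import xml.etree.ElementTree as ET

# Hypothetical DCML-style record describing a newly provisioned virtual
# server (element names are illustrative, not the real DCML schema).
doc = """
<server id="vm-042">
  <ipAddress>10.0.4.2</ipAddress>
  <application>Apache HTTP Server</application>
  <spec cpus="2" ramMB="4096"/>
  <contact>admin@example.com</contact>
</server>
"""

def register_server(xml_text):
    """Parse the provisioning record so a monitoring console could add
    the server to its watch list without human intervention."""
    root = ET.fromstring(xml_text)
    return {
        "id": root.get("id"),
        "ip": root.findtext("ipAddress"),
        "app": root.findtext("application"),
        "contact": root.findtext("contact"),
    }

record = register_server(doc)
print(record["ip"])  # 10.0.4.2
```

In a real deployment the virtualization layer would emit such a record at
provisioning time and retract it at decommissioning, keeping the console's
inventory current automatically.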
Disaster Recovery and BCP
Any discussion on data centers is incomplete without this topic. DR has been a
trend for a long time, but BCP is now more important, as that is ultimately the
goal of doing DR. We've discussed it at length in a separate section of this
cover story.
Blade Servers and Multi-Core
Blades have been around long enough that they need no explanation. But with
their costs coming down, and with new enterprise IT requirements such as HPC,
more data centers are going to need them. Likewise, with the introduction of
multi-core processors on the x86 platform this year, server densities in the
data center are going to rise sharply. Together, these technologies will boost
the compute capabilities of data centers like never before.
Power and Cooling
The key reason this becomes important is the rising density of equipment in the
data center. Technologies like blades and multi-core not only pack huge amounts
of computing power into very little space, they also generate tremendous heat
in the data center. Therefore, you need appropriate cooling solutions
to handle it. Today, most major power conditioning equipment companies have
transitioned into becoming physical infrastructure management companies. Two
prominent examples of this are APC and Emerson. Both offer not only UPSs, but
also racks, power, and cooling solutions. Moreover, they offer solutions to
remotely monitor and manage all their power and cooling equipment.
Server Racks
Believe it or not, a lot of engineering goes into server racks, so choosing the
right one for your data center requires careful planning. To illustrate, take
the simple example of cooling. Many cooling solutions end up cooling the entire
data center, so much so that you need warm clothes just to enter it. That
defeats the purpose, because the actual objective is to cool the servers and
other equipment inside the racks; you don't need cooling outside them. This is
where a well-engineered rack comes in. Many rack vendors have worked out rack
solutions that do just this: they keep the cooling inside the racks and ensure
that cooling happens where it is needed. One company that comes to mind here is
President, whose racks let you do this.
Category 5, 5E, 6 and 7 Performance Specification Chart

Parameter                 | Category 5 | Category 5E | Category 6 | Category 7
Specified frequency range | 1-100 MHz  | 1-100 MHz   | 1-250 MHz  | 1-600 MHz
Attenuation               | 24 dB      | 24 dB       | 21.7 dB    | 20.8 dB
NEXT                      | 27.1 dB    | 30.1 dB     | 39.9 dB    | 62.1 dB
Power-sum NEXT            | N/A        | 27.1 dB     | 37.1 dB    | 59.1 dB
ACR                       | 3.1 dB     | 6.1 dB      | 18.2 dB    | 41.3 dB
ELFEXT                    | 17 dB      | 17.4 dB     | 23.2 dB    | N/A
Propagation delay         | 548 nsec   | 548 nsec    | 548 nsec   | 504 nsec
Delay skew                | 50 nsec    | 50 nsec     | 50 nsec    | 20 nsec

NEXT (Near End Crosstalk): Adjacent pairs are susceptible to crosstalk, i.e. strong signals from one pair may be picked up by adjacent pair(s).
PSNEXT (Power Sum NEXT): The sum of the individual NEXT effects on each pair from the other three pairs.
ACR (Attenuation to Crosstalk Ratio): The difference between crosstalk loss and attenuation.
FEXT (Far End Crosstalk): Similar to NEXT, except that the signal is sent from the near end and crosstalk is measured at the far end.
ELFEXT (Equal Level Far End Crosstalk): FEXT with attenuation subtracted from it. It thus gives a more accurate picture and is preferred.
Propagation delay: The time required for a signal to propagate from one end of the circuit to the other.
Delay skew: The difference in propagation delay between the fastest and slowest pairs in a UTP cable. If it is too high, it may not be possible to reconstruct the signal at the receiving end.
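Since ACR is defined as the difference between crosstalk loss and attenuation,
the ACR row of the chart can be recomputed directly from the NEXT and
attenuation rows; a quick sketch using the chart's own figures:

```python
# ACR (Attenuation-to-Crosstalk Ratio) = NEXT - attenuation, both in dB.
# The figures below are taken from the chart above.
specs = {
    "Cat 5":  {"attenuation": 24.0, "next": 27.1},
    "Cat 5E": {"attenuation": 24.0, "next": 30.1},
    "Cat 6":  {"attenuation": 21.7, "next": 39.9},
    "Cat 7":  {"attenuation": 20.8, "next": 62.1},
}

def acr(category):
    """Compute ACR in dB for a cable category from the table above."""
    s = specs[category]
    return round(s["next"] - s["attenuation"], 1)

for cat in specs:
    print(f"{cat}: ACR = {acr(cat)} dB")
# Matches the ACR row: 3.1, 6.1, 18.2 and 41.3 dB respectively
```

A higher ACR means the received signal stands further above the crosstalk
noise floor, which is why Cat 7's 41.3 dB is such a large step up.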
10 GbE
10 Gigabit Ethernet is the fastest Ethernet standard available to date,
providing ten times the speed of a standard Gigabit network. The good thing is
that it has been tested on both fiber and copper; the latter will help data
centers get high bandwidth at low cost. 10G adds a sub-layer to the PHY
(physical) layer of the OSI model and uses a different encoding scheme
(64b/66b) for better error detection during data transfer. It also supports
full-duplex transmission only. As 10G becomes more popular, several
applications could use the extra bandwidth: video on demand, distributed
computing, HPC deployments, medical imaging, and scientific simulation.
Enterprises can set up buildings at larger distances within a campus and link
them, facilitating bandwidth-intensive applications such as VoIP and digital
video conferencing. They can even locate their data centers and disaster
recovery centers in a different city and still get fast access. Moving from
Gigabit to 10 Gigabit will cost two to three times more, but the performance
gain might make it worth the effort, and the cost of 10G products is expected
to fall.
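One reason 64b/66b matters is overhead: it carries 64 payload bits in a 66-bit
block, versus the 8b/10b coding that Gigabit Ethernet uses on fiber. A quick
back-of-the-envelope comparison (the figures are the standard coding ratios,
not measurements):

```python
# Coding efficiency of a line code: payload bits / transmitted bits.
def efficiency(payload_bits, block_bits):
    return payload_bits / block_bits

gige_8b10b = efficiency(8, 10)     # 8b/10b, used by Gigabit Ethernet on fiber
tenge_64b66b = efficiency(64, 66)  # 64b/66b, used by 10 Gigabit Ethernet

print(f"8b/10b efficiency:  {gige_8b10b:.1%}")   # 80.0%
print(f"64b/66b efficiency: {tenge_64b66b:.1%}")  # 97.0%
```

In other words, 64b/66b wastes only about 3 percent of the line rate on coding
overhead, compared with 20 percent for 8b/10b.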
Cat 7 cabling
The use of Cat 7, or Category 7, cables is another emerging trend in data
centers. It is the latest and fastest Ethernet cabling standard, and it is
fully backward compatible with 10/100/1000 Mbps networks. As network backbones
move from 1 Gbps to 10 Gbps, Cat 7 is going to be adopted rapidly, because it
is the only copper cabling standard that supports such high-speed data
transfers. The other option for 10G is fiber. But if you plan to deploy fiber,
the biggest problem is the loss of backward compatibility, and you have to
build a new cabling infrastructure for your organization. Cat 7 does have one
limitation compared to fiber: with fiber you can connect two end-points up to
45 km apart, while with copper you can only go up to 100 m. So if you are
planning to build a MAN, there is no option other than fiber; but for
connecting LAN devices, Cat 7 would be the best possible option.
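The copper-versus-fiber decision above boils down to link distance, and can be
captured as a simple rule of thumb (using the 100 m and 45 km limits quoted
here; this is a simplification, not a full standards table):

```python
# Simplified medium selection for a point-to-point 10G link, based on
# the distance limits quoted in the article: ~100 m for Cat 7 copper,
# up to 45 km for fiber. Real deployments have more variables.
def medium_for(distance_m):
    """Suggest a 10G cabling medium for a given link distance in meters."""
    if distance_m <= 100:
        return "Cat 7 copper"
    if distance_m <= 45_000:
        return "fiber"
    return "beyond a single link; needs repeaters or another design"

print(medium_for(80))     # Cat 7 copper -- typical in-room LAN run
print(medium_for(2_000))  # fiber -- e.g. linking buildings on a campus
```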
Managed structured cabling
This is another key trend in structured cabling today. As data centers become
more complex, with more equipment coming in, you need something like this to
manage the connectivity. It is a real-time Layer-1 management system for
networks. Such an intelligent system consists of an end-to-end structured
cabling system with intelligent patch panels and software agents that provide a
complete view of physical-layer connectivity and connect it to the logical
layers. It collects real-time information used to automatically maintain the
connectivity database, and can present the data in a compressed format,
enabling administrators to troubleshoot and document the network efficiently.
You can, for example, immediately trace broken links and rectify them. Being
real time, the system allows admins to resolve issues quickly. Coupled with
today's manageable switches, the efficiency increases tremendously: the admin
can not only find and manage SNMP-enabled devices from a central location, but
also check the data patterns running through the network. With the growth of
multi-location offices and large, multiple campuses, properly managing network
resources has become very important, and intelligent cabling lets you do this.
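The link-tracing idea can be sketched as follows. The data model here is
invented purely for illustration: intelligent patch panels report per-port
link state in real time, and the management software checks that state against
its connectivity database to flag broken links.

```python
# Toy Layer-1 connectivity database: each (panel, port) maps to the
# device it should reach, plus a real-time link status reported by the
# panel's agent. (Invented data model, for illustration only.)
connectivity = {
    ("panel-A", 1): {"device": "web-srv-01",  "link_up": True},
    ("panel-A", 2): {"device": "db-srv-02",   "link_up": False},
    ("panel-B", 7): {"device": "core-switch", "link_up": True},
}

def broken_links(db):
    """Return (panel, port, device) for every link currently down."""
    return [
        (panel, port, info["device"])
        for (panel, port), info in db.items()
        if not info["link_up"]
    ]

for panel, port, device in broken_links(connectivity):
    print(f"ALERT: {panel} port {port} -> {device} is down")
```

A real system would populate and refresh this database from the panel agents
automatically; the admin only sees the alerts and the documented topology.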