
Two Enterprise Blade Servers

Our test lab has been sounding like a busy airport for three months now, ever since we brought in and set up three blade servers for review

PCQ Bureau

Our test lab has been sounding like a busy airport for three months now, ever since we brought in and set up three blade systems for review. It has been an exciting three months, we must say, getting our hands dirty on some of the most high-end equipment that goes into data centers (and hearing some of our co-workers complain that they can't hear themselves think over the noise made by our airplanes...oops...blades!). A blade system isn't really a single product, but more like a mini data center, with lots of servers, networking devices, storage, etc., all fitted into a small box. It's a different matter, of course, that such a small box itself weighs more than half a tonne!


A blade system is an engineering marvel, with so much compute capability built into such a small form factor. It's ideal for organizations looking to consolidate servers and save precious real estate in their data centers. It can also be used for HPC (High-Performance Computing) applications, or even to replace the entire server room of a small organization, which could start with a few blade servers and then add more as its requirements grow.

The three blade systems we evaluated were from Dell, NetWeb, and Fujitsu. The first two have been reviewed in this issue, while the one from Fujitsu will be covered in the next, because we couldn't finish testing it completely, despite all the late nights and weekends we spent in the office evaluating the blades. The fact that the Fujitsu system was taken back for a customer demo didn't help either. Hence, you'll see its review next month, as we try to work out something with the folks at Fujitsu in the meantime to carry out the remaining tests.

So here is a comprehensive analysis of the two blade systems we reviewed this time, from Dell and NetWeb.


Dell PowerEdge M1000e Blade System

First up on the block was Dell's eleventh-generation PowerEdge M1000e blade enclosure. Up to 16 half-height or 8 full-height blades can fit into its 10U chassis. Alternatively, you could put combinations of half- and full-height blades into the enclosure, and Dell offers three different blade configurations for each type, meaning a total of six different blade models for you to choose from. The choice of CPUs for these blades ranges from Intel's Xeon 5200, 5400, or 5500 series to the powerful six-core Opterons from AMD. The blade models follow a particular naming pattern: model numbers that end in a '0', viz. the M600 and M610, are based on Intel CPUs, while those ending in a '5', viz. the M605 and M805, are Opteron-based.


We received two full-height and one half-height blade for review, with impressive configurations. One of the full-height blades sported four quad-core Xeon X5570 CPUs running at 2.93 GHz each. It has a whopping 18 DIMM slots for DDR3 memory, and the blade that shipped to us had 144 GB RAM, fitted as 18 x 8 GB DDR3 modules. For storage, it contains four bays for 2.5-inch SAS drives. Ours came with 4 x 146 GB drives, each running at 10K rpm.

The second full-height blade had four six-core AMD Opteron 8439 CPUs running at 2.8 GHz each. This one supports even more memory than the Intel-based full-height blade, with a whopping 24 DIMM slots. Ours came with 10 of them populated with 8 GB DDR2 modules. Storage, however, is more limited on this blade, with only two drive bays, which can take up to 300 GB drives each. For such a powerful machine, it would be nice to have four drive bays.

Rear of the Dell enclosure with redundant PSUs, cooling fans, CMC and the network fabric.
The Dell blades have a tool-free design; opening them for physical inspection or adding hardware is fairly straightforward.

The half-height blade servers are a little less powerful than the full-height ones. Ours came with two quad-core Xeon X5570 processors, 96 GB RAM (8 GB x 12 DIMMs), and two 146 GB, 2.5-inch SAS drives.

Another small but useful thing you'll find on these blades is an LED indicator, which glows orange when there's a problem and the blade is not functioning properly, and a cool shade of blue otherwise. So at a glance, a data center admin can tell which blades are working and which ones are down.

The chassis alone for the Dell machine weighs about 50 kg, and a fully populated unit with all the blades, switches, PSUs, cooling fans, etc. weighs about 182 kg! Considering that it's just a 10U unit, imagine the load on the floor when you fit three of these in a 42U rack! So plan your data center design accordingly if you intend to install these babies.
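To put that number in perspective, here's a quick back-of-the-envelope calculation in Python. The chassis weight is the figure quoted above; the rack's own weight and its floor footprint are our assumptions, so treat the result as a rough indication only.

# Rough floor-load estimate for three fully populated M1000e chassis in one 42U rack.
CHASSIS_KG = 182            # fully populated M1000e, as quoted above
RACK_KG = 130               # typical empty 42U rack (our assumption)
FOOTPRINT_M2 = 0.60 * 1.07  # ~600 mm x ~1070 mm rack footprint (our assumption)

total_kg = 3 * CHASSIS_KG + RACK_KG
print(f"Total rack weight: {total_kg} kg")
print(f"Floor loading: {total_kg / FOOTPRINT_M2:.0f} kg per square metre")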


Ease of Setup

Setting up the M1000e is a breeze. Simply insert all the modules you require, connect a keyboard, display, and mouse, and power it on. After that, the moment you plug in a blade server, the chassis automatically recognizes it. The setup program asks for drivers to install, which come on a DVD along with the system. After this, you can see the system booting, and you can start loading an OS on it. Each blade takes around two minutes to initialize and be ready. After that, the blade server's BIOS is displayed, post which the OS takes another two minutes to boot up.

The Mid-Plane

As the name suggests, the mid-plane sits in the middle of the blade enclosure and allows the blade servers to plug into it from the front, and the power distribution, network switches, and cooling fans to plug in from the rear. It's essentially a printed circuit board with female connectors for the blade servers. The good thing about the mid-plane is that it's scalable, in that it's been designed to support 10 GbE and QDR InfiniBand. This essentially protects your investment, so that future blade servers that come with these technologies can be plugged right in and the mid-plane will happily support the higher I/O bandwidth.

On-board LCD of the Dell blade system provides basic networking and health info on the enclosure and the blades.


Since the blades are designed to draw cool air in from the front and push the exhaust out from the rear, the mid-plane also houses ventilation holes to allow the hot air to move out from the enclosure's rear.

Hot Swappability and Redundancy
Just about everything that plugs into the Dell chassis is hot-swappable and redundant for high availability, be it the six PSUs (power supply units), the six I/O module bays, the CMCs (Chassis Management Controllers), the nine cooling fans, or even the blade servers themselves. We managed to plug in and pull out all components at will, and the machine kept running 'without batting an LED!'

The funny thing we found about the blade enclosure was its redundancy modes. There's a zero-redundancy mode, which requires at least three PSUs to be plugged in, without which the system won't run. At 2.3 kW per PSU, that's nearly 7 kW of power with three PSUs plugged in. Surely if you initially require very few components, say a single blade server, one network or pass-through switch, one CMC, and so on, you wouldn't require so much spare power. The best thing would have been to allow for two PSUs with 1+1 redundancy, so that if one fails, the other takes over. Why should the buyer pay for the additional PSU? The other redundant power modes supported by the M1000e are 3+1 and 3+3, where the former implies that there's one spare PSU available on standby, while the latter implies three PSUs on standby.
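To see what those modes translate to, here's a minimal sketch that simply multiplies out the 2.3 kW per-PSU rating quoted above for each mode; the split between 'usable' and 'installed' capacity is our simplification of how the modes work.

# Back-of-the-envelope PSU capacity for the M1000e's power redundancy modes,
# assuming the 2.3 kW per-PSU rating mentioned above.
PSU_KW = 2.3

modes = {
    "No redundancy (3+0)": (3, 0),  # three PSUs share the load, no spare
    "3+1": (3, 1),                  # one PSU on standby
    "3+3": (3, 3),                  # a full standby set of three PSUs
}

for name, (active, standby) in modes.items():
    usable = active * PSU_KW                 # capacity the chassis can draw on
    installed = (active + standby) * PSU_KW  # capacity you have to buy
    print(f"{name}: {usable:.1f} kW usable, {installed:.1f} kW installed")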


The power management feature of the web-based console shows you the power consumption patterns of the individual server modules.

The six I/O module bays support 3+3 redundancy, meaning whichever three modules you choose can be backed up by three redundant modules. Likewise, there's an optional redundant module for the CMC.

The Network Fabric

The M1000e provides a very impressive number of I/O options. Each blade server supports two LAN ports on the motherboard by default, and you can optionally add two mezzanine cards to each blade server, so that the system can connect to multiple network topologies, be it a LAN, a SAN, or an interprocess communication fabric. The LAN ports can be connected through the mid-plane to either an Ethernet pass-through module or an actual GbE switch, which could be a Dell PowerConnect or one of Cisco's Catalyst series of switches. The two mezzanine cards could connect to either 10 GbE modules or a Brocade Fibre Channel switch.

Management System

The M1000e has an impressive management system, which can be accessed physically from the blade system itself, or remotely over the network using a web browser.

There's an analog integrated KVM switch from Avocent, with USB ports and a D-sub connector for attaching a keyboard, mouse, external DVD drive, and a monitor. The KVM provides these connectivity options from the front as well as the rear of the blade enclosure, and lets you access all 16 blades that might be inserted into the enclosure. The good thing here is that you can plug in an external DVD drive and load an OS or other applications onto any of the blade servers.

The front of the M1000e enclosure also contains a sleek LCD panel with a small GUI to indicate the status of the various blade servers inserted into the enclosure. From this panel, you can view the IP addresses of the enclosure as well as of the blade servers inserted in it. The LCD panel can help you quickly perform troubleshooting tasks on the blades as well as the enclosure.

The third way to manage the M1000e blade system is remotely over the network, using the CMC (Chassis Management Controller). The CMC contains two Ethernet ports, one of which can be used to connect the system to your network so that you can access it remotely. The second one can be used to connect a machine directly to the blade system to access the management software. There's also a serial port, in case you'd like to access the CLI of the CMC.
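For admins who'd rather script the CMC than click through the web console, the CLI can also be driven remotely. Below is a minimal sketch that assumes Dell's racadm utility is installed on the management workstation; the IP address and credentials are placeholders, and the exact subcommands available can vary with the CMC firmware version.

import subprocess

# Placeholder CMC address and credentials -- replace with your own.
CMC_IP = "192.168.0.120"
USER = "root"
PASSWORD = "password"

def racadm(*args):
    """Run a remote racadm command against the CMC and return its output."""
    cmd = ["racadm", "-r", CMC_IP, "-u", USER, "-p", PASSWORD, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# Chassis summary and power budget status (subcommand names may vary by firmware).
print(racadm("getsysinfo"))
print(racadm("getpbinfo"))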

The Management Software

The CMC software can be accessed over the network using a web browser, and provides a host of management features. It lets you control all components in the blade enclosure, and color-codes all components so you can quickly identify whether or not they're functional. You can drill down into any component from the software to get its detailed specs. Besides this, one really useful feature we found in the software is its power monitoring capability. From the software interface itself, you can monitor how much power each blade is actually consuming, the cumulative energy drawn in kWh, and even the peak consumption for each blade.


Power Consumption

After powering on just the blade enclosure with all the rear components (CMC, fans, PSUs, I/O modules, etc.) plugged in, the M1000e consumed around 360 Watts of power in steady state. Note that we hadn't plugged in any of the blade servers up to this point. After plugging in the blades, we could measure the power consumption of each remotely using the CMC. We found that the full-height blade consumed 280 W of power in steady state; we then plugged in two half-height blades, which consumed 141 W and 125 W of power respectively.
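Here's a quick tally of the figures we measured; the energy-per-day number is a rough estimate that assumes the whole system keeps drawing these steady-state levels around the clock.

# Tallying the steady-state power readings taken from the CMC.
chassis_w = 360             # enclosure with only the rear components powered on
blades_w = [280, 141, 125]  # one full-height and two half-height blades

total_w = chassis_w + sum(blades_w)
daily_kwh = total_w / 1000 * 24  # rough estimate at constant steady-state draw

print(f"Total draw: {total_w} W")
print(f"Approximate energy per day: {daily_kwh:.1f} kWh")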

Temperature Control

We loved the cooling fan technology incorporated into this blade unit. The

cooling fans auto-adjust their RPM to keep the unit cool. In fact, just to test

their effectiveness, we pulled out a few fans while the machine was running,

hoping that the temperature inside the enclosure would go up. We waited for a

while, and kept a constant vigil on the internal temperature, which incidentally

is another good feature provided by the CMC. The temperature indeed increased

after about ten minutes, and the remaining fans really picked up speed after

that. The fans continued to spin hard until the temperature came back down.

During the period where the fans were spinning faster, we noticed a jump in the

overall power consumption.
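For readers curious about what such closed-loop fan control looks like, here's a deliberately simplified sketch of the general idea: fan speed scales with how far the measured temperature has drifted above a target. This is our own illustration, not Dell's actual algorithm, and the temperature target, RPM limits, and gain are all made-up values.

# A much-simplified illustration of temperature-driven fan control.
TARGET_C = 25.0                 # desired enclosure temperature (assumed)
MIN_RPM, MAX_RPM = 2000, 10000  # fan speed limits (assumed)
GAIN = 800                      # extra RPM per degree above target (assumed)

def fan_rpm(temp_c: float) -> int:
    """Return a fan speed for the measured enclosure temperature."""
    error = max(0.0, temp_c - TARGET_C)
    return int(min(MAX_RPM, MIN_RPM + GAIN * error))

for t in (24, 28, 32, 36):
    print(f"{t} deg C -> {fan_rpm(t)} RPM")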

Bottomline: The new M1000e blade system supports an impressive set of features and hardware configurations. Combine this with the ease of setup and management, and you have a powerful solution in your hands for server consolidation or HPC. The pricing would, of course, depend upon the components you choose, considering that there are so many different modules in it.
