
In Search of a Perfect Test

PCQ Bureau

There's no easy answer. I've bought a cell phone that topped our reviews, and then performed so poorly in the long run that I had to trash it. I've bought a Ford sedan that topped comparisons, but has been plagued by a chronic problem. I've bought a snazzy BlackBerry Pearl that topped review parameters and features, but makes a terrible phone and guzzles battery charge like there's no tomorrow. And take the king of gadgets: I'm surrounded by owners trashing their iPods because of dying batteries or other unfixable glitches.


It's tough for a reviewer to get beyond features and design, and to really get into a buyer's, or an experienced user's, shoes. The problem becomes more acute with enterprise-class products, like servers, where the cost of a wrong selection is multiplied by the numbers and the mission-critical apps. But “more” is relative. For a small business user who's spent a hard-earned Rs 1 lakh on a laptop, the cost of a bad selection is not just troubling, it's personal: it's his own money.

A reviewer faced with a server to test for a thousand readers can get overwhelmed by the gigabytes and gigahertz. So can a buyer. We have to do better than that buyer, and recognize that over the five-year period the server may be in use, raw performance and cutting-edge specs would bring far less value to the table than reliability and predictability.

Prasanto K Roy,
Chief Editor

One step we took as part of this month's server shootout was to work with a user. A public sector organization approached us for help with its server selection, ahead of a tender. We decided to adopt the very detailed (but not cutting-edge) specs that this user needed, to help us keep a focus on reality. So every spec was compared against what was needed for this (fairly representative) application.

If a test unit is expandable to 64 GB memory or has 16 hard disk slots, we compare that against the headroom this user would actually need in the three-to-five-year life demanded of the server. And so, top specs do not always get top brownie points. Instead, fault tolerance, hot-swappability, overall disk performance and the like get higher weights, as the sketch below shows.
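
Here, in Python, is a minimal sketch of how such weighting could work. The specs, weights, and numbers are illustrative assumptions, not our actual scoring sheet; the point is the capping, under which a spec can only score up to what the application can actually use.

# What the reference application actually demands over its 3-5 year life
# (illustrative values, not the real tender specs)
required = {"memory_gb": 32, "disk_slots": 6,
            "fault_tolerance": 1.0, "hot_swap": 1.0, "disk_perf": 1.0}

# Reliability-oriented criteria get higher weights than raw capacity
weights = {"memory_gb": 1, "disk_slots": 1,
           "fault_tolerance": 3, "hot_swap": 3, "disk_perf": 2}

def score(server):
    """Score 0..1; each spec is capped at the required level, so
    64 GB of expandability earns no more than the 32 GB needed."""
    total = sum(weights[k] * min(server.get(k, 0) / need, 1.0)
                for k, need in required.items())
    return total / sum(weights.values())

big_box = {"memory_gb": 64, "disk_slots": 16,
           "fault_tolerance": 0.5, "hot_swap": 1.0, "disk_perf": 0.8}
steady_box = {"memory_gb": 32, "disk_slots": 8,
              "fault_tolerance": 1.0, "hot_swap": 1.0, "disk_perf": 1.0}

print(f"big box:    {score(big_box):.2f}")     # 0.81, despite bigger specs
print(f"steady box: {score(steady_box):.2f}")  # 1.00, meets every real need

The bigger box loses not for lacking capacity, but because its extra capacity cannot offset a weaker fault-tolerance score.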

What we could not do was really test the servers for long-term reliability. We considered accelerated failure tests, such as heat, humidity and vibration, in a suitably-equipped lab. We finally skipped those. Partly because of the time, trouble and cost involved, and because such tests are potentially destructive: we hadn't quite got the vendors to agree to destroying their precious servers. And partly because enterprise-class rack-mounted servers, including this user's, tend to be used in fairly controlled environments.

So, as a buyer, what you get from our reviews is a report on the best product based on performance tests, features, and price/warranty, tempered with some real-world experience. You're still stuck with figuring out long-term reliability and support. For that, most buyers fall back on brand, reputation and relationship.

Meanwhile, we're still looking for the perfect test. How do we factor in things that obviously matter a great deal to buyers? Should we add softer factors like support and market reputation? (And then we need to figure out how to test these, other than through long market surveys.) Tell me what you think!




