July 9, 2003

Many of us depend on benchmark results to decide which computer or component to buy, and there are benchmarks to suit various needs. These range from WinBench and 3DMark, which measure PC and component performance, to TPC ratings for high-end enterprise-class machines, and Linpack for measuring supercomputer performance.

Every once in a while, some vendor or other claims from all available rooftops that its system has topped a benchmark. And with equal frequency, someone or other is accused of misreporting, or worse still, fudging benchmark results. Such accusations are as old as the benchmarking industry itself. In the good old days, it was the anti-virus vendors who were accused of designing their software to switch to a particular virus-testing mode if it detected benchmark-like activity. Next came the graphics-card vendors, who shipped drivers optimized for popular benchmarks like WinBench. Recent instances in which both nVidia and ATI were found to be detecting the start-up of 3DMark and altering driver behavior to post better scores show that this old game still continues. Even more recently, Apple cited a number of SPEC benchmark results to claim that the new G5 was the fastest desktop in the world, only to have almost every claim, and even the test settings, questioned. Even the high-end TPC benchmarks, which take thousands of dollars to set up, have been questioned, including on some of their basic assumptions.

Why do these questions come up? The answer lies in the fact that a computer can vary in a large number of ways in both hardware and software. Even changing a driver can change benchmark results, and there are hundreds of hardware and software settings that can influence the outcome. Moreover, a computer is capable of a wide spectrum of activities, and the many benchmarks available each measure performance in a different context; a measure that is meaningful in one context may be irrelevant in all others.

All this means that a vendor can pick and choose the exact configuration of hardware, software components, and drivers to present its system in the best possible light. So what if such a system is unavailable anywhere else, or performs poorly in real-life usage!

To protect against such misuse, benchmark creators require that anyone announcing a benchmark result also disclose all settings. However, such disclosures tend to be well hidden, buried deep inside voluminous reports, much like statutory warnings on cigarette packs, and only an expert eye can spot the catch in them.

That is why you should disregard such benchmarks when making purchase decisions and, as far as possible, look for real-life benchmarks run with the applications you will actually be using. For more expensive purchases, insist that competing vendors demonstrate live benchmarks with the hardware and software you plan to use, running the processes you will run, under your watch or that of an independent expert!

Krishna Kumar
