Closing The Gap

PCQ Bureau

The media's used to flak, whether it's from a group review, a shoot-out, or a major survey.

Among our 'high flak' stories is Dataquest's annual “best e-governed states” survey, done by IDC. The flak rarely comes from top performers in the survey. In March, the 2008 edition of the survey listed Delhi, Goa and Chhattisgarh as the best e-governed states. Gujarat dropped from fourth to nineteenth place, flanked by Haryana and Jharkhand (#20). The latter three, and others, were upset about their position. A senior official said: “Look here, we've got these awards for our project, so your survey is nonsense.” Another said: “Our budget for this one award-winning project is more than the complete IT budget of that state you've rated so high!”

Did they miss a point? The awards they got were for projects: technology, spend, planning, idea, maybe execution. Our survey was about something else: user satisfaction. Were citizens, and businesses, satisfied? Did they see an improvement in their government interface? Speed, transparency?

Everything about a project's benchmark should boil down to that: are customers, or users, satisfied?

Unfortunately, it doesn't always work that way. It may not be practical, for a start. PC Quest's “Best IT Implementation Awards”, featured in this issue, are an example. They follow an extensive, rigorous process in which PC Quest's editors vet several hundred nominations, visit the projects, shortlist a few dozen, and present the shortlist to a jury panel, which then discusses, debates and selects the five or six awardees.

What this does not always factor in is actual end-user feedback, a point that was brought up in the jury this time. Take, for instance, the overall winner: the Rajasthan government's online BPL (below poverty line) census. The project won on technology, potential impact, and the fact that it works (the jury tried it out). But the question that was left unanswered was: has it changed people's lives? Do actual users find the process of BPL classification more transparent? While the PC Quest team did some follow-up research, it wasn't practical to go down to the 'BPL' people and survey that, especially with the post-blasts situation in Jaipur.

Prasanto K Roy,

President, ICT Publications at CyberMedia



pkr@cybermedia.com
Jury chairman Dr G D Gautama, IAS, noted: “This is a wonderful job of evaluating many diverse projects. But it would be good to also evaluate the actual benefits from these projects...reduction in downtime, cost, increased transparency, etc.

“It is important to evaluate the actual impact. For instance, in an e-gov project, what benefits have actually gone to the citizens? Is there feedback from the beneficiaries?”

Now, there's a flip side too. Will end users always rate a project fairly?

Let's say a new CRM system is implemented. There are the staunch old company loyalists who resist it and prefer the old system. Do a survey, and you'll find out how much of a time-waster the new system is. Do they see the larger picture: that it's ultimately for improving speed of response to customers, and the knowledge base...that short-term pain is for long-term gain?

Now, you can say that it's the job of the implementers to carry the users along, to get them on their side, aligned to a common goal, including a new project or application.

This applies to product reviews too. Would you trust an expert reviewer in a lab, or dozens of users blogging about it?

There's no simple answer. But what do you think?
