It is not enough to simply write a piece of software and hand it over. Software bugs and improperly tested code cost millions of dollars in damages and millions more in the time and money needed to fix them. Whether the application is a personal utility, a commercial package or something as specialized as ERP software, it needs to be tested properly and its quality accounted for.
While small development teams might do the testing themselves, in larger organizations it is commonplace to have dedicated testing teams in place. These testing groups use a mix of software, both commercially available and home-grown, along with many manual, human-touch methods to provide proper feedback to the developers and to make sure that what goes to the customer is quality assured.
Software testing happens in several phases: some of it is done by the developers themselves, some by team leaders who check for integration issues, and some by the public at large. Self testing, where it happens, should be restricted purely to the build cycle itself and should never carry over into the production stages of development.
Self testing: a bad idea
During a build cycle, a lot of self testing (testing carried out by the developers themselves) happens. Full-scale testing, however, should be done by separate groups. The reason is that when we, as developers, test a module, we rarely create conditions of total randomness. We seldom even consider providing values in unexpected ways. How would the code react if someone entered a telephone number where an e-mail address is asked for? How does such a thing affect something apparently disconnected further down the line, in the reporting module, for instance? What happens if, in a wizard, we select an option, click the Back button a couple of times and then re-run it? Does the software remember our previous settings intelligently? This is why many organizations prefer to engage testers at large (public testing) to check out their software. When this is done, many more conditions, covering both deployment environments and input values, can be checked and the actual results compared against expected behavior.
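To make the idea concrete, here is a minimal sketch of the kind of edge-case check a separate tester might automate. It uses JUnit (discussed under QA tools below); the EmailField class and its accepts() method are made-up names standing in for whatever validation code a real application would contain, not part of any product mentioned in this article.

import org.junit.Test;
import static org.junit.Assert.*;

// Illustrative JUnit sketch only; EmailField below is a made-up stand-in
// for whatever input-validation code the real application contains.
public class EmailFieldEdgeCaseTest {

    // Toy validator: a very rough check, just enough to make the example run.
    static class EmailField {
        boolean accepts(String value) {
            return value != null && value.matches("[^@\\s]+@[^@\\s]+\\.[^@\\s]+");
        }
    }

    @Test
    public void rejectsTelephoneNumberEnteredAsEmail() {
        // The tester deliberately feeds the "wrong kind" of value:
        // a phone number where an e-mail address is expected.
        assertFalse(new EmailField().accepts("+91 11 2345 6789"));
    }

    @Test
    public void acceptsOrdinaryEmailAddress() {
        assertTrue(new EmailField().accepts("reader@example.com"));
    }
}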
Again, the scenarios used to test an application can make a huge difference to what is seen during testing. Is the software being tested on a real server where one is called for, or are we merely simulating it on an ordinary PC? Are we really testing that new N'Gage game on a prototype unit or on an emulator?
Testing scenarios
More often than not, testing happens under controlled environments. For example, when we at PCQuest receive a product for testing, we have a set of conditions we recreate for that product, based on the type of product. In a real-life deployment, however, this would not be the case and a completely random set of conditions would prevail. Since it is not possible to create every type of deployment scenario, we simplify the comparison process by testing under conditions we have used before and therefore have results we can compare against. Thus we can say that if the user has deployment scenario 'A' and is running module 'B' and has provided the input set 'C', then 'D' will happen. If 'E' happens instead, we log it as an unexpected case.
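Expressed in code, this kind of controlled comparison could look roughly like the sketch below, written in plain Java. The runModule() method is a hypothetical stand-in for the software actually under test; the point is only that every known combination of scenario, module and input set has a recorded expected result, and anything else gets logged as unexpected.

import java.util.List;

// Sketch of scenario-based result checking. The module being exercised is
// represented by the hypothetical runModule() method below.
public class ScenarioCheck {

    record Case(String scenario, String module, String input, String expected) {}

    public static void main(String[] args) {
        List<Case> knownCases = List.of(
            new Case("A", "B", "C", "D")  // under scenario A, module B with input C should yield D
        );

        for (Case c : knownCases) {
            String actual = runModule(c.scenario(), c.module(), c.input());
            if (actual.equals(c.expected())) {
                System.out.println("OK: " + c);
            } else {
                // An unexpected result ('E' instead of 'D') gets logged for the developers.
                System.out.println("UNEXPECTED: got " + actual + " for " + c);
            }
        }
    }

    // Placeholder for the real application logic; returns a canned value here.
    static String runModule(String scenario, String module, String input) {
        return "D";
    }
}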
This is why there are different types of testing that happen at different stages of the development and build cycles.
Test types
Testing processes are of different types: White Box, Black Box, Unit, Integration, Functional, Sanity, System, Load, Usability, Alpha and Beta. See the table for a quick comparison.
Most of these test types happen in an undocumented, almost automatic fashion. For example, when we add a new screen, we naturally run and test that screen, but we usually fail to document the results formally, thinking them too inconsequential. Generally, only the Unit, System, Load and Alpha/Beta testing happens formally.
Some of these tests are done by hand, where someone sits in front of a terminal, runs the application and notes down what he sees. Others are done using scripts, tools and automated testing software.
QA tools
Depending on the type of testing being done, a number of software tools may be used. These tools make it easy to analyze performance and give the tester a numerical score against which other test runs can be compared. Some commonly used tools are WinRunner, LoadRunner, JUnit, FxCop, Visual Studio Analyzer, IBM Robot and CA's QA Center.
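To give a flavour of what such tools automate, here is a minimal, hand-rolled load-test sketch in plain Java. It is not how LoadRunner or any of the products above actually works; checkResponse() is a hypothetical stand-in for a request to the system under test, the user and request counts are arbitrary, and the average time it prints is the kind of numerical score a real tool would report for comparing runs.

import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

// Minimal illustration of what a load-testing tool automates: fire many
// concurrent requests and report an average response time to compare runs against.
public class MiniLoadTest {

    public static void main(String[] args) throws Exception {
        int users = 50;            // simulated concurrent users (arbitrary figure)
        int requestsPerUser = 20;
        AtomicLong totalMillis = new AtomicLong();

        ExecutorService pool = Executors.newFixedThreadPool(users);
        CountDownLatch done = new CountDownLatch(users);

        for (int u = 0; u < users; u++) {
            pool.submit(() -> {
                for (int r = 0; r < requestsPerUser; r++) {
                    long start = System.currentTimeMillis();
                    checkResponse();   // hypothetical call to the system under test
                    totalMillis.addAndGet(System.currentTimeMillis() - start);
                }
                done.countDown();
            });
        }

        done.await();
        pool.shutdown();
        long requests = (long) users * requestsPerUser;
        System.out.println("Average response time: "
                + (totalMillis.get() / (double) requests) + " ms over " + requests + " requests");
    }

    // Stand-in for a real request; a genuine test would call the application here.
    static void checkResponse() {
        try {
            Thread.sleep(5);   // pretend the system takes a few milliseconds to respond
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}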
Depending on the language and platform of development, IDE-integrated debuggers and tools are also used.
However, as with everything else, ideal results are achieved only when best practices are followed. When development companies are confident that they can deliver on the quality front, they offer something that has come to be known as 'quality assurance'.
Software and quality assurance
This works something like life assurance from an insurance company. Essentially, the software vendor guarantees that, for a fixed period of time, they are under contract with you to keep your software up to date at a nominal cost. Just as with life insurance, where you assure a sum of a few lakh rupees for a comparatively paltry monthly premium, you pay the software vendor a fixed sum of money every year.
Subscribing to such policies helps keep large deployments trouble-free and up to date, at nominal cost and with minimal logistical hassle.
Sujay V. Sarma