
Databases across Ages

PCQ Bureau





It wasn’t such a long time back that we had to write programs to do anything with a database. 


In those days, it took a COBOL program of over 250 lines to print a voucher listing, the data for which had been entered using offline machines that had single-line displays, 8 kB RAM, and an 8” floppy drive. I even remember having two dedicated data-entry operators in our team. Then, there was the accounts clerk who used to manually check the listings that came out. And it usually took three sets of corrections–all offline–before the listing was final. 


Each time, the data-entry operators would correct the listing, the machine operators would execute a program to copy the data onto the server’s disk, and then run a program to take printouts. Once final, the listing would be loaded (an operation called sort-merge, achieved by executing another program) into the main data file, which was then used to run the trial balance and the ledgers at the end of the month. Then we would back up the data files onto streamer tapes that used to run on an almirah-like device called a vacuum column drive. We had two of these. Whew! That’s not all. All this was done according to a list of predefined tasks (see table).

Week after week, new vouchers would appear, and the same routine would be followed.



Designing these reports was also an interesting task. You had these enormous graph papers. Each paper had 132 columns and several rows. You wrote the headings with a pencil, character by character, one character in each cell, corresponding to a row-column coordinate. Then each alphabetic field was defined with a series of “X”s and numeric fields with a series of 9s. You had to be careful not to mix the two, and the positioning of decimals was critical. Once the layout was ready, you wrote out the program that would create this report. Then the data-entry operators entered the code offline, and you had to compile and execute the program for a test run with test data.
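In COBOL terms, those pencilled X’s and 9s became PICTURE clauses. Here is a minimal sketch, with hypothetical field names and widths (the original layouts are long gone), of how one line of such a 132-column listing might have been declared and printed:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. VCHLINE.
      * A minimal sketch of one 132-column report line. Field
      * names and widths are hypothetical, not the originals.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  PRINT-LINE.
      *    Character cells were pencilled in as X's on the sheet...
           05  VOUCHER-NO    PIC X(8).
           05  FILLER        PIC X(2)   VALUE SPACES.
           05  ACCOUNT-NAME  PIC X(30).
           05  FILLER        PIC X(2)   VALUE SPACES.
      *    ...and digit cells as 9s; the printed decimal point had
      *    to land on exactly the right column of the layout.
           05  VOUCHER-AMT   PIC 9(6).99.
           05  FILLER        PIC X(81)  VALUE SPACES.
       PROCEDURE DIVISION.
           MOVE "VCH00017" TO VOUCHER-NO.
           MOVE "SUNDRY CREDITORS" TO ACCOUNT-NAME.
           MOVE 123456.78 TO VOUCHER-AMT.
           DISPLAY PRINT-LINE.
           STOP RUN.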

Interesting? Well, there were no forms or user screens at all. There was no querying either. The only user interface we ever saw was the programs asking for date and file name, after which they would pretty much do their own thing, and return some output or the other. The database used for all this was just a set of files created using data-storage methodologies defined by the programming language. 


A list of predefined tasks

Date      Program  File         Description                                    Remarks
10/10/89  FA0801   FA1089P.DAT  Payment voucher listing, week 1, October 1989  Load file is FA1089.DAT
11/10/89  FA0802   FA1089R.DAT  Receipt voucher listing, week 1, October 1989  Load file is FA1089.DAT
11/10/89  FA0803   FA1089J.DAT  Journal voucher listing, week 1, October 1989  Load file is FA1089.DAT
11/10/89  FA0804   FA1089.DAT   Trial balance for October 1989
11/10/89  FA0805   FA1089.DAT   Voucher-wise general ledger for October 1989

The first foray into online work came when we upgraded our COBOL compiler. That made it easier to create a screen section. But it also involved writing some routines to convert the standard voucher files into ISAM (Indexed Sequential Access Method) files, so that we could do basic searches and update individual rows. The idea was to have the accounts clerk enter vouchers online rather than bunch them up and get them entered offline. This would reduce the data-entry load, and would help us generate the trial balance two to three days earlier than usual. It took us almost a month to get all that working. There was our database system in place, and were we proud of it!
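For flavour, here is a rough sketch, with made-up file and field names, of the kind of indexed-file COBOL that the upgraded compiler allowed; ORGANIZATION IS INDEXED is what turned a flat file into an ISAM file, and the keyed READ and REWRITE supplied the search and update operations:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. VCHISAM.
      * A rough sketch with made-up names. ORGANIZATION IS INDEXED
      * declares an ISAM file; the keyed READ and REWRITE give the
      * basic search and update operations flat files lacked.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT VOUCHER-FILE ASSIGN TO "FA1089.ISM"
               ORGANIZATION IS INDEXED
               ACCESS MODE IS DYNAMIC
               RECORD KEY IS VCH-NUMBER.
       DATA DIVISION.
       FILE SECTION.
       FD  VOUCHER-FILE.
       01  VOUCHER-RECORD.
           05  VCH-NUMBER   PIC X(8).
           05  VCH-DETAILS  PIC X(72).
       PROCEDURE DIVISION.
           OPEN I-O VOUCHER-FILE.
      *    Look up one voucher by key and correct it in place.
           MOVE "VCH00017" TO VCH-NUMBER.
           READ VOUCHER-FILE
               INVALID KEY DISPLAY "VOUCHER NOT FOUND"
               NOT INVALID KEY
                   MOVE "CORRECTED NARRATION" TO VCH-DETAILS
                   REWRITE VOUCHER-RECORD
           END-READ.
           CLOSE VOUCHER-FILE.
           STOP RUN.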


In the meantime, the department had acquired a PC, and we had put an Xbase product onto it. One person set about building expertise on Xbase at about the same time as we embarked upon our ISAM file project. A day before we got the flat-file-to-ISAM conversion going, this person had learnt how to use Xbase. He’d also built the company’s all-new invoicing application and was busy giving it a user test. The system intrigued us, to say the least. Here was something that had a user interface, a query system, search and update capabilities, index management, file management, report and form design capabilities, structure-free programming, and single-point design and execution. There goes all our hard work!

It gave us another idea, though. We could now appreciably improve our chances of reducing the backlog. We could build applications faster, and meet demand much more quickly than ever before. And it wouldn’t take too many people to work on a new project. Four small projects immediately went onto the PC. We didn’t want to risk the major ones, though.

Compared to what we were used to, PC-based software turned out to be highly productive. Applications could be built quickly. More control could be given to the end user, because these applications had simple user interfaces that included menus and data-entry screens. Ad hoc reporting became possible because users could use report designers or even run simple queries. You could even exchange data with tools like spreadsheets. However, you couldn’t yet build really strong, unbreakable applications and retain user flexibility at the same time.


The first tools that let you do that became popular in India in the late ’80s and early ’90s. They let you build PC-style applications, but you could still base the core of your application on your server.

Simultaneously, on the server side, relational-database systems started becoming popular because they reduced, and in many cases eliminated, the need to write programs for server-side data management. You didn’t have to write routines for indexing, sorting, storing, and defining data anymore. Backing up, sorting and merging, and search and update operations all became easy.

Databases have evolved considerably since then. On the PC front, developer tools transcended traditional single-screen applications and presented information in multiple sections, which increasingly came to be described as data windows. With this came the concept of the event-driven user interface, which completely trashed the idea of a defined sequence of operations. No longer did you have to move systematically from one step to the next, such as from one menu to another. You could simply move anywhere on the screen, and the application would respond to what you wanted, rather than your having to respond to what the application demanded.


This led to a huge improvement in usability, and database applications began moving out of central computer departments into line-of-business departments, such as marketing and stores. Now, PCs would run the user-interface part, and servers would do the database management. This trend was fuelled by further improvements in PC operating systems. The appearance of Windows helped proliferate event-driven, graphical applications, and client-development tools evolved into object-oriented tools.



As applications became more sophisticated, and more and more data got into the databases they used, the data had to be shared across different servers. Relational databases now provide the capability to do this quite reliably. Managing transactions across physically distributed databases is now done through a scheme, essentially a two-phase commit, in which the first server checks the second for readiness and, on an affirmative response, updates the second and then itself, or updates neither. New methods of managing transactions online, or even offline, have emerged in the last few years.
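As a toy illustration of that all-or-nothing logic (a sketch, not any particular vendor’s implementation), here is how it might look in the COBOL of this story, with the two servers simulated by flags rather than a real network link:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. TWOPHASE.
      * A toy simulation of the all-or-nothing update logic
      * described above. The "servers" are just flags here; a real
      * system would exchange these signals over a network link.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  REMOTE-READY    PIC X VALUE "Y".
       01  REMOTE-UPDATED  PIC X VALUE "N".
       01  LOCAL-UPDATED   PIC X VALUE "N".
       PROCEDURE DIVISION.
      * Phase 1: ask the second server whether it is ready.
           IF REMOTE-READY = "Y"
      * Phase 2: update the remote server first, then ourselves.
               MOVE "Y" TO REMOTE-UPDATED
               MOVE "Y" TO LOCAL-UPDATED
               DISPLAY "TRANSACTION COMMITTED ON BOTH SERVERS"
           ELSE
      * If the remote is not ready, neither side is touched.
               DISPLAY "TRANSACTION ABORTED - NEITHER SERVER UPDATED"
           END-IF.
           STOP RUN.

Run it with REMOTE-READY set to "N" and neither flag is touched; that refusal to update one side without the other is the whole point of the scheme.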

With the advent of the Internet, databases now have to contend with a completely new set of challenges. Database software is gearing up for the demands of the Web by providing both offline publishing and online querying. Simultaneously, developer tools are evolving to provide environments for building Web-based and client-server applications as single, seamless entities.

The most interesting challenge for databases, however, comes from managing data they haven’t traditionally managed–rich data such as sound, images, and video. Some databases take the approach of storing all these data types within themselves and providing methods to access that data. Another approach today is to leave the data where it is, and to provide the means to access it from any application. The second approach also breaks up the application into three manageable parts–the client logic, the processing logic, and the data services–to provide a viable architecture for open client and open server applications. The most exciting part is the hybrid server, where the data stored can be anything, and the developer has the flexibility of accessing it through a familiar client.

Needless to say, development strategies have changed considerably since the days we plotted out reports on graph paper. Developers are now working on building interface strategies, integration strategies, and large-scale online deployments. Web technologies, electronic mail, and groupware applications are all serviced by the database server, which itself is growing into a hybrid data store to meet today’s challenges. User independence is reaching higher levels than ever before, heralding an exciting decade ahead for application developers and databases.
