December 2, 2003

Last year’s slump saw most IT companies fighting to keep their heads above water. This year, however, the struggle seems to have paid off, with the market showing signs of recovery. Given the good news, we thought this would be the ideal time to see which products and technologies were hot this year and the impact they’re likely to have in the years to come. We also thought it interesting to look at areas that haven’t done very well this year despite all the noise, but have excellent prospects for the near future. So, let’s start with what was hot this year. 

Wireless LAN
CDMA
Flash Memory
DVDs
Dual Channel DDR 400 memory
MFDs
Linux
Network Management
MS .Net
B2C Commerce

1. Wireless LAN
One of the hottest technologies this year was WLAN, with tons of wireless implementations being done worldwide, new standards being developed and mature ones being ratified. The worldwide trend was towards building public hotspots, with Starbucks Coffee and Marriott Hotels being two prime examples. The former WiFi-enabled its 1,200 coffee outlets across the US, while the latter added high-speed wireless access across 400 of its hotels in Germany, the UK and the US. 

India, too, saw a few similar cases, but not at that scale. Here, the focus has been on WiFi in enterprise networks and homes, and it is being used by the hospitality sector and some leading educational institutes. This is due to government regulations that permit it to be used only indoors. The going standard, 802.11b, works in the 2.4 GHz frequency band and offers up to 11 Mbps of bandwidth. So, companies can implement ‘b’ products within their networks without obtaining a license.

Thankfully, a higher standard, 802.11g, offering up to 54 Mbps of bandwidth in the same frequency band, was ratified this year. Plus, most 802.11b equipment can be upgraded to this standard, giving companies the flexibility to add more bandwidth. 

Government regulations aside, wireless currently faces two implementation issues: security and management. Most wireless equipment uses WEP (Wired Equivalent Privacy) security, which is not sufficient. A new security standard called 802.11i has been introduced and is effective, but can be costly to implement. Managing a wireless network can also become a nightmare as it grows, because each wireless access point has to be managed individually. New technologies such as WLAN switches have recently been introduced to address both the manageability and security issues.

So, what’s in store for wireless? IDC predicts that total WLAN equipment revenue in India will grow at a CAGR of 35% from 2002 to 2007. Another projection, by Gartner Dataquest, says 50% of all laptops sold will have WiFi built in by 2004. 
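To put that CAGR figure in perspective, here is a quick sketch of how 35% annual growth compounds over the 2002 to 2007 span. The base revenue is indexed to a hypothetical 100; only the growth rate and the five-year span come from the IDC projection quoted above.

```python
# What a 35% CAGR from 2002 to 2007 works out to. The base figure is
# indexed to 100 (hypothetical); only the 35% rate and the five-year
# span come from the IDC projection.
def project(base, cagr, years):
    """Compound a base value at an annual growth rate over some years."""
    return base * (1 + cagr) ** years

base_2002 = 100.0
for year in range(2003, 2008):
    print(year, round(project(base_2002, 0.35, year - 2002), 1))
# By 2007 the index is about 4.5 times its 2002 value.
```

In other words, a market growing at that rate roughly quadruples and a half over five years, which is why the projection counts as genuinely bullish.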

2. CDMA
Though CDMA is a newer technology for mobile telecommunications than the widely used GSM, it still created ripples in India this year. CDMA started out in India as a limited-mobility service offered by basic phone operators; that is, it could only be used to provide wireless telephony within a limited area. Any service provider offering long-distance wireless telephony or full mobility had to use GSM. This doesn’t in any way mean that CDMA as a technology is not capable of providing full mobility. Many countries use the technology for this; in India, government regulations have prevented it from being used as a true mobile service.

In May 2003, Reliance Infocomm launched its commercial CDMA limited-mobility service in India. Though meant to be used as a WLL (Wireless in Local Loop) service, Reliance offered mobility almost identical to that of cellular mobile services.

This led to many litigation cases between cellular and WLL service providers. Still, WLL service providers continued to offer cheaper rates than cellular operators and saw a much higher growth rate. Finally, in November, the government introduced the Unified Telecom License, which placed no restriction on the technology used to provide full-mobility services. CDMA operators will now be able to provide full cellular service, which means that the technology is likely to proliferate and compete with GSM.

3. Flash Memory
Names such as USB storage, Compact Flash, Sony Memory Stick and SD cards became commonplace. These are all Flash memory, or solid-state, storage devices. Their small size, light weight and low cost make them a viable solution for carrying any kind of data. This storage medium is more reliable than the magnetic medium of a floppy because data is stored on a Flash memory chip. Cost isn’t that high either, with a 128 MB USB drive going for around Rs 1,500. Devices from cellphones and PDAs to MFDs and hardware appliances like firewalls came out with integrated readers for Flash memory.

Unfortunately, Flash memory comes in different form factors and designs, which leads to compatibility issues. For instance, if you have a digicam that supports the Sony Memory Stick while your friend’s digicam supports SD or Compact Flash cards, then you can’t directly exchange pictures. Currently, manufacturers are embedding multiple card readers in their devices, but this may not be a long-term solution. Either we’ll see some technologies disappear or a common standard emerge. 

4. DVDs
Though popular worldwide for a while, DVDs became popular in India only this year. On the consumer side, DVD-Video was hot, and quite a few DVD players were introduced at very attractive price points. This was fueled by the easy availability of DVD movies. Since, technology-wise, DVDs give better quality than VCDs, the move to DVDs was only expected sooner or later. High-end DVD recorders, which can record directly from a TV either to DVD media or to a built-in hard drive, also became available. 

At the desktop, CDs have reached a saturation point, with both read and write speeds stuck at 52x and capacities of around 700 MB. And, since a DVD holds almost seven times as much as an ordinary CD, DVDs are the next bet. 
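The “seven times” comparison is easy to check: a single-layer DVD holds 4.7 GB against a CD’s roughly 700 MB.

```python
# Quick check on the capacity comparison: a single-layer DVD holds
# 4.7 GB against a CD's roughly 700 MB (decimal units throughout).
dvd_mb = 4.7 * 1000
cd_mb = 700
print(round(dvd_mb / cd_mb, 1))   # about 6.7x, i.e. close to seven times
```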

A variety of DVD drives are now thronging the market, from plain DVD drives to DVD combos that can write CDs and read DVDs, to DVD-writers at the upper end. Since DVD drives can also read or write CDs, it makes sense to buy one instead of a plain CD drive. Currently, DVD-writer prices are high, but then so were CD-writer prices when they hit the market. 

The future seems clear. DVD penetration is likely to rise and prices to fall in both the consumer and desktop segments. 
DVD-Audio is currently in its infancy, but has some promising features, such as 6-channel high-quality audio. Already, some big names such as Bose have built support for this format into their systems.

5. Dual Channel DDR 400 memory
When Intel started shipping its P4 processor at the end of 2000, it supported only a new memory type, RDRAM, which was much faster than the then widely used SDRAM. By the middle of 2002, Intel had added support for DDR SDRAM, though this was slower than RDRAM. DDR started with an initial bus speed of 200 MHz, but this year it reached 400 MHz, and DDR400 became readily available. By this time RDRAM had reached a speed of 1066 MHz and provided a maximum bandwidth of 4.2 GBps, while DDR400 still had a maximum bandwidth of 3.2 GBps, much lower than RDRAM’s. But Intel used this memory in a dual-channel architecture, taking the total bandwidth from 3.2 to 6.4 GBps, far more than RDRAM’s bandwidth. 
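The bandwidth figures above follow directly from bus width times effective transfer rate. Here is a rough sketch of the arithmetic; the 64-bit DDR bus and 16-bit RDRAM channel widths are the standard module figures, not something stated in the text.

```python
# Peak memory bandwidth = bus width (bytes) x transfer rate (millions
# of transfers per second) x number of channels, expressed in GB/s.
def peak_gbps(bus_bytes, mtps, channels=1):
    return bus_bytes * mtps * channels / 1000

ddr400      = peak_gbps(8, 400)       # 64-bit DDR400: 3.2 GB/s
ddr400_dual = peak_gbps(8, 400, 2)    # two channels:  6.4 GB/s
rdram_1066  = peak_gbps(2, 1066, 2)   # dual 16-bit PC1066: just over 4.2 GB/s
print(ddr400, ddr400_dual, rdram_1066)
```

The numbers show why dual-channel mattered: a single DDR400 channel trails RDRAM, but pairing two channels comfortably overtakes it.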

In 2003, Intel introduced two high-end P4 chipsets that supported single- and dual-channel DDR400 but not RDRAM. In addition, RDRAM was removed from Intel’s memory roadmap. In the next phase, Intel motherboards will support DDR II memory, which will increase memory speeds even further. So DDR400, with the clever engineering of dual channel, has finally managed to replace RDRAM in Intel’s crème-de-la-crème motherboard chipsets and will be the hot favorite for all new ones.

6. MFDs
MFDs (Multi-Functional Devices) are products that can be connected to a PC, either directly or through a network, and can perform at least two of the four functions of printing, copying, scanning and faxing, without compromising on the quality of individual tasks. Most devices, however, come with three or all four functions. Though MFDs have been around for a long time, it was only this year that they became one of the fastest growing segments. 

You can get MFDs from the entry level all the way up to the high-end segment. Entry-level devices are meant for personal use and have a color inkjet printer; mid-range ones are for small workgroups and mostly have monochrome laser printers; high-end ones are for large organizations and have either monochrome or color laser printers.

7. Linux
This year was full of surprises for the Linux community, some pleasant and some not. More organizations gave serious thought to deploying Linux in their enterprise production environments, and many went ahead with implementations. This was fueled by the plethora of open-source software available for Linux. In fact, there’s open-source software for just about every enterprise need, most of which we have covered in PCQuest throughout this year. 

March this year saw SCO filing a lawsuit against IBM for breach of contract. According to SCO, IBM improperly introduced copyrighted Unix code into Linux. This raised doubts about whether Linux users could continue using Linux without also paying SCO. The next major one was Novell’s acquisition of Ximian in August and of SuSE Linux in November. The first acquisition will let Novell get into the Linux desktop market, while the latter will see a play on servers. The last major surprise of the year came in November, when Red Hat announced that it would discontinue its popular Red Hat Linux line and instead concentrate on Red Hat Enterprise Linux. It would, however, continue to serve the free software movement by sponsoring the Fedora project (see Fedora review in this issue, page 104). There are a lot of mixed feelings about this move, with the maximum ripples in the small-business and individual-user community. All those who have implemented Red Hat Linux in their production environments will have to either upgrade to Red Hat’s Enterprise edition or move to another OS. It’s also not clear whether Fedora will continue to support and update older Red Hat releases. 

In short, it was a year that saw Linux gain major ground at both the upper and lower ends of the business spectrum. It also saw questions and actions that could well undermine its attractiveness, at least to some current and potential users. 

On the other hand, open-source software has now moved well beyond Linux and has started building a momentum of its own.

8. Network Management
This year saw the proliferation of network management software. There’s asset management software to handle desktop inventory, help desk management to make life easier for your IT support staff, storage management for your storage infrastructure, and software for niche applications like cyber café management. 

Besides the variety, another mantra, far more interesting and complex, also came into being: self-healing, self-configuring and self-managing networks. The key drivers behind this initiative are IBM, HP and Sun, who call it autonomic computing, adaptive enterprise and N1, respectively. To explain the concept, let’s take large servers as an example. If you had to decide which enterprise server to buy, you wouldn’t be able to choose on technology alone. That’s because they all have the same standard components, conform to similar standards, and would probably serve the purpose equally well. So where does the difference lie? Bargains and better deals aside, it could be this initiative that skews the preference. Using this technology, a server farm running a mission-critical application would be able to automatically detect routine problems and fix them without manual intervention. This could be worthwhile for an IT manager, as it would save a lot of time, effort and even money. Taking the argument further, suppose most elements on a network could be managed like this through software. It would make life a lot easier for the IT manager.
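The self-healing idea described above boils down to a monitor that checks each element and remediates failures on its own. Here is a toy sketch, with entirely hypothetical service names and stand-in check/restart functions; real autonomic frameworks are, of course, far more elaborate.

```python
# Toy sketch of a self-healing pass over a server farm: poll each
# service and restart any that has failed, with no manual intervention.
# The services and the check/restart functions are hypothetical.
def check_service(name, state):
    """Stand-in health check: True means the service is up."""
    return state.get(name, False)

def restart_service(name, state):
    """Stand-in remediation: bring the service back up."""
    print("restarting", name)
    state[name] = True

def healing_pass(services, state):
    """One monitoring sweep: detect down services and fix them."""
    for name in services:
        if not check_service(name, state):
            restart_service(name, state)

# Simulated farm in which the database service has gone down.
farm = {"web": True, "database": False, "mail": True}
healing_pass(list(farm), farm)
print(all(farm.values()))   # every service is back up
```

Run periodically, a loop like this is what lets routine failures be fixed before an administrator even notices them.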

9. MS .Net
.NET, Microsoft’s solution for Web services, was announced in June 2000. This year, however, saw a lot of developments in this area. First, .NET Framework 1.1 and Visual Studio .NET 2003 were released. The framework offers a level of abstraction between Windows and the programmer, meaning the programmer can use the language of his choice, be it Visual Basic, Perl, C++ or even Java, for building Web services. In all cases the programming constructs remain the same, which unifies Windows and Web programming. 

The other big thing that happened this year was the release of the first Microsoft product with .NET capabilities built in: Windows Server 2003. It can act as a full-fledged application server for your Web-services-based applications. Already, a number of sites are using Windows Server 2003 for hosting. 

10. B2C Commerce
During the dot com boom, B2C commerce was the visible face of Internet commerce and also the part that went bust. There were plenty of reasons why that happened, which we’ll not get into as they’ve been covered enough. 

The good news is that B2C is making a comeback, but this time with more seriousness and sound business logic. Any guesses on who’s making money on the Net? Two success stories from India are contests2win and Indian Railways. The former hosts contests that anybody in India and some other countries can enter for free and win prizes. It’s free for the user, but costs money for anybody who wants to host a contest. 

Indian Railways, on the other hand, lets users make reservations online between any two stations. This is an example of a brick-and-mortar establishment using the power of the Internet to reach out to consumers.

Wannabe Stuff of the year

A technology doesn’t always become popular overnight. It depends upon factors such as how hot the prevalent technologies are, how much the new one costs to implement, and vendor acceptance. Several technologies introduced either this year or in the past two to three years fall into this category. They didn’t create much impact in the user space this year, but have a strong future.

Gigabit Ethernet
GbE (Gigabit Ethernet) has been talked about for years now but, unfortunately, it hasn’t managed to conquer the networking space. This year saw some developments that could help the technology grow in the near future. For one, we finally saw product launches this year at far more affordable prices than earlier. Second, we saw it slowly beginning to enter the desktop space, with many high-end motherboards coming with integrated GbE ports. Plus, most servers are now equipped with GbE ports. 

While these will help the technology grow to some extent, the major hurdle lies elsewhere: the structured-cabling space. GbE won’t become widely accepted unless the existing cabling infrastructure can support it. Cables have to be laid much more carefully than for existing 10/100 Mbps networks. That’s because at such high speeds mis-synchronization can occur very easily, which can reduce performance by as much as 50%, bringing throughput down to around 500 Mbps.

So, the technology will start by being implemented at the network-backbone level and move on from there. The ratification of the Cat 6 cabling standard last year will help it to some extent, but the real growth will occur as the cost of equipment comes down. Plus, further developments of the technology, like 10 GbE, are also happening. Who knows, it might even become a better alternative to fiber. 

Serial ATA
SATA is touted as the successor to the existing PATA (Parallel ATA) hard drive interface. By the end of 2002, people had started hearing about SATA, with drives scheduled to arrive by mid-2003. Thankfully, the drives did arrive on schedule, promising a lot to the user: thinner, more manageable cables, faster data rates and better reliability.

They were targeted at power users in the desktop space and at low-end, non-mission-critical servers in the enterprise space. So far they’ve received full support from hardware manufacturers, with most new motherboards shipping with SATA controllers. 

PATA drives seem to have reached the end of their life as far as performance goes, but SATA has a roadmap that will increase performance manyfold. In fact, the next version, SATA II, is slated for release next year and promises transfer rates of up to 300 MBps, as opposed to the existing 150 MBps. If that happens, SATA may give SCSI some serious competition.

With so many benefits and so much support, why didn’t they proliferate this year? Two reasons: cost and performance.

Currently, SATA drives are more expensive than their PATA counterparts. Plus, they don’t yet offer a significant performance gain. 

Desktop RAID
RAID has always been in the realm of servers because it helps achieve a mix of redundancy and performance. This year, RAID became a reality on the desktop using ordinary IDE drives. While IDE-based RAID was possible earlier as well, with a number of vendors offering IDE-based RAID cards, this year saw it being integrated onto motherboards, such as the Intel 875PBZ. It started with motherboards that had Serial ATA support, which could also be configured to work in RAID level 0 or 1. Later it also moved into the Parallel ATA domain. 

RAID 0 improves performance through data striping, while RAID 1 achieves redundancy through mirroring. In both cases you need two hard drives. RAID 0 achieves far better throughput than a single hard drive. 
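RAID 0’s striping can be pictured as dealing fixed-size chunks of data alternately to the two drives, so both can be read or written in parallel. Here is a simplified sketch; real controllers stripe at the block level in hardware, and the two lists below merely stand in for the physical disks.

```python
# Illustrative sketch of RAID 0 striping: split data into fixed-size
# chunks and distribute them round-robin across two "drives".
def stripe(data, chunk_size=4):
    """Distribute chunks of `data` alternately across two drives."""
    drives = [[], []]
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    for idx, chunk in enumerate(chunks):
        drives[idx % 2].append(chunk)
    return drives

def unstripe(drives):
    """Reassemble the original data by interleaving the two drives."""
    out = []
    for pair in zip(*drives):
        out.extend(pair)
    if len(drives[0]) > len(drives[1]):   # trailing odd chunk, if any
        out.append(drives[0][-1])
    return "".join(out)

data = "ABCDEFGHIJKLMNOP"
d = stripe(data)
print(d)                        # each drive holds half the chunks
assert unstripe(d) == data      # reads interleave back into the original
```

Because each half of the data lands on a different spindle, sequential reads and writes can proceed on both drives at once, which is where the throughput gain comes from; the flip side is that losing either drive loses the whole volume.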

Till now, the hard drive has been the biggest bottleneck in building a workstation-class machine: you could either go for a single 7,200 rpm IDE hard drive or buy the more expensive SCSI drives. RAID finds an in-between path that is not too expensive and yet gives great performance. It can benefit all throughput-intensive applications, be they audio ripping and MP3 encoding, video capture and editing, DVD encoding, CAD/CAM applications or programs like Photoshop and Premiere.

Since the technology entered the desktop market only this year, it will take some time to become popular. Hopefully, it will pick up next year. 

Grid computing
This is an offshoot of distributed computing, wherein a compute-intensive workload is distributed across multiple systems for speedy delivery of results. Distributed computing has been used in high-end computing environments, such as engineering and bio-medical research. Grid computing brings this concept to the enterprise. Simply put, it harnesses the unused potential of all the systems in your organization. Typically, a large part of an organization’s PCs are used for running productivity apps such as word processing, browsing and e-mail, which leave much of each PC’s power unutilized. With grid computing, organizations can use this spare power to perform complex tasks.

Today, big names such as IBM, HP and Sun have announced grid-computing initiatives, and the Global Grid Forum has developed a standard for grid computing called the Open Grid Services Architecture. So, though there was a lot of noise around grid computing this year, it was mostly in the vendor space. Not much happened in the user space, except in niche areas such as life sciences, geo-sciences and defense research. The good news, however, is that with all this development, we’re likely to see a lot of solutions being built around grid computing.
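The core idea, carving a heavy job into independent pieces and farming them out to whatever spare workers are available, can be sketched with a process pool standing in for the grid’s machines. Real grid middleware schedules work across the network; the prime-counting job below is just a hypothetical compute-intensive workload.

```python
from multiprocessing import Pool

# Toy version of the grid idea: split one compute-intensive job into
# independent chunks and distribute them to a pool of workers. A real
# grid would spread these chunks across machines on the network.
def count_primes(bounds):
    """Count primes in [lo, hi): a stand-in for a heavy workload."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Split the range 0..40000 into four chunks, one per worker.
    chunks = [(i, i + 10_000) for i in range(0, 40_000, 10_000)]
    with Pool(4) as pool:
        partials = pool.map(count_primes, chunks)
    print(sum(partials))   # same total as scanning the range serially
```

Because each chunk is independent, the job scales with the number of idle workers, which is exactly the spare-capacity argument made above.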

Anil Chopra, Anindya Roy, Anoop Mangla, Geetaj Chanana, Sanjay Majumder
