Web Technologies: Enter the Next Generation

PCQ Bureau
New Update

In order to understand and appreciate the next generation of web

technologies, it's important to understand what the current generation has to

offer. For a long time, websites were nothing but a bunch of web pages put

together for people to click and browse. While it proved to be a terrific source

of information, it had many limitations. For one, people couldn't

interact on it. There was just one sea of pages to go through. If two people

were browsing the same site, they had no way of knowing that. If you wanted to

modify a web page or a part of it, then you had to be familiar with HTML coding.

Plus, you had to modify the complete web page. There was no way to change parts

of it. Over a period of time, websites started springing up like mushrooms on a

rotting log. The situation became one of 'if it moves, then it must have a website'. With so many websites offering the same set of static pages, life became rather boring on the Internet. Something was needed to break this monotony.



That something was Web 2.0. It didn't spring up overnight. In fact, it has been there for many years now, gradually getting into our lives. It comprises

many technologies that we've all heard of already, like JavaScript, XML, ASP.Net,

PHP/Perl, MySQL, etc. These gave the web a different way of dealing with data.

Using these and other similar technologies, software companies started giving a

web front-end to all their existing applications. Others started building

applications solely for the web. We've all heard of Intranets and knowledge

management solutions, online CRM packages, etc. All of these have in some way or the other contributed to building the next generation of web technologies. Unfortunately, they've all gone relatively unnoticed.

It was only after someone put these technologies to some real creative use that they started getting noticed. All of us have heard of blogging, Wikipedia,

YouTube, Flickr, etc. All of these have converted the web into more of a

platform rather than a static source of information. Blogging allowed ordinary

users to post their thoughts on websites without knowing a word of HTML.

Wikipedia allowed users to edit what others had written online. YouTube and

Flickr need no introduction. They've become the torch bearers for the world of

social networking. Their names are the first to come to mind whenever someone

says 'Web 2.0'. All these applications have given the web a new identity. Users can now not only access data, but also participate in and add value to applications. Integration of applications in the browser, freedom for the user to modify content in real time, interaction between several users, accumulation of content from other sites and feeding of it into one's own site, having the desktop hosted on the Web, watching video in a Web browser, blogging: all these are possible now, thanks to major developments in Web technologies.


The emergence of Web 2.0 offers several opportunities for enterprises as

well. With the integration of several applications, like Wikis, blogs and RSS feeds, they can make their portal solutions much more interactive and useful for users. Whenever reference is made to the technologies behind Web 2.0, AJAX is the first name that comes up. So let us start with it, and as we move on, we will cover some other essential technologies that are shaping the new Web.


Today, every second Internet user has a Gmail account, most of us use Flickr to upload and share our pictures on the Web, and some of us use Google Maps to locate the areas we are travelling to. Surely, you must have realized that a completely new variety of dynamic Web applications is emerging. Most of these applications have a look and feel similar to that of desktop applications. AJAX

(Asynchronous JavaScript and XML) is the technology that has enabled this. It

makes webpages more responsive by exchanging small amounts of data with the

server, behind the scenes, so that the entire webpage doesn't need to reload

every time the user makes a request. This helps


improve the overall interactivity, speed, functionality and usability of the

webpage. JavaScript is the main programming language in which AJAX calls are made, and XML is the format in which the asynchronously retrieved data is kept.

Being a cross-platform technology, it can be used across different operating systems, computer architectures and Web browsers. Web apps have always had certain benefits over desktop apps: they are easy to use, install and develop. But interactivity was lacking, and AJAX solves that problem as well.

AJAX is a combination of several technologies, each thriving in its own right, gelling together in a powerful way. Standards-based presentation comes from XHTML and CSS. Dynamic display and interaction are incorporated using the Document Object Model. Data interchange and manipulation are mainly the work of XML and XSLT. Data is retrieved asynchronously with the help of XMLHttpRequest, and finally JavaScript, residing on the client side, binds everything together and dynamically displays and interacts with the information. The XMLHttpRequest object has been the key to the success of AJAX, as it enables asynchronous data exchange with servers.
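A minimal sketch of this pattern follows. The '/inbox/summary' endpoint and the page region are hypothetical, and since a real XMLHttpRequest exists only inside a browser, a small stub stands in for both the browser object and the server so the flow can be read end to end:

```javascript
// Stub standing in for the browser's XMLHttpRequest (and the server),
// so the AJAX flow can be followed outside a browser.
function StubXMLHttpRequest() {
  this.readyState = 0;
  this.status = 0;
  this.responseText = "";
}
StubXMLHttpRequest.prototype.open = function (method, url) {
  this.url = url;
  this.readyState = 1;
};
StubXMLHttpRequest.prototype.send = function () {
  // Pretend the server answered with a small fragment of content.
  this.readyState = 4;
  this.status = 200;
  this.responseText = "<p>3 new messages</p>";
  if (this.onreadystatechange) this.onreadystatechange();
};

// The page keeps one region of content that is updated in place,
// without reloading the whole page.
let inboxRegion = "";

function refreshInbox() {
  const xhr = new StubXMLHttpRequest(); // in a browser: new XMLHttpRequest()
  xhr.open("GET", "/inbox/summary");    // hypothetical endpoint
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      inboxRegion = xhr.responseText;   // only this region changes
    }
  };
  xhr.send();                           // asynchronous in a real browser
}

refreshInbox();
```

The point of the pattern is in the callback: the page asks for a small piece of data, carries on, and patches in the answer when it arrives.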


An enterprise can also implement AJAX in its portal. With AJAX you can build

applications with rich and dynamic content, by offering simple features like

drag and drop and auto-completion. AJAX now enjoys framework support from both the Java and .NET camps. After the introduction of Microsoft's ASP.NET 2.0, things have become easier still for developers working on Web 2.0 style Web applications, to the extent that you don't even need to be an expert in AJAX to use it. With ASP.NET 2.0, a Web developer can compose a page out of separate working parts that communicate independently with the server.

Online spreadsheets spell a new era where desktop applications are being overtaken by browser-based applications



Mash-ups

These are value-added services in the form of lightweight tools provided by

third parties to be integrated into a Web application. Mostly, mash-ups source

the content from a third party via a public interface or API. For example, the

small box on your personal homepage showing the local weather forecast and another box showing news headlines are both forms of mash-up.

A mash-up application generally comprises three different participants: the API/content provider, the mash-up site and the client's browser interface. The API/content providers are the facilitators of the content being

mashed (sourced). They provide the content for retrieval by making it available

through APIs, which generally are in the form of Web protocols such as REST, Web

services, or RSS/Atom feeds.
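The division of labour among those three participants can be sketched as follows. The two provider functions are stand-ins for real remote APIs (a REST weather service and an RSS headline feed), and all names and data are illustrative:

```javascript
// Stand-in for a REST call such as GET /weather?city=...
function weatherProvider(city) {
  return { city: city, forecast: "sunny, 31C" };
}

// Stand-in for an RSS/Atom headline feed fetch.
function headlineProvider() {
  return ["Markets rally", "New browser released"];
}

// The mash-up site's logic: combine both sources into one page
// model, which is then rendered in the client's browser.
function buildHomepage(city) {
  return {
    weatherBox: weatherProvider(city),
    newsBox: headlineProvider(),
  };
}

const page = buildHomepage("Delhi");
```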

There are some sites that do not provide any facility for content retrieval; for those, there are mash-up techniques to extract content. 'Screen scraping' is a process by which a tool attempts to extract information from the content provider by parsing the provider's web pages and formatting the retrieved content as XML data that is sent back as a response to an HTTP request. The mash-up site, on the other hand, is where the mash-up logic resides. Mash-ups can be implemented similarly to traditional Web applications, using server-side dynamic content generation technologies like Java servlets, PHP or ASP. Finally, the mash-up application is rendered on the client's browser interface, where the actual user interaction takes place.
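The screen-scraping step described above can be sketched as below. The page markup and the pattern being matched are illustrative; real scrapers must cope with far messier HTML than this:

```javascript
// Screen scraping: pull a value out of a provider's HTML page and
// re-emit it as XML, as if responding to an HTTP request.
function scrapeTemperature(html) {
  const match = /<span class="temp">([^<]+)<\/span>/.exec(html);
  if (!match) return null;
  // Re-format the scraped value as an XML response.
  return "<weather><temp>" + match[1] + "</temp></weather>";
}

// A provider page that offers no API, only human-readable HTML.
const providerPage =
  '<html><body>Today: <span class="temp">31C</span></body></html>';
const xml = scrapeTemperature(providerPage);
```

This is also why screen scraping is fragile: any change to the provider's page layout breaks the extraction.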


Interestingly, these mash-ups have opened new avenues for enterprises, as they enable interoperability. For instance, an SAP ERP can communicate with a Java application or a PHP website and present a unified result to the user. Mash-ups can also be used to source content from disparate Web services, so it makes sense to implement mash-ups in an enterprise intranet. This will provide enterprises with a common platform from which the content of different applications can be accessed. As the portal would be accessed through a Web browser, there would be no need to install client environments for different applications, giving employees the flexibility to work from different locations. They just need to be on the Net and have a Web browser to access it.


Web feed

A Web feed is a data format generally used to provide users with frequently updated content. Content distributors syndicate a Web feed, and users subscribe to it. Entire collections of Web feeds can be brought together at one spot with the use of aggregators. An aggregator is client software or a Web-based service that collects syndicated Web content, like blogs, news headlines etc, at a single location.

Using a Web feed is as simple as dragging a link from the Web browser to the aggregator. The content provider publishes a feed link on its website and end users subscribe to it via an aggregator hosted on their machine. The aggregator enquires whether any new content has been uploaded to the server and then either makes a note of the new content or downloads it to the client's machine.
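The aggregator's polling step can be sketched as follows: compare the items the feed currently offers against the item ids seen on earlier polls, and keep only the new ones. The feed contents here are invented for illustration:

```javascript
// Ids of items the aggregator has already seen across polls.
const seenIds = new Set();

// One poll of the feed: return only the items not seen before.
function checkFeed(items) {
  const fresh = items.filter((item) => !seenIds.has(item.id));
  fresh.forEach((item) => seenIds.add(item.id));
  return fresh; // the aggregator notes or downloads these
}

// First poll: everything is new.
const firstPoll = checkFeed([
  { id: "a1", title: "Post one" },
  { id: "a2", title: "Post two" },
]);

// Second poll: only the unseen item comes back.
const secondPoll = checkFeed([
  { id: "a2", title: "Post two" },
  { id: "a3", title: "Post three" },
]);
```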



Web feeds are designed to be machine-readable rather than being

human-readable, hence they can be used to automatically transfer information

from one website to another without human intervention. The two main Web feed

formats are RSS and Atom.

RSS: RSS (Really Simple Syndication) is an XML-based format which, when used with feeds and aggregators, offers website summaries and syndication. An RSS feed takes the form of a single XML file that can be hosted and updated automatically by the website owner, and accessed and read by RSS feed software.

An RSS 2.0 feed is an XML document whose global container is the rss tag. This file has tags that define the main website, and provides a set of item tags representing links that have been published or updated on the site. Each item tag consists of a title, a short description of the item and a link to the full text. The RSS feed reader downloads this XML file and parses it to form HTML that gets displayed in the user's browser in the form of hyperlinks pointing to the original website.
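A minimal RSS 2.0 file of the shape just described might look like this (all titles and URLs are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Site</title>
    <link>http://www.example.com/</link>
    <description>Placeholder feed illustrating the structure.</description>
    <item>
      <title>First article</title>
      <link>http://www.example.com/articles/1</link>
      <description>Short description of the item.</description>
    </item>
  </channel>
</rss>
```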

Atom: There are two standards related to Atom: the Atom Syndication Format is an XML language, and the Atom Publishing Protocol is a simple HTTP-based protocol for creating and updating Web resources. The development of Atom was a result of incompatible versions of RSS and poor interoperability. Even though the functionality of RSS and Atom is somewhat similar, the intention with Atom is to make developing applications around Web syndication feeds easier.

RSS may contain either plain text or escaped HTML as a payload, and there is no way to indicate which of the two is provided. In contrast, Atom uses an explicitly labeled payload container. Hence, a greater variety of payload types, like plain text, escaped HTML, XHTML, XML and Base64-encoded binary, is available in Atom, and references to third-party content, like documents, video and audio streams, can also be made. Another point of difference between Atom and RSS is that Atom includes an XML schema, whereas RSS does not.
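The explicitly labeled payload container looks like this in a hypothetical Atom entry; note the type attribute on content, which RSS has no equivalent of:

```xml
<entry xmlns="http://www.w3.org/2005/Atom">
  <title>First article</title>
  <id>tag:example.com,2007:/articles/1</id>
  <updated>2007-01-15T10:00:00Z</updated>
  <content type="xhtml">
    <div xmlns="http://www.w3.org/1999/xhtml"><p>Marked-up body.</p></div>
  </content>
</entry>
```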

Web feeds are useful for any enterprise that plans to integrate them into its portal solution. They can be used for showing corporate data as well, such as the latest software build status, network uptime, upcoming corporate meetings or other dashboard-like features.

Road to Enterprise 2.0

Collaboration and resource sharing are two advances of Web 2.0 that enterprises can hope to benefit from. Blogging and Wiki are two collaboration techniques that enterprises can look forward to integrating into their portal solutions. Adding a blog to an enterprise's portal will add a human, interactive touch to a vendor-customer or management-employee relationship. Similarly, a central information repository can be created with the help of a Wiki.

As we have seen, Web feeds and mash-ups can be used efficiently in an enterprise. Through Web feeds, employees can keep abreast of the latest data stored in corporate applications. Mash-ups, on the other hand, provide rich user interfaces that address the need for increased worker productivity by making it easier for a user to find and use the information needed for a particular task or role.

Many applications based on the SaaS (Software as a Service) model are now being developed for use as Web applications. Enterprises are already using SaaS applications in the areas of CRM, HR, accounting and e-mail, which can be accessed over the network and require just a stub application to be installed. As these applications can be made available over the Internet and require just a Web browser for execution on client machines, there is no need to install any service application on individual machines. When such applications are used in an enterprise portal, employees can access them remotely via the Internet.

Another Web 2.0 feature is tag-based search, whereby user-defined tags are associated with content and search results are generated from those tags. Such a tagging tool can be incorporated into enterprise portals, so that employees define tags for the content they put in the corporate data repository. Other employees can later find those files based on the tags, and can add further tags to such files to refine the search.
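A tag-based search of this kind can be sketched as below. The file names and tags are invented; the point is that search matches on the tags employees attach, not on file contents:

```javascript
// Shared repository mapping file name -> set of user-defined tags.
const repository = new Map();

// An employee attaches tags to a file (tags are case-insensitive).
function tagFile(name, tags) {
  if (!repository.has(name)) repository.set(name, new Set());
  tags.forEach((t) => repository.get(name).add(t.toLowerCase()));
}

// Search returns every file carrying the given tag.
function searchByTag(tag) {
  const hits = [];
  for (const [name, tags] of repository) {
    if (tags.has(tag.toLowerCase())) hits.push(name);
  }
  return hits;
}

tagFile("q3-report.doc", ["finance", "quarterly"]);
tagFile("roadmap.ppt", ["planning"]);
// A colleague later refines the tagging by adding another tag.
tagFile("roadmap.ppt", ["finance"]);

const financeFiles = searchByTag("finance");
```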

This new approach will open up new business avenues for enterprises, as it will help them increase the opportunities for productive interaction between employees, customers and partners, which is critical for the growth of any organization.
Web 3.0....Is it the future?

While we are talking extensively about the technologies behind Web 2.0 and how an enterprise can benefit from them, there is already talk of a newer Web version, Web 3.0, also referred to as the Semantic Web. The vision is to make computers capable of understanding information and performing tedious tasks such as finding, sharing and combining information on the Web.

With Web 3.0, web content won't be restricted to formats understandable only to humans, but will move beyond that to a form that software agents can read, understand and hence use, allowing machines to find, share and integrate information more conveniently and on their own. It derives from Tim Berners-Lee's vision of making the Web the universal medium for data, information and knowledge exchange.

One key technology that will eventually make Web 3.0 possible is RDF (Resource Description Framework), a simple language for expressing data models in terms of objects and their relationships. OWL (Web Ontology Language) is another technology that will form the background for Web 3.0. It adds more vocabulary for describing properties and classes, and describes relationships between classes, characteristics of properties and so on.
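The RDF idea of objects and their relationships can be illustrated with a small fragment in the Turtle notation; every name here is invented:

```turtle
@prefix ex: <http://example.org/terms#> .

# Statements are triples: subject, relationship (predicate), object.
ex:Article42  ex:author   ex:JaneDoe .
ex:Article42  ex:topic    "Web 3.0" .
ex:JaneDoe    ex:worksFor ex:ExampleCorp .
```

Because the relationships are explicit, a software agent can follow them, for instance from an article to its author to her employer, without any human interpretation.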

XML will obviously be there along with the above-mentioned technologies, and together they will form the backbone of Web 3.0. It's still some time before we see it arrive in a big way, but it surely spells a threat to the ruling desktop applications. It's only a matter of time. Till then, let's wait and watch.