December 31, 2001

The Internet is no longer just a medium for viewing static information. Today you can watch movies, listen to CD-quality music, and even play games on it. All of this, however, takes a toll on bandwidth, making it a costly affair both for the user and for the company providing the content: for the user, it’s the cost of the connection; for the company, it’s the cost of bandwidth. With the arrival of broadband, the cost for users has come down tremendously. However, it has put a further load on content providers, as people think nothing of leaving their computers on and connected for extended periods. This makes efficient delivery of multimedia content over the Internet an even bigger challenge. So what’s the solution? Enter content-delivery networks.

Immediately after the recent World Trade Center tragedy, Internet news sites were flooded, and most were unable to cope with the huge load spike, especially those serving multimedia content. (There were a few exceptions.) Such a situation may occur again and again as more and more people log on to the Web and access multimedia services, and content providers may have to invest more and more in bandwidth and delivery networks. Unlike traditional media, where there is no incremental cost of the medium, on the Internet costs scale with the number of users and with file sizes, while efficiency goes down. A content-delivery network is meant to handle such situations: it is supposed to reduce Internet traffic jams and optimize the use of Internet bandwidth.

Simply put, a content-delivery network is the addition of components to an existing network infrastructure. The idea is to create multiple copies of the same content in different locations. When a user requests particular information, the network tracks the user’s location and delivers the content from the location nearest to the user. For this, a content-delivery network needs components that tackle three basic issues–content routing, delivery, and performance measurement. Just as a picture speaks a thousand words, concepts become clearer with examples, so we’ll understand content-delivery networks with the help of an example called Kontiki. It’s a company that specializes in content-delivery networks, and has software (also called Kontiki) for the same as well. Here, we’ll look at the technologies used in Kontiki.
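The content-routing idea above can be sketched in a few lines of Python. This is a hypothetical illustration, not Kontiki’s actual routing logic: replica locations and latency figures are made up, and a real network would measure proximity continuously rather than from a fixed table.

```python
# Illustrative sketch: route a request to the "nearest" replica,
# where nearness is approximated by measured latency. All names
# and numbers are invented for the example.

def route_request(replicas, latency_ms):
    """Return the replica with the lowest measured latency."""
    reachable = [r for r in replicas if r in latency_ms]
    if not reachable:
        raise LookupError("no reachable replica holds this content")
    return min(reachable, key=lambda r: latency_ms[r])

replicas = ["mumbai", "singapore", "london"]
latency = {"mumbai": 40, "singapore": 110, "london": 210}
print(route_request(replicas, latency))  # mumbai
```

The performance-measurement component of a real CDN would keep the latency table up to date; the routing decision itself stays this simple.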

The Kontiki core
Kontiki uses the concept of bandwidth harvesting for delivering media content. This smooths out spikes in bandwidth consumption using time shifting, adaptive-rate multiserving, and content caching.

In traditional systems, network usage is erratic: demand peaks at certain times of the day, while the network lies largely unused the rest of the time. This would not be a problem but for the fact that payment is usually made for a fixed amount of bandwidth or for the peak bandwidth used. Kontiki’s directory builds up lists of people who have asked for upcoming one-time deliveries of content (like software and movies) or ongoing deliveries (like a weekly news summary). The network is designed to deliver such content during off-peak hours, thereby filling in the low bandwidth-consumption periods. This means that providers have to pay for less bandwidth while optimally utilizing the bandwidth they have.
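Time shifting boils down to holding queued deliveries until an off-peak window opens. The sketch below assumes an invented off-peak window of 1 a.m. to 6 a.m.; it is not Kontiki’s scheduler, just the idea in miniature.

```python
# Illustrative sketch: time-shifted delivery. Subscribed items wait
# in a queue and are released only during an off-peak window,
# flattening the daily bandwidth spike. Window hours are invented.

OFF_PEAK_START, OFF_PEAK_END = 1, 6   # 1 a.m. to 6 a.m.

def is_off_peak(hour):
    return OFF_PEAK_START <= hour < OFF_PEAK_END

def deliveries_due(queue, hour):
    """Return the queued items that may be sent at this hour."""
    return list(queue) if is_off_peak(hour) else []

queue = ["weekly-news-summary", "movie-trailer"]
print(deliveries_due(queue, 3))    # both items released at 3 a.m.
print(deliveries_due(queue, 20))   # nothing goes out at 8 p.m.
```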

Adaptive-rate multiserving is a fancy name for technology similar to GetRight’s segmented downloading. The same content is picked up from different servers, with most of it progressively fetched from the nearest and fastest nodes. This means faster downloads and deliveries, and it also provides built-in fault tolerance: if a particular node goes down, the others continue to serve the content.
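The fetch-from-the-fastest-node-that-has-it behaviour, including the fallback when a node is down, can be sketched as follows. The nodes and their speeds are simulated; this is an assumption-laden illustration, not a real Kontiki API.

```python
# Illustrative sketch: adaptive-rate multiserving. A file is split
# into chunks, and each chunk is fetched from the fastest node that
# is up and holds it; a dead node is simply skipped.

def fetch_file(chunks, nodes):
    """nodes: {name: {"speed": kbps, "up": bool, "data": {chunk_id: bytes}}}"""
    parts = []
    for chunk in chunks:
        candidates = [n for n in nodes.values()
                      if n["up"] and chunk in n["data"]]
        if not candidates:
            raise IOError("chunk %s unavailable on all nodes" % chunk)
        best = max(candidates, key=lambda n: n["speed"])  # fastest wins
        parts.append(best["data"][chunk])
    return b"".join(parts)

nodes = {
    "near": {"speed": 512, "up": True,  "data": {0: b"He", 1: b"ll"}},
    "far":  {"speed": 64,  "up": True,  "data": {0: b"He", 2: b"o!"}},
    "down": {"speed": 999, "up": False, "data": {0: b"xx", 1: b"xx", 2: b"xx"}},
}
print(fetch_file([0, 1, 2], nodes))  # b'Hello!' despite one node being down
```

Note the fault tolerance the article describes: the fastest node in the table is down, yet the file still assembles from the surviving nodes.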

Caching further reduces the bandwidth required. Whenever a new file is released, some clients download it from the central repository, while others simply pick it up from those that were first off the blocks; clients who come in later pick it up from these, and so on. Relay caching, as this is called, tries to ensure that the computers being served by a client are as few hops away from it as possible, which again optimizes delivery. As the cache spreads across the network, the initial relay cache becomes an on-demand cache, meaning that whenever there is demand for a file, a client picks it up from the nearest node where it is available.
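The relay-caching idea can be sketched like this: each client fetches the file from whichever current holder is fewest hops away, then becomes a holder itself. The hop counts and node names are invented for the illustration.

```python
# Illustrative sketch: relay caching. A new file starts on the
# origin server; each client fetches it from the nearest holder
# and then relays it onward, so the cache spreads outward.

def nearest_holder(client, holders, hops):
    """hops[(a, b)] gives the hop count between two nodes."""
    return min(holders, key=lambda h: hops[(client, h)])

hops = {("c1", "origin"): 5,
        ("c2", "origin"): 6, ("c2", "c1"): 1}
holders = ["origin"]
for client in ["c1", "c2"]:
    src = nearest_holder(client, holders, hops)
    print(client, "fetches from", src)
    holders.append(client)   # the client now relays the file too
```

The first client has no choice but the origin; the second finds a copy one hop away and never touches the central repository, which is exactly the bandwidth saving the article describes.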

This idea is not new; the Gnutella protocol, too, uses something similar. It uses the concept of a “servent” (server + client), i.e., the software–Kontiki in this case–acts as a server as well as a client.
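A servent is easy to picture as one object playing both roles. The class below is purely illustrative and implements no real Gnutella or Kontiki protocol; the file name and data are made up.

```python
# Illustrative sketch of a "servent": the same object requests files
# from peers (client role) and answers peers' requests (server role).

class Servent:
    def __init__(self, name):
        self.name = name
        self.store = {}

    def serve(self, filename):            # server role
        return self.store.get(filename)

    def fetch(self, filename, peer):      # client role
        data = peer.serve(filename)
        if data is not None:
            self.store[filename] = data   # cached: can now serve it
        return data

a, b, c = Servent("a"), Servent("b"), Servent("c")
a.store["song.mp3"] = b"..."
b.fetch("song.mp3", a)         # b downloads from a...
print(c.fetch("song.mp3", b))  # ...and c gets it from b, not from a
```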

Security issues
Kontiki has also integrated security protocols and network management into the system, ensuring that flak from industry fora is minimal. It delivers any kind of file, works with existing infrastructure, and uses XML to authenticate and communicate details about the deliveries. It operates over HTTP and is therefore compatible with firewalls. The Kontiki Distributed Network Kernel is an embeddable component that can be used across platforms and in a variety of devices.
None of the three ideas at the core of this content-delivery network is, truth be told, earth-shattering; they have all been thought of and implemented before. The fact that they were brought together in one application in this manner is what makes it truly innovative.

This software-based network scales without limits, and its efficiency increases in direct proportion to the number of users. This reverses the initial problem we had, i.e., that the more users there are, the poorer the experience.
Overall, content-delivery networks are among the happening things in the Internet world. Some other players in the field include Digital Fountain and Deviant Art.

Chirag Patnaik
