Understanding Quality of Service

PCQ Bureau

The key objective behind any bandwidth-management solution is to provide a particular QoS to the end customer. For an organization, the customers are its employees; for an ISP, the customers are organizations. In the latter case, an organization must ensure that it gets a minimum guaranteed QoS from the ISP, and this must be put down in the SLA (Service Level Agreement). In the former case, there may not always be a formal SLA (though some organizations do have one), but it is the IT department's job to assure a minimum guaranteed QoS. Here, we'll first understand what QoS is and why it is so important, followed by how organizations can apply it to their setups.

QoS defines what kind of throughput is available to applications in a particular environment, be it a LAN, MAN or WAN. Traffic can be handled in three ways: FIFO (First In First Out), QoS and CoS (Class of Service). In the first case, you allow application traffic to pass over the link as is. No traffic shaping, marking or prioritizing happens, so there's no bandwidth management. The second method, QoS proper, aims to guarantee a minimum bandwidth level for different applications. It does so by analyzing the traffic, prioritizing it in order of importance, and allowing high-priority traffic to go first. CoS divides the network into smaller classes of service, each with dedicated network resources.
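
To make the contrast concrete, here is a minimal Python sketch of the first two approaches: a FIFO queue that sends packets strictly in arrival order, and a priority queue that lets important traffic jump ahead. The application names and priority values are illustrative assumptions, not taken from any real setup.

```python
from collections import deque
import heapq

# Hypothetical packets: (application, priority), lower number = more important
packets = [("backup", 3), ("voip", 0), ("web", 2), ("erp", 1)]

# FIFO: packets leave in arrival order; no bandwidth management at all.
fifo = deque(packets)
print([fifo.popleft()[0] for _ in range(len(packets))])
# -> ['backup', 'voip', 'web', 'erp']

# QoS-style priority queue: high-priority traffic goes first.
pq = [(prio, name) for name, prio in packets]
heapq.heapify(pq)
print([heapq.heappop(pq)[1] for _ in range(len(packets))])
# -> ['voip', 'erp', 'web', 'backup']
```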

Defining a policy

Before we get into the nitty-gritty of QoS, it's important for businesses to set up access policies on their network. These must be defined at both the network and the user level. This becomes critical at the WAN level, because bandwidth is limited there and you're paying a recurring cost for it. At the network level, you need to define which applications deserve what priority. This is where you have to determine which applications are critical for running your business. At the user level, you must define an AUP or Acceptable Usage Policy, which lays down the do's and don'ts on your network. For instance, do you want to allow chatting over public Instant Messengers, or permit heavy downloads? Once the policy has been defined, you need to start by assessing your existing network usage.
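
Such a policy can be expressed as plain data that tools then enforce. The sketch below, with entirely hypothetical application names, priorities and AUP rules, shows one way to capture both levels in Python.

```python
# A minimal sketch of a two-level policy expressed as data.
# All names, priorities and limits here are hypothetical examples.
POLICY = {
    "application_priority": {   # network level: who gets bandwidth first
        "erp": 1,               # business critical
        "email": 2,
        "web": 3,
        "p2p": 9,               # lowest priority
    },
    "acceptable_use": {         # user level: the do's and don'ts
        "public_instant_messengers": False,
        "bulk_downloads_office_hours": False,
        "max_download_mb_per_day": 500,
    },
}

def allowed(action: str) -> bool:
    """Check a user action against the Acceptable Usage Policy."""
    return POLICY["acceptable_use"].get(action, True)

print(allowed("public_instant_messengers"))   # -> False
```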

Measuring existing QoS

In order to provide the right quality of service to your applications, it's important to first understand how your network is doing. What sort of quality of service currently exists on your network? This is measured using three variables: latency, dropped packets and jitter. Latency is the time it takes for data to travel from source to destination without any load. Since this is an ideal condition, it's inherent to the equipment being used, and therefore a permanent phenomenon. The only way to reduce latency is to choose your equipment carefully, else there will be permanent latency built into the system. The second measure is the number of packets being dropped. Since IP is by nature unreliable, it will lose or drop packets as the network becomes congested. A higher packet drop rate indicates a congested network, meaning poor quality of service. The last measure is jitter. This is an unpredictable, variable delay in data travelling from source to destination under a certain load. It's unpredictable because it depends upon how congested your network is. Jitter becomes of utmost importance in a converged network, where you're passing voice, video and data over the same link. As network utilization increases, both dropped packets and jitter increase.
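
A small sketch of how the three variables fall out of raw measurements: given a series of one-way delay samples from a monitoring probe (the numbers below are made up), average latency, loss rate and jitter are simple arithmetic. Jitter here is computed as the mean variation between consecutive delays, in the spirit of RFC 3550's interarrival jitter.

```python
from statistics import mean

# Hypothetical one-way delay samples in milliseconds;
# None marks a dropped packet.
samples = [20.1, 20.4, None, 35.0, 21.2, None, 22.8]

received = [s for s in samples if s is not None]
loss_rate = (len(samples) - len(received)) / len(samples)

# Jitter: mean absolute difference between consecutive delays.
jitter = mean(abs(a - b) for a, b in zip(received, received[1:]))

print(f"avg latency: {mean(received):.1f} ms")
print(f"packet loss: {loss_rate:.0%}")
print(f"jitter:      {jitter:.1f} ms")
```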

One solution to traffic congestion is to increase the bandwidth. Unfortunately, bandwidth is one commodity that will never be sufficient. Whenever you buy more bandwidth, it is quickly consumed by users for non-critical applications. Extra bandwidth is, therefore, not a permanent solution. It's better to do congestion management and provide the right QoS to the applications. So, the first thing to do before applying QoS is to monitor your network's utilization. At what percent utilization does it peak? At what time of day does this happen? Which applications hog the most bandwidth? How well are you managing your WAN links? Are your mission-critical applications getting priority over regular applications? You will need to put network-monitoring applications in place to answer these questions.
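
The core utilization figure itself is easy to derive. The sketch below assumes a 2 Mbps WAN link and two hypothetical readings of an interface byte counter one second apart; real numbers would come from SNMP or a similar monitoring source.

```python
# Sketch: link utilization from two interface byte-counter readings.
LINK_SPEED_BPS = 2_000_000                    # 2 Mbps WAN link (assumption)

bytes_t0, bytes_t1 = 41_250_000, 41_430_000   # hypothetical counter values
interval_s = 1.0

bits_per_second = (bytes_t1 - bytes_t0) * 8 / interval_s
utilization = bits_per_second / LINK_SPEED_BPS
print(f"utilization: {utilization:.0%}")      # -> 72%
```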

Data classification

You can improve QoS by controlling the flow of data across your network. In order to do that, you must be able to identify the types of packets that are flowing. You must also be able to mark them with tags that indicate their order of priority. Finally, you must be able to control these packets and allow those with higher priority to pass first. For very sensitive data, you must also be able to reserve fixed bandwidth for certain types of packets. In the networking world, this is achieved by various protocols. Marking of packets is done by 802.1Q/p, DiffServ and MPLS. Bandwidth reservation is done by RSVP, and classification of packets is done by IntServ.
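
As a taste of layer 3 marking, the sketch below sets the TOS/DSCP byte on a UDP socket from an application. The IP_TOS socket option is available on Linux (other platforms may differ), DSCP occupies the top six bits of the TOS byte, and the address and payload are made up for illustration.

```python
import socket

DSCP_EF = 46            # Expedited Forwarding, commonly used for voice
tos = DSCP_EF << 2      # DSCP sits in the upper six bits -> 184

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Mark every packet this socket sends with the EF code point (Linux).
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
sock.sendto(b"voice sample", ("192.0.2.10", 5004))   # documentation address
```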

The 802.1Q/p protocol works at layer 2 and, therefore, can't be routed. That's why it's suitable for a LAN. This form of packet marking is only possible in certain layer 2 switches. In Cisco Catalyst switches, for instance, incoming data frames can be tagged with a CoS value, and the switch can differentiate frames on that basis.
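
For reference, the priority lives in the 16-bit Tag Control Information field of the 802.1Q header: the 802.1p priority (PCP) occupies the top three bits, followed by the DEI bit and a 12-bit VLAN ID. A small sketch of packing that field, with illustrative values:

```python
import struct

def tci(pcp: int, dei: int, vlan_id: int) -> bytes:
    """Pack the 802.1Q Tag Control Information field (big-endian 16 bits)."""
    assert 0 <= pcp <= 7 and dei in (0, 1) and 0 <= vlan_id <= 4095
    return struct.pack("!H", (pcp << 13) | (dei << 12) | vlan_id)

# Example: a voice frame at priority 5 on VLAN 100.
print(tci(pcp=5, dei=0, vlan_id=100).hex())   # -> 'a064'
```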

DiffServ marks packets at layer 3, so they can cross routers, but its markings aren't recognized across different networks. That's why it's only suitable for a MAN with private leased links. MPLS, on the other hand, works on WAN links, as its markings can cross routers and are recognized by different networks.

While marking protocols can classify packets in order of priority, they can't guarantee that packets will be delivered on time. In other words, if the network is congested, no amount of marking will help. That's where RSVP comes into the picture: it simply reserves a particular amount of bandwidth for certain types of data. IntServ, on the other hand, defines three types of reservation, namely guaranteed, controlled load and best effort.
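
The essence of reservation is admission control: a flow is admitted only if the remaining capacity can honor its guarantee. A minimal sketch, with a hypothetical 512 kbps link and made-up flow sizes:

```python
class Link:
    """One hop with a fixed capacity and a running reservation total."""
    def __init__(self, capacity_kbps: int):
        self.capacity = capacity_kbps
        self.reserved = 0

    def reserve(self, kbps: int) -> bool:
        """Admit the flow only if its guarantee can be honored."""
        if self.reserved + kbps > self.capacity:
            return False
        self.reserved += kbps
        return True

wan = Link(capacity_kbps=512)
print(wan.reserve(384))   # True: reservation accepted
print(wan.reserve(256))   # False: only 128 kbps left unreserved
```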

Congestion management

Once data has been classified, it needs to be prioritized and then sent across. This is where congestion management steps in. There are lots of applications running on a network, all of which have different requirements of it. For instance, voice and video applications are time sensitive, in that they can't afford any kind of jitter or delay in transmission. Therefore, when they're used on the network, they must assume priority over other applications. Congestion management helps do that. It also helps in cases of bursty traffic, which can cause temporary congestion, and when users are experiencing poor response times from their applications.

Congestion management algorithms follow a few basic steps: first, multiple queues are created in a networking device such as a router; next, packets are assigned to those queues depending upon how they've been classified; finally, the packets are scheduled for transmission across the network.
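
The classify-then-queue step can be sketched in a few lines: one queue per traffic class, with packets assigned by their marking. The DSCP values (46 for EF, 26 for AF31) are standard code points, but the queue names and packets are illustrative.

```python
from collections import deque

queues = {"voice": deque(), "business": deque(), "best_effort": deque()}
DSCP_TO_QUEUE = {46: "voice", 26: "business"}   # EF, AF31

def enqueue(packet: dict) -> None:
    """Place a packet in the queue matching its DSCP marking."""
    q = DSCP_TO_QUEUE.get(packet["dscp"], "best_effort")
    queues[q].append(packet)

for pkt in [{"dscp": 0, "id": 1}, {"dscp": 46, "id": 2}, {"dscp": 26, "id": 3}]:
    enqueue(pkt)

print({name: [p["id"] for p in q] for name, q in queues.items()})
# -> {'voice': [2], 'business': [3], 'best_effort': [1]}
```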

There can be various types of queuing in congestion management. The most basic is FIFO (First In First Out), wherein packets are sent out in the order they come in. This is useful when traffic on the network is light. Beyond this, other mechanisms such as Fair Queuing and Round Robin are used.
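
Round Robin, for instance, visits each class queue in turn and sends one packet per visit, so no queue is starved; weighted variants send more packets per visit to favored classes. A minimal sketch with two hypothetical queues:

```python
from collections import deque
from itertools import cycle

queues = {
    "voice": deque(["v1", "v2"]),
    "data": deque(["d1", "d2", "d3"]),
}

order = cycle(queues)           # visit the queues in turn, forever
sent = []
while any(queues.values()):
    q = queues[next(order)]
    if q:                       # skip a queue once it is empty
        sent.append(q.popleft())

print(sent)   # -> ['v1', 'd1', 'v2', 'd2', 'd3']
```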

Congestion avoidance

There's another form of queue management in practice, called congestion avoidance. Unlike congestion management, where prioritization of traffic is done after congestion has occurred, this technique does it before. It involves keeping tabs on the traffic flow on the network. The moment it exceeds a certain threshold, packets are dropped. When the source sending the packets detects that packets are being dropped, it slows down its transmission, thereby preventing congestion from happening in the first place.
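
A well-known scheme in this spirit is RED (Random Early Detection): as the average queue depth crosses a minimum threshold, packets are dropped with growing probability, nudging senders to slow down before the queue overflows. The thresholds below are illustrative, not recommended values.

```python
import random

MIN_TH, MAX_TH, MAX_P = 20, 60, 0.1   # illustrative RED parameters

def drop_probability(avg_queue_depth: float) -> float:
    """Probability of an early drop for a given average queue depth."""
    if avg_queue_depth < MIN_TH:
        return 0.0
    if avg_queue_depth >= MAX_TH:
        return 1.0
    return MAX_P * (avg_queue_depth - MIN_TH) / (MAX_TH - MIN_TH)

def admit(avg_queue_depth: float) -> bool:
    return random.random() >= drop_probability(avg_queue_depth)

for depth in (10, 40, 70):
    print(depth, f"p(drop)={drop_probability(depth):.2f}")
```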

In the following articles, we'll look at bandwidth related issues in a WAN and a LAN. 

QoS Protocol: Integrated Services

There are two ways to assure a certain bandwidth for a network service. One is to reserve bandwidth for the service before it flows. The other is to give a certain type of network service priority over other services as it is seen. The former is called Integrated Services or IntServ, and the latter Differentiated Services or DiffServ. For DiffServ, see the section below.

Integrated Services 

A popular QoS (Quality of Service) protocol defined for Integrated Services is RSVP (Resource ReSerVation Protocol). Being a popular protocol, the terms Integrated Services and RSVP are often used interchangeably. In the case of IntServ, the service under consideration is pre-allocated bandwidth. The process starts with the applications (sender and receiver) negotiating and agreeing upon the bandwidth required by a service before it flows on the network. A simple example is video-conferencing applications running on two machines, across networks, for a peer-to-peer conference. The sender sends the receiver a request for allocation, which bundles the bandwidth requirements for the service in terms of the desired transmission rate and some other parameters.

This request may travel across networks through many routers. The negotiation succeeds only if all the routers between the sender and the receiver support RSVP. If RSVP is supported, each router records information about the sender router that passed the negotiation on to it. This establishes a dedicated virtual path from the sender to the receiver across routers. Henceforth, data traveling along this dedicated path is assured a certain bandwidth (as agreed upon during the initial negotiation).
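
The all-or-nothing nature of the negotiation is the key point: one hop without RSVP support, or without spare capacity, and the reservation fails. A minimal sketch with hypothetical routers and capacities:

```python
# Each hop must support RSVP and have enough free bandwidth.
path = [
    {"name": "R1", "rsvp": True,  "free_kbps": 512},
    {"name": "R2", "rsvp": True,  "free_kbps": 384},
    {"name": "R3", "rsvp": False, "free_kbps": 768},
]

def negotiate(path, kbps):
    for router in path:
        if not router["rsvp"] or router["free_kbps"] < kbps:
            return f"refused at {router['name']}"
        router["free_kbps"] -= kbps   # record the reservation at this hop
    return "reserved end to end"

print(negotiate(path, 256))   # -> refused at R3 (no RSVP support)
```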

SBM

Ethernet, as we know, does not inherently provide any type of QoS. On an Ethernet network, any type of traffic flows at any time and in any amount. Ethernet tries its best to deliver all the traffic to the destination without any type of control. This type of network service is called Best Effort service. SBM or Subnet Bandwidth Manager brings the RSVP protocol to Ethernet.

SBM is a signaling protocol that works between network hosts (machines) and switches. SBM consists of two components, namely the Bandwidth Allocator and the Requestor Module. The Bandwidth Allocator resides on the network switch, while the Requestor Module resides on the hosts. There can be several Bandwidth Allocators on the LAN; this facilitates failover. To start with, one of these Bandwidth Allocators is elected to manage the traffic and is called the DSBM (Designated Subnet Bandwidth Manager). Any host that needs to transmit on the network with an assured bandwidth first contacts the DSBM. The DSBM then decides upon reserving the bandwidth for the requested transmission. The subsequent process is similar to that described in the RSVP section, the only difference being that SBM-enabled switches in each network create the path.

QoS Protocol: Differentiated Services

This is the second way to assure a certain bandwidth to a network service; the other is Integrated Services, described above.

Differentiated Services

DiffServ prioritizes traffic on the fly and usually operates at the router. A DiffServ-enabled router applies a PHB (Per Hop Behavior) to traffic entering the network. The traffic (packets) gets marked at the router where it enters a network, called the ingress point, to use a particular type of PHB. There are two types of PHB: Expedited Forwarding and Assured Forwarding. In the case of Expedited Forwarding, low-priority traffic can be dropped in favor of high-priority traffic. In the case of Assured Forwarding, low-priority traffic is held back to let high-priority traffic through; it may or may not be dropped. When the traffic leaves the network, it is unmarked at the router at the egress point. As traffic flows across networks, the DiffServ networks share an SLA (Service Level Agreement), which specifies the marking of traffic and what types of PHB to apply.
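
A simplified sketch of the difference between the two PHB families, assuming a link that can carry only two packets per cycle: Expedited Forwarding drops the excess low-priority traffic outright, while Assured Forwarding holds it back for a later cycle. The traffic mix is made up, and real PHB semantics are considerably richer.

```python
from collections import deque

def transmit(queue: deque, phb: str, slots: int = 2) -> list:
    """Send up to `slots` packets, high priority first."""
    ordered = sorted(queue, key=lambda p: p["prio"] != "high")
    sent, excess = ordered[:slots], ordered[slots:]
    queue.clear()
    if phb == "AF":
        queue.extend(excess)   # held back; may still go in a later cycle
    # under EF the excess low-priority packets are simply dropped
    return [p["id"] for p in sent]

q = deque([{"id": 1, "prio": "low"}, {"id": 2, "prio": "high"},
           {"id": 3, "prio": "low"}])
print(transmit(q, phb="AF"), "held:", [p["id"] for p in q])
# -> [2, 1] held: [3]
```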

MPLS

Worth mentioning is another protocol, Multi-Protocol Label Switching (MPLS), which is similar to DiffServ. In MPLS, each router determines the most efficient next hop (router) for the traffic by looking up a label. The first router (the ingress point) calculates the route for the traffic according to its priority and adds a label to the packets that identifies that priority.

The calculated route for the traffic is stored in a table where the next hop (router) is determined using the label as the index value. Thus, subsequent routers look up the next hop in the table by using the label as an index. The lookup results in a new label, which is substituted for the existing one, and the packet is routed to the next router. By using a label to determine the next hop, routers are relieved of the burden of running complex routing algorithms, and the label specifies a route for the traffic as per the decided priority.
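
A toy sketch of label swapping: the incoming label indexes a table that yields the next hop and the replacement label, with no per-packet route computation. The labels and router names below are hypothetical, and a real label table lives on each router rather than in one place.

```python
LABEL_TABLE = {   # incoming label -> (next hop, outgoing label)
    17: ("router-B", 24),
    24: ("router-C", 31),
    31: ("destination", None),   # egress: label popped
}

def forward(label: int) -> None:
    while LABEL_TABLE[label][1] is not None:
        next_hop, new_label = LABEL_TABLE[label]
        print(f"label {label} -> forward to {next_hop}, swap to {new_label}")
        label = new_label
    print(f"label {label} -> egress at {LABEL_TABLE[label][0]}, label popped")

forward(17)
```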
