Switched networks are a definite improvement over shared ones, mainly because they provide a dedicated connection between any two points. So if node A on port 1 of a switch sends a packet to node B on port 2, the packet travels straight to its destination. In a hub, by contrast, a packet arriving on one port is forwarded to all other ports, and only the destination node picks it up. This greatly increases the chances of packet collisions, degrading performance as the number of nodes grows. So if you are experiencing poor performance, it may be time to move to a switched network. If you already have one, it may be time to explore it further and optimize it. In this article, we'll look at some of the other features available in switches: resilient links for surviving link failures, aggregated bandwidth for higher network throughput, and traffic prioritization for better quality of service. A switch may have all or only some of these features, so it's worth checking which ones your switches support.
Adding resilience
Resilience in a network is meant to reduce the impact of link failures: if one link fails, another should automatically take over. Most managed switches today allow you to create resilient links, usually using two ports on the switch. One port acts as the primary link, while the second is a standby that remains inactive until the primary link fails. The moment this happens, the standby link takes over, so users have no idea that a link failure has occurred. The switchover usually takes just a few seconds, so network services are not disrupted for long.
One place where you may need to create a resilient link is your application server, which might be running a central database, CRM, or ERP package. The resilient link ensures that the service doesn't get disrupted by a link failure. Many network servers today come with two network cards for redundancy; connect one to the primary port on the switch and the other to the standby port.
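To make the idea concrete, here is a minimal Python sketch of the failover logic behind a resilient link pair. The port numbers and the is_link_up() check are hypothetical stand-ins, used only for illustration; a real switch senses link state in hardware.

```python
# Minimal sketch of resilient-link failover logic (illustrative only).
# The port numbers and is_link_up() check are hypothetical stand-ins for
# whatever mechanism a real switch uses to sense link state.

class ResilientLink:
    def __init__(self, primary_port, standby_port):
        self.primary_port = primary_port
        self.standby_port = standby_port
        self.active_port = primary_port      # standby stays inactive until needed

    def poll(self, is_link_up):
        """Check link state and fail over to the standby if the primary drops."""
        if self.active_port == self.primary_port and not is_link_up(self.primary_port):
            self.active_port = self.standby_port   # takeover takes only seconds
        elif self.active_port == self.standby_port and is_link_up(self.primary_port):
            self.active_port = self.primary_port   # fall back once the primary recovers

    def forward(self, frame, send):
        """Always send traffic out of whichever port is currently active."""
        send(self.active_port, frame)


# Example: port 1 is the primary link, port 2 is the standby.
link = ResilientLink(primary_port=1, standby_port=2)
link.poll(is_link_up=lambda port: port != 1)   # simulate the primary link failing
print(link.active_port)                        # -> 2, the standby has taken over
```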
Improving bandwidth
If you think your network is getting congested due to heavy traffic, you may need to set up aggregated links on the backbone or between network segments to increase the available bandwidth. Port or link aggregation is another interesting feature found in many switches today. As the name suggests, port aggregation lets you create a single logical link out of multiple ports on a switch, combining the bandwidth of each port. So if you have two 100 Mbps switches that support link aggregation of up to four ports, you can connect four ports on one switch to four ports on the other. This gives you four times the bandwidth between the two switches: with each port configured at 100 Mbps in full-duplex mode, that works out to 400 Mbps in each direction, or 800 Mbps of overall bandwidth between the two switches. Moreover, if one of the links fails, the network continues to function because the remaining links carry the traffic. Normally, the switch tries to distribute traffic evenly across the aggregated links, so that no single link gets overloaded.
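As a rough illustration of how a switch might spread traffic across an aggregated link, here is a Python sketch that hashes each conversation's source and destination addresses to pick one of the member ports, skipping ports whose links have failed. Hashing on address pairs is a common approach, but the exact scheme shown here is an assumption, not the algorithm of any particular switch.

```python
# Illustrative sketch of traffic distribution over an aggregated link.
# Switches typically hash on address pairs so that frames of the same
# conversation always take the same member port; the CRC-based hash
# below is an assumption for demonstration purposes.

import zlib

class AggregatedLink:
    def __init__(self, member_ports):
        self.member_ports = list(member_ports)   # e.g. four 100 Mbps ports
        self.failed = set()

    def mark_failed(self, port):
        """A failed member is simply skipped; the remaining links carry the load."""
        self.failed.add(port)

    def pick_port(self, src_addr, dst_addr):
        """Map a source/destination pair onto one of the live member ports."""
        live = [p for p in self.member_ports if p not in self.failed]
        if not live:
            raise RuntimeError("all member links are down")
        key = f"{src_addr}-{dst_addr}".encode()
        return live[zlib.crc32(key) % len(live)]


trunk = AggregatedLink(member_ports=[1, 2, 3, 4])
print(trunk.pick_port("00:11:22:33:44:55", "66:77:88:99:aa:bb"))
trunk.mark_failed(2)                      # traffic keeps flowing over ports 1, 3 and 4
print(trunk.pick_port("00:11:22:33:44:55", "66:77:88:99:aa:bb"))
```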
The method of configuring link aggregation differs from switch to switch. Some switches have DIP-switch settings that let you create aggregated links between specific ports, while more advanced ones let you define them through software.
Preventing broadcast storms
If you have some knowledge of networks, you must have come across the term network broadcast. A broadcast is a packet with no particular destination other than the network itself, so it is forwarded out of every port on a switch. A broadcast storm occurs when a switch gets so overwhelmed with broadcast traffic that it chokes the entire network, making it unusable. This is an issue specific to switched networks: if your network has multiple segments with redundant links between them, there is a chance of loops occurring, and loops are the biggest cause of broadcast storms.
This may sound a little complex, so we'll simplify it with the help of a simple diagram. Suppose there are two network segments connected to each other by two switches, switch 1 and switch 2. So there are two paths available between the segments, and a node on one segment can reach a node on the other through either switch.
Now suppose someone on segment 1 sends a broadcast packet. It arrives on port 1 of both switches. Let's see how each switch behaves. When switch 1 receives the broadcast on its port 1, it copies it out of its port 2, following standard bridging behavior. Similarly, switch 2 receives the packet on its port 1 and copies it out of its port 2. Now switch 2 hears the copy that switch 1 placed on segment 2, and switch 1 hears the copy that switch 2 placed there. Both switches forward these packets back out of their respective port 1. Notice what has happened: the single broadcast packet from segment 1 has turned into four broadcast packets. The packets keep looping between the segments, causing an exponential increase in broadcast traffic; a single broadcast packet has created a broadcast storm. Collisions start occurring on the network, bringing it to a standstill.
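The growth is easy to see in a small simulation. The Python sketch below models the two switches naively copying every broadcast they hear onto the other segment, with no loop prevention in place; the numbers are purely illustrative, but they show how a single broadcast multiplies with every round.

```python
# Toy simulation of a broadcast storm between two segments joined by two
# switches with no loop prevention. Each round, every broadcast sitting
# on one segment is copied onto the other segment by BOTH switches.

def simulate_storm(rounds):
    copies = 1                         # one broadcast packet starts on segment 1
    for r in range(1, rounds + 1):
        copies *= 2                    # both switches forward every broadcast they hear
        print(f"round {r}: {copies} copies circulating")

simulate_storm(10)
# round 1: 2 copies circulating
# ...
# round 10: 1024 copies circulating -- the storm grows exponentially
```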
The way out of this problem is the Spanning Tree Protocol (STP). STP logically blocks redundant links between switches, leaving a single active path between the two segments, as shown in the diagram. Switches have supported it for a long time now, and you can enable the protocol to prevent these broadcast storms from occurring.
You simply enable the protocol on the switch, and it does the rest. STP discovers all the switches on the network and elects a root bridge amongst them. Once this is done, the switches work out all the redundant paths on the network and determine the best ones. Data flows over the best paths, while the redundant links are kept blocked. If an active link fails, STP routes the traffic over a redundant link instead.
The switches exchange this information using frames called BPDUs (Bridge Protocol Data Units), which are not passed on to the rest of the network. Once the exchange is complete, STP has established the correct paths on your network and blocked all the redundant ones.
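For a feel of what STP is doing behind the scenes, here is a simplified Python sketch: it elects as root the switch with the lowest bridge ID, keeps the cheapest path from every other switch to the root, and blocks the remaining redundant links. A three-switch triangle is used so that the redundant link is easy to see. This is a stripped-down illustration of the idea, not the real protocol, which exchanges BPDUs and applies port costs, priorities, and timers.

```python
# Simplified illustration of spanning-tree path selection on a small
# switched network. Switches are identified by numeric bridge IDs; the
# lowest ID wins the root election, each switch keeps its cheapest path
# to the root, and every remaining link is blocked.

import heapq

def spanning_tree(links):
    """links maps (switch_a, switch_b) -> path cost.
    Returns the elected root, the active links, and the blocked links."""
    switches = {sw for pair in links for sw in pair}
    root = min(switches)                      # lowest bridge ID becomes the root

    best_cost = {root: 0}                     # cheapest known cost to the root
    via_link = {}                             # link each switch uses to reach the root
    queue = [(0, root)]
    while queue:
        cost, current = heapq.heappop(queue)
        if cost > best_cost.get(current, float("inf")):
            continue
        for (a, b), link_cost in links.items():
            if current in (a, b):
                other = b if current == a else a
                if cost + link_cost < best_cost.get(other, float("inf")):
                    best_cost[other] = cost + link_cost
                    via_link[other] = (a, b)
                    heapq.heappush(queue, (cost + link_cost, other))

    active = set(via_link.values())
    blocked = set(links) - active
    return root, active, blocked


# Three switches wired in a triangle, so one link is redundant.
links = {(1, 2): 19, (2, 3): 19, (1, 3): 19}
root, active, blocked = spanning_tree(links)
print(root)      # 1 -- the lowest bridge ID
print(active)    # {(1, 2), (1, 3)} keep forwarding
print(blocked)   # {(2, 3)} stays closed until an active link fails
```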
The case described here is very simple, involving only two switches and a single redundant path to block. When there are multiple switches, the problem becomes more complex.
Anil Chopra