
Trends to Watch Out for in Storage

PCQ Bureau

The explosive growth of data, combined with the need for better data protection, disaster recovery strategies, and high availability, means that newer technologies and strategies are required to tackle today's challenges. Here we take a look at a couple of key trends we see in the market today.


Storage Bridge Bay



The fundamental way to make a storage system highly available is to make every component of the system redundant. This includes processors, memory modules, drives (using RAID), network and other host connectivity ports, power supplies, fans and other components. Even so, the disk array controller (RAID controller) and the system's motherboard still constitute single points of failure.

Storage Bridge Bay (SBB) is a specification created by a non-profit working group that defines a mechanical/electrical interface between a passive backplane drive array and the electronic packages that give the array its 'personality,' thereby standardizing storage controller slots. A single chassis can host multiple controllers that can be hot-swapped. This ability to carry multiple controllers means the system is protected against controller failures as well, giving it truly high availability.

Availability is often expressed as a percentage of uptime. SBB-style dual-controller systems address the highest classes of data availability and are aimed at data centers with very stringent requirements such as 99.999% ('five nines') or 99.9999% ('six nines') uptime. Such solutions allow downtime of the order of half a minute to a few minutes per year.
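
As a rough illustration of how these figures translate into downtime budgets (a back-of-the-envelope sketch, not part of any SBB specification), the following converts an availability percentage into allowed downtime per year:

```python
# Illustrative arithmetic only: convert an availability percentage into
# an annual downtime budget.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # roughly 525,960 minutes

def annual_downtime_minutes(availability_percent: float) -> float:
    """Allowed downtime per year, in minutes, for a given availability."""
    return MINUTES_PER_YEAR * (1.0 - availability_percent / 100.0)

for availability in (99.9, 99.99, 99.999, 99.9999):
    print(f"{availability}% -> {annual_downtime_minutes(availability):.2f} min/year")

# 99.999%  ('five nines') allows about 5.3 minutes of downtime per year
# 99.9999% ('six nines')  allows about 0.5 minutes (roughly 32 seconds) per year
```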

One of the primary challenges with such a system is that it hosts two intelligent controllers within the same unit, sharing a common mid-plane and drive array. Since the drive array is shared, the two controllers must exercise a mutual exclusion policy on the drives to ensure that they do not modify the same data simultaneously, which would cause data corruption and inconsistencies. The RAID module on each controller must therefore be cluster-aware to avoid such collisions and to handle any conflicts that do arise. Further, each controller keeps its own cache of the metadata and data stored on these drives. The two caches must be kept synchronized so that one controller can resume the activities of the other upon its failure.
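
To make the mutual exclusion idea concrete, here is a minimal, hypothetical sketch of two controllers arbitrating exclusive ownership of a shared drive group before modifying it. The class and controller names are illustrative assumptions, and a local lock stands in for a real cluster-wide lock service:

```python
# Hypothetical sketch of mutual exclusion between two controllers sharing
# a drive array; names and structure are illustrative, not an SBB API.
import threading

class SharedDriveGroup:
    """A drive group that only one controller may modify at a time."""
    def __init__(self, name: str):
        self.name = name
        self._owner = None             # controller currently holding the group
        self._lock = threading.Lock()  # stands in for a cluster-wide lock service

    def acquire(self, controller_id: str) -> bool:
        """Try to take exclusive ownership; fail if the peer already holds it."""
        with self._lock:
            if self._owner in (None, controller_id):
                self._owner = controller_id
                return True
            return False

    def release(self, controller_id: str) -> None:
        with self._lock:
            if self._owner == controller_id:
                self._owner = None

group = SharedDriveGroup("raid-set-0")
assert group.acquire("controller-A")       # A gets exclusive access
assert not group.acquire("controller-B")   # B must wait or redirect the I/O
group.release("controller-A")
assert group.acquire("controller-B")       # ownership can now move to B
```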

To perform this clustered RAID communication and maintain cache coherency, the two controllers need a set of (preferably dedicated) communication channels. A combination of channels, such as a SAS fabric and an Ethernet connection, can be employed to ensure minimal performance impact and to provide redundancy in this communication layer as well. As with all dual-redundant 'intelligent' clusters, the loss of inter-node communication could cause the two controllers to lose cache coherency. Worse, once communication is lost, each controller could try to take over the operations of its peer, resulting in a split-brain scenario. To handle this, the two controllers also need to maintain a quorum using dedicated areas of the shared drive array, thereby avoiding conflicts and data corruption.
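
The quorum idea can be sketched as a race for an exclusive claim on a shared region. In the hypothetical example below, a file created atomically stands in for a reserved area of the shared drive array; whichever controller claims it first keeps serving I/O, and the loser fences itself off:

```python
# Hypothetical split-brain arbitration sketch: a file created with O_EXCL
# stands in for a reserved quorum region on the shared drive array.
import os, time

QUORUM_REGION = "quorum_region.bin"   # placeholder for the reserved on-disk area

def try_claim_quorum(controller_id: str) -> bool:
    """Atomically claim the quorum region; only one controller can succeed."""
    try:
        # O_CREAT | O_EXCL makes creation atomic: the loser sees it already claimed.
        fd = os.open(QUORUM_REGION, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.write(fd, f"{controller_id} {time.time()}".encode())
        os.close(fd)
        return True
    except FileExistsError:
        return False

print(try_claim_quorum("controller-A"))   # True: A wins the quorum, keeps serving I/O
print(try_claim_quorum("controller-B"))   # False: B loses and must fence itself off
```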

The key advantage of such a dual-controller setup is that it is almost fully redundant, with hot-swappable components. However, despite the redundant controllers, the mid-plane connecting them to the drive back-plane is still shared, making it a single point of failure.

Data deduplication



Data deduplication is a data reduction technique that works at either file or block granularity: it scans for blocks with identical content, retains only a single instance of each, and replaces the duplicates with references to that unique copy. Block-level deduplication is generally more effective and can be performed either inline with I/O to the storage or as a post-processing activity. Deduplication is typically a CPU-intensive operation that involves computing checksums and comparing checksums and/or data blocks, which is why post-processing deduplication is often preferred. Further, deduplication is done at specific block granularities: the finer the granularity, the better the achievable deduplication ratios, but finer granularities also mean more processing. A more intelligent methodology uses a sliding block that seeks out duplicates at naturally occurring internal boundaries rather than at fixed boundaries imposed by physical-layer constraints (such as the RAID stripe size or file system block size). A simple fixed-block approach is sketched below.

Deduplication finds a natural application in backups. Whether it is the backup of user files and folders or the periodic backup of enterprise data stored on servers and storage, backup data typically contains a lot of duplicates and is the most common candidate for deduplication. Data is backed up, archived, and preserved for a number of reasons, including legal compliance. For example, emails contain a lot of business-sensitive information and are expected to be archived and stored for a number of years to meet compliance requirements. Often, the same emails are stored in a number of user mailboxes; these data blocks are written once, rarely change, and over time are rarely accessed, making them ideal candidates for deduplication.
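
To illustrate the basic mechanism, here is a minimal sketch of fixed-block deduplication using content hashes. The block size and names are assumptions for illustration; real systems typically also verify candidate matches byte-for-byte to guard against hash collisions, and may use sliding-block chunking as described above:

```python
# Minimal fixed-block deduplication sketch: store each unique block once,
# keyed by its SHA-256 digest, and keep only references for duplicates.
import hashlib

BLOCK_SIZE = 4096  # illustrative granularity; finer blocks find more duplicates

def deduplicate(data: bytes):
    store = {}        # digest -> unique block contents
    references = []   # ordered digests that reconstruct the original data
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = block      # first time this content is seen
        references.append(digest)      # duplicates cost only a reference
    return store, references

data = b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE   # three identical blocks + one unique
store, refs = deduplicate(data)
print(len(refs), "blocks referenced,", len(store), "unique blocks stored")
# 4 blocks referenced, 2 unique blocks stored
```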

However, the current trend is to use deduplication in primary data storage systems as well. These systems have high performance demands placed on them, and the extra processing needed to find duplicates can overload them. Also, because of the nature of the data on primary systems, the data reduction ratios achieved are much lower than on backup systems. Further, the data on these systems is far more volatile than backup data and changes often, making it a less ideal target for expensive deduplication computations. Despite these drawbacks, data deduplication on primary storage is gaining interest in the storage industry because of the obvious advantages it brings: a lower capacity footprint, reduced power and cooling requirements, and greener data centers.

Summary

Data deduplication and SBB, along with a number of other key storage and server technologies such as virtualization, cloud computing and storage, solid state disks, and 10G Ethernet, to name a few, hold out the promise of more efficient storage, better protection and availability, and enhanced performance for the future.
