February 6, 2003

DAS, NAS, SAN and now storage virtualization: it's the new mantra in the large enterprise storage market. Welcome to a future where your IT department might configure highly-scalable storage pools to serve as your system's 10 TB (terabyte) 'virtual' hard disk, irrespective of the actual number, capacity and physical location of the individual storage devices in the pool. As a user, you would never know that the last file you saved was perhaps written across a dozen different devices from different vendors, sometimes even across different physical locations. Instead, what you see is data being saved to, say, a single 10 TB (or larger) disk attached to your system. In reality, that disk is made up of several smaller disks distributed over a network. More importantly, this distributed storage could be a mix of magnetic disks, tapes and even optical disks. 

One size fits all 
Storage virtualization is the abstraction of storage from the physical level to a logical level, using specialized virtualization software and hardware such as intelligent routers and servers. Though network-based storage evolved from simple RAID into NAS and SAN, with many successful solutions from major players like EMC, HP, Sun and IBM, storage virtualization has far higher potential. At its best, storage virtualization can pool every storage device an enterprise has, even those spread over different geographical locations (connected by a WAN), into a unified virtual storage disk, with higher levels of data integrity, redundancy and fault-tolerance thrown in for good measure. Storage virtualization does not, however, replace SAN. Rather, it is an evolution of SAN that complements and adds to SAN's efficiency and features on a larger scale, across different storage pools. Today, IBM's SAN solutions/hardware might not be compatible with EMC's, but storage virtualization, if realized fully, promises true compatibility and interoperability between solutions from different vendors.
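The core idea, abstracting one logical address space over many physical devices, can be sketched in a few lines of code. This is purely illustrative (no real vendor API works this way); the device names and block-based mapping are assumptions made up for the example.

```python
# Illustrative sketch of storage virtualization: one contiguous
# logical address space mapped onto a pool of dissimilar devices.
# Device names and capacities below are hypothetical.

class VirtualVolume:
    """Presents a single 'virtual disk' over many physical devices."""

    def __init__(self, devices):
        # devices: list of (device_name, capacity_in_blocks) tuples
        self.devices = devices

    def locate(self, logical_block):
        """Translate a logical block number to (device_name, physical_block)."""
        for name, capacity in self.devices:
            if logical_block < capacity:
                return name, logical_block
            logical_block -= capacity  # skip past this device's range
        raise ValueError("logical block beyond pool capacity")

# A pool mixing devices from different (hypothetical) vendors
pool = VirtualVolume([("emc_array", 1000), ("ibm_tape", 500), ("hp_disk", 2000)])
print(pool.locate(1200))  # ('ibm_tape', 200) -- the block lives on the second device
```

The user (or application) only ever sees logical block numbers on one big disk; the mapping layer decides which physical device actually holds the data, which is exactly the transparency the article describes.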

The concept
Large IT-enabled businesses sometimes run as many as four or five OSs, each responsible for one critical part of the business. Often these OSs have unique requirements for writing data to, and reading it from, storage. A major stumbling block of storage solutions today, including SAN and NAS systems, is that these OSs and the applications running on them use old rules to figure out where to store data. The software often identifies precise storage locations using a combination of network identity and hierarchical path. Since different OSs use different rules and processes, data written by one OS may not be readable by another, especially when files are split across different devices. 

SAN solutions made life easier for network administrators by simplifying the connection and management of storage devices, but at the cost of expensive proprietary storage. Storage virtualization tries to deliver the same benefits without necessarily being as costly to implement, and it gives you the option of adding cheaper storage devices. The single-attached-disk concept also lets administrators implement all-encompassing data storage policies and rules without the headache of a hundred configurations based on devices, OSs and proprietary systems. 

The concept of storage virtualization expects the software to be intelligent enough to carry out the necessary configurations without major human intervention. It is also hoped that the software will utilize all available storage efficiently and scale without major shutdowns. This also means that data will be mirrored for protection and stored in locations from where it can be accessed quickly. Expensive features like remote copy, mirroring, snapshots and LAN-free backup, now available only with high-end storage, might become less expensive for large enterprise users. Security, too, will be comparable to that offered by SAN solutions today. Keep in mind, though, that all these advantages will only be as good as your network. 
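One of those features, mirroring, is easy to see in miniature: the virtualization layer duplicates every write so the application never has to know a second copy exists. The sketch below is a toy model under assumed names (dictionaries standing in for devices), not any product's actual mechanism.

```python
# Toy model of transparent mirroring in a virtualization layer:
# the application issues one write; the layer fans it out to a
# primary device and a replica (both simulated here as dicts).

class MirroredStore:
    def __init__(self):
        self.primary = {}   # stands in for the primary device
        self.replica = {}   # stands in for the remote copy

    def write(self, block, data):
        # One write from the application becomes two physical writes.
        self.primary[block] = data
        self.replica[block] = data

    def read(self, block):
        # Serve from the replica if the primary has lost the block.
        return self.primary.get(block, self.replica.get(block))

store = MirroredStore()
store.write(42, b"payroll record")
del store.primary[42]          # simulate a failure on the primary device
print(store.read(42))          # b'payroll record' -- served from the replica
```

Because the duplication happens below the logical-disk abstraction, the same trick generalizes to remote copy and snapshots: the layer redirects or duplicates I/O without any change to the applications above it.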

How it works 
Vendors have taken three common approaches.

Server-based virtualization. Here the virtualization software resides on the application server (host) and causes the host’s OS to simulate direct communication with the storage device. 

Storage-based virtualization. The virtualization software resides on the storage devices themselves. Quite similar to server-based virtualization, it works best in uniform environments.

Network-centric virtualization. Here an intelligent device (like a router) on the network takes care of the virtualization function.

The players
HP, EMC, Sun, Network Appliance, IBM and Hitachi are some of the major players in this arena who offer storage virtualization solutions. DataCore, FalconStor, TrueSAN and LeftHand Networks are some of the other players who offer virtualization software. 

The best-case scenario here is only a roadmap of what is expected to happen in the enterprise-storage market over the next couple of years, with several milestones, including the adoption of industry-wide standards, yet to be reached. As of now, most of the benefits have yet to be translated from concept to reality, and some of the advantages might seem a tad over-hyped too.

For now, individual vendors promote their respective products even as everyone fights for a share of a rapidly growing storage market. For example, EMC, which has a strong presence in the enterprise storage hardware market, markets storage-based software, while HP has solutions at all three levels: OpenView Storage Allocator, Virtual Array and SANlink. IBM promotes its concept, nicknamed Storage Tank, as well as the storage- and server-based technology from the start-up DataCore. Sun uses Vicom's Storage Virtualization Engine on its StorEdge 6900 systems. There are quite a few solutions available from others too.

Complete virtualization and true inter-vendor operability will depend on the standardization of the different protocols involved, which will, in turn, depend on the maturing of the market. But initial moves towards standardization have already been made by the SNIA (Storage Networking Industry Association). The Bluefin initiative (now named SMI-S, the Storage Management Initiative Specification), created by a consortium of leading vendors, and the SNIA's Storage Management Initiative are the right moves in this direction. The ball is now in the vendors' court.

But if you run the IT affairs of a large enterprise, do keep an eye on developments in this area over the next couple of years.

Benoy George Thomas
