
The Best Interconnect for your SAN

PCQ Bureau

In most data centers today you'll still find Ethernet as the de facto interconnect fabric for servers and storage. Familiarity could be one reason why architects settle for this fabric; another could be the higher cost of next-generation fabrics. Though Fibre Channel has seen increased adoption in storage area network (SAN) environments, 10 Gigabit Ethernet has yet to become mainstream. Apart from these two interconnect technologies, there is InfiniBand, which is gathering increasing interest among organizations working in high-performance computing environments because of its potential to deliver higher speeds with low latency.


Enterprise data centers host business-critical applications like ERP or CRM, which has resulted in growing data volumes as well as greater demands on reliability. This has led to the formation of clusters that operate in parallel to serve an application. Maintaining data transfer speeds between cluster servers calls for a fast, reliable interconnect, and InfiniBand serves that need well. The same qualities make InfiniBand an ideal fit for SANs in the data center; besides its use in HPC environments, organizations that have already deployed InfiniBand will know its benefits.

This time at our labs we received the Tyrone Opslag FS2, a unified storage solution that supports 10 Gig Ethernet, Fibre Channel and InfiniBand as host interfaces. This gave us the opportunity to benchmark the performance of all three interconnects. Before delving into the benchmarks, let's first see what InfiniBand is and where it is applicable.


Benefits of InfiniBand



InfiniBand is a fabric communication link used in enterprise data centers and high-performance computing. Like other modern interconnects, say SATA or Fibre Channel, InfiniBand offers a point-to-point, bi-directional serial link for connecting processors with high-speed peripherals such as disks. It features high throughput and low latency, and is designed to be scalable. Data is transmitted over InfiniBand in packets, and packet delivery is handled in hardware rather than software. By using credit-based flow control and bandwidth monitoring, InfiniBand delivers packets between sending and receiving nodes losslessly.

If we compare InfiniBand with Ethernet as an interconnect medium, the difference is that InfiniBand is focused on providing a highly reliable connection over a short distance, whereas Ethernet with TCP/IP is intended for undefined distances over any medium. TCP/IP provides the robustness to work under almost any condition, but that robustness adds overhead. With InfiniBand the overhead is minimal, as its stack is optimized for Remote Direct Memory Access (RDMA). RDMA is simply direct memory access from one computer into the memory of another, without involving either server's operating system in the read/write path. The result is a very high throughput, low latency interconnect, which is particularly essential in parallel compute cluster environments.
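
To put that overhead in concrete terms, the sketch below times an ordinary TCP round trip, the kernel-mediated path that RDMA is designed to bypass. It is an illustrative Python microbenchmark, not an RDMA example; the host, port and payload size are placeholders, and it assumes an echo server is already listening at that address.

import socket
import time

def tcp_round_trip(host: str = "127.0.0.1", port: int = 9000,
                   payload: bytes = b"x" * 4096, iterations: int = 1000) -> float:
    """Average round-trip time in microseconds over a TCP echo server (placeholder address)."""
    with socket.create_connection((host, port)) as sock:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        start = time.perf_counter()
        for _ in range(iterations):
            sock.sendall(payload)            # data is copied through the kernel send path
            received = 0
            while received < len(payload):   # wait for the echoed copy to come back
                received += len(sock.recv(65536))
        elapsed = time.perf_counter() - start
    return (elapsed / iterations) * 1e6

if __name__ == "__main__":
    print(f"avg TCP round trip: {tcp_round_trip():.1f} us")

Every send and receive here crosses the operating system's network stack; RDMA avoids exactly those copies and context switches, which is where its latency advantage comes from.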

Thus the InfiniBand architecture simplifies communication between servers and SANs in a data center, and it leaves room to scale while supporting quality of service and failover.


Benchmarking Setup



We used the Tyrone Opslag FS2 as a SAN (block-level sharing) connected to a server with 4x Intel Xeon quad-core processors, 32GB RAM and Windows Server 2008 R2. As the interconnect we used InfiniBand, FC and 10Gig Ethernet, one at a time, and measured the throughput with IOMeter.

For the IOMeter tests, we used a 512KB block size as standard, with 100% sequential read and 100% sequential write workloads, to measure the throughput of each interconnect. As a prerequisite we installed the necessary drivers for each network interface on the server. On the SAN we configured a 500GB block target, which served as the central storage for the server during the benchmark. We also connected the SAN to the local network so that its console could be managed remotely over the web interface.
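
For readers who want to approximate the same workload profile without IOMeter, a large sequential transfer against the SAN-backed volume can be scripted. The sketch below is a rough Python stand-in, not the IOMeter tool itself; the target path is a placeholder, and the test file should be made large enough (or caches dropped) so that the read pass is not served from memory.

import os
import time

TARGET = r"E:\seq_test.tmp"   # placeholder: a path on the SAN-backed drive
BLOCK = 512 * 1024            # 512KB blocks, matching the article's test
BLOCKS = 8192                 # 4GB total; increase to defeat caching

def sequential_write(path: str) -> float:
    """Write BLOCKS blocks sequentially and return MBps."""
    buf = os.urandom(BLOCK)
    start = time.perf_counter()
    with open(path, "wb", buffering=0) as f:
        for _ in range(BLOCKS):
            f.write(buf)
        os.fsync(f.fileno())  # make sure data actually reaches the target
    return (BLOCK * BLOCKS / (1024 ** 2)) / (time.perf_counter() - start)

def sequential_read(path: str) -> float:
    """Read the file back sequentially and return MBps."""
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while f.read(BLOCK):
            pass
    return (BLOCK * BLOCKS / (1024 ** 2)) / (time.perf_counter() - start)

if __name__ == "__main__":
    print(f"sequential write: {sequential_write(TARGET):.0f} MBps")
    print(f"sequential read:  {sequential_read(TARGET):.0f} MBps")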


First we set up InfiniBand: we installed a Mellanox 4X DDR InfiniBand card in our main server. Connecting over the LAN to the Opslag FS2 console, we enabled InfiniBand under Advanced Setup > InfiniBand Settings in the menu. Once enabled, we turned on SRP from the same menu and assigned the 500GB block volume created earlier to be served over SRP. With that done, and InfiniBand connected to the main server, a new disk appeared under the Disk Management console. We could now initialize, format and use this block device from the SAN as a local drive on the server.

To measure the throughput we ran IOMeter on the newly added drive with the test configuration mentioned earlier. For sequential read we got a sustained throughput of 1570 MBps, while for sequential write we got 1092 MBps.

Similarly, we set up the 4G FC and 10GigE interconnects and repeated the tests. The FC setup was straightforward through the web console, but for 10GigE you also need to configure iSCSI, assigning an initiator and a target, since Ethernet needs iSCSI to carry SCSI traffic (unlike FC and SRP, which transport it natively). The speed we got for 4G Fibre Channel was 390 MBps for both read and write, while for 10GigE over iSCSI we got 590 MBps for sequential read and 589 MBps for sequential write.
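
Putting the measured figures side by side makes the gap easier to see; the snippet below simply computes the ratios against the 4G FC baseline, using only the numbers reported above.

# Measured sequential throughputs from the tests above, compared against 4G FC.
results_mbps = {
    "InfiniBand 4X DDR": {"read": 1570, "write": 1092},
    "4G Fibre Channel":  {"read": 390,  "write": 390},
    "10GigE (iSCSI)":    {"read": 590,  "write": 589},
}

baseline = results_mbps["4G Fibre Channel"]
for name, r in results_mbps.items():
    print(f"{name:18s} read {r['read']:5d} MBps ({r['read'] / baseline['read']:.1f}x FC), "
          f"write {r['write']:5d} MBps ({r['write'] / baseline['write']:.1f}x FC)")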


The InfiniBand card is also cheaper than the FC and 10GigE cards; the price of each card is mentioned in the box. Having such a foundation would also be helpful in moving towards cloud computing, as scalability is already enabled through InfiniBand.

SAN as store for Virtual Deployments



As more and more enterprise workload moves into virtual environments, it becomes a challenge for the existing fabric to meet the requirements of increased bandwidth and low latency. Since InfiniBand can cater to this demand, we tested its performance by running a virtual machine on the main server while its virtual disk resided on the central SAN, connected over InfiniBand.

Tyrone Opslag FS2

This is a unified storage solution for enterprises which provides both NAS and block sharing from a single box. For NAS or file-level sharing it supports SMB, AFP, FTP and NFS, whereas for block-level sharing it supports iSCSI, FC and SRP (over InfiniBand). You can find more information about the product at http://tyronesystems.com. The USP of this unified storage box is that organizations looking for a high-performance storage solution can use any interface, as it supports all three: InfiniBand, Fibre Channel as well as 10 Gig Ethernet.



The device is managed through an intuitive web-based GUI for configuration and monitoring. It also supports all RAID types, and the 4U box we received can hold up to 24 hot-swap SAS/SATA hard drives.

The model we received was the FS2-D4A3-24, priced at Rs 5,23,109 (including 12x 1TB 7200 rpm SATA HDDs), with the following specs: 1x Intel Xeon 2GHz quad-core processor, 8GB DDR2 RAM and 4x Gigabit Ethernet. To make sure the performance we measured was not held back by hard disk throughput, however, we were sent a box with 20x 250GB HDDs. The interconnect cards that we used for the benchmarking, and their costs: InfiniBand 4X DDR (Rs 29,500), 10Gig Ethernet (Rs 31,500) and Fibre Channel 4Gbps single port (Rs 35,500).
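
As a rough cost-effectiveness check, the snippet below divides each card's price by the sequential read throughput measured in our tests; it covers only the card itself, not switches, cables or the storage box.

# Card price (from the box above) per MBps of measured sequential read throughput.
cards = {
    "InfiniBand 4X DDR":      {"price_rs": 29500, "read_mbps": 1570},
    "10Gig Ethernet (iSCSI)": {"price_rs": 31500, "read_mbps": 590},
    "4G Fibre Channel":       {"price_rs": 35500, "read_mbps": 390},
}

for name, c in cards.items():
    print(f"{name:24s} Rs {c['price_rs']:>6,} -> "
          f"Rs {c['price_rs'] / c['read_mbps']:.0f} per MBps of sequential read")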


One can argue: why not simply use a local datastore, with the virtual machines' hard disks residing on the host machine's local disks? The answer is simple. The first reason is data redundancy and scalability, which can only be achieved with a SAN at the back end; the second is throughput. A single server cannot match the throughput of a SAN, as it is limited in the number of hard disks it can hold locally, and with RAID the overall throughput is roughly proportional to the number of hard disks in the RAID volume.
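
To make that scaling tangible, here is a back-of-the-envelope sketch assuming ideal RAID 0 striping and a hypothetical per-disk sequential rate; real arrays will fall short of this once controller, parity and interconnect limits kick in.

# Idealized RAID-0 scaling estimate (hypothetical per-disk figure; ignores
# controller, parity and interconnect bottlenecks).
PER_DISK_MBPS = 80  # assumed sequential throughput of a single SATA disk

def raid0_estimate(disks: int, per_disk_mbps: float = PER_DISK_MBPS) -> float:
    """Ideal aggregate sequential throughput when I/O stripes across all disks."""
    return disks * per_disk_mbps

for n in (4, 12, 20, 24):
    print(f"{n:2d} disks -> ~{raid0_estimate(n):.0f} MBps (before other bottlenecks)")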

The third and most important reason is functionality like vMotion, which has become crucial for today's complex virtualization setups. For vMotion or any equivalent live-migration exercise, a shared SAN (NFS or block) at the back end, reachable by all the virtual hosts on the network, is a must.

So, to test this out, we installed VMware ESX Server on the server and created a few virtual machines on it. None of these virtual machines used the server's local storage as a datastore; instead they used the SAN's storage, exported as block volumes. With each virtual machine running an instance of Windows Server 2008 R2, we ran the IOMeter test from every instance against the target drive, i.e. the virtual drive residing on the SAN. Through this clustered IOMeter test we got 1530 MBps of throughput for the sequential read and write tests, close to the 1570 MBps we saw from the physical host, suggesting the InfiniBand link rather than the hypervisor was the limiting factor.
