With the Windows Server 2012 Release Candidate in hand and a sufficient number of rack servers and storage in our test lab, we set out to identify features that could simplify building a private cloud. But before that, we need a clear picture of the necessary cloud layers. If you are planning to build your own cloud using Microsoft components, then besides the hardware (which in most cases becomes your performance and expansion limiting factor), you need the following three layers: Infrastructure, Platform, and Management. The platform layer consists of the Microsoft hypervisor, Hyper-V. The topmost layer, Management, consists of Microsoft System Center, whose components automate resource management and draw the line between virtualization and cloud.
This layer consists of hardware glued together with a Windows Server failover cluster. Whenever we talk about cloud, we talk about pooled resources that can be sliced or stretched and handed out to applications, along with disaster-resistant applications. The technology from Microsoft that makes this possible is failover clustering, and Server 2012 comes with substantial improvements in this area. In this article, we set up a sample infrastructure layer that could be used to build a private cloud. But before we do that, let's talk about the features that make Server 2012 a better option than its predecessor or other solutions available in the market.
While the older version of Windows Server (2008 R2) allowed up to 16 nodes per failover cluster, the latest version allows up to 64. The number of VMs per failover cluster has also gone up, from 1,000 in Server 2008 R2 to 4,000 in Windows Server 2012. So, 64 nodes with 4,000 VMs means that one can run a complete cloud on it, no matter how big the organization. All this has been made possible by improvements in manageability and the storage model.
The improvement is not only in the amount of resources supported but also in ease of management when you use Server 2012 to manage failover clusters. You get all the resources associated with a cluster in the 'Server Manager' window, which means that even without opening the Failover Cluster Manager, one can get a view of the resources in the cluster.
Yet another improvement is PowerShell support for managing a failover cluster. With 81 cmdlets, one can use scripting to automate management tasks like adding or tearing down nodes.
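As a quick illustration, the cmdlets below come from the FailoverClusters module that ships with the feature; the node name 'node3' is a placeholder for this sketch:

```powershell
# Load the failover clustering cmdlets (auto-loaded on Server 2012
# once the Failover Clustering feature is installed)
Import-Module FailoverClusters

# Count the clustering cmdlets available in the module
(Get-Command -Module FailoverClusters).Count

# Add a new node to the current cluster ('node3' is a placeholder name)
Add-ClusterNode -Name "node3"

# Evict (tear down) the same node when it is no longer needed
Remove-ClusterNode -Name "node3"
```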
The Failover Cluster Manager itself has also been improved, specifically in how clustered roles are created and managed.
Setting up a Failover Cluster
In our sample setup, we used the following components:
1. Two hardware servers running Windows Server 2012
2. A dedicated domain controller
3. A storage server
All of the above were connected over a Gigabit Ethernet network. The configuration of the nodes purely depends on the applications you plan to run on them. Our nodes had 128 GB RAM and 16 cores each. For storage, we used IBM's Storwize V7000 with an iSCSI connection between the nodes and the storage server.
We started our setup by creating a domain named 'cloudpcq.com' on a dedicated domain controller machine and then created a super user on it. Next, we installed Windows Server 2012 on both nodes, added them to the domain, and logged in using the super user credentials. To learn how to install Windows Server 2012, you can refer to our article in the Sep 2012 issue or read it online at bit.ly/REY0K8.
The interface of Windows Server 2012 is quite different from its predecessors, but once you get used to it, navigation is quite simple. The Server Manager, as mentioned earlier, is quite intuitive to work with. Its interface not only provides most of the information you need but also acts as a starting point for tasks you wish to carry out on your server. One can also add other nodes to create a central management point. There are multiple ways to add roles and features: simply click on the 'Manage' tab at the top and then click on 'Add Roles and Features'. In our sample setup, we added the 'Failover Clustering' feature on both nodes.
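If you prefer scripting over the wizard, the same feature can be installed with a single cmdlet; the remote node name 'node2' below is a placeholder:

```powershell
# Install the Failover Clustering feature along with its management tools
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

# The same can be done on the second node remotely ('node2' is a placeholder)
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName "node2"
```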
After installing the Failover Clustering feature, open Server Manager and go to 'Tools > Failover Cluster Manager'. This opens the interface from where one can create and manage the cluster and clustered roles. In this implementation, we first create a cluster named 'cloudpcqc' and then add a clustered file server role named 'fs' on top of it.
To create a cluster, click on 'Action > Create Cluster'. This opens a wizard that not only creates the cluster but first validates whether the cluster can be created at all. One new feature of failover clustering in Server 2012 is that validation can be controlled: you can select the parameters to be validated and skip those for components that will be added later. For instance, you may be ready with nodes but not with storage for the cluster. In Server 2012, you can uncheck validation of storage, create the cluster, and then add storage to it. In our setup, we added both nodes to the 'cloudpcqc' cluster.
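The validation and creation steps can also be scripted; the node names and cluster IP below are placeholders, and ignoring the storage tests mirrors unchecking them in the wizard:

```powershell
# Validate everything except storage, which we will attach later
Test-Cluster -Node "node1","node2" -Ignore "Storage"

# Create the cluster from the validated nodes (name is from our setup;
# the static IP address is a placeholder)
New-Cluster -Name "cloudpcqc" -Node "node1","node2" -StaticAddress "192.168.1.50"
```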
The next step is to add a common storage disk to the cluster. But before we can accomplish this, we need to create a storage volume on the storage server and connect that volume to a node (in our case, using the iSCSI protocol). To add storage, simply right-click on 'Disks' under the 'Storage' header in the Failover Cluster Manager interface and click on 'Add Disk' to add an available disk to the cluster.
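The equivalent steps from PowerShell look roughly as follows, assuming the built-in iSCSI initiator cmdlets; the storage server's portal address is a placeholder:

```powershell
# Point the iSCSI initiator at the storage server (address is a placeholder)
New-IscsiTargetPortal -TargetPortalAddress "192.168.1.100"

# Connect to the discovered target so the LUN appears as a local disk
Get-IscsiTarget | Connect-IscsiTarget

# Add every disk that is visible to all cluster nodes into the cluster
Get-ClusterAvailableDisk | Add-ClusterDisk
```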
The final step in our sample setup is to create a clustered role, again from the Failover Cluster Manager interface. Right-click on 'Roles' and add the required role. As mentioned earlier, to test the failover cluster we added a 'File Server' role named 'fs' to the cluster.
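Scripted, the same role can be created in one line; the cluster disk name and static IP address below are placeholders:

```powershell
# Create a clustered file server role named 'fs' on the shared disk
# (disk name and IP address are placeholders for this sketch)
Add-ClusterFileServerRole -Name "fs" -Storage "Cluster Disk 1" -StaticAddress "192.168.1.51"
```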
We are now ready with the infrastructure layer of a Microsoft private cloud, but before you begin testing, there is one more critical configuration task to be taken care of. When we create a failover cluster, the 'Quorum Setting' determines the number of failures the cluster can sustain. If an additional failure occurs, the cluster must stop running. The relevant failures in this context are failures of nodes or, in some cases, of a disk witness (which contains a copy of the cluster configuration) or a file share witness. In our test setup, we changed the setting from 'Node Majority' to 'Node and File Share Majority'. One can access the quorum setting by right-clicking on the cluster and going to 'More Actions > Configure Cluster Quorum Setting'.
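The same quorum change can be made from PowerShell; the witness share path below is a placeholder for a share hosted outside the cluster (for example, on the domain controller):

```powershell
# Switch the quorum model to Node and File Share Majority
# (the witness share path is a placeholder)
Set-ClusterQuorum -NodeAndFileShareMajority "\\dc1\ClusterWitness"

# Verify the new quorum configuration
Get-ClusterQuorum
```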
Testing the Failover Cluster
The effectiveness of a failover cluster can be measured by how it handles the failure of nodes. To test this, we moved the 'fs' role from one node to another while continuously pinging 'fs' from a third machine. The third machine was able to keep pinging 'fs' even while the role was actually being moved, with only 2-3 dropped packets observed. This shows that if your critical server runs on a failover cluster, it remains available to users even in the event of node failures.
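The move itself can also be triggered from PowerShell while the ping runs on the third machine; the destination node name is a placeholder:

```powershell
# Move the clustered role to the other node; clients should see
# only a brief interruption ('node2' is a placeholder name)
Move-ClusterGroup -Name "fs" -Node "node2"

# Check which node now owns the role and whether it is online
Get-ClusterGroup -Name "fs"
```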