March 1, 2005

High-performance computing can be achieved using a cluster of machines instead of a single powerful machine. Setting up a cluster is not for the newbie, though; this article assumes you have more than a passing familiarity with Linux. For an in-depth understanding of clusters, the concept of a cluster server and nodes, high-performance computing and the OSCAR package, refer to our May 2002 issue.

PCQLinux 2005 uses OSCAR to set up a high-performance cluster. OSCAR was first bundled with PCQLinux 8 (given out in March 2003) but didn't make it into PCQLinux 2004. OSCAR simplifies setting up a cluster and includes utilities and libraries for cluster control and high-performance cluster programming. The supercomputer that we are about to build consists of a single server and a number of nodes. We deploy programs on the server, which in turn distributes their processing amongst the nodes in the cluster.

Getting started

Plug the server and nodes into a separate network, using a hub or preferably a switch. Boot the server off PCQLinux 2005 CD1. When the ‘Install Type’ screen is shown, select Advanced Installation>Super Computing. Subsequently, when the network configuration screen appears, specify a hostname manually, and assign an IP address and corresponding netmask. By default, the supercomputing install type installs only the required packages. However, you can select additional packages to install on the server from the package selection screen, if needed.

Preparing to run OSCAR

Once PCQLinux 2005 is installed on the server, start X Window and launch a terminal window (select System Tools>Terminal from the start menu). All subsequent commands must be issued here.

The graphical way of setting up a Supercomputer with PCQLinux 2005

OSCAR requires PCQLinux RPMS to be present in a directory called /tftpboot/rpm.
Create this directory and copy into it all the RPM files found in the PCQLinux/RPMS directory on PCQLinux CDs 1, 2 and 3. You also need to download the packages named mysql-server-3.23.58-13.i386.rpm and mysql-3.23.58-13.i386.rpm from
and place them in the /tftpboot/rpm directory. Now, change to the directory /opt/oscar and issue:
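The copying step above can be sketched as a short shell script. The CD mount points below are assumptions (adjust them to wherever your CDs or ISO images are actually mounted), and the script must be run as root since it writes under /tftpboot:

```shell
# Stage the PCQLinux RPMs where OSCAR expects to find them
DEST=/tftpboot/rpm
mkdir -p "$DEST"

# Assumed mount points for the three CDs -- change as needed
for cd in /mnt/cdrom1 /mnt/cdrom2 /mnt/cdrom3; do
    # Copy only if this CD is mounted and carries the RPMS directory
    if [ -d "$cd/PCQLinux/RPMS" ]; then
        cp "$cd/PCQLinux/RPMS"/*.rpm "$DEST"/
    fi
done

# The two MySQL packages downloaded separately go into the same directory:
#   cp mysql-server-3.23.58-13.i386.rpm mysql-3.23.58-13.i386.rpm "$DEST"/
echo "Staged $(ls "$DEST" | wc -l) RPMs in $DEST"
```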

# ./install_cluster eth0

After a couple of minutes, the OSCAR graphical wizard will show up. Click on ‘Select OSCAR Packages to install’. On the window that pops up, click on Exit.

Click on the button ‘Configure Selected OSCAR packages’ and then on Done. After each step, a message will pop up, reporting the success (or failure) of the step. Click on ‘Install OSCAR Server Packages’ and then on ‘Build OSCAR Client Image’.

In case you have SCSI hard disks on the nodes, click on ‘Choose Partition File’ and select the file named sample.disk.scsi. Click on the ‘Build Image’ button, and then on the ‘Define OSCAR clients’ button. For ‘Number of hosts’, type in the number of nodes that you want to plug into the cluster. Subsequently, click on the ‘Add clients’ button.

Set up the nodes

Click on the button ‘Setup Networking’. In the right frame, you will see a tree-like structure. We need to assign the MAC (Media Access Control) addresses of the nodes to the listed IP addresses, which can be done by booting the nodes using an auto-install floppy. To create the floppy, click on the button ‘Build AutoInstall Floppy’. This will launch a terminal window. Press Enter, insert a blank floppy in the server and type ‘y’ to continue. After the terminal window disappears, click on the button ‘Collect MAC addresses’ in the OSCAR window. Insert the floppy in one of the node machines and power it on. The machine will boot from the floppy. Press Enter at the boot: prompt. After some time, the MAC address of the node will show up in the left frame. Say we want to assign the IP address to this node: click on the MAC address in the left frame and on the ‘’ in the right frame. Then click on ‘Assign MAC to node’.
Switch off the node machine. Now boot the second node machine from the same floppy. As before, the MAC address of the second node will appear in the left frame. Assign it to

Repeat the above process for other nodes. When done, click on the button ‘Stop collecting’ on the OSCAR window.

Once you have shut down all the node machines, click on the button ‘Configure DHCP Server’. Then click on the close button in the ‘MAC address collection’ window.
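When you click ‘Configure DHCP Server’, OSCAR writes the node reservations it has collected into the server's DHCP configuration. The fragment below is only an illustration of what such entries look like in dhcpd.conf, using the network addresses mentioned in this article; the host names and MAC addresses are placeholders, and OSCAR generates the real entries for you.

```
# Illustrative dhcpd.conf fragment -- OSCAR generates the real one
subnet netmask {
    host oscarnode1 {
        hardware ethernet 00:00:00:00:00:01;  # placeholder MAC
        fixed-address;
    }
    host oscarnode2 {
        hardware ethernet 00:00:00:00:00:02;  # placeholder MAC
        fixed-address;
    }
}
```

Each node's MAC address is pinned to a fixed IP, which is why the nodes always come up with the addresses you assigned in the wizard.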

Starting node install

Important note: The following steps will wipe out any existing data on the hard disk of the node.

Boot the first node machine again from the floppy. The node will now install PCQLinux 2005 from the network. When this installation is done, a message, ‘I have done for … seconds. Reboot me already’ will be shown.

Take the floppy out and reboot the node machine. This time it should boot from the hard disk. If everything has gone well, you will boot into PCQLinux 2005. Repeat the process for the other nodes. Click on ‘Complete Cluster Setup’ on the server and then on ‘Test Cluster Setup’. All tests should succeed.

Adding and deleting nodes

You can add another node using the OSCAR wizard. Relaunch this wizard as follows:

# cd /opt/oscar
# ./install_cluster eth0

from the terminal window. Click on ‘Add OSCAR Clients’ and then on ‘Define OSCAR Clients’ (on the window that pops up). Here, modify the starting number to one more than the current number of nodes in the cluster. That is, if your cluster already has 10 nodes, type in 11 for the starting number. Similarly, modify the starting IP address. For ‘Number of hosts’, type in the number of new nodes. Click on ‘Add Clients’. The ‘Setup Networking’ step is the same as explained above. Finally, click on the ‘Complete Cluster Setup’ button.

To delete one or more nodes, click on ‘Delete OSCAR Clients’. In the pop-up window, select one of the nodes to delete and click on ‘Delete Clients’. Repeat the process to delete more nodes.

Henceforth, using the libraries installed on the cluster, you can start developing or executing cluster-aware applications on the server. To get you started, OSCAR installs PVM (Parallel Virtual Machine), PBS (Portable Batch System), the Maui PBS scheduler, LAM/MPI (an implementation of the Message Passing Interface) and C3 (Cluster Command and Control). For more information refer to
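As a first taste of the batch system, you can hand PBS a trivial job. The script below is a minimal sketch of a PBS job file; the job name, node count and commands are placeholders for your own cluster-aware program:

```shell
# Write a minimal PBS job script (name and node count are placeholders)
cat > hello.pbs <<'EOF'
#!/bin/sh
#PBS -N hello-job
#PBS -l nodes=2
cd $PBS_O_WORKDIR
echo "Running on nodes:"
cat $PBS_NODEFILE
EOF

# On the cluster server, submit it with:
#   qsub hello.pbs
# and watch its progress in the queue with:
#   qstat
```

PBS copies the job to a free node (or nodes), runs it, and returns the output in files named after the job, so you can queue up many such jobs and let the Maui scheduler spread them across the cluster.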
