Ubuntu_Quick_Start


xCAT installation using Ubuntu

This is a copy of the iDataplex Quick Start, adapted for Ubuntu, and is under construction.

This document is specific to deploying Ubuntu as a stateful (diskful) installation.

This configuration will have a single dx360 Management Node with 2 other dx360 servers as nodes. The OS deployed will be Ubuntu 12.04.1 LTS (Precise), amd64 Server edition.

Prepare for xCAT installation

The xCAT install process scans the running configuration and populates certain settings from it, so having the networks configured ahead of time helps produce a correct configuration. All networks in the cluster must be defined in the xCAT networks table before you start installing cluster nodes. When xCAT is installed on the Management Node, it runs makenetworks to create an entry in the networks table for each of the networks the management node is on. Additional networks can be added to the xCAT networks table manually later if needed.
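If makenetworks misses a network, or the cluster uses a network the management node is not directly attached to, an entry can be added by hand. A sketch, assuming illustrative values for the object name, subnet, gateway, and interface:

```shell
# Define an additional cluster network in the xCAT networks table.
# "clusternet", the 192.168.52.0/24 subnet, and eth1 are illustrative.
mkdef -t network -o clusternet net=192.168.52.0 mask=255.255.255.0 \
    gateway=192.168.52.1 mgtifname=eth1
```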

Install the Management Node OS

Install Ubuntu on the Management Node (MN). If the Management Node cannot access the network, ensure that isc-dhcp-server, apache2, nfs-kernel-server, nmap, bind9, expect, xinetd, tftpd-hpa, tftp-hpa, libnet-telnet-perl, syslinux, libsys-virt-perl and perl-XML-Parser are installed. It is recommended that the Management Node have access to the internet and the external repositories; in that case these packages will be installed automatically later.
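If the MN will be offline, the packages above can be pre-installed in one apt-get call. A sketch, assuming Ubuntu 12.04 package names and that perl-XML-Parser corresponds to Ubuntu's libxml-parser-perl package:

```shell
# Pre-install xCAT's prerequisite packages on an offline Management Node.
# Package names assume Ubuntu 12.04; libxml-parser-perl is assumed to be
# the Ubuntu packaging of perl-XML-Parser.
sudo apt-get install isc-dhcp-server apache2 nfs-kernel-server nmap \
    bind9 expect xinetd tftpd-hpa tftp-hpa libnet-telnet-perl \
    syslinux libsys-virt-perl libxml-parser-perl
```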

Configure NICS

Configure the cluster-facing NICs. The interface used for managing the compute nodes should use a static IP address. An example /etc/network/interfaces:

auto eth1
iface eth1 inet static
  address 192.168.52.1
  netmask 255.255.255.0

Configure hostname

The xCAT 2 Management Node hostname must be configured before setting up the xCAT 2 Management Node. The hostname, or its resolvable IP address, will be used as the default master name in the xCAT site table when xCAT is installed. This name must resolve to the cluster-facing NIC. Short hostnames (no domain) are the norm for the Management Node and all cluster nodes.

/etc/hostname should contain, for example:

mgt.cluster.net

If you run the hostname command, it should return the same:

# hostname
mgt.cluster.net

Configure DNS resolution

In the /etc/resolv.conf file, set the nameserver to the management node. For example:

search cluster
nameserver 192.168.52.1
nameserver 192.168.131.2

Configure NTP

When the management node is used to install service nodes or compute nodes, its time configuration is inherited by them, so it is recommended to configure time management (NTP) on the management node.
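A minimal sketch of such a setup, assuming the stock Ubuntu ntp package and xCAT's site.ntpservers attribute (the server name mgt is illustrative):

```shell
# Install an NTP daemon on the management node (Ubuntu 12.04 package name).
sudo apt-get install ntp
# Tell xCAT which NTP server the nodes should use; "mgt" (the MN's
# short hostname) is an illustrative value.
chdef -t site ntpservers=mgt
```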

Setup basic hosts file

Ensure that lines like the following are in /etc/hosts:

127.0.0.1               localhost.localdomain localhost
::1                     localhost6.localdomain6 localhost6
###
192.168.52.1 mgt.cluster mgt

Setup the TimeZone

When the management node is used to install service nodes or compute nodes, its timezone configuration is inherited by them, so set the correct timezone on the management node:

sudo dpkg-reconfigure tzdata

Create a Separate File system for /install (optional)

It is not required, but recommended, that you create a separate file system for the /install directory on the Management Node. It should be large enough to hold several install images (tens of GB).
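A sketch of dedicating a partition to /install, assuming an unused partition such as /dev/sdb1 (the device name is illustrative; substitute your own):

```shell
# Create and mount a dedicated file system for /install.
# /dev/sdb1 is an illustrative device name; substitute your own.
sudo mkfs.ext4 /dev/sdb1
sudo mkdir -p /install
echo "/dev/sdb1 /install ext4 defaults 0 0" | sudo tee -a /etc/fstab
sudo mount /install
```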

Install xCAT

Prepare to install xCAT from disk or media

Go to the xCAT download site and download the xCAT core tarball you want. Then go to the xCAT Dependencies download page and download the latest snapshot of the xCAT dependency tarball.

Copy the files to the Management Node (MN) and untar them:

mkdir /root/xcat2
cd /root/xcat2
tar jxvf xcat-core-ubuntu.tar.bz2
tar jxvf xcat-dep-ubuntu.tar.bz2

Setup APT repositories for xCAT and Dependencies

Point APT to the local repositories for xCAT and its dependencies:

cd /root/xcat2/xcat-dep
./mklocalrepo.sh
cd /root/xcat2/xcat-core
./mklocalrepo.sh

Install xCAT packages

Use apt to install xCAT and all of its dependencies:

apt-get clean
apt-get update
apt-get install xcat

This should automatically pull any remaining dependencies from the web if required.

Test xCAT Installation

Add xCAT commands to the path by running the following:

source /etc/profile.d/xcat.sh

Check to see the database is initialized:

tabdump site

The output should be similar to the following:

key,value,comments,disable
"xcatdport","3001",,
"xcatiport","3002",,
"tftpdir","/tftpboot",,
"installdir","/install",,
     .
     .
     .

Configure xCAT

Networks Table

All networks in the cluster must be defined in the networks table. When xCAT was installed, it ran makenetworks, which created an entry in this table for each of the networks the management node is on. Now is the time to add or update any other networks needed to the networks table.

For a sample Networks Setup, see the following example: [Setting_Up_a_Linux_xCAT_Mgmt_Node#Appendix_A:_Network_Table_Setup_Example]

passwd Table

The passwd table includes the system entry, which holds the password that will be assigned to root when a node is installed. You can modify this table to change the default root password.

tabedit passwd
#key,username,password,cryptmethod,comments,disable
"system","root","cluster",,,
"ipmi","USERID","PASSW0RD",,,

Setup DHCP

This will get the network stanza part of the DHCP configuration (including the dynamic range) set:

makedhcp -n

Setup TFTP

Nothing to do here; the TFTP server is set up by xCAT during the Management Node install.

Deploying Nodes

Add nodes to nodelist

Here you can use the power of noderanges to define many nodes quickly:

mkdef -t node -o n1-n10 groups=compute,all
mkdef -t node -o bmc1-bmc10 groups=bmc

At this point, xCAT should be ready to begin managing services.
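The n1-n10 noderange above expands to ten node objects; as a plain-shell illustration of the names it creates:

```shell
# Print the node names that the xCAT noderange n1-n10 expands to.
for i in $(seq 1 10); do
  echo "n$i"
done
```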

To see your nodelist

nodels

or

lsdef all

Setup /etc/hosts file

Add the mapping between the xCAT node names and IP addresses:

chdef -t node -o n1 ip=xxx.xxx.xxx.xxx
chdef -t node -o n2 ip=xxx.xxx.xxx.xxx
               .
               .
               .
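For more than a handful of nodes, these per-node chdef calls can be generated in a loop. A sketch that prints the commands for review (the 10.168.52.0/24 subnet and node names are illustrative; pipe the output to sh to apply):

```shell
# Generate one chdef command per node; review the output, then
# pipe it to sh to apply. Subnet and node names are illustrative.
for i in $(seq 1 3); do
  echo "chdef -t node -o n$i ip=10.168.52.$((100 + i))"
done
```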

Run the makehosts xCAT command to create the /etc/hosts file from the xCAT hosts table. (You can skip this step if creating /etc/hosts manually.)

makehosts switch,idataplex,ipmi

Verify the entries have been created in the file /etc/hosts. For example your /etc/hosts should look like this:

127.0.0.1               localhost.localdomain localhost
::1                     localhost6.localdomain6 localhost6
###
10.168.52.1 mgt mgt.cluster
10.168.52.101 n1 n1.cluster
10.168.52.102 n2 n2.cluster
10.168.52.103 n3 n3.cluster
              .
              .
              .

Setup DNS

To get the hostname/IP pairs copied from /etc/hosts to the DNS on the MN:

  • Ensure the name server is not configured to run chrooted (on Red Hat systems this is the ROOTDIR setting in /etc/sysconfig/named; on Ubuntu, check the daemon options in /etc/default/bind9)
  • Set site.forwarders to your site-wide DNS servers that can resolve site or public hostnames. The DNS on the MN will forward any requests it can't answer to these servers.

    chdef -t site forwarders=1.2.3.4,1.2.5.6

  • Run makedns and start the name server (on Ubuntu the BIND service is bind9):

    makedns && service bind9 start

Declare use of SOL

If not using a terminal server, SOL (Serial Over LAN) is recommended, but not required. To instruct xCAT to configure SOL in the installed operating systems on dx340 systems:

chdef -t group -o compute serialport=1 serialspeed=19200 serialflow=hard

noderes Table

The noderes table defines node resources used during install. If a server attribute is not defined, it defaults to the Management Node.
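A sketch of setting noderes-style attributes on the compute group (the values are illustrative; server attributes left unset default to the Management Node):

```shell
# Have the compute group PXE-boot and install over eth0; tftpserver,
# nfsserver, etc. are left unset so they default to the Management Node.
chdef -t group -o compute netboot=pxe installnic=eth0 primarynic=eth0
```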

Begin using xCAT to configure system and discover nodes

Setup /etc/hosts file

Since the mapping between node objects and IP addresses has already been added to the hosts table, you can run the makehosts xCAT command to create the /etc/hosts file from the xCAT hosts table. You can skip this step if creating /etc/hosts manually.

# makehosts switch,idataplex,ipmi

Verify the entries have been created in the file /etc/hosts.

For example your /etc/hosts should look like this:

127.0.0.1               localhost.localdomain localhost
::1                     localhost6.localdomain6 localhost6
###
172.20.0.1 mgt mgt.cluster.net
172.20.101.1 n1 n1.cluster.net
172.20.101.2 n2 n2.cluster.net
172.20.101.3 n3 n3.cluster.net
172.20.101.4 n4 n4.cluster.net
172.20.101.5 n5 n5.cluster.net
172.20.101.6 n6 n6.cluster.net
172.20.101.7 n7 n7.cluster.net
              .
              .
              .

Setup DNS

Ensure the name server is not configured to run chrooted (on Red Hat, the ROOTDIR setting in /etc/sysconfig/named; on Ubuntu, check /etc/default/bind9), then start it (on Ubuntu the BIND service is bind9):

# makedns && service bind9 start

For more information about name resolution in an xCAT Cluster, see [Cluster_Name_Resolution].

Setup DHCP

# makedhcp -n && service isc-dhcp-server restart

Setup TFTP

Setting up a Trivial File Transfer Protocol (TFTP) server is done by xCAT during the Management Node install; nothing further is required.

Setup conserver

# makeconservercf && service conserver start

Discover nodes

If you want to update node firmware when you discover the nodes, follow the steps in [XCAT_iDataPlex_Advanced_Setup#Updating_Node_Firmware] before continuing.

Walk over to the systems, press the power buttons, and watch the nodes discover themselves by tailing the syslog:

# tail -f /var/log/syslog

If you cannot discover the nodes successfully, go to [#Manually_discover_a_node].

After a successful discovery operation, the following attributes will have been added for each target node (you can verify them by running lsdef <node>):

bmcpassword  # the BMC password
mac          # the MAC address of the node, obtained during discovery
mtm          # the hardware machine type/model
serial       # the hardware serial number

Manually discover a node

If you have only a few nodes and do NOT have a switch that supports the discovery process, you can set up the xCAT tables manually instead of running a discovery operation, and then configure the BMC for the nodes:

  • Add the mac address for each node to the xCAT database:

This MAC address can be read from the back panel of the machine. It should belong to the NIC connected to the installation network.

    chdef -t node -o n1 mac="xx:xx:xx:xx:xx:xx"
  • Add the nodes to dhcp service

    makedhcp n1

  • Set the node to run bmcsetup on its next network boot

    nodeset n1 runcmd=bmcsetup

  • Then walk over and power on the node.

Verify state of nodes

After about 5-10 minutes, nodes should be configured and ready for hardware management:

# rpower all stat|xcoll
==================================== 
n1,n10,n100,n101,n102,n103,n104,n105,n106,n107,n108,n109,n11,n110,n111,
n112,n113,n114,n115,n116,n117,n118,n119,n12,n120,n121,n122,n123,n124,n125,n126,n127,n128,
n129,n13,n130,n131,n132,n133,n134,n135,n136,n137,n138,n139,n14,n140,n141,n142,n143,n144,n145,
n146,n147,n148,n149,n15,n150,n151,n152,n153,n154,n155,n156,n157,n158,n159,n16,n160,n161,
n162,n163,n164,n165,n166,n167,n17,n18,n19,n2,n20,n21,n22,n23,n24,n25,n26,n27,n28,n29,n3,n30,
n31,n32,n33,n34,n35,n36,n37,n38,n39,n4,n40,n41,n42,n43,n44,n45,n46,n47,n48,n49,n5,n50,n51,n52,
n53,n54,n55,n56,n57,n58,n59,n6,n60,n61,n62,n63,n64,n65,n66,n67,n68,n69,n7,n70,n71,n72,n73,n74,
n75,n76,n77,n78,n79,n8,n80,n81,n82,n83,n84,n85,n86,n87,n88,n89,n9,n90,n91,n92,n93,n94,n95,n96,
n97,n98,n99
==================================== 
on

Install nodes

Create Ubuntu repository

  • Download the Ubuntu ISO (or load your OS's DVDs) and place it in a directory:

    mkdir /root/xcat2
    cd /root/xcat2
    wget <ISO of your Ubuntu OS>

  • Run copycds to set up the install directory for node diskful/diskless boots. The copycds command copies the contents of the ISO to /install/ubuntu10.10/<arch>. For example:

    cd /root/xcat2
    copycds ubuntu-10.10-server-amd64.iso

Begin installation

The following command will commence installation to disk on all of the nodes. Modify the oslevel from ubuntu10.10 to whatever version you are installing.

# rinstall -o ubuntu10.10 all

Monitor installation

It is possible to use the wcons command to monitor a sampling of the nodes:

# wcons n1,n20,n80,n100

or rcons to monitor one node

# rcons n1

Additionally, nodestat may be used to check the status of a node as it installs:

# nodestat n20,n21
n20: installing man-pages - 2.39-10.el5 (0%)
n21: installing prep

After some time, the nodes should be up and ready for general usage.
