XCAT_zVM

Document Abstract

This document provides an overview, installation instructions, and a quick start guide for basic z/VM and Linux on System z administration using xCAT. For technical support, please contact Thang Pham or post your question on the mailing list.

Terminology

This section outlines the terminology used within this document.

DirMaint
CMS application that helps manage an installation's VM directory.

Ganglia
"Ganglia consists of two unique daemons (gmond and gmetad), a PHP-based web frontend and a few other small utility programs. Gmond is a multi-threaded daemon which runs on each cluster node you want to monitor. Gmetad is the daemon that monitors the other nodes by periodically polling them, parsing the collected XML, and saving all the numeric, volatile metrics to the round-robin databases." - Ganglia Development Team

Life cycle
A collection of tasks covering a virtual server's life: powering it on and off, and creating, editing, and deleting it.

SMAPI
The Systems Management APIs simplify the task of managing many virtual images running under a single z/VM image.

Virtual server
A server composed of virtualized resources. An operating system can be installed on a virtual server.

VMCP
Linux module that allows execution of CP commands.

CP
"The Control Program (CP) is the operating system that underlies all of z/VM. It is responsible for virtualizing your z/Series machine's real hardware, and allowing many virtual machines to simultaneously share the hardware resource." - IBM

xCAT
xCAT (Extreme Cloud Administration Tool) is a toolkit that provides support for the deployment and administration of large cloud environments.

zHCP
zHCP (System z Hardware control point) is a Linux virtual server that interfaces with SMAPI and CP and manages other virtual servers on z/VM.

AutoYaST
"AutoYaST is a system for installing one or more SUSE Linux systems automatically and without user intervention. AutoYaST installations are performed using an autoyast profile with installation and configuration data." -SUSE

Kickstart
"Automated installation for Red Hat. It uses a file containing the answers to all the questions that would normally be asked during a typical Red Hat Linux installation." -Red Hat

Support on z/VM and Linux on System z

This section provides a list of supported functionalities on xCAT for z/VM and Linux on System z.

  1. Lifecycle Management
    • Power on/off VM
    • Create/edit/delete VM
  2. Inventory
    • Software and hardware inventory of VM
    • Resource (e.g. disks, networks) inventory
  3. Image Management
    • Cloning VM
    • Vanilla installation of Linux via AutoYaST or Kickstart
    • Provisioning diskless VM via NFS read-only root filesystem
  4. Network Management
    • Supports Layer 2 and 3 network switching for QDIO GLAN/VSWITCH and HiperSockets GLAN
    • Create/edit/delete QDIO GLAN/VSWITCH and HiperSockets GLAN (not persistent)
    • Add/delete virtual network devices to VM
  5. Storage Management
    • Add/remove ECKD/FBA disks from disk pool
    • Add/remove ECKD/FBA disks from VM
  6. OS Management
    • Upgrading Linux OS
    • Add/update/remove software packages on OS
    • Basic xCAT functionalities, e.g. remote shell, post-scripts, rsync, etc.
  7. Monitoring
    • Linux monitoring using Ganglia
  8. Others
    • Full command line interface support
    • Web user interface support
    • Self-service portal to provision VM on demand

Design Architecture

This section provides an architectural overview of xCAT on z/VM and Linux on System z.

Figure 1. Layout of xCAT on System z

xCAT can be used to manage virtual servers spanning multiple z/VM partitions. The xCAT management node (MN) runs on any Linux virtual server. It manages each z/VM partition using a System z hardware control point (zHCP) running on a privileged Linux virtual server. The zHCP interfaces with the z/VM systems management API (SMAPI), the directory manager (DirMaint), and the control program layer (CP) to manage the z/VM partition. It utilizes a C socket interface to communicate with the SMAPI layer and the VMCP Linux module to communicate with the CP layer.
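
For example, once the vmcp module is loaded on a Linux virtual server, CP commands can be issued directly from the Linux shell. A minimal illustration (the QUERY USERID reply shown is representative of our environment):

    # modprobe vmcp
    # vmcp QUERY USERID
    LNX2     AT POKDEV61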

Prerequisite

This section details what is required before you set up xCAT on z/VM and Linux on System z.

Figure 2. Sample environment

Before you can install xCAT, there are a couple of prerequisites. You need to have two virtual servers (one server for the xCAT MN and the other for the zHCP) running Linux.

Both the xCAT MN and zHCP are linked to LNXMAINT, which contains files used by both CMS and Linux.

LNXMAINT has the following directory entry:

USER LNXMAINT PWD 64M 128M BEG
INCLUDE TCPCMSU
LINK TCPMAINT 0592 0592 RR
MDISK 0191 3390 1 20 EM6340 MR
MDISK 0192 3390 1 279 EM6341 MR

It is recommended that you have the following PROFILE EXEC on the LNXMAINT 192 disk (which is linked by all virtual servers on the z/VM partition).

/* PROFILE EXEC for Linux virtual servers */
'CP SET RUN ON'
'CP SET PF11 RETRIEVE FORWARD'
'CP SET PF12 RETRIEVE'
'ACC 592 C'
'SWAPGEN 300 1048576' /* create a 512M VDISK swap space */
'SWAPGEN 301 2097152' /* create a 1G VDISK swap space */
'PIPE CP QUERY' userid() '| var user'
parse value user with id . dsc .
if (dsc = 'DSC') then /* User is disconnected */
    'CP IPL 100'
else /* User is interactive -> prompt */
do
    say 'Do you want to IPL Linux from minidisk 100? y/n'
    parse upper pull answer .
    if (answer = 'Y') then 'CP IPL 100'
end /* else */

The logic at the end of the PROFILE EXEC IPLs each disconnected virtual server from minidisk 100 upon startup, and prompts interactive users before doing so.

For more information on how to setup z/VM, refer to z/VM and Linux on IBM System z: The Virtualization Cookbook for SLES 11.

xCAT Management Node

In our development environment, the xCAT MN has the following directory entry:

USER LNX1 PWD 1G 2G G
INCLUDE LNXDFLT
COMMAND SET VSWITCH VSW2 GRANT LNX1
MDISK 0100 3390 0001 10016 EMC21A MR
MDISK 0101 3390 0001 10016 EMC21C MR
MDISK 0102 3390 0001 10016 EMC28B MR

where the user profile, LNXDFLT, contains:

PROFILE LNXDFLT
CPU 00 BASE
CPU 01
IPL CMS
MACHINE ESA 4
CONSOLE 0009 3215 T
NICDEF 0800 TYPE QDIO LAN SYSTEM VSW2
SPOOL 000C 2540 READER *
SPOOL 000D 2540 PUNCH A
SPOOL 000E 1403 A
LINK MAINT 0190 0190 RR
LINK MAINT 019E 019E RR
LINK LNXMAINT 0192 0191 RR

To install Linux onto this virtual server, we used the following parm file:

ramdisk_size=65536 root=/dev/ram1 ro init=/linuxrc TERM=dumb        
HostIP=10.1.100.1 Hostname=gpok1.endicott.ibm.com          
Gateway=10.1.100.1 Netmask=255.255.255.0                       
Broadcast=10.1.100.255 Layer2=1 OSAHWaddr=02:00:06:FF:FF:FF     
ReadChannel=0.0.0800  WriteChannel=0.0.0801  DataChannel=0.0.0802
Nameserver=10.1.100.1                                          
portname=FOOBAR                                                     
portno=0                                                            
Install=nfs://10.1.100.254/install/SLES-11-SP1-DVD-s390x-GMC3-DVD1.iso
UseVNC=1 VNCPassword=12345678                                       
InstNetDev=osa OsaInterface=qdio OsaMedium=eth Manual=0

It is recommended that you use LVM for the install directory (/install), so you are not constrained by disk size. In our development environment, we allocated 4GB to the root filesystem (/) and put the rest (17GB) into an LVM partition for /install. The xCAT MN is connected to NICDEF 0800, which uses VSW2, a layer 2 VSWITCH. If you plan to have your virtual servers use DHCP, each virtual server must be connected to a layer 2 VSWITCH. It is recommended that you create a layer 2 VSWITCH, which will allow virtual servers to communicate across LPARs and CECs.
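
If you build the LVM volume for /install by hand, the steps look roughly like the following sketch (the DASD partition /dev/dasdb1 and the volume group and logical volume names are assumptions; substitute your own values and add a matching /etc/fstab entry so /install is mounted on boot):

    # pvcreate /dev/dasdb1
    # vgcreate xcatvg /dev/dasdb1
    # lvcreate -l 100%FREE -n installlv xcatvg
    # mkfs.ext3 /dev/xcatvg/installlv
    # mkdir -p /install
    # mount /dev/xcatvg/installlv /install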

The xCAT MN can run on any supported Linux distribution, SLES or RHEL. In our development environment, the xCAT MN was set up on SLES 11 SP1 with the Server Base, Gnome, and X Windows packages installed.

System z Hardware Control Point

In our development environment, the zHCP has the following directory entry:

USER LNX2 PWD 512M 1G ABCDG
COMMAND SET VSWITCH VSW2 GRANT LNX2
CPU 00 BASE
CPU 01
IPL CMS
MACHINE ESA 4
OPTION LNKNOPAS
CONSOLE 0009 3215 T
NICDEF 0800 TYPE QDIO LAN SYSTEM VSW2
SPOOL 000C 2540 READER *
SPOOL 000D 2540 PUNCH A
SPOOL 000E 1403 A
LINK MAINT 0190 0190 RR
LINK MAINT 019E 019E RR
LINK LNXMAINT 0192 0191 RR
MDISK 0100 3390 1 10016 EMC278

To install Linux onto this virtual server, we used the following parm file:

ramdisk_size=65536 root=/dev/ram1 ro init=/linuxrc TERM=dumb        
HostIP=10.1.100.2 Hostname=gpok2.endicott.ibm.com
Gateway=10.1.100.1 Netmask=255.255.255.0
Broadcast=10.1.100.255 Layer2=1 OSAHWaddr=02:00:06:FF:FF:FE       
ReadChannel=0.0.0800  WriteChannel=0.0.0801  DataChannel=0.0.0802 
Nameserver=10.1.100.1                                             
portname=FOOBAR                                                     
portno=0                                                            
Install=nfs://10.1.100.1/install/SLES-10-SP3-DVD-s390x-DVD1.iso
UseVNC=1 VNCPassword=12345678                                       
InstNetDev=osa OsaInterface=qdio OsaMedium=eth Manual=0

It is recommended that you mount the root filesystem (/) onto MDISK 0100. You do not need 10016 cylinders allocated to the zHCP, but you do need enough for a small Linux operating system. The zHCP is connected to NICDEF 0800, which uses VSW2, a layer 2 VSWITCH. The NICDEF must be specified in the directory entry and not in a profile. The zHCP has A, B, C, D, and G privileges. It needs class A privilege to use the FORCE command, class B privilege to use the ATTACH and FLASHCOPY commands (if permitted), class C privilege to use the SEND command, and class D privilege to use the PURGE command.

The zHCP can run on any supported Linux distribution, SLES or RHEL. In our development environment, the zHCP was set up on SLES 10 SP3 with the Server Base package installed.

Planning

This section helps you plan the layout of the xCAT cloud environment.

The table below shows the configuration used in our development environment. Plan out how your own cloud environment will be configured based on these examples, recording your values in the Custom column.

Configuration                      Example                                Custom
---------------------------------  -------------------------------------  ------
Network                            Gateway: 10.1.100.1
                                   Netmask: 255.255.255.0
                                   IP range: 10.1.100.1-10.1.100.254
                                   Hostname range: gpok1-gpok254
                                   Broadcast: 10.1.100.255
                                   Nameserver: 10.1.100.1

FTP server containing Linux ISOs   IP: 10.1.100.254

xCAT management node               Hostname: gpok1.endicott.ibm.com
                                   IP: 10.1.100.1
                                   UserID: LNX1

Hardware control point(s)          Hostname: gpok2.endicott.ibm.com
                                   IP: 10.1.100.2
                                   UserID: LNX2
                                   LPAR: POKDEV61
                                   Network: VSW2 (layer 2 VSWITCH)

Installation of xCAT

This section details how to install the xCAT management node.

Red Hat Enterprise Linux

If you have Red Hat Enterprise Linux, follow the instructions below.

  1. Log on as root using a PuTTY terminal
  2. Disable SELinux
    # echo 0 > /selinux/enforce
    

The command above switches off enforcement only until the system is rebooted.
To make it permanent, edit /etc/selinux/config and change SELINUX=enforcing to SELINUX=permissive.
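
For example, the following pair of commands makes the change (a minimal sketch; the backup copy is optional but recommended):

    # cp /etc/selinux/config /etc/selinux/config.orig
    # sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config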

  3. Add the RHEL repository to yum

    • Create a repository file

      # touch /etc/yum.repos.d/rhel-dvd.repo
      
    • Insert the following into the repository file rhel-dvd.repo

      [rhel-dvd]
      name=RHEL DVD
      baseurl=ftp://xxx-ftp-path
      enabled=1
      gpgcheck=1
      

where xxx-ftp-path is the FTP path to the RHEL DVD. For example:

        [rhel-dvd]
        name=RHEL DVD
        baseurl=ftp://10.1.100.254/rhel5.5/s390x/Server
        enabled=1
        gpgcheck=1



    • Download the RPM-GPG-KEY-redhat-release key from the FTP server (e.g. ftp://10.1.100.254/rhel5.5/s390x/) onto this node.
    • Import the key

        # rpm --import RPM-GPG-KEY-redhat-release
  4. Make an xcat directory under /root

    # mkdir /root/xcat
    
  5. Download the latest xCAT tarballs, xcat-core-xxx.tar.bz2 and xcat-dep-xxx.tar.bz2 (where xxx is the release and version number) from <http://sourceforge.net/apps/mediawiki/xcat/index.php?title=Download_xCAT> onto /root/xcat

  6. Extract the contents of each tarball

    # cd /root/xcat
    # tar jxf xcat-core-xxx.tar.bz2
    # tar jxf xcat-dep-xxx.tar.bz2
    
  7. Create yum repositories for xCAT
    If you have Red Hat Enterprise Linux 5:

    # /root/xcat/xcat-dep/rh5/s390x/mklocalrepo.sh
    # /root/xcat/xcat-core/mklocalrepo.sh
    

If you have Red Hat Enterprise Linux 6:

    # /root/xcat/xcat-dep/rh6/s390x/mklocalrepo.sh
    # /root/xcat/xcat-core/mklocalrepo.sh
  8. Use yum to install xCAT
    # yum clean metadata
    # yum install xCAT
    

Ignore the warning messages (if any) about the keys and accept them.

SUSE Linux Enterprise Server

If you have SUSE Linux Enterprise Server, follow the instructions below.

  1. Log on as root using a PuTTY terminal
  2. Install the DHCP server (if not already installed)

    # zypper install dhcp-server
    
  3. Make an xcat directory under /root

    # mkdir /root/xcat
    
  4. Download the latest xCAT tarballs, xcat-core-xxx.tar.bz2 and xcat-dep-xxx.tar.bz2 (where xxx is the version number) from <http://sourceforge.net/apps/mediawiki/xcat/index.php?title=Download_xCAT> onto /root/xcat

  5. Extract the contents of each tarball

    # cd /root/xcat
    # tar jxf xcat-core-xxx.tar.bz2
    # tar jxf xcat-dep-xxx.tar.bz2
    
  6. Add the xCAT repositories to zypper

    • If you have SUSE Linux Enterprise Server 10:

      # zypper sa file:///root/xcat/xcat-dep/sles10/s390x xCAT-dep
      # zypper sa file:///root/xcat/xcat-core xcat-core
      
    • If you have SUSE Linux Enterprise Server 11:

      # zypper ar file:///root/xcat/xcat-dep/sles11/s390x xCAT-dep
      # zypper ar file:///root/xcat/xcat-core xcat-core
      

Ignore the warning messages (if any) about the keys and accept them.

  7. Use zypper to install xCAT
    # zypper install xCAT
    

Finalizing Installation

Continue with the following steps once you have completed installing xCAT:

  1. Add the xCAT commands to path

    # source /etc/profile.d/xcat.sh
    
  2. Enable vsftp and restart xCAT

    # chtab key=vsftp site.value='y'
    
    # service xcatd restart
    Restarting xCATd Stopping xCATd                                      done
    
     Shutting down vsftpd                                                done
     Starting vsftpd                                                     done
    
  3. Check that the database is initialized

    # tabdump site
    

The output should look similar to the following:

    #key,value,comments,disable
    "blademaxp","64",,
    "domain","endicott.ibm.com",,
    "fsptimeout","0",,
    "installdir","/install",,
    "ipmimaxp","64",,
    "ipmiretries","3",,
    "ipmitimeout","2",,
    "consoleondemand","no",,
    "master","10.1.100.1",,
    "maxssh","8",,
    "ppcmaxp","64",,
    "ppcretry","3",,
    "ppctimeout","0",,
    "rsh","/usr/bin/ssh",,
    "rcp","/usr/bin/scp",,
    "sharedtftp","0",,
    "SNsyncfiledir","/var/xcat/syncfiles",,
    "tftpdir","/tftpboot",,
    "xcatdport","3001",,
    "xcatiport","3002",,
    "xcatconfdir","/etc/xcat",,
    "timezone","US/Eastern",,
    "nameservers","10.1.100.1",,
  4. Set up an FTP server on the xCAT MN to contain the Linux distributions

    • Download the desired Linux ISO into /install
    • Go into the /install directory

      # cd /install
      
    • Extract the ISO into the xCAT install tree /install

      # copycds -n xxx -a s390x /install/yyy.iso
      

where xxx is the distribution name and yyy is the ISO name.

For example, if you have a SUSE Linux Enterprise Server 10 SP3 ISO:

        # copycds -n sles10sp3 -a s390x /install/SLES-10-SP3-DVD-s390x-DVD1.iso
        Copying media to /install/sles10sp3/s390x/1
        Media copy operation successful

or if you have a Red Hat Enterprise Linux 5.4 ISO:

        # copycds -n rhel5.4 -a s390x /install/RHEL5.4-Server-20090819.0-s390x-DVD.iso
        Copying media to /install/rhel5.4/s390x
        Media copy operation successful



    • Remove the ISO from /install since it is no longer needed and consumes disk space

        # rm /install/SLES-10-SP3-DVD-s390x-DVD1.iso

Installation of xCAT UI

This section details the installation of the xCAT UI.

Red Hat Enterprise Linux

If you have Red Hat Enterprise Linux, follow the instructions below.

  1. Use yum to install the following packages (accept the dependencies)

    # yum install php php-pear httpd
    
  2. Allow httpd to make network connections (if SELinux is enabled)

    # /usr/sbin/setsebool httpd_can_network_connect=1
    
  3. Install the xCAT-UI

    # yum install xCAT-UI
    

SUSE Linux Enterprise Server

If you have SUSE Linux Enterprise Server, follow the instructions below.

  1. Use zypper to install the following packages (accept the dependencies)

    # zypper in php5-openssl apache2 apache2-mod_php5
    
  2. Install the xCAT-UI (accept the dependencies)

    # zypper in xCAT-UI
    

SSL Configuration

This section details the configuration of SSL on the xCAT server. SSL (Secure Sockets Layer) is a security protocol for communication over networks.

Red Hat Enterprise Linux

No action is required; SSL should be configured by default.

SUSE Linux Enterprise Server

If you have SUSE Linux Enterprise Server, follow the instructions below. These instructions come from <http://en.opensuse.org/Apache_Howto_SSL>.

  1. Apache should be set to start with SSL. Verify with the following command

    # a2enmod ssl
     "ssl" already present
    
  2. Make sure that SSL is active

    # a2enflag SSL
    
  3. Create self-signed keys

    # gensslcert
    
  4. Copy /etc/apache2/vhosts.d/vhost-ssl.template to /etc/apache2/vhosts.d/vhost-ssl.conf

    # cp /etc/apache2/vhosts.d/vhost-ssl.template /etc/apache2/vhosts.d/vhost-ssl.conf
    
  5. For the enabled modules, server flags, generated keys, and vhosts to take effect, restart the Apache service

    # service apache2 restart
    Syntax OK
    Shutting down httpd2 (waiting for all children to terminate)         done
    Starting httpd2 (prefork)
    
  6. Open a browser (Firefox) to the xCAT UI at https://xxx/xcat, where xxx is the host name of the xCAT MN. For example, https://gpok1.endicott.ibm.com/xcat. You will get an "Untrusted certificate" warning when you first try to access the URL. This is expected because a self-signed certificate is used.
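
You can also confirm from the command line that the web server answers over SSL before opening the browser (the -k flag tells curl to accept the self-signed certificate; the host name is our example value):

    # curl -k -I https://gpok1.endicott.ibm.com/xcat/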

Installation of zHCP

This section details the installation of the zHCP. Before continuing, note that the user directory entry for the zHCP should be similar to the one below.

USER LNX2 PWD 512M 1G ABCDG
COMMAND SET VSWITCH VSW2 GRANT LNX2
CPU 00 BASE
CPU 01
IPL CMS
MACHINE ESA 4
OPTION LNKNOPAS
CONSOLE 0009 3215 T
NICDEF 0800 TYPE QDIO LAN SYSTEM VSW2
SPOOL 000C 2540 READER *
SPOOL 000D 2540 PUNCH A
SPOOL 000E 1403 A
LINK MAINT 0190 0190 RR
LINK MAINT 019D 019D RR
LINK MAINT 019E 019E RR
LINK LNXMAINT 0192 0191 RR
LINK TCPMAINT 0592 0592 RR
MDISK 0100 3390 1 10016 EMC278

It is important to include OPTION LNKNOPAS in the user directory entry because it is needed by the zHCP to link to disks of other virtual machines. Also, it is important to note that the zHCP uses one network device (in our development environment, VSWITCH VSW2). This network must be specified in the directory entry and not in a profile. The zHCP has A, B, C, D, and G privileges. It needs class A privilege to use the FORCE command, class B privilege to use the FLASHCOPY command (if permitted), class C privilege to use the SEND command, and class D privilege to use the PURGE command.

Configuring SMAPI and DirMaint

Perform the following steps to prepare a Linux virtual server for installation of the System z Hardware Control Point (zHCP).

  1. Install and configure SMAPI and DirMaint for each z/VM partition. Refer to Installing the IBM z/VM Manageability Access Point Agent (step 1: Configure and start DirMaint; step 2: Configure the SMAPI server environment).
  2. Grant the zHCP access to DirMaint.

    • Open a 3270 console, log on to MAINT, and issue the following commands, substituting the LNX2 used in this example with the user ID of your virtual machine.

      ==> DIRM FOR ALL AUTHFOR LNX2 CMDL 140A CMDS ADGHOPS
      DVHXMT1191I Your AUTHFOR request has been sent for processing.
      DVHREQ2288I Your AUTHFOR request for ALL at * has been accepted.
      DVHREQ2289I Your AUTHFOR request for ALL at * has completed; with RC =
      DVHREQ2289I 0.
      
      ==> DIRM FOR ALL AUTHFOR LNX2 CMDL 150A CMDS ADGHOPS
      DVHXMT1191I Your AUTHFOR request has been sent for processing.
      DVHREQ2288I Your AUTHFOR request for ALL at * has been accepted.
      DVHREQ2289I Your AUTHFOR request for ALL at * has completed; with RC =
      DVHREQ2289I 0.
      
    • Change VSMWORK1 AUTHLIST

      ==> SET FILEPOOL VMSYS
      
      ==> QUERY FILEPOOL CONNECT
      Userid Connected
      VSMWORK1 Yes
      VSMWORK2 Yes
      VSMWORK3 Yes
      VSMREQIN Yes
      VSMREQIU Yes
      VSMPROXY Yes
      MAINT Yes
      
      ==> ACCESS VMSYS:VSMWORK1.DATA A (FORCERW
      DMSACR724I VMSYS:VSMWORK1.DATA replaces A (0191)
      
      ==> ACCESS VMSYS:VSMWORK1. B (FORCERW
      DMSACR724I VMSYS:VSMWORK1. replaces B (05E5)
      
      ==> X VSMWORK1 AUTHLIST B
      00001 DO.NOT.REMOVE
      00002 MAINT ALL
      00003 VSMPROXY ALL
      00004 VSMWORK1 ALL
      
    • Copy the line where VSMWORK1 is specified by inserting a double quote in the prefix area and pressing Enter. Substitute VSMWORK1 with the user ID you wish to have DirMaint access (in our case, LNX2). The VSMWORK1 AUTHLIST should then be similar to this:

      00001 DO.NOT.REMOVE
      00002 MAINT ALL
      00003 VSMPROXY ALL
      00004 VSMWORK1 ALL
      00005 LNX2 ALL
      
    • Restart SMAPI

      ==> FORCE VSMWORK1
      ==> XAUTOLOG VSMWORK1
      
  3. Give the virtual server where you will install the zHCP A, B, C, D, and G privileges. The zHCP needs class A privilege to use the FORCE command, class B privilege to use the FLASHCOPY command (if available), class C privilege to use the SEND command, and class D privilege to use the PURGE command. In order for the zHCP to have these privileges, you must open a 3270 console, log on to MAINT after the user has been created, and issue:

    ==> DIRM FORUSER LNX2 CLASS ABCDG
    
  4. Log off MAINT

    ==> LOGOFF
    

Configuring zHCP

Warning: The NICDEF statement for the zHCP must be contained in the directory entry and not in a profile.

  1. Log on to the xCAT MN as root using a PuTTY terminal
  2. Set up the SSH keys between the xCAT MN and zHCP

    # xdsh gpok2 -K
    
  3. Go into the directory where you extracted the xcat-dep tarball, e.g. /root/xcat. Send the zHCP RPM (zhcp-1-1.s390x.rpm) located in /root/xcat/xcat-dep/<os>/s390x to the zHCP, where <os> is the operating system installed on the zHCP. For example,

    # scp /root/xcat/xcat-dep/sles10/s390x/zhcp-1-1.s390x.rpm root@10.1.100.2:
    
  4. Exit the PuTTY session to the xCAT MN

  5. Log on to the zHCP as root using a PuTTY terminal
  6. Load the Linux VMCP module (if not already loaded)

    # modprobe vmcp
    
  7. Install gcc and gcc-c++ (if not already installed)

If you have Red Hat Enterprise Linux:

    # yum install gcc gcc-c++

If you have SUSE Linux Enterprise Server:

    # zypper install gcc gcc-c++
  8. Install the RPM
    # rpm -i /root/zhcp-1-1.s390x.rpm
    

Initializing Database

This section details how to initialize the xCAT database.

  1. Log on to the xCAT MN as root using a PuTTY terminal
  2. Load the Linux VMCP module on the xCAT MN (if not already loaded)

    # modprobe vmcp
    
  3. Set up the passwd table. This table will contain the default password for new nodes installed through autoyast/kickstart and other methods.

    # chtab key=system passwd.username=root passwd.password=xxx
    

Substitute xxx with the root password.

  4. If you do not plan to use regular expressions to represent the IP address and hostname, skip to the next step.

Set up the hosts table (it will be used to set up /etc/hosts). You need to determine the regular expression that represents the nodes that xCAT will manage.

    # chtab node=xxx hosts.ip="yyy" hosts.hostnames="zzz"

Substitute xxx with the node range, yyy with the regular expression for the IP addresses, and zzz with the regular expression for the hostnames. You can use the following online tool to construct your regular expression: <http://gskinner.com/RegExr/>. Each time a new node is added to xCAT, you will need to run makehosts. You will need to set up the hosts table for each group you create.

In our development environment, we set up nodes belonging to group=all to have hostnames of gpok1, gpok2, etc. and IP addresses of 10.1.100.1, 10.1.100.2, etc. in /etc/hosts with the following command (an illustration of the substitution appears after it):

    # chtab node=all hosts.ip="|gpok(\d+)|10.1.100.(\$1+0)|" hosts.hostnames="|(.*)|(\$1).endicott.ibm.com|"
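
To see concretely what the regular expression does, the following shell sketch reproduces the IP substitution by hand for one node name (for illustration only; xCAT applies the pattern internally):

    # node=gpok3
    # num=$(echo $node | sed 's/gpok\([0-9]*\)/\1/')
    # echo "10.1.100.$((num + 0))"
    10.1.100.3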
  5. Set up the networks table. You need to set the DHCP, DNS, and FTP servers to the IP address of your xCAT MN.

In our development environment, we set up the xCAT MN to manage the network 10.1.100.0, which has a netmask of 255.255.255.0 and a gateway of 10.1.100.1, on Ethernet interface eth1. Our DHCP, DNS, and FTP servers are at 10.1.100.1. This is the command we used:

    # chtab net=10.1.100.0 networks.mask=255.255.255.0 networks.mgtifname=eth1 networks.gateway=10.1.100.1 networks.dhcpserver=10.1.100.1 networks.tftpserver=10.1.100.1 networks.nameservers=10.1.100.1
  6. If you do not plan to use DHCP, skip to the next step.

Define the DHCP interfaces in the site table to limit which network the DHCP server will listen on. In our development environment, we set up eth1 as the interface the DHCP server listens on.

    # chtab key=dhcpinterfaces site.value='all|eth1'
  7. Edit the nameserver and master entries in the site table to point to the xCAT MN. In our development environment, we set both to 10.1.100.1.

    # chtab key=nameservers site.value='10.1.100.1'
    # chtab key=master site.value='10.1.100.1'
    
  8. Configure the DHCP server (if you plan to use it)

    • Add networks into the DHCP configuration

      # makedhcp -n
      
    • Restart DHCP

      # service dhcpd restart
      Shutting down DHCP server              done
      Starting DHCP server [chroot]          done
      
  9. Configure the DNS server. For more information, see Cluster Name Resolution.

    • Restart DNS

      # service named restart
      Shutting down name server BIND  waiting for named to shut down (29s) done
      Starting name server BIND                                            done
      
    • Start DNS on boot

      # chkconfig --level 345 named on
      
  10. Start by adding the zHCP node into the database (use the DNS hostname of that node when adding). In our development environment, our zHCP has a hostname of gpok2 and a userID of LNX2, and belongs to group=all. This is the command we used:

    # mkdef -t node -o gpok2 userid=LNX2 hcp=gpok2.endicott.ibm.com mgt=zvm groups=all
    1 object definitions have been created or modified.
    

Set the node's IP address and hostname (only if a regex is not set for the group)

    # chtab node=gpok2 hosts.ip="10.1.100.2" hosts.hostnames="gpok2.endicott.ibm.com"
  11. Add more nodes (if any) that you want to manage into the database. For example, if you have a node with a hostname of gpok3 and userID of LNX3 on the same z/VM partition (managed by the zHCP on gpok2), you would use the following command:
    # mkdef -t node -o gpok3 userid=LNX3 hcp=gpok2.endicott.ibm.com mgt=zvm groups=all
    1 object definitions have been created or modified.
    

The node IP address should follow the rule you specified in the hosts table (step 4).

Set the node's IP address and hostname (only if a regex is not set for the group)

    # chtab node=gpok3 hosts.ip="10.1.100.3" hosts.hostnames="gpok3.endicott.ibm.com"
  12. Update /etc/hosts

    # makehosts
    
  13. Update DNS

    # makedns
    Handling localhost in /etc/hosts.
    Handling gpok3 in /etc/hosts.
    Getting reverse zones, this may take several minutes in scaling cluster.
    Completed getting reverse zones.
    Updating zones.
    Completed updating zones.
    Updating DNS records, this may take several minutes in scaling cluster.
    Completed updating DNS records.
    DNS setup is completed
    
  14. Set up the SSH keys for the node range that you want to manage

    # xdsh xxx -K
    

Substitute xxx with the node range. For example, if you were to set up the SSH keys for the nodes you added above in steps 10 and 11, you can use:

    # xdsh all -K
    Enter the password for the userid: root on the node where the ssh keys 
    will be updated:

    /usr/bin/ssh setup is complete.
    return code = 0

The xdsh command will prompt you for a root password: the root password for the node or group you are trying to push the public SSH key to. It is recommended that you put nodes with the same root password into the same group. More importantly, the xdsh command will only work for nodes that are online.

  15. Start using supported xCAT commands. At this point, you could use the xCAT UI to start managing your virtual servers. However, you should go through the rest of this document in order to understand the concepts and how the xCAT UI works in the background.

xCAT Commands

This section lists the current xCAT commands supported on z/VM and Linux on System z.

rpower - Controls the power for a node or noderange.
The syntax is: rpower <node> [on|off|softoff|stat|reset]

# rpower gpok3 stat
gpok3: on

Note: You should cleanly shut down the node by issuing rpower <node> softoff.

mkvm - Creates a new virtual server based on a directory entry, or clones an existing node (creating a new virtual server with the same profile/resources as the specified source node).
The syntax is: mkvm <new node> /tmp/<directory entry text file>

# mkvm gpok3 /tmp/dirEntry.txt
gpok3: Creating user directory entry for LNX3... Done

For cloning, the syntax is: mkvm <target Linux> <source Linux> pool=<disk pool> pw=<multi password>

# mkvm gpok4 gpok3 pool=POOL1
gpok4: Cloning gpok3
gpok4: Linking source disk (0100) as (1100)
gpok4: Linking source disk (0101) as (1101)
gpok4: Stopping LNX3... Done
gpok4: Creating user directory entry
gpok4: Granting VSwitch (VSW1) access for gpok3
gpok4: Granting VSwitch (VSW2) access for gpok3
gpok4: Adding minidisk (0100)
gpok4: Adding minidisk (0101)
gpok4: Disks added (2). Disks in user entry (2)
gpok4: Linking target disk (0100) as (2100)
gpok4: Copying source disk (1100) to target disk (2100) using FLASHCOPY
gpok4: Mounting /dev/dasdg1 to /mnt/LNX3
gpok4: Setting network configuration
gpok4: Linking target disk (0101) as (2101)
gpok4: Copying source disk (1101) to target disk (2101) using FLASHCOPY
gpok4: Powering on
gpok4: Detatching source disk (0101) at (1101)
gpok4: Detatching source disk (0100) at (1100)
gpok4: Starting LNX3... Done

rmvm - Removes a virtual server.
The syntax is: rmvm <node>.

# rmvm gpok3
gpok3: Deleting virtual server LNX3... Done

lsvm - Lists a virtual server's configuration. Supported options are:

  • List the directory entry.
    The syntax is: lsvm <node>

    # lsvm gpok3
    gpok3: USER LNX3 PWD 512M 1G G
    gpok3: INCLUDE LNXDFLT
    gpok3: COMMAND SET VSWITCH VSW2 GRANT LNX3
    
  • List the defined network names available for a given node.
    The syntax is: lsvm <node> --getnetworknames

    # lsvm gpok3 --getnetworknames
    gpok3: LAN:QDIO SYSTEM GLAN1
    gpok3: LAN:HIPERS SYSTEM GLAN2
    gpok3: LAN:QDIO SYSTEM GLAN3
    gpok3: VSWITCH SYSTEM VLANTST1
    gpok3: VSWITCH SYSTEM VLANTST2
    gpok3: VSWITCH SYSTEM VSW1
    gpok3: VSWITCH SYSTEM VSW2
    gpok3: VSWITCH SYSTEM VSW3
    
  • List the configuration for a given network.
    The syntax is: lsvm <node> --getnetwork [networkname]

    # lsvm gpok3 --getnetwork GLAN1
    gpok3: LAN SYSTEM GLAN1        Type: QDIO    Connected: 1    Maxconn: INFINITE
    gpok3:   PERSISTENT  UNRESTRICTED  IP                        Accounting: OFF
    gpok3:   IPTimeout: 5                 MAC Protection: Unspecified
    gpok3:   Isolation Status: OFF
    
  • List the disk pool names available.
    The syntax is: lsvm <node> --diskpoolnames

    # lsvm gpok3 --diskpoolnames
    gpok3: POOL1
    gpok3: POOL2
    gpok3: POOL3
    
  • List the configuration for a given disk pool.
    The syntax is: lsvm <node> --diskpool [pool name] [space (free or used)]

    # lsvm gpok3 --diskpool POOL1 free
    gpok3: #VolID DevType StartAddr Size
    gpok3: EMC2C4 3390-09 0001 10016
    gpok3: EMC2C5 3390-09 0001 10016
    

chvm - Changes a virtual server's configuration. Supported options are:

  • Adds a 3390 (ECKD) disk to a virtual server's directory entry.
    The syntax is: chvm <node> --add3390 [disk pool] [device address] [cylinders] [mode] [read password] [write password] [multi password]

    # chvm gpok3 --add3390 POOL1 0101 3338 MR
    gpok3: Adding disk 0101 to LNX3... Done
    
  • Adds a 3390 (ECKD) disk that is defined in a virtual server's directory entry to that virtual server's active configuration.
    The syntax is: chvm <node> --add3390active [device address] [mode]

    # chvm gpok3 --add3390active 0101 MR
    gpok3: Adding disk 0101 to LNX3... Done
    
  • Adds a 9336 (FBA) disk to a virtual server's directory entry.
    The syntax is: chvm <node> --add9336 [disk pool] [virtual device] [block size] [blocks] [mode] [read password] [write password] [multi password]

    # chvm gpok3 --add9336 POOL3 0101 512 4194272 MR
    gpok3: Adding disk 0101 to LNX3... Done
    
  • Add a disk to a disk pool defined in the EXTENT CONTROL. The disk has to already be attached to SYSTEM.
    The syntax is: chvm <node> --adddisk2pool [function] [region] [volume] [group]. Function type can be either: (4) Define region as full volume and add to group OR (5) Add existing region to group
    If the volume already exists in the EXTENT CONTROL, use function 5. If the volume does not exist in the EXTENT CONTROL, but is attached to SYSTEM, use function 4.

    # chvm gpok2 --adddisk2pool 4 DM1234 DM1234 POOL1
    gpok2: Adding DM1234 to POOL1... Done
    
    # chvm gpok2 --adddisk2pool 5 DM1234 POOL1
    gpok2: Adding DM1234 to POOL1... Done
    
  • Adds a network adapter to a virtual server's directory entry (case sensitive).
    The syntax is: chvm <node> --addnic [address] [type] [device count]

    # chvm gpok3 --addnic 0600 QDIO 3
    gpok3: Adding NIC 0600 to LNX3... Done
    
  • Adds a virtual processor to a virtual server's directory entry.
    The syntax is: chvm <node> --addprocessor [address]

    # chvm gpok3 --addprocessor 01
    gpok3: Adding processor 01 to LNX3... Done
    
  • Adds a virtual processor to a virtual server's active configuration (case sensitive).
    The syntax is: chvm <node> --addprocessoractive [address] [type]

    # chvm gpok3 --addprocessoractive 01 IFL
    gpok3: CPU 01 defined
    
  • Adds a v-disk to a virtual server's directory entry.
    The syntax is: chvm <node> --addvdisk [device address] [size]

    # chvm gpok3 --addvdisk 0300 2097120
    gpok3: Adding V-Disk 0300 to LNX3... Done
    
  • Connects a given network adapter to a GuestLAN.
    The syntax is: chvm <node> --connectnic2guestlan [address] [lan] [owner]

    # chvm gpok3 --connectnic2guestlan 0600 GLAN1 LN1OWNR
    gpok3: Connecting NIC 0600 to GuestLan GLAN1 on LN1OWNR... Done
    
  • Connects a given network adapter to a VSwitch.
    The syntax is: chvm <node> --connectnic2vswitch [address] [vswitch]

    # chvm gpok3 --connectnic2vswitch 0600 VSW1
    gpok3: Connecting NIC 0600 to VSwitch VSW1 on LNX3... Done
    
  • Copy a disk attached to a given virtual server.
    The syntax is: chvm <node> --copydisk [target address] [source node] [source address]

    # chvm gpok3 --copydisk 0100 gpok2 0101
    
  • Adds a dedicated device to a virtual server's directory entry.
    The syntax is: chvm <node> --dedicatedevice [virtual device] [real device] [read-only]

    # chvm gpok3 --dedicatedevice 0101 637F 0
    gpok3: Dedicating device 637F as 0101 to LNX3... Done
    

Specify 1 for read-only if the virtual device is to be in read-only mode; otherwise, specify 0.

  • Deletes the IPL statement from the virtual server's directory entry.
    The syntax is: chvm <node> --deleteipl

    # chvm gpok3 --deleteipl
    gpok3: Removing IPL statement on LNX3... Done
    
  • Formats a disk attached to a given virtual server (only ECKD disks are supported). The disk should not be linked to any other virtual server. This command is best used after --add3390.
    The syntax is: chvm <node> --formatdisk [disk address] [multi password]

    # chvm gpok3 --formatdisk 0100 PWD
    
  • Disconnects a given network adapter.
    The syntax is: chvm <node> --disconnectnic [address]

    # chvm gpok3 --disconnectnic 0600
    gpok3: Disconnecting NIC 0600 on LNX3... Done
    
  • Grants VSwitch access for a given virtual server.
    The syntax is: chvm <node> --grantvswitch [VSwitch]

    # chvm gpok3 --grantvswitch VSW1
    
  • Removes a minidisk from a virtual server's directory entry.
    The syntax is: chvm <node> --removedisk [virtual device]

    # chvm gpok3 --removedisk 0101
    gpok3: Removing disk 0101 on LNX3... Done
    
  • Resets z/VM SMAPI, assuming the SMAPI worker machines to reset are: VSMWORK1, VSMWORK2, VSMWORK3, VSMREQIN, and VSMREQIU.
    The syntax is: chvm <zhcp> --resetsmapi

    # chvm gpok2 --resetsmapi
    gpok2: Resetting SMAPI... Done
    
  • Remove a disk from a disk pool defined in the EXTENT CONTROL.
    The syntax is: chvm <node> --removediskfrompool [function] [region] [group]. Function type can be either: (1) Remove region, (2) Remove region from group, (3) Remove region from all groups, OR (7) Remove entire group

Remove a region from the EXTENT CONTROL:

    # chvm gpok2 --removediskfrompool 1 DM1234
    gpok2: Removing DM1234... Done

Remove a region from a group in the EXTENT CONTROL:

    # chvm gpok2 --removediskfrompool 2 DM1234 POOL1
    gpok2: Removing DM1234 from POOL1... Done

Remove a region from all groups in the EXTENT CONTROL:

    # chvm gpok2 --removediskfrompool 3 DM1234
    gpok2: Removing DM1234... Done

Remove group POOL1 in the EXTENT CONTROL (The second argument has no significance):

    # chvm gpok2 --removediskfrompool 7 FOOBAR POOL1
    gpok2: Removing POOL1... Done
  • Removes a network adapter from a virtual server's directory entry.
    The syntax is: chvm <node> --removenic [address]

    # chvm gpok3 --removenic 0700
    gpok3: Removing NIC 0700 on LNX3... Done
    
  • Removes a processor from an active virtual server's configuration.
    The syntax is: chvm <node> --removeprocessor [address]

    # chvm gpok3 --removeprocessor 01
    gpok3: Removing processor 01 on LNX3... Done
    
  • Replaces a virtual server's directory entry.
    The syntax is: chvm <node> --replacevs [directory entry]

    # chvm gpok3 --replacevs /tmp/dirEntry.txt
    gpok3: Replacing user entry of LNX3... Done
    
  • Sets the IPL statement for a given virtual server.
    The syntax is: chvm <node> --setipl [ipl target] [load parms] [parms]

    # chvm gpok3 --setipl CMS
    gpok3: Setting IPL statement on LNX3... Done
    
  • Sets the password for a given virtual server.
    The syntax is: chvm <node> --setpassword [password]

    # chvm gpok3 --setpassword PSSWD
    gpok3: Setting password for LNX3... Done
    

rscan - Collects the node information from one or more hardware control points.
The syntax is: rscan <zhcp> [-w]. The -w option will populate the database with the details collected by rscan.

# rscan gpok2
gpok2:
  objtype=node
  arch=s390x
  os=sles10sp3
  hcp=gpok2.endicott.ibm.com
  userid=LNX2
  nodetype=vm
  parent=POKDEV61
  groups=all
  mgt=zvm

rinv - Remote hardware and software inventory.
The syntax is: rinv <node> <all|config>.

# rinv gpok3 all
gpok3: z/VM UserID: LNX3
gpok3: z/VM Host: POKDEV61
gpok3: Operating System: SUSE Linux Enterprise Server 11 (s390x)
gpok3: Architecture:    s390x
gpok3: HCP: gpok2.endicott.ibm.com
gpok3: Privileges: 
gpok3:     Currently: G
gpok3:     Directory: G
gpok3: 
gpok3: Total Memory:    796M
gpok3: Processors: 
gpok3:     CPU 01  ID  FF0C452E20978000 CP   CPUAFF ON
gpok3:     CPU 00  ID  FF0C452E20978000 (BASE) CP   CPUAFF ON
gpok3: 
gpok3: Disks: 
gpok3:     DASD 0100 3390 EMC2C6 R/W      10016 CYL ON DASD  C2C6 SUBCHANNEL = 0000
gpok3:     DASD 0190 3390 EV61A2 R/O        107 CYL ON DASD  61A2 SUBCHANNEL = 000E
gpok3:     DASD 0191 3390 EMC20D R/O       1000 CYL ON DASD  C20D SUBCHANNEL = 0013
gpok3:     DASD 019D 3390 EV61A2 R/O        146 CYL ON DASD  61A2 SUBCHANNEL = 000F
gpok3:     DASD 019E 3390 EV61A2 R/O        250 CYL ON DASD  61A2 SUBCHANNEL = 0010
gpok3:     DASD 0300 9336 (VDSK) R/W     262144 BLK ON DASD  VDSK SUBCHANNEL = 0014
gpok3:     DASD 0301 9336 (VDSK) R/W     524288 BLK ON DASD  VDSK SUBCHANNEL = 0015
gpok3:     DASD 0402 3390 EV61A2 R/O        146 CYL ON DASD  61A2 SUBCHANNEL = 0011
gpok3:     DASD 0592 3390 EV61A2 R/O         70 CYL ON DASD  61A2 SUBCHANNEL = 0012
gpok3: 
gpok3: NICs:    
gpok3:     Adapter 0600.P00 Type: QDIO      Name: UNASSIGNED  Devices: 3
gpok3:       MAC: 02-00-06-00-05-38         LAN: * None
gpok3:     Adapter 0700.P00 Type: QDIO      Name: UNASSIGNED  Devices: 3
gpok3:       MAC: 02-00-06-00-05-39         LAN: * None
gpok3:     Adapter 0800.P00 Type: QDIO      Name: FOOBAR      Devices: 3
gpok3:       MAC: 02-00-06-00-05-3A         VSWITCH: SYSTEM VSW2

Note: The complete inventory can only be retrieved when the node is online.

xdsh - Concurrently runs commands on multiple nodes.
The syntax is: xdsh <node> -e <script>.

# xdsh gpok3 -e /tmp/myScript.sh

For a list of general xCAT commands, see the xCAT documentation.

For some commands above, such as chvm, a return code and reason code may be returned. In such cases, refer to the Systems Management Application Programming (SMAPI) documentation, which lists the return codes and their descriptions.

In some cases, a return code of 596 may be returned. In this case, take the reason code that follows it and decipher it using the Directory Maintenance Facility Messages documentation.

Installing Linux Using AutoYast or Kickstart

This section provides details on the installation of Linux using autoyast or kickstart.

There are two ways to install Linux onto a z/VM virtual server, depending on which Linux distribution you want. One is through autoyast, which is used to install SUSE Linux Enterprise Server (SLES) releases. The other is through kickstart, which is used to install Red Hat Enterprise Linux (RHEL) releases.

Before you begin, make sure the following is done.

  • The FTP server must be set up during the xCAT MN installation, and the FTP root directory (/install) must contain the appropriate Linux distribution.
  • If you are managing an IP address range starting at 1 (e.g. 10.1.100.1), be sure that the netmask is set correctly (e.g. 255.255.255.0) on the xCAT MN; otherwise, the node you are trying to provision cannot find the repository.
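
A quick way to check the netmask on the xCAT MN is shown below (the interface name eth1 and the output line reflect our example environment):

    # ifconfig eth1 | grep Mask
              inet addr:10.1.100.1  Bcast:10.1.100.255  Mask:255.255.255.0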

In the following example, we will provision a new node (gpok3) with a userID (LNX3) that is managed by our zHCP (gpok2). You will need to substitute the node name, userID, and zHCP name with appropriate values.

  1. Log on to the xCAT MN as root using a PuTTY terminal
  2. Create the node definition
    # mkdef -t node -o gpok3 userid=LNX3 hcp=gpok2.endicott.ibm.com mgt=zvm groups=all
    Object definitions have been created or modified.
    

Set the node's IP address and hostname (only if a regex is not set for the group)

    # chtab node=gpok3 hosts.ip="10.1.100.3" hosts.hostnames="gpok3.endicott.ibm.com"
  3. Update /etc/hosts

    # makehosts
    
  4. Update DNS

    # makedns
    
  5. Define the directory entry for the new virtual server in a text file (dirEntry.txt). For our example, we used the following:

    USER LNX3 PWD 512M 1G G
    INCLUDE LNXDFLT
    COMMAND SET VSWITCH VSW2 GRANT LNX3
    

Once you have defined the directory entry in a text file, create the virtual server by issuing the following command (the full file path must be given):

    # mkvm gpok3 /tmp/dirEntry.txt
    gpok3: Creating user directory entry for LNX3... Done

The directory entry text file should not contain any extra new lines (\n). A MAC address will be assigned to the user ID upon creation.
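
One way to check the file for stray trailing lines is to display the line endings explicitly (a quick sketch; cat -A marks each line end with a $):

    # cat -A /tmp/dirEntry.txt
    USER LNX3 PWD 512M 1G G$
    INCLUDE LNXDFLT$
    COMMAND SET VSWITCH VSW2 GRANT LNX3$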

  6. Copy the default autoyast/kickstart template and package list available in xCAT (if not already copied). Customize this template and package list (the ones you copied) as you see fit. For more information on how to customize the template, see Appendix B.

If you want to install a SUSE Linux Enterprise Server:

    # mkdir -p /install/custom/install/sles
    # cp /opt/xcat/share/xcat/install/sles/compute.sles10.s390x.tmpl /install/custom/install/sles
    # cp /opt/xcat/share/xcat/install/sles/compute.sles10.s390x.pkglist /install/custom/install/sles
    # cp /opt/xcat/share/xcat/install/sles/compute.sles11.s390x.tmpl /install/custom/install/sles
    # cp /opt/xcat/share/xcat/install/sles/compute.sles11.s390x.pkglist /install/custom/install/sles

There are two templates available for SLES, one for SLES 10 (compute.sles10.s390x.tmpl) and the other for SLES 11 (compute.sles11.s390x.tmpl). It is recommended that you copy both templates into /install/custom/install/sles.

If you want to install a Red Hat Enterprise Linux:

    # mkdir -p /install/custom/install/rh
    # cp /opt/xcat/share/xcat/install/rh/compute.rhel5.s390x.tmpl /install/custom/install/rh
    # cp /opt/xcat/share/xcat/install/rh/compute.rhel5.s390x.pkglist /install/custom/install/rh
    # cp /opt/xcat/share/xcat/install/rh/compute.rhels6.s390x.tmpl /install/custom/install/rh/compute.rhel6.s390x.tmpl
    # cp /opt/xcat/share/xcat/install/rh/compute.rhels6.s390x.pkglist /install/custom/install/rh/compute.rhel6.s390x.pkglist

There are also two templates available for RHEL, one for RHEL 5 (compute.rhel5.s390x.tmpl) and the other for RHEL 6 (compute.rhels6.s390x.tmpl). It is recommended that you copy both templates into /install/custom/install/rh.

The default templates are configured to use one 3390-mod9 with / mounted and use DHCP. The package lists (.pkglist) are configured to install the base software package. You should only customize the disks, partitioning, and install packages, and leave the network configuration alone.
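
For example, to have an additional package installed during the autoyast/kickstart installation, append its name to the copied package list (vim is used here purely as an illustration):

    # echo "vim" >> /install/custom/install/sles/compute.sles11.s390x.pkglist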

  7. Add disks to the new node (the default autoyast/kickstart template available in xCAT requires one 3390-MOD9 disk attached at 0100).
    # chvm gpok3 --add3390 POOL1 0100 10016 MR
    gpok3: Adding disk 0100 to LNX3... Done
    

Be sure that each disk in the pool is attached to SYSTEM.

Alternatively, you can use SCSI/FCP disks (which are seen by z/VM as 9336 disks), but you first need to configure the autoyast/kickstart template. See Appendix B for details. If you choose to have SCSI/FCP disks, you can add these disks to the new node using:

    # chvm gpok3 --add9336 POOL3 0101 512 4194272 MR
    gpok3: Adding disk 0101 to LNX3... Done
  8. Set up the noderes and nodetype tables. You need to determine the OS and profile (autoyast/kickstart template) for the node. Here, we have nodetype.os=sles10sp3. You can find the available OS versions and profiles by issuing:
    # tabdump osimage
    

If you want to install a SUSE Linux Enterprise Server:

    # chtab node=gpok3 noderes.netboot=zvm nodetype.os=sles10sp3 nodetype.arch=s390x nodetype.profile=compute

If you want to install a Red Hat Enterprise Linux:

     # chtab node=gpok3 noderes.netboot=zvm nodetype.os=rhel5.4 nodetype.arch=s390x nodetype.profile=compute
  9. Verify the definition
    # lsdef gpok3
    

It should look similar to this:

    Object name: gpok3
         arch=s390x
         groups=all
         hcp=gpok2.endicott.ibm.com
         hostnames=gpok3.endicott.ibm.com
         ip=10.1.100.3
         mac=02:00:01:FF:FF:F0
         mgt=zvm
         netboot=zvm
         os=sles10sp3
         postbootscripts=otherpkgs
         postscripts=syslog,remoteshell,syncfiles
         profile=compute
         userid=LNX3
  10. Add the new node to DHCP

    # makedhcp -a
    
  11. Power on the node

    # rpower gpok3 on
    gpok3: Starting LNX3... Done
    
  12. Prepare the new node for installation

    # nodeset gpok3 install
    gpok3: Purging reader... Done
    gpok3: Punching kernel to reader... Done
    
    gpok3: Punching parm to reader... Done
    
    gpok3: Punching initrd to reader... Done
    
    gpok3: Kernel, parm, and initrd punched to reader.  Ready for boot.
    
  13. Boot the new node from the reader

    # rnetboot gpok3 ipl=00C
    gpok3: Starting LNX3... Done
    
    gpok3: Booting from 00C... Done
    
  14. In Gnome or KDE, open the VNC viewer to see the installation progress. It might take a couple of minutes before you can connect.

    # vncviewer gpok3:1
    

The default VNC password is 12345678. If you have trouble connecting to the vncviewer, open a 3270 console to the node, try steps 12 and 13 again, and look at the progress on the console.

  15. (Only for SLES 10 SP2 or older) Once the first phase of installation is complete, restart the virtual server to complete the final phase of installation

    # rpower gpok3 reset
    
  16. The default password for the node can be found in the passwd table (see the Initializing Database section, step 3). The SSH keys should already be set up for the node.

Adding Software Packages

This section shows how to add other software packages (ones available outside the OS distribution) into the autoyast/kickstart installation process.

In the following example, we will add Ganglia (packaged with xCAT) and configure it during the autoyast/kickstart installation.

  1. Put the RPMs you want to be installed under the /install/post/otherpkgs/<os>/<arch> directory, where <os> and <arch> can be found in the nodetype table.

    # mkdir -p /install/post/otherpkgs/sles11sp1/s390x 
    # cp /root/xcat/xcat-dep/sles11/s390x/ganglia-gmond-3.1.1-1.s390x.rpm /install/post/otherpkgs/sles11sp1/s390x
    # cp /root/xcat/xcat-dep/sles11/s390x/libganglia-3.1.1-1.s390x.rpm /install/post/otherpkgs/sles11sp1/s390x
    # cp /root/xcat/xcat-dep/sles11/s390x/libconfuse-2.6-1.s390x.rpm /install/post/otherpkgs/sles11sp1/s390x
    
  2. Put the package names (in our case, libconfuse, libganglia, and ganglia-gmond) to be installed in /install/custom/install/<os>/<profile>.<os>.otherpkgs.pkglist. For example:

    # cat /install/custom/install/sles/compute.sles11sp1.otherpkgs.pkglist
    libconfuse 
    libganglia 
    ganglia-gmond
    

The autoyast/kickstart install process picks up the RPMs listed in otherpkgs.pkglist and installs them onto the nodes.

  3. Most software packages require some kind of configuration. In the case of Ganglia, gmond needs to be configured to advertise to gmetad (on the xCAT MN). The configuration can be done using postscripts.
    Place the following script under /install/postscripts.

    # cat /install/postscripts/confGanglia
    
    #!/bin/sh
    # Post-script to customize virtual machine
    
    # Configure Ganglia
    echo "Configuring Ganglia..."
    
    # Get IP address of MS
    OS=`uname`
    echo "The OS is: $OS"
    ms_ip=$MONMASTER
    result=`ping -c1 $MONMASTER 2>&1`
    if [ $? -eq 0 ]; then
        index1=`expr index "$result" "\("`
        index2=`expr index "$result" "\)"`
        pos=`expr $index1 + 1`
        length=`expr $index2 - $index1`
        length=`expr $length - 1`
        ms_ip=`expr substr "$result" $pos $length`
        echo "MS IP is: $ms_ip"
    fi
    
    CLUSTER=\"$MONSERVER\"
    echo "Cluster is: $CLUSTER"
    MASTER=$ms_ip
    gmond_conf="/etc/ganglia/gmond.conf"
    gmond_conf_old="/etc/gmond.conf"
    if [ $OS != "AIX" ]; then
        if [ -f  $gmond_conf ]; then
            grep "xCAT gmond settings done" $gmond_conf
            if [ $? -gt 0 ]; then
                /bin/cp -f $gmond_conf /etc/ganglia/gmond.conf.orig
                sed -i 's/setuid = yes/setuid = no/1' $gmond_conf
                sed -i 's/name = "unspecified"/name='$CLUSTER'/1' $gmond_conf
                sed -e "1,40s/mcast_join = .*/host = $MASTER/" $gmond_conf > /etc/temp.conf
                /bin/cp -f /etc/temp.conf $gmond_conf
                sed -i 's/mcast_join/#/g' $gmond_conf
                sed -i 's/bind/#/g' $gmond_conf
                echo "# xCAT gmond setup end" &gt;&gt; $gmond_conf
            fi
        fi
    fi
    
    if [ $OS != "AIX" ]; then
        if [ -f $gmond_conf_old ]; then
            grep "xCAT gmond settings done" $gmond_conf_old
            if [ $? -gt 0 ]; then
                /bin/cp -f $gmond_conf_old /etc/gmond.conf.orig
                sed -i 's/setuid = yes/setuid = no/1' $gmond_conf_old
                sed -i 's/name = "unspecified"/name='$CLUSTER'/1' $gmond_conf_old
                sed -e "1,40s/mcast_join = .*/host = $MASTER/" $gmond_conf_old > /etc/temp.conf
                /bin/cp -f /etc/temp.conf $gmond_conf_old
                sed -i 's/mcast_join/#/g' $gmond_conf_old
                sed -i 's/bind/#/g' $gmond_conf_old
                echo "# xCAT gmond settings done sh_old" &gt;&gt; $gmond_conf_old
            fi 
        fi
    fi
    
    # Start gmond
    /etc/init.d/gmond start
    
  4. Give the appropriate file permissions for the script

    # chmod 755 /install/postscripts/confGanglia
    
  5. Specify the postscripts to run at install time by putting them in the postscripts table in xCAT (using tabedit). In the case of Ganglia, the otherpkgs and confGanglia scripts need to be run after installation. The otherpkgs script comes packaged with xCAT, and the confGanglia script is provided above.

    # tabdump postscripts
    
    #node,postscripts,postbootscripts,comments,disable
    "xcatdefaults","syslog,remoteshell,syncfiles","otherpkgs",,
    "all","otherpkgs,confGanglia",,,
    
  6. You can optionally install other packages (e.g. Ganglia) after the autoyast/kickstart installation process by using updatenode <node> otherpkgs. The node must be online for this to work.

    # updatenode gpok3 otherpkgs
    gpok3: Running postscript: otherpkgs
    gpok3: NFSERVER=10.1.100.1
    gpok3: OTHERPKGDIR=10.1.100.1/post/otherpkgs/sles11sp1/s390x
    gpok3: Repository 'SUSE-Linux-Enterprise-Server-11-SP1 11.1.1-1.152' is up to date.
    gpok3: Repository 'sles11sp1' is up to date.
    gpok3: All repositories have been refreshed.
    gpok3: zypper --non-interactive update --auto-agree-with-license
    gpok3: Loading repository data...
    gpok3: Reading installed packages...
    gpok3: 
    gpok3: Nothing to do.
    gpok3: rpm -Uvh --replacepkgs  libconfuse* libganglia* ganglia-gmond*
    gpok3: warning: libconfuse-2.6-1.s390x.rpm: Header V3 DSA signature: NOKEY, key ID da736c68
    gpok3: Preparing...                ##################################################
    gpok3: libconfuse                  ##################################################
    gpok3: libganglia                  ##################################################
    gpok3: ganglia-gmond               ##################################################
    gpok3: insserv: warning: script 'S11xcatpostinit1' missing LSB tags and overrides
    gpok3: insserv: warning: script 'xcatpostinit1' missing LSB tags and overrides
    gpok3: gmond                     0:off  1:off  2:off  3:on   4:off  5:on   6:off
    gpok3: Running of postscripts has completed.
    

Cloning Virtual Servers

This section shows how to clone a virtual server running Linux.

In the following example, we will clone the virtual server that we created (gpok3) in the previous section, Installing Linux Using AutoYast or Kickstart. The new virtual server will have node name gpok4 and user ID LNX4, and will be managed by the same zHCP (gpok2). You will need to substitute the node name, user ID, and zHCP name with appropriate values.

  1. Log on to the xCAT MN as root using a PuTTY terminal (if not already)
  2. The source node must be online and accessible via SSH. If it is not online, bring it online.

    # rpower gpok3 on
    
  3. Set up the SSH keys for the source node to be cloned (if not already)

    # xdsh gpok3 -K
    
  4. Create the table definition for the new node (gpok4)

    # mkdef -t node -o gpok4 userid=LNX4 hcp=gpok2.endicott.ibm.com mgt=zvm groups=all
    

Set the node's IP address and hostname (only if a regex is not set for the group)

    # chtab node=gpok4 hosts.ip="10.1.100.4" hosts.hostnames="gpok4.endicott.ibm.com"
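
If a group-wide regular expression is preferred instead, a sketch using xCAT's table regex syntax, assuming the gpokN naming convention used in this document (adjust the addresses and domain to your network):

    # chtab node=all hosts.ip="|gpok(\d+)|10.1.100.($1+0)|" hosts.hostnames="|(.*)|($1).endicott.ibm.com|"
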
  6. Update /etc/hosts

    # makehosts
    
  7. Update DNS

    # makedns
    
  8. Add the new node to DHCP

    # makedhcp -a
    
  9. In order to clone a virtual server running Linux, the partition must be mounted by path. This is done by default for the node (gpok3) that we created in the previous section and, in general, for nodes provisioned by xCAT using the default templates.

For SUSE Linux Enterprise Server:
The root entry in /etc/fstab, which contains information about the system partitions and disks, should look similar to this:

    /dev/disk/by-path/ccw-0.0.0100-part1  /  ext3  acl,user_xattr  1 1

The parameters line in /etc/zipl.conf, which specifies which disks to bring online when the system is IPLed, should look similar to this:

    parameters = "root=/dev/disk/by-path/ccw-0.0.0100-part1 TERM=dumb"

If you edit zipl.conf, you must run zipl after making the changes so that they are written to the boot record.
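
For example, rerunning zipl looks similar to this (a sketch; the exact messages vary by distribution and release):

    # zipl
    Using config file '/etc/zipl.conf'
    ...
    Done.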

  10. Clone the virtual server(s) running Linux:
    # mkvm gpok4 gpok3 pool=POOL1
    gpok4: Cloning gpok3
    gpok4: Linking source disk (0100) as (1100)
    gpok4: Stopping LNX3... Done
    
    gpok4: Creating user directory entry
    gpok4: Granting VSwitch (VSW2) access for LNX3
    gpok4: Adding minidisk (0100)
    gpok4: Disks added (1). Disks in user entry (1)
    gpok4: Linking target disk (0100) as (2100)
    gpok4: Copying source disk (1100) to target disk (2100) using FLASHCOPY
    gpok4: Mounting /dev/dasde1 to /mnt/LNX3
    gpok4: Setting network configuration
    gpok4: Powering on
    gpok4: Detatching source disk (0100) at (1100)
    gpok4: Starting LNX3... Done
    
    gpok4: Done
    

This will create a virtual server (gpok4) identical to gpok3. It will use disks in disk pool POOL1.

If FLASHCOPY is not enabled on your z/VM system, then this will take several minutes to complete depending on the number of nodes you want to clone. Also, FLASHCOPY will not work if the disks are not on the same storage facility.

  11. Check the boot status of the node by pinging it:
    # pping gpok4
    gpok4: ping
    

If the node returns a ping, then it is fully booted and you can start using it. If you try to SSH into the node and are prompted for a password, you need to set up the SSH keys for each of the new nodes:

    # xdsh gpok4 -K
    Enter the password for the userid: root on the node where the ssh keys
    will be updated:

    /usr/bin/ssh setup is complete.
    return code = 0
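
To verify that password-less SSH is now working, you can run a trivial command through xdsh:

    # xdsh gpok4 date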

Setting Up Ganglia on xCAT

This section details how to set up Ganglia on Linux on System z.

Red Hat Enterprise Linux

If you have Red Hat Enterprise Linux, follow the instructions below.

  1. Log on to the xCAT MN as root using a PuTTY terminal (if not already)
  2. Go into the directory where you extracted the xcat-dep tarball, e.g. /root/xcat. Locate the Ganglia RPMs under /root/xcat/xcat-dep/<os>/s390x, where <os> is the RHEL version you are running. Verify that the following RPMs are present.

    rrdtool-1.4.5-0.20.s390x.rpm (RHEL 5.x only)
    libconfuse-2.6-1.s390x.rpm
    libganglia-3.1.1-1.s390x.rpm
    ganglia-gmetad-3.1.1-1.s390x.rpm
    ganglia-gmond-3.1.1-1.s390x.rpm
    ganglia-web-3.1.1-1.s390x.rpm
    
  3. Set up ganglia on the xCAT MN

    • Install the PHP and Apache packages (if not already). Use yum to install the following packages

      # yum install apr pkgconfig php-pear php-gd httpd
      
    • Install the Ganglia RPMs

      # yum install ganglia-gmetad ganglia-gmond ganglia-web
      Loaded plugins: product-id, subscription-manager
      Updating Red Hat repositories.
      Setting up Install Process
      Resolving Dependencies
      --> Running transaction check
      ---> Package ganglia-gmetad.s390x 0:3.1.1-1 will be installed
      ---> Package ganglia-gmond.s390x 0:3.1.1-1 will be installed
      ---> Package ganglia-web.s390x 0:3.1.1-1 will be installed
      --> Finished Dependency Resolution
      
      Dependencies Resolved
      
      ========================================================================
      
       Package                Arch          Version            Repository        Size
      
      ========================================================================
      
      Installing:
       ganglia-gmetad         s390x         3.1.1-1            xcat-dep          39 k
       ganglia-gmond          s390x         3.1.1-1            xcat-dep         283 k
       ganglia-web            s390x         3.1.1-1            xcat-dep         112 k
      
      Transaction Summary
      
      ========================================================================
      
      Install       3 Package(s)
      
      Total download size: 435 k
      Installed size: 1.2 M
      Is this ok [y/N]: y
      Downloading Packages:
      
      ...
      
      Total                                            79 MB/s | 435 kB     00:00     
      Running rpm_check_debug
      Running Transaction Test
      Transaction Test Succeeded
      Running Transaction
        Installing : ganglia-gmetad-3.1.1-1.s390x                                 1/3 
        Installing : ganglia-web-3.1.1-1.s390x                                    2/3 
        Installing : ganglia-gmond-3.1.1-1.s390x                                  3/3 
      duration: 73(ms)
      Installed products updated.
      
      Installed:
        ganglia-gmetad.s390x 0:3.1.1-1          ganglia-gmond.s390x 0:3.1.1-1         
        ganglia-web.s390x 0:3.1.1-1
      
      Complete!
      
    • Restart the HTTP server

      # service httpd restart
      Stopping httpd: [  OK  ]
      Starting httpd: [  OK  ]
      
    • Restart gmond and gmetad

      # service gmetad restart
      Shutting down GANGLIA gmetad: [FAILED]
      Starting GANGLIA gmetad: [  OK  ]
      
      # service gmond restart
      Shutting down GANGLIA gmond: [FAILED]
      Starting GANGLIA gmond: [  OK  ]
      
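    • Optionally, enable both daemons at boot (a small addition; chkconfig is standard on RHEL)

      # chkconfig gmetad on
      # chkconfig gmond on
      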
  4. Create the directory /install/post/otherpkgs/<os>/s390x on the xCAT MN, where <os> is the RHEL version you are running

    # mkdir -p /install/post/otherpkgs/<os>/s390x
    
  5. Copy the following packages from /root/xcat/xcat-dep/<os>/s390x into /install/post/otherpkgs/<os>/s390x, where <os> is the RHEL version you are running (an example cp is sketched after the list)

    libganglia-3.1.1-1.s390x.rpm
    libconfuse-2.6-1.s390x.rpm
    ganglia-gmond-3.1.1-1.s390x.rpm
    
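For example, assuming RHEL 6 for illustration (substitute your actual <os> directory):

    # cp /root/xcat/xcat-dep/rhel6/s390x/libganglia-3.1.1-1.s390x.rpm \
         /root/xcat/xcat-dep/rhel6/s390x/libconfuse-2.6-1.s390x.rpm \
         /root/xcat/xcat-dep/rhel6/s390x/ganglia-gmond-3.1.1-1.s390x.rpm \
         /install/post/otherpkgs/rhel6/s390x
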
  6. Refer to the Adding Software Packages section for how to automatically install Ganglia when provisioning nodes.

SUSE Linux Enterprise Server

If you have SUSE Linux, follow the instructions below.

  1. Log on to the xCAT MN as root using a PuTTY terminal (if not already)
  2. Go into the directory where you extracted the xcat-dep tarball, e.g. /root/xcat. Locate the Ganglia RPMs under /root/xcat/xcat-dep/<os>/s390x, where <os> is the SLES version you are running. Verify that the following RPMs are present.

    # ls /root/xcat/xcat-dep/sles11/s390x
    ...
    ganglia-devel-3.1.1-1.s390x.rpm
    ganglia-gmetad-3.1.1-1.s390x.rpm
    ganglia-gmond-3.1.1-1.s390x.rpm
    ganglia-gmond-modules-python-3.1.1-1.s390x.rpm
    ganglia-web-3.1.1-1.s390x.rpm
    libconfuse-2.6-1.s390x.rpm
    libconfuse-devel-2.6-1.s390x.rpm
    libganglia-3.1.1-1.s390x.rpm
    ...
    
  3. Set up ganglia on the xCAT MN

    • Install the PHP and Apache packages (if not already). Use zypper to install the following packages

      # zypper install libapr1 pkgconfig php5-pear php5-gd apache2 apache2-mod_php5
      
    • Install the Ganglia RPMs

      # zypper install ganglia-gmetad ganglia-gmond ganglia-web
      Loading repository data...
      Reading installed packages...
      Resolving package dependencies...
      
      The following NEW packages are going to be installed:
        ganglia-gmetad ganglia-gmond ganglia-web libconfuse libganglia rrdtool
      
      The following packages are not supported by their vendor:
        ganglia-gmetad ganglia-gmond ganglia-web libconfuse libganglia
      
      6 new packages to install.
      Overall download size: 981.0 KiB. After the operation, additional 3.9 MiB will 
      be used.
      Continue? [y/n/?] (y): y
      Retrieving package rrdtool-1.3.4-2.8.s390x (1/6), 478.0 KiB (1.7 MiB unpacked)
      Retrieving: rrdtool-1.3.4-2.8.s390x.rpm [done]
      Installing: rrdtool-1.3.4-2.8 [done]
      Retrieving package ganglia-web-3.1.1-1.s390x (2/6), 106.0 KiB (222.0 KiB unpacked)
      Installing: ganglia-web-3.1.1-1 [done]
      Retrieving package libconfuse-2.6-1.s390x (3/6), 102.0 KiB (468.0 KiB unpacked)
      Installing: libconfuse-2.6-1 [done]
      Retrieving package libganglia-3.1.1-1.s390x (4/6), 77.0 KiB (252.0 KiB unpacked)
      Installing: libganglia-3.1.1-1 [done]
      Retrieving package ganglia-gmetad-3.1.1-1.s390x (5/6), 67.0 KiB (188.0 KiB unpacked)
      Installing: ganglia-gmetad-3.1.1-1 [done]
      Additional rpm output:
      gmetad                    0:off  1:off  2:off  3:on   4:off  5:on   6:off
      
      Retrieving package ganglia-gmond-3.1.1-1.s390x (6/6), 151.0 KiB (1.1 MiB unpacked)
      Installing: ganglia-gmond-3.1.1-1 [done]
      Additional rpm output:
      gmond                     0:off  1:off  2:off  3:on   4:off  5:on   6:off
      
    • Restart the apache server

      # service apache2 restart
      Syntax OK
      Shutting down httpd2 (waiting for all children to terminate)         done
      Starting httpd2 (prefork)
      
    • Restart gmond and gmetad

      # service gmond restart
      Shutting down gmond                                                  done
      Starting gmond                                                       done
      
      # service gmetad restart
      Shutting down gmetad                                                 done
      Starting gmetad                                                      done
      
  4. Create the directory /install/post/otherpkgs/<os>/s390x on the xCAT MN, where <os> is the SLES version you are running

    # mkdir -p /install/post/otherpkgs/<os>/s390x
    
  5. Copy the following packages from /root/xcat/xcat-dep/<os>/s390x into /install/post/otherpkgs/<os>/s390x, where <os> is the SLES version you are running

    libganglia-3.1.1-1.s390x.rpm
    libconfuse-2.6-1.s390x.rpm
    ganglia-gmond-3.1.1-1.s390x.rpm
    
  6. Refer to the Adding Software Packages section for how to automatically install Ganglia when provisioning nodes.

Ganglia Monitoring on xCAT

This section details how to use Ganglia on Linux on System z.

  1. Log on to the xCAT MN as root using a PuTTY terminal (if not already)
  2. Transfer the Ganglia RPMs required to run gmond over to the nodes you want to monitor
    # xdcp <node> /install/post/otherpkgs/<os>/s390x/ganglia-gmond-3.1.1-1.s390x.rpm
    # xdcp <node> /install/post/otherpkgs/<os>/s390x/libconfuse-2.6-1.s390x.rpm
    # xdcp <node> /install/post/otherpkgs/<os>/s390x/libganglia-3.1.1-1.s390x.rpm
    

The commands transfer the files into the /root directory on the target nodes.

  3. Install the RPMs
    # xdsh <node> rpm -i libconfuse-2.6-1.s390x.rpm
    # xdsh <node> rpm -i libganglia-3.1.1-1.s390x.rpm
    # xdsh <node> rpm -i ganglia-gmond-3.1.1-1.s390x.rpm
    

Make sure the target node has the libapr1 (SLES) or apr (RHEL) package installed.

  4. Ensure that all nodes you wish to monitor have a nodetype of osi. This can be done by editing the nodetype table.

    # tabedit nodetype
    
  5. Add gangliamon to the monitoring table

    # monadd gangliamon
    
  6. Configure the nodes

    # moncfg gangliamon -r
    

This runs the ganglia configuration script on all the nodes.

  7. If you want to start gangliamon:
    # monstart gangliamon -r
    

The command will start the gmond daemon on all the nodes. The -r flag is required to ensure the gmond daemon is started on each node. You may also specify a particular node to start:

    # monstart gangliamon gpok3 -r

If you want to stop gangliamon:

    # monstop gangliamon -r
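
To spot-check that the daemon is actually running on a given node, one quick option is:

    # xdsh <node> "service gmond status"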

Statelite

This section details how to configure an NFS read-only root filesystem. For more details, refer to xCAT Linux Statelite. Note that you can only create statelite nodes that are of the same Linux distribution as your management node. For example, if your xCAT MN is SLES 11 SP1, you can only create SLES 11 SP1 statelite nodes.

Red Hat Enterprise Linux

If you have Red Hat Linux, follow the instructions below.

  1. Log on to the xCAT MN as root using a PuTTY terminal (if not already)
  2. Edit /etc/exports to export the /install directory. It should look similar to this:

    /install *(rw,no_root_squash,sync,no_subtree_check)
    /lite/state *(rw,no_root_squash,sync,no_subtree_check)
    
  3. Restart the NFS server

    # service nfs restart
    
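You can confirm that the export took effect with showmount (a quick sanity check):

    # showmount -e localhost
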
  4. Edit the litefile table. This table specifies which files should be kept persistent across reboots. By default, all files are kept under tmpfs, unless a persistent, ro, or bind option is specified. Refer to the litefile table description for more details.

    # tabedit litefile
    

Copy the following defaults into the litefile table. This is the minimal list of files you need.

    #image,file,options,comments,disable
    "ALL","/etc/adjtime",,,
    "ALL","/etc/fstab",,,
    "ALL","/etc/lvm/",,,
    "ALL","/etc/mtab","link",,
    "ALL","/etc/syslog.conf",,,
    "ALL","/etc/syslog.conf.XCATORIG",,,
    "ALL","/etc/ntp.conf",,,
    "ALL","/etc/ntp.conf.predhclient",,,
    "ALL","/etc/resolv.conf",,,
    "ALL","/etc/resolv.conf.predhclient",,,
    "ALL","/etc/ssh/","persistent",,
    "ALL","/etc/sysconfig/",,,
    "ALL","/tmp/",,,
    "ALL","/var/",,,
    "ALL","/opt/xcat/",,,
    "ALL","/xcatpost/",,,
    "ALL","/root/.ssh/",,,
  5. Edit the litetree table. This table controls where the files specified in the litefile table come from.
    # tabedit litetree
    

Copy the following into the litetree table. You will need to determine the Linux distribution you want. In our example, RHEL 5.4 is used.

    #priority,image,directory,comments,disable
    "1.0",,"10.1.100.1:/install/netboot/rhel5.4/s390x/compute",,
  6. Edit the statelite table. This table controls where the permanent files are kept.
    # tabedit statelite
    

Copy the following into the statelite table. You will need to determine the statelite node range and the IP address of the xCAT MN. In our example, the node range is all and the IP address is 10.1.100.1.

    #node,image,statemnt,comments,disable
    "all",,"10.1.100.1:/lite/state",,
  7. Create the persistent directory

    # mkdir -p /lite/state
    
  8. Ensure policies are set up correctly. When a node boots up, it queries the xCAT database to get the lite-files and the lite-tree. In order for this to work, the commands must be set in the policy table to allow nodes to request them. (This should already be done automatically when xCAT was installed)

    # chtab priority=4.7 policy.commands=litefile policy.rule=allow
    # chtab priority=4.8 policy.commands=litetree policy.rule=allow
    
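To confirm the entries exist, dump the policy table and look for the litefile and litetree rows:

    # tabdump policy
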
  9. Download and copy the packages from the Linux distro media into /install (if not already)

    # copycds -n xxx -a s390x /install/yyy.iso
    

Substitute xxx with the distribution name and yyy with the ISO name.

For example, if you have a RHEL 5.4 ISO:

    # copycds -n rhel5.4 -a s390x /install/RHEL5.4-Server-20090819.0-s390x-DVD.iso
  10. Create a list of packages that should be installed onto the statelite image. You should start with the base packages in the compute template and, if desired, add more packages by editing the .pkglist.

    # mkdir -p /install/custom/netboot/rh
    # cp /opt/xcat/share/xcat/netboot/rh/compute.rhel5.s390x.pkglist /install/custom/netboot/rh
    
  11. Create the statelite image

    # genimage -i eth1 -n qeth -o rhel5.4 -p compute
    OS: rhel5.4
    Profile: compute
    Interface: eth1
    Network drivers: qeth
    Do you need to set up other interfaces? [y/n] n
    Which kernel do you want to use? [default] [Enter]
    

This command creates a RHEL 5.4 image with an eth1 interface, the qeth network driver, and the compute profile. The interface used must match the xCAT MN interface that DHCP listens on. The genimage command creates an image under /install/netboot/rhel5.4/s390x/compute/rootimg. It also creates a ramdisk and kernel that are used to boot the statelite node.

  12. Modify the statelite image by creating symbolic links for the files listed in the litefile table

    # liteimg -o rhel5.4 -a s390x -p compute
    going to modify /install/netboot/rhel5.4/s390x/compute/rootimg
    creating /install/netboot/rhel5.4/s390x/compute/rootimg/.statelite
    
  13. Create the statelite node definition.

For our example, we will create a new node (gpok6) with a user ID (LNX6) that is managed by our zHCP (gpok2). You will need to substitute the node name, user ID, and zHCP name with appropriate values.

    # mkdef -t node -o gpok6 userid=LNX6 hcp=gpok2.endicott.ibm.com mgt=zvm groups=all
  14. Update /etc/hosts

    # makehosts
    
  15. Update DNS

    # makedns
    
  16. Create the new virtual server using the desired directory entry. For our example, we used the following:

    USER LNX6 PWD 512M 1G G
    COMMAND SET VSWITCH VSW2 GRANT LNX6
    CPU 00 BASE
    CPU 01
    IPL CMS
    MACHINE ESA 4
    CONSOLE 0009 3215 T
    NICDEF 0800 TYPE QDIO LAN SYSTEM VSW2
    SPOOL 000C 2540 READER *
    SPOOL 000D 2540 PUNCH A
    SPOOL 000E 1403 A
    LINK MAINT 0190 0190 RR
    LINK MAINT 019D 019D RR
    LINK MAINT 019E 019E RR
    

To create the virtual server, copy the directory entry above into a text file (dirEntry.txt) and issue the following command (the full file path must be given):

    # mkvm gpok6 /tmp/dirEntry.txt

The new virtual server should be attached to the same VSWITCH as the one used by the hardware control point (in our case, VSW2) and have the same network adapter address (in our case, 0800) for the interface given in step 11 (in our case, eth1).

  17. Add the new node to DHCP

    # makedhcp -a
    
  18. Set up the noderes and nodetype tables. The values of nodetype.os and nodetype.profile were determined in step 11, where the statelite image was created.

    # chtab node=xxx noderes.netboot=zvm nodetype.os=yyy nodetype.arch=s390x nodetype.profile=zzz
    

Substitute xxx with the node name, yyy with the operating system, and zzz with the profile name.

In our example, we used the following:

    # chtab node=gpok6 noderes.netboot=zvm nodetype.os=rhel5.4 nodetype.arch=s390x nodetype.profile=compute
  19. Prepare the node(s) to boot from the statelite image
    # nodeset xxx statelite
    

Substitute xxx with the node name.

  20. Boot the statelite node(s). During this process, symbolic links are made to the files listed in the litefile table.
    # rnetboot xxx ipl=00c
    

Substitute xxx with the node name.

Caution: Do not try to boot more than 20 nodes at one time, or the xCAT MN will be bogged down as all the nodes try to access the NFS server at once. Boot 20 or fewer at a time and wait until those nodes are pingable before booting the next batch.

  21. Check the boot status of the node(s) by pinging them:
    # pping xxx
    

Substitute xxx with the node name. If the node returns a ping, then it is fully booted and you can start using it.

  22. Clone this node as many times as needed to reach the number of statelite nodes you desire. Refer to the Cloning Virtual Servers section above. In order to clone, the source statelite node must be online and have SSH keys set up. Once you have completed cloning, you will have to repeat steps 17 to 20 for all the cloned nodes.

SUSE Linux Enterprise Server

If you have SUSE Linux, follow the instructions below.

  1. Log on to the xCAT MN as root using a PuTTY terminal (if not already)
  2. Edit /etc/exports to export the /install directory. It should contain these two entries:

    /install *(rw,no_root_squash,sync,no_subtree_check)
    /lite/state *(rw,no_root_squash,sync,no_subtree_check)
    
  3. Restart the NFS server

    # service nfsserver restart
    
  4. Check that the NFS server is running

    # rpcinfo -p
    

Make sure nfs is listed, e.g.

    100003 2 tcp 2049 nfs
    100003 3 tcp 2049 nfs
    100003 4 tcp 2049 nfs
    100003 2 udp 2049 nfs
    100003 3 udp 2049 nfs
    100003 4 udp 2049 nfs
  5. Edit the litefile table. This table specifies which files should be kept persistent across reboots. By default, all files are kept under tmpfs, unless a persistent, ro, or link option is specified. Refer to the litefile table description for more details.
    # tabedit litefile
    

Copy the following defaults into the litefile table. This is the minimal list of files you need.

    #image,file,options,comments,disable
    "ALL","/etc/lvm/",,,
    "ALL","/etc/mtab","link",,
    "ALL","/etc/ntp.conf",,,
    "ALL","/etc/ntp.conf.org",,,
    "ALL","/etc/resolv.conf",,,
    "ALL","/etc/ssh/","persistent",,
    "ALL","/etc/sysconfig/",,,
    "ALL","/etc/syslog-ng/",,,
    "ALL","/tmp/",,,
    "ALL","/var/",,,
    "ALL","/etc/yp.conf",,,
    "ALL","/etc/fstab",,,
    "ALL","/opt/xcat/",,,
    "ALL","/xcatpost/",,,
    "ALL","/root/.ssh/",,,
  6. Edit the litetree table. This table controls where the files specified in the litefile table come from.

    # tabedit litetree
    
  7. Copy the following into the litetree table. You will need to determine the Linux distribution you want. In our example, SLES 11 SP1 is used.

    #priority,image,directory,comments,disable
    "1.0",,"10.1.100.1:/install/netboot/sles11sp1/s390x/compute",,
    
  8. Edit the statelite table. This table controls where the permanent files are kept.

    # tabedit statelite
    

Copy the following into the statelite table. You will need to determine the statelite node range and the IP address of the xCAT MN. In our example, the node range is all and the IP address is 10.1.100.1.

    #node,image,statemnt,comments,disable
    "all",,"10.1.100.1:/lite/state",,
  9. Create the persistent directory

    # mkdir -p /lite/state
    
  10. Ensure policies are set up correctly. When a node boots up, it queries the xCAT database to get the lite-files and the lite-tree. In order for this to work, the commands must be set in the policy table to allow nodes to request them. (This should already be done automatically when xCAT was installed)

    # chtab priority=4.7 policy.commands=litefile policy.rule=allow
    # chtab priority=4.8 policy.commands=litetree policy.rule=allow
    
  11. Download and copy the packages from the Linux distro media into /install (if not already)

    # copycds -n xxx -a s390x /install/yyy.iso
    

Substitute xxx with the distribution name and yyy with the ISO name.

For example, if you have a SLES 11 SP1 ISO:

    # copycds -n sles11sp1 -a s390x /install/SLES-11-SP1-DVD-s390x-GMC3-DVD1.iso
  12. Create a list of packages that should be installed onto the statelite image. You should start with the base packages in the compute template and, if desired, add more packages by editing the .pkglist.

    # mkdir -p /install/custom/netboot/sles
    # cp /opt/xcat/share/xcat/netboot/sles/compute.sles11.s390x.pkglist /install/custom/netboot/sles
    
  13. Create the statelite image

    # genimage -i eth1 -n qeth -o sles11sp1 -p compute
    OS: sles11sp1
    Profile: compute
    Interface: eth1
    Network drivers: qeth
    Do you need to set up other interfaces? [y/n] n
    Which kernel do you want to use? [default] [Enter]
    

This command creates a SLES 11 SP1 image with an eth1 interface, the qeth network driver, and the compute profile. The interface used must match the xCAT MN interface that DHCP listens on. The genimage command creates an image under /install/netboot/sles11sp1/s390x/compute/rootimg. It also creates a ramdisk and kernel that are used to boot the statelite node.

  14. Modify the statelite image by creating symbolic links for the files listed in the litefile table

    # liteimg -o sles11sp1 -a s390x -p compute
    going to modify /install/netboot/sles11sp1/s390x/compute/rootimg
    creating /install/netboot/sles11sp1/s390x/compute/rootimg/.statelite
    
  15. Create the statelite node definition.
    For our example, we will create a new node (gpok6) with a user ID (LNX6) that is managed by our zHCP (gpok2). You will need to substitute the node names, user IDs, and zHCP name with appropriate values.

    # mkdef -t node -o gpok6 userid=LNX6 hcp=gpok2.endicott.ibm.com mgt=zvm groups=all
    
  16. Update /etc/hosts

    # makehosts
    
  17. Update DNS

    # makedns
    
  18. Create the new virtual server using the desired directory entry. For our example, we used the following:

    USER LNX6 PWD 512M 1G G
    COMMAND SET VSWITCH VSW2 GRANT LNX6
    CPU 00 BASE
    CPU 01
    IPL CMS
    MACHINE ESA 4
    CONSOLE 0009 3215 T
    NICDEF 0800 TYPE QDIO LAN SYSTEM VSW2
    SPOOL 000C 2540 READER *
    SPOOL 000D 2540 PUNCH A
    SPOOL 000E 1403 A
    LINK MAINT 0190 0190 RR
    LINK MAINT 019D 019D RR
    LINK MAINT 019E 019E RR
    

To create the virtual server, copy the directory entry above into a text file (dirEntry.txt) and issue the following command (the full file path must be given):

    # mkvm gpok6 /tmp/dirEntry.txt

The new virtual server should be attached to the same VSWITCH as the one used by the hardware control point (in our case, VSW2) and have the same network adapter address (in our case, 0800) for the interface given in step 13 (in our case, eth1).

  19. Add the new node to DHCP

    # makedhcp -a
    
  20. Set up the noderes and nodetype tables. The values of nodetype.os and nodetype.profile were determined in step 13, where the statelite image was created.

    # chtab node=xxx noderes.netboot=zvm nodetype.os=yyy nodetype.arch=s390x nodetype.profile=zzz
    

Substitute xxx with the node name, yyy with the operating system, and zzz with the profile name.

In our example, we used the following:

    # chtab node=gpok6 noderes.netboot=zvm nodetype.os=sles11sp1 nodetype.arch=s390x nodetype.profile=compute
  21. Prepare the node(s) to boot from the statelite image
    # nodeset xxx statelite
    

where xxx is the node name.

  22. Boot the statelite node(s). During this process, symbolic links are made to the files listed in the litefile table.
    # rnetboot xxx ipl=00c
    

where xxx is the node name.

Caution: Do not try to boot more than 20 nodes at one time, or the xCAT MN will be bogged down as all the nodes try to access the NFS server at once. Boot 20 or fewer at a time and wait until those nodes are pingable before booting the next batch.

  23. Check the boot status of the nodes by pinging them:
    # pping xxx
    

Substitute xxx with the node name. If the node returns a ping, then it is fully booted and you can start using it.

  24. Clone this node as many times as needed to reach the number of statelite nodes you desire. Refer to the Cloning Virtual Servers section above. In order to clone, the source statelite node must be online and have SSH keys set up. Once you have completed cloning, you will have to repeat steps 19 to 22 for all the cloned nodes.

Updating Linux

This section details how to update the Linux operating system.

  1. Download and extract the ISO into the xCAT install tree /install (if not already)
    # copycds -n xxx -a s390x /install/yyy.iso
    

Substitute xxx with the distribution name and yyy with the ISO name.

For example, if you have a SUSE Linux Enterprise Server 10 SP3 ISO:

    # copycds -n sles10sp3 -a s390x /install/SLES-10-SP3-DVD-s390x-DVD1.iso
    Copying media to /install/sles10sp3/s390x/1
    Media copy operation successful

or if you have a Red Hat Enterprise Linux 5.4 ISO:

    # copycds -n rhel5.4 -a s390x /install/RHEL5.4-Server-20090819.0-s390x-DVD.iso
    Copying media to /install/rhel5.4/s390x
    Media copy operation successful
  2. Update the node
    # updatenode xxx -o yyy
    

Substitute xxx with the node name and yyy with the operating system version.

For example, if you want to update gpok5 to RHEL5.4 (assuming gpok5 has RHEL 5.3):

    # updatenode gpok5 -o rhel5.4

The command requires the node to be online and takes several minutes to complete. You can only update to the next release; for example, you can update RHEL5.3 to RHEL5.4, but you cannot skip releases (e.g. RHEL5.3 directly to RHEL5.5).

Warning: You cannot update SLES10.3 to SLES11. There is a bug in rug where you cannot add a repository/service.

Limitations

This section highlights the limitations of xCAT on z/VM and Linux on System z.

  1. xCAT is only supported on z/VM 5.4 or newer.
  2. zHCP is only supported on RHEL 5.4 or newer, and SLES 10 SP2 or newer.
  3. The default autoyast and kickstart templates available in xCAT were tested on SLES 10.2/10.3/11/11.1 and RHEL 5.3/5.4/5.5/6.0/6.1.
  4. Cloning nodes with LVM volumes is supported; however, cloning is not supported when the root file system is on an LVM volume.
  5. CP Flashcopy is only supported on ECKD volumes. These volumes must be on the same storage facility.
  6. Statelite is only supported on SLES 11 or newer, and RHEL 5.4 or newer.
  7. Nodes that the zHCP manages must have the Linux VMCP module.
  8. A layer 2 VSWITCH is required for DHCP.
  9. In order for the xCAT MN to manage across multiple LPARs and CECs, you must use a layer 2 VSWITCH. The network hardware must be configured in such a way that these VSWITCHes can communicate across multiple LPARs and CECs.

Appendix A: Setting Up a Second Network

This section details how to set up a second network based on a layer 2 VSWITCH.

Red Hat Enterprise Linux

SSH to the desired Linux system where you want to set up the private network. A network script must be added under /etc/sysconfig/network-scripts/ to let the system know about the new interface, and a qeth group must be created under /sys/bus/ccwgroup/drivers/qeth/group.

In the following example, we will configure an ethernet interface (eth1) for a layer 2 VSWITCH (VSW2) attached to 0800. We will assume there is an existing ethernet interface (eth0) for a network card attached to 0600.

Copy the settings from the existing network script /etc/sysconfig/network-scripts/ifcfg-eth0.

# cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth1

Edit the network settings.

# vi /etc/sysconfig/network-scripts/ifcfg-eth1

It should look similar to the following:

# IBM QETH
DEVICE=eth1
ARP=no
BOOTPROTO=static
BROADCAST=10.1.100.255
IPADDR=10.1.100.1
IPV6INIT=yes
IPV6_AUTOCONF=yes
MTU=1500
NETMASK=255.255.255.0
NETTYPE=qeth
NETWORK=10.1.100.0
ONBOOT=yes
PORTNAME=PORT800
OPTIONS="layer2=1"
SUBCHANNELS=0.0.0800,0.0.0801,0.0.0802

You need to substitute the broadcast, IP address, netmask, network, port name, and subchannels with appropriate values.

Load the qeth driver

# modprobe qeth

Create a qeth group device

# echo 0.0.0800,0.0.0801,0.0.0802 > /sys/bus/ccwgroup/drivers/qeth/group

Declare the qeth group device as Layer 2

# echo 1 > /sys/bus/ccwgroup/drivers/qeth/0.0.0800/layer2

Bring the device back online (you need to reset the device after each reboot)

# echo 1 > /sys/bus/ccwgroup/drivers/qeth/0.0.0800/online

Verify the state of the device (1 = online)

# cat /sys/bus/ccwgroup/drivers/qeth/0.0.0800/online

Check to see what interface name was assigned to the device

# cat /sys/bus/ccwgroup/drivers/qeth/0.0.0800/if_name

A qeth device requires an alias definition in /etc/modprobe.conf. Edit this file and add an alias for your interface

# vi /etc/modprobe.conf


alias eth0 qeth
alias eth1 qeth
options dasd_mod dasd=0.0.0100,0.0.0103,0.0.0300,0.0.0301

Start the new interface

# ifup eth1
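
Because the qeth group must be recreated after each reboot (as noted above), one hedged approach is to append the setup commands to /etc/rc.d/rc.local so they run automatically at boot:

# cat >> /etc/rc.d/rc.local <<'EOF'
echo 0.0.0800,0.0.0801,0.0.0802 > /sys/bus/ccwgroup/drivers/qeth/group
echo 1 > /sys/bus/ccwgroup/drivers/qeth/0.0.0800/layer2
echo 1 > /sys/bus/ccwgroup/drivers/qeth/0.0.0800/online
EOF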

SUSE Linux Enterprise Server 10

SSH to the desired Linux system where you want to set up the private network. Two configuration files must be added under /etc/sysconfig/ to let the system know about the new interface, one for hardware and one for network settings.

In the following example, we will configure an ethernet interface (eth1) for a layer 2 VSWITCH (VSW2) attached to 0800. We will assume there is an existing ethernet interface (eth0) for a network card attached to 0600.

Copy the hardware settings from the existing configuration file /etc/sysconfig/hardware/hwcfg-qeth-bus-ccw-0.0.0600. Both interfaces use the qdio/qeth drivers; therefore, the configuration files can be identical except for the device addresses. The existing file is copied to specify the new NIC; the only change needed is from the 060X values to 080X.

# cd /etc/sysconfig/hardware/

Create the new hardware settings file by substituting the addresses.

# sed -e 's/060/080/g' hwcfg-qeth-bus-ccw-0.0.0600 > hwcfg-qeth-bus-ccw-0.0.0800

It should look similar to the following:

STARTMODE="auto"
MODULE="qeth"
MODULE_OPTIONS=""
MODULE_UNLOAD="yes"
SCRIPTUP="hwup-ccw"
SCRIPTUP_ccw="hwup-ccw"
SCRIPTUP_ccwgroup="hwup-qeth"
SCRIPTDOWN="hwdown-ccw"
CCW_CHAN_IDS="0.0.0800 0.0.0801 0.0.0802"
CCW_CHAN_NUM="3"
CCW_CHAN_MODE="OSAPORT"
QETH_LAYER2_SUPPORT="1"

Copy the network settings from the existing configuration file /etc/sysconfig/network/ifcfg-qeth-bus-ccw-0.0.0600.

# cd /etc/sysconfig/network
# cp ifcfg-qeth-bus-ccw-0.0.0600 ifcfg-qeth-bus-ccw-0.0.0800

Edit the network settings.

# vi ifcfg-qeth-bus-ccw-0.0.0800

It should look similar to the following:

BOOTPROTO="static"
UNIQUE=""
STARTMODE="onboot"
IPADDR="10.1.100.1"
NETMASK="255.255.255.0"
NETWORK="10.1.100.0"
BROADCAST="10.1.100.255"
_nm_name='qeth-bus-ccw-0.0.0800'

You need to substitute the broadcast, IP address, netmask, and network with appropriate values.

Reboot the virtual server to have the changes take effect.

# reboot
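
If you prefer to try activating the new device without a full reboot, the SLES 10 hwup/ifup scripts may work (a hedged alternative; the reboot above is the documented path):

# hwup qeth-bus-ccw-0.0.0800
# ifup qeth-bus-ccw-0.0.0800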

SUSE Linux Enterprise Server 11

SSH to the desired Linux system where you want to set up the private network. A configuration file must be added under /etc/sysconfig/network and /etc/udev/rules.d to let the system know about the new interface.

In the following example, we will configure an ethernet interface (eth1) for a layer 2 VSWITCH (VSW2) attached to 0800. We will assume there is an existing ethernet interface (eth0) for a network card attached to 0600.

Copy the hardware settings from the existing udev rules file /etc/udev/rules.d/51-qeth-0.0.0600.rules. Both interfaces use the qdio/qeth drivers; therefore, the configuration files can be identical except for the device addresses. The existing file is copied to specify the new NIC; the only change needed is from the 060X values to 080X.

# sed -e 's/060/080/g' /etc/udev/rules.d/51-qeth-0.0.0600.rules > /etc/udev/rules.d/51-qeth-0.0.0800.rules

Edit the udev rules

# vi /etc/udev/rules.d/51-qeth-0.0.0800.rules

It should look similar to the following:

# Configure qeth device at 0.0.0800/0.0.0801/0.0.0802
ACTION=="add", SUBSYSTEM=="drivers", KERNEL=="qeth", IMPORT{program}="collect 0.0.0800 %k 0.0.0800 0.0.0801 0.0.0802 qeth"
ACTION=="add", SUBSYSTEM=="ccw", KERNEL=="0.0.0800", IMPORT{program}="collect 0.0.0800 %k 0.0.0800 0.0.0801 0.0.0802 qeth"
ACTION=="add", SUBSYSTEM=="ccw", KERNEL=="0.0.0801", IMPORT{program}="collect 0.0.0800 %k 0.0.0800 0.0.0801 0.0.0802 qeth"
ACTION=="add", SUBSYSTEM=="ccw", KERNEL=="0.0.0802", IMPORT{program}="collect 0.0.0800 %k 0.0.0800 0.0.0801 0.0.0802 qeth" TEST=="[ccwgroup/0.0.0800]", GOTO="qeth-0.0.0800-end"
ACTION=="add", SUBSYSTEM=="ccw", ENV{COLLECT_0.0.0800}=="0", ATTR{[drivers/ccwgroup:qeth]group}="0.0.0800,0.0.0801,0.0.0802"
ACTION=="add", SUBSYSTEM=="drivers", KERNEL=="qeth", ENV{COLLECT_0.0.0800}=="0", ATTR{[drivers/ccwgroup:qeth]group}="0.0.0800,0.0.0801,0.0.0802" LABEL="qeth-0.0.0800-end"
ACTION=="add", SUBSYSTEM=="ccwgroup", KERNEL=="0.0.0800", ATTR{portname}="OSAPORT"
ACTION=="add", SUBSYSTEM=="ccwgroup", KERNEL=="0.0.0800", ATTR{portno}="0"
ACTION=="add", SUBSYSTEM=="ccwgroup", KERNEL=="0.0.0800", ATTR{layer2}="1"
ACTION=="add", SUBSYSTEM=="ccwgroup", KERNEL=="0.0.0800", ATTR{online}="1"

You must also enable layer2 for the device. Take note of ATTR{layer2}="1".

Copy the network settings from the existing network /etc/sysconfig/network/ifcfg-eth0.

# cp /etc/sysconfig/network/ifcfg-eth0 /etc/sysconfig/network/ifcfg-eth1

Edit the network settings.

# vi /etc/sysconfig/network/ifcfg-eth1

It should look similar to the following:

BOOTPROTO='static'
IPADDR='10.1.100.1'
BROADCAST='10.1.100.255'
NETMASK='255.255.255.0'
NETWORK='10.1.100.0'
STARTMODE='onboot'
NAME='OSA Express Network card (0.0.0800)'

Reboot the virtual server to have the changes take effect.

# reboot

Appendix B: Customizing Autoyast and Kickstart

This section details how to customize the autoyast and kickstart templates. It serves only as a quick guide to configuring the templates; it is beyond the scope of this document to go into detail on configuring autoyast and kickstart. See the links provided below for more information.

Autoyast and kickstart allow you to customize a Linux system based on a template. Instead of going through various panels to manually customize your Linux system during installation, you can use autoyast or kickstart to configure a vanilla Linux system faster and more effectively.

Red Hat Enterprise Linux

  1. Base your customization on the default template (compute.rhel5.s390x.tmpl) in /opt/xcat/share/xcat/install/rh/. This template is configured to set up the network for you using DHCP.
  2. Determine the number of disks (ECKD or SCSI) your vanilla system will have and the mount points for each disk. There are no extra steps needed to specify the disk type.
  3. Copy the default template /opt/xcat/share/xcat/install/rh/xxx.tmpl, where xxx is the template name, into /install/custom/install/rh/. For our example, we will use compute.rhel5.s390x.tmpl:
    # cp /opt/xcat/share/xcat/install/rh/compute.rhel5.s390x.tmpl /install/custom/install/rh/custom.rhel5.s390x.tmpl
    

The default templates are configured to use one 3390-mod9 with the root filesystem (/) mounted, install the base software packages, and use DHCP. You should customize only the disks, partitioning, and installed packages, and leave the network configuration alone.

  4. Add this template to the osimage table. For our example, we customized the kickstart template for RHEL 5.4 and added it to the osimage table using:

    # chtab imagename=rhel5.4-s390x-install-custom osimage.profile=custom osimage.imagetype=linux osimage.provmethod=install osimage.osname=Linux osimage.osvers=rhel5.4
    
  5. Add the disk and mount point to the template using the following format:

    clearpart --initlabel --drives=dasda,dasdb
    part / --fstype ext3 --size=100 --grow --ondisk=dasda
    part /usr --fstype ext3 --size=100 --grow --ondisk=dasdb
    

In the example above, a second disk with the device name dasdb is added. The disk will be mounted at /usr and will have an ext3 file system.
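
If you also want a swap partition, a hedged addition in the same format (assuming dasda has room for it):

    part swap --size=512 --ondisk=dasda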

  6. Add the software you need to the %packages section.

For more information, refer to Red Hat Enterprise Linux Installation Guide.

SUSE Linux Enterprise Server

An autoyast generator is available for SLES 10 and SLES 11. It helps create an autoyast template with the desired DASD, partition layout, and software. For more advanced configurations (e.g. LDAP), the autoyast template has to be edited manually.

To generate an autoyast template:

  1. Run the mkay4z script under /opt/xcat/share/xcat/tools
    # /opt/xcat/share/xcat/tools/mkay4z
    
    Creating autoyast template for Linux on System z...
    Select SUSE Linux Enterprise Server version? (10 or 11) 11
    Where do you want to place the template? (e.g. /tmp/custom.sles11.s390x.tmpl) /tmp/custom.sles11.s390x.tmpl
      Do you want to use DHCP? (yes or no) y
    
    CONFIGURING DASD...
    Select from the following options:
      (1) Add DASD
      (2) Remove DASD
      (3) Show DASD configuration
      (4) Go to next step
    1
      What is the virtual address? 100
      What is the type? (eckd or fba) eckd
    Select from the following options:
      (1) Add DASD
      (2) Remove DASD
      (3) Show DASD configuration
      (4) Go to next step
    1
      What is the virtual address? 101
      What is the type? (eckd or fba) eckd
    Select from the following options:
      (1) Add DASD
      (2) Remove DASD
      (3) Show DASD configuration
      (4) Go to next step
    4
    
    CONFIGURING PARTITIONS...
    Select a device from the list below to create a new partition.
    #  |   Device   |   Address   |   Type
    
    
    0   /dev/dasda   0.0.0100      dasd_eckd_mod
    1   /dev/dasdb   0.0.0101      dasd_eckd_mod
    Which device do you want to configure? (See list above)
    Leave blank and hit Enter to go to next step.
    0
      What is the filesystem for /dev/dasda? (ext2, ext3, ext4, or swap) ext4
      What is the partition size? (e.g. 1g, 2g, or max) max
      Do you want to assign it to an LVM group? (yes or no) n
      What is the mount point? /
    Which device do you want to configure? (See list above)
    Leave blank and hit Enter to go to next step.
    1
      What is the filesystem for /dev/dasdb? (ext2, ext3, ext4, or swap) ext4
      What is the partition size? (e.g. 1g, 2g, or max) max
      Do you want to assign it to an LVM group? (yes or no) n
      What is the mount point? /opt
    Which device do you want to configure? (See list above)
    Leave blank and hit Enter to go to next step.
    
    Done! See autoyast template under /tmp/custom.sles11.s390x.tmpl
    

The script will ask you several questions concerning the configuration in the autoyast template. It is designed to help you configure the disks, partitions, and networking in the autoyast template for xCAT. It is important to note that the template name is significant and must follow the format: <profile>.<osvers>.s390x.tmpl (e.g. custom.sles11.s390x.tmpl for the custom profile on SLES 11). For more advanced configurations, you should manually edit the autoyast template.

  2. Place the custom template generated by the mkay4z script under /install/custom/install/sles/.

    # mv /tmp/custom.sles11.s390x.tmpl /install/custom/install/sles/
    
  3. Add this template to the osimage table. For our example, we customized the autoyast template for SLES 11 SP1 and added it to the osimage table using:

    # chtab imagename=sles11sp1-s390x-install-custom osimage.profile=custom osimage.imagetype=linux osimage.provmethod=install osimage.osname=Linux osimage.osvers=sles11sp1
    

For more information, refer to openSUSE AutoYast.

Appendix C: Setting up Network Address Translation

This section details how to set up network address translation (NAT) on a Linux host. NAT supports both layer 2 and layer 3 network devices. The setup below uses iptables and port forwarding to allow hosts on a private network to gain access to a public network. It is important to note that the Linux host must have both an external (public) and an internal (private) interface; NAT routes packets between the public and private networks using iptables. Note also that a host on the private network cannot be reached from the public network because it does not have a unique public IP address. However, this can be solved by assigning a unique port number on the Linux host (set up with NAT) so that packets sent to this port are forwarded to the private host.

Red Hat Enterprise Linux

This section details how to set up NAT on Red Hat Enterprise Linux. It is assumed that the Linux host (10.1.100.1) already has both external and internal interfaces. The external interface (eth0) is on the 9.10.11.0/24 network. The internal interface (eth1) is on the 10.1.100.0/24 network.

  1. Allow forwarding for the internal interface (eth1)

    # iptables --append FORWARD --in-interface eth1 -j ACCEPT
    
  2. Allow hosts on the private network to mask requests using the public IP address of the Linux host

    # iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE
    
  3. Edit /etc/sysctl.conf and enable IP forwarding with the following setting

    net.ipv4.ip_forward = 1
    
  4. Update the system configuration to enable IP forwarding

    # sysctl -p /etc/sysctl.conf
    net.ipv4.ip_forward = 1
    net.ipv4.conf.default.rp_filter = 1
    net.ipv4.conf.default.accept_source_route = 0
    
  5. Allow appropriate services through the firewall. For example, allow SSH (port 22) through the firewall.

    # iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
    
  6. If you want a host (10.1.100.123) on the private network to be accessed publicly via SSH (port 22), you can forward the SSH request with the following commands

    # iptables -A PREROUTING -i eth+ -p tcp -m tcp --dport 2123 -j DNAT --to-destination 10.1.100.123:22
    # iptables -A FORWARD -d 10.1.100.123/32 -i eth+ -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
    

Any request coming through port 2123 will be forwarded to the host on port 22 on the private network. It is important to note that the port number used must be free and not in use by any other service.
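
Note that iptables rules entered at the command line do not survive a reboot. On RHEL you can persist the current rule set with:

    # service iptables save
    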

  7. Publicly access the host on the private network

    # ssh root@10.1.100.1 -p 2123
    
  8. Verify the NAT configuration by logging into a host (10.1.100.123) on the private network and accessing an external site from that host

    # ifconfig
    eth0      Link encap:Ethernet  HWaddr 02:00:01:FF:FE:FD  
              inet addr:10.1.100.123  Bcast:10.1.100.255  Mask:255.255.255.0
              inet6 addr: fd55:faaf:e1ab:263:0:6ff:feff:fefd/64 Scope:Global
              inet6 addr: fe80::6ff:feff:fefd/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1492  Metric:1
              RX packets:261657 errors:0 dropped:0 overruns:0 frame:0
              TX packets:314748 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000 
              RX bytes:103074088 (98.2 Mb)  TX bytes:27570328 (26.2 Mb)
    
    lo        Link encap:Local Loopback  
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:29 errors:0 dropped:0 overruns:0 frame:0
              TX packets:29 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0 
              RX bytes:7060 (6.8 Kb)  TX bytes:7060 (6.8 Kb)
    
    # ping -c 4 sourceforge.net
    PING sourceforge.net (216.34.181.60) 56(84) bytes of data.
    64 bytes from ch3.sourceforge.net (216.34.181.60): icmp_seq=1 ttl=236 time=30.2 ms
    64 bytes from ch3.sourceforge.net (216.34.181.60): icmp_seq=2 ttl=236 time=30.6 ms
    64 bytes from ch3.sourceforge.net (216.34.181.60): icmp_seq=3 ttl=236 time=30.0 ms
    64 bytes from ch3.sourceforge.net (216.34.181.60): icmp_seq=4 ttl=236 time=29.9 ms
    
    --- sourceforge.net ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3004ms
    rtt min/avg/max/mdev = 29.908/30.224/30.666/0.357 ms
    

If the host cannot reach the external site, make sure that the gateway for the default route points to the Linux host (10.1.100.1/gpok1) you set up with NAT.

    # route
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    10.1.100.0      *               255.255.255.0   U     0      0        0 eth0
    link-local      *               255.255.0.0     U     0      0        0 eth0
    loopback        *               255.0.0.0       U     0      0        0 lo
    default         gpok1.endicott. 0.0.0.0         UG    0      0        0 eth0

For more information, refer to Red Hat Enterprise Linux 6 Security Guide.

SUSE Linux Enterprise Server

This section details how to set up NAT on SUSE Linux Enterprise Server. It is assumed that the Linux host (10.1.100.1) already has both external and internal interfaces. The external interface (eth0) is on the 9.10.11.0/24 network. The internal interface (eth1) is on the 10.1.100.0/24 network.

  1. Enable the SuSEfirewall2 boot scripts

    # chkconfig SuSEfirewall2_init on
    # chkconfig SuSEfirewall2_setup on
    
  2. Edit /etc/sysconfig/SuSEfirewall2 such that it contains the following configurations

    # Space separated interfaces that point to the internet
    FW_DEV_EXT="any eth0"
    
    # Space separated interfaces that point to the internal network
    FW_DEV_INT="eth1"
    
    # Activate routing between internet and internal network
    FW_ROUTE="yes"
    
    # Masquerade internal networks to the outside
    FW_MASQUERADE="yes"
    
    # Interfaces to masquerade on
    FW_MASQ_DEV="zone:ext"
    
    # Unrestricted access to the internet
    FW_MASQ_NETS="0/0"
    
    # Any internal user can connect any service on the firewall
    FW_PROTECT_FROM_INT="no"
    
    # Services on the firewall that should be accessible from untrusted networks
    FW_CONFIGURATIONS_EXT="apache2-ssl bind dhcp-server sshd vsftpd xorg-x11-server"
    
    # Services accessed from the internet should be allowed to masqueraded servers (on the internal network)
    FW_FORWARD_MASQ="0/0,10.1.100.123,tcp,2123,22"
    
    # Allow the firewall to reply to icmp echo requests
    FW_ALLOW_PING_FW="yes"
    

If you want a host (10.1.100.123) on the private network to be accessed publicly via SSH (port 22), you can forward the SSH request with the FW_FORWARD_MASQ option. Any request coming through port 2123 will be forwarded to the host on port 22 on the private network. It is important to note that the port number used must be free and not in use by any other service.

It is important to note that the interfaces that point to the internet (FW_DEV_EXT) and the interfaces that point to the internal network (FW_DEV_INT) must be set correctly. If they are not set correctly, you will have problems provisioning with xCAT.

  3. Restart SuSEfirewall2 and load the configuration

    # SuSEfirewall2 stop; SuSEfirewall2 start
    SuSEfirewall2: batch committing...
    SuSEfirewall2: Firewall rules unloaded.
    SuSEfirewall2: Setting up rules from /etc/sysconfig/SuSEfirewall2 ...
    SuSEfirewall2: batch committing...
    SuSEfirewall2: Firewall rules successfully set
    
  4. Verify the NAT configuration by logging into a host (10.1.100.123) on the private network and accessing an external site from that host

    # ifconfig
    eth0      Link encap:Ethernet  HWaddr 02:00:01:FF:FE:FD  
              inet addr:10.1.100.123  Bcast:10.1.100.255  Mask:255.255.255.0
              inet6 addr: fd55:faaf:e1ab:263:0:6ff:feff:fefd/64 Scope:Global
              inet6 addr: fe80::6ff:feff:fefd/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1492  Metric:1
              RX packets:261657 errors:0 dropped:0 overruns:0 frame:0
              TX packets:314748 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000 
              RX bytes:103074088 (98.2 Mb)  TX bytes:27570328 (26.2 Mb)
    
    lo        Link encap:Local Loopback  
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:29 errors:0 dropped:0 overruns:0 frame:0
              TX packets:29 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0 
              RX bytes:7060 (6.8 Kb)  TX bytes:7060 (6.8 Kb)
    
    # ping -c 4 sourceforge.net
    PING sourceforge.net (216.34.181.60) 56(84) bytes of data.
    64 bytes from ch3.sourceforge.net (216.34.181.60): icmp_seq=1 ttl=236 time=30.1 ms
    64 bytes from ch3.sourceforge.net (216.34.181.60): icmp_seq=2 ttl=236 time=30.3 ms
    64 bytes from ch3.sourceforge.net (216.34.181.60): icmp_seq=3 ttl=236 time=31.2 ms
    64 bytes from ch3.sourceforge.net (216.34.181.60): icmp_seq=4 ttl=236 time=30.3 ms
    
    --- sourceforge.net ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3004ms
    rtt min/avg/max/mdev = 30.134/30.514/31.278/0.447 ms
    

If the host cannot reach the external site, make sure that the gateway for the default route goes to the Linux host (10.1.100.1/gpok1) you had setup with NAT.

    # route
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    10.1.100.0      *               255.255.255.0   U     0      0        0 eth0
    link-local      *               255.255.0.0     U     0      0        0 eth0
    loopback        *               255.0.0.0       U     0      0        0 lo
    default         gpok1.endicott. 0.0.0.0         UG    0      0        0 eth0

For more information, refer to openSUSE Security.

