RHEV stands for 'Red Hat Enterprise Virtualization', a virtualization solution produced by Red Hat. RHEV is based on KVM technology, combining an optimized KVM hypervisor host with a powerful management portal, so the virtualization infrastructure can be managed easily and efficiently.
RHEV consists of two parts:
A RHEV environment consists of the following components:
The basic process to set up a datacenter:
Perform the steps outlined in this document after setting up your xCAT management node and defining nodes in your cluster using [XCAT_iDataPlex_Cluster_Quick_Start].
The rhev-m server can be installed on rhels6.2 or above. There are 286 packages that need to be installed (including dependencies, about 600MB). Two approaches can be used to install the rhev-m server:
Rhevm can be installed on the xCAT management node or on a separate server.
If the target server can access RHN (or an RHN satellite), installing rhevm using RHN is recommended. Register the rhevm server with RHN and add the following channels for the repositories that are needed for rhev-m.
rhn_register
rhn-channel --add --channel=rhel-x86_64-server-6-rhevm-3
rhn-channel --add --channel=jbappplatform-5-x86_64-server-6-rpm
rhn-channel --add --channel=rhel-x86_64-server-supplementary-6
If you plan to install rhevm from local media, the following repositories need to be created manually on the target server (all the packages can be downloaded from Red Hat):
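As an illustration only, a local repository can be set up by copying the downloaded rpms into a directory on the target server, running createrepo against it, and pointing a yum repo file at it. The directory path and repo id below are placeholders, not official channel names:

# placeholder path; substitute the directory where the downloaded rpms were copied
createrepo /path/to/rhevm-rpms
cat <<EOF > /etc/yum.repos.d/rhevm-local.repo
[rhevm-local]
name=Local rhev-m packages
baseurl=file:///path/to/rhevm-rpms
enabled=1
gpgcheck=0
EOF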
Remove the classpathx-jaf package which may conflict with rhev-m.
yum remove classpathx-jaf
Install rhevm package and all dependencies.
yum install rhevm
After successful installation, rhevm needs to be configured and initialized. The configuration can be done through a configuration answer file.
Create the answer file answerrhevm:
rhevm-setup --gen-answer-file=answerrhevm
The created answer file answerrhevm has content like this:
[general]
OVERRIDE_IPTABLES=no
HTTP_PORT=8080
HTTPS_PORT=8443
MAC_RANGE=xx:xx:xx:xx:xx:xx-xx:xx:xx:xx:xx:xx (A customized MAC range can be specified, but the system-generated range is recommended; otherwise rhevm may not recognize the range.)
HOST_FQDN=hostname of rhevm server (The hostname must be a FQDN. This is very important for rhevm to generate the certificates.)
AUTH_PASS=xxxx (The password of the 'admin' account used to access rhevm through the web or the REST API)
DB_PASS=xxxx (The password for rhevm to access the database)
ORG_NAME=xxx (The name of the organization)
DC_TYPE=NFS (The storage type for default datacenter)
CONFIG_NFS=no
Edit the answer file with the proper values and set up rhevm:
rhevm-setup --answer-file=answerrhevm
The rhev-m web GUI can ONLY be accessed from a client running Windows with IE. The CA root certificate needs to be installed first to start the connection.
Open the IE web browser and enter the web url: http://<FQDN of rhev manager>:8080/RHEVManager
Install the 'certificate' first and then click 'admin' to get into the admin management portal with the following account:
User name: admin
Password: <what you set for 'AUTH_PASS' attribute in the answer file 'answerrhevm' when running rhevm-setup>
For debugging when web access is not available, you can access rhevm through the REST API. See the REST API specification.
wget http://<hostname of rhevm>:8080/ca.crt
curl -X GET -H "Accept: application/xml" -u admin@internal:<password> --cacert </path/ca.crt> https://<rhevm>:8443/api
curl -X GET -H "Accept: application/xml" -u admin@internal:<password> --cacert </path/ca.crt> https://<rhevm>:8443/api/vms
There are two passwords that need to be set for rhev: one for rhev-m and the other for the host (rhev-h).
This password is used by xCAT to access rhevm. Currently, only 'admin' is supported as the user.
chtab key=rhevm passwd.username=admin passwd.password=<password>
These passwords are used for the 'root' and 'admin' accounts that will be configured on rhev-h during installation. You can access rhev-h with these passwords.
chtab key=rhevh,username=root passwd.password=<password>
chtab key=rhevh,username=admin passwd.password=<password>
Use the lsve command to display the datacenter and cluster in the environment:
lsve -t dc -m <rhevm> -o Default
lsve -t cl -m <rhevm> -o Default
Notes:
A Data Center contains Clusters and Storage Domains. A Cluster contains hosts and vms. By default, the following are automatically created for you:
If you want to change these objects, or create additional ones, see examples below.
Note: The storage domain must have the same storage type (nfs or localfs) as the data center it is part of. The storage type of the default data center 'Default' is 'nfs'. If you want to use a 'localfs' storage domain, a new data center with the 'localfs' storage type needs to be created first.
cfgve -t dc -m <rhevm> -o <datacenter name> -k nfs -c
cfgve -t dc -m <rhevm> -o <datacenter name> -k localfs -c
cfgve -t cl -m <rhevm> -o <cluster name> -d mydc -c
lsve -t dc -m <rhevm> -o <data center name>
lsve -t cl -m <rhevm> -o <cluster name>
cfgve -t dc -m <rhevm> -o <datacenter name> -r
cfgve -t cl -m <rhevm> -o <cluster name> -r
Through the RHEV manager you can create logical networks that allow RHEV hypervisors and virtual machines to communicate with each other. For each logical network, the hypervisors are connected to it and bridge it to all the VMs that are part of that logical network. For example, in the diagram below there are 2 logical networks: rhev and rhev1. Each of the hypervisors (rhevh1 and rhevh2) is physically connected to each logical network. The hypervisors provide virtual networks for the VMs, and those virtual networks are bridged to the corresponding physical networks.
The default management network 'rhevm' is created automatically during the setup of rhev-m. If you need another network for storage, data, or communication, create a new one using the examples below.
If '-d' is not specified, the network will be attached to the 'Default' datacenter.
cfgve -t nw -m <rhevm> -o <network name> -d <datacenter> -c
If you want to add the network to a specific vlan, use the '-n' flag. All the nics that are added to this network will then be assigned to this vlan:
cfgve -t nw -m <rhevm> -o <network name> -d <datacenter> -n 2 -c
The network must be attached to a cluster so that the nics of the hosts and vms in the cluster can be added to it.
cfgve -m <rhevm> -t nw -o <network name> -l <cluster> -a
lsve -t nw -m <rhevm> -o <network name>
cfgve -t nw -m <rhevm> -o <network name> -r
Display the datacenter called 'Default'
#lsve -t dc -m <rhevm> -o Default
datacenters: [Default]
description: The default Data Center
state: up
storageformat: v1
storagetype: nfs
clusters: [Default]
cpu: Intel Westmere Family
description: The default server cluster
memory_hugepage: true
memory_overcommit: 100
networks: [rhevm2]
description:
state: operational
stp: false
networks: [rhevm]
description: Management Network
state: operational
stp: false
Download the iso of rhev-h and copy it to the xCAT management node. Run copycds to generate the installation directory:
copycds rhevh-6.2-xxx.iso -n rhevh6.2 -a x86_64
Note: the flags -n and -a must be specified so that xCAT knows the type of the iso. The distro name specified by '-n' must be prefixed with 'rhev'.
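To confirm that copycds created the osimage definitions for this distro, you can list the osimage objects (the exact osimage names are an assumption here; they depend on the distro name and architecture given above):

lsdef -t osimage | grep rhevh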
When the rhev-h installation finishes, xCAT updates the installation status of the node with a command named 'rhevhupdateflag'. You need to add the following entry to the policy table to allow this command to run. For security reasons, you can remove this entry after installing (although you will need it again if you redeploy rhev-h on nodes).
mkdef -t policy 7 commands=rhevhupdateflag rule=allow
The definition of a rhev-h node is the same as for a normal node, except for the addition of the host* attributes. Follow [XCAT_iDataPlex_Cluster_Quick_Start] to create your nodes, and then add the host* attributes using chdef. An example lsdef output of a rhev-h node is:
Object name: <rhev-h node name>
objtype=node
arch=x86_64
bmc=<x.x.x.x>
cons=ipmi
groups=ipmi,all
installnic=mac
mac=<xx:xx:xx:xx:xx:xx>
mgt=ipmi
netboot=xnba
os=rhevh6.2
profile=compute
xcatmaster=<x.x.x.x>
nfsserver=<x.x.x.x>
...
Provision the host as you would a normal System x node.
nodeset <host> install
rsetboot <host> net
rpower <host> boot
After installation, the status of the <rhev-h> node should change to 'booted'.
lsdef <host> -i status
Set the attributes in the xCAT node definition that will be used to configure rhev-h.
chdef <rhev-h> hostcluster=mycluster hostinterface=mynet:eth1:static:IP:255.255.255.0
chdef <rhev-h> hostmanager=<rhevm server> hosttype=rhevh
Object name: <rhev-h node name>
hostcluster=mycluster
hostinterface=mynet:eth1:static:IP:255.255.255.0
hostmanager=<rhevm server>
hosttype=rhevh
Here's an explanation of the host* attributes. For details see the node object definition. (The specific attribute name within the hypervisor table is given in parentheses.)
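A quick way to review what is currently set for a host is to list just these attributes with lsdef; the hypervisor table (which, as noted above, backs these attributes) can also be dumped. The node name is a placeholder:

lsdef <rhev-h node name> -i hostcluster,hostinterface,hostmanager,hosttype
tabdump hypervisor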
After installing a rhev-h node, if the rhev-m settings (address and password) for the rhev-h host are correct, the rhev-h host will register itself with rhev-m automatically. Check the status of the rhev-h host:
lsvm <host>
state: pending_approval
The status of the registered host should be 'pending_approval', which means rhevm needs to be told to approve it to make it part of the datacenter:
chhypervisor <host> -a
Configure the network interfaces for rhev-h based on the hostinterface attribute. The management network 'rhevm' is created by default, and the nic that rhev-h was installed over is automatically configured by dhcp on this network. If you need to configure additional nics on the rhev-h node, set the hostinterface attribute with the nic information. The format of the hostinterface attribute is multiple sections of network:interfacename:protocol:IP:netmask:gateway, separated by '|'. For example:
chdef <rhevh-node> hostinterface='rhevm1:eth1:static:10.1.0.236:255.255.255.0:0.0.0.0|rhevm2:eth2:static:10.2.0.236:255.255.255.0:0.0.0.0'
Once the hostinterface attribute is set correctly, you can push that configuration to the rhev-h node. (You need to do this even if the hostinterface attribute was set at the time of the rhev-h node installation, because the configuration of secondary interfaces doesn't happen during rhev-h node installation.)
chhypervisor <host> -d # must deactivate the host into maintenance status first
chhypervisor <host> -n # configure the nics
chhypervisor <host> -e # re-activate the host
Power management must be configured for the rhev-h host so that rhev-m can monitor the power status of the host. This enables rhev-m to detect failed hosts and to fail over certain roles, like SPM, to another active host.
For IPMI-controlled hosts, the BMC IP, userid, and password are needed by rhevm for power management. xCAT will use the node's 'bmc', 'bmcusername' and 'bmcpassword' attributes to configure the power management. rhev-m will then use the IPMI protocol to get the power status of the host.
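If these attributes are not already set on the node, they can be set with chdef before configuring power management (the BMC address and credentials are placeholders):

chdef <host> bmc=<bmc IP> bmcusername=<bmc userid> bmcpassword=<bmc password>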
You can check the 'storage_manager' attribute of a host to see whether it holds the 'SPM' role. If the host that holds the 'SPM' role encounters a problem, power it down using 'rpower <host> off'; rhevm will then move the 'SPM' role to another host automatically.
To display the storage manager role of a host:
#lsvm <host>
storage_manager: true
To configure the power management (based on the bmc, bmcusername, and bmcpassword attributes):
chhypervisor <host> -p
rmhypervisor <host> -f
The '-f' flag deactivates the host into 'maintenance' mode before removing it.
Note: if there's only one host in the data center, you have to remove the data center first:
cfgve -m <rhevm> -t dc -o <mydc> -r
rmhypervisor <host> -r -f
A storage domain needs a host to act as its SPM (Storage Pool Manager) in order to be initialized, and it needs a data center to be attached to. So before creating a storage domain, the data center, cluster and 'SPM' host must be created first. The SPM host can be any host that has been added to the cluster.
A data center can only handle one type of storage domain, which means the SD must have the same storage type as the Data Center (nfs or localfs).
xCAT supports two types of storage domain:
nfs: The storage will be created on the nfs server.
localfs: The storage will be created on the local disk of a host.
Entries for the storage domains have to be added to the virtsd table to specify their attributes before creating them with cfgve. For example:
tabch node=sd virtsd.sdtype=data virtsd.stype=nfs virtsd.location=<nfsserver>:<nfs path> virtsd.host=<SPM-host>
tabch node=localsd virtsd.sdtype=data virtsd.stype=localfs virtsd.host=<host-for-localfs> virtsd.datacenter=mydc
virtsd.datacenter - The datacenter the storage will be attached to. The 'Default' datacenter is the default value.
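To verify the entries before running cfgve, the virtsd table can be dumped:

tabdump virtsd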
Check the data center and host before creating a storage domain:
lsve -t dc -m <rhevm> -o <datacenter name>
lsvm <host> # make sure host is in up state
cfgve -t sd -m <rhevm> -o <storage-domain> -c
Run lsve to check the result
Display the storage domain individually:
lsve -t sd -m <rhevm> -o <storage-domain>
The status of the storage domain can only be queried from the data center object:
lsve -t dc -m <rhevm> -o <datacenter name>
Remove storage domain
Remove the storage domain:
cfgve -t sd -m <rhevm> -o <storage-domain> -r
Remove the storage domain by force. This will try to deactivate the SD first and then detach it from the data center:
cfgve -t sd -m <rhevm> -o <storage-domain> -r -f
cfgve -t sd -m <rhevm> -o <storage-domain> {-a|-b}
cfgve -t sd -m <rhevm> -o <storage-domain> {-g|-s}
lsve -t dc -m <rhevm> -o Default
datacenters: [Default]
description: The default Data Center
state: up
storageformat: v1
storagetype: nfs
clusters: [Default]
cpu: Intel Westmere Family
description: The default server cluster
memory_hugepage: true
memory_overcommit: 100
storagedomains: [image]
available: 59055800320
committed: 0
ismaster: true
storage_add: ip9-114-34-211.ppd.pok.ibm.com
storage_format: v1
storage_path: /vfsimg
storage_type: nfs
type: data
used: 6442450944
storagedomains: [sd1]
available: 5368709120
committed: 5368709120
ismaster: false
storage_add: 9.114.34.226
storage_format: v1
storage_path: /wxp/vfs
storage_type: nfs
type: data
used: 47244640256
networks: [rhevm2]
description:
state: operational
stp: false
networks: [rhevm]
description: Management Network
state: operational
stp: false
Display the storage domain called 'image':
lsve -t sd -m <rhevm> -o image
storagedomains: [image]
available: 59055800320
committed: 0
ismaster: true
storage_add: ip9-114-34-211.ppd.pok.ibm.com
storage_format: v1
storage_path: /vfsimg
storage_type: nfs
type: data
used: 6442450944
To create your first vm, it is probably easiest to create a stanza file with the attributes. For example:
kvm1:
objtype=node
arch=x86_64
groups=vm,all
installnic=mac
ip=10.1.0.1
mgt=rhevm
netboot=xnba
os=rhels6.1
primarynic=mac
profile=compute
vmbootorder=network
vmcluster=mycluster
vmcpus=2:2
vmhost=rhevh1
vmmanager=rhevm
vmmemory=2G
vmnicnicmodel=virtio
vmnics=rhevm:eth0:yes|rhevm2:eth1
vmstorage=image:10G:system|image:20G:data
vmstoragemodel=virtio:cow
vmmaster=Blank
vmvirtflags=placement_affinity=migratable
vmnics - The network interfaces for the vm. Valid format: [network:interfacename:installnic]|[...]. The default value is rhevm:eth0:yes, which means eth0 is added to rhevm (the management network) and set as the install nic.
network - The name of the logical network in the datacenter;
installnic - Specifies whether this nic is the install nic. Any non-empty value means 'yes'; empty means 'no'. Only one nic can be set as the install nic.
vmnicnicmodel - The network interface type. Valid values: virtio, e1000, rtl8139, or rtl8139_virtio. Default value is: virtio.
vmstorage - Configure the disk for the vm. Valid format: [name_of_storage_domain:size_of_disk:disk_type]|[...].
name_of_storage_domain - The name of storage domain.
disk_type - The disk type. Valid values: system and data. The default type is 'system'. Only one disk can be set to the 'system' type, and a 'system' disk is also marked bootable.
vmstoragemodel - The type and format of disk interface. Valid format: disk_interface_type:disk_format.
disk_interface_type - Valid values: ide, virtio. Default value is 'virtio'.
disk_format - Valid values: cow (thin-provisioned, copy-on-write), raw (pre-allocated). Default value is 'cow'. Cow allows snapshots, with a small performance overhead; raw does not allow snapshots, but offers improved performance.
vmvirtflags - Sets the placement affinity of the vm, which determines whether it can be migrated. Valid format: [placement_affinity=xxx]. Valid values for 'xxx': migratable, user_migratable, pinned. Default value is 'migratable'.
migratable - Can be migrated by rhevm automatically.
user_migratable - Can only be migrated manually by the user.
pinned - Cannot be migrated.
Note: If you deploy a sles virtual machine, use 'e1000' as the network driver:
chdef <virtual machine> vmnicnicmodel=e1000
Note: Don't set the console attributes for the vm, since the text console is not supported at this time.
chdef kvm2 serialspeed= serialport= serialflow=
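Once the stanza file is complete, the node definition can be created by piping the stanza to mkdef (the file name kvm1.stanza is just an example for a file containing the stanza shown above):

cat kvm1.stanza | mkdef -z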
Once you have successfully created and booted a single virtual machine, you'll likely want to define many virtual machines. This is made easier by defining at a group level all of the attributes that are the same for all of your VMs:
chdef -t group vm arch=x86_64 installnic=mac mgt=rhevm netboot=xnba os=rhels6.1 primarynic=mac profile=compute vmbootorder=network vmcluster=mycluster vmmanager=rhevm vmnicnicmodel=virtio vmnics='rhevm:eth0:yes|rhevm2:eth1' vmstoragemodel=virtio:cow vmvirtflags=placement_affinity=migratable
Then for individual VMs, you only have to set a few attributes. For example:
mkdef kvm2 groups=vm,all ip=10.1.0.2 vmcpus=2:2 vmhost=rhevh1 vmmemory=2G vmstorage='image:10G:system|image:20G:data' vmmaster=Blank
makehosts hkvm1
makedns hkvm1
mkvm hkvm1
Display the attributes and status for vms.
lsvm hkvm1
Run the chvm command to modify the configuration of virtual machines. Change the node attributes with 'chdef' first and then run 'chvm'. Note: The virtual machine needs to be shut down before modifying some attributes, such as CPU.
chvm hkvm1
rmvm hkvm1
Note: there's a bug in rhev with the boot order 'network,hd'. We recommend just setting the boot order to 'network':
rsetboot hkvm1 network
nodeset hkvm1 osimage=rhels6.1-x86_64-install-compute
rpower hkvm1 boot
A RHEV vm does not support a text console (rcons), so you must use the graphical console via wvid. This requires the tigervnc rpm to be installed first if using the 'vnc' protocol. (And you probably also want the tigervnc-server rpm installed so you can view the MN desktop if you are not sitting in front of it.)
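For example, the packages can be installed on the management node with yum (assuming they are available in a configured repository):

yum install tigervnc tigervnc-server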
wvid hkvm1
rpower hkvm1 suspend
To clone a vm, you first need to make a vm template (master) from an existing vm. Then you can clone a new vm from this template.
ssh hkvm1 shutdown -P now
clonevm hkvm1 -t tpl01
While the template is being created, its status is 'locked'. You need to wait for the template status to become 'ok' before using it.
lsve -t tpl -m <rhevm> -o tpl01
All the attributes will be inherited from the template. If you'd like specific settings for some attributes, change them in the node definition before running mkvm. For example, to give the new vm an additional disk:
chdef hkvm2 vmstorage=image:10G:data
When cloning, the 'vmcluster' attribute will not be inherited from the template. You should specify the vmcluster for the node, otherwise the new vm will be added to the 'Default' cluster:
chdef hkvm2 vmcluster=mycluster
chdef hkvm2 vmmaster=tpl01
mkvm hkvm2
rmigrate hkvm2 <newhost>
xCAT uses the REST API to manage the rhev data center, but the REST API does not implement the full set of RHEV functions. We recommend that you set up a Windows system with IE that can access rhev-m through its web interface; it will help you debug problems you encounter.
You can run any RHEV-related command with '-V' to display the details of the REST API communication.
lsvm hkvm1 -V
This is usually caused by the SPM host encountering an issue. Use lsvm to show whether a host is acting as the SPM:
lsvm host1
storage_manager: true (true => yes; false => no;)
If storage_manager is 'true' but the host has encountered an issue, you have to deactivate the host into maintenance mode before powering it off.
chhypervisor host1 -d
If power management for the host has been set correctly (with 'chhypervisor <host> -p'), then when rhev-m detects that the SPM host is off, it will try to transfer the SPM role to another host. At least 2 hosts are needed for this to work.
If you cannot move the SPM role off the failed host, power off the host first, then log in to rhev-m through the web interface, right-click the host and choose 'Confirm Host has been rebooted' to force the switching of the SPM role.
This issue is mostly caused by incorrect ownership of a directory in the nfs path. The owner must be set to 'vdsm:kvm':
chown vdsm:kvm <nfs path>
Wiki: FLEXCAT_system_x_support_for_IBM_Flex
Wiki: HowTos
Wiki: XCAT_2.8_Release_Notes
Wiki: XCAT_BladeCenter_Linux_Cluster
Wiki: XCAT_Documentation
Wiki: XCAT_NeXtScale_Clusters
Wiki: XCAT_iDataPlex_Cluster_Quick_Start
Wiki: XCAT_system_x_support_for_IBM_Flex