Edition 1
Mono-spaced Bold
To see the contents of the file my_next_bestselling_novel in your current working directory, enter the cat my_next_bestselling_novel command at the shell prompt and press Enter to execute the command.
Press Enter to execute the command. Press Ctrl+Alt+F1 to switch to the first virtual terminal. Press Ctrl+Alt+F7 to return to your X-Windows session.
Mono-spaced Bold. For example:
File-related classes include filesystem for file systems, file for files, and dir for directories. Each class has its own associated set of permissions.
Choose from the main menu bar to launch Mouse Preferences. In the Buttons tab, click the Left-handed mouse check box and click to switch the primary mouse button from the left to the right (making the mouse suitable for use in the left hand).To insert a special character into a gedit file, choose from the main menu bar. Next, choose from the Character Map menu bar, type the name of the character in the Search field and click . The character you sought will be highlighted in the Character Table. Double-click this highlighted character to place it in the Text to copy field and then click the button. Now switch back to your document and choose from the gedit menu bar.
Mono-spaced Bold Italic or Proportional Bold Italic
To connect to a remote machine using ssh, type ssh username@domain.name at a shell prompt. If the remote machine is example.com and your username on that machine is john, type ssh john@example.com. The mount -o remount command remounts the named file system. For example, to remount the /home file system, the command is mount -o remount /home. To see the version of a currently installed package, use the rpm -q package command. It will return a result as follows: package-version-release.
When the Apache HTTP Server accepts requests, it dispatches child processes or threads to handle them. This group of child processes or threads is known as a server-pool. Under Apache HTTP Server 2.0, the responsibility for creating and maintaining these server-pools has been abstracted to a group of modules called Multi-Processing Modules (MPMs). Unlike other modules, only one module from the MPM group can be loaded by the Apache HTTP Server.
Mono-spaced Roman and presented thus:
books Desktop documentation drafts mss photos stuff svn books_tests Desktop1 downloads images notes scripts svgs
Mono-spaced Roman but are presented and highlighted as follows:
package org.jboss.book.jca.ex1;
import javax.naming.InitialContext;
public class ExClient
{
public static void main(String args[])
throws Exception
{
InitialContext iniCtx = new InitialContext();
Object ref = iniCtx.lookup("EchoBean");
EchoHome home = (EchoHome) ref;
Echo echo = home.create();
System.out.println("Created Echo");
System.out.println("Echo.echo('Hello') = " + echo.echo("Hello"));
}
}
The virt-manager, libvirt and virt-viewer packages, and their dependencies, are required for installation.

Customize the packages (if required)

In the %packages section of your Kickstart file, append the following package group:
%packages @kvm
Installing KVM with yum requires the kvm package. The kvm package contains the KVM kernel module providing the KVM hypervisor on the default Linux kernel.
To install the kvm package, run:
# yum install kvm
Install the following recommended virtualization management packages:
python-virtinst — provides the virt-install command for creating virtual machines.
libvirt — libvirt is an API library for interacting with hypervisors. libvirt uses the xm virtualization framework and the virsh command line tool to manage and control virtual machines.
libvirt-python — contains a module that permits applications written in Python to use the libvirt API.
virt-manager — virt-manager, also known as Virtual Machine Manager, provides a graphical tool for administering virtual machines. It uses the libvirt library as the management API.
# yum install virt-manager libvirt libvirt-python python-virtinst
Guests can be created with either virt-manager or virt-install. Both methods are covered by this chapter.
Use the virt-install command to create virtualized guests from the command line. virt-install is used either interactively or as part of a script to automate the creation of virtual machines. Using virt-install with Kickstart files allows for unattended installation of virtual machines.
The virt-install tool provides a number of options that can be passed on the command line. To see a complete list of options, run:
$ virt-install --help
The virt-install man page also documents each command option and important variables.
qemu-img is a related command which may be used before virt-install to configure storage options.
An important option is the --vnc option, which opens a graphical window for the guest's installation.
This example creates a Red Hat Enterprise Linux 3 guest, named rhel3support, from a CD-ROM, with virtual networking and with a 5 GB file-based block device image. This example uses the KVM hypervisor.
# virt-install --accelerate --hvm --connect qemu:///system \
   --network network:default \
   --name rhel3support --ram=756 \
   --file=/var/lib/libvirt/images/rhel3support.img \
   --file-size=6 --vnc --cdrom=/dev/sr0
# virt-install --name Fedora11 --ram 512 \
   --file=/var/lib/libvirt/images/Fedora11.img \
   --file-size=3 --vnc --cdrom=/var/lib/libvirt/images/Fedora11.iso
virt-manager, also known as Virtual Machine Manager, is a graphical tool for creating and managing virtualized guests.
# virt-manager &
The virt-manager command opens a graphical user interface window. Various functions are not available to users without root privileges or sudo configured, including the New button, and you will not be able to create a new virtualized guest.



kernel-xen is not the kernel running presently.

The installation media must be available over HTTP, FTP or NFS. The installation media URL must contain a Fedora installation tree. This tree is hosted using NFS, FTP or HTTP. The network services and files can be hosted using network services on the host or another mirror.
If you only have a CD-ROM image (an .iso file), mount the CD-ROM image and host the mounted files with one of the mentioned protocols.


Store file-based guest images in the /var/lib/xen/images/ directory. Other directory locations for file-based images are prohibited by SELinux. If you run SELinux in enforcing mode, refer to Section 7.1, “SELinux and virtualization” for more information on installing guests.

SELinux expects guest images to be in /var/lib/xen/images/. If you are using a different location (such as /xen/images/ in this example), make sure it is added to your SELinux policy and relabeled before you continue with the installation (later in the document you will find information on how to modify your SELinux policy).


Start the guest installation with virt-manager. Chapter 3, Guest operating system installation procedures contains step-by-step instructions for installing a variety of common operating systems.
Create a new bridge
Create a new network script file in the /etc/sysconfig/network-scripts/ directory. This example creates a file named ifcfg-installation which makes a bridge named installation.
# cd /etc/sysconfig/network-scripts/
# vim ifcfg-installation
DEVICE=installation
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
The line, TYPE=Bridge, is case-sensitive. It must have an uppercase 'B' and lower case 'ridge'.
Start the new bridge.
# ifup installation
Use the brctl show command to view details about network bridges on the system.
# brctl show
bridge name     bridge id               STP enabled     interfaces
installation    8000.000000000000       no
virbr0          8000.000000000000       yes
The virbr0 bridge is the default bridge used by libvirt for Network Address Translation (NAT) on the default Ethernet device.
Add an interface to the new bridge
Add the BRIDGE parameter to the interface's configuration file with the name of the bridge created in the previous steps.
# Intel Corporation Gigabit Network Connection
DEVICE=eth1
BRIDGE=installation
BOOTPROTO=dhcp
HWADDR=00:13:20:F7:6E:8E
ONBOOT=yes
# service network restart
Verify the interface is attached with the brctl show command:
# brctl show
bridge name     bridge id               STP enabled     interfaces
installation    8000.001320f76e8e       no              eth1
virbr0          8000.000000000000       yes
Security configuration
Configure iptables to allow all traffic to be forwarded across the bridge.
# iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
# service iptables save
# service iptables restart
Alternatively, prevent bridged traffic from being processed by iptables rules. In /etc/sysctl.conf, append the following lines:
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
Reload the kernel parameters configured with sysctl:
# sysctl -p /etc/sysctl.conf
Restart libvirt before the installation
Restart the libvirt daemon:
# service libvirtd reload
For virt-install, append the --network=bridge:BRIDGENAME parameter, where BRIDGENAME (installation in this example) is the name of your bridge. For PXE installations, use the --pxe parameter.
# virt-install --accelerate --hvm --connect qemu:///system \
   --network=bridge:installation --pxe \
   --name EL10 --ram=756 \
   --vcpus=4 \
   --os-type=linux --os-variant=rhel5 \
   --file=/var/lib/libvirt/images/EL10.img
Select PXE

Select the bridge

Start the installation

kernel-xen kernel.
To use virt-manager, refer to the procedure in Section 2.2, “Creating guests with virt-manager”.
Create the para-virtualized guest with the virt-install tool. The --vnc option shows the graphical installation. The name of the guest in the example is rhel5PV, the disk image file is rhel5PV.dsk and a local mirror of the Red Hat Enterprise Linux 5 installation tree is ftp://10.1.1.1/trees/CentOS5-B2-Server-i386/. Replace those values with values accurate for your system and network.
# virt-install -n rhel5PV -r 500 \
   -f /var/lib/libvirt/images/rhel5PV.dsk -s 3 --vnc -p \
   -l ftp://10.1.1.1/trees/CentOS5-B2-Server-i386/



DHCP (as shown below) or a static IP address:







Installation Number field:

rhn_register command. The rhn_register command requires root access.
# rhn_register












virt-install in Section 3.1, “Installing Red Hat Enterprise Linux 5 as a para-virtualized guest”. If you used the default example the name is rhel5PV.
virsh reboot rhel5PV
virt-manager, select the name of your guest, click , then click .








kdump if necessary.











Open virt-manager
virt-manager. Launch the application from the menu and submenu. Alternatively, run the virt-manager command as root.
Select the hypervisor
qemu.
Start the new virtual machine wizard

Name the virtual machine

Choose a virtualization method

Select the installation method

Locate installation media

/var/lib/libvirt/images/ directory. Any other location may require additional configuration for SELinux, refer to Section 7.1, “SELinux and virtualization” for details.
Storage setup
Store file-based guest images in the /var/lib/libvirt/images/ directory. Assign sufficient storage for your virtualized guest and any applications it requires.

Network setup

Memory and CPU allocation

Verify and start guest installation

Installing Linux
Starting virt-manager
Naming your virtual system

Choosing a virtualization method

Choosing an installation method

/var/lib/libvirt/images/ directory. Any other location may require additional configuration for SELinux, refer to Section 7.1, “SELinux and virtualization” for details.
/var/lib/libvirt/images/ directory. Other directory locations for file based images are prohibited by SELinux. If you run SELinux in enforcing mode, refer to Section 7.1, “SELinux and virtualization” for more information on installing guests.

/var/lib/libvirt/images/. If you are using a different location (such as /images/ in this example) make sure it is added to your SELinux policy and relabeled before you continue with the installation (later in the document you will find information on how to modify your SELinux policy)
Network setup



HAL, once you get the dialog box in the Windows install select the 'Generic i486 Platform' tab (scroll through selections with the Up and Down arrows.








# virsh start WindowsGuest
WindowsGuest is the name of your virtual machine.

virsh reboot WindowsGuestName. The will usually get the installation to continue. As you restart the virtual machine you will see a Setup is being restarted message:




virt-install command. virt-install can be used instead of virt-manager This process is similar to the Windows XP installation covered in Section 3.3, “Installing Windows XP as a fully virtualized guest”.
virt-install for installing Windows Server 2003 as the console for the Windows guest opens the virt-viewer window promptly. An example of using the virt-install for installing a Windows Server 2003 guest:
virt-install command.
# virt-install --hvm -s 5 -f /var/lib/libvirt/images/windows2003spi1.dsk \
   -n windows2003sp1 --cdrom=/ISOs/WIN/en_windows_server_2003_sp1.iso \
   --vnc -r 1024
Standard PC as the Computer Type. This is the only non standard step required.




Open virt-manager
virt-manager. Launch the application from the menu and submenu. Alternatively, run the virt-manager command as root.
Select the hypervisor
qemu.
Start the new virtual machine wizard

Name the virtual machine

Choose a virtualization method

Select the installation method

Locate installation media


/var/lib/libvirt/images/ directory. Any other location may require additional configuration for SELinux, refer to Section 7.1, “SELinux and virtualization” for details.
Storage setup
Store file-based guest images in the /var/lib/libvirt/images/ directory. Assign sufficient storage for your virtualized guest and any applications it requires.

Network setup

Memory and CPU allocation

Verify and start guest installation

Installing Windows

dd command. Replace /dev/fd0 with the name of a floppy device and name the disk appropriately.
# dd if=/dev/fd0 of=~/legacydrivers.img
This example uses a guest created with virt-manager running a fully virtualized Linux installation with an image located in /var/lib/libvirt/images/rhel5FV.img. The Xen hypervisor is used in the example.
Create the XML configuration file for your guest with the virsh command on a running guest:
# virsh dumpxml rhel5FV > rhel5FV.xml
# dd if=/dev/zero of=/var/lib/libvirt/images/rhel5FV-floppy.img bs=512 count=2880
<disk type='file' device='floppy'>
    <source file='/var/lib/libvirt/images/rhel5FV-floppy.img'/>
    <target dev='fda'/>
</disk>
# virsh shutdown rhel5FV
# virsh create rhel5FV.xml
Create the storage file with the dd command. Sparse files are not recommended due to data integrity and performance issues. Sparse files are created much faster and can be used for testing, but should not be used in production environments.
# dd if=/dev/zero of=/var/lib/libvirt/images/FileName.img bs=1M seek=4096 count=0
# dd if=/dev/zero of=/var/lib/libvirt/images/FileName.img bs=1M count=4096
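Either storage file can also be created with the qemu-img tool covered later in this guide; a minimal sketch using the same example path and a 4 GB size (note that a raw image created this way is not pre-allocated):
# qemu-img create -f raw /var/lib/libvirt/images/FileName.img 4G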
This example uses a guest named Guest1 and the file is saved in the user's home directory.
# virsh dumpxml Guest1 > ~/Guest1.xml
Open the XML file (Guest1.xml in this example) in a text editor. Find the disk entries; they resemble the following:
<disk type='file' device='disk'>
    <driver name='tap' type='aio'/>
    <source file='/var/lib/libvirt/images/Guest1.img'/>
    <target dev='xvda'/>
</disk>
Add the additional storage by duplicating or adding a new disk entry. Ensure you specify a device name for the virtual block device which is not already used in the configuration file. The following example adds a file, named FileName.img, as a file-based storage container:
<disk type='file' device='disk'>
    <driver name='tap' type='aio'/>
    <source file='/var/lib/libvirt/images/Guest1.img'/>
    <target dev='xvda'/>
</disk>
<disk type='file' device='disk'>
    <driver name='tap' type='aio'/>
    <source file='/var/lib/libvirt/images/FileName.img'/>
    <target dev='hda'/>
</disk>
# virsh create Guest1.xml
The guest now uses the file FileName.img as the device called /dev/hdb. This device requires formatting from the guest. On the guest, partition the device into one primary partition for the entire device, then format the device.
Press n for a new partition.
# fdisk /dev/hdb
Command (m for help): n
Press p for a primary partition.
Command action
   e   extended
   p   primary partition (1-4)
Enter 1 for the partition number.
Partition number (1-4): 1
Press Enter to accept the default first cylinder.
First cylinder (1-400, default 1):
Press Enter to accept the default last cylinder.
Last cylinder or +size or +sizeM or +sizeK (2-400, default 400):
Set the type of the partition by pressing t.
Command (m for help): t
Choose the partition you created, 1.
Partition number (1-4): 1
Enter 83 for a Linux partition.
Hex code (type L to list codes): 83
Write the changes to disk and quit.
Command (m for help): w
Command (m for help): q
Format the new partition with the ext3 file system.
# mke2fs -j /dev/hdb1
# mount /dev/hdb1 /myfiles
Configure multipath and persistence on the host if required.
Attach the disk with the virsh attach command. Replace myguest with your guest's name, /dev/hdb1 with the device to add, and hdc with the location for the device on the guest. The hdc must be an unused device name. Use the hd* notation for Windows guests as well; the guest will recognize the device correctly.
--type hdd parameter to the command for CD-ROM or DVD devices.
--type floppy parameter to the command for floppy devices.
# virsh attach-disk myguest /dev/hdb1 hdc --driver tap --mode readonly
The guest now has a new hard disk device called /dev/hdb on Linux, or D: drive, or similar, on Windows. This device may require formatting.
Systems that are not running multipath must use Single path configuration. Systems running multipath can use Multiple path configuration.
This example implements single path persistence with udev. Only use this procedure for hosts which are not using multipath.
Edit the /etc/scsi_id.config file.
Ensure the options=-b line is commented out.
# options=-b
Add the following line:
options=-g
This option configures udev to assume all attached SCSI devices return a UUID.
Display the UUID for a given device with the scsi_id -g -s /block/sd* command. For example:
# scsi_id -g -s /block/sd*
3600a0b800013275100000015427b625e
The output is the UUID of the device /dev/sdc.
Verify the UUID output of the scsi_id -g -s /block/sd* command is identical from each computer which accesses the device.
Create a rule to name the device. Create a file named 20-names.rules in the /etc/udev/rules.d directory. Add new rules to this file. All rules are added to the same file using the same format. Rules follow this format:
KERNEL="sd*", BUS="scsi", PROGRAM="/sbin/scsi_id -g -s", RESULT=UUID, NAME=devicename
Replace UUID and devicename with the UUID retrieved above and a name for the device. This is the rule for the example above:
KERNEL="sd*", BUS="scsi", PROGRAM="/sbin/scsi_id -g -s", RESULT="3600a0b800013275100000015427b625e", NAME="rack4row16"
The udev daemon now searches all devices named /dev/sd* for the UUID in the rule. Once a matching device is connected to the system, the device is assigned the name from the rule. In the example above, a device with a UUID of 3600a0b800013275100000015427b625e would appear as /dev/rack4row16.
Append this line to /etc/rc.local:
/sbin/start_udev
Copy the changes in the /etc/scsi_id.config, /etc/udev/rules.d/20-names.rules, and /etc/rc.local files to all relevant hosts.
/sbin/start_udev
The multipath package is used for systems with more than one physical path from the computer to storage devices. multipath provides fault tolerance, fail-over and enhanced performance for network storage devices attached to Linux systems.
Implementing LUN persistence in a multipath environment requires defined alias names for your multipath devices. Each storage device has a UUID which acts as a key for the aliased names. Identify a device's UUID using the scsi_id command.
# scsi_id -g -s /block/sdc
/dev/mpath directory. In the example below 4 devices are defined in /etc/multipath.conf:
multipaths {
multipath {
wwid 3600805f30015987000000000768a0019
alias oramp1
}
multipath {
wwid 3600805f30015987000000000d643001a
alias oramp2
}
multipath {
wwid 3600805f3001598700000000086fc001b
alias oramp3
}
multipath {
wwid 3600805f300159870000000000984001c
alias oramp4
}
}
This configuration creates devices named /dev/mpath/oramp1, /dev/mpath/oramp2, /dev/mpath/oramp3 and /dev/mpath/oramp4. Once entered, the mapping of the devices' WWIDs to their new names is persistent across reboots.
Attach an ISO file or CD-ROM device to a guest with virsh and the attach-disk parameter:
# virsh attach-disk [domain-id] [source] [target] --driver file --type cdrom --mode readonly
The source and target parameters are paths for the files and devices, on the host and guest respectively. The source parameter can be a path to an ISO file or the device from the /dev directory.
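As a usage sketch, the following attaches the host CD-ROM device to a guest; the guest name Guest1 and the device /dev/sr0 are example values that must match your system:
# virsh attach-disk Guest1 /dev/sr0 hdc --driver file --type cdrom --mode readonly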
# setenforce 1
AutoFS, NFS, FTP, HTTP, NIS, telnetd, sendmail and so on.
Virtualized guest images should be stored in /var/lib/libvirt/images/. If you are using a different directory for your virtual machine images, make sure you add the directory to your SELinux policy and relabel it before starting the installation.
vsftpd server.
/var/lib/libvirt/images.
Create a 5 GB logical volume named NewVolumeName on the volume group named volumegroup:
# lvcreate -n NewVolumeName -L 5G volumegroup
Format the NewVolumeName logical volume with a file system that supports extended attributes, such as ext3.
# mke2fs -j /dev/volumegroup/NewVolumeName
/etc, /var, /sys) or in home directories (/home or /root). This example uses a directory called /virtstorage
# mkdir /virtstorage
# mount /dev/volumegroup/NewVolumeName /virtstorage
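To make the mount persistent across reboots, an entry along these lines can be added to /etc/fstab (a sketch; the device and mount point match this example, adjust the file system type and options for your setup):
/dev/volumegroup/NewVolumeName  /virtstorage  ext3  defaults  1 2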
Set the correct SELinux type for the new directory. For the Xen hypervisor:
# semanage fcontext -a -t xen_image_t "/virtstorage(/.*)?"
For the KVM hypervisor:
# semanage fcontext -a -t virt_image_t "/virtstorage(/.*)?"
Running the semanage command adds an entry to the /etc/selinux/targeted/contexts/files/file_contexts.local file which makes the change persistent. The appended line may resemble this:
/virtstorage(/.*)? system_u:object_r:xen_image_t:s0
Run the restorecon command to change the type of the mount point (/virtstorage) and all files under it to xen_image_t (restorecon and setfiles read the files in /etc/selinux/targeted/contexts/files/).
# restorecon -R -v /virtstorage
# semanage fcontext -a -t xen_image_t -f -b /dev/sda2
# restorecon /dev/sda2
xend_disable_t can set the xend to unconfined mode after restarting the daemon. It is better to disable protection for a single daemon than the whole system. It is advisable that you should not re-label directories as xen_image_t that you will use elsewhere.
A default libvirt installation provides NAT-based connectivity to virtual machines out of the box. This is the so-called 'default virtual network'. Verify that it is available with the virsh net-list --all command.
# virsh net-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
The default network was defined from /usr/share/libvirt/networks/default.xml. If it is missing, the example XML configuration file can be reloaded and activated:
# virsh net-define /usr/share/libvirt/networks/default.xml
# virsh net-autostart default
Network default marked as autostarted
# virsh net-start default
Network default started
When the libvirt default network is running, you will see an isolated bridge device. This device does not have any physical interfaces added, since it uses NAT and IP forwarding to connect to the outside world. Do not add new interfaces.
# brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.000000000000       yes
libvirt adds iptables rules which allow traffic to and from guests attached to the virbr0 device in the INPUT, FORWARD, OUTPUT and POSTROUTING chains. libvirt then attempts to enable the ip_forward parameter. Some other applications may disable ip_forward, so the best option is to add the following to /etc/sysctl.conf.
net.ipv4.ip_forward = 1
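To apply the setting immediately without waiting for a reboot, the parameter can also be set at runtime with sysctl (a sketch using the standard tool):
# sysctl -w net.ipv4.ip_forward=1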
<interface type='network'> <source network='default'/> </interface>
<interface type='network'> <source network='default'/> <mac address='00:16:3e:1a:b3:4a'/> </interface>
Disable the default Xen network bridge by editing /etc/xen/xend-config.sxp and changing the line:
(network-script network-bridge)
to:
(network-script /bin/true)
# chkconfig NetworkManager off
# chkconfig network on
# service NetworkManager stop
# service network start
NM_CONTROLLED=no" to the ifcfg-* scripts used in the examples.
/etc/sysconfig/network-scripts directory:
# cd /etc/sysconfig/network-scripts
Open the network script for the device you are adding to the bridge. In this example, ifcfg-eth0 defines the physical network interface which is set as part of a bridge:
DEVICE=eth0
# change the hardware address to match the hardware address your NIC uses
HWADDR=00:16:76:D6:C9:45
ONBOOT=yes
BRIDGE=br0
MTU variable to the end of the configuration file.
MTU=9000
Create a new network script in the /etc/sysconfig/network-scripts directory called ifcfg-br0 or similar. The br0 is the name of the bridge; this can be anything as long as the name of the file is the same as the DEVICE parameter.
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
DELAY=0
The line, TYPE=Bridge, is case-sensitive. It must have an uppercase 'B' and lower case 'ridge'.
# service network restart
Configure iptables to allow all traffic to be forwarded across the bridge.
# iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
# service iptables save
# service iptables restart
Alternatively, prevent bridged traffic from being processed by iptables rules. In /etc/sysctl.conf, append the following lines:
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
Reload the kernel parameters configured with sysctl:
# sysctl -p /etc/sysctl.conf
Restart the libvirt daemon:
# service libvirtd reload
# brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.000000000000       yes
br0             8000.000e0cb30550       no              eth0
The new bridge is independent of the virbr0 bridge. Do not attempt to attach a physical device to virbr0. The virbr0 bridge is only for Network Address Translation (NAT) connectivity.
Download the drivers
virtio-win.iso, in the /usr/share/virtio-win/ directory.
Install the para-virtualized drivers
Follow the procedure “Using virt-manager to mount a CD-ROM image for a Windows guest” to add a CD-ROM image with virt-manager.
Using virt-manager to mount a CD-ROM image for a Windows guest: open virt-manager, select your virtualized guest from the list of virtual machines and open the guest's details window.

/usr/share/xenpv-win if you used yum to install the para-virtualized driver packages.


viostor.vfd as a floppy
Windows Server 2003
Windows Server 2008
Modify an existing device to use the virtio driver instead of the virtualized IDE driver. This example edits libvirt configuration files. Alternatively, virt-manager, virsh attach-disk or virsh attach-interface can add a new device using the para-virtualized drivers; refer to Using KVM para-virtualized drivers for new devices.
<disk type='file' device='disk'>
   <source file='/var/lib/libvirt/images/disk1.img'/>
   <target dev='hda' bus='ide'/>
</disk>
virtio.
<disk type='file' device='disk'>
<source file='/var/lib/libvirt/images/disk1.img'/>
<target dev='hda' bus='virtio'/>
</disk>
virt-manager.
virsh attach-disk or virsh attach-interface commands can be used to attach devices using the para-virtualized drivers.
virt-manager.



The xend node control daemon is configured in /etc/xen/xend-config.sxp. Here are the parameters you can enable or disable in the xend-config.sxp configuration file:
| Item | Description |
|---|---|
| (console-limit) | Determines the console server's memory buffer limit and assigns values on a per domain basis. |
| (min-mem) | Determines the minimum number of megabytes that is reserved for domain0 (if you enter 0, the value does not change). |
| (dom0-cpus) | Determines the number of CPUs in use by domain0 (at least 1 CPU is assigned by default). |
| (enable-dump) | Enables a dump when a crash occurs (the default is 0). |
| (external-migration-tool) | Determines the script or application that handles external device migration. Scripts must reside in /etc/xen/scripts/external-device-migrate. |
| (logfile) | Determines the location of the log file (default is /var/log/xend.log). |
| (loglevel) | Filters out the log mode values: DEBUG, INFO, WARNING, ERROR, or CRITICAL (default is DEBUG). |
| (network-script) | Determines the script that enables the networking environment (scripts must reside in the /etc/xen/scripts directory). |
| (xend-http-server) | Enables the http stream packet management server (the default is no). |
| (xend-unix-server) | Enables the unix domain socket server. A socket server is a communications endpoint that handles low level network connections and accepts or rejects incoming connections. The default value is set to yes. |
| (xend-relocation-server) | Enables the relocation server for cross-machine migrations (the default is no). |
| (xend-unix-path) | Determines the location where the xend-unix-server command outputs data (default is /var/lib/xend/xend-socket). |
| (xend-port) | Determines the port that the http management server uses (the default is 8000). |
| (xend-relocation-port) | Determines the port that the relocation server uses (the default is 8002). |
| (xend-relocation-address) | Determines the host addresses allowed for migration. The default value is the value of xend-address. |
| (xend-address) | Determines the address that the domain socket server binds to. The default value allows all connections. |
To start, stop, restart, or check the status of the xend daemon:
service xend start
service xend stop
service xend restart
service xend status
To enable xend at boot time, use the chkconfig command to add xend to the initscripts.
chkconfig --level 345 xend on
xend will now start at runlevels 3, 4 and 5.
ntpd service:
# service ntpd start
# chkconfig ntpd on
The ntpd service should minimize the effects of clock skew in all cases.
Check whether the constant_tsc flag is present. To determine if your CPU has the constant_tsc flag, run the following command:
$ cat /proc/cpuinfo | grep constant_tsc
If any output is given, your CPU has the constant_tsc bit. If no output is given, follow the instructions below.
If your CPU lacks the constant_tsc bit, disable all power management features (BZ#513138). Each system has several timers it uses to keep time. The TSC is not stable on the host, which is sometimes caused by cpufreq changes, deep C states, or migration to a host with a faster TSC. To stop deep C states, which can stop the TSC, add "processor.max_cstate=1" to the kernel boot options in grub on the host:
title Fedora (vmlinuz-2.6.29.6-217.2.3.fc11)
        root (hd0,0)
        kernel /vmlinuz-2.6.29.6-217.2.3.fc11 ro root=/dev/VolGroup00/LogVol00 rhgb quiet processor.max_cstate=1
Disable cpufreq (only necessary on hosts without the constant_tsc flag) by editing the /etc/sysconfig/cpuspeed configuration file and changing the MIN_SPEED and MAX_SPEED variables to the highest frequency available. Valid limits can be found in the /sys/devices/system/cpu/cpu*/cpufreq/scaling_available_frequencies files.
| Red Hat Enterprise Linux | Additional guest kernel parameters |
|---|---|
| 5.4 AMD64/Intel 64 with the para-virtualized clock | Additional parameters are not required |
| 5.4 AMD64/Intel 64 without the para-virtualized clock | divider=10 notsc lpj=n |
| 5.4 x86 with the para-virtualized clock | Additional parameters are not required |
| 5.4 x86 without the para-virtualized clock | divider=10 clocksource=acpi_pm lpj=n |
| 5.3 AMD64/Intel 64 | divider=10 notsc |
| 5.3 x86 | divider=10 clocksource=acpi_pm |
| 4.8 AMD64/Intel 64 | notsc divider=10 |
| 4.8 x86 | clock=pmtmr divider=10 |
| 3.9 AMD64/Intel 64 | Additional parameters are not required |
| 3.9 x86 | Additional parameters are not required |
/use pmtimer
A guest can be migrated to another host with the virsh command. The migrate command accepts parameters in the following format:
# virsh migrate --live GuestName DestinationURL
The GuestName parameter represents the name of the guest which you want to migrate.
The DestinationURL parameter is the URL or hostname of the destination system. The destination system must run the same version of Fedora, be using the same hypervisor and have libvirt running.
This example migrates from test1.bne.redhat.com to test2.bne.redhat.com. Change the host names for your environment. This example migrates a virtual machine named CentOS4test.
Verify the guest is running
From test1.bne.redhat.com, verify CentOS4test is running:
[root@test1 ~]# virsh list
 Id Name                 State
----------------------------------
 10 CentOS4              running
Migrate the guest
Execute the following command to live migrate the guest to test2.bne.redhat.com. Append /system to the end of the destination URL to tell libvirt that you need full access.
# virsh migrate --live CentOS4test qemu+ssh://test2.bne.redhat.com/system
Wait
The migration may take some time; virsh only reports errors. The guest continues to run on the source host until fully migrated.
Verify the guest has arrived at the destination host
From test2.bne.redhat.com, verify CentOS4test is running:
[root@test2 ~]# virsh list
 Id Name                 State
----------------------------------
 10 CentOS4              running
virt-manager.













Remote management can be performed with ssh or with TLS and SSL.
With ssh, the libvirt management connection is securely tunneled over an SSH connection to manage the remote machines. All the authentication is done using SSH public key cryptography and passwords or passphrases gathered by your local SSH agent. In addition, the VNC console for each guest virtual machine is tunneled over SSH.
You need an SSH key pair on the machine where virt-manager is used. If ssh is already configured you can skip this command.
$ ssh-keygen -t rsa
virt-manager needs a copy of the public key on each remote machine running libvirt. Copy the file $HOME/.ssh/id_rsa.pub from the machine you want to use for remote management using the scp command:
$ scp $HOME/.ssh/id_rsa.pub root@somehost:/root/key-dan.pub
Use ssh to connect to the remote machines as root and add the file that you copied to the list of authorized keys. If the root user on the remote host does not already have a list of authorized keys, make sure the file permissions are correctly set:
$ ssh root@somehost
# mkdir /root/.ssh
# chmod go-rwx /root/.ssh
# cat /root/key-dan.pub >> /root/.ssh/authorized_keys
# chmod go-rw /root/.ssh/authorized_keys
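On hosts where it is available, the ssh-copy-id utility performs the copy and permission steps in a single command; a sketch assuming the same somehost target as above:
$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub root@somehost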
The libvirt daemon (libvirtd) provides an interface for managing virtual machines. The libvirtd daemon must be installed and running on every remote host that you need to manage. Using the Fedora kernel-xen package requires a speci TODO
$ ssh root@somehost
# chkconfig libvirtd on
# service libvirtd start
After libvirtd and SSH are configured, you should be able to remotely access and manage your virtual machines. You should also be able to access your guests with VNC at this point.
With TLS and SSL, the libvirt management connection opens a TCP port for incoming connections, which is securely encrypted and authenticated based on x509 certificates. In addition, the VNC console for each guest virtual machine will be set up to use TLS with x509 certificate authentication.
libvirt server setup: the VNC server configuration is in /etc/xen/xend-config.sxp. Remove the commenting on the (vnc-tls 1) configuration parameter in the configuration file.
The /etc/xen/vnc directory needs the following three files:
ca-cert.pem - The CA certificate
server-cert.pem - The Server certificate signed by the CA
server-key.pem - The server private key
To use x509 certificate authentication for clients, also enable the (vnc-x509-verify 1) parameter.
virt-manager and virsh client setup: to use the libvirt management API over TLS, the CA and client certificates must be placed in /etc/pki. For details on this, consult http://libvirt.org/remote.html
virt-manager user interface, use the '' transport mechanism option when connecting to a host.
virsh, the URI has the following format:
qemu://hostname.guestname/system for KVM.
xen://hostname.guestname/ for Xen.
$HOME/.pki, that is the following three files:
ca-cert.pem - The CA certificate.
libvirt-vnc or clientcert.pem - The client certificate signed by the CA.
libvirt-vnc or clientkey.pem - The client private key.
libvirt supports the following transport modes:
UNIX domain sockets are accessible only on the local machine. The standard socket paths are /var/run/libvirt/libvirt-sock and /var/run/libvirt/libvirt-sock-ro (for read-only connections).
The libvirt daemon (libvirtd) must be running on the remote machine. Port 22 must be open for SSH access. You should use some sort of SSH key management (for example, the ssh-agent utility) or you will be prompted for a password.
A Uniform Resource Identifier (URI) is used by virsh and libvirt to connect to a remote host. URIs can also be used with the --connect parameter for the virsh command to execute single commands or migrations on remote hosts.
driver[+transport]://[username@][hostname][:port]/[path][?extraparameters]
Connect to a remote Xen hypervisor on the host named towada, using SSH transport and the SSH username ccurran:
xen+ssh://ccurran@towada/
Connect to a remote Xen hypervisor on the host named towada using TLS:
xen://towada/
Connect to a remote Xen hypervisor on the host named towada using TLS. The no_verify=1 parameter tells libvirt not to verify the server's certificate:
xen://towada/?no_verify=1
Connect to a remote KVM hypervisor on the host named towada using SSH:
qemu+ssh://towada/system
qemu+unix:///system?socket=/opt/libvirt/run/libvirt/libvirt-sock
test+tcp://10.1.1.10:5000/default
| Name | Transport mode | Description | Example usage |
|---|---|---|---|
| name | all modes | The name passed to the remote virConnectOpen function. The name is normally formed by removing transport, hostname, port number, username and extra parameters from the remote URI, but in certain very complex cases it may be better to supply the name explicitly. | name=qemu:///system |
| command | ssh and ext | The external command. For ext transport this is required. For ssh the default is ssh. The PATH is searched for the command. | command=/opt/openssh/bin/ssh |
| socket | unix and ssh | The path to the UNIX domain socket, which overrides the default. For ssh transport, this is passed to the remote netcat command (see netcat). | socket=/opt/libvirt/run/libvirt/libvirt-sock |
| netcat | ssh | The name of the netcat command on the remote machine. The default is nc. For ssh transport, libvirt constructs an ssh command which looks like: command -p port [-l username] hostname netcat -U socket where port, username, hostname can be specified as part of the remote URI, and command, netcat and socket come from extra parameters (or sensible defaults). | netcat=/opt/netcat/bin/nc |
| no_verify | tls | If set to a non-zero value, this disables client checks of the server's certificate. Note that to disable server checks of the client's certificate or IP address you must change the libvirtd configuration. | no_verify=1 |
| no_tty | ssh | If set to a non-zero value, this stops ssh from asking for a password if it cannot log in to the remote machine automatically (for using ssh-agent or similar). Use this when you do not have access to a terminal - for example in graphical programs which use libvirt. | no_tty=1 |
Table of Contents
vmstat
iostat
lsof
# lsof -i :5900
xen-vncfb 10635  root  5u  IPv4 218738  TCP grumble.boston.redhat.com:5900 (LISTEN)
qemu-img
systemTap
crash
xen-gdbserver
sysrq
sysrq t
sysrq w
sysrq c
brctl
# brctl show
bridge name bridge id STP enabled interfaces
xenbr0 8000.feffffffffff no vif13.0
pdummy0
vif0.0
# brctl showmacs xenbr0
port no  mac addr                is local?       aging timer
  1      fe:ff:ff:ff:ff:ff       yes             0.00
# brctl showstp xenbr0
xenbr0
 bridge id              8000.feffffffffff
 designated root        8000.feffffffffff
 root port                 0                   path cost                  0
 max age                  20.00                bridge max age            20.00
 hello time                2.00                bridge hello time          2.00
 forward delay             0.00                bridge forward delay       0.00
 aging time              300.01
 hello timer               1.43                tcn timer                  0.00
 topology change timer     0.00                gc timer                   0.02
 flags

vif13.0 (3)
 port id                8003                   state                forwarding
 designated root        8000.feffffffffff      path cost                100
 designated bridge      8000.feffffffffff      message age timer          0.00
 designated port        8003                   forward delay timer        0.00
 designated cost           0                   hold timer                 0.43
 flags

pdummy0 (2)
 port id                8002                   state                forwarding
 designated root        8000.feffffffffff      path cost                100
 designated bridge      8000.feffffffffff      message age timer          0.00
 designated port        8002                   forward delay timer        0.00
 designated cost           0                   hold timer                 0.43
 flags

vif0.0 (1)
 port id                8001                   state                forwarding
 designated root        8000.feffffffffff      path cost                100
 designated bridge      8000.feffffffffff      message age timer          0.00
 designated port        8001                   forward delay timer        0.00
 designated cost           0                   hold timer                 0.43
 flags
ifconfig
tcpdump
ps
pstree
top
kvmtrace
kvm_stat
xentop
xm dmesg
xm log
virsh is a command line interface tool for managing guests and the hypervisor.
virsh tool is built on the libvirt management API and operates as an alternative to the xm command and the graphical guest Manager (virt-manager). virsh can be used in read-only mode by unprivileged users. You can use virsh to execute scripts for the guest machines.
| Command | Description |
|---|---|
| help | Prints basic help information. |
| list | Lists all guests. |
| dumpxml | Outputs the XML configuration file for the guest. |
| create | Creates a guest from an XML configuration file and starts the new guest. |
| start | Starts an inactive guest. |
| destroy | Forces a guest to stop. |
| define | Outputs an XML configuration file for a guest. |
| domid | Displays the guest's ID. |
| domuuid | Displays the guest's UUID. |
| dominfo | Displays guest information. |
| domname | Displays the guest's name. |
| domstate | Displays the state of a guest. |
| quit | Quits the interactive terminal. |
| reboot | Reboots a guest. |
| restore | Restores a previously saved guest stored in a file. |
| resume | Resumes a paused guest. |
| save | Save the present state of a guest to a file. |
| shutdown | Gracefully shuts down a guest. |
| suspend | Pauses a guest. |
| undefine | Deletes all files associated with a guest. |
| migrate | Migrates a guest to another host. |
virsh command options to manage guest and hypervisor resources:
| Command | Description |
|---|---|
| setmem | Sets the allocated memory for a guest. |
| setmaxmem | Sets maximum memory limit for the hypervisor. |
| setvcpus | Changes number of virtual CPUs assigned to a guest. |
| vcpuinfo | Displays virtual CPU information about a guest. |
| vcpupin | Controls the virtual CPU affinity of a guest. |
| domblkstat | Displays block device statistics for a running guest. |
| domifstat | Displays network interface statistics for a running guest. |
| attach-device | Attach a device to a guest, using a device definition in an XML file. |
| attach-disk | Attaches a new disk device to a guest. |
| attach-interface | Attaches a new network interface to a guest. |
| detach-device | Detach a device from a guest, takes the same kind of XML descriptions as command attach-device. |
| detach-disk | Detach a disk device from a guest. |
| detach-interface | Detach a network interface from a guest. |
virsh options:
| Command | Description |
|---|---|
| version | Displays the version of virsh. |
| nodeinfo | Outputs information about the hypervisor. |
To connect to a hypervisor session with virsh:
# virsh connect {hostname OR URL}
Where {hostname OR URL} is the machine name of the hypervisor. To initiate a read-only connection, append the above command with --readonly.
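For example, a connection to a local hypervisor might look like one of the following sketches; the URI depends on which hypervisor is installed:
# virsh connect qemu:///system
# virsh connect xen:///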
To create a virtual machine XML dump (configuration file) with virsh:
# virsh dumpxml {domain-id, domain-name or domain-uuid}
This command outputs the guest's XML configuration file to standard out (stdout). You can save the data by piping the output to a file. An example of piping the output to a file called guest.xml:
# virsh dumpxml GuestID > guest.xml
This file, guest.xml, can recreate the guest (refer to Editing a guest's configuration file). You can edit this XML configuration file to configure additional devices or to deploy additional guests. Refer to Section 18.1, “Using XML configuration files with virsh” for more information on modifying files created with virsh dumpxml.
virsh dumpxml output:
# virsh dumpxml r5b2-mySQL01
<domain type='xen' id='13'>
<name>r5b2-mySQL01</name>
<uuid>4a4c59a7ee3fc78196e4288f2862f011</uuid>
<bootloader>/usr/bin/pygrub</bootloader>
<os>
<type>linux</type>
<kernel>/var/lib/libvirt/vmlinuz.2dgnU_</kernel>
<initrd>/var/lib/libvirt/initrd.UQafMw</initrd>
<cmdline>ro root=/dev/VolGroup00/LogVol00 rhgb quiet</cmdline>
</os>
<memory>512000</memory>
<vcpu>1</vcpu>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<interface type='bridge'>
<source bridge='xenbr0'/>
<mac address='00:16:3e:49:1d:11'/>
<script path='vif-bridge'/>
</interface>
<graphics type='vnc' port='5900'/>
<console tty='/dev/pts/4'/>
</devices>
</domain>
Guests can be created from XML configuration files. You can copy existing XML from previously created guests or use the dumpxml option (refer to Creating a virtual machine XML dump (configuration file)). To create a guest with virsh from an XML file:
# virsh create configuration_file.xml
Instead of using the dumpxml option (refer to Creating a virtual machine XML dump (configuration file)), guests can be edited either while they run or while they are offline. The virsh edit command provides this functionality. For example, to edit the guest named softwaretesting:
# virsh edit softwaretesting
This opens a text editor. The default text editor is the $EDITOR shell parameter (set to vi by default).
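For example, a different editor can be selected for a single invocation by overriding the variable on the command line; nano is only an example editor and must be installed:
# EDITOR=nano virsh edit softwaretesting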
To suspend a guest with virsh:
# virsh suspend {domain-id, domain-name or domain-uuid}
When a guest is in a suspended state, it still consumes system RAM. A suspended guest can be restarted with the resume (Resuming a guest) option.
Restore a suspended guest with virsh using the resume option:
# virsh resume {domain-id, domain-name or domain-uuid}
This operation is immediate and the guest parameters are preserved for suspend and resume operations.
Save the current state of a guest to a file using the virsh command:
# virsh save {domain-name, domain-id or domain-uuid} filename
The saved guest can be restored with the restore (Restore a guest) option. Save is similar to pause; instead of just pausing a guest, the present state of the guest is saved.
Restore a guest previously saved with the virsh save command (Save a guest) using virsh:
# virsh restore filename
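A short end-to-end sketch of save and restore; the guest name Guest1 and the state file path are example values chosen for illustration:
# virsh save Guest1 /var/lib/libvirt/save/Guest1.state
# virsh restore /var/lib/libvirt/save/Guest1.state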
Shut down a guest using the virsh command:
# virsh shutdown {domain-id, domain-name or domain-uuid}
You can control the behavior of the guest shutdown by modifying the on_shutdown parameter in the guest's configuration file.
Reboot a guest using the virsh command:
# virsh reboot {domain-id, domain-name or domain-uuid}
You can control the behavior of the rebooting guest by modifying the on_reboot parameter in the guest's configuration file.
Force a guest to stop with the virsh command:
# virsh destroy {domain-id, domain-name or domain-uuid}
This command does an immediate ungraceful shutdown and stops the specified guest. Using virsh destroy can corrupt guest file systems. Use the destroy option only when the guest is unresponsive. For para-virtualized guests, use the shutdown option (Shut down a guest) instead.
To get the domain ID of a guest:
# virsh domid {domain-name or domain-uuid}
To get the domain name of a guest:
# virsh domname {domain-id or domain-uuid}
To get the UUID of a guest:
# virsh domuuid {domain-id or domain-name}
An example of virsh domuuid output:
# virsh domuuid r5b2-mySQL01
4a4c59a7-ee3f-c781-96e4-288f2862f011
Using virsh with the guest's domain ID, domain name or UUID, you can display information on the specified guest:
# virsh dominfo {domain-id, domain-name or domain-uuid}
An example of virsh dominfo output:
# virsh dominfo r5b2-mySQL01
id:             13
name:           r5b2-mysql01
uuid:           4a4c59a7-ee3f-c781-96e4-288f2862f011
os type:        linux
state:          blocked
cpu(s):         1
cpu time:       11.0s
max memory:     512000 kb
used memory:    512000 kb
To display information about the host:
# virsh nodeinfo
An example of virsh nodeinfo output:
# virsh nodeinfo
CPU model            x86_64
CPU(s)               8
CPU frequency        2895 Mhz
CPU socket(s)        2
Core(s) per socket   2
Threads per core:    2
Numa cell(s)         1
Memory size:         1046528 kb
virsh:
# virsh list
Other options include the --inactive option to list inactive guests (that is, guests that have been defined but are not currently active), and
the --all option, which lists all guests. For example:
# virsh list --all
 Id Name                 State
----------------------------------
  0 Domain-0             running
  1 Domain202            paused
  2 Domain010            inactive
  3 Domain9600           crashed
The output from virsh list is categorized as one of the six states (listed below).
The running state refers to guests which are currently active on a CPU.
Guests listed as blocked are blocked, and are not running or runnable. This is caused by a guest waiting on I/O (a traditional wait state) or guests in a sleep mode.
The paused state lists domains that are paused. This occurs if an administrator uses the pause button in virt-manager, xm pause or virsh suspend. When a guest is paused it consumes memory and other resources but it is ineligible for scheduling and CPU resources from the hypervisor.
The shutdown state is for guests in the process of shutting down. The guest is sent a shutdown signal and should be in the process of stopping its operations gracefully. This may not work with all guest operating systems; some operating systems do not respond to these signals.
Domains in the dying state are in the process of dying, which is a state where the domain has not completely shut down or crashed.
crashed guests have failed while running and are no longer running. This state can only occur if the guest has been configured not to restart on crash.
virsh:
# virsh vcpuinfo {domain-id, domain-name or domain-uuid}
An example of virsh vcpuinfo output:
# virsh vcpuinfo r5b2-mySQL01
VCPU:           0
CPU:            0
State:          blocked
CPU time:       0.0s
CPU Affinity:   yy
To set the affinity of virtual CPUs with physical CPUs:
# virsh vcpupin {domain-id, domain-name or domain-uuid} vcpu, cpulist
Where vcpu is the virtual VCPU number and cpulist lists the physical number of CPUs.
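For example, the following sketch pins virtual CPU 0 of a guest to physical CPUs 0 and 1; the guest name Guest1 is an example value:
# virsh vcpupin Guest1 0 0,1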
virsh:
# virsh setvcpus {domain-name, domain-id or domain-uuid} count
The new count value cannot exceed the count specified when the guest was created.
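A usage sketch, assuming a guest named Guest1 that was created with at least two virtual CPUs:
# virsh setvcpus Guest1 2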
virsh :
# virsh setmem {domain-id or domain-name} count
Use virsh domblkstat to display block device statistics for a running guest:
# virsh domblkstat GuestName block-device
Use virsh domifstat to display network interface statistics for a running guest:
# virsh domifstat GuestName interface-device
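Usage sketches for both statistics commands; the guest name Guest1, the block device xvda and the interface vif1.0 are example values that must match your guest's configuration:
# virsh domblkstat Guest1 xvda
# virsh domifstat Guest1 vif1.0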
A guest can be migrated to another host with virsh. Add --live for live migration. The migrate command accepts parameters in the following format:
# virsh migrate --live GuestName DestinationURL
The --live parameter is optional. Add the --live parameter for live migrations.
The GuestName parameter represents the name of the guest which you want to migrate.
The DestinationURL parameter is the URL or hostname of the destination system. The destination system must run the same version of Fedora, be using the same hypervisor and have libvirt running.
Virtual networks can be managed with the virsh command. To list virtual networks:
# virsh net-list
# virsh net-list
Name                 State      Autostart
-----------------------------------------
default              active     yes
vnet1                active     yes
vnet2                active     yes
To view network information for a specific virtual network:
# virsh net-dumpxml NetworkName
# virsh net-dumpxml vnet1
<network>
<name>vnet1</name>
<uuid>98361b46-1581-acb7-1643-85a412626e70</uuid>
<forward dev='eth0'/>
<bridge name='vnet0' stp='on' forwardDelay='0' />
<ip address='192.168.100.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.100.128' end='192.168.100.254' />
</dhcp>
</ip>
</network>
Other virsh commands used in managing virtual networks are listed below (see the usage sketch after this list):
virsh net-autostart network-name — Autostart a network specified as network-name.
virsh net-create XMLfile — generates and starts a new network using an existing XML file.
virsh net-define XMLfile — generates a new network device from an existing XML file without starting it.
virsh net-destroy network-name — destroy a network specified as network-name.
virsh net-name networkUUID — convert a specified networkUUID to a network name.
virsh net-uuid network-name — convert a specified network-name to a network UUID.
virsh net-start nameOfInactiveNetwork — starts an inactive network.
virsh net-undefine nameOfInactiveNetwork — removes the definition of an inactive network.
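As a sketch of how these commands fit together, the following defines, autostarts and starts a network from an XML file; the file name newnet.xml is an example only and is assumed to define a network named newnet with the same structure as the virsh net-dumpxml output shown above:
# virsh net-define newnet.xml
# virsh net-autostart newnet
# virsh net-start newnet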
This section describes the Virtual Machine Manager (virt-manager) windows, dialog boxes, and various GUI controls.
virt-manager provides a graphical view of hypervisors and guests on your system and on remote machines. You can use virt-manager to define both para-virtualized and fully virtualized guests. virt-manager can perform virtualization management tasks, including:


virt-manager. The UUID field displays the globally unique identifier for the virtual machines.

virt-manager details window
VNC is only listening on the management host (dom0)'s loopback address (127.0.0.1). This ensures only those with shell privileges on the host can access virt-manager and the virtual machine through VNC.
virt-manager provides a 'sticky key' capability to send these sequences. You must press any modifier key (Ctrl or Alt) 3 times and the key you specify gets treated as active until the next non-modifier key is pressed. You can then send Ctrl-Alt-F11 to the guest by entering the key sequence 'Ctrl Ctrl Ctrl Alt+F11'.
To start a virt-manager session, open the Applications menu, then the System Tools menu and select Virtual Machine Manager (virt-manager).
The virt-manager main window appears.

virt-manager can be started remotely using ssh, as demonstrated in the following command:
ssh -X host's address
[remotehost]# virt-manager
Using ssh to manage virtual machines and hosts is discussed further in Section 13.1, “Remote management with SSH”.




























DHCP range


Table of Contents
Use virsh to set a guest, TestServer, to automatically start when the host boots.
# virsh autostart TestServer
Domain TestServer marked as autostarted
To disable guest autostart, use the --disable parameter:
# virsh autostart --disable TestServer
Domain TestServer unmarked as autostarted
Install the KVM package
# yum install kvm
Verify which kernel is in use
Use the uname command to determine which kernel is running:
$ uname -r
2.6.23.14-107.fc8xen
The "2.6.23.14-107.fc8xen" kernel is running on the system. If the default kernel, "2.6.23.14-107.fc8", is running you can skip this substep.
Changing the Xen kernel to the default kernel
The grub.conf file determines which kernel is booted. To change the default kernel, edit the /boot/grub/grub.conf file as shown below.
default=1
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Fedora (2.6.23.14-107.fc8)
root (hd0,0)
kernel /vmlinuz-2.6.23.14-107.fc8 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
initrd /initrd-2.6.23.14-107.fc8.img
title Fedora (2.6.23.14-107.fc8xen)
root (hd0,0)
kernel /xen.gz-2.6.23.14-107.fc8
module /vmlinuz-2.6.23.14-107.fc8xen ro root=/dev/VolGroup00/LogVol00 rhgb quiet
module /initrd-2.6.23.14-107.fc8xen.img
0 (or the number for the default kernel):
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Fedora (2.6.23.14-107.fc8)
root (hd0,0)
kernel /vmlinuz-2.6.23.14-107.fc8 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
initrd /initrd-2.6.23.14-107.fc8.img
title Fedora (2.6.23.14-107.fc8xen)
root (hd0,0)
kernel /xen.gz-2.6.23.14-107.fc8
module /vmlinuz-2.6.23.14-107.fc8xen ro root=/dev/VolGroup00/LogVol00 rhgb quiet
module /initrd-2.6.23.14-107.fc8xen.img
Reboot to load the new kernel
$ lsmod | grep kvm
kvm_intel              85992  1
kvm                   222368  2 ksm,kvm_intel
The kvm module and either the kvm_intel module or the kvm_amd module are present if everything worked.
Install the Xen packages
# yum install kernel-xen xen
Verify which kernel is in use
Use the uname command to determine which kernel is running.
$ uname -r
2.6.23.14-107.fc8
The "2.6.23.14-107.fc8" kernel is running on the system. This is the default kernel. If the kernel has xen on the end (for example, 2.6.23.14-107.fc8xen), then the Xen kernel is running and you can skip this substep.
Changing the default kernel to the Xen kernel
The grub.conf file determines which kernel is booted. To change the default kernel, edit the /boot/grub/grub.conf file as shown below.
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Fedora (2.6.23.14-107.fc8)
root (hd0,0)
kernel /vmlinuz-2.6.23.14-107.fc8 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
initrd /initrd-2.6.23.14-107.fc8.img
title Fedora (2.6.23.14-107.fc8xen)
root (hd0,0)
kernel /xen.gz-2.6.23.14-107.fc8
module /vmlinuz-2.6.23.14-107.fc8xen ro root=/dev/VolGroup00/LogVol00 rhgb quiet
module /initrd-2.6.23.14-107.fc8xen.img
1 (or the number for the Xen kernel):
default=1
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Fedora (2.6.23.14-107.fc8)
root (hd0,0)
kernel /vmlinuz-2.6.23.14-107.fc8 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
initrd /initrd-2.6.23.14-107.fc8.img
title Fedora (2.6.23.14-107.fc8xen)
root (hd0,0)
kernel /xen.gz-2.6.23.14-107.fc8
module /vmlinuz-2.6.23.14-107.fc8xen ro root=/dev/VolGroup00/LogVol00 rhgb quiet
module /initrd-2.6.23.14-107.fc8xen.img
Reboot to load the new kernel
Verify the Xen kernel is running with the uname command:
$ uname -r
2.6.23.14-107.fc8xen
If the output has xen on the end, the Xen kernel is running.
The qemu-img command line tool is used for formatting various file systems used by Xen and KVM. qemu-img should be used for formatting virtualized guest images, additional storage devices and network storage. qemu-img options and usages are listed below.
# qemu-img create [-6] [-e] [-b base_image] [-f format] filename [size]
# qemu-img convert [-c] [-e] [-f format] filename [-O output_format] output_filename
qcow or cow. The empty sectors are detected and suppressed from the destination image.
The info parameter displays information about a disk image. The format for the info option is as follows:
# qemu-img info [-f format] filename
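Usage sketches for the create and info options; the image path and size are example values:
# qemu-img create -f qcow2 /var/lib/libvirt/images/Demo.img 5G
# qemu-img info /var/lib/libvirt/images/Demo.img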
raw: raw disk image format (the default). Use qemu-img info to know the real size used by the image, or ls -ls on Unix/Linux.
qcow2, qcow, cow: QEMU copy-on-write image formats. The cow format is included only for compatibility with previous versions. It does not work with Windows.
vmdk: VMware-compatible image format. cloop: compressed loop image format.
When all available memory is exhausted, the pdflush process starts. pdflush kills processes to free memory so the system does not crash. pdflush may destroy virtualized guests or other system processes, which may cause file system errors and may leave virtualized guests unbootable.
(0.5 * RAM) + (overcommit ratio * RAM) = Recommended swap size
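A worked example of the formula, assuming a host with 8 GB of RAM and an overcommit ratio of 0.5: (0.5 * 8 GB) + (0.5 * 8 GB) = 8 GB of recommended swap. The same arithmetic as a quick shell sketch; the values are placeholders to replace with your own:
RAM_MB=8192     # example: 8 GB of RAM, in megabytes
RATIO=0.5       # example overcommit ratio
echo "($RAM_MB * 0.5) + ($RAM_MB * $RATIO)" | bc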
Edit the /etc/grub.conf file to use the virtualization kernel. You must use the xen kernel to use the Xen hypervisor. Copy your existing xen kernel entry; make sure you copy all of the important lines or your system will panic upon boot (initrd will have a length of '0'). If you require xen hypervisor specific values, you must append them to the xen line of your grub entry.
The example below is a grub.conf entry from a system running the kernel-xen package. The grub.conf on your system may vary. The important part in the example below is the section from the title line to the next new line.
#boot=/dev/sda
default=0
timeout=15
#splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
terminal --timeout=10 serial console

title Fedora (2.6.23.14-107.fc8xen)
        root (hd0,0)
        kernel /xen.gz-2.6.23.14-107.fc8 com1=115200,8n1
        module /vmlinuz-2.6.23.14-107.fc8xen ro root=/dev/VolGroup00/LogVol00
        module /initrd-2.6.23.14-107.fc8xen.img
Your grub.conf could look very different if it has been manually edited before or copied from an example.
To limit the amount of memory assigned to the host, append dom0_mem=256M to the xen line in your grub.conf. A modified version of the grub configuration file in the previous example:
#boot=/dev/sda
default=0
timeout=15
#splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
terminal --timeout=10 serial console

title Fedora (2.6.23.14-107.fc8xen)
        root (hd0,0)
        kernel /xen.gz-2.6.23.14-107.fc8 com1=115200,8n1 dom0_mem=256MB
        module /vmlinuz-2.6.23.14-107.fc8xen ro root=/dev/VolGroup00/LogVol00
        module /initrd-2.6.23.14-107.fc8xen.img
$ grep -E 'svm|vmx' /proc/cpuinfo
The following output contains a vmx entry, indicating an Intel processor with the Intel VT extensions:
flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall lm constant_tsc pni monitor ds_cpl vmx est tm2 cx16 xtpr lahf_lm
The following output contains an svm entry, indicating an AMD processor with the AMD-V extensions:
flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm cr8legacy ts fid vid ttp tm stc
flags:" content may appear multiple times for each hyperthread, core or CPU on in the system.
#!/bin/bash
declare -i IS_HVM=0
declare -i IS_PARA=0
check_hvm()
{
IS_X86HVM="$(strings /proc/acpi/dsdt | grep int-xen)"
if [ x"${IS_X86HVM}" != x ]; then
echo "Guest type is full-virt x86hvm"
IS_HVM=1
fi
}
check_para()
{
if $(grep -q control_d /proc/xen/capabilities); then
echo "Host is dom0"
IS_PARA=1
else
echo "Guest is para-virt domU"
IS_PARA=1
fi
}
if [ -f /proc/acpi/dsdt ]; then
check_hvm
fi
if [ ${IS_HVM} -eq 0 ]; then
if [ -f /proc/xen/capabilities ] ; then
check_para
fi
fi
if [ ${IS_HVM} -eq 0 -a ${IS_PARA} -eq 0 ]; then
echo "Baremetal platform"
fi
Alternatively, virtualization support can be checked with the virsh capabilities command.
Save the script below as macgen.py. From that directory you can run the script using ./macgen.py and it will generate a new MAC address. A sample output would look like the following:
$ ./macgen.py
00:16:3e:20:b0:11

#!/usr/bin/python
# macgen.py script to generate a MAC address for virtualized guests on Xen
#
import random
#
def randomMAC():
    mac = [ 0x00, 0x16, 0x3e,
        random.randint(0x00, 0x7f),
        random.randint(0x00, 0xff),
        random.randint(0x00, 0xff) ]
    return ':'.join(map(lambda x: "%02x" % x, mac))
#
print randomMAC()
You can also use the built-in modules of python-virtinst to generate a new MAC address and UUID for use in a guest configuration file:
# echo 'import virtinst.util ; print virtinst.util.uuidToString(virtinst.util.randomUUID())' | python
# echo 'import virtinst.util ; print virtinst.util.randomMAC()' | python
The commands above can also be implemented as a script, as seen below:

#!/usr/bin/env python
# -*- mode: python; -*-
print ""
print "New UUID:"
import virtinst.util ; print virtinst.util.uuidToString(virtinst.util.randomUUID())
print "New MAC:"
import virtinst.util ; print virtinst.util.randomMAC()
print ""
vsftpd can provide access to installation trees for para-virtualized guests or other data. If you have not installed vsftpd during the server installation, you can grab the RPM package from the Server directory of your installation media and install it using rpm -ivh vsftpd*.rpm (note that the RPM package must be in your current directory).
To configure vsftpd, edit /etc/passwd using vipw and change the ftp user's home directory to the directory where you are going to keep the installation trees for your para-virtualized guests. An example entry for the FTP user would look like the following:
ftp:x:14:50:FTP User:/xen/pub:/sbin/nologin
To have vsftpd start automatically during system boot, use the chkconfig utility to enable the automatic start up of vsftpd.
Verify that vsftpd is not already enabled using the chkconfig --list vsftpd command:
$ chkconfig --list vsftpd
vsftpd          0:off   1:off   2:off   3:off   4:off   5:off   6:off
Run chkconfig --levels 345 vsftpd on to start vsftpd automatically for run levels 3, 4 and 5.
Use the chkconfig --list vsftpd command to verify that vsftpd has been enabled to start during system boot:
$ chkconfig --list vsftpd
vsftpd          0:off   1:off   2:off   3:on    4:on    5:on    6:off
Use the service vsftpd start command to start the vsftpd service:
$ service vsftpd start
Starting vsftpd for vsftpd:                                [  OK  ]
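To confirm that the installation tree is reachable over FTP, you could fetch a file from it with a standard client; the host name and path below are purely hypothetical placeholders:

$ wget ftp://ftp.example.com/pub/images/boot.iso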
You can use udev to implement LUN persistence. Before implementing LUN persistence in your system, ensure that you acquire the proper UUIDs. Once you acquire these, you can configure LUN persistence by editing the /etc/scsi_id.config file. Once you have this file open in a text editor, comment out this line:
# options=-b
Then add this line to enable UUID-based identification:

options=-g
To retrieve the UUID for a given device, run the scsi_id command:
# scsi_id -g -s /block/sdc
3600a0b80001327510000015427b625e
Next, create a file named 20-names.rules in the /etc/udev/rules.d directory and add the device naming rules to it. The device naming rules follow this format:
KERNEL="sd*", BUS="scsi", PROGRAM="/sbin/scsi_id", RESULT="UUID", NAME="devicename"
Replace UUID and devicename with the UUID retrieved above and a name for the device. The rule should resemble the following:
KERNEL="sd*", BUS="scsi", PROGRAM="/sbin/scsi_id", RESULT="3600a0b80001327510000015427b625e", NAME="mydevicename"
This rule enables all devices matching the /dev/sd* pattern to be inspected for the given UUID. When udev finds a matching device, it creates a device node called /dev/devicename. For this example, the device node is /dev/mydevicename. Finally, append the /etc/rc.local file with this line:
/sbin/start_udev
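After udev has applied the new rule (for example after running /sbin/start_udev), you can check that the persistent node exists; the name below assumes the example rule above:

# ls -l /dev/mydevicename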
To implement LUN persistence in a multipath environment, define alias names for your multipath devices. Add a multipath block for each device to the multipath.conf file that resides in the /etc/ directory:
multipath {
             wwid       3600a0b80001327510000015427b625e
             alias      oramp1
}
multipath {
             wwid       3600a0b80001327510000015427b6
             alias      oramp2
}
multipath {
             wwid       3600a0b80001327510000015427b625e
             alias      oramp3
}
multipath {
             wwid       3600a0b80001327510000015427b625e
             alias      oramp4
}
This configuration creates four LUNs named /dev/mpath/oramp1, /dev/mpath/oramp2, /dev/mpath/oramp3 and /dev/mpath/oramp4. The devices reside in the /dev/mpath directory. These LUN names are persistent over reboots because the alias names are created based on the wwid of each LUN (each multipath block must reference the unique wwid of the corresponding LUN).
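To verify that the aliases are in place, you can list the multipath topology, for example:

# multipath -ll

Each alias (oramp1 through oramp4) should appear in the output together with its wwid and paths.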
SMART disk monitoring can be disabled for guests, as the guests run on virtual disks and the physical storage is managed by the host:

/sbin/service smartd stop
/sbin/chkconfig --del smartd
To duplicate an existing guest configuration file, you must generate a new UUID for the copy with the uuidgen command. Then, for the vif entries, you must define a unique MAC address for each guest (if you are copying a guest configuration from an existing guest, you can create a script to handle it). For the Xen bridge information, if you move an existing guest configuration file to a new host, you must update the xenbr entry to match your local networking configuration. For the device entries, you must modify the entries in the disk= section to point to the correct guest image.
Edit the HOSTNAME entry in the /etc/sysconfig/network file to match the new guest's hostname.
Edit the HWADDR address in the /etc/sysconfig/network-scripts/ifcfg-eth0 file to match the output from ifconfig eth0, and if you use static IP addresses, modify the IPADDR entry.
name and uuid: update the name entry with a unique name for the new guest, and update the uuid entry with a new UUID generated by the uuidgen command. A sample UUID output:
$ uuidgen
a984a14f-4191-4d14-868e-329906b211e5
vif and xenbr: update the vif entry with a unique MAC address for the guest, and the xenbr entry to correspond with your local networking configuration (you can obtain the bridge information using the brctl show command).
disk=: update the device entries in the disk= section to point to the correct guest image (see the sketch after this list).
/etc/sysconfig/network: change the HOSTNAME entry to the guest's new hostname.
/etc/sysconfig/network-scripts/ifcfg-eth0: change the HWADDR address to the output from ifconfig eth0.
Also in this file, change the IPADDR entry if a static IP address is used.
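As a sketch only, the relevant lines of a duplicated Xen guest configuration file might end up looking like the following. All values shown (guest name, UUID, MAC address, bridge name and image path) are illustrative assumptions and must be replaced with your own:

name = "newguest"
uuid = "a984a14f-4191-4d14-868e-329906b211e5"
vif = [ "mac=00:16:3e:20:b0:11,bridge=xenbr0" ]
disk = [ "tap:aio:/var/lib/xen/images/newguest.img,xvda,w" ]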
libvirt.
virsh can handle XML configuration files. You may want to use this to your advantage for scripting large deployments with special options. You can add devices defined in an XML file to a running para-virtualized guest. For example, to add an ISO file as hdc to a running guest, create an XML file:
# cat satelliteiso.xml
<disk type="file" device="disk">
    <driver name="file"/>
    <source file="/var/lib/libvirt/images/rhn-satellite-5.0.1-11-redhat-linux-as-i386-4-embedded-oracle.iso"/>
    <target dev="hdc"/>
    <readonly/>
</disk>

Run virsh attach-device to attach the ISO as hdc to a guest called "satellite":
# virsh attach-device satellite satelliteiso.xml
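If the device needs to be removed later, virsh provides a matching detach command that takes the same XML description, for example:

# virsh detach-device satellite satelliteiso.xml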
The number of available loop devices is controlled by the loop module options in /etc/modprobe.conf. Edit /etc/modprobe.conf and add the following line to it:
options loop max_loop=64
This change takes effect the next time the loop module is loaded (for example, after a reboot). To employ loop device backed guests for a para-virtualized system, use the phy: block device or tap:aio commands. To employ loop device backed guests for a fully virtualized system, use the phy: device or file: file commands.
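A minimal sketch of creating a file to back a loop device guest, assuming a hypothetical path and a 4 GB image size:

# dd if=/dev/zero of=/var/lib/xen/images/loopguest.img bs=1M count=4096

The resulting file could then be referenced from the guest configuration, for example as tap:aio:/var/lib/xen/images/loopguest.img for a para-virtualized guest or file:/var/lib/xen/images/loopguest.img for a fully virtualized guest.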
Run the cat /proc/cpuinfo | grep -E 'vmx|svm' command. If the command produces output, the virtualization extensions are now enabled. If there is no output, your system may not have the virtualization extensions or the correct BIOS setting enabled.
The libvirt website (http://libvirt.org/) is the official upstream site for the libvirt virtualization API.
/usr/share/doc/xen-<version-number>/ is the directory which contains information about the Xen para-virtualization hypervisor and associated management tools, including various example configurations, hardware-specific information, and the current Xen upstream user documentation.
man virsh and /usr/share/doc/libvirt-<version-number> — Contains sub commands and options for the virsh virtual machine management utility as well as comprehensive information about the libvirt virtualization library API.
/usr/share/doc/gnome-applet-vm-<version-number> — Documentation for the GNOME graphical panel applet that monitors and manages locally-running virtual machines.
/usr/share/doc/libvirt-python-<version-number> — Provides details on the Python bindings for the libvirt library. The libvirt-python package allows python developers to create programs that interface with the libvirt virtualization management library.
/usr/share/doc/python-virtinst-<version-number> — Provides documentation on the virt-install command that helps in starting installations of Fedora and Linux related distributions inside of virtual machines.
/usr/share/doc/virt-manager-<version-number> — Provides documentation on the Virtual Machine Manager, which provides a graphical tool for administering virtual machines.
Revision History
Revision 12.1.3        Mon Oct 12 2009
dom0 refers to the host instance of Linux running the Hypervisor which facilitates virtualization of guest operating systems. Dom0 runs on and manages the physical hardware and resource allocation for itself and the guest operating systems.
domU refers to the guest operating systems (domains) which run on the host system.
A Universally Unique Identifier (UUID) is a standardized numbering method for devices, systems and certain software objects in distributed computing environments. In virtualization, UUIDs include ext2 and ext3 file system identifiers, RAID device identifiers, iSCSI and LUN device identifiers, MAC addresses and virtual machine identifiers.

