User manual XEN XEN 3


If this document matches the user guide, instruction manual, feature list or schematics you are looking for, download it now. Diplodocs provides fast and easy access to the user manual XEN XEN 3. We hope that this XEN XEN 3 user guide will be useful to you.


XEN XEN 3: Download the complete user guide (277 KB)

Manual abstract: user guide XEN XEN 3

Detailed instructions for use are in the User's Guide.

[. . . ] Users' Manual Xen v3.0

DISCLAIMER: This documentation is always under active development and as such there may be mistakes and omissions -- watch out for these and please report any you find to the developers' mailing list, xen-devel@lists.xensource.com. Contributions of material, suggestions and corrections are welcome.

Xen is Copyright © 2002-2005, University of Cambridge, UK, XenSource Inc., IBM Corp., Hewlett-Packard Co., Intel Corp., AMD Inc., and others. Most portions of Xen are licensed for copying under the terms of the GNU General Public License, version 2. Other portions are licensed under the terms of the GNU Lesser General Public License, the Zope Public License 2.0, or under "BSD-style" licenses.

[. . . ] For example, if you have iSCSI disks or GNBD volumes imported into domain 0, you can export these to other domains using the phy: disk syntax, e.g.:

  disk = ['phy:vg/lvm1,sda2,w']

Warning: Block device sharing. Block devices should typically only be shared between domains in a read-only fashion, otherwise the Linux kernel's file systems will get very confused, as the file system structure may change underneath them (having the same ext3 partition mounted rw twice is a sure-fire way to cause irreparable damage!). Xend will attempt to prevent you from doing this by checking that the device is not mounted read-write in domain 0, and hasn't already been exported read-write to another domain. If you want read-write sharing, export the directory to other domains via NFS from domain 0 (or use a cluster file system such as GFS or ocfs2).

6.2 Using File-backed VBDs

It is also possible to use a file in Domain 0 as the primary storage for a virtual machine. As well as being convenient, this also has the advantage that the virtual block device will be sparse -- space will only really be allocated as parts of the file are used. So if a virtual machine uses only half of its disk space, then the file really takes up only half of the size allocated.

For example, to create a 2GB sparse file-backed virtual block device (which actually consumes only 1KB of disk):

  # dd if=/dev/zero of=vm1disk bs=1k seek=2048k count=1

Make a file system in the disk file:

  # mkfs -t ext3 vm1disk

(When the tool asks for confirmation, answer `y'.) Populate the file system, e.g. by copying from the current root:

  # mount -o loop vm1disk /mnt
  # cp -ax /{root,dev,var,etc,usr,bin,sbin,lib} /mnt
  # mkdir /mnt/{proc,sys,home,tmp}

Tailor the file system by editing /etc/fstab, /etc/hostname, etc. Don't forget to edit the files in the mounted file system, instead of your domain 0 filesystem: e.g. you would edit /mnt/etc/fstab rather than /etc/fstab. Now unmount (this is important!):

  # umount /mnt

In the configuration file set:

  disk = ['file:/full/path/to/vm1disk,sda1,w']

As the virtual machine writes to its `disk', the sparse file will be filled in and consume more space, up to the original 2GB.

Note that file-backed VBDs may not be appropriate for backing I/O-intensive domains. File-backed VBDs are known to experience substantial slowdowns under heavy I/O workloads, due to the I/O handling by the loopback block device used to support file-backed VBDs in dom0. Better I/O performance can be achieved by using either LVM-backed VBDs (Section 6.3) or physical devices as VBDs (Section 6.1).

Linux supports a maximum of eight file-backed VBDs across all domains by default. This limit can be statically increased by using the max_loop module parameter if CONFIG_BLK_DEV_LOOP is compiled as a module in the dom0 kernel, or by using the max_loop=n boot option if CONFIG_BLK_DEV_LOOP is compiled directly into the dom0 kernel.

6.3 Using LVM-backed VBDs

A particularly appealing solution is to use LVM volumes as backing for domain filesystems, since this allows dynamic growing/shrinking of volumes as well as snapshots and other features.

To initialize a partition to support LVM volumes:

  # pvcreate /dev/sda10

Create a volume group named `vg' on the physical partition:

  # vgcreate vg /dev/sda10

Create a logical volume of size 4GB named `myvmdisk1':

  # lvcreate -L4096M -n myvmdisk1 vg

You should now see that you have a /dev/vg/myvmdisk1 device. Make a filesystem, mount it and populate it, e.g.:

  # mkfs -t ext3 /dev/vg/myvmdisk1
  # mount /dev/vg/myvmdisk1 /mnt
  # cp -ax / /mnt
  # umount /mnt

Now configure your VM with the following disk configuration:

  disk = ['phy:vg/myvmdisk1,sda1,w']

LVM enables you to grow the size of logical volumes, but you'll need to resize the corresponding file system to make use of the new space (a sketch of this follows below).

You can also use LVM for creating copy-on-write (CoW) clones of LVM volumes (known as writable persistent snapshots in LVM terminology). This facility is new in Linux 2.6.8, so isn't as stable as one might hope. In particular, using lots of CoW LVM disks consumes a lot of dom0 memory, and error conditions such as running out of disk space are not handled well.

To create two copy-on-write clones of the above file system, you would use the following commands:

  # lvcreate -s -L1024M -n myclonedisk1 /dev/vg/myvmdisk1
  # lvcreate -s -L1024M -n myclonedisk2 /dev/vg/myvmdisk1

Each of these can grow to have 1GB of differences from the master volume. You can grow the amount of space for storing the differences using the lvextend command, e.g.:

  # lvextend -L+100M /dev/vg/myclonedisk1

Don't let the `differences volume' ever fill up, otherwise LVM gets rather confused. It may be possible to automate the growing process by using dmsetup wait to spot the volume getting full and then issue an lvextend.
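As a rough illustration of that idea (a sketch only, not from the manual: the device-mapper name vg-myclonedisk1 and the 90% threshold are assumptions, and a real script would track the event counter that dmsetup wait expects):

  #!/bin/sh
  # Watch a CoW snapshot and grow its `differences volume' before it fills.
  DM=vg-myclonedisk1            # device-mapper name of the snapshot (assumed)
  LV=/dev/vg/myclonedisk1
  while true; do
      dmsetup wait "$DM"        # block until the next event on the device
      # For a snapshot target, field 4 of `dmsetup status' is <used>/<total> sectors.
      pct=$(dmsetup status "$DM" | awk '{ split($4, a, "/"); print int(a[1] * 100 / a[2]) }')
      [ "$pct" -ge 90 ] && lvextend -L+100M "$LV"
  done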
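Relatedly, growing the master volume itself is a two-step job, since the file system must be resized separately from the logical volume. A minimal sketch, assuming the ext3 volume created above and that it is unmounted while being resized (resize2fs generally insists on a prior e2fsck -f in that case):

  # lvextend -L+1G /dev/vg/myvmdisk1
  # e2fsck -f /dev/vg/myvmdisk1
  # resize2fs /dev/vg/myvmdisk1

A running guest will not normally see the new size until the device is re-attached or the domain is restarted.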
[. . . ] For example, using machines hostA and hostB:

  hostA# ifconfig vnif0004 10.0.0.100 up
  hostB# ifconfig vnif0004 10.0.0.101 up
  hostB# ping 10.0.0.100

The vnet implementation uses IP multicast to discover vnet interfaces, so all machines hosting vnets must be reachable by multicast. Network switches are often configured not to forward multicast packets, so this often means that all machines using a vnet must be on the same LAN segment, unless you configure vnet forwarding.

You can test multicast coverage by pinging the vnet multicast address:

  # ping -b 224.10.0.1

You should see replies from all machines with the vnet module running. You can see if vnet packets are being sent or received by dumping traffic on the vnet UDP port:

  # tcpdump udp port 1798

If multicast is not being forwarded between machines, you can configure multicast forwarding using vn. [. . . ]
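One further multicast check that may help here (an aside, not from the manual: it assumes the net-tools netstat and the 224.10.0.1 vnet address used above): each host running the vnet module should appear as a member of the vnet multicast group, which you can list with

  # netstat -g

Look for 224.10.0.1 against the physical interface; if it is absent, multicast discovery cannot work on that host regardless of switch configuration.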
