Virtualization Blog

Discussions and observations on virtualization.

XenServer Administrators Handbook Published

Last year, I announced that we were working on a XenServer Administrators Handbook, and I'm very pleased to announce that it's been published. Not only is it published, but based on the Amazon reviews to date, we've done a pretty decent job. I suspect that has a lot to do with the book's focus on the information you, XenServer administrators, need to be successful when running a XenServer environment, regardless of scale or workload.

The handbook follows a simple premise: first you need to plan your deployment, and second you need to run it. With that in mind, we start with exactly what a XenServer is, define how it works, and describe the expectations it places on infrastructure. After all, it's critical to understand how a product like XenServer interfaces with the real world, and how its virtual objects relate to each other. We even cover some of the misunderstandings those new to XenServer might have.

While it might be tempting to go deep on some of this stuff, Jesse and I both recognized that virtualization SREs have a job to do, and that's to run virtual infrastructure. As interesting as it might be to dig into how the product is implemented, that's not the role of an administrator's handbook. That's why the second half of the book presents real-world scenarios and how to go about solving them.

We had an almost limitless list of scenarios to choose from, and what you see in the book represents real-world situations which most SREs will face at some point. The goal of this format is a handbook which can be actively used, not something which is read once and placed on some shelf (virtual or physical). During the technical review phase, we sent copies out to actual XenServer admins, all of whom stated that we'd presented some piece of information they hadn't previously known. I for one consider that to be a fantastic compliment.

Lastly, I want to finish off by saying that, like all good works, this is very much a "we" effort. Jesse did a top-notch job as co-author and brings the experience of someone whose job it is to help solve customer problems. Our technical reviewers added tremendously to the polish you'll find in the book. The O'Reilly Media team was a pleasure to work with, pushing when we needed to be pushed but understanding that day jobs and family take precedence.

So whether you're looking at XenServer out of personal interest, have been tasked with designing a XenServer installation to support Citrix workloads, clouds, or general-purpose virtualization, or have a XenServer environment to call your own, there is something in here for you. On behalf of Jesse and myself, we hope that everyone who gets a copy finds it valuable. The XenServer Administrator's Handbook is available from booksellers everywhere, including:

Amazon: http://www.amazon.com/XenServer-Administration-Handbook-Successful-Deployments/dp/149193543X/

Barnes and Noble: http://www.barnesandnoble.com/w/xenserver-administration-handbook-tim-mackey/1123640451

O'Reilly Media: http://shop.oreilly.com/product/0636920043737.do

If you need a copy of XenServer to work with, you can obtain that for free from: http://xenserver.org/download

Recent Comments
Tobias Kreidl
A timely publication, given all the major recent enhancements to XenServer. It's packed with a lot of hands-on, practical advice a... Read More
Tuesday, 03 May 2016 03:37
Eric Hosmer
Been looking forward to getting this book, just purchased it on Amazon. Now I just need to find that mythical free time to read ... Read More
Friday, 06 May 2016 22:41
Continue reading
11762 Hits
2 Comments

Implementing VDI-per-LUN storage

With storage providers adding functionality like QoS and fast snapshot & clone, and with the advent of storage-as-a-service, we are interested in the ability to utilize these features from XenServer. VMware's VVols offering already allows integration of vendor-provided storage features into their hypervisor. Since most storage allows operations at the granularity of a LUN, the idea is to have a one-to-one mapping between a LUN on the backend and a virtual disk (VDI) on the hypervisor. In this post we are going to talk about the supplemental pack that we have developed to enable VDI-per-LUN.

XenServer Storage

To understand the supplemental pack, it is useful to first review how XenServer storage works. In XenServer, a storage repository (SR) is a top-level entity which acts as a pool for storing VDIs, which appear to the VMs as virtual disks. XenServer provides different types of SRs (File, NFS, Local, iSCSI). In this post we will be looking at iSCSI-based SRs, as iSCSI is the most popular protocol for remote storage and the supplemental pack we developed targets iSCSI-based SRs. An iSCSI SR uses LVM to store VDIs over logical volumes (hence the type is lvmoiscsi). For instance:

[root@coe-hq-xen08 ~]# xe sr-list type=lvmoiscsi
uuid ( RO)                : c67132ec-0b1f-3a69-0305-6450bfccd790
          name-label ( RW): syed-sr
    name-description ( RW): iSCSI SR [172.31.255.200 (iqn.2001-05.com.equallogic:0-8a0906-c24f8b402-b600000036456e84-syed-iscsi-opt-test; LUN 0: 6090A028408B4FC2846E4536000000B6: 10 GB (EQLOGIC))]
                host ( RO): coe-hq-xen08
                type ( RO): lvmoiscsi
        content-type ( RO):

The above SR is created from a LUN on a Dell EqualLogic. The VDIs belonging to this SR can be listed by:

[root@coe-hq-xen08 ~]# xe vdi-list sr-uuid=c67132ec-0b1f-3a69-0305-6450bfccd790 params=uuid
uuid ( RO)    : ef5633d2-2ad0-4996-8635-2fc10e05de9a

uuid ( RO)    : b7d0973f-3983-486f-8bc0-7e0b6317bfc4

uuid ( RO)    : bee039ed-c7d1-4971-8165-913946130d11

uuid ( RO)    : efd5285a-3788-4226-9c6a-0192ff2c1c5e

uuid ( RO)    : 568634f9-5784-4e6c-85d9-f747ceeada23

[root@coe-hq-xen08 ~]#

This SR has five VDIs. From LVM's perspective, an SR is a volume group (VG) and each VDI is a logical volume (LV) inside that volume group. This can be seen via the following commands:

[root@coe-hq-xen08 ~]# vgs | grep c67132ec-0b1f-3a69-0305-6450bfccd790
  VG_XenStorage-c67132ec-0b1f-3a69-0305-6450bfccd790   1   6   0 wz--n-   9.99G 5.03G
[root@coe-hq-xen08 ~]# lvs VG_XenStorage-c67132ec-0b1f-3a69-0305-6450bfccd790
  LV                                       VG                                                 Attr   LSize 
  MGT                                      VG_XenStorage-c67132ec-0b1f-3a69-0305-6450bfccd790 -wi-a-   4.00M                                 
  VHD-568634f9-5784-4e6c-85d9-f747ceeada23 VG_XenStorage-c67132ec-0b1f-3a69-0305-6450bfccd790 -wi-ao   8.00M                               
  VHD-b7d0973f-3983-486f-8bc0-7e0b6317bfc4 VG_XenStorage-c67132ec-0b1f-3a69-0305-6450bfccd790 -wi-ao   2.45G                               
  VHD-bee039ed-c7d1-4971-8165-913946130d11 VG_XenStorage-c67132ec-0b1f-3a69-0305-6450bfccd790 -wi---   8.00M                                
  VHD-ef5633d2-2ad0-4996-8635-2fc10e05de9a VG_XenStorage-c67132ec-0b1f-3a69-0305-6450bfccd790 -ri-ao   2.45G
VHD-efd5285a-3788-4226-9c6a-0192ff2c1c5e VG_XenStorage-c67132ec-0b1f-3a69-0305-6450bfccd790 -ri-ao  36.00M

Here c67132ec-0b1f-3a69-0305-6450bfccd790 is the UUID of the SR. Each VDI is represented by a corresponding LV named in the format VHD-<VDI UUID>. Some of the LVs have a small size of 8MB; these are snapshots taken on XenServer. There is also an LV named MGT which holds metadata about the SR and the VDIs present in it. Note that all of this is present in an SR which is a LUN on the backend storage.

Now XenServer can attach a LUN at the level of an SR but we want to map a LUN to a single VDI. In order to do that, we restrict an SR to contain a single VDI. Our new SR has the following LVs:

[root@coe-hq-xen09 ~]# lvs VG_XenStorage-1fe527a4-7e96-cdd9-f347-a15c240f26e9
LV                                       VG                                                 Attr   LSize
MGT                                      VG_XenStorage-1fe527a4-7e96-cdd9-f347-a15c240f26e9 -wi-a- 4.00M
VHD-09b14a1b-9c0a-489e-979c-fd61606375de VG_XenStorage-1fe527a4-7e96-cdd9-f347-a15c240f26e9 -wi--- 8.02G
[root@coe-hq-xen09 ~]#

(Diagram: one-to-one mapping between a backend LUN and a single VDI.)

If a snapshot or clone of the LUN is taken on the backend, all the unique identifiers associated with the different entities in the LUN also get cloned, and any attempt to attach the LUN back to XenServer will result in an error because of conflicting unique IDs.

Resignature and supplemental pack

In order for the cloned LUN to be re-attached, we need to resignature the unique IDs present in the LUN. The following IDs need to be resignatured:

  • LVM UUIDs (PV, VG, LV)
  • VDI UUID
  • SR metadata in the MGT Logical volume
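To give a sense of the LVM portion of that work, here is a minimal sketch using stock LVM tools against a hypothetical cloned LUN visible as /dev/sdx (the supplemental pack additionally rewrites the VDI UUID and the MGT metadata, which this sketch omits):

# Assumption: the cloned LUN appears as /dev/sdx; names are illustrative
# vgimportclone generates fresh PV/VG UUIDs and renames the VG so the
# clone no longer collides with the original volume group
vgimportclone --basevgname VG_XenStorage-<new-sr-uuid> /dev/sdx

# Equivalently, the UUIDs can be rewritten by hand while the VG is inactive
pvchange --uuid /dev/sdx
vgchange --uuid VG_XenStorage-<old-sr-uuid>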

We at CloudOps have developed an open-source supplemental pack which solves the resignature problem. You can find it here. The supplemental pack adds a new type of SR (relvmoiscsi), and you can use it to resignature your lvmoiscsi SRs. After installing the supplemental pack, you can resignature a clone using the following command:

[root@coe-hq-xen08 ~]# xe sr-create name-label=syed-single-clone type=relvmoiscsi \
device-config:target=172.31.255.200 \
device-config:targetIQN=$IQN \
device-config:SCSIid=$SCSIid \
device-config:resign=true \
shared=true
Error code: SR_BACKEND_FAILURE_1
Error parameters: , Error reporting error, unknown key The SR has been successfully resigned. Use the lvmoiscsi type to attach it,
[root@coe-hq-xen08 ~]#

Here, instead of creating a new SR, the supplemental pack re-signatures the provided LUN and detaches it (the error is expected as we don’t actually create an SR). You can see from the error message that the SR has been re-signed successfully. Now the cloned SR can be introduced back to XenServer without any conflicts using the following commands:

[root@coe-hq-xen09 ~]# xe sr-probe type=lvmoiscsi device-config:target=172.31.255.200 device-config:targetIQN=$IQN device-config:SCSIid=$SCSIid

   		 5f616adb-6a53-7fa2-8181-429f95bff0e7
   		 /dev/disk/by-id/scsi-36090a028408b3feba66af52e0000a0e6
   		 5364514816

[root@coe-hq-xen09 ~]# xe sr-introduce name-label=vdi-test-resign type=lvmoiscsi \
uuid=5f616adb-6a53-7fa2-8181-429f95bff0e7
5f616adb-6a53-7fa2-8181-429f95bff0e7

This supplemental pack can be used in conjunction with an external orchestrator like CloudStack or OpenStack which can manage both the storage and the compute. Working with SolidFire, we have implemented this functionality, which will be available in the next release of Apache CloudStack. You can check out a preview of this feature in a screencast here.

Recent Comments
Nick
If I am reading this correctly, this is just basically setting up XS to use 1 SR per VM, this isn't scalable as the limits for LUN... Read More
Tuesday, 26 April 2016 14:57
Syed Ahmed
Hi Nick, The limit of 256 SRs is when using Multipating. If no multipath is used, the number of SRs that can be created are well... Read More
Tuesday, 26 April 2016 17:19
Syed Ahmed
There is an initial overhead when creating SRs. However, we did not find any performance degradation in our tests once the SR is s... Read More
Wednesday, 27 April 2016 09:21
Continue reading
8772 Hits
7 Comments

NAU VMbackup 3.0 for XenServer


By Tobias Kreidl and Duane Booher

Northern Arizona University, Information Technology Services

Over eight years ago, back in the days of XenServer 5, not a lot of backup and restore options were available, either as commercial products or as freeware, and we quickly came to the realization that data recovery was a vital component of a production environment, hence we needed an affordable and flexible solution. The conclusion at the time was that we might as well build our own, and though the availability of options has grown significantly over the years, we've stuck with our own home-grown solution, which leverages the Citrix XenServer SDK and XenAPI (http://xenserver.org/partners/developing-products-for-xenserver.html). Early versions were created from the contributions of Douglas Pace, Tobias Kreidl and David McArthur. During the last several years, the lion's share of development has been performed by Duane Booher. This article discusses the latest 3.0 release.

A Bit of History

With the many alternatives now available, one might ask why we have stuck with this rather un-flashy script and CLI-based mechanism. There are clearly numerous reasons. For one, in-house products allow total control over all aspects of their development and support. The financial outlay is all in people's time, and since there are no contracts or support fees, it's very controllable and predictable. We also found from time to time that various features were not readily available in other sources we looked at. As an educational institution, we also felt early on that we could give back to the community by freely providing the product along with its source code; the most recent version is available via GitHub at https://github.com/NAUbackup/VmBackup for free under the terms of the GNU General Public License. There was a write-up at https://www.citrix.com/blogs/2014/06/03/another-successful-community-xenserver-sdk-project-free-backup-tools-and-scripts-naubackup-restore-v2-0-released/ when the first GitHub version was published. Earlier versions were made available via the Citrix community site (Citrix Developer Network), sometimes referred to as the Citrix Code Share, where community contributions were published for a number of products. When that site was discontinued in 2013, we relocated the distribution to GitHub.

Because we “eat our own dog food,” VMbackup gets extensive and constant testing: we rely on it ourselves as the means to create backups and provide for restores in cases of accidental deletion, unexpected data corruption, or the event that disaster recovery might be needed. The mechanisms are carefully tested before going into production, and we perform frequent tests to ensure the integrity of the backups and that restores really do work. A number of times we have had to recover from our backups, and it has been very reassuring that these recoveries have been successful.

What VMbackup Does

Very simply, VMbackup provides a framework for backing up virtual machines (VMs) hosted on XenServer to an external storage device, as well as the means to recover such VMs for whatever reason that might have resulted in loss, be it disaster recovery, restoring an accidentally deleted VM, recovering from data corruption, etc.

The VMbackup distribution consists of a script written in Python, a configuration file, and a README document; beyond that, you only need the XenServer SDK components, which are downloaded separately (see http://xenserver.org/partners/developing-products-for-xenserver.html for details). There is no fancy GUI to become familiar with; instead, just a few simple things need to be configured, plus a destination for the backups needs to be made accessible (this is generally an NFS share, though SMB/CIFS will work as well). Using cron job entries, a single host or an entire pool can be set up to perform periodic backups. Configuration on individual hosts in a pool is needed because the pool master performs the majority of the work and pool mastership can readily change to a different XenServer; individual host-based instances are also needed when local storage is used, since access to any local SR can only be performed from its own XenServer. A cron entry and numerous configuration examples are given in the documentation.

To avoid interruptions of any running VMs, the process of backing up a VM follows these basic steps:

  1. A snapshot of the VM and its storage is made
  2. Using the xe utility vm-export, that snapshot is exported to the target external storage
  3. The snapshot is deleted, freeing up that space
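In plain xe terms, the three steps map roughly onto the following sketch (the VM name and path are illustrative; the script wraps these calls with its own naming, retention and error handling):

# 1. Snapshot the running VM (the VM keeps running)
SNAP_UUID=$(xe vm-snapshot vm="PROD-example" new-name-label="PROD-example-backup")

# 2. Export the snapshot to the external storage target
xe snapshot-export-to-template snapshot-uuid="$SNAP_UUID" filename=/snapshots/BACKUPS/PROD-example.xva

# 3. Delete the snapshot, freeing up that space
xe snapshot-uninstall snapshot-uuid="$SNAP_UUID" force=true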

In addition, some VM metadata are collected and saved, which can be very useful in the event a VM needs to be restored. The metadata include:

  • vm.cfg - includes name_label, name_description, memory_dynamic_max, VCPUs_max, VCPUs_at_startup, os_version, orig_uuid
  • DISK-xvda (for each attached disk)
    • vbd.cfg - includes userdevice, bootable, mode, type, unplugable, empty, orig_uuid
    • vdi.cfg - includes name_label, name_description, virtual_size, type, sharable, read_only, orig_uuid, orig_sr_uuid
  • VIFs (for each attached VIF)
    • vif-0.cfg - includes device, network_name_label, MTU, MAC, other_config, orig_uuid

An additional option is to create a backup of the entire XenServer pool metadata, which is essential in dealing with the aftermath of a major disaster that affects the entire pool. This is accomplished via the “xe pool-dump-database” command.
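For reference, the dump takes a target file name, for example (the path is illustrative):

xe pool-dump-database file-name=/snapshots/BACKUPS/pool-backup.dump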

In the event of errors, there are automatic clean-up procedures in place that will remove any remnants and make sure that earlier successful backups are not purged beyond the specified number of “good” copies to retain.

There are numerous configuration options that allow you to specify which VMs get backed up, how many backup versions are to be retained, and whether the backups should be compressed (1) as part of the process, as well as optional report generation and notification setups.

New Features in VMbackup 3.0

A number of additional features have been added to this latest 3.0 release, adding flexibility and functionality. Some of these came about because of the sheer number of VMs that needed to be dealt with, SR space issues, and changes coming in the next XenServer release. These additions include:

  • VM “preview” option: To be able to look for syntax errors and ensure parameters are being defined properly, a VM can have a syntax check performed on it; if necessary, adjustments can then be made based on the diagnosis to achieve the desired configuration.
  • Support for VMs containing spaces: By surrounding VM names in the configuration file with double quotes, VM names containing spaces can now be processed. 
  • Wildcard suffixes: This very versatile option permits groups of VMs to be configured to be handled similarly, eliminating the need to create individual settings for every desired VM. Examples include “PRD-*”, “SQL*” and, in fact, if all VMs in the pool should be backed up, even “*”. There are, however, a number of restrictions on wildcard usage (2).
  • Exclude VMs: Along with the wildcard option to select which VMs to back up, clearly a need arises to provide the means to exclude certain VMs (in addition to the other alternative, which is simply to rename them such that they do not match a certain backup set). Currently, each excluded VM must be named separately, and any such VMs should be defined at the end of the configuration file.
  • Export the OS disk VDI, only: In some cases, a VM may contain multiple storage devices (VDIs) that are so large that it is impractical or impossible to take a snapshot of the entire VM and its storage. Hence, we have introduced the means to back up and restore only the operating system device (assumed to be Disk 0). In addition to space limitations, some storage, such as DB data, may not be able to be reliably backed up using a full VM snapshot. Furthermore, the next XenServer release (Dundee) will likely support as many as perhaps 255 storage devices per VM, making a vm-export even more involved under such circumstances. Another big advantage here is that currently, this process is more efficient and faster than a VM export by a factor of three or more!
  • Root password obfuscation: So that clear-text passwords associated with the XenServer pool are not embedded in the scripts themselves, the password can be basically encoded into a file.

The mechanism for backing up only the primary disk of a running VM is similar to the full VM backup. The process follows these basic steps:

  1. A snapshot of just the VM's Disk 0 storage is made
  2. Using the xe utility vdi-export, that snapshot is exported to the target external storage
  3. The snapshot is deleted, freeing up that space
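Again as a rough xe sketch (the VDI UUID placeholder is hypothetical; the script locates Disk 0 and handles naming and errors itself):

# 1. Snapshot only the VM's Disk 0 VDI
SNAP_VDI=$(xe vdi-snapshot uuid=<disk0-vdi-uuid>)

# 2. Export that VDI snapshot as a VHD to the external storage target
xe vdi-export uuid="$SNAP_VDI" filename=/snapshots/BACKUPS/PROD-example-xvda.vhd format=vhd

# 3. Destroy the snapshot VDI, freeing up that space
xe vdi-destroy uuid="$SNAP_VDI"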

As with the full VM export, some metadata for the VM are also collected and saved for this VDI export option.

These added features are of course subject to change in future releases, though later editions generally retain support for earlier versions' settings to preserve backwards compatibility.

Examples

Let’s look at the configuration file weekend.cfg:

# Weekend VMs
max_backups=4
backup_dir=/snapshots/BACKUPS
#
vdi-export=PROD-CentOS7-large-user-disks
vm-export=PROD*
vm-export=DEV-RH*:3
exclude=PROD-ubuntu12-benchmark
exclude=PRODtestVM

Comment lines start with a hash mark and may appear anywhere within the file. The hash mark must be the first character on the line. Note that the default number of retained backups is set here to four. The destination directory is set next, indicating where the backups will be written. We then see a case where only the OS disk is being backed up for the specific VM "PROD-CentOS7-large-user-disks" and, below that, all VMs beginning with “PROD” are backed up using the default settings. Just below that, a definition is created for all VMs starting with "DEV-RH", and the number of retained backups for these is reduced from the global default of four down to three. Finally, we see two excludes for specific VMs that fall into the “PROD*” group that should not be backed up at all.

To launch the script manually, you would issue from the command line:

./VmBackup.py password weekend.cfg

To launch the script via a cron job, you would create a single-line entry like this:

10 0 * * 6 /usr/bin/python /snapshots/NAUbackup/VmBackup.py password /snapshots/NAUbackup/weekend.cfg >> /snapshots/NAUbackup/logs/VmBackup.log 2>&1

This would run the task at ten minutes past midnight on Saturday and append output to a log file called VmBackup.log. This cron entry would need to be installed on each host of a XenServer pool.

Additional Notes

It can be helpful to stagger when backups are run so that they don't all have to be done at once, which may be impractical, take so long as to impact performance during the day, or conflict with what is best for specific VMs (such as running before or after patches are applied). These situations are best dealt with by creating separate cron jobs for each subset, as illustrated below.
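Two staggered entries might look like this (the dev.cfg file and the times are illustrative):

10 0 * * 6 /usr/bin/python /snapshots/NAUbackup/VmBackup.py password /snapshots/NAUbackup/weekend.cfg >> /snapshots/NAUbackup/logs/VmBackup.log 2>&1
30 2 * * 0 /usr/bin/python /snapshots/NAUbackup/VmBackup.py password /snapshots/NAUbackup/dev.cfg >> /snapshots/NAUbackup/logs/VmBackup-dev.log 2>&1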

There is a fair load on the server, comparable to any vm-export, and hence the queue is processed linearly, with only one active snapshot-and-export sequence for a VM being run at a time. This is also why we suggest you perform the backups and then asynchronously perform any compression on the external storage host itself, to alleviate the CPU load on the XenServer host end.

For even more redundancy, you can readily duplicate or mirror the backup area to another storage location, perhaps in another building or even somewhere off-site. This can readily be accomplished using various copy or mirroring utilities, such as rcp, sftp, wget, nsync, rsync, etc.
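As one illustration, an rsync mirror to a hypothetical off-site host could be run from the storage server holding the backups:

# Mirror the backup area to an off-site host (host and paths are illustrative)
rsync -av --delete /snapshots/BACKUPS/ offsite-host:/mirror/BACKUPS/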

This latest release has been tested on XenServer 6.5 (SP1) and various beta and technical preview versions of the Dundee release. In particular, note that the vdi-export utility, while it has been around a while, is not well documented, and we strongly recommend not using it on any XenServer release before XS 6.5. Doing so is clearly at your own risk.

The NAU VMbackup distribution can be found at: https://github.com/NAUbackup/VmBackup

In Conclusion

This is a misleading heading, as there is not really a conclusion in the sense that this project continues to be active and as long as there is a perceived need for it, we plan to continue working on keeping it running on future XenServer releases and adding functionality as needs and resources dictate. Our hope is naturally that the community can make at least as good use of it as we have ourselves.

Footnotes:

  1. Alternatively, to save time and resources, the compression can potentially be handled asynchronously by the host onto which the backups are written, hence reducing overhead and resource utilization on the XenServer hosts, themselves.
  2. Certain limitations exist currently with how wildcards can be utilized. Leading wildcards are not allowed, nor are multiple wildcards within a string. This may be enhanced at a later date to provide even more flexibility.

This article was written by Tobias Kreidl and Duane Booher, both of Northern Arizona University, Information Technology Services. Tobias' biography is available at this site, and Duane's LinkedIn profile is at https://www.linkedin.com/in/duane-booher-a068a03; both can also be found on http://discussions.citrix.com, primarily in the XenServer forum.

Recent Comments
Lorscheider Santiago
Tobias Kreidl and Duane Booher, Greart Article! you have thought of a plugin for XenCenter?
Saturday, 09 April 2016 13:28
Tobias Kreidl
Thank you, Lorscheider, for your comment. Our thoughts have long been that others could take this to another level by developing a... Read More
Thursday, 14 April 2016 01:34
Niklas Ahden
Hi, First of all I want to thank you for this great article and NAUBackup. I am wondering about the export-performance while usin... Read More
Sunday, 17 April 2016 19:14
Continue reading
19220 Hits
11 Comments

XenServer 6.5 Can Do True UEFI Boot

Overview

We were interested in getting XenServer 6.5 to boot via UEFI.  Leaving servers in Legacy/BIOS boot was not an option in our target environment.  We still have to do the initial install with the server in Legacy BIOS mode; however, I managed to compile Xen as an EFI-bootable binary using the source and patches distributed by Citrix.  With that, I am able to change the server's boot mode back to UEFI and boot XenServer.  Here are the steps I used to compile it.

Steps

  1. Prepare a DDK
  2. Prepare a build environment
  3. Build some prerequisites
  4. Unpack the SRPM
  5. Compile Xen

DDK Preparation

Development will be done inside a 6.5 DDK. This is a CentOS 5.4-based Linux that has the same kernel as Dom0 and some of the required development tools. 

Import the VM template per Citrix DDK developer documentation.

After importing, set the following VM options:

  • 2 vCPUs
  • Increase memory to 2048MB
  • Resize disk image to 10GB
  • Add a network interface for SSH

Start the VM, set a root password, and then finalize resizing the disk by running:

# fdisk /dev/xvda
Command: d    (delete the existing partition)
Command: n    (new partition)
Command: p    (primary)
Command: 1    (partition number 1; accept the default size)
Command: w    (write the changes)

Preparing the Build Environment

Install rpmdevtools:

# yum --disablerepo citrix install rpmdevtools
# rpmdev-setuptree

I also needed several packages, many of which were provided on the binpkg ISO from Citrix. I made them available by inserting XenServer-6.5-binpkg.iso and running the following:

# mount /dev/xvdb /mnt
# mkdir /opt/binpkg
# cp -a /mnt/domain0/RPMS/* /opt/binpkg
# cd /opt/binpkg
# createrepo /opt/binpkg
# cat >/etc/yum.repos.d/binpkg.repo
[binpkg]
name=binpkg
baseurl=file:///opt/binpkg
gpgcheck=0
^D

I also added the epel repository:

rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm

and I added the following packages:

bzip2-devel
e4fsprogs-devel
gettext
glib-devel
glib2-devel
iasl
netpbm
netpbm-progs
psutils
tetex-dvips
tetex-latex
DejaGnu
tcl
expect
makeinfo
texinfo
pixman-devel

Building the Prerequisites

This page: http://xenbits.xen.org/docs/4.3-testing/misc/efi.html says that it is required to use gcc 4.5 or better and that binutils must be compiled with --enable-targets=x86_64-pep. I could not satisfy this with packages in the repositories, so I compiled and installed some requirements for gcc:

ftp://ftp.gnu.org/gnu/gmp/gmp-4.3.2.tar.gz
http://www.mpfr.org/mpfr-2.4.2/mpfr-2.4.2.tar.gz
http://www.multiprecision.org/mpc/download/mpc-0.8.1.tar.gz

and made sure the required files could be found:

# export LD_LIBRARY_PATH=:/usr/lib:/usr/local/lib:/usr/local/lib64:/usr/lib64
# ln -s /usr/lib64/libcrypto.so.0.9.8e libcrypto.so

then I compiled binutils using: https://ftp.gnu.org/gnu/binutils/binutils-2.22.tar.gz:

# tar xzf binutils-2.22.tar.gz
# mkdir binutils-build
# cd binutils-build
# ../binutils-2.22/configure --disable-werror --enable-targets=x86_64-pep
# make
# make install

then I compiled gcc using: http://mirrors-usa.go-parts.com/gcc/releases/gcc-4.6.2/gcc-4.6.2.tar.gz. I adapted compilation instructions from: http://www.linuxfromscratch.org/lfs/view/7.1/chapter06/gcc.html:

# cd gcc-4.6.2
# sed -i 's/install_to_$(INSTALL_DEST) //' libiberty/Makefile.in
# case `uname -m` in
  i?86) sed -i 's/^T_CFLAGS =$/& -fomit-frame-pointer/' \
  gcc/Makefile.in ;;
  esac
# sed -i 's@./fixinc.sh@-c true@' gcc/Makefile.in
# mkdir -v ../gcc-build
# cd ../gcc-build
# ../gcc-4.6.2/configure --prefix=/usr \
  --libexecdir=/usr/lib --enable-shared \
  --enable-threads=posix --enable-__cxa_atexit \
  --enable-clocale=gnu --enable-languages=c,c++ \
  --disable-multilib --disable-bootstrap --with-system-zlib
# make -j3
# ulimit -s 16384
# make -k check
# ../gcc-4.6.2/contrib/test_summary
# make install

Unpack the SRPM

At this point everything is ready to compile Xen as an EFI bootable binary.  The source code for Xen with Citrix's patches is available here: http://xenserver.org/open-source-virtualization-download.html so download XenServer-6.5.0-source-main-1.iso and mount it at /mnt/cdrom:

# mount -r /dev/xvdb /mnt/cdrom

Now install the SRPM:

# rpm -ivh /mnt/cdrom/xen/xen-4.4.1-1.9.0.459.28798.src.rpm 

Compile Xen

The source code and required scripts are now all under /root/rpmbuild/, so just run:

# cd ~/rpmbuild
# QA_RPATHS=$[ 0x0020 ] rpmbuild -bc SPECS/xen.spec

The -bc flag causes the process to follow the spec file and patch the source, but then stop just before running the make commands. The make commands would fail to compile with warnings about uninitialized variables being treated as errors. Fix this by changing line 45 of ~/rpmbuild/BUILD/xen-4.4.1/xen/Rules.mk to read:

CFLAGS += -Werror -Wno-error=uninitialized -Wredundant-decls -Wno-pointer-arith

and line 39 of ~/rpmbuild/BUILD/xen-4.4.1/Config.mk to read:

HOSTCFLAGS = -Wall -Werror -Wno-error=uninitialized -Wstrict-prototypes -O2 -fomit-frame-pointer

after making those changes, run:

# cd ~/rpmbuild/BUILD/xen-4.4.1/
# make clean
# make max_phys_cpus=256 XEN_TARGET_ARCH=x86_64 -C xen \
  XEN_VENDORVERSION=-xs90192 debug=n build

and the compile will finish successfully.  I probably could have just made a quick patch to add the -Wno-error flag and allowed rpmbuild to run the full spec file, but I didn't actually need to compile xen-tools etc.; those are already compiled and installed on the XenServer installation. The only file needed is ~/rpmbuild/BUILD/xen-4.4.1/xen/xen.efi. With that in hand, I created a xen.cfg file like this:

[global]
default=xen

[xen]
options=console=vga,com1,com2 com1=115200,8n1,0x3F8,4 com2=115200,8n1,0x2F8,3 loglvl=all noreboot
kernel=vmlinuz-3.10-xen root=UUID=b4ee0ace-b587-41df-a66b-16f89731b2a8 rw ignore_loglevel acpi_rsdp=0x7B7FE014
ramdisk=initrd-3.10-xen.img

where the root UUID is the boot disk created during the XenServer install and the RSDP number came from running:

# dmesg | grep RSDP

I ran that in an EFI-booted live Linux environment. I found that some vendors' UEFI implementations were able to provide the RSDP during boot and some were not, so without specifying it in the xen.cfg I had trouble with things like USB peripherals.

Boot XenServer

With the xen.efi and xen.cfg I was able to boot XenServer in UEFI boot mode using rEFInd.  We have done extensive testing on several different servers and found no problems.  I was also able to repeat the process with the source code provided by the service packs up to and including Service Pack 1.  I haven't tried any further than that yet.

Editor's Note

For those of you wishing to retain Citrix commercial support status, the above procedure will convert the XenServer 6.5 host into an "unsupported configuration".

 

Recent Comments
Paolo
Hello Mr Sandberg, first of all good job!!! Is Xenserver not uefi capable ?! Isn't wired ?! I am interesting about your how to bu... Read More
Monday, 13 March 2017 11:05
Andy Halley
Please note that UEFI boot was added to XenServer 7,0 in 2016 and is of course available after that in 7.1 and now in 7.2.
Tuesday, 06 June 2017 14:29
Continue reading
15467 Hits
2 Comments

A New Year, A New Way to Build for XenServer

Building bits of XenServer outside of Citrix has in the past been a bit of a challenging task, requiring careful construction of the build environment to replicate what 'XenBuilder', our internal build system, puts together. This has meant using custom DDK VMs or carefully installing by hand a set of packages taken from one of the XenServer ISOs. With XenServer Dundee, this will be a pain of the past, and making a build environment will be just a 'docker run' away.

Part of the work being done for XenServer Dundee has been moving things over to standard build tools and packaging. In previous releases there was a mix of RPMs, tarballs and patches for existing files, but for the Dundee project everything installed into dom0 is now packaged into an RPM. Taking inspiration and knowledge gained while working on xenserver/buildroot, we're building most of these dom0 packages now using mock. Mock is a standard tool for building RPM packages from source RPMs (SRPMs), and it works by constructing a completely clean chroot with only the dependencies defined by the SRPM. This means that everything needed to build a package must be in an RPM, and the dependencies defined by the SRPM must be correct too.

From the point of view of making reliably reproducible builds, using mock means there is very little possibility of the build being dependent upon the environment. But there is also a side benefit of this work: if you actually want to rebuild a bit of XenServer, you just need a yum repository with the XenServer RPMs in it, use 'yum-builddep' to put in place all of the build dependencies, and then building should be as simple as cloning the repository and typing 'make'.

The simplest place to do this would be in the dom0 environment itself, particularly now that the partition size has been bumped up to 20 gigs or so. However, that may well not be the most convenient. In fact, for a use case like this, the mighty Docker provides a perfect solution. Docker can quickly pull down a standard CentOS environment and then put in the reference to the XenServer yum repository, install gcc, OCaml, git, emacs and generally prepare the perfect build environment for development.

In fact, even better, Docker will actually do all of these bits for you! The Docker Hub has a facility for automatically building a Docker image provided everything required is in a repository on GitHub. So we've prepared a repository containing a Dockerfile and associated gubbins that sets things up as above, and the Docker Hub builds and hosts the resulting docker image.

Let's dive in with an example on how to use this. Say you have a desire to change some aspect of how networking works on XenServer, something that requires a change to the networking daemon itself, 'xcp-networkd'. We'll start by rebuilding that from the source RPM. Start the docker container and install the build dependencies:

$ docker run -i -t xenserver/xenserver-build-env
[root@15729a23550b /]# yum-builddep -y xcp-networkd

This will now download and install everything required to build the network daemon. Next, let's just download and build the SRPM:

[root@15729a23550b /]# yumdownloader --source xcp-networkd

At the time of writing, this downloads the SRPM "xcp-networkd-0.9.6-1+s0+0.10.0+8+g96c3fcc.el7.centos.src.rpm". This will build correctly in our environment:

[root@15729a23550b /]# rpmbuild --rebuild xcp-networkd-*
...
[root@15729a23550b /]# ls -l ~/rpmbuild/RPMS/x86_64/
total 2488
-rw-r--r-- 1 root root 1938536 Jan  7 11:15 xcp-networkd-0.9.6-1+s0+0.10.0+8+g96c3fcc.el7.centos.x86_64.rpm
-rw-r--r-- 1 root root  604440 Jan  7 11:15 xcp-networkd-debuginfo-0.9.6-1+s0+0.10.0+8+g96c3fcc.el7.centos.x86_64.rpm

To patch this, it's just the same as for CentOS, Fedora, and any other RPM-based distro, so follow one of the many guides available.
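In compressed form, that standard flow looks something like this (the patch name is illustrative):

[root@15729a23550b /]# rpm -ivh xcp-networkd-*.src.rpm
[root@15729a23550b /]# cd ~/rpmbuild/SPECS
# add a 'Patch0: my-fix.patch' line plus a matching %patch0 stanza to
# xcp-networkd.spec, drop my-fix.patch into ../SOURCES, then rebuild:
[root@15729a23550b SPECS]# rpmbuild -ba xcp-networkd.spec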

Alternatively, you can compile straight from the source. Most of our software is hosted on GitHub, either under the xapi-project or xenserver organisations. xcp-networkd is a xapi-project repository, so we can clone it from there:

[root@15729a23550b /]# cd ~
[root@15729a23550b ~]# git clone git://github.com/xapi-project/xcp-networkd

Most of our RPMs have version numbers constructed automatically containing useful information about the source, and where the source is from git repositories the version information comes from 'git describe'.

[root@15729a23550b ~]# cd xcp-networkd
[root@15729a23550b xcp-networkd]# git describe --tags
v0.10.0-8-g96c3fcc

The important part here is the hash, in this case '96c3fcc'. Comparing with the SRPM version, we can see these are identical. We can now just type 'make' to build the binaries:

[root@15729a23550b xcp-networkd]# make

This networkd binary can then be put onto your XenServer and run.

The yum repository used by the container is created directly from the snapshot ISOs uploaded to xenserver.org, using a simple bash script named update_xs_yum.sh available on GitHub. The container defaults to the most recently available release, but the script can be used by anyone to generate a repository from the daily snapshots too, if required. There's still a way to go before Dundee is released, and some aspects of this workflow are in flux – for example, the RPMs aren't currently signed. However, by the time Dundee is out the door we hope to make many improvements in this area. Certainly here in Citrix, many of us have switched to using this for our day-to-day build needs, because it's simply far more convenient than our old custom chroot generation mechanism.

Recent comment in this post
Shawn Edwards
devrepo.xenerver.org is down, so this method of developing for xenserver currently doesn't work. Who do I need to bug to get this... Read More
Thursday, 02 June 2016 22:07
Continue reading
12060 Hits
1 Comment

Preview of XenServer Administrators Handbook

Administering any technology can be both fun and challenging at times. For many, the fun part is designing a new deployment, while for others the hardware selection process, system configuration and tuning, and actual deployment can be a rewarding part of being an SRE. Then the challenging stuff hits, where the design and deployment become a real part of the everyday inner workings of your company, and with it come upgrades, failures, and fixes. For example, you might need to figure out how to scale beyond the original design, deal with failed hardware, or find ways to update an entire data center without user downtime. No matter how long you've been working with a technology, the original paradigms often do change, and there is always an opportunity to learn how to do something more efficiently.

That's where a project JK Benedict and I have been working on with the good people of O'Reilly Media comes in. The idea is a simple one. We wanted a reference guide which would contain valuable information for anyone using XenServer - period. If you are just starting out, there would be information to help you make that first deployment a successful one. If you are looking at redesigning an existing deployment, there are valuable time-saving nuggets of info, too. If you are a longtime administrator, you would find some helpful recipes to solve real problems that you may not have tried yet. We didn't focus on long theoretical discussions, and we've made sure all content is relevant in a XenServer 6.2 or 6.5 environment. Oh, and we kept it concise because your time matters.

I am pleased to announce that attendees of OSCON will be able to get their hands on a preview edition of the upcoming XenServer Administrators Handbook. Not only will you be able to thumb through a copy of the preview book, but I'll have a signing at the O'Reilly booth on Wednesday July 22nd at 3:10 PM. I'm also told the first 25 people will get free copies, so be sure to camp out ;)

Now of course everyone always wants to know which animal gets featured on the book cover. As you can see below, we have a bird. Not just any bird mind you, but a xenops. Now I didn't do anything to steer O'Reilly towards this, but I find it very cool that we have an animal which also shares its name with a very core component in XenServer: the xenopsd. For me, that's a clear indication we've created the appropriate content, and I hope you'll agree.

(Book cover image: a xenops.)

Recent Comments
prashant sreedharan
cool ! cant wait to get my hands on the book :-)
Tuesday, 07 July 2015 19:32
Tobias Kreidl
Congratulations, Tim and Jesse, as an update in this area is long overdue and in very good hands with you two. The XenServer commu... Read More
Tuesday, 07 July 2015 19:42
JK Benedict
Ah, Herr Tobias -- Danke freund. Danke fur ihre unterstutzung! Guten abent!
Thursday, 23 July 2015 09:26
Continue reading
13317 Hits
6 Comments

History and Syslog Tweaks

Introduction

As XenServer administrators already know (or will know), there is one user "to rule them all"... and that user is root.  Be it an SSH connection or command-line interaction with dom0 via XenCenter, while you may be typing commands in RING3 (user space), you are doing it as the root user.

This is quite appropriate for XenServer's architecture, as once the bare metal is powered on, one is not booting into the latest "re-spin" of some well-known (or completely obscure) Linux distribution.  Quite the opposite.  One is actually booting into the virtualization layer: dom0, or the Control Domain.  This is where the separation of Guest VMs (domUs) and user-space programmes (ping, fsck, and even xe) begins... even at the command line for root.

In summary, it is not uncommon for many administrators to require root access to a XenServer... all at one time.  Thus, this article will show my own means of adding granularity to the HISTORY command, as well as logging (via Syslog) of each and every root user session.

Assumptions

As BASH is the default shell, this article assumes that one has knowledge of BASH, things "BASH", Linux-based utilities, and so forth.  If one isn't familiar with BASH, such as how BASH leverages global and local scripts to set up a user environment, I have provided the following resources:

  • BASH login scripts : http://www.linuxfromscratch.org/blfs/view/6.3/postlfs/profile.html
  • Terminal Colors : http://www.tldp.org/HOWTO/Bash-Prompt-HOWTO/x329.html
  • HISTORY command : http://www.tecmint.com/history-command-examples/

Purpose

The purpose I wanted to achieve was not just a cleaner way to look at the history command, but to also log the root user's session information: recording their access means, what commands they ran, and WHEN.


In short, we go from this:

To this (plus record of each command in /var/log/user.log | /var/log/messages):

What To Do?

First, we want to back up /etc/bashrc to /etc/backup.bashrc in the event one would like to revert to the original HISTORY method.  This can be done via the command line of the XenServer:

cp /etc/bashrc /etc/backup.bashrc

Secondly, the following addition should be added to the end of /etc/bashrc:

##[ HISTORY LOGGING ]#######################################################
#
# ADD USER LOGGING AND HISTORY COMMAND CONTEXT FOR SOME AUDITING
# DEC 2014, JK BENEDICT
# @xenfomation
#
#########################################################################

# Grab current user's name
export CURRENT_USER_NAME=`id -un`

# Grab current user's level of access: pts/tty/or SSH
export CURRENT_USER_TTY="local `tty`"
checkSSH=`set | grep "^SSH_CONNECTION" | wc -l`

# SET THE PROMPT
if [ "$checkSSH" == "1" ]; then
     export CURRENT_USER_TTY="ssh `set | grep "^SSH_CONNECTION" | awk {' print $1 '} | sed -rn "s/.*?='//p"`"
     export PROMPT_COMMAND='history -a >(tee -a ~/.bash_history | logger -t "HISTORY for $CURRENT_USER_NAME[$$] via $SSH_CONNECTION : ")'
else
     export CURRENT_USER_TTY
     export PROMPT_COMMAND='history -a >(tee -a ~/.bash_history | logger -t "HISTORY for $CURRENT_USER_NAME[$$] via $CURRENT_USER_TTY : ")'
fi

# SET HISTORY SETTINGS
# Lines to retain, ignore dups, time stamp, and user information
# For date variables, check out http://www.computerhope.com/unix/udate.htm
export HISTSIZE=5000
export HISTCONTROL=ignoredups
export HISTTIMEFORMAT=`echo -e "\e[1;31m$CURRENT_USER_NAME\e[0m[$$] via \e[1;35m$CURRENT_USER_TTY\e[0m on \e[0;36m%d-%m-%y %H:%M:%S%n\e[0m       "`

A file providing this addition can be downloaded from https://github.com/xenfomation/bash-history-tweak

What Next?

Well, with the changes added and saved to /etc/bashrc, exit the command-line prompt or SSH session, then log back in to test the changes:

exit

hostname
whoami
history
tail -f /var/log/user.log

... And that is that.  So, while there are 1,000,000 more sophisticated ways to achieve this, I thought I'd share what I have used for a long time... have fun and enjoy!

--jkbs | @xenfomation

Continue reading
2733 Hits
0 Comments

XenServer at FOSDEM

With Creedence just released as XenServer 6.5, 2015 has definitely started off with a bang. In 2014 the focus for XenServer was on a platform refresh, and creating a solid platform for future work. For me, 2015 is about enabling the ecosystem to be successful with XenServer, and that's where FOSDEM comes in. For those unfamiliar with FOSDEM, it's the Free and Open Source Developers European Meeting, and many of the most influential projects will have strong representation. Many of those same projects have strong relationships with other hypervisors, but not necessarily with XenServer. For those projects, XenServer needs to demonstrate its relevance, and I hope to provide exactly that through a set of demos within the Xen Project stand.

Demo #1 - Provisioning Efficiency

XenServer is a hypervisor, and as such is first and foremost a provisioning target. That means it needs to work well with provisioning solutions and their respective template paradigms. Some of you may have seen me present at various events on the topic of hypervisor selection in various cloud provisioning tools. One of the core workflow items for all cloud solutions is the ability to take a template and provision it consistently to the desired hypervisor. In Apache CloudStack with XenServer, for example, those templates are VHD files. Unfortunately, XenServer by default exports XVA files, not native VHD, which makes the template process for CloudStack needlessly difficult.

This is where a technology like Packer comes in. Some of the XenServer engineers have been working on a Packer integration to support Vagrant. That's cool, but I'm also looking at this from the perspective of other tools, and so will be showing Packer creating a CentOS 7 template which could be used anywhere. That template would then be provisioned and, as part of the post-provisioning configuration management, become a "something" with the addition of applications.

Demo #2 - Application Containerization

Once I have my template from Packer, and have provisioned it into a XenServer 6.5 host, the next step is application management. For this I'm going to use Ansible to personalize the VM, and to add in some applications which are containerized by Docker. There has been some discussion in the marketplace about containers replacing VMs, and I really see proper use of containers as efficient use of VMs, not as a replacement for them. Proper container usage is really proper application management, and understanding when to use which technology. For me this means that a host is a failure point which contains VMs. A VM represents a security and performance wrapper for a given tenant and their applications. Within a VM, applications are provisioned, and where containerization of the applications makes sense, it should be used.

System administrators should be able to directly manage each of these three "containers" from the same pane of glass, and as part of my demo, I'll be showing just that using XenCenter. XenCenter has a simple GUI from which host and VM level management can be performed, and which is in the process of being extended to include Dockerized containers.

With this as the demo backdrop, I encourage anyone planning on attending FOSDEM to please stop by and ask about the work we've done with Creedence and also where we're thinking of going. If you're a contributor to a project and would like to talk more about how integrating with XenServer might make sense, either for your project or as something we should be thinking about, please do feel free to reach out to me. Of course if you're not planning on being at FOSDEM, but know folks who are, please do feel free to have them seek me out. We want XenServer to be a serious contender in every data center, but if we don't know about issues facing your favorite projects, we can't readily work to resolve them.

btw, if you'd like to plan anything around FOSDEM, please either comment on this blog, or contact me on Twitter as @XenServerArmy.

-tim     

Recent Comments
Tobias Kreidl
Thank you for sharing this, Tim. Some progress has already been made in being able to export VHD files and even VHD snapshots, so ... Read More
Monday, 26 January 2015 20:39
Felipe Franciosi
For those arriving in Brussels a day earlier, I'll be presenting at the CentOS Dojo and talking about Optimising Xen Deployments f... Read More
Tuesday, 27 January 2015 09:07
Tobias Kreidl
Tim, Would like to see a summary of your experiences and impressions at FOSDEM after it has concluded.
Sunday, 01 February 2015 15:54
Continue reading
12430 Hits
4 Comments

Basic Network Testing with IPERF

Purpose

I am often asked how one can perform simple network testing within, outside, and into XenServer.  This is a great question as – by itself – it is simple enough to answer.  However, depending on what one desires out of “network testing”, the answer can quickly become more complex.

As such, I have decided to answer this question using a long-standing, free utility called IPERF (well, IPERF2).  It is a rather simple, straightforward, but powerful utility I have used over many, many years.  Links to IPERF will be provided - along with documentation on its use - as it will serve in this guide as a way to:


- Test bandwidth between two or more points

- Determine bottlenecks

- Assist with black-box testing or “what happens if” scenarios

- Use a tool that runs on both Linux and Windows

- And more…

IPERF: A Visual Breakdown

IPERF has to be installed at at least two separate end points.  One point acts as a server/receiver and the other point acts as a client/transmitter.  This way, network testing can be done end-to-end on anything from a simple subnet to a complex, routed network, using TCP- or UDP-generated traffic:

The visual shows an IPERF client transmitting data over IPv4 to an IPERF receiver.  Packets traverse the network - from wireless routers and through firewalls - from the client side to the server side over port 5001.

IPERF and XenServer

The key to network testing is in remembering that any device which is connected to a network infrastructure – Virtual or Physical – is a node, host, target, end point, or just simply … a networked device.

With regard to virtual machines, XenServer obviously supports Windows and Linux operating systems.  IPERF can be used to test virtual-to-virtual networking as well as virtual-to-physical networking.  If we stack virtual machines in a box to our left and stack physical machines in a box to our right – despite a common subnet or routed network – we can quickly see the permutations of how "Virtual and Physical Network Testing" can be achieved with IPERF transmitting data from one point to another:

And if one wanted, they could just as easily test networking for this:

Requirements

To illustrate a basic server/client model with IPERF, the following will be required:

- A Windows 7 VM that will act as an IPERF client

- A CentOS 5.x VM that will act as a receiver.

- IPERF2 (the latest version of IPERF, "IPERF3", can be found at https://github.com/esnet/iperf or, more specifically, http://downloads.es.net/pub/iperf/)

The reason for using IPERF2 is quite simple: portability and compatibility on two of the most popular operating systems that I know are virtualized.  In addition, the same steps for installing IPERF2 on these hosts can be carried out on physical systems running similar operating systems, as well.

The remainder of this article - regarding IPERF2 - will require use of the MS-DOS command-line as well as the Linux shell (of choice).  I will carefully explain all commands, so if you are “strictly a GUI” person, you should fit right in.

Disclaimer

When utilizing IPERF2, keep in mind that this is a traffic generator.  While one can control the quantity and duration of traffic, it is still network traffic.

So, consider testing during non-peak hours or after hours so as not to interfere with production-based network activity.

Windows and IPERF

The Windows port of IPERF 2.0.5 requires Windows XP (or greater) and can be downloaded from:

http://sourceforge.net/p/iperf/patches/_discuss/thread/20d4a4b0/5c44/attachment/Iperf.zip

Within the .zip file you will find two directories.  One is labeled DEBUG and the other is labeled RELEASE.  Export the Iperf.exe program to a directory you will remember, such as C:\iperf\

Now, accessing the command line (cmd.exe), navigate to C:\iperf\ and execute:

iperf

The following output should appear:

Linux and IPERF

If you have additional repos already configured for CentOS, you can simply execute (as root):

yum install iperf

If that fails, one will need to download the Fedora/RedHat EPEL-Release RPM file for the version of CentOS being used.  To do this (as root), execute:

wget  http://dl.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm
rpm -Uvh epel-release-5-4.noarch.rpm

 

*** Note that the above EPEL-Release RPM file is just an example (a working one) ***

 

Once epel-release-5-4.noarch.rpm is installed, execute:

yum install iperf

And once complete, as root execute iperf and one should see the following output:


Notice that it is the same output as what is being displayed from Windows.  IPERF2 is expecting a "-s" (server) or "-c" (client) command-line option with additional arguments.

IPERF Command-Line Arguments

On either Windows or Linux, a complete list of options for IPERF2 can be displayed by executing:

iperf --help

A few good resources of examples to use IPERF2 options for the server or client can be referenced at:

http://www.slashroot.in/iperf-how-test-network-speedperformancebandwidth

http://samkear.com/networking/iperf-commands-network-troubleshooting

http://www.techrepublic.com/blog/data-center/handy-iperf-commands-for-quick-network-testing/

For now, we will focus on the options needed for our server and client:

-f, --format    [kmKM]   format to report: Kbits, Mbits, KBytes, MBytes
-m, --print_mss          print TCP maximum segment size (MTU - TCP/IP header)
-i, --interval  #        seconds between periodic bandwidth reports
-s, --server             run in server mode
-c, --client    <host>   run in client mode, connecting to <host>
-t, --time      #        time in seconds to transmit for (default 10 secs)

Lastly, there is a TCP/IP window setting.  This goes beyond the scope of this document, as it relates to the TCP framing/windowing of data.  I highly recommend reading either of the two following links – especially for Linux – as there has always been some debate as to what is “best to be used”:

https://kb.doit.wisc.edu/wiscnet/page.php?id=11779

http://kb.pert.geant.net/PERTKB/IperfTool

Running An IPERF Test

So, we have IPERF2 installed on Windows 7 and on CentOS 5.10.  Before performing any testing, ensure that antivirus software does not block iperf.exe from running and that port 5001 is open across the network.

Again, another port can be specified, but the default port IPERF2 uses for both client and server is 5001.
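
If 5001 is blocked or already in use, both sides can be pointed at an alternate port via the "-p" option.  A minimal sketch (the port number 5002 here is just an example):

iperf -s -p 5002
iperf -c x.x.x.48 -p 5002 -t 30 -f M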

Server/Receiver Side

The Server/Receiver side will be on the CentOS VM.

Following the commands above, we want to execute the following on the CentOS VM to run IPERF2 as a server/receiver, ready to accept traffic from our Windows 7 client machine:

iperf -s -f M -m -i 10

The output should show:

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 0.08 MByte (default)
------------------------------------------------------------

The TCP window size has been previously commented on, and the server is now ready to accept connections (press Control+C to exit).

Client/Transmission Side

Let us now focus on the client side to start sending data from the Windows 7 VM to the CentOS VM.

From Windows 7, the command line to start transmitting data for 30 seconds to our CentOS host (x.x.x.48) is:

iperf -c x.x.x.48 -t 30 -f M

Pressing enter, the traffic flow begins and the output from the client side looks like this:

From the server side, the output looks something like this:

And there we have it – a first successful test from a Windows 7 VM (located on one XenServer) to a CentOS 5.10 VM (located on another XenServer).

Understanding the Results

From either the client side or server side, results are shown by time and average.  The key item to look for from either side is:

0.0-30.0 sec  55828 MBytes  1861 MBytes/sec

Why?  This line shows the totals over the course of 0.0 to 30.0 seconds: the total megabytes transmitted as well as the average megabytes of data sent per second.  In addition, since the "-f M" argument was passed as a command-line option, the output is reported in megabytes accordingly.

In this particular case, we simply illustrated that from one VM to another VM, we transferred data at 1861 megabytes per second.
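
To express that in bits per second - the unit most network links are rated in - simply multiply by 8:

1861 MBytes/sec x 8 bits/byte = 14888 Mbits/sec (roughly 14.5 Gbit/s)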

*** Note that this test was performed in a local lab with lower-end hardware than what you probably have! ***

--jkbs | @xenfomation

 


Increasing Ubuntu's Resolution


Maximizing Desktop Real-estate with Ubuntu

With the addition of Ubuntu (and the like) to Creedence, you may have noticed that the default resolution is 1024x768.  I certainly noticed it, and after much work on 6.2 and Creedence Beta, I have a quick solution to maximizing the screen resolution for you.

The thing to consider is that a virtual frame buffer is what is essentially being used.  You can re-invent X configs all day, but the shortest path is to - first - ensure that the following packages are installed on your Ubuntu guest VM:

sudo apt-get install xvfb xfonts-100dpi xfonts-75dpi xfstt

Once that is all done installing, the next step is to edit Grub -- specifically /etc/default/grub:

sudo vi /etc/default/grub

Considering your monitor's maximum resolution (or not if you want to remote into Ubuntu using XRDP), look for the variable GRUB_GFXMODE.  This is where you can specify your desired BOOT resolutions that we will instruct the guest VM to SUSTAIN into user-space:

GRUB_GFXMODE=1280x960,1280x800,1280x720,1152x768,1152x700,1024x768,800x600

Next, adjust the variable GRUB_PAYLOAD_LINUX to equal keep, or:

GRUB_PAYLOAD_LINUX=keep

Save the changes and be certain to execute the following:

sudo update-grub
sudo reboot

Now, you will notice that even during the boot phase the resolution is large, and this will carry into user space: LightDM, Xfce, and the like.

Finally, I would highly suggest installing XRDP for your Guest VM.  It allows you to access that Ubuntu/Xubuntu/etc desktop remotely.  Specific details regarding this can be found through Ubuntu's forum:

http://askubuntu.com/questions/449785/ubuntu-14-04-xrdp-grey
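
As a hedged starting point (package name current as of Ubuntu 14.04), the base installation is a single package, after which the service listens on the standard RDP port 3389:

sudo apt-get install xrdp
sudo service xrdp restart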


Enjoy!

--jkbs | @xenfomation

 

 


VGA over Cirrus in XenServer 6.2

Achieve Higher Resolution and 32Bpp

For many reasons - not exclusive to XenServer - the Cirrus video driver has been a staple where a basic, somewhat agnostic video driver is needed.  When one creates a VM within XenServer (specifically 6.2 and previous versions) the Cirrus video driver is used by default for video...and it does the job.

I had been working on a project with my mentor related to an eccentric OS, but I needed a way to get more real-estate to test a HID pointing device by increasing the screen resolution.  This led me to find that at some point in our upstream code there were platform (virtual machine metadata) options that allowed one to "ditch" Cirrus and 1024x768 resolution for higher resolutions and color depth via a standard VGA driver addition.

This is not tied into GPU pass-through, nor is it a hack.  It is a valuable way to achieve 32bpp color in Guest VMs with video support as well as obtaining higher resolutions.

Windows 7: A Before and After Example

To show the difference between "default Cirrus" and the Standard VGA driver (which I will discuss how to switch to shortly), Windows 7 Enterprise had the following resolution to offer me with Cirrus:


Now, after switching to standard VGA for the same Guest VM and rebooting, I now had the following resolution options within Windows 7 Enterprise:

Switching a Guest to VGA

After you create your VM – Windows, Linux, etc – perform the following steps to enable the VGA adapter:

 

  • Halt the Guest VM
  • From the command line, find the UUID of your VM:
 xe vm-list name-label="Name of your VM"
  • Taking the UUID value, run the following two commands:
 xe vm-param-set uuid=<UUID of your VM> platform:vga=std
 xe vm-param-set uuid=<UUID of your VM> platform:videoram=4
  •  Finally, start your VM and one should be able to achieve higher resolution at 32bpp.

 

It is worth noting that the max amount of "videoram" that can be specified is 16 (megabytes).
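
To confirm the settings took hold, one can dump the VM's platform map, in which vga and videoram should now appear.  A minimal check:

xe vm-param-get uuid=<UUID of your VM> param-name=platform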

Switching Back to Cirrus

If - for one reason or another - you want to reset/remove these settings so as to stick with the Cirrus driver, run the following commands:

 xe vm-param-remove uuid=<UUID of your VM> param-name=platform param-key=vga
 xe vm-param-remove uuid=<UUID of your VM> param-name=platform param-key=videoram

Again, reboot your Guest VM and with the lack of VGA preference, the default Cirrus driver will be used.

What is the Catch?

There is no catch and no performance hit.  The VGA driver's "videoram" specification is carved out of the virtual memory allocated to the Guest VM.  So, for example, if you have 4GB allocated to a Guest VM, subtract at max 16 megabytes from 4GB.  Needless to say, that is a pittance and does not impact performance.

Speaking of performance, my own personal tests were simple and repeated several times:

 

  • Utilized a tool that will remain anonymous
  • Use various operating systems with Cirrus and resolution at 1024 x 768
  • Run 2D graphic test suite
  • Write down Product X, Y, or Z’s magic number that represents good or bad performance
  • Apply the changes to the VM to use VGA (keeping the resolution at 1024 x 768 for some kind of balance)
  • Run the same volley of 2D tests after a reboot
  • Write down Product X, Y or Z’s magic number that represents good or bad performance

 

In the end, I personally found from my experience that there was a very minor, but noticeable difference in Cirrus versus VGA.  Cirrus usually came in 10-40 points below VGA at the 1024 x 768 level.  Based on the test suite used, this is nothing spectacular, but it is certainly a benefit as I found no degraded performance across XenServer (other Guests), etc.

I hope this helps and as always: questions and comments are welcomed!

 

--jkbs | @xenfomation

 


Creedence: Debian 7.x and PVHVM Testing

Introduction

On my own time and on my own testing equipment, I have been able to run many Guest VMs in PVHVM containers - both before and after Creedence's release to the public back in June.  With last week's broadcast of Creedence Beta 3's release, I was naturally excited to see Tim's spotlight on PVHVM, and the following article's intent is to show - in a test environment only - how I was able to run Debian 7.x (64-bit) in the same fashion.

For more information regarding combining PV + HVM to establish a PVHVM container, Tim linked a great article in his Creedence Beta 3 post last Monday that I highly recommend you read, as the finer details are out of scope for this article's intent and purpose.

Why is this important to me?  Quite simply we can go from this....

... to this ...

So now, let's make a PVHVM container for a Debian 7.x (64-Bit) Guest VM within XenCenter!

Requirements

1.  Creedence Beta 3 and XenCenter

2.  The full installation ISO for Debian 7.x (from https://www.debian.org/CD/http-ftp/#stable )

3.  Any changes mentioned below should not be applied to any of the stock Debian templates

4.  This should not be performed on your production environment

Creating A Default Template

With XenCenter open, ensure that from the View options one has "XenServer Templates" selected:

We should now see the default templates that XenServer installs:

1.  Right-click on the "Debian Wheezy 7 (64-bit)" template and save it as "Debian 7":

 

2.  This will produce a "custom template" - highlight it and copy the UUID of the custom template:

3.  The remainder of this configuration will take place from the command-line.

4.  To make the changes to the custom template easier, export the UUID of the custom template we created to avoid copy/paste errors:

export myTemp="af84ad43-8caf-4473-9c4d-8835af818335"
echo $myTemp
af84ad43-8caf-4473-9c4d-8835af818335

5.  With the $myTemp variable created, let us first convert this custom template to a default template by executing:

xe template-param-set uuid=$myTemp other-config:default_template=true

xe template-param-remove uuid=$myTemp param-name=other-config param-key=base_template_name

6.  Now configure the template's "platform" variable to leverage VGA graphics:

xe template-param-set uuid=$myTemp platform:viridian=false platform:device_id=0001 platform:vga=std platform:videoram=16

7.  Due to how some distros work with X, clear the PV-args and set a "vga=792" flag:

xe template-param-set uuid=$myTemp PV-args="vga=792"

8.  Disable the PV-bootloader:

xe template-param-set uuid=$myTemp PV-bootloader=""

9.  Specify that the template uses an HVM-style bootloader (DVD/CD first, then hard drive, and then network):

xe template-param-set uuid=$myTemp HVM-boot-policy="BIOS order"
xe template-param-set uuid=$myTemp HVM-boot-params:order="dcn"
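
Before moving on, it can be worth sanity-checking what the template now contains.  A hedged one-liner ("xe template-param-list" is a standard command; the grep pattern is just a convenience):

xe template-param-list uuid=$myTemp | grep -E "HVM-boot|PV-args|PV-bootloader|platform"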

 

Before creating a Debian 7.x Guest VM, one should now see in XenCenter that "Debian 7" is listed as a "default template":

 

Lastly, for the VGA flag and what it means to most distros, the following is a table explaining the VGA flag and bit settings to achieve an X-by-Y resolution at a given color depth:

VGA Resolution and Color Depth reference Chart:

Depth    800×600    1024×768   1152×864   1280×1024   1600×1200
8 bit    vga=771    vga=773    vga=353    vga=775     vga=796
16 bit   vga=788    vga=791    vga=355    vga=794     vga=798
24 bit   vga=789    vga=792    -          vga=795     vga=799

Create A New Debian Guest

From here, one should be able to create a new Guest VM using the template we have just created and should be able to walk through the entire install:

Post installation, tools can be installed as well!

Enjoy and happy testing!

 

jkbs | @xenfomation


Before Electing a New Pool Master

Overview

The following is a reminder of specific steps to take before electing a new pool master - especially in High Availability-enabled deployments.  There are circumstances where this will happen automatically due to High Availability (by design) or in an emergency situation; nevertheless, the following steps should be taken when deliberately electing a new pool master where High Availability is enabled.

Disable High Availability

Before electing a new master one must disable High Availability.  The reason is quite simple:

If a new host is designated as master with HA enabled, the subsequent processes and transition time can lead HA to see that a pool member is down.  It is doing what it is supposed to do in the "mathematical" sense, but in "reality" it is actually confused.

The end result is that HA could either recover with some time or fence as it attempts to apply fault tolerance in contradiction to the desire to "simply elect a new master".

It is also worth noting that upon recovery - if any Guests which had a mounted ISO are rebooted on another host - "VDI not found" errors can appear even though the VDI still exists.  The mounted ISO image is seen as a VDI, and if that resource is not available on the other host, the Guest VM will fail to resume: presenting the generic VDI error.

Steps to Take

HA must be disabled and for safe practice, I always recommend ejecting all mounted ISO images.  The latter can be accomplished by executing the following from the pool master:

xe vm-cd-eject --multiple

As for HA it can be disabled in two ways: via the command-line or from XenCenter.

From the command line of the current pool master, execute:

xe pool-ha-disable
xe pool-sync-database

If desired - just for safeguarding one's work - those commands can be executed on every other pool member.

As for XenCenter one can select the Pool/Pool Master icon in question and from the "HA" tab, select the option to disable HA for the pool.

Workload Balancing

For versions of XenServer utilizing Workload Balancing, it is not necessary to halt Workload Balancing before the election.

Now that HA is disabled, switch Pool Masters and when all servers are in an active state: re-enable HA from XenCenter or from the command line:

xe pool-recover-slaves
xe pool-ha-enable
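
As for the election itself (the "switch Pool Masters" step above), the command-line route is a single call.  A minimal sketch, where the UUID is that of the pool member to promote:

xe pool-designate-new-master host-uuid=<UUID of new master>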

I hope this is helpful and as always: questions and comments are welcomed!

 

--jkbs | @xenfomation


Log Rotation and Syslog Forwarding

A Continuation of Root Disk Management

First, this article is applicable to any sized XenServer deployment and secondly, it is a continuation of my previous article regarding XenServer root disk maintenance.  The difference is that - for all XenServer deployments - the topic revolves specifically around Syslog: tuning log rotation, specifying the number of logs to retain, leveraging compression, and of course... Syslog forwarding.

All of this is an effort to share tips with new (or seasoned) XenServer Administrators on the options available to ensure necessary Syslog data does not fill a XenServer root disk while ensuring - for certain industry-specific requirements - that log-specific data is retained without sacrifice.

Syslog: A Quick Introduction

So, what is this Syslog?  In short it can be compared to the Unix/Linux equivalent of Windows Event Log (along with other logging mechanisms popular to specific applications/Operating Systems). 

The slightly longer explanation is that Syslog is not only a daemon, but also a protocol: established long ago for Unix systems to record system and application messages to local disk as well as offering the ability to forward the same log information to its peers for redundancy, concentration, and to conserve disk space on highly active systems.  For more detailed information on the finer details of the Syslog protocol and daemon one can review the IETF's specification at http://tools.ietf.org/html/rfc5424.

On a stand-alone XenServer, the Syslog daemon is started on boot and its configuration file for handling source, severity, types of logs, and where to store them are defined in /etc/syslog.conf.  It is highly recommended that one does not alter this file unless necessary and if one knows what they are doing.  From boot to reboot, information is stored in various files: found under the root disk's /var/log directory.

Taken from a fresh installation of XenServer, the following shows various log files that store information specific to a purpose.  Note that the items in "brown" are sub-directories:

For those seasoned in administering XenServer it is visible that from the kernel-level and user-space level there are not many log files.  However, XenServer is verbose about logging for a very simple reason: collection, analysis, and troubleshooting if an issue should arise.

So for a lone XenServer (by default) logs are essentially received by the Syslog daemon and based on /etc/syslog.conf - as well as the source and type of message - stored on the local root file system as discussed:

Within a pooled XenServer environment things are pretty much the same: for the most part.  As a pool has a master server, log data for the Storage Manager (as a quick example) is trickled up to the master server.  This is to ensure that while each pool member is recording log data specific to itself, the master server has the aggregate log data needed to promote troubleshooting of the entire pool from one point.

Log Rotation

Log rotation, or "logrotate", is what ensures that Syslog files in /var/log do not grow out of hand.  Much like Syslog, logrotate utilizes a configuration file to dictate how often, at what size, and whether compression should be used when archiving a particular Syslog file.  The term "archive" here simply means rotating out the current log so a fresh, current log can take its place.

Post XenServer installation and before usage, one can measure the amount of free root disk space by executing the following command:

df -h

The output will be similar to the following and the line one should be most concerned with is in bold font:

Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             4.0G  1.9G  2.0G  49% /
none                  381M   16K  381M   1% /dev/shm
/opt/xensource/packages/iso/XenCenter.iso
                       52M   52M     0 100% /var/xen/xc-install

One can see from the example that only 49% of the root disk on this XenServer host has been used.  Repeating this process as implementation ramps up, an administrator should be able to measure how best to tune logrotate's configuration file.  After installation, /etc/logrotate.conf should resemble the following:

# see "man logrotate" for details
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 4
# create new (empty) log files after rotating old ones
create
# uncomment this if you want your log files compressed
#compress
# RPM packages drop log rotation information into this directory
include /etc/logrotate.d
# no packages own wtmp -- we'll rotate them here
/var/log/wtmp {
    monthly
    minsize 1M
    create 0664 root utmp
    rotate 1
}
/var/log/btmp {
    missingok
    monthly
    minsize 1M
    create 0600 root utmp
    rotate 1
}
# system-specific logs may be also be configured here.

In previous versions, /etc/logrotate.conf was set up to retain 999 archived/rotated logs, but as of 6.2 the configuration above is standard.

Before covering the basic premise and purpose of this configuration file, one can see this exact configuration file explained in more detail at http://www.techrepublic.com/article/manage-linux-log-files-with-logrotate/

The options declared in the default configuration are conditions that, when met, rotate logs accordingly:

  1. The first option specifies when to invoke log rotation.  By default this is set to "weekly" and may need to be adjusted to "daily".  This only swaps log files out for new ones; it does not delete any log files.
  2. The second option specifies how many archived/rotated log files to keep on disk.  The default, "rotate 4", keeps four archives of each log (four weeks' worth on the default weekly schedule); older archives are deleted.
  3. The third option specifies what to do after rotating a log file out.  The default - which should not be changed - is to create a new/fresh log after rotating out its older counterpart.
  4. The fourth option - which is commented out - specifies what to do with the archived log files.  It is highly recommended to remove the comment mark so that archived log files are compressed: saving on disk space.
  5. A fifth option which is not present in the default conf is the "size" option.  This specifies how to handle logs that reach a certain size, such as "size 15M".  This option should be employed: especially if an administrator has SNMP logs that grow exponentially or notices that a particular XenServer's Syslog files are growing faster than logrotate can rotate and dispose of archived files.
  6. The "include" option specifies a sub-directory wherein unique logrotate configurations can be specified for individual log files.
  7. The remaining portion should be left as-is.


In summary for logrotate, one is advised to measure use of the root disk using "df -h" and to tune logrotate.conf as needed to ensure Syslog does not inadvertently consume available disk space.
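
As an illustration only - the file name and values here are examples to be tuned per environment - a per-log stanza combining the size and compression options discussed above could be dropped into /etc/logrotate.d/:

/var/log/messages {
    size 15M
    rotate 4
    compress
    missingok
}

This would rotate the file whenever it exceeds 15 megabytes, keep four compressed archives, and silently skip the file if it is missing.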

And Now: Syslog Forwarding

Again, this is a long-standing feature and one I have been looking forward to explaining, highlighting, and providing examples for.  However, I have had a kind of writer's block for many reasons: mainly that it ties into Syslog, logrotate, and XenCenter, but also that there is a tradeoff.

I mentioned before that Syslog can forward messages to other hosts.  Furthermore, it can forward Syslog messages to other hosts without writing a copy of the log to local disk.  What this means is that a single XenServer or a pool of XenServers can send their log data to a "Syslog Aggregator".

The trade off is that one cannot generate a server status report via XenCenter, but must instead gather the logs from the Syslog aggregation server and manually submit them for review.  That being said, one can ensure that low root disk space is not nearly as high a concern on the "Admin Todo List" and can retain vast amounts of log data for a deployment of any size: based on dictated industry practices or, sarcastically, for nostalgic purposes.

The principles of Syslog and logrotate.conf will apply to the Syslog Aggregator as well, for what good is a Syslog server if not configured properly to ensure it does not fill itself up?  The requirements to instantiate a Syslog aggregation server, configure the forwarding of Syslog messages, and so forth are quite simple:

  1. Port 514 must be opened on the network
  2. The Syslog aggregation server must be reachable - either by being on the same network segment or not - by each XenServer host
  3. The Syslog aggregation server can be a virtual or physical machine; Windows or Linux-based with either a native Syslog daemon configured to receive external host messages or using a Windows-based Syslog solution offering the same "listening" capabilities.
  4. The Syslog aggregation server must have a static IP assigned to it
  5. The Syslog aggregation server should be monitored and tuned just as if it were Syslog/logrotate on a XenServer
  6. For support purposes, logs should be easily copied/compressed from the Syslog aggregation server - such as using WinSCP, scp, or other tools to copy log data for support's analysis

The quickest means to establish a simple virtual or physical Syslog aggregation server - in my opinion - is to reference the following two links.  These describe the installation of a base Debian-based system with specific intent to leverage Rsyslog for the recording of remote Syslog messages sent to it over UDP port 514 from one's XenServers:

http://www.aboutdebian.com/syslog.htm

http://www.howtoforge.com/centralized-rsyslog-server-monitoring
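
For reference, the heart of the receiving side boils down to two directives in the aggregator's /etc/rsyslog.conf (legacy rsyslog syntax shown; a minimal sketch rather than a hardened configuration):

# accept Syslog messages from remote hosts over UDP port 514
$ModLoad imudp
$UDPServerRun 514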

Alternatively, the following is an all-in-one guide (using Debian) with Syslog-NG:

http://www.binbert.com/blog/2010/04/syslog-server-installation-configuration-debian/

Once the server is instantiated and ready to record remote Syslog messages, it is time to open XenCenter.  Right click on a pool master or stand-alone XenServer and select "Properties":


In the window that appears - in the lower left-hand corner - is an option for "Log Destination":

To the right, one should notice the default option selected is "Local".  From there, select the "Remote" option and enter the IP address (or FQDN) of the remote Syslog aggregate server as follows:

Finally, select "OK" and the stand-alone XenServer (or pool) will update its Syslog configuration, or more specifically, /var/lib/syslog.conf.  The reason for this is so Elastic Syslog can take over the normal duties of Syslog: forwarding messages to the Syslog aggregator accordingly.

For example, once configured, the local /var/log/kern.log file will state:

Sep 18 03:20:27 bucketbox kernel: Kernel logging (proc) stopped.
Sep 18 03:20:27 bucketbox kernel: Kernel log daemon terminating.
Sep 18 03:20:28 bucketbox exiting on signal 15

Certain logs will still continue to be recorded on the host, so it may be desirable to edit /var/lib/syslog.conf and comment out lines where a "-/var/log/some_filename" destination is specified, as lines with "@x.x.x.x" dictate forwarding to the Syslog aggregator.  As an example, I have marked the lines in bold to show where comments should be added to prevent further logging to the local disk:

# Save boot messages also to boot.log
local7.*             @10.0.0.1
# local7.*         /var/log/boot.log

# Xapi rbac audit log echoes to syslog local6
local6.*             @10.0.0.1
# local6.*         -/var/log/audit.log

# Xapi, xenopsd echo to syslog local5
local5.*             @10.0.0.1
# local5.*         -/var/log/xensource.log

After one - The Administrator - has decided what logs to keep and what logs to forward, Elastic Syslog can be restarted so the changes take effect by executing:

/etc/init.d/syslog restart

Since Elastic Syslog - a part of XenServer - is being utilized, the init script will ensure that Elastic Syslog is bounced and that it is responsible for handling Syslog forwarding, etc.
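
A quick way to confirm forwarding works end to end is the logger utility, which hands an arbitrary message to the local Syslog daemon; the message should then appear on the aggregator.  The facility and priority below are just an example:

logger -p local5.info "syslog forwarding test from $HOSTNAME"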

 

So, with this - I hope you find it useful and as always: feedback and comments are welcomed!

 

--jkbs | @xenfomation

 

 

 


XenServer Root Disk Maintenance

The Basis for a Problem

UPDATE 21-MAR-2015: Thanks to feedback from our community, I have added key notes and additional information to this article.

For all that it does, XenServer has a tiny installation footprint: 1.2 GB (roughly).  That is the modern day equivalent of a 1.44" disk, really.  While the installation footprint is tiny, well, so is the "root/boot" partition that the XenServer installer creates: 4GB in size - no more, no less, and don't alter it! 

The same is also true - during the install process - for the secondary partition that XenServer uses for upgrades and backups:

The point is that this amount of space does not facilitate much room for log retention, patch files, and other content.  As such, it is highly important to tune, monitor, and perform clean-up operations on a periodic basis.  Without attention, over time all hotfix files, syslog files, temporary log files, and other forms of data can accumulate to the point at which the root disk becomes full.

UPDATE: If you are wondering where the swap partition is, wonder no more.  For XenServer, swap is file-based and is instantiated during the boot process of XenServer.  As for the 4GB partitions, never alter their size: upgrades, etc. will re-align the partitions to match upstream XenServer release specifications.

One does not want a XenServer (or any server for that matter) to have a full root disk, as this will lead to a full stop of processes as well as virtualization, for the full disk will go "read only".  Common symptoms are:

  • VMs appear to be running, but one cannot manage a XenServer host with XenCenter
  • One can ping the XenServer host, but cannot SSH into it
  • If one can SSH into the box, one cannot write or create files: "read only file system" is reported
  • xsconsole can be used, but it returns errors when "actions" are selected

So, while there is a basis for a problem, the following article offers the basis for a solution (with emphasis on regular administration).

Monitoring the Root Disk

Shifting into the first person, I am often asked how I monitor my XenServer root disks.  In short, I utilize tools that are built into XenServer along with my own "Administrative Scripts".  The most basic way to see how much space is available on a XenServer's root disk is to execute the following:

df -h

This command will show you "disk file systems" and the "-h" means "human readable", ie Gigs, Megs, etc.  The output should resemble the following and I have made the line we care about in bold font:

Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             4.0G  1.9G  1.9G  51% /
none                  299M   28K  299M   1% /dev/shm
/opt/xensource/packages/iso/XenCenter.iso
                       56M   56M     0 100% /var/xen/xc-install

A more "get to the point" way is to run:

df -h | grep "/$" | head -n 1

Which produces the line we are concerned with:

/dev/sda1             4.0G  1.9G  1.9G  51% /

The end result is that we know 51% of the root partition is used.  Not bad, really.  Still, I am a huge fan of automation and will now discuss a simple way that this task can be run - automatically - for each of your XenServers.

What I am providing is essentially a simple BASH script that checks a XenServer's local disk.  If the local disk use exceeds a threshold (which you can change), it will send an alert to XenCenter so that the tactics described further in this document can be employed for the assurance of as much free space as possible.

Using nano or VI, create a file in the /root/ (root's home) directory called "diskmonitor" and paste in the following content:

#!/bin/bash
# Quick And Dirty Disk Monitoring Utility
# Get this host's UUID
thisUUID=`xe host-list name-label=$HOSTNAME params=uuid --minimal`
# Threshold of disk usage (in percent) to report on
threshold=75    # an example of how much disk can be used before alerting
# Get disk usage: the fifth column of df is the "Use%" figure; strip the % sign
diskUsage=`df -h | grep "/$" | head -n 1 | awk '{ print $5 }' | sed -n -e "s/%//p"`
# Check and, if over the threshold, raise an alert visible in XenCenter
if [ $diskUsage -gt $threshold ]; then
     xe message-create host-uuid=$thisUUID name="ROOT DISK USAGE" body="Disk space use has reached $diskUsage% on $HOSTNAME!" priority="1"
fi

After saving this file be sure to make it executable:

chmod +x /root/diskmonitor

The "#!/bin/bash" at the start of this script now becomes imperative as it tells the user space (when called upon) to use the BASH interpreter.

UPDATE: To execute this script manually, one can execute the following command if in the same directory as this script:

./diskmonitor

This convention is used so that scripts can be executed just as if they were a binary/compiled piece of code.  If the "./" prefix is an annoyance, move /root/diskmonitor to /sbin/ -- this will ensure that one can execute diskmonitor without the "dot forward-slash" prefix while in other directories:

mv /root/diskmonitor /sbin/
# Now you should be able to execute diskmonitor from anywhere
diskmonitor

If you move the diskmonitor script make note of where you placed it as this directory will be needed for the cron entry.

For automation of the diskmonitor script one can now leverage cron: adding an entry to root's "crontab" that specifies a recurring time at which diskmonitor should be executed (behind the scenes).

The following is a basic outline as how to leverage cron so that diskmonitor will be executed four times per day.  Now, if you are looking for more information regarding cron, what it does, and how to configure it for other automation-based task then visit http://www.thegeekstuff.com/2009/06/15-practical-crontab-examples/ for more detailed examples and explanations.

1.  From the XenServer host command-line execute the following to add an entry to crontab for root:

crontab -e

2.  This will open root's crontab in VI or nano (text editors) where one will want to add one of the following lines based on where diskmonitor has been moved to or if it is still located in the /root/ directory:

# If diskmonitor is still located in /root/
00 00,06,12,18 * * * /root/diskmonitor
# OR if it has been moved to the /sbin/ directory
00 00,06,12,18 * * * diskmonitor

3.  After saving this, we now have a cron entry that runs diskmonitor at midnight, six in the morning, noon, and 6 in the evening (military time) for every day of every week of every month.  If the script detects that the root drive on a XenServer is > 75% "used" (you can adjust this), it will send an alert to XenCenter where one can leverage - further - built in tools for email notifications, etc. 

The following is an example of the output of diskmonitor, but it is apropos to note that the following test was done using a threshold of 50% -- yes, in Creedence there is a bit more free space!  Kudos to Dev!

One can expand upon the script (and XenCenter), but let's focus on a few areas where root disk space can be slowly consumed.

Removing Old Hotfixes

After applying one or more hotfixes to XenServer, copies of each decompressed hotfix are stored in /var/patch.  The main reason for this - in short - is that in pooled environments, hotfixes are distributed from the pool master to each slave to eliminate the need to repetitively download one hotfix multiplied by the number of hosts in a pool.

The more complex reason is for consistency, for if a host becomes the master of the pool, it must reflect the same content and configuration as its predecessor did and this includes hotfixes.

The following is an example of what the /var/patch/ directory can look like after the application of one or more hotfixes:

Notice the /applied sub-directory?  We never want to remove that. 

UPDATE 21-MAR-2015:  Thanks to Tim, the Community Comments, and my Senior Lead for validating I was not "crazy" in my findings before composing this article: "xe patch-destroy" did not do its job as many commented.  It has been resolved post 6.2, so I thank everyone - especially Dev - for addressing this.

APPROPRIATE REMOVAL:

To appropriately remove these patch files, one should utilize the "xe patch-destroy" command.  While I do not have a "clever" command-line example to take care of all files at once, the following should be run against each file that has a UUID-based naming convention:

cd /var/patch/

xe patch-destroy uuid=<FILENAME, SUCH AS 4d2caa35-4771-ea0e-0876-080772a3c4a7>
(repeat "xe patch-destroy uuid=" command for each file with the UUID convention)

While this is not optimum, especially to run per-host in a pool, it is the prescribed method and as I have a more automated/controlled solution, I will naturally document it.
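
In the meantime, a minimal loop sketch - under the assumption that every UUID-named file in /var/patch corresponds to a patch UUID known to "xe patch-list" (verify on your own system before use):

cd /var/patch/
for f in $(ls | grep -E '^[0-9a-f-]{36}$'); do
    xe patch-destroy uuid=$f
done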

EMERGENCY SITUATIONS:

In the event that removal of other contents discussed in this article does not resolve a full root disk issue, the following can be used to remove these patch files.  However, it must be emphasized that a situation could arise wherein the lack of these files will require a re-download and install of said patches:

find /var/patch -maxdepth 1 | grep "[0-9]" | xargs rm -f

Finally, if you are in the middle of applying hotfixes do not perform the removal procedure (above) until all hosts are rebooted, fully patched, and verified as in working order.  This applies for pools - especially - where a missing patch file could throw off XenCenter's perspective of what hotfixes have yet to be installed and for which host.

The /tmp Directory

Plain and simple, the /tmp directory is truly meant for just that: holding temporary data.  Pre-Creedence, one can access a XenServer's command-line and execute the following to see a quantity of ".log" files:

cd /tmp
ls

As visualized, over time one can see an accumulation of many, many log files.  Albeit these are small at the individual file perspective, but collectively... they take up space.

UPDATE 21-MAR-2015:  Again, thanks to everyone, as these logs were always intended to be "removed" automatically once a Guest VM was started.  As of 6.5 and beyond this section is irrelevant!  On earlier releases, the accumulated ".log" files can be removed by executing:

cd /tmp/
rm -rf *.log

This will remove only ".log" files so any driver ISO images stored in /tmp (or elsewhere) should be manually addressed.

Compressed Syslog Files

The last item is to remove all compressed Syslog files stored under /var/log.  These usually consume the most disk space and as such, I will be authoring an article shortly to explain how one can tune logrotate and even forward these messages to a Syslog aggregator.

UPDATE:  As a word of advice, we are only looking to clear "*.gz" (compressed/archived) log files.  Once these are deleted, they are gone.  Naturally this means a server status report gathered for collection will lack historical information, so one may consider copying these off to another host (using scp or WinSCP) before following the next steps to remove them under a full root disk scenario.
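
For example, a minimal scp sketch for copying the archives off first (the destination host name and path are placeholders):

cd /var/log/
scp *.gz admin@logarchive:/srv/xenserver-logs/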

In the meantime, just as before, one can execute the following command to keep current syslog files intact but remove old, compressed log files:

cd /var/log/
rm -rf *gz

So For Now...

It is at this point one has a tool to know when a disk has hit capacity and methods with which to clean up specific items.  These can be run in an automated or manual fashion: it is truly up to the admin's style of work.

Please be on the lookout for my next article involving Syslog forwarding, log rotation, and so forth, as this will help any size deployment of XenServer: especially where regulations for log retention are a strict requirement.

Feel free to post any questions, suggestions, or methods you may even use to ensure XenServer's root disk does not fill up.

 

--jkbs | @xenfomation

 

 


Running Scientific Linux Guest VMs on XenServer


What is Scientific Linux?

In short, Scientific Linux is a customized RedHat/CentOS Linux distribution provided by CERN and Fermilab: popular in educational institutions as well as laboratory environments.  More can be read about Scientific Linux here: https://www.scientificlinux.org/

From my own long-term testing - before XenServer 6.2 and our pre-release/Alpha, Creedence - I have run both Scientific Linux 5 and Scientific Linux 6 without issues.  This article's scope is to show how one can install Scientific Linux and, more specifically, ensure the XenTools Guest Additions for Linux are installed, as these do not require any form of "Xen-ified" kernel.

XenServer and Creedence

The following are my own recommendations to run Scientific Linux in XenServer:

  1. I recommend using XenServer 6.1 through any of the Alpha releases due to improvements with XenTools
  2. I recommend using Scientific Linux 5 or Scientific Linux 6
  3. The XenServer VM Template one will need to use will be either CentOS 5 or CentOS 6; whether 32 or 64 bit depends on the release of Scientific Linux you will be using

One will also require a URL to install Scientific Linux from its repository, found at http://ftp.scientificlinux.org/linux/scientific/

The following are URLs I recommend for use during the Guest Installation process (discussed later):

Scientific Linux 5 or 6 Guest VM Installation

With XenCenter, the process of installing Scientific Linux 5.x or Scientific Linux 6 uses the same principles.  You need to create a new VM, select the appropriate CentOS template, and define the VM parameters for disk, RAM, and networking:

1.  In XenCenter, select "New VM":

2.  When prompted for the new VM Template, select the appropriate CentOS-based template (5 or 6, 32 or 64 bit):

3.  Follow the wizard to add processors, disc, and networking information

4.  From the console, follow the steps as to install Scientific Linux 5 or 6 based on your preferences.

5.  After rebooting, login as root and execute the following command within the Guest VM:

yum update

6.  Once yum has applied any updates, reboot the Scientific Linux 5 or 6 Guest VM by executing the following within the Guest VM:

reboot

7.  With the Guest VM back up, login as root and mount the xs-tools.iso within XenCenter:

8.  From the command line, execute the following commands to mount xs-tools.iso within the Guest VM as well as to run the install.sh utility:

cd ~
mkdir tools
mount /dev/xvdd tools/
cd tools/Linux/
./install.sh

9.  With Scientific Linux 5 you will be prompted to install the XenTools Guest Additions - select yes and when complete, reboot the VM:

reboot

10.  With Scientific Linux 6 you will notice the following output:

Fatal Error: Failed to determine Linux distribution and version.

11.  This is not a fatal error, but one induced because the distro build and revision are not presented as expected.  This means that you will need to manually install the XenTools Guest Additions by executing the following commands and rebooting:

rpm -ivh xe-guest-utilities-xenstore-<version number here>.x86_64.rpm
rpm -ivh xe-guest-utilities-<version number here>.x86_64.rpm
reboot

Finally after the last reboot (post guest addition install) one will notice from XenCenter that the network address, stats, and so forth are available (including the ability to migrate the VM):

 

I hope this article helps any of you out there and feedback is always welcomed!

--jkbs

@xenfomation

 


Resetting Lost Root Password in XenServer 6.2

The Situation

Bad things can happen... badly.  In this case the root password to manage a XenServer (version 6.2) was... lost.

Physical or remote login to the XenServer 6.2 host failed authentication, naturally, and XenCenter had been disconnected from the host: requiring an administrator to provide these precious credentials, but in vain.

An Alternate Situation

Had XenCenter been left open (offering command-line access to the XenServer host in question), the following command could have been used from the XenServer's command line to initiate a root password reset:

passwd

Once the root user's password has been changed, the connection from XenCenter to the XenServer host will need to be reestablished: using the root username and "new" password.

Once connected, the remainder of this article becomes irrelevant; otherwise, you may very well need to...

Boot into Linux Single User Mode

Be it forgetfulness, a change of guard, another administrator changing the password, or simply a typo in company documentation, the core problem being addressed via this post is that one cannot connect to XenServer 6.2 as the root password is... lost or forgotten.

As a secondary problem, one has lost patience and has obtained physical or iLO/iDRAC access to the XenServer in question, but still the root password is not accepted:

 

The Shortest Solution: Breaking The Law of Physical Security

I am not encouraging hacking, but physical interaction with the XenServer in question and altering the boot to "linux single user mode" is the last solution to this problem.  To do this, one will need to have/understand:

  • Physical Access, iLO, iDRAC, etc
  • A reboot of the XenServer in question will be required

With disclaimers aside I now highly recommend reading and reviewing the steps outlined below before going through the motions. 

Some steps are time sensitive, so being prepared is merely a part of the overall plan.

  1. After gaining physical or iLO/iDRAC access to the XenServer in question, reboot it!  With iLO and iDRAC, there are options to hard or soft reset a system and either option is fine.
  2. Burn the following image into your mind for after the server reboots and runs through hardware BIOS/POST tests, you will see the following for 5 seconds (or so):
  3. Immediately grab the keyboard and enter the following:
    menu.c32 (press enter)
  4. The menu.c32 boot prompt will appear and again, you will only have 5 or so seconds to select the "XE" entry and press Tab to edit boot options:
  5. Now, at the bottom of the screen one will see the boot entry information.  Don't worry, you have time so make sure it is similar to the following:
  6. Near the end of the line, one should see "console=tty0 quiet vga=785 splash quiet": replace "quiet vga=785 splash" with "linux single".  More specifically - without the quotes - such as:
    linux single
  7. With that completed, simply press enter as to boot into Linux's single user mode.  You should eventually be dropped into a command line prompt (as illustrated below):
  8. Finally, we can reset the root password to something one can remember by executing the Linux command:
    passwd

  9. When prompted, enter the new root user password: you will be asked to verify it and upon success you should see the following:
  10. Now, enter the following command to reboot the XenServer in question:
    reboot
  11. Obviously, this will reboot the XenServer as illustrated below:
  12. Let the system fully reboot and present the xsconsole.  To verify that the new password has taken effect, select "Local Command Shell" from xsconsole.  This will require you to authenticate as the root user:
  13. If successful you will be dropped to the local command shell and this also means you can reconnect and manage this XenServer via XenCenter with the new root password!

Patching XenServer at Scale

In January, I posted a how-to guide covering the installation of XenServer in a large scale environment, and this month we're going to talk about patching XenServer in a similar environment. Patching any operating environment is an important aspect of running a production installation, and XenServer is no different. Patching a XenServer manually can be done in one of two ways; either through XenCenter and its rolling pool upgrade option or via the CLI. The rolling pool upgrade wizard has been available since XenServer 6.0, and not only applies hotfixes to all the servers in a pool in the correct order, but also ensures any running VMs are migrated if reboots are required. If you prefer to apply the patches using the CLI, it becomes your responsibility to perform the VM migration, but the process is quite simple. XenServer customers with a Citrix support contract can utilize the rolling pool upgrade wizard, while free users have the option of manually patching using the CLI. Of course these two options can be used in a large scale environment, but generally the requirement is to script everything, and that's where this blog comes in.

Assumptions

The core assumption in the script in this blog is that the XenServer hosts are not in a pool. If the hosts are in a pool, then you should apply patches to the pool master first, and then any slaves. Since we're building on the environment in my previous blog which had standalone hosts, this assumption is valid.

Preparation Steps

  1. Download the desired hotfixes, patches and service packs from either citrix.com (http://support.citrix.com/product/xens/v6.2.0/) or xenserver.org (http://xenserver.org/overview-xenserver-open-source-virtualization/download.html)
  2. Extract the xsupdate file of each patch into a directory on an NFS share
  3. Test each patch to verify it works in your environment. While not required, I always like to do this because QA can't know every possible configuration and bugs do happen.
  4. Create a file named manifest and place it in the same directory as the xsupdate files. The manifest file will contain a single line for each patch, and those patches will be processed in order. An example manifest file is provided below, and any given line can be commented out using the hash (#) character.
    XS62E001.xsupdate
    XS62E002.xsupdate
    XS62E004.xsupdate
    XS62E005.xsupdate
    XS62E009.xsupdate
    XS62E010.xsupdate
    XS62E011.xsupdate
    XS62E012.xsupdate
    XS62ESP1.xsupdate
  5. Create a script file named apply-patches.sh and place it in a known location. The contents of the script will be:
    #!/bin/sh 
    # apply all XenServer patches which have been approved in our manifest
    
    mkdir /mnt/xshotfixes
    mount 192.168.98.3:/vol/exports/isolibrary/xs-hotfixes /mnt/xshotfixes
    
    
    HOSTNAME=$(hostname)
    HOSTUUID=$(xe host-list name-label=$HOSTNAME --minimal)
    while read PATCH
    do 
    if [ "$(echo "$PATCH" | head -c 1)" != '#' ]
    then 
    	PATCHNAME=$(echo "$PATCH" | awk -F: '{ split($1,a,"."); printf ("%s\n", a[1]); }')
    	echo "Processing $PATCHNAME"
    	PATCHUUID=$(xe patch-list name-label=$PATCHNAME hosts=$HOSTUUID --minimal)
    	if [ -z "$PATCHUUID" ]
    	then
    		echo "Patch not yet applied; applying .."
    		PATCHUUID=$(xe patch-upload file-name=/mnt/xshotfixes/$PATCH)
    		if [ -z "$PATCHUUID" ] #empty uuid means patch uploaded, but not applied to this host
    		then
    			PATCHUUID=$(xe patch-list name-label=$PATCHNAME --minimal)
    		fi
    		#apply the patch to *this* host only
    		xe patch-apply uuid=$PATCHUUID host-uuid=$HOSTUUID
    
    		# remove the patch files to avoid running out of disk space in the future
    		xe patch-clean uuid=$PATCHUUID 
    		
    		#figure out what the patch needs to be fully applied and then do it
    		PATCHACTIVITY=$(xe patch-list name-label=$PATCHNAME params=after-apply-guidance | sed -n 's/.*: \(.*\)/\1/p')
    		if [ "$PATCHACTIVITY" == 'restartXAPI' ]
    		then
    			xe-toolstack-restart
    			# give time for the toolstack to restart before processing any more patches
    			sleep 60
    		elif [ "$PATCHACTIVITY" == 'restartHost' ]
    		then
    		# we need to reboot, but we may not be done.
    			# need to create a link to our script
    			
    			# first find out if we're already being run from a reboot
    			MYNAME="`basename \"$0\"`"
    			if [ "$MYNAME" == 'apply-patches.sh' ]
    			then
    				# I'm the base path so copy myself to the correct location
    				cp "$0" /etc/rc3.d/S99zzzzapplypatches  
    			fi
    			
    			reboot
    			exit
    		fi
    		
    	else
    		echo "$PATCHNAME already applied"
    	fi
    	
    fi
    done < "/mnt/xshotfixes/manifest"
    
    echo "done"
    umount /mnt/xshotfixes
    rmdir /mnt/xshotfixes
    
    # lastly if I'm running as part of a reboot; kill the link
    rm -f /etc/rc3.d/S99zzzzapplypatches 

Applying Patches

Applying patches is as simple as running the script file and letting it do what it needs to do. Here's how it works...

  1. We need to find out if the patch has already been applied.
  2. If the patch hasn't been applied, we want to upload it and then apply it. Since any given patch might require the toolstack to be restarted, we check for that and restart the toolstack. Additionally we need to handle the case where the patch might require a reboot. If that's the case, we want to reboot, but also might need to process additional patches. To account for that, we'll insert ourselves into the reboot sequence to keep processing more patches until we've reached the end.
  3. Since we want to be sensitive to disk space usage, we'll cleanup the patch files once each patch has been applied.
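
Running the script is then simply a matter of marking it executable and invoking it on each host (or letting the reboot hook described above re-invoke it):

chmod +x apply-patches.sh
./apply-patches.sh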

 

This script becomes quite valuable when used in conjunction with the provisioning script in my blog on installing XenServer at scale. Simply copy the patch script to /etc/rc3.d/S99zzzzapplypatches and add that command to first-boot-script.sh prior to the final reboot. With the combination of these two scripts, you now can install XenServer at scale, and ensure those newly installed XenServer hosts are fully patched from the beginning.     


How-to: Installing XenServer at Scale

Once upon a time, in a time far, far away (don’t most good stories start this way?) XenServer was so easy to get installed and running that we promoted it as “Ten Minutes to Xen”.  While this is still often the case for small installations, even ten minutes can be problematic for some, and even more so when hundreds of hosts are involved.  In this article, we’ll expand upon the XenServer Quick Installation Guide and show how you can scale out your XenServer environment quickly using a scripting model, and ensure you have correct monitoring and logging in place by default. 

Assumptions

This article assumes you've already installed XenServer on one server and validated that no additional drivers are required.  It also assumes that you've configured your server BIOS settings to be identical across all servers, and that PXE is supported on the NIC used as the management network.  One key item in the preparation is that the servers are set to boot in legacy BIOS mode and not UEFI.

Preparation steps

1. Download the XenServer installation ISO media: http://xenserver.org/open-source-virtualization-download.html

2. Extract the entire contents of the XenServer installation ISO file to an HTTP, FTP, or NFS location (in this example we'll be using NFS); a short extraction sketch follows the list below.

3. Collect the following information:

Hostname: xenserver
Root password: password
Keyboard locale: us
NTP server address: 0.us.pool.ntp.org
DNS server address: dns.local
Time zone:  America/New_York (supported time zones in RHEL)
Location of extracted ISO file: nfsserver:/
TFTP server IP address: pxehost
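
For step 2, assuming the ISO was downloaded to /tmp and the NFS export is mounted at /mnt/nfsroot (both paths are illustrative), the extraction might look like this:

# loop-mount the installation ISO and copy its entire contents to the NFS export
mkdir -p /mnt/iso
mount -o loop /tmp/XenServer-install.iso /mnt/iso
cp -a /mnt/iso/. /mnt/nfsroot/
umount /mnt/iso
rmdir /mnt/iso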

Configuring TFTP server to supply XenServer installer

1. In the /tftpboot directory, create a new directory called xenserver

2. Copy the mboot.c32 and pxelinux.0 files from the /boot/pxelinux directory of the XenServer ISO file to the /tftpboot directory

3. Copy the install.img file from the root directory of the XenServer ISO file to the /tftpboot/xenserver directory

4. Copy the vmlinuz and xen.gz files from the /boot directory of the XenServer ISO file to the /tftpboot/xenserver directory

5. In the /tftpboot directory, create a new directory called pxelinux.cfg

The above steps are covered in this script: 

mkdir /mnt/xsinstall
mount [XenServer ISO Extract Location] /mnt/xsinstall
cd /tftpboot
mkdir xenserver
cp /mnt/xsinstall/boot/pxelinux/mboot.c32 ./
cp /mnt/xsinstall/boot/pxelinux/pxelinux.0 ./
cp /mnt/xsinstall/install.img ./xenserver
cp /mnt/xsinstall/boot/vmlinuz ./xenserver
cp /mnt/xsinstall/boot/xen.gz ./xenserver
mkdir pxelinux.cfg

6. In the /tftpboot/pxelinux.cfg directory, create a new configuration file called default

7. Edit the default file to contain the following information. Note that this configuration includes remote logging to a SYSLOG server.

default xenserver-auto
label xenserver-auto
	kernel mboot.c32
	append xenserver/xen.gz dom0_max_vcpus=1-2 dom0_mem=752M,max:752M com1=115200,8n1 console=com1,vga --- xenserver/vmlinuz xencons=hvc console=hvc0 console=tty0 answerfile=http://[pxehost]/answerfile.xml remotelog=[SYSLOG] install --- xenserver/install.img 

8. Unattended installation of XenServer requires an answer file. Place the answer file in the root directory of your NFS server. Please note that there are many more options than are listed here, but this will suffice for most installations.

  
<?xml version="1.0"?>
<installation mode="fresh" srtype="lvm">
	<bootloader>extlinux</bootloader>
	<primary-disk gueststorage="yes">sda</primary-disk>
	<keymap>[keyboardmap]</keymap>
	<hostname>[hostname]</hostname>
	<root-password>[password]</root-password>
	<source type="nfs">[XenServer ISO Extract Location]</source>
	<admin-interface name="eth0" proto="dhcp"/>
	<name-server>dns.local</name-server>
	<timezone>[Time zone]</timezone>
	<time-config-method>ntp</time-config-method>
	<ntp-server>[NTP Server Address]</ntp-server>
	<script stage="filesystem-populated" type="nfs">[XenServer ISO Extract Location]/post-install-script.sh</script>
</installation>

Configuring the post installation scripts 

1. In the root directory of the XenServer ISO extract location, create a file named post-install-script.sh with the following contents. This script runs after a successful installation and stages a first boot script for post-installation configuration.

 

#!/bin/sh
# marker file showing the post install script ran
touch $1/tmp/post-executed
# $1 is the root of the newly installed filesystem; mount the install share inside it
mkdir $1/mnt/xsinstall
mount [XenServer ISO Extract Location] $1/mnt/xsinstall
# stage the first boot script and hook it into the boot sequence
cp $1/mnt/xsinstall/first-boot-script.sh $1/var/xen/fbs.sh
chmod 777 $1/var/xen/fbs.sh
ln -s /var/xen/fbs.sh $1/etc/rc3.d/S99zzzzpostinstall

2. In the root directory of the XenServer ISO extract location, create a file named first-boot-script.sh with whatever steps you need to configure XenServer for your environment. In the script below, we take care of the following cases:

a. Assign a unique, human-understandable hostname based on the assigned IP address

b. Configure a dedicated storage network which uses jumbo frames

c. Configure centralized logging using SYSLOG

d. Configure network monitoring using NetFlow

e. Apply a socket-based license

f. Remove the first boot script to ensure it doesn't run on subsequent reboots

 

#!/bin/bash
# Wait before start
sleep 60
 
# Get current hostname which then gets us the host-uuid
HOSTNAME=$(hostname)
HOSTUUID=$(xe host-list name-label=$HOSTNAME --minimal)
 
# Get the management pif UUID which gets us the IP address
MGMTPIFUUID=$(xe pif-list params=uuid management=true host-name-label=$HOSTNAME --minimal)
MGMTIP=$(xe pif-param-list uuid=$MGMTPIFUUID | grep 'IP '| sed -n 's/.*: \([0-9.]*\)/\1/p')
 
# From the IP address, get the zone and host
ZONE=$(echo "$MGMTIP" | awk -F: '{ split($1,a,"."); printf ("%d\n", a[3]); }')
HOST=$(echo "$MGMTIP" | awk -F: '{ split($1,a,"."); printf ("%d\n", a[4]); }')
 
# Configure SYSLOG
xe host-param-set uuid=$HOSTUUID logging:syslog_destination=[SYSLOG]
xe host-syslog-reconfigure host-uuid=$HOSTUUID
 
# Assign License to server
xe host-apply-edition edition=per-socket host-uuid=$HOSTUUID license-server-address=[LicenseServer] license-server-port=27000
 
# Setup storage network. For us, that’s on eth1 (aka xenbr1)
STORAGEPIFUUID=$(xe pif-list params=uuid host-name-label=$HOSTNAME device=eth1 --minimal)
xe pif-reconfigure-ip mode=static uuid=$STORAGEPIFUUID ip=192.168.$ZONE.$HOST netmask=255.255.255.0
xe pif-param-set disallow-unplug=true uuid=$STORAGEPIFUUID
xe pif-param-set other-config:management_purpose="Storage" uuid=$STORAGEPIFUUID
NETWORKUUID=$(xe network-list params=uuid bridge=xenbr1 --minimal)
xe network-param-set uuid=$NETWORKUUID MTU=9000
 
# Setup NetFlow monitoring on the 4 network bridges in our hosts
ovs-vsctl -- set Bridge xenbr0 netflow=@nf -- --id=@nf create NetFlow targets=\"192.168.0.34:5566\" active-timeout=30
ovs-vsctl -- set Bridge xenbr1 netflow=@nf -- --id=@nf create NetFlow targets=\"192.168.0.34:5566\" active-timeout=30
ovs-vsctl -- set Bridge xenbr2 netflow=@nf -- --id=@nf create NetFlow targets=\"192.168.0.34:5566\" active-timeout=30
ovs-vsctl -- set Bridge xenbr3 netflow=@nf -- --id=@nf create NetFlow targets=\"192.168.0.34:5566\" active-timeout=30
 
# Rename host in both XenServer and for XenCenter
NEWHOSTNAME=$(echo $HOSTNAME$ZONE-$HOST)
xe host-set-hostname-live host-uuid=$HOSTUUID host-name="$NEWHOSTNAME"
xe host-param-set uuid=$HOSTUUID name-label="$NEWHOSTNAME"
# Disable first boot script for subsequent reboots
rm -f /etc/rc3.d/S99zzzzpostinstall

# Final Reboot
reboot

Configuring the network

There are several considerations we need to account for in our network design. 

1. The XenServer management networks cannot be tagged within XenServer. To work around this, the network ports will need to have a default (native) VLAN assigned to them.

2. The storage network uses jumbo frames and will need an MTU of 9000.

3. The TFTP server will need to be on the primary management network.

4. Since we want to have persistent control over the XenServer hosts and their VMs, we'll want each XenServer to use a static address. In order to accomplish this with DHCP, we'll need to configure our DHCP service to use static MAC address reservations. A sample dhcpd.conf is provided below:

authoritative;
dns-update-style interim;
default-lease-time 28800;
max-lease-time 28800;
 
        option routers                  10.10.2.1;
        option broadcast-address        10.10.2.255;
        option subnet-mask              255.255.255.0;
        option domain-name-servers      10.10.2.2, 10.10.2.3;
        option domain-name              "xspool.local";
 
        subnet 10.10.2.0 netmask 255.255.255.0 {
             pool {
                range 10.10.2.50 10.10.2.250;
 
# one host entry following our naming convention
                host xenserver2-50 {
                  hardware ethernet 00:11:22:33:44:55;
                  fixed-address 10.10.2.50;
                }
                host xenserver2-51 {
                  hardware ethernet 00:11:22:33:44:56;
                  fixed-address 10.10.2.51;
                }
                host xenserver2-52 {
                  hardware ethernet 00:11:22:33:44:57;
                  fixed-address 10.10.2.52;
                }
# prevent unknown hosts from polluting the pool
                deny unknown-clients;
             }
        }
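
Before restarting the DHCP service with the new configuration, ISC dhcpd can syntax-check it without starting the daemon (the configuration path may differ on your distribution):

# parse the configuration file and exit without starting the daemon
dhcpd -t -cf /etc/dhcp/dhcpd.conf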

Booting the servers to perform the install

Since our objective is to perform a scale installation using scripting, we also need to script the PXE boot of our servers, and ensure the PXE boot happens on the first boot only (i.e. we're not continuously reinstalling on each reboot).  Thankfully, remote access cards provide this capability, and I'm currently compiling a set of scripts to cover as many vendors as I can.  In the meantime, a generic IPMI example is shown below.
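
As an illustration of the general approach (and not a substitute for those vendor-specific scripts), most remote access cards that speak IPMI can be told to PXE boot exactly once and then power cycle. The BMC address and credentials below are placeholders:

#!/bin/bash
# Placeholder BMC connection details
BMC_HOST="bmc-host-01"
BMC_USER="admin"
BMC_PASS="password"

# request a one-time PXE boot override for the next startup only
ipmitool -I lanplus -H "$BMC_HOST" -U "$BMC_USER" -P "$BMC_PASS" chassis bootdev pxe

# power cycle the server so the installation begins
ipmitool -I lanplus -H "$BMC_HOST" -U "$BMC_USER" -P "$BMC_PASS" chassis power cycle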

Tying it all together

In this article you've seen how easy it is to deploy a large number of XenServer hosts consistently.  That's not the end of things, and over the coming weeks I'll be posting guides covering many more scale operations with XenServer.

 

