Virtualization Blog

Discussions and observations on virtualization.

Increasing Ubuntu's Resolution


Maximizing Desktop Real-estate with Ubuntu

With the addition of Ubuntu (and the like) to Creedence, you may have noticed that the default resolution is 1024x768.  I certainly noticed it, and after much work on 6.2 and the Creedence beta, I have a quick solution for maximizing the screen resolution.

The thing to consider is that a virtual frame buffer is essentially what is being used.  You can reinvent X configs all day, but the shortest path is to first ensure that the following packages are installed on your Ubuntu guest VM:

sudo apt-get install xvfb xfonts-100dpi xfonts-75dpi xfstt

Once the installation is done, the next step is to edit GRUB's configuration -- specifically /etc/default/grub:

sudo vi /etc/default/grub

Keeping your monitor's maximum resolution in mind (or not, if you plan to remote into Ubuntu using XRDP), look for the variable GRUB_GFXMODE.  This is where you specify the desired BOOT resolutions that we will instruct the guest VM to retain into user space:

GRUB_GFXMODE=1280x960,1280x800,1280x720,1152x768,1152x700,1024x768,800x600

Next, adjust the variable GRUB_GFXPAYLOAD_LINUX so that it equals "keep", or:

GRUB_GFXPAYLOAD_LINUX=keep

Save the changes and be certain to execute the following:

sudo update-grub
sudo reboot

Now you will notice that the resolution is higher even during the boot phase, and this carries into user space: LightDM, Xfce, and the like.
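If you want to verify what the running X session actually negotiated, a quick check from a terminal within the guest is shown below.  This is just a convenience check and assumes the x11-utils package (which provides xdpyinfo) is installed:

xdpyinfo | grep dimensions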

Finally, I would highly suggest installing XRDP for your Guest VM.  It allows you to access that Ubuntu/Xubuntu/etc desktop remotely.  Specific details regarding this can be found on Ask Ubuntu:

http://askubuntu.com/questions/449785/ubuntu-14-04-xrdp-grey
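For the impatient, the short version looks roughly like the following.  Treat this as a sketch - package behavior varies slightly between Ubuntu releases, and the grey-screen fix is explained in the link above:

sudo apt-get install xrdp
echo xfce4-session > ~/.xsession
sudo service xrdp restart

After that, point any RDP client at the guest's IP address on the standard RDP port (3389).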


Enjoy!

--jkbs | @xenfomation

 

 


VGA over Cirrus in XenServer 6.2

Achieve Higher Resolution and 32Bpp

For many reasons – not exclusive to XenServer – the Cirrus video driver has been a staple wherever a basic, somewhat hardware-agnostic video driver is needed.  When one creates a VM within XenServer (specifically 6.2 and earlier versions), the Cirrus video driver is used by default for video...and it does the job.

I had been working on a project with my mentor related to an eccentric OS, but I needed a way to get more real estate to test a HID pointing device by increasing the screen resolution.  This led me to find that at some point in our upstream code, platform (virtual machine metadata) options were added that allow one to "ditch" Cirrus and its 1024x768 resolution for higher resolutions and color depth via a standard VGA driver.

This is not tied into GPU pass-through, nor is it a hack.  It is a valuable way to achieve 32bpp color in Guest VMs with video support, as well as to obtain higher resolutions.

Windows 7: A Before and After Example

To show the difference between "default Cirrus" and the Standard VGA driver (which I will discuss how to switch to shortly), Windows 7 Enterprise had the following resolution to offer me with Cirrus:


After switching the same Guest VM to standard VGA and rebooting, I had the following resolution options within Windows 7 Enterprise:

Switching a Guest for VGA

After you create your VM – Windows, Linux, etc – perform the following steps to enable the VGA adapter:

 

  • Halt the Guest VM
  • From the command line, find the UUID of your VM:
 xe vm-list name-label="Name of your VM"
  • Taking the UUID value, run the following two commands:
 xe vm-param-set uuid=<UUID of your VM> platform:vga=std
 xe vm-param-set uuid=<UUID of your VM> platform:videoram=4
  •  Finally, start your VM and one should be able to achieve higher resolution at 32bpp.

 

It is worth noting that the max amount of "videoram" that can be specified is 16 (megabytes).
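If you would rather not copy UUIDs around by hand, the same steps can be scripted from dom0 along the lines of the sketch below.  The VM name is a placeholder and the videoram value can be anything up to 16:

# look up the VM's UUID, then enable the standard VGA adapter
vmUUID=$(xe vm-list name-label="Name of your VM" --minimal)
xe vm-param-set uuid=$vmUUID platform:vga=std
xe vm-param-set uuid=$vmUUID platform:videoram=4
# start the VM with the new video settings
xe vm-start uuid=$vmUUID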

Switching Back to Cirrus

If – for one reason or another – you want to reset/remove these settings so as to stick with the Cirrus driver, run the following commands:

 xe vm-param-remove uuid=<UUID of your VM> param-name=platform param-key=vga
 xe vm-param-remove uuid=<UUID of your VM> param-name=platform param-key=videoram

Again, reboot your Guest VM; with no VGA preference set, the default Cirrus driver will be used.
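To double-check which video settings are currently in effect for a Guest VM, the platform map can be inspected directly - a quick, read-only check:

 xe vm-param-get uuid=<UUID of your VM> param-name=platform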

What is the Catch?

There is no catch and no performance hit.  The VGA driver's "videoram" specification is carved out of the virtual memory allocated to the Guest VM.  So, for example, if you have 4GB allocated to a Guest VM, subtract at max 16 megabytes from 4GB.  Needless to say, that is a pittance and does not impact performance.

Speaking of performance, my own personal tests were simple and repeated several times:

 

  • Utilized a tool that will remain anonymous
  • Used various operating systems with Cirrus and the resolution at 1024 x 768
  • Ran a 2D graphics test suite
  • Wrote down Product X, Y, or Z's magic number that represents good or bad performance
  • Applied the changes to the VM to use VGA (keeping the resolution at 1024 x 768 for some kind of balance)
  • Ran the same volley of 2D tests after a reboot
  • Wrote down Product X, Y, or Z's magic number that represents good or bad performance

 

In the end, I found a very minor but noticeable difference between Cirrus and VGA: Cirrus usually came in 10-40 points below VGA at the 1024 x 768 level.  Based on the test suite used, this is nothing spectacular, but it is certainly a benefit, as I found no degraded performance elsewhere on the XenServer host (other Guests), etc.

I hope this helps and as always: questions and comments are welcomed!

 

--jkbs | @xenfomation

 


Security bulletin covering "Shellshock"

Over the past several weeks, there has been considerable interest in a series of vulnerabilities in bash with the attention grabbing name of "shellshock". These bash vulnerabilities are more properly known as CVE-2014-6271, CVE-2014-6277, CVE-2014-6278, CVE-2014-7169, CVE-2014-7186 and CVE-2014-7187. As was indicated in security bulletin CTX200217, XenServer hosts were potentially impacted, but investigation was continuing. That investigation has been completed and the associated impact is described in security bulletin CTX200223, which also contains patch information for these vulnerabilities.

Learning about new XenServer hotfixes

When a hotfix is released for XenServer, it will be posted to the Citrix support web site. You can receive alerts from the support site by registering at http://support.citrix.com/profile/watches and following the instructions there. You will need to create an account if you don't have one, but the account is completely free. Whenever a hotfix is released, there will be an accompanying security advisory in the form of a CTX knowledgebase article for it, and those same KB articles will be linked on xenserver.org in the download page.

Patching XenServer hosts

XenServer admins are encouraged to schedule patching of their XenServer installations. Please note that the items contained in the CTX200223 bulletin do impact XenServer 6.2 hosts, and to apply the patch, all XenServer 6.2 hosts will first need to be patched to service pack 1. The complete list of patches can be found on the XenServer download page.     
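For those who prefer the command line over XenCenter, applying a downloaded hotfix generally follows the pattern below.  This is only a sketch - the file name is a placeholder for whichever hotfix you have downloaded from the bulletin:

# upload the hotfix to the pool; this returns the patch UUID
xe patch-upload file-name=<downloaded hotfix>.xsupdate
# apply it to every host in the pool using the UUID returned above
xe patch-pool-apply uuid=<UUID returned by patch-upload>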


Creedence: Debian 7.x and PVHVM Testing

Introduction

On my own time and on my own testing equipment, I have been able to run many Guest VMs in PVHVM containers - both before Creedence and after its release to the public back in June.  With last week's broadcast of Creedence Beta 3's release, I was naturally excited to see Tim's spotlight on PVHVM, and this article's intent is to show - in a test environment only - how I was able to run Debian 7.x (64-bit) in the same fashion.

For more information on combining PV and HVM to establish a PVHVM container, Tim linked a great article in his Creedence Beta 3 post last Monday, which I highly recommend you read, as the finer details are out of scope for this article's intent and purpose.

Why is this important to me?  Quite simply we can go from this....

... to this ...

So now, let's make a PVHVM container for a Debian 7.x (64-Bit) Guest VM within XenCenter!

Requirements

1.  Creedence Beta 3 and XenCenter

2.  The full installation ISO for Debian 7.x (from https://www.debian.org/CD/http-ftp/#stable )

3.  Any changes mentioned below should not be applied to any of the stock Debian templates

4.  This should not be performed on your production environment

Creating A Default Template

With XenCenter open, ensure that from the View options one has "XenServer Templates" selected:

We should now see the default templates that XenServer installs:

1.  Right-click on the "Debian Wheezy 7 (64-bit)" template and save it as "Debian 7":

 

3.  This will produce a "custom template" - highlight it and copy the UUID of the custom template:

4.  The remainder of this configuration will take place from the command-line.

5.  To make the changes to the custom template easier, export the UUID of the custom template we created to avoid copy/paste errors:

export myTemp="af84ad43-8caf-4473-9c4d-8835af818335"
echo $myTemp
af84ad43-8caf-4473-9c4d-8835af818335

6.  With the $myTemp variable created, let us first convert this custom template to a default template by executing:

xe template-param-set uuid=$myTemp other-config:default_template=true

xe template-param-remove uuid=$myTemp param-name=other-config param-key=base_template_name

7.  Now configure the template's "platform" variable to leverage VGA graphics:

xe template-param-set uuid=$myTemp platform:viridian=false platform:device_id=0001 platform:vga=std platform:videoram=16

8.  Due to how some distros work with X, clear the PV-args and set a "vga=792" flag:

xe template-param-set uuid=$myTemp PV-args="vga=792"

9.  Disable the PV-bootloader:

xe template-param-set uuid=$myTemp PV-bootloader=""

10.  Specify that the template uses an HVM-style bootloader (DVD/CD first, then hard drive, and then network):

xe template-param-set uuid=$myTemp HVM-boot-policy="BIOS order"
xe template-param-set uuid=$myTemp HVM-boot-params:order="dcn"

 

Now, before creating a Debian 7.x Guest VM, one should now see in XenCenter that "Debian 7" is listed as a "default template":
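The same check can be made from the command line, if preferred - a quick sketch using the template name from this walk-through:

xe template-list name-label="Debian 7" params=uuid,name-label,other-config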

 

Lastly, for the VGA flag and what it means to most distros, the following table explains the VGA flag and the bit settings used to achieve an XxY resolution at a given color depth:

VGA Resolution and Color Depth reference Chart:

Depth    800×600    1024×768   1152×864   1280×1024  1600×1200
8 bit    vga=771    vga=773    vga=353    vga=775    vga=796
16 bit   vga=788    vga=791    vga=355    vga=794    vga=798
24 bit   vga=789    vga=792    -          vga=795    vga=799

Create A New Debian Guest

From here, one should be able to create a new Guest VM using the template we have just created and walk through the entire install:
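For those who prefer the CLI over XenCenter, a new Guest can also be created from the custom template roughly as follows (a sketch; the new VM's name is arbitrary).  Attach the Debian installation ISO and start the VM as you normally would, and it will boot from the DVD/CD first per the "dcn" order set earlier:

xe vm-install template="Debian 7" new-name-label="deb7-pvhvm-test"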

Post installation, tools can be installed as well!

Enjoy and happy testing!

 

jkbs | @xenfomation


Security bulletin covering XSA-108

Over the past week there has been considerable interest in an embargoed Xen Project security advisory known as XSA-108. On October 1st, 2014, the embargo surrounding this advisory was lifted, and coincident with that action, Citrix released a security bulletin covering XSA-108, as well as two additional advisories which impact XenServer releases.

CVE-2014-7188 (XSA-108) Status

CVE-2014-7188, also known as XSA-108, has received significant press. A patch for this was made available on the Citrix support site on October 1st. The patch is available at CTX200218, and also includes remedies for CVE-2014-7155 and CVE-2014-7156.

Learning about new XenServer hotfixes

When a hotfix is released for XenServer, it will be posted to the Citrix support web site. You can receive alerts from the support site by registering at http://support.citrix.com/profile/watches and following the instructions there. You will need to create an account if you don't have one, but the account is completely free. Whenever a hotfix is released, there will be an accompanying security advisory in the form of a CTX knowledge base article for it, and those same KB articles will be linked on xenserver.org in the download page.

Patching XenServer hosts

XenServer admins are encouraged to schedule patching of their XenServer installations at their earliest opportunity. Please note that this bulletin does impact XenServer 6.2 hosts, and to apply the patch, all XenServer 6.2 hosts will first need to be patched to service pack 1. The complete list of patches can be found on the XenServer download page.     


Before Electing a New Pool Master

Overview

The following is a reminder of specific steps to take before electing a new pool master - especially in High Availability-enabled deployments.  There are circumstances where this happens automatically, whether by High Availability design or in an emergency, but nevertheless, the following steps should be taken when deliberately electing a new pool master while High Availability is enabled.

Disable High Availability

Before electing a new master one must disable High Availability.  The reason is quite simple:

If a new host is designated as master with HA enabled, the subsequent processes and transition time can lead HA to see a pool member as down.  It is doing what it is supposed to do in the "mathematical" sense, but in "reality" it is simply confused.

The end result is that HA could either recover after some time or fence hosts as it attempts to apply fault tolerance in contradiction to the desire to "simply elect a new master".

It is also worth noting that upon recovery - if any Guests which had a mounted ISO are rebooted on another host - "VDI not found" errors can appear even though the VDI is not actually missing.  The mounted ISO image is seen as a VDI, and if that resource is not available on the other host, the Guest VM will fail to resume, presenting the generic VDI error.

Steps to Take

HA must be disabled and for safe practice, I always recommend ejecting all mounted ISO images.  The latter can be accomplished by executing the following from the pool master:

xe vm-cd-eject --multiple

As for HA it can be disabled in two ways: via the command-line or from XenCenter.

From the command line of the current pool master, execute:

xe pool-ha-disable
xe pool-sync

If desired - just for safeguarding one's work - those commands can be executed on every other pool member.

As for XenCenter one can select the Pool/Pool Master icon in question and from the "HA" tab, select the option to disable HA for the pool.

Workload Balancing

For versions of XenServer utilizing Workload Balancing, it is not necessary to halt Workload Balancing during this process.

Now that HA is disabled, switch pool masters, and once all servers are in an active state, re-enable HA from XenCenter or from the command line:

xe pool-recover-slaves
xe pool-ha-enable
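For completeness, the promotion of the new master itself can also be done from the CLI once HA is off.  The sketch below assumes you substitute the UUID of the member that should become the new master:

# find the UUID of the host to promote
xe host-list params=uuid,name-label
# promote it to pool master
xe pool-designate-new-master host-uuid=<UUID of the new master>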

I hope this is helpful and as always: questions and comments are welcomed!

 

--jkbs | @xenfomation


PowerShell SDK examples

Santiago Cardenas from the Citrix Solutions Lab has written a blog post that caught my eye. It's entitled Scripting: Automating VM operations on XenServer using PowerShell, and in it he describes how the Solutions Lab has been using the XenServer PowerShell SDK to automate XenServer deployments at scale. The thing I found most interesting was that he's included several example scripts for common operations, which could be very useful to other people.

If anyone else has example scripts in any of our five SDK languages (C, C#, Java, Python and PowerShell), and would like to share them with the community, please put a note in the comments below. We would love to link to more examples, and maybe even include them in the distribution.

PS If you're interested in the PowerShell SDK, also check out the blog post that Konstantina Chremmou wrote here in May describing improvements in the SDK since the 6.2 release.


Creedence Final Beta Available

As we move steadily towards a release of XenServer Creedence, I'm pleased to announce that we're ending the beta phase of development with the release of Creedence beta.3. Beta.3 sees us as functionally complete, and with the majority of known performance issues resolved. The performance issues resolved range from a dom0 memory leak when VIFs are disabled, through to resolution of a workaround with Mellanox 40Gbps NICs, and some are resolved with both an updated driver bundle and a bump of the ovs version from 2.1.2 to 2.1.3. Functionally, beta.3 differs from beta.2 in having PVHVM support for Ubuntu 14.04, RedHat Enterprise Linux 7 and CentOS 7. Since these are new operating systems for us, the team is really interested in learning what you see for performance and stability for them.

As with all previous pre-release builds, we'd like the community to help ensure Creedence is a rock solid release. This time we're a bit less interested in Creedence itself, and more about the operating and support environment. One of the less known "features" of Citrix support is the free "Insight Services" or "TaaS". TaaS was originally designed to be "Tools-as-a-Service", and deliver on demand insight into the operation of Citrix technologies. With XenServer, Citrix Insight Services consumes a server status report from your XenServer host or pool, and then provides detailed guidance on how to potentially avoid an issue (say due to outdated BIOS or firmware), or resolve an issue you might be having (say by applying a hotfix). Honestly, it's not a bad practice in general to upload a server status report post XenServer install to ensure there aren't any items which could be latent in the deployment; rather like a health check.

How does this relate to Creedence? Well, Insight Services uses a series of plugins to ensure the data is processed properly. The support team has recently updated TaaS to support Creedence, and I'd like to ensure two things. First I'd like to ensure the processing logic is capturing everything it should, and secondly I'd like to ensure that those of you who have been successfully running Creedence don't have any hidden errors. Since this is a free service offered by Citrix, I'd also like the open source XenServer install base to know about it as a way to ensure XenServer hosts are deployed in a manner which will allow for Citrix to support you if the need arises.

Here's how you can help.

  1. Install either beta.2 or beta.3 (beta.3 preferred) from the pre-release downloads: http://xenserver.org/overview-xenserver-open-source-virtualization/prerelease.html
  2. From either XenCenter or the CLI take a server status report.
    • XenCenter: Server status reports can be run from "Tools->Server Status Report ..."
    • CLI: xen-bugtool --yestoall --output zip
  3. Log into TaaS (create a free account if required): https://taas.citrix.com/AutoSupport/
  4. Upload your server status report and see if anything interesting is found. If anything unexpected is found, we'd like to know about it.  The best way to let us know would be to submit an incident to https://bugs.xenserver.org which contains the TaaS information.

Thanks again to everyone who has contributed to the success we're seeing with Creedence.


Log Rotation and Syslog Forwarding

A Continuation of Root Disk Management

First, this article is applicable to a XenServer deployment of any size, and second, it is a continuation of my previous article regarding XenServer root disk maintenance.  The difference is that - for all XenServer deployments - the topic revolves specifically around Syslog: from tuning log rotation, to specifying the number of logs to retain, to leveraging compression, and of course... Syslog forwarding.

All of this is an effort to share tips with new (or seasoned) XenServer administrators on the options available to ensure necessary Syslog data does not fill a XenServer root disk, while also ensuring - for certain industry-specific requirements - that log data is retained without sacrifice.

Syslog: A Quick Introduction

So, what is this Syslog?  In short, it can be thought of as the Unix/Linux equivalent of the Windows Event Log (alongside other logging mechanisms popular to specific applications and Operating Systems).

The slightly longer explanation is that Syslog is not only a daemon, but also a protocol: established long ago for Unix systems to record system and application messages to local disk, as well as offering the ability to forward the same log information to its peers for redundancy, concentration, and to conserve disk space on highly active systems.  For more detailed information on the finer points of the Syslog protocol and daemon, one can review the IETF's specification at http://tools.ietf.org/html/rfc5424.

On a stand-alone XenServer, the Syslog daemon is started on boot, and its configuration file - which defines the source, severity, and type of logs to handle and where to store them - is /etc/syslog.conf.  It is highly recommended that one does not alter this file unless it is necessary and one knows what one is doing.  From boot to reboot, information is stored in various files found under the root disk's /var/log directory.

Taken from a fresh installation of XenServer, the following shows various log files that store information specific to a purpose.  Note that the items in "brown" are sub-directories:

For those seasoned in administering XenServer, it is apparent that at the kernel level and in user space there are not many log files.  However, XenServer is verbose about logging for a very simple reason: collection, analysis, and troubleshooting should an issue arise.

So for a lone XenServer (by default) logs are essentially received by the Syslog daemon and based on /etc/syslog.conf - as well as the source and type of message - stored on the local root file system as discussed:

Within a pooled XenServer environment things are, for the most part, the same.  Since a pool has a master server, log data for the Storage Manager (as a quick example) is trickled up to the master.  This ensures that while each pool member records log data specific to itself, the master server has the aggregate log data needed to support troubleshooting of the entire pool from one point.

Log Rotation

Log rotation, or "logrotate", is what ensures that Syslog files in /var/log do not grow out of hand.  Much like Syslog, logrotate utilizes a configuration file to dictate how often, at what size, and whether compression should be used when archiving a particular Syslog file.  The term "archive" here simply means rotating out the current log so that a fresh, current log can take its place.

Post XenServer installation and before usage, one can measure the amount of free root disk space by executing the following command:

df -h

The output will be similar to the following and the line one should be most concerned with is in bold font:

Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             4.0G  1.9G  2.0G  49% /
none                  381M   16K  381M   1% /dev/shm
/opt/xensource/packages/iso/XenCenter.iso
                       52M   52M     0 100% /var/xen/xc-install

One can see from the example that only 49% of the root disk on this XenServer host has been used.  By repeating this process as the implementation ramps up, an administrator can measure how best to tune logrotate's configuration file.  After install, /etc/logrotate.conf should resemble the following:

# see "man logrotate" for details
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 4
# create new (empty) log files after rotating old ones
create
# uncomment this if you want your log files compressed
#compress
# RPM packages drop log rotation information into this directory
include /etc/logrotate.d
# no packages own wtmp -- we'll rotate them here
/var/log/wtmp {
    monthly
    minsize 1M
    create 0664 root utmp
    rotate 1
}
/var/log/btmp {
    missingok
    monthly
    minsize 1M
    create 0600 root utmp
    rotate 1
}
# system-specific logs may be also be configured here.

In previous versions, /etc/logrotate.conf was set up to retain 999 archived/rotated logs, but as of 6.2 the configuration above is standard. 

Before covering the basic premise and purpose of this configuration file, one can see this exact configuration file explained in more detail at http://www.techrepublic.com/article/manage-linux-log-files-with-logrotate/

The options declared in the default configuration are conditions that, when met, rotate logs accordingly:

  1. The first option specifies when to invoke log rotation.  By default this is set to weekly and may need to be adjusted to "daily".  This only swaps log files out for new ones; it does not delete any log files.
  2. The second option specifies how many archived/rotated log files to keep on disk.  The default of "rotate 4" keeps four weeks' worth of archives; older archives are deleted.
  3. The third option specifies what to do after rotating a log file out.  The default - which should not be changed - is to create a new, fresh log after rotating out its older counterpart.
  4. The fourth option - which is commented out - specifies what to do with the archived log files.  It is highly recommended to remove the comment mark so that archived log files are compressed: saving on disk space.
  5. A fifth option, which is not present in the default configuration, is the "size" option.  This specifies that a log should be rotated once it reaches a certain size, such as "size 15M".  This option should be employed, especially if an administrator has SNMP logs that grow exponentially or notices that a particular XenServer's Syslog files are growing faster than logrotate can rotate and dispose of archived files (see the example after this list).
  6. The "include" option specifies a sub-directory wherein unique logrotate configurations can be specified for individual log files.
  7. The remaining portion should be left as is.
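As an illustration of items 4 and 5, a drop-in file under /etc/logrotate.d can apply compression and a size limit to a single busy log without touching the global defaults.  The file name, target log, and 15M threshold below are only examples:

# /etc/logrotate.d/xensource-custom  (hypothetical drop-in file)
# rotate whenever the log exceeds 15 megabytes, keep five gzipped archives
/var/log/xensource.log {
    size 15M
    rotate 5
    compress
    missingok
    notifempty
}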


In summary for logrotate, one is advised to measure use of the root disk using "df -h" and to tune logrotate.conf as needed to ensure Syslog does not inadvertently consume available disk space.

And Now: Syslog Forwarding

Again, this is a long-standing feature and one I have been looking forward to explaining, highlighting, and providing examples for.  However, I have had a kind of writer's block for many reasons: mainly that it ties into Syslog, logrotate, and XenCenter, but also that there is a trade-off.

I mentioned before that Syslog can forward messages to other hosts.  Furthermore, it can forward Syslog messages to other hosts without writing a copy of the log to local disk.  What this means is that a single XenServer or a pool of XenServers can send their log data to a "Syslog Aggregator".

The trade-off is that one cannot generate a server status report via XenCenter; instead, the logs must be gathered from the Syslog aggregation server and submitted manually for review.  That being said, low root disk space becomes far less of a concern on the "Admin To-Do List", and vast amounts of log data can be retained for a deployment of any size: whether to satisfy dictated industry practices or, somewhat sarcastically, for nostalgic purposes.

The same Syslog and logrotate.conf principles apply to the Syslog aggregator: what good is a Syslog server if it is not configured properly to keep from filling itself up?  The requirements to instantiate a Syslog aggregation server, configure the forwarding of Syslog messages, and so forth are quite simple:

  1. Port 514 must be opened on the network
  2. The Syslog aggregation server must be reachable - either by being on the same network segment or not - by each XenServer host
  3. The Syslog aggregation server can be a virtual or physical machine; Windows or Linux-based with either a native Syslog daemon configured to receive external host messages or using a Windows-based Syslog solution offering the same "listening" capabilities.
  4. The Syslog aggregation server must have a static IP assigned to it
  5. The Syslog aggregation server should be monitored and tuned just as if it were Syslog/logrotate on a XenServer
  6. For support purposes, logs should be easily copied/compressed from the Syslog aggregation server - such as using WinSCP, scp, or other tools to copy log data for support's analysis

The quickest means to establish a simple virtual or physical Syslog aggregation server - in my opinion - is to reference the following two links.  These describe the installation of a base Debian-based system with specific intent to leverage Rsyslog for the recording of remote Syslog messages sent to it over UDP port 514 from one's XenServers:

http://www.aboutdebian.com/syslog.htm

http://www.howtoforge.com/centralized-rsyslog-server-monitoring

Alternatively, the following is an all-in-one guide (using Debian) with Syslog-NG:

http://www.binbert.com/blog/2010/04/syslog-server-installation-configuration-debian/
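Whichever guide you follow, the server-side change boils down to making the Syslog daemon listen on UDP port 514 and file the incoming messages somewhere sensible.  On an rsyslog-based aggregator, for example, the relevant configuration is roughly the sketch below (the file path and template name are illustrative); restart rsyslog after making the change:

# /etc/rsyslog.conf (or a file under /etc/rsyslog.d/) on the aggregator
$ModLoad imudp
$UDPServerRun 514
# file each sending host's messages under its own directory
$template RemoteHost,"/var/log/remote/%HOSTNAME%/%PROGRAMNAME%.log"
*.* ?RemoteHost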

Once the server is instantiated and ready to record remote Syslog messages, it is time to open XenCenter.  Right click on a pool master or stand-alone XenServer and select "Properties":


In the window that appears - in the lower left-hand corner - is an option for "Log Destination":

To the right, one should notice the default option selected is "Local".  From there, select the "Remote" option and enter the IP address (or FQDN) of the remote Syslog aggregate server as follows:

Finally, select "OK" and the stand-alone XenServer (or pool) will update its Syslog configuration, or more specifically, /var/lib/syslog.conf.  The reason for this is so Elastic Syslog can take over the normal duties of Syslog: forwarding messages to the Syslog aggregator accordingly.

For example, once configured, the local /var/log/kern.log file will state:

Sep 18 03:20:27 bucketbox kernel: Kernel logging (proc) stopped.
Sep 18 03:20:27 bucketbox kernel: Kernel log daemon terminating.
Sep 18 03:20:28 bucketbox exiting on signal 15

Certain logs will still continue to be recorded on the host, so it may be desirable to edit /var/lib/syslog.conf and comment out lines where a "-/var/log/some_filename" destination is specified, as the lines containing "@x.x.x.x" dictate forwarding to the Syslog aggregator.  As an example, the commented lines below show where logging to the local disk has been disabled:

# Save boot messages also to boot.log
local7.*             @10.0.0.1
# local7.*         /var/log/boot.log

# Xapi rbac audit log echoes to syslog local6
local6.*             @10.0.0.1
# local6.*         -/var/log/audit.log

# Xapi, xenopsd echo to syslog local5
local5.*             @10.0.0.1
# local5.*         -/var/log/xensource.log

After one - the administrator - has decided what logs to keep and what logs to forward, Elastic Syslog can be restarted so the changes take effect by executing:

/etc/init.d/syslog restart

Since Elastic Syslog - a part of XenServer - is being utilized, the init script will ensure that Elastic Syslog is bounced and that it is responsible for handling Syslog forwarding, etc.

 

So, with this - I hope you find it useful and as always: feedback and comments are welcomed!

 

--jkbs | @xenfomation

 

 

 


XenServer Root Disk Maintenance

The Basis for a Problem

UPDATE 21-MAR-2015: Thanks to feedback from our community, I have added key notes and additional information to this article.

For all that it does, XenServer has a tiny installation footprint: 1.2 GB (roughly).  That is the modern day equivalent of a 1.44" disk, really.  While the installation footprint is tiny, well, so is the "root/boot" partition that the XenServer installer creates: 4GB in size - no more, no less, and don't alter it! 

The same is also true - during the install process - for the secondary partition that XenServer uses for upgrades and backups:

The point is that this amount of space does not leave much room for log retention, patch files, and other content.  As such, it is highly important to tune, monitor, and perform clean-up operations on a periodic basis.  Without attention, over time hotfix files, syslog files, temporary log files, and other forms of data can accumulate to the point where the root disk becomes full.

UPDATE: If you are wondering where the swap partition is, wonder no more.  For XenServer, swap is file-based and is instantiated during the boot process of XenServer.  As for the 4GB partitions, never alter their size; upgrades, etc. will re-align the partitions to match upstream XenServer release specifications.

One does not want a XenServer (or any server, for that matter) to have a full root disk, as this will lead to a full stop of processes - as well as of virtualization - because the full disk will go "read only".  Common symptoms are:

  • VMs appear to be running, but one cannot manage a XenServer host with XenCenter
  • One can ping the XenServer host, but cannot SSH into it
  • If one can SSH into the box, one cannot write or create files: "read only file system" is reported
  • xsconsole can be used, but it returns errors when "actions" are selected

So, while there is a basis for a problem, the following article offers the basis for a solution (with emphasis on regular administration).

Monitoring the Root Disk

Shifting into the first person, I am often asked how I monitor my XenServer root disks.  In short, I utilize tools that are built into XenServer along with my own "Administrative Scripts".  The most basic way to see how much space is available on a XenServer's root disk is to execute the following:

df -h

This command will show you "disk file systems", and the "-h" means "human readable", i.e. Gigs, Megs, etc.  The output should resemble the following, and I have made the line we care about in bold font:

Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             4.0G  1.9G  1.9G  51% /
none                  299M   28K  299M   1% /dev/shm
/opt/xensource/packages/iso/XenCenter.iso
                       56M   56M     0 100% /var/xen/xc-install

A more "get to the point" way is to run:

df -h | grep "/$" | head -n 1

Which produces the line we are concerned with:

/dev/sda1             4.0G  1.9G  1.9G  51% /

The end result is that we know 51% of the root partition is used.  Not bad, really.  Still, I am a huge fan of automation and will now discuss a simple way that this task can be run - automatically - for each of your XenServers.

What I am providing is essentially a simple BASH script that checks a XenServer's local disk.  If the local disk use exceeds a threshold (which you can change), it will send an alert to XenCenter so that the tactics described further in this document can be employed to keep as much space free as possible.

Using nano or VI, create a file in the /root/ (root's home) directory called "diskmonitor" and paste in the following content:

#!/bin/bash
# Quick And Dirty Disk Monitoring Utility
# Get this host's UUID
thisUUID=`xe host-list name-label=$HOSTNAME params=uuid --minimal`
# Threshold of disk usage to report on
threshold=75    # an example of how much disk can be used before alerting
# Get disk usage
diskUsage=`df -h | grep "/$" | head -n 1 | awk {' print $5 '} | sed -n -e "s/%//p"`
# Check
if [ $diskUsage -gt $threshold ]; then
     xe message-create host-uuid=$thisUUID name="ROOT DISK USAGE" body="Root disk usage has reached ${diskUsage}% on ${HOSTNAME} (threshold: ${threshold}%)!" priority="1"
fi

After saving this file be sure to make it executable:

chmod +x /root/diskmonitor

The "#!/bin/bash" at the start of this script now becomes imperative as it tells the user space (when called upon) to use the BASH interpreter.

UPDATE: To execute this script manually, one can execute the following command if in the same directory as this script:

./diskmonitor

This convention is used so that scripts can be executed just as if they were a binary/compiled piece of code.  If the "./" prefix is an annoyance, move /root/diskmonitor to /sbin/ -- this will ensure that one can execute diskmonitor without the "dot forward-slash" prefix while in other directories:

mv /root/diskmonitor /sbin/
# Now you should be able to execute diskmonitor from anywhere
diskmonitor

If you move the diskmonitor script make note of where you placed it as this directory will be needed for the cron entry.

For automation of the diskmonitor script, one can now leverage cron: adding an entry to root's "crontab" and specifying a recurring schedule at which diskmonitor should be executed (behind the scenes). 

The following is a basic outline as how to leverage cron so that diskmonitor will be executed four times per day.  Now, if you are looking for more information regarding cron, what it does, and how to configure it for other automation-based task then visit http://www.thegeekstuff.com/2009/06/15-practical-crontab-examples/ for more detailed examples and explanations.

1.  From the XenServer host command-line execute the following to add an entry to crontab for root:

crontab -e

2.  This will open root's crontab in VI or nano (text editors) where one will want to add one of the following lines based on where diskmonitor has been moved to or if it is still located in the /root/ directory:

# If diskmonitor is still located in /root/
00 00,06,12,18 * * * /root/diskmonitor
# OR if it has been moved to the /sbin/ directory
00 00,06,12,18 * * * diskmonitor

3.  After saving this, we now have a cron entry that runs diskmonitor at midnight, six in the morning, noon, and 6 in the evening (military time) for every day of every week of every month.  If the script detects that the root drive on a XenServer is > 75% "used" (you can adjust this), it will send an alert to XenCenter where one can leverage - further - built in tools for email notifications, etc. 

The following is an example of the output of diskmonitor, but it is apropos to note that the following test was done using a threshold of 50% -- yes, in Creedence there is a bit more free space!  Kudos to Dev!

One can expand upon the script (and XenCenter), but let's focus on a few areas where root disk space can be slowly consumed.

Removing Old Hotfixes

After applying one or more hotfixes to XenServer, copies of each decompressed hotfix are stored in /var/patch.  The main reason for this - in short - is that in pooled environments, hotfixes are distributed from a host master to each host slave to eliminate the need to repetitively download one hotfix multiplied by the number of hosts in a pool. 

The more complex reason is for consistency, for if a host becomes the master of the pool, it must reflect the same content and configuration as its predecessor did and this includes hotfixes.

The following is an example of what the /var/patch/ directory can look like after the application of one or more hotfixes:

Notice the /applied sub-directory?  We never want to remove that. 

UPDATE 21-MAR-2015:  Thanks to Tim, the Community Comments, and my Senior Lead for validating I was not "crazy" in my findings before composing this article: "xe patch-destroy" did not do its job as many commented.  It has been resolved post 6.2, so I thank everyone - especially Dev - for addressing this.

APPROPRIATE REMOVAL:

To appropriately remove these patch files, one should utilize the "xe patch-destroy" command.  While I do not have a "clever" command-line example to take care of all files at once, the following should be run against each file that has a UUID-based naming convention (a simple loop sketch follows below):

cd /var/patch/

xe patch-destroy uuid=<FILENAME, SUCH AS 4d2caa35-4771-ea0e-0876-080772a3c4a7>
(repeat "xe patch-destroy uuid=" command for each file with the UUID convention)

While this is not optimal - especially to run per host in a pool - it is the prescribed method; once I have a more automated/controlled solution, I will naturally document it.
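That said, a simple loop can at least save some typing on a single host.  The sketch below assumes every UUID-named file under /var/patch corresponds to a patch record that "xe patch-destroy" recognizes; as noted below, do not run it while hotfixes are still being applied:

cd /var/patch
# iterate over the UUID-named files only (this skips the "applied" sub-directory)
for patchUUID in $(ls | grep -E '^[0-9a-f-]{36}$'); do
    xe patch-destroy uuid=$patchUUID
done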

EMERGENCY SITUATIONS:

In the event that removal of other contents discussed in this article does not resolve a full root disk issue, the following can be used to remove these patch files.  However, it must be emphasized that a situation could arise wherein the lack of these files will require a re-download and install of said patches:

find /var/patch -maxdepth 1 | grep "[0-9]" | xargs rm -f

Finally, if you are in the middle of applying hotfixes do not perform the removal procedure (above) until all hosts are rebooted, fully patched, and verified as in working order.  This applies for pools - especially - where a missing patch file could throw off XenCenter's perspective of what hotfixes have yet to be installed and for which host.

The /tmp Directory

Plain and simple, the /tmp directory is truly meant for just that: holding temporary data.  Pre-Creedence, one can access a XenServer's command-line and execute the following to see a quantity of ".log" files:

cd /tmp
ls

As visualized (and over time), one can see an accumulation of many, many log files.  Albeit small from the individual-file perspective, collectively... they take up space.

UPDATE 21-MAR-2015:  Again, thanks to everyone, as these logs were always intended to be "removed" automatically once a Guest VM was started.  As of 6.5 and beyond, this section is irrelevant!  On earlier releases, the accumulated ".log" files can be cleared with:

cd /tmp/
rm -rf *.log

This will remove only ".log" files so any driver ISO images stored in /tmp (or elsewhere) should be manually addressed.

Compressed Syslog Files

The last item is to remove all compressed Syslog files stored under /var/log.  These usually consume the most disk space and as such, I will be authoring an article shortly to explain how one can tune logrotate and even forward these messages to a Syslog aggregator.

UPDATE:  As a word of advice, we are only looking to clear "*.gz" (compressed/archived) log files.  Once these are deleted, they are gone.  Naturally this means a server status report gathered for collection will lack historical information, so one may consider copying these off to another host (using scp or WinSCP) before following the next steps to remove them under a full-root-disk scenario.
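If you do want to keep a copy, something along these lines works before deleting the archives.  The destination host and path are, of course, placeholders:

# copy the archived logs elsewhere before removing them
scp /var/log/*.gz admin@logkeeper.example.com:/srv/xenserver-logs/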

In the meantime, just as before, one can execute the following command to keep current syslog files intact but remove old, compressed log files:

cd /var/log/
rm -rf *gz

So For Now...

At this point one has a tool to know when a disk is nearing capacity and methods with which to clean up specific items.  The admin can run these in an automated or a manual fashion; it is truly up to the admin's style of work.

Please be on the lookout for my next article involving Syslog forwarding, log rotation, and so forth, as this will help a XenServer deployment of any size: especially where regulations for log retention are a strict requirement.

Feel free to post any questions, suggestions, or methods you may even use to ensure XenServer's root disk does not fill up.

 

--jkbs | @xenfomation

 

 


XenServer Creedence World Tour kicks off


On September 11th, the XenServer Creedence World Tour kicked off at FOSSETCON in Orlando with a three hour live Master Class covering core XenServer functions, as well as more details on the composition of Creedence. The 2014 World Tour will see different Creedence content presented at a number of events spanning sixteen cities. Following today's Master Class will be the Xen Project User Summit in New York on September 15th, followed by a quick trip to the Cambridge UK Citrix offices for some deep dive time with the engineers working on Creedence. At the Xen Project User Summit, not only will details of Creedence be presented, but some of the post-Creedence priority work will also be presented.

This world tour is designed to get the word out about just how cool XenServer technology is, and that what you knew about XenServer virtualization from prior versions might not be correct any longer. In fact, our slogan for the tour is "XenServer Creedence - Rocking the world of virtualization, powering your organization for free.".  If you happen to see people running around with a XenServer Creedence t-shirt, you'll know they've been at one of our events.  You'll find tour stops at some of the more likely events such as CloudOpen in Dusseldorf, but also some of the less obvious venues like RICON in Las Vegas or LISA 2014 in Seattle. Regardless of the event, you'll find people with XenServer expertise ready to talk about how XenServer might rationally fit into your data center.

Now I recognize the majority of my followers are likely already quite familiar with XenServer and its capabilities, and that most are unlikely to show up at an event because it has Creedence content. Many of you have probably taken one of the pre-release Creedence builds and are both actively testing it and providing valuable feedback. To you I make a request; if you are active in your local technology community and would like to be involved in the world tour, please let me know. If you would like to provide quotes, observations or even testimonials about how you feel about the changes in XenServer, I'd love to hear them. The two best ways to provide feedback are either via direct message on Twitter (@XenServerArmy for those who didn't know), or via comment on this blog. I'm one of the moderators for comments on xenserver.org, so if you want to use the comment mechanism as a direct message medium, please feel free to do so. If you want your comment to stay private, just say so, and if you don't I'll use my best judgment.     


Post-Creedence details to be presented at Xen Project User Summit

Last month I posted seeking feedback from you, our community, on what the post-Creedence world should look like. The response was impressive, and we've started incorporating what you want in a virtualization platform into our plans for the next release of XenServer, occurring after Creedence. While it's a bit early to divulge those details, I plan, as part of my session on Creedence at the Xen Project User Summit, to give you a roadmap for what to expect, what the code name will be for the project, and how you can help move the project forward. There will also be a few other surprises at the event for XenServer attendees, so if you are able to be in New York City on September 15th please do try and join us. Not only will you be able to see what the new XenServer has to offer, but you'll see what the core hypervisor community is up to and potentially push them to help deliver features you feel valuable.     


Pushing XenServer limits with Creedence beta.2

Well folks, it's that time once again; we've another XenServer build ripe for your inspection, and this one is a critical one for a number of reasons. Today we've released XenServer Creedence beta.2, and this is binary compatible with a Citrix Tech Preview refresh. The build number is 87850 and it presents itself to the outside world as 6.4.95. Over the past few announcements I've hinted at pushing the boundaries of XenServer and wanting the community at large to "have at it", but I've not put out too many details on the overall performance we're seeing internally. The most important attribute of this build is that, internally, it's going to form part of a series of long term stability tests. Yes folks, we're that confident in what we're seeing and I wanted to thank everyone who has participated in our pre-release activities by sharing a few performance tidbits:

  • Creedence can start and run 1000 PV VMs with only 8GB dom0 memory. That's up from the 650 we have in XenServer 6.2.
  • Booting 125 Windows 7 VMs on a single host takes only 350 seconds in a bootstorm scenario. That's down from 850 seconds in XenServer 6.2
  • Aggregate disk throughput has been measured to improve by as much as 100% when compared to XenServer 6.2
  • Aggregate intrahost network throughput has been measured to improve by as much as 200% when compared to XenServer 6.2
  • The number of virtual disks per host has been raised by a factor of four when compared to XenServer 6.2

When compared to beta.1, the team has been looking at a number of performance and scalability system aspects, with a primary focus on dom0 idle state behavior at scale. This is a very important aspect of system operation as overall system responsiveness is directly tied to the overhead of managing a large number of VMs. We did see two distinct areas for investigation, and are inviting the community to look into these and provide us with others. Those two areas are:

  • When using 40Gb NICs outbound (transmit) performance is below expectations. We have some internal fixes, but are encouraging anyone with such NICs to test and report their findings
  • When large numbers of hosts are pooled we're seeing VM start times appear to slow unexpectedly under large pool VM densities.

 

As always we're actively encouraging you to test the beta and provide your feedback (both positive and negative) in an incident report. You can download beta.2 from here: http://xenserver.org/component/content/article/11-product/142-download-pre-release.html, and enter your feedback at https://bugs.xenserver.org.     


XenServer and VMworld

Next week the world of server virtualization and cloud will turn its attention to the Moscone Center in San Francisco and VMworld 2014 to see what VMware has planned for its offerings in 2015. As the leader in closed source virtualization refines its "No Limits" message, I wish my friends, and former colleagues, now at VMware a very successful event. If you're attending VMworld, I also wish you a successful event, and hope that you'll find in VMware what you're looking for. I personally won't be at VMworld this year, and while I'll miss opportunities to see what VMware has planned to push vSphere forward, how VMware NSX for multi-hypervisors is evolving, and whether they're expanding support for XenServer in vCloud Automation Center, I'll be working hard ensuring that XenServer Creedence delivers clear value to its community. Of course, I'll probably have a live stream of the keynotes; but that's not quite the same ;)

 

If you're attending VMworld and have an interest in seeing an open source choice in a VMware environment, I hope you'll take the time to ask the various vendors about XenServer; and most importantly to encourage VMware to continue supporting XenServer in some of its strategic products. No one solution can ever hope to satisfy everyone's needs and choice is an important thing. So while you're benefiting from the efforts VMware has put into informing and supporting their community, I hope they realize that with choice everyone is stronger, and embracing other communities only benefits the user.     


XenServer Creedence Reaches Beta Stage

In May we announced to the world that the next version of XenServer would be code named Creedence, and officially opened a public alpha program around this new version of XenServer. That alpha program was more successful than I'd expected with well over a thousand participants. Drawing these participants to Creedence was a combination of enthusiasm for XenServer as a virtualization platform, and a desire to see us make significant improvements in that platform. What greeted them was a full platform refresh including a 64 bit dom0, modern Linux kernel and the most recent Xen Project hypervisor. Over the following weeks we invited this enthusiast community to test three additional builds, each with increasing capabilities and performance.

As the audience for the XenServer Creedence message increased, we heard from some that they wanted to wait until we exited alpha mode and entered a beta phase. I'm happy to report that we've done just that with the release on August 5th, 2014 of beta.1 for XenServer Creedence. This means we're largely feature complete, and are looking seriously at the overall performance and stability of the platform. It also means we want you to start stressing the platform with your real-world workloads. Everything is fair game at this point, and we want to know where the breaking points are. Over the coming weeks you'll see blog posts covering some of the key performance and scalability improvements that we've achieved internally, but that's internally. Your experiences will vary, and we want to know about them.

We want you to push XenServer and tell us where your expectations weren't met. Here's how to do just that:

  1. Download beta.1 and install it on your favorite hardware. If it doesn't install, we want to know. If it didn't detect the devices you have, we want to know. Oddly enough we also want to know if it does!!
  2. Create any number of VMs using your preferred operating systems, or alternatively use your favorite provisioning solution and provision some VMs. If something goes wrong here, let us know.
  3. Install into those VMs the applications you care about. Again if something goes wrong here, let us know.
  4. Exercise your applications, verify if the performance you see is what you'd expect. If it isn't let us know.
  5. While your applications are running, test core virtualization functions like live migration, storage migration, high availability, etc. Everything is fair game. If something doesn't behave as you expect, let us know.
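
For example, once your workloads are running, a live migration between two hosts in the same pool can be kicked off from the CLI; a minimal sketch, assuming placeholder VM and host names that you would replace with your own:

# Live-migrate a running VM to another host in the same pool
xe vm-migrate vm=my-vm host=host2 live=true
# Confirm which host the VM is now resident on
xe vm-param-get uuid=<vm-uuid> param-name=resident-on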

How do you let us know of issues, you ask? Simply create a new account at https://bugs.xenserver.org and report an incident (or incidents). It's that simple. If you're looking to discuss at a deeper technical level why something might be behaving a certain way, or are seeking debugging help, then our developer list might be helpful. You can subscribe to it at: https://lists.xenserver.org/sympa/info/xs-devel, but please understand that the developers are working hard to build XenServer and aren't product support folks.

 

We want Creedence to be the best XenServer release ever, and with your input it can be.     

Recent Comments
Tim Mackey
James, Thanks for the words of encouragement. beta.1 *should* be able to successfully upgrade 6.2 SP1 pools, and if it doesn't w... Read More
Wednesday, 06 August 2014 17:30
Bob Martens
Are we able to import raw Xen images?
Thursday, 07 August 2014 00:19
Tobias Kreidl
No Ceph support in Creedence, unfortunately. Keep lobbying for it! See: http://xenserver.org/discuss-virtualization/virtualization... Read More
Wednesday, 13 August 2014 02:54
Continue reading
14919 Hits
13 Comments

2 weeks to Xen Project Developer Summit - What to expect and Why to attend

Only last week, the Xen Project team was at OSCON, where we launched Mirage OS 2.0 (event report to follow soon, but in the meantime check out the following sessions: Nymote and Mirage, FLOSS Weekly on Mirage OS, Hypervisor Selection in Apache CloudStack, and Community War Stories), and now our Developer Summit is just around the corner. As we have seen tremendous community growth in the last 12 months (>30%) and have the most feature-rich Xen Project Hypervisor release coming up soon, I thought I'd share what you can expect.

xpds14

What to expect?

Xen Project Developer Summits are packed with highly technical content where the core developers of the Xen Project community come together to discuss the evolution of the Xen Project. The conference is a mixture of talks and interactive sessions in un-conference format (which we call BoFs). Newcomers, and those interested in the progress and future of the Xen Project and its sub-projects (the Hypervisor on ARM and x86, upstreams and downstreams, embedded and automotive variants, and cloud operating systems such as Mirage OS), will usually get tremendous value from attending the event. Besides roadmap, feature updates and developer topics, this year features a few themes:

  • Network Function Virtualization
  • Security
  • Performance and Scalability
  • Cloud Operating Systems
  • Topics that are important for automotive/embedded/mobile use-cases, such as Real-time virtualization, certification and ARM support

Why not check out the agenda or watch last year's sessions to get a sense of what is coming? Note that BoFs and discussion groups will be published next week.

How to get the most out of the Summit?

Our developer events are designed to help you make connections and to participate. Good ways to network are our evening social event and the breaks between sessions. Another great way to get the most out of the summit is to submit a BoF/discussion group about a topic you care about, or to participate in one. BoF submissions are open until August 11 and the BoF schedule will be published the week before the event. Most of our talks will have an extensive and interactive Q&A portion, which is another way to engage.

Of course quite a few of the XenServer developers will be at the summit too. You may want to look out for them.


 

Continue reading
7621 Hits
0 Comments

Beyond Creedence - XenServer 2015 Planning

In a few weeks James Bulpin and I will be at the Xen Project Developers Summit in Chicago, and some of our discussions will be about the future of XenServer and, more importantly to the community, "What comes after Creedence?" With the Creedence alpha program we're seeing a level of community engagement which has honestly exceeded my expectations. I attribute this to the significant improvements in the platform, but also to the level of transparency we've had with respect to early access to pre-release builds.

While it was pretty obvious what we needed to do to make Creedence viable, your input is important to the future success of XenServer. With that in mind, we'd like to hear what platform improvements you'd find most valuable. When I speak of platform improvements, I'm thinking of things like storage, networking, core virtualization, performance, scalability and operating system support. I'm not thinking of things which can be classified as data center or virtualization management, so things like network management, disaster recovery, or virtual machine provisioning are out of scope. Based on the blog comments for the various alpha announcements, we already know that CentOS 7 dom0, NFS4 and Ceph are on your wish lists, but what else?

Internally we use a "How would you spend $100?" model to prioritize changes, and if you were interested in providing feedback following that model, it would be ideal. If you've never used this model before, it's pretty simple. Write down the things you'd want to see (optionally with a "why" beside them), and then give yourself a budget of $100. Spend the $100 by allocating it across your desired functionality; anything with a zero is removed. This has the benefit of focusing on the high value changes without worrying about complexity. If you'd like to provide input, please do so in the comments section below, and let's see what the future of XenServer in 2015 looks like from your perspective.

Recent Comments
David Reade
Could Changed Block Tracking (CBT) be considered as a feature please to speed up incremental backups? We use Unitrends to perform ... Read More
Wednesday, 30 July 2014 17:35
Keith Walker
CBT, the entire $100.
Tuesday, 31 March 2015 18:32
Andrew
$80 for CBT, $20 for online disk expansion.
Wednesday, 20 May 2015 13:12
Continue reading
56594 Hits
103 Comments

In-memory read caching for XenServer

Overview

In this blog post, I introduce in-memory read caching, a new feature of XenServer Creedence alpha.4, and cover the technical details, the benefits it can provide, and how best to use it.

Technical Details

A common way of using XenServer is to have an OS image, which I will call the golden image, and many clones of this image, which I will call leaf images. XenServer implements cheap clones by linking images together in the form of a tree. When the VM accesses a sector on the disk, if that sector has been written into the leaf image, the data is retrieved from that image. Otherwise, the tree is traversed and the data is retrieved from a parent image (in this case, the golden image). All writes go into the leaf image. Astute readers will notice that no writes ever hit the golden image. This has an important implication and allows read caching to be implemented.

tree.png
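
If you are curious to see this chain on your own host, vhd-util in dom0 can walk it; a quick sketch, assuming a file-based (EXT or NFS) SR and the usual SR mount point, both of which you should verify on your own installation:

# Print the parent (for example, the golden image) of a leaf VHD
vhd-util query -n /var/run/sr-mount/<sr-uuid>/<leaf-vdi-uuid>.vhd -p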

tapdisk is the storage component in dom0 which handles requests from VMs (see here for many more details). For safety reasons, tapdisk opens the underlying VHD files with the O_DIRECT flag. The O_DIRECT flag ensures that dom0's page cache is never used; i.e. all reads come directly from disk and all writes wait until the data has hit the disk (at least as far as the operating system can tell, the data may still be in a hardware buffer). This allows XenServer to be robust in the face of power failures or crashes. Picture a situation where a user saves a photo and the VM flushes the data to its virtual disk which tapdisk handles and writes to the physical disk. If this write goes into the page cache as a dirty page and then a power failure occurs, the contract between tapdisk and the VM is broken since data has been lost. Using the O_DIRECT flag allows this situation to be avoided and means that once tapdisk has handled a write for a VM, the data is actually on disk.
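
To get a feel for what O_DIRECT changes, you can reproduce the effect from any Linux shell with dd, which exposes the same flag; this is purely an illustration on a scratch file, not the actual tapdisk code path:

# Buffered reads: the second run is served from the page cache and is much faster
dd if=/path/to/scratch.img of=/dev/null bs=1M
dd if=/path/to/scratch.img of=/dev/null bs=1M
# Direct reads: iflag=direct bypasses the page cache, so every run hits the disk
dd if=/path/to/scratch.img of=/dev/null bs=1M iflag=direct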

Because no data is ever written to the golden image, we don't need to maintain the safety property mentioned previously. For this reason, tapdisk can elide the O_DIRECT flag when opening a read-only image. This allows the operating system's page cache to be used which can improve performance in a number of ways:

  • The number of physical disk I/O operations is reduced (as a direct consequence of using a cache).
  • Latency is improved since the data path is shorter if data does not need to be read from disk.
  • Throughput is improved since the disk bottleneck is removed.

One of our goals for this feature was that it should have no drawbacks when enabled. An effect which we noticed initially was that data appeared to be read twice from disk, which increases the number of I/O operations in the case where data is only read once by the VM. After a little debugging, we found that disabling O_DIRECT causes the kernel to automatically turn on readahead. Because the data access pattern of a VM's disk tends to be quite random, this had a detrimental effect on the overall number of read operations. To fix this, we made use of a POSIX feature, posix_fadvise, which allows an application to inform the kernel how it plans to use a file. In this case, tapdisk tells the kernel that access will be random using the POSIX_FADV_RANDOM flag. The kernel responds to this by disabling readahead, and the number of read operations drops to the expected value (the same as when O_DIRECT is enabled).

Administration

Because of difficulties maintaining cache consistency across multiple hosts in a pool for storage operations, read caching can only be used with file-based SRs; i.e. EXT and NFS SRs. For these SRs, it is enabled by default. There shouldn't be any performance problems associated with this; however, if necessary, it is possible to disable read caching for an SR:

xe sr-param-set uuid=<UUID> other-config:o_direct=true
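
Should you later want to re-enable read caching, a sketch (assuming the key was set as above) is to remove the override and then inspect the SR's other-config map to confirm:

xe sr-param-remove uuid=<UUID> param-name=other-config param-key=o_direct
xe sr-param-get uuid=<UUID> param-name=other-config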

You may wonder how read caching differs from IntelliCache. The major difference is that IntelliCache works by caching reads from the network onto a local disk, while in-memory read caching caches reads from either source into memory. The advantage of in-memory read caching is that memory is still an order of magnitude faster than an SSD, so performance in bootstorms and other heavy I/O situations should be improved. It is possible for both to be enabled simultaneously; in this case reads from the network are cached by IntelliCache to a local disk, and reads from that local disk are cached in memory with read caching. It is still advantageous to have IntelliCache turned on in this situation because the amount of available memory in dom0 may not be enough to cache the entire working set, and reading the remainder from local storage is quicker than reading over the network. IntelliCache further reduces the load on shared storage when using VMs with disks that are not persistent across reboots by only writing to the local disk, not the shared storage.

Talking of available memory, XenServer admins should note that to make best use of read caching, the amount of dom0 memory may need to be increased. Ideally the amount of dom0 memory would be increased to the size of the golden image so that once cached, no more reads hit the disk. In case this is not possible, an approach to take would be to temporarily increase the amount of dom0 memory to the size of the golden image, boot up a VM and open the various applications typically used, determine how much dom0 memory is still free, and then reduce dom0's memory by this amount.
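
As a sketch of how that tuning might look, the commands below follow the XenServer 6.x convention for setting dom0 memory; the 4096M figure is purely an example, and both the value and the exact mechanism should be checked against the documentation for your release:

# In dom0: see how much memory is currently free
free -m
# Assign dom0 a fixed 4 GiB at the next boot, then reboot the host
/opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=4096M,max:4096M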

Performance Evaluation

Enough talk, let's see some graphs!

reads.png

In this first graph, we look at the number of bytes read over the network when booting a number of VMs on an NFS SR in parallel. Notice how, without read caching, the number of bytes read scales proportionately with the number of VMs booted, which checks out since each VM's reads go directly to the disk. When O_DIRECT is removed, the number of bytes read remains constant regardless of the number of VMs booted in parallel. Clearly the in-memory caching is working!

time.png

How does this translate to improvements in boot time? The short answer: see the graph! The longer answer is that it depends on many factors. In the graph, we can see that there is little difference in boot time when booting fewer than 4 VMs in parallel, because the NFS server is able to handle that much traffic concurrently. As the number of VMs increases, the NFS server becomes saturated and the difference in boot time becomes dramatic. It is clear that for this setup, booting many VMs is I/O-limited, so read caching makes a big difference. Finally, you may wonder why the boot time per VM increases slowly as the number of VMs increases when read caching is enabled. Since the disk is no longer a bottleneck, it appears that some other bottleneck has been revealed, probably CPU contention. In other words, we have transformed an I/O-limited bootstorm into a CPU-limited one! This improvement in boot times would be particularly useful for VDI deployments where booting many instances of the same VM is a frequent occurrence.

Conclusions

In this blog post, we've seen that in-memory read caching can improve performance in read I/O-limited situations substantially without requiring new hardware, compromising reliability, or requiring much in the way of administration.

As future work to improve in-memory read caching further, we'd like to remove the limitation that it can only use dom0's memory. Instead, we'd like to be able to use the host's entire free memory. This is far more flexible than the current implementation and would remove any need to tweak dom0's memory.

Credits

Thanks to Felipe Franciosi, Damir Derd, Thanos Makatos and Jonathan Davies for feedback and reviews.

Recent Comments
Tim Mackey
I'm not familiar with ZFS, but XenServer has had an shared storage cache called IntelliCache. It's designed for use in highly tem... Read More
Monday, 28 July 2014 02:19
Tobias Kreidl
Ross, Nice article! This cache is definitely going to help but as you pointed out, at some point, the size of the golden image wil... Read More
Tuesday, 29 July 2014 05:12
Tobias Kreidl
Apparently I hit a sore spot with you, "whatever"... I never said Nexenta was the best or most innovative solution out there, but... Read More
Thursday, 31 July 2014 04:28
Continue reading
40264 Hits
7 Comments

Running Scientific Linux Guest VMs on XenServer

Running Scientific Linux Guest VMs on XenServer

What is Scientific Linux?

In short, Scientific Linux is a customized Red Hat/CentOS-based Linux distribution provided by CERN and Fermilab, popular in educational institutions as well as laboratory environments.  More can be read about Scientific Linux here: https://www.scientificlinux.org/

From my own long-term testing - before XenServer 6.2 and our pre-release/Alpha, Creedence - I have run both Scientific Linux 5 and Scientific Linux 6 without issues.  The scope of this article is to show how one can install Scientific Linux and, more specifically, ensure the XenTools Guest Additions for Linux are installed, as these do not require any form of "Xen-ified" kernel.

XenServer and Creedence

The following are my own recommendations to run Scientific Linux in XenServer:

  1. I recommend using XenServer 6.1 or later, up through any of the Creedence Alpha releases, due to improvements in XenTools
  2. I recommend using Scientific Linux 5 or Scientific Linux 6
  3. The XenServer VM template to use will be either CentOS 5 or CentOS 6; choose 32-bit or 64-bit depending on the release of Scientific Linux you will be using

One will also require a URL to install Scientific Linux from its repository, found at http://ftp.scientificlinux.org/linux/scientific/

The following are URLs I recommend for use during the Guest Installation process (discussed later):

Scientific Linux 5 or 6 Guest VM Installation

With XenCenter, the process of installing Scientific Linux 5.x or Scientific Linux 6 uses the same principles.  You need to create a new VM, select the appropriate CentOS template, and define the VM parameters for disk, RAM, and networking:

1.  In XenCenter, select "New VM":

2.  When prompted for the new VM Template, select the appropriate CentOS-based template (5 or 6, 32 or 64 bit):

3.  Follow the wizard to add processors, disk, and networking information

4.  From the console, follow the steps to install Scientific Linux 5 or 6 based on your preferences.

5.  After rebooting, log in as root and execute the following command within the Guest VM:

yum update

6.  Once yum has applied any updates, reboot the Scientific Linux 5 or 6 Guest VM by executing the following within the Guest VM:

reboot

7.  With the Guest VM back up, log in as root and insert xs-tools.iso into the VM's virtual DVD drive from within XenCenter:
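
If you prefer the CLI to XenCenter for this step, the tools ISO can also be inserted into the guest's virtual DVD drive from dom0; a sketch, assuming the default ISO name:

# Insert the XenServer tools ISO into the guest's CD/DVD drive
xe vm-cd-insert cd-name=xs-tools.iso uuid=<vm-uuid>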

8.  From the command line inside the Guest VM, execute the following commands to mount xs-tools.iso and run the install.sh utility:

cd ~
mkdir tools
mount /dev/xvdd tools/
cd tools/Linux/
./install.sh

9.  With Scientific Linux 5 you will be prompted to install the XenTools Guest Additions - select yes and, when complete, reboot the VM:

reboot

10.  With Scientific Linux 6 you will notice the following output:

Fatal Error: Failed to determine Linux distribution and version.

11.  This is not actually a fatal error; it occurs because the distribution's build and revision are not presented as the installer expects. This means that you will need to manually install the XenTools Guest Additions by executing the following commands and rebooting:

rpm -ivh xe-guest-utilities-xenstore-<version number here>.x86_64.rpm
rpm -ivh xe-guest-utilities-<version number here>.x86_64.rpm
reboot

Finally, after the last reboot (post guest-additions install), one will notice from XenCenter that the network address, stats, and so forth are available (including the ability to migrate the VM):
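
The same details can be checked from the CLI if you prefer; a quick sketch using standard VM parameters:

# Guest IP addresses as reported by the guest additions
xe vm-param-get uuid=<vm-uuid> param-name=networks
# Version of the PV drivers/tools running in the guest
xe vm-param-get uuid=<vm-uuid> param-name=PV-drivers-version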

 

I hope this article helps any of you out there and feedback is always welcomed!

--jkbs

@xenfomation

 

Recent Comments
Terry Wang
Running PV on HVM (also called PVHVM sometimes) is just fine. For modern Linux distros with Linux 3.0+ kernel (it'll unplug the QE... Read More
Monday, 28 July 2014 03:56
JK Benedict
Stay tuned! I have more to offer for Creedence... especially in lieu of Mr. Mackey's request from the following article @ http://... Read More
Saturday, 27 September 2014 09:03
Ian Yates
Hi, I'm new to this community but independently worked out a (pretty much identical) install routine for ScientificLinux on Xen so... Read More
Wednesday, 30 July 2014 10:24
Continue reading
20852 Hits
3 Comments

Off to OSCON ....

This week is OSCON, and I'm looking forward to my first year there as the official community manager for XenServer. In fact, it was at OSCON 2013 that I tentatively accepted the position and transitioned from a purely commercial Citrix CloudPlatform and XenServer focus to one where the platform, users and install-base matter most. It's been an interesting year and while we've not accomplished everything I'd have liked, we've made some significant strides forward. The most significant of which has to be the platform refresh, performance improvements and the alpha program we're currently running. So whether you like XenServer, think it might be cool, are curious as to why you should care, have used it in the past, or would use it if it only was a bit better, do try and find me and voice your opinion. In addition to learning from others, I'll be at the Open@Citrix Open Cloud Lounge, at the various Open@Citrix activities, presenting on hypervisor selection in Apache CloudStack on Wednesday, and of course if you want to hit me up on twitter as @XenServerArmy and grab some time, please do.     

Recent comment in this post
JK Benedict
Kudos, Tim! I look forward to living through you vicariously and thank you for representing our product, dear sir! --jkbs | @xen... Read More
Thursday, 23 July 2015 09:47
Continue reading
7652 Hits
1 Comment

About XenServer

XenServer is the leading open source virtualization platform, powered by the Xen Project hypervisor and the XAPI toolstack. It is used in the world's largest clouds and enterprises.
 
Technical support for XenServer is available from Citrix.