Virtualization Blog

Discussions and observations on virtualization.

Better Together: PVS and XenServer!

XenServer adds new functionality to further simplify and enhance the secure and on-demand delivery of applications and desktops to enterprise environments.

If you haven't visited the Citrix blogs recently, we encourage you to visit https://www.citrix.com/blogs/2016/10/31/pvs-and-xenserver-better-together/ to read about the latest integration efforts between PVS and XS.

If you're a Citrix customer, this article is a must read!

Andy Melmed, Senior Solutions Architect, XenServer PM

 


XenServer Ely Alpha 1 Available

Hear ye, hear ye… we are pleased to announce that an alpha release of XenServer Project Ely is now available for download! After Dundee (7.0), we've come a little closer to Cambridge (the birthplace of Xen) for our codename, as the city of Ely is just up the road.

 
Since releasing version 7.0 in May, the XenServer engineering team has been working fervently to prepare the platform with the latest innovations in server virtualization technology. As a precursor, a pre-release containing the prerequisites for enabling a number of powerful (and really cool!) new features has been made available for download from the pre-release page.
 

What's In it?

 

The following is a brief description of some of the feature-prerequisites included in this pre-release:

 

Xen 4.7:  This release of Xen adds support for "live-patching" of the Xen hypervisor, allowing issues to be patched without requiring a host reboot. In this alpha release there is no functionality for you to test in this area, but we thought it was worth telling you about nonetheless. Xen 4.7 also includes various performance improvements, and updates to the virtual machine introspection code (surfaced in XenServer as Direct Inspect).

 

Kernel 4.4: The Dom0 kernel has been updated to 4.4 in preparation for future features. All device drivers are at their upstream versions; we'll be updating these with drops direct from the hardware vendors as we go through the development cycle.

 

VM import/export performance: improving VM import/export speed has been a longstanding request from our user community, and in Ely alpha 1 these operations now average twice the speed of the previous version.

 

What We'd Like Help With

 

The purpose of this alpha release is really to make sure that a variety of hardware works with Project Ely. Because we've updated core platform components (Xen and the Dom0 kernel), it's important to verify hardware that we don't have in our QA labs. So the more people who can download this build, install it, and run a couple of VMs to confirm everything works, the better.
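If you're able to help, a quick smoke test from the host console might look something like this (a sketch; adjust names and parameters to your environment):

# Basic host health and version check
xe host-list params=name-label,software-version,enabled

# Confirm storage repositories and NICs were detected
xe sr-list params=name-label,type
xe pif-list params=device,IP,MAC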

 

Additionally, we've been working with the community (over on XSO-445) on improving VM import/export performance: we'd like to see whether the improvements we've seen in our tests are what you see too. If they're not, we can figure out why and fix it :-).
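If you'd like to share numbers on XSO-445, a simple way to time an export/import round trip from the host console is shown below (a hedged example; the VM must be shut down first, and the paths are placeholders):

# Time exporting a halted test VM to an XVA file
time xe vm-export vm=<test VM name or UUID> filename=/mnt/backup/test.xva

# Time re-importing the same image
time xe vm-import filename=/mnt/backup/test.xva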

 

Upgrading

 

This is pre-release software, not for production use. Upgrades from XenServer 7.0 should work fine, but it goes without saying that you should ensure you back up any critical data.
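If you'd like a quick safety net before upgrading a test host, the pool database can be dumped from the CLI (a sketch; the file path is just an example):

# Back up the pool database so VM metadata can be restored if needed
xe pool-dump-database file-name=/root/pool-backup.db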

 

Reporting Bugs

 

We encourage visitors to download the pre-release and provide us with your feedback. If you do find a problem, please head over to the bug tracker and file a ticket. Please be sure to include a server status report!
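If you're working from the host console rather than XenCenter, the status report can be generated with xen-bugtool (the tool shipped with recent XenServer releases; check the alpha's documentation if it differs):

# Run on the XenServer host; answers "yes" to all capture prompts
xen-bugtool --yestoall
# The tool prints the path of the resulting archive - attach that to your ticket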

 

Now that we've moved on to a new pre-release project, we'll be removing the XS 6.5 SP1 fix version from the bug tracker to keep it tidy. You'll see an "Ely alpha" affects version is available instead.

 

What Next?

 

Stay tuned for another pre-release build in the near future: as you may have heard, we've been keeping busy!

 
As always, we look forward to working with the XenServer community to make the next major release of XenServer the best version ever!

 

Cheers!

 

Andy M.

Senior Solutions Architect - XenServer PM

 


Running XenServer... without a server

With the exciting release of the latest XenServer Dundee beta, the immediate reaction is to download it to give it a whirl to see all the shiny new features (and maybe to find out if your favourite bug has been fixed!). Unfortunately, it's not something that can just be installed, tested and uninstalled like a normal application - you'll need to find yourself a server somewhere you're willing to sacrifice in order to try it out. Unless, of course, you decide to use the power of virtualisation!

XenServer as a VM

Nested virtualisation - running a VM inside another VM - is not something that anyone recommends for production use, or even something that works at all in some cases. However, since Xen has its origins way back before hardware virtualisation became ubiquitous in Intel processors, running full PV guests (which don't require any hardware extensions) when XenServer is itself running as a VM actually works very well indeed. So for the purposes of evaluating a new release of XenServer it's actually a really good solution. It's also ideal for trying out many of the Unikernel implementations, such as Mirage or Rump kernels, as these are pure PV guests too.

XenServer works very nicely when run on another XenServer, and indeed this is what we use extensively to develop and test our own software. But once again, not everyone has spare capacity to do this. So let's look to some other virtualisation solutions that aren't quite so server focused and that you might well have installed on your own laptop. Enter Oracle's VirtualBox.

VirtualBox, while not as performant a virtualisation solution as Xen, is a very capable platform that runs XenServer without any problems. It also has the advantage of being easily installable on your own desktop or laptop, which makes it an ideal way to try out these XenServer betas quickly and conveniently. Several convenient tools have been built around it, one of which is Vagrant.

Vagrant

Vagrant is a tool for provisioning and managing virtual machines. It targets several virtualization platforms including VirtualBox, which is what we'll use now to install our XenServer VM. The model is that it takes a pre-installed VM image - what Vagrant calls a 'box' - and some provisioning scripts (shell scripts, Salt, Chef, Ansible or others), and sets up the VM in a reproducible way. One of its key benefits is that it simplifies management of these boxes, and HashiCorp runs a service called Atlas that will host your boxes and the metadata associated with them. We have used this service to publish a Vagrant box for the Dundee Beta.

Try the Dundee Beta

Once you have Vagrant installed, trying the Dundee beta is as simple as:

vagrant init xenserver/dundee-beta
vagrant up

This will download the box image (about 1 GB) and create a new VM from it. As it boots, Vagrant will ask which network to bridge onto; if you want your nested VMs to be reachable on the network, choose a wired interface rather than a wireless one.

The XenServer image is tweaked a little bit to make it easier to access - for example, it will by default use DHCP on all of the interfaces, which is useful for testing XenServer but wouldn't be advisable for a real deployment. To connect to your XenServer we need to find its IP address, and the simplest way of doing this is to ssh in and ask:

Mac-mini:xenserver jon$ vagrant ssh -c "sudo xe pif-list params=IP,device"
device ( RO)    : eth1
        IP ( RO): 192.168.1.102

device ( RO)    : eth2
        IP ( RO): 172.28.128.5

device ( RO)    : eth0
        IP ( RO): 10.0.2.15

So you should be able to connect using one of those IPs via XenCenter or via a browser to download XenCenter (or via any other interface to XenServer).
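Alternatively, if you have the standalone xe CLI installed on your workstation, you can point it at one of those addresses directly (a hedged example; substitute the IP and password for your own setup):

# Query the nested XenServer remotely (IP taken from the pif-list output above)
xe -s 192.168.1.102 -u root -pw <password> vm-list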

Going Deeper

Let's now go all Inception and install ourselves a VM within our XenServer VM. Let's assume, for the sake of argument, and because as I'm writing this it's quite true, that we're not running on a Windows machine, nor do we have one handy to run XenCenter on. We'll therefore restrict ourselves to using the CLI.

As mentioned before, HVM VMs are out so we're limited to pure PV guests. Debian Wheezy is a good example of one of these. First, we need to ssh in and become root:

Mac-mini:xenserver jon$ vagrant ssh
Last login: Thu Mar 31 00:10:29 2016 from 10.0.2.2
[vagrant@localhost ~]$ sudo bash
[root@localhost vagrant]#

Now we need to find the right template:

[root@localhost vagrant]# xe template-list name-label="Debian Wheezy 7.0 (64-bit)"
uuid ( RO)                : 429c75ea-a183-a0c0-fc70-810f28b05b5a
          name-label ( RW): Debian Wheezy 7.0 (64-bit)
    name-description ( RW): Template that allows VM installation from Xen-aware Debian-based distros. To use this template from the CLI, install your VM using vm-install, then set other-config-install-repository to the path to your network repository, e.g. http:///

Now, as the description says, we use 'vm-install' and set the mirror:

[root@localhost vagrant]# xe vm-install template-uuid=429c75ea-a183-a0c0-fc70-810f28b05b5a new-name-label=wheezy
479f228b-c502-a791-85f2-a89a9f58e17f
[root@localhost vagrant]# xe vm-param-set uuid=479f228b-c502-a791-85f2-a89a9f58e17f other-config:install-repository=http://ftp.uk.debian.org/debian

The VM doesn't have any network connection yet, so we'll need to add a VIF. We saw the IP addresses of the network interfaces above, and in my case eth1 corresponds to the bridged network I selected when starting the XenServer VM using Vagrant. I need the uuid of that network, so I'll list the networks:

[root@localhost vagrant]# xe network-list
uuid ( RO)                : c7ba748c-298b-20dc-6922-62e6a6645648
          name-label ( RW): Pool-wide network associated with eth2
    name-description ( RW):
              bridge ( RO): xenbr2

uuid ( RO)                : f260c169-20c3-2e20-d70c-40991d57e9fb 
          name-label ( RW): Pool-wide network associated with eth1  
    name-description ( RW): 
              bridge ( RO): xenbr1 

uuid ( RO)                : 8d57e2f3-08aa-408f-caf4-699b18a15532 
          name-label ( RW): Host internal management network 
    name-description ( RW): Network on which guests will be assigned a private link-local IP address which can be used to talk XenAPI 
              bridge ( RO): xenapi 

uuid ( RO)                : 681a1dc8-f726-258a-eb42-e1728c44df30 
          name-label ( RW): Pool-wide network associated with eth0 
    name-description ( RW):
              bridge ( RO): xenbr0

So I need a VIF on the network with uuid f260c...

[root@localhost vagrant]# xe vif-create vm-uuid=479f228b-c502-a791-85f2-a89a9f58e17f network-uuid=f260c169-20c3-2e20-d70c-40991d57e9fb device=0
e96b794e-fef3-5c2b-8803-2860d8c2c858

All set! Let's start the VM and connect to the console:

[root@localhost vagrant]# xe vm-start uuid=479f228b-c502-a791-85f2-a89a9f58e17f
[root@localhost vagrant]# xe console uuid=479f228b-c502-a791-85f2-a89a9f58e17f

This should drop us into the Debian installer:

[Screenshot: the Debian installer running in the VM console]

A few keystrokes later and we've got ourselves a nice new VM all set up and ready to go.

All of the usual operations will work: start, shutdown, reboot, suspend, checkpoint and even, if you want to set up two XenServer VMs, migration and storage migration. You can experiment with bonding, try multipathed iSCSI, check that alerts are generated, and almost anything else (with the exception of HVM and anything hardware-specific such as vGPUs, of course!). It's also an ideal companion to the Docker build environment I blogged about previously, as any new things you might be experimenting with can be easily built using Docker and tested using Vagrant. If anything goes wrong, a 'vagrant destroy' followed by a 'vagrant up' and you've got a completely fresh XenServer install to try again in less than a minute.
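For reference, that reset cycle looks like this:

vagrant destroy -f   # discard the current XenServer VM without prompting
vagrant up           # build a fresh one from the same box
vagrant status       # confirm the new VM is running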

The Vagrant box is itself created using Packer, a HashiCorp tool for building machine images. The configuration for this is available on GitHub, and feedback on this box is very welcome!

In a future blog post, I'll be discussing how to use Vagrant to manage XenServer VMs.


Creedence launches as XenServer 6.5

Today the entire XenServer team is very proud to announce that Creedence has officially been released as XenServer 6.5. It is available for download from xenserver.org, and is recommended for all new XenServer installs. We're so confident in what has been produced that I'm encouraging all XenServer 6.2 users to upgrade at their earliest convenience. So what have we actually accomplished?

The headline features

Every product release I've ever done, and there have been quite a large number over the years, has had some headline features; but Creedence is a bit different. Creedence wasn't about new features, and Creedence wasn't about chasing some perceived competitor. Creedence very much was about getting the details right for XenServer. It was about creating a very solid platform upon which anyone can comfortably, and successfully, build a virtualized data center regardless of workload. Creedence consisted of a lot of mundane improvements whose combination made for one seriously incredible outcome; Creedence restored the credibility of XenServer within the entire virtualization community. We even made up some t-shirts that the cool kids want ;)

So let's look at some of those mundane improvements, and see just how significant they really are.

  • 64 bit dom0 freed us from the limitations of dreaded Linux low memory, but also allows us to use modern drivers and work better with modern servers. From personal experience, when I took alpha.2 and installed it on some of my test Dell servers, it automatically detected my hardware RAID without my having to jump through any driver disk hoops. That was huge for me.
  • The move to a 3.10 kernel from kernel.org meant that we were out of the business of having a completely custom kernel and corresponding patch queue. Upstream is goodness.
  • The move to the Xen Project hypervisor 4.4 meant that we're now consuming the most stable version of the core hypervisor available to us.
  • We've updated to an Open vSwitch 2.1 virtual switch, giving us improved network stability when the virtual switch is under serious pressure. While we introduced the OVS way back in December of 2010, there remained cases where the legacy Linux bridge worked best. With Creedence, those situations should be very few and far between.
  • A thread-per-VIF model was introduced to better ensure network hogs don't impact adjacent VM performance.
  • Network datapath optimizations allow us to drive line rate for 10Gbps NICs, and we're doing pretty well with 40Gbps NICs.
  • Storage was improved through an update to tapdisk3, and the team did a fantastic job of engaging with the community to provide performance details. Overall we've seen very significant improvements in aggregate disk throughput, and when you're virtualizing it's the aggregate which matters more than the single VM case.

What this really means for you is that XenServer 6.5 has a ton more headroom than 6.2 ever did. If you happen to be on even older versions, you'll likely find that while 6.5 looks familiar, it's not quite like any other XenServer you've seen. As has been said multiple times in blog comments, and by multiple people, this is going to be the best release ever. In his blog, Steve Wilson has a few performance graphs to share for those doubters. 
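As a rough way of confirming these component versions on an upgraded host (a sketch; the exact output will vary by build):

uname -m                              # dom0 architecture: should now report x86_64
uname -r                              # dom0 kernel: 3.10.x
ovs-vsctl --version                   # Open vSwitch version
xe host-list params=software-version  # Xen, product and platform versions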

The future

While today we've officially released Creedence, much more work remains. There is a backlog of items we really want to accomplish, and you've already provided a pretty long list of features you'd like us to figure out how to deliver. The next project will be unveiled very soon, and you can count on having access to it early and being able to provide feedback just as the thousands of pre-release participants did for Creedence. Creedence is as much a success of the community as it is an engineering success.

Thank you to everyone involved. The hard work doesn't go unnoticed.     


VGA over Cirrus in XenServer 6.2

Achieve Higher Resolution and 32Bpp

For many reasons – not exclusive to XenServer – the Cirrus video driver has been a staple wherever a basic, largely hardware-agnostic video driver is needed.  When one creates a VM within XenServer (specifically 6.2 and previous versions) the Cirrus video driver is used by default for video...and it does the job.

I had been working on a project with my mentor related to an eccentric OS, but I needed a way to get more screen real estate to test a HID pointing device by increasing the screen resolution.  This led me to find that at some point in our upstream code there were platform (virtual machine metadata) options that allow one to "ditch" Cirrus and its 1024x768 resolution for higher resolutions and color depth via a standard VGA driver.

This is not tied into GPU pass-through, nor is it a hack.  It is a valuable way to achieve 32bpp color in Guest VMs with video support, as well as to obtain higher resolutions.

Windows 7: A Before and After Example

To show the difference between "default Cirrus" and the Standard VGA driver (which I will discuss how to switch to shortly), Windows 7 Enterprise had the following resolution to offer me with Cirrus:


After switching the same Guest VM to standard VGA and rebooting, I had the following resolution options within Windows 7 Enterprise:

Switching a Guest for VGA

After you create your VM – Windows, Linux, etc – perform the following steps to enable the VGA adapter:

 

  • Halt the Guest VM
  • From the command line, find the UUID of your VM:
 xe vm-list name-label="Name of your VM"
  • Taking the UUID value, run the following two commands:
 xe vm-param-set uuid=<UUID of your VM> platform:vga=std
 xe vm-param-set uuid=<UUID of your VM> platform:videoram=4
  • Finally, start your VM; you should now be able to achieve higher resolutions at 32bpp.

 

It is worth noting that the max amount of "videoram" that can be specified is 16 (megabytes).
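To confirm the settings took effect, you can read back the VM's platform map (the UUID is a placeholder):

 xe vm-param-get uuid=<UUID of your VM> param-name=platform
 # Expect to see vga: std and videoram: 4 among the keys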

Switching Back to Cirrus

If – for one reason or another – you want to reset/remove these settings and stick with the Cirrus driver, run the following commands:

 xe vm-param-remove uuid=<UUID of your VM> param-name=platform param-key=vga
 xe vm-param-remove uuid=<UUID of your VM> param-name=platform param-key=videoram

Again, reboot your Guest VM; with the VGA preference removed, the default Cirrus driver will be used.

What is the Catch?

There is no catch and no performance hit.  The VGA driver's "videoram" specification is carved out of the memory allocated to the Guest VM.  So, for example, if you have 4GB allocated to a Guest VM, at most 16 megabytes of that is used for video.  Needless to say, that is a pittance and does not impact performance.

Speaking of performance, my own personal tests were simple and repeated several times:

 

  • Utilize a tool that will remain anonymous
  • Use various operating systems with Cirrus and resolution at 1024 x 768
  • Run a 2D graphics test suite
  • Write down Product X, Y, or Z’s magic number that represents good or bad performance
  • Apply the changes to the VM to use VGA (keeping the resolution at 1024 x 768 for balance)
  • Run the same volley of 2D tests after a reboot
  • Write down Product X, Y or Z’s magic number that represents good or bad performance

 

In the end, I found a very minor but noticeable difference between Cirrus and VGA: Cirrus usually came in 10-40 points below VGA at the 1024 x 768 level.  Based on the test suite used, this is nothing spectacular, but it is certainly a benefit, as I found no degraded performance elsewhere on XenServer (other Guests), etc.

I hope this helps and as always: questions and comments are welcomed!

 

--jkbs | @xenfomation

 


XenServer and VMworld

Next week the world of server virtualization and cloud will turn its attention to the Moscone Center in San Francisco and VMworld 2014 to see what VMware has planned for its offerings in 2015. As the leader in closed source virtualization refines its "No Limits" message, I wish my friends, and former colleagues, now at VMware a very successful event. If you're attending VMworld, I also wish you a successful event, and hope that you'll find in VMware what you're looking for. I personally won't be at VMworld this year, and while I'll miss opportunities to see what VMware has planned to push vSphere forward, how VMware NSX for multi-hypervisors is evolving, and whether they're expanding support for XenServer in vCloud Automation Center, I'll be working hard ensuring that XenServer Creedence delivers clear value to its community. Of course, I'll probably have a live stream of the keynotes; but that's not quite the same ;)

 

If you're attending VMworld and have an interest in seeing an open source choice in a VMware environment, I hope you'll take the time to ask the various vendors about XenServer; and most importantly to encourage VMware to continue supporting XenServer in some of its strategic products. No one solution can ever hope to satisfy everyone's needs and choice is an important thing. So while you're benefiting from the efforts VMware has put into informing and supporting their community, I hope they realize that with choice everyone is stronger, and embracing other communities only benefits the user.     


Debian 7.4 and 7.6 Guest VMs

"Four Debians, Two XenServers"

The purpose of this article is to discuss my own success with virtualizing "four" releases of Debian (7.4/7.6; 32-bit/64-bit) in my own test labs.

For more information about Debian, head on over to Debian.org to download the 7.6 ISO of your choice (I used both the full DVD install ISO as well as the net install ISO).

Note: if you are using the Debian 7.4 net install ISO, the OS will be updated to 7.6 during the install process.  This is just a heads up in case you are keen to stick with a vanilla Debian 7.4 VM for test purposes; if so, you will need to download the full install DVD for the 7.4 32-bit/64-bit release instead of the net install ISO.

Getting A New VM Started

Once I had the install media of my choice, I copied it to my ISO repository that both XenServer 6.2 and Creedence utilize in my test environment.

From XenCenter (distributed with Creedence Alpha 4) I selected "New VM".

In both 6.2 and Creedence I chose the "Debian 7.0 (Wheezy) 64-bit" VM template:

I then continued through the "New VM" wizard: specifying processors, RAM, networking, and so forth.  On the last step, I made sure to select "Start the new VM Automatically" before I pressed "Create Now":

Within a few moments, this familiar view appeared in the console:

I installed a minimal instance with just the SSH and base system package sets.  I also used guided partitioning because I was in quite a hurry.

After championing my way through the installer, as expected, Debian 7.4 and 7.6 both prompted that I reboot:

Since this is a PV install, I had access to the Shutdown, Reboot, and Suspend buttons, but guest metrics such as memory consumption were not present under each guest's "Performance" tab:

... and the "Network" tab stated "Unknown":

Before I logged in as root - in both XenServer 6.2 and Creedence Alpha 4 - I mounted the xs-tools.iso.  Once in with root access, I executed the following commands to install xs-tools for these guest VMs:


mkdir iso
mount /dev/xvdd iso/
cd iso/Linux/
./install.sh

The output was exactly the same in both VMs and naturally I selected "Y" to install the guest additions:

Detected `Debian GNU/Linux 7.6 (wheezy)' (debian version 7).

The following changes will be made to this Virtual Machine:
  * update arp_notify sysctl.conf.
  * packages to be installed/upgraded:
    - xe-guest-utilities_6.2.0-1137_amd64.deb

Continue? [y/n] y

Selecting previously unselected package xe-guest-utilities.
(Reading database ... 24502 files and directories currently installed.)
Unpacking xe-guest-utilities (from .../xe-guest-utilities_6.2.0-1137_amd64.deb) ...
Setting up xe-guest-utilities (6.2.0-1137) ...
Mounting xenfs on /proc/xen: OK
Detecting Linux distribution version: OK
Starting xe daemon:  OK

You should now reboot this Virtual Machine.

Following the installer's instructions, I rebooted the guest VMs accordingly.

Creedence Alpha 4 Results

As soon as the reboot was complete I was able to see each guest VM's memory performance as well as networking for both IPv4 and IPv6:

XenServer 6.2

With XenServer 6.2, I found that after installing the guest agent - under the "Network" tab - there still was no IPv4 information for my 64-bit Debian 7.4 and 7.6 guest VMs.  This does not apply to 32-Bit Debian 7.4 and 7.6 guest VMs as the tools installed just fine.

Then I thought about it and tried disabling IPv6 - presto, the network information appeared for my IPv4 address.  To accomplish this, I edited the following file (to avoid adjusting GRUB parameters):

/etc/sysctl.conf

And at the bottom of this file I added:

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv6.conf.eth0.disable_ipv6 = 1
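As a side note, sysctl settings like these can usually be applied without a reboot (a reboot, as described below, works just as well):

sysctl -p /etc/sysctl.conf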

After saving my changes, I rebooted and immediately was able to see my memory usage:

However... I still could not see my IPv4 address under the "Network" tab until I noticed the device ID of the network interface -- it was Device 1 (not 0):

I deleted this interface and re-added a new one from XenCenter.  Instantly, I could see my IPv4 address and the device ID for the network interface was back to 0:
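For those without XenCenter handy, roughly the same delete/re-add can be done from the host CLI (a sketch; the UUIDs are placeholders):

xe vif-unplug uuid=<old VIF UUID>      # detach the old interface from the running VM
xe vif-destroy uuid=<old VIF UUID>
xe vif-create vm-uuid=<VM UUID> network-uuid=<network UUID> device=0
xe vif-plug uuid=<new VIF UUID>        # attach the replacement interface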

And yes, I tested rebooting -- the address is still shown and memory usage is still measured.  I also tried removing the IPv6-disabling settings, but that resulted in seeing "UNKNOWN" again for 64-bit Debian 7.4 and 7.6 guests.  So on XenServer 6.2 I have kept my changes in /etc/sysctl.conf to ensure my 64-bit Debian 7.4 and 7.6 guests work just fine with XenTools' Guest Additions for Linux.

So, that's that -- something to experiment and test with: Debian 7.4 or 7.6 32-bit/64-bit in a XenServer 6.2 or Creedence Alpha test environment!

 

--jkbs

@xenfomation


In-memory read caching for XenServer

Overview

In this blog post, I introduce a new feature of XenServer Creedence alpha.4, in-memory read caching, the technical details, the benefits it can provide, and how best to use it.

Technical Details

A common way of using XenServer is to have an OS image, which I will call the golden image, and many clones of this image, which I will call leaf images. XenServer implements cheap clones by linking images together in the form of a tree. When the VM reads a sector from its disk, if that sector has been written into the leaf image, the data is retrieved from that image. Otherwise, the tree is traversed and the data is retrieved from a parent image (in this case, the golden image). All writes go into the leaf image. Astute readers will notice that no writes ever hit the golden image. This has an important implication and allows read caching to be implemented.

tree.png

tapdisk is the storage component in dom0 which handles requests from VMs (see here for many more details). For safety reasons, tapdisk opens the underlying VHD files with the O_DIRECT flag. The O_DIRECT flag ensures that dom0's page cache is never used; i.e. all reads come directly from disk and all writes wait until the data has hit the disk (at least as far as the operating system can tell, the data may still be in a hardware buffer). This allows XenServer to be robust in the face of power failures or crashes. Picture a situation where a user saves a photo and the VM flushes the data to its virtual disk which tapdisk handles and writes to the physical disk. If this write goes into the page cache as a dirty page and then a power failure occurs, the contract between tapdisk and the VM is broken since data has been lost. Using the O_DIRECT flag allows this situation to be avoided and means that once tapdisk has handled a write for a VM, the data is actually on disk.
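To get a feel for what O_DIRECT changes, the same flag is exposed by dd, so a quick illustration from any Linux shell might be (purely illustrative; this has nothing to do with tapdisk internals):

# Buffered write: may return before the data is on disk, since it can sit in the page cache
dd if=/dev/zero of=/tmp/buffered.img bs=1M count=64

# Direct write: bypasses the page cache, so dd waits on the disk itself
dd if=/dev/zero of=/tmp/direct.img bs=1M count=64 oflag=direct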

Because no data is ever written to the golden image, we don't need to maintain the safety property mentioned previously. For this reason, tapdisk can elide the O_DIRECT flag when opening a read-only image. This allows the operating system's page cache to be used which can improve performance in a number of ways:

  • The number of physical disk I/O operations is reduced (as a direct consequence of using a cache).
  • Latency is improved since the data path is shorter if data does not need to be read from disk.
  • Throughput is improved since the disk bottleneck is removed.

One of our goals for this feature was that it should have no drawbacks when enabled. An effect which we noticed initially was that data appeared to be read twice from disk, which increases the number of I/O operations in the case where data is only read once by the VM. After a little debugging, we found that disabling O_DIRECT causes the kernel to automatically turn on readahead. Because the data access pattern of a VM's disk tends to be quite random, this had a detrimental effect on the overall number of read operations. To fix this, we made use of a POSIX feature, posix_fadvise, which allows an application to inform the kernel how it plans to use a file. In this case, tapdisk tells the kernel that access will be random using the POSIX_FADV_RANDOM flag. The kernel responds to this by disabling readahead, and the number of read operations drops to the expected value (the same as when O_DIRECT is enabled).

Administration

Because of difficulties maintaining cache consistency across multiple hosts in a pool for storage operations, read caching can only be used with file-based SRs; i.e. EXT and NFS SRs. For these SRs, it is enabled by default. There shouldn't be any performance problems associated with this; however, if necessary, it is possible to disable read caching for an SR:

xe sr-param-set uuid=<UUID> other-config:o_direct=true
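If you later want to re-enable read caching on that SR, removing the override should suffice (a hedged note; this is the standard xe pattern for clearing an other-config key):

xe sr-param-remove uuid=<UUID> param-name=other-config param-key=o_direct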

You may wonder how read caching differs from IntelliCache. The major difference is that IntelliCache works by caching reads from the network onto a local disk, while in-memory read caching caches reads from either of those into memory. The advantage of in-memory read caching is that memory is still an order of magnitude faster than an SSD, so performance in bootstorms and other heavy I/O situations should be improved. It is possible for them both to be enabled simultaneously; in this case reads from the network are cached by IntelliCache to a local disk and reads from that local disk are cached in memory with read caching. It is still advantageous to have IntelliCache turned on in this situation because the amount of available memory in dom0 may not be enough to cache the entire working set, and reading the remainder from local storage is quicker than reading over the network. IntelliCache further reduces the load on shared storage when using VMs with disks that are not persistent across reboots by only writing to the local disk, not the shared storage.

Talking of available memory, XenServer admins should note that to make best use of read caching, the amount of dom0 memory may need to be increased. Ideally the amount of dom0 memory would be increased to the size of the golden image so that once cached, no more reads hit the disk. In case this is not possible, an approach to take would be to temporarily increase the amount of dom0 memory to the size of the golden image, boot up a VM and open the various applications typically used, determine how much dom0 memory is still free, and then reduce dom0's memory by this amount.
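For reference, checking dom0's free memory is straightforward, and on XenServer 6.x releases the documented way to change dom0's allocation has been the xen-cmdline helper (hedged; confirm the exact procedure against the Creedence release notes before relying on it):

# Inside dom0: how much memory is currently free?
free -m

# Example only: raise dom0's memory to 4 GiB (takes effect after a host reboot)
/opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=4096M,max:4096M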

Performance Evaluation

Enough talk, let's see some graphs!

reads.png

In this first graph, we look at the number of bytes read over the network when booting a number of VMs on an NFS SR in parallel. Notice how without read caching, the number of bytes read scales proportionately with the number of VMs booted which checks out since each VM's reads go directly to the disk. When O_DIRECT is removed, the number of bytes read remains constant regardless of the number of VMs booted in parallel. Clearly the in-memory caching is working!

time.png

How does this translate to improvements in boot time? The short answer: see the graph! The longer answer is that it depends on many factors. In the graph, we can see that there is little difference in boot time when booting less than 4 VMs in parallel because the NFS server is able to handle that much traffic concurrently. As the number of VMs increases, the NFS server becomes saturated and the difference in boot time becomes dramatic. It is clear that for this setup, booting many VMs is I/O-limited so read caching makes a big difference. Finally, you may wonder why the boot time per VM increases slowly as the number of VMs increases when read caching is enabled. Since the disk is no longer a bottleneck, it appears that some other bottleneck has been revealed, probably CPU contention. In other words, we have transformed an I/O-limited bootstorm into a CPU-limited one! This improvement in boot times would be particularly useful for VDI deployments where booting many instances of the same VM is a frequent occurrence.

Conclusions

In this blog post, we've seen that in-memory read caching can improve performance in read I/O-limited situations substantially without requiring new hardware, compromising reliability, or requiring much in the way of administration.

As future work to improve in-memory read caching further, we'd like to remove the limitation that it can only use dom0's memory. Instead, we'd like to be able to use the host's entire free memory. This is far more flexible than the current implementation and would remove any need to tweak dom0's memory.

Credits

Thanks to Felipe Franciosi, Damir Derd, Thanos Makatos and Jonathan Davies for feedback and reviews.


Running Scientific Linux Guest VMs on XenServer

Running Scientific Linux Guest VMs on XenServer

What is Scientific Linux?

In short, Scientific Linux is a customized Red Hat/CentOS Linux distribution provided by CERN and Fermilab, popular in educational institutions as well as laboratory environments.  More can be read about Scientific Linux here: https://www.scientificlinux.org/

From my own long-term testing - before XenServer 6.2 and our pre-release/Alpha, Creedence - I have run both Scientific Linux 5 and Scientific Linux 6 without issues.  This article's scope is to show how one can install Scientific Linux and, more specifically, ensure the XenTools Guest Additions for Linux are installed, as these do not require any form of "Xen-ified" kernel.

XenServer and Creedence

The following are my own recommendations to run Scientific Linux in XenServer:

  1. I recommend using XenServer 6.1 through any of the Alpha releases due to improvements with XenTools
  2. I recommend using Scientific Linux 5 or Scientific Linux 6
  3. The XenServer VM template to use will be either CentOS 5 or CentOS 6; choose 32- or 64-bit depending on the release of Scientific Linux you will be using

You will also need a URL to install Scientific Linux from its repository, found at http://ftp.scientificlinux.org/linux/scientific/

The following are URLs I recommend for use during the Guest Installation process (discussed later):

Scientific Linux 5 or 6 Guest VM Installation

With XenCenter, the process of installing Scientific Linux 5.x or Scientific Linux 6 uses the same principles.  You need to create a new VM, select the appropriate CentOS template, and define the VM parameters for disk, RAM, and networking:

1.  In XenCenter, select "New VM":

2.  When prompted for the new VM Template, select the appropriate CentOS-based template (5 or 6, 32 or 64 bit):

3.  Follow the wizard to add processors, disk, and networking information

4.  From the console, follow the steps as to install Scientific Linux 5 or 6 based on your preferences.

5.  After rebooting, login as root and execute the following command within the Guest VM:

yum update

6.  Once yum has applied any updates, reboot the Scientific Linux 5 or 6 Guest VM by executing the following within the Guest VM:

reboot

7.  With the Guest VM back up, log in as root and mount the xs-tools.iso within XenCenter:

8.  From the command line, execute the following commands to mount xs-tools.iso within the Guest VM and run the install.sh utility:

cd ~
mkdir tools
mount /dev/xvdd tools/
cd tools/Linux/
./install.sh

9.  With Scientific Linux 5 you will be prompted to install the XenTools Guest Additions - select yes and, when complete, reboot the VM:

reboot

10.  With Scientific Linux 6 you will notice the following output:

Fatal Error: Failed to determine Linux distribution and version.

11.  This is not actually a fatal error; it occurs because the distro build and revision are not reported in the way the installer expects.  It means you will need to install the XenTools Guest Additions manually by executing the following commands and rebooting:

rpm -ivh xe-guest-utilities-xenstore-<version number here>.x86_64.rpm
rpm -ivh xe-guest-utilities-<version number here>.x86_64.rpm
reboot

Finally, after the last reboot (post guest-addition install), one will notice from XenCenter that the network address, stats, and so forth are available (including the ability to migrate the VM):
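The same information is visible from the host CLI once the guest agent is reporting (the UUID is a placeholder):

xe vm-param-get uuid=<VM UUID> param-name=networks     # guest IP addresses
xe vm-param-get uuid=<VM UUID> param-name=os-version   # distro details reported by the agent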

 

I hope this article helps any of you out there and feedback is always welcomed!

--jkbs

@xenfomation

 


XenServer Status - October 2013

This past month saw some significant progress toward our objective of converting XenServer from a closed source product developed within Citrix to an open source project.  This is a process which is considerably more difficult and detailed than simply announcing that we’re now open source, and I’m pleased to announce that in October we completed the publication of all sources making up XenServer.  While there is considerable work left to be done, not only can interested parties view all of the code, but we have also posted nightly snapshots of the last thirty builds from trunk.  For organizations looking to integrate with XenServer, these builds represent an ideal early access program from which to test integrations.

In addition to the code progress, we’ve also been busy building capabilities and supporting a vibrant ecosystem.  Some of the highlights include:

Now no status report would be complete without some metrics, and we’ve got some pretty decent stats as well.  Unique visitors to xenserver.org in October were up 12% to over 26,000.  Downloads of the core XenServer installation ISO directly from xenserver.org were up by over 1000 downloads.  Mailing list activity was up 50% and we had over 80 commits to the XenServer repositories.  What’s even more impressive with these numbers is that XenServer is built from a number of other open source projects, so the real activity level within XenServer is considerably larger.

At the end of the day this is one month, but it is a turning point.  I’ve been associated in one form or another with XenServer since 2008, and even way back then there were many who expected XenServer was unlikely to be around for long.  Five years later there are more competing solutions, but the future for XenServer is as solid as ever.  We’re working through some of the technical issues which have artificially limited XenServer in recent years, but we are making significant progress.  If you are looking for a solid, high performance, open source virtualization platform; then XenServer needs to be on your list.  If you are looking to contain the costs of delivering virtualized infrastructure, the same holds true.  

More important than all these excellent steps forward is how XenServer can benefit the ecosystem of vendors and fellow open source projects which are required to fully deliver virtualized infrastructure at large scale.  Over the next several months I’m going to be reaching out to various constituencies to see what we should be doing to make participating in the ecosystem more valuable.  If you want to be included in that process, please let me know.


Citrix XenServer 6.1 Wins the “Best Virtualization Product” Award

[originally posted on the Citrix Blog by Sarah Vallieres]

Citrix XenServer 6.1 has won the “Best Virtualization Product” award from “IT Decision-Making in the Future – 2012 Outstanding Enterprise-Class IT Products Selection”, organized by TechTarget China.

The reason: Citrix XenServer 6.1 is the industry’s leading enterprise-class virtualization platform. XenServer can create and manage virtualized infrastructure for servers, desktops and clouds, and provides enterprises with a shortcut in their transition to cloud computing.

The award is one of TechTarget China’s Products of the Year. It was jointly organized by TechTarget China’s 14 websites, covering storage, networking, databases, virtualization, cloud computing and data center management. The selection was made by evaluating various aspects of the products, including cost-effectiveness, technical service, reliability, manageability, scalability, and so forth. It honors the enterprises and products that perform outstandingly in the enterprise-class IT market, providing a reference for enterprise IT teams selecting products.

For more information, please see the following web page: http://www.techtarget.com.cn/microsites/2012/awards/result.html.


About XenServer

XenServer is the leading open source virtualization platform, powered by the Xen Project hypervisor and the XAPI toolstack. It is used in the world's largest clouds and enterprises.
 
Commercial support for XenServer is available from Citrix.