Virtualization Blog

Discussions and observations on virtualization.

Preview of XenServer Administrators Handbook

Administering any technology can be both fun and challenging at times. For many, the fun part is designing a new deployment, while for others the hardware selection, system configuration and tuning, and the actual deployment are the rewarding parts of being an SRE. Then the challenging part hits: the design and deployment become part of the everyday inner workings of your company, and with that come upgrades, failures, and fixes. For example, you might need to figure out how to scale beyond the original design, deal with failed hardware, or find ways to update an entire data center without user downtime. No matter how long you've been working with a technology, the original paradigms do change, and there is always an opportunity to learn how to do something more efficiently.

That's where a project JK Benedict and I have been working on with the good people of O'Reilly Media comes in. The idea is a simple one. We wanted a reference guide which would contain valuable information for anyone using XenServer - period. If you are just starting out, there would be information to help you make that first deployment a successful one. If you are looking at redesigning an existing deployment, there are valuable time-saving nuggets of info, too. If you are a longtime administrator, you would find some helpful recipes to solve real problems that you may not have tried yet. We didn't focus on long theoretical discussions, and we've made sure all content is relevant in a XenServer 6.2 or 6.5 environment. Oh, and we kept it concise because your time matters.

I am pleased to announce that attendees of OSCON will be able to get their hands on a preview edition of the upcoming XenServer Administrators Handbook. Not only will you be able to thumb through a copy of the preview book, but I'll have a signing at the O'Reilly booth on Wednesday July 22nd at 3:10 PM. I'm also told the first 25 people will get free copies, so be sure to camp out ;)

Now of course everyone always wants to know which animal gets featured on the book cover. As you can see below, we have a bird. Not just any bird, mind you, but a xenops. I didn't do anything to steer O'Reilly towards this choice, but I find it very cool that we have an animal whose name echoes a very core component of XenServer: xenopsd. For me, that's a clear indication we've created the appropriate content, and I hope you'll agree.

[Image: preview book cover, featuring a xenops]

Recent Comments
prashant sreedharan
cool ! cant wait to get my hands on the book :-)
Tuesday, 07 July 2015 19:32
Tobias Kreidl
Congratulations, Tim and Jesse, as an update in this area is long overdue and in very good hands with you two. The XenServer commu... Read More
Tuesday, 07 July 2015 19:42
JK Benedict
Ah, Herr Tobias -- Thank you, friend. Thank you for your support! Good evening!
Thursday, 23 July 2015 09:26
Continue reading
11552 Hits
6 Comments

XenServer's LUN scalability

"How many VMs can coexist within a single LUN?"

An important consideration when planning a deployment of VMs on XenServer is around the sizing of your storage repositories (SRs). The question above is one I often hear. Is the performance acceptable if you have more than a handful of VMs in a single SR? And will some VMs perform well while others suffer?

In the past, XenServer's SRs didn't always scale too well, so it was not always advisable to cram too many VMs into a single LUN. But all that changed in XenServer 6.2, allowing excellent scalability up to very large numbers of VMs. And the subsequent 6.5 release made things even better.

The following graph shows the total throughput enjoyed by varying numbers of VMs doing I/O to their VDIs in parallel, where all VDIs are in a single SR.

[Graph: aggregate throughput for increasing numbers of VMs doing parallel I/O to a single SR, comparing XenServer 6.1 and 6.5]

In XenServer 6.1 (blue line), a single VM would enjoy a modest 240 MB/s. But, counter-intuitively, adding more VMs to the same SR would cause the total to fall, reaching a low point around 20 VMs, which achieved a total of only 30 MB/s – an average of only 1.5 MB/s each!

On the other hand, in XenServer 6.5 (red line), a single VM achieves 600 MB/s, and it only requires three or four VMs to max out the LUN's capabilities at 820 MB/s. Crucially, adding further VMs no longer causes the total throughput to fall; it remains constant at the maximum rate.

And how well distributed was the available throughput? Even with 100 VMs, the available throughput was spread very evenly -- on XenServer 6.5 with 100 VMs in a LUN, the highest average throughput achieved by a single VM was only 2% greater than the lowest. The following graph shows how consistently the available throughput is distributed amongst the VMs in each case:

[Graph: distribution of per-VM throughput with 100 VMs sharing a single LUN]

Specifics

  • Host: Dell R720 (2 x Xeon E5-2620 v2 @ 2.1 GHz, 64 GB RAM)
  • SR: Hardware HBA using FibreChannel to a single LUN on a Pure Storage 420 SAN
  • VMs: Debian 6.0 32-bit
  • I/O pattern in each VM: 4 MB sequential reads (O_DIRECT, queue-depth 1, single thread). The graph above has a similar shape for smaller block sizes and for writes. A rough way to reproduce this pattern is sketched below.
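For anyone wanting to approximate this I/O pattern inside a guest, a minimal sketch follows. It is not the tool used for the measurements above, and /dev/xvdb is an assumed secondary virtual disk; substitute whichever VDI you want to exercise.

# 4 MB sequential direct reads, single thread, queue depth 1
dd if=/dev/xvdb of=/dev/null bs=4M iflag=direct
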
Recent Comments
Tobias Kreidl
Very nice, Jonathan, and it is always good to raise discussions about standards that are known to change over time. This is partic... Read More
Friday, 26 June 2015 19:52
Tobias Kreidl
Indeed, depending on the specific characteristics of each storage array there will be some maximum queue depth per connection (por... Read More
Saturday, 27 June 2015 04:27
Jonathan Davies
Thanks for your comments, Tobias and John. You're absolutely right -- the LUN's capabilities are an important consideration. And n... Read More
Monday, 29 June 2015 08:53
Continue reading
12898 Hits
6 Comments

Security bulletin covering VENOM

Last week a vulnerability in QEMU was reported under the marketing name of "VENOM", but which is more correctly known as CVE-2015-3456. Citrix have released a security bulletin covering CVE-2015-3456 which has been updated to include hotfixes for XenServer 6.5, 6.5 SP1 and XenServer 6.2 SP1.

Learning about new XenServer hotfixes

When a hotfix is released for XenServer, it will be posted to the Citrix support web site. You can receive alerts from the support site by registering at http://support.citrix.com/profile/watches and following the instructions there. You will need to create an account if you don't have one, but the account is completely free. Whenever a security hotfix is released, there will be an accompanying security advisory in the form of a CTX knowledge base article for it, and those same KB articles will be linked on xenserver.org in the download page.

Patching XenServer hosts

XenServer admins are encouraged to schedule patching of their XenServer installations at their earliest opportunity. Please note that this bulletin does impact XenServer 6.2 hosts; to apply the patch, all XenServer 6.2 hosts will first need to be patched to Service Pack 1, which can be found on the XenServer download page.
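For reference, applying a hotfix from the command line follows the usual pattern sketched below. The file name is a placeholder for whichever hotfix the bulletin points to, and the commands are run from dom0 on the pool master:

# Upload the hotfix; the command prints the patch UUID
xe patch-upload file-name=/root/<hotfix-name>.xsupdate
# Apply it to every host in the pool using that UUID
xe patch-pool-apply uuid=<uuid returned by patch-upload>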

Continue reading
26716 Hits
1 Comment

Increasing Ubuntu's Resolution


Maximizing Desktop Real-estate with Ubuntu

With the addition of Ubuntu (and the like) to Creedence, you may have noticed that the default resolution is 1024x768.  I certainly noticed it, and after much work on 6.2 and the Creedence Beta, I have a quick solution for maximizing the screen resolution for you.

The thing to consider is that what is essentially in use is a virtual frame buffer.  You can re-invent X configs all day, but the shortest path is to first ensure that the following packages are installed on your Ubuntu guest VM:

sudo apt-get install xvfb xfonts-100dpi xfonts-75dpi xfstt

Once that is all done installing, the next step is to edit Grub -- specifically /etc/default/grub:

sudo vi /etc/default/grub

Considering your monitor's maximum resolution (or not, if you plan to remote into Ubuntu using XRDP), look for the variable GRUB_GFXMODE.  This is where you specify the desired BOOT resolutions that we will instruct the guest VM to SUSTAIN into user space:

GRUB_GFXMODE=1280x960,1280x800,1280x720,1152x768,1152x700,1024x768,800x600

Next, adjust the variable GRUB_GFXPAYLOAD_LINUX to equal keep, or:

GRUB_GFXPAYLOAD_LINUX=keep

Save the changes and be certain to execute the following:

sudo update-grub
sudo reboot

Now you will notice that even during the boot phase the resolution is larger, and this carries into user space: LightDM, Xfce, and the like.
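If you want to confirm which modes actually made it through to the running desktop, a quick check from a terminal inside the guest (assuming an X session is up) is:

xrandr

The output lists the modes the virtual frame buffer exposes, with the current one marked.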

Finally, I would highly suggest installing XRDP for your guest VM.  It allows you to access that Ubuntu/Xubuntu/etc. desktop remotely.  Specific details regarding this can be found through Ubuntu's forum:

http://askubuntu.com/questions/449785/ubuntu-14-04-xrdp-grey
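For the simple case, the install itself is a one-liner (a sketch; the grey-screen issue discussed in the link above may still require the tweaks described there):

sudo apt-get install xrdp

Once the service is running, point any RDP client at the guest's IP address on port 3389.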


Enjoy!

--jkbs | @xenfomation

 

 

Recent Comments
JK Benedict
Thanks, YLK - I am so glad to hear this helped someone else! Now... install XRDP and leverage the power to Remote Desktop (secure... Read More
Thursday, 25 December 2014 04:46
gfpl
thanks guy is very good help me !!!
Friday, 06 March 2015 10:52
Fredrik Wendt
Would be really nice to see all steps needed (CLI on dom0) to go from http://se.archive.ubuntu.com/ubuntu/dists/vivid/main/install... Read More
Monday, 14 September 2015 21:48
Continue reading
17018 Hits
6 Comments

VGA over Cirrus in XenServer 6.2

Achieve Higher Resolution and 32Bpp

For many reasons – not exclusive to XenServer – the Cirrus video driver has been a staple wherever a basic, more or less hardware-agnostic video driver is needed.  When one creates a VM within XenServer (specifically 6.2 and previous versions), the Cirrus video driver is used by default for video... and it does the job.

I had been working on a project with my mentor related to an eccentric OS, but I needed a way to get more real estate to test a HID pointing device by increasing the screen resolution.  This led me to find that at some point in our upstream code there were platform (virtual machine metadata) options that allow one to "ditch" Cirrus and 1024x768 resolution for higher resolutions and color depth via a standard VGA driver.

This is not tied to GPU pass-through, nor is it a hack.  It is a valuable way to achieve 32bpp color in Guest VMs with video support, as well as higher resolutions.

Windows 7: A Before and After Example

To show the difference between the default Cirrus driver and the standard VGA driver (which I will discuss how to switch to shortly), here are the resolution options Windows 7 Enterprise offered me with Cirrus:


Now, after switching the same Guest VM to standard VGA and rebooting, I had the following resolution options within Windows 7 Enterprise:

Switching a Guest for VGA

After you create your VM – Windows, Linux, etc – perform the following steps to enable the VGA adapter:

 

  • Halt the Guest VM
  • From the command line, find the UUID of your VM:
 xe vm-list name-label="Name of your VM"
  • Taking the UUID value, run the following two commands:
 xe vm-param-set uuid=<UUID of your VM> platform:vga=std
 xe vm-param-set uuid=<UUID of your VM> platform:videoram=4
  •  Finally, start your VM; you should now be able to achieve higher resolutions at 32bpp.

 

It is worth noting that the max amount of "videoram" that can be specified is 16 (megabytes).
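If you would like to confirm what the VM's platform record now contains, it can be read back with a quick check (not a required step):

 xe vm-param-get uuid=<UUID of your VM> param-name=platform

The output should include vga: std and videoram: 4 among the other platform keys.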

Switching Back to Cirrus

If – for one reason or another – you want to reset/remove these settings so as to stick with the Cirrus driver, run the following commands:

 xe vm-param-remove uuid=<UUID of your VM> param-name=platform param-key=vga
 xe vm-param-remove uuid=<UUID of your VM> param-name=platform param-key=videoram

Again, reboot your Guest VM and, in the absence of a VGA preference, the default Cirrus driver will be used.

What is the Catch?

There is no catch and no performance hit.  The VGA driver's "videoram" allocation is carved out of the virtual memory allocated to the Guest VM.  So, for example, if you have 4GB allocated to a Guest VM, subtract at most 16 megabytes from that 4GB.  Needless to say, that is a pittance and does not impact performance.

Speaking of performance, my own personal tests were simple and repeated several times:

 

  • Utilized a tool that will remain anonymous
  • Used various operating systems with Cirrus and the resolution at 1024 x 768
  • Ran a 2D graphics test suite
  • Wrote down Product X, Y, or Z's magic number that represents good or bad performance
  • Applied the changes to the VM to use VGA (keeping the resolution at 1024 x 768 for some kind of balance)
  • Ran the same volley of 2D tests after a reboot
  • Wrote down Product X, Y, or Z's magic number that represents good or bad performance

 

In the end, I found from my own experience that there was a very minor but noticeable difference between Cirrus and VGA: Cirrus usually came in 10-40 points below VGA at the 1024 x 768 level.  Based on the test suite used, this is nothing spectacular, but it is certainly a benefit, as I found no degraded performance elsewhere across XenServer (other Guests, and so on).

I hope this helps and as always: questions and comments are welcomed!

 

--jkbs | @xenfomation

 

Recent Comments
JK Benedict
Hey, Chris!! Excellent questions! So - I think I need to clear up my poor use of words: more importantly, tying words together. ... Read More
Saturday, 11 October 2014 22:50
Continue reading
25683 Hits
4 Comments

Security bulletin covering "Shellshock"

Over the past several weeks, there has been considerable interest in a series of vulnerabilities in bash with the attention-grabbing name of "shellshock". These bash vulnerabilities are more properly known as CVE-2014-6271, CVE-2014-6277, CVE-2014-6278, CVE-2014-7169, CVE-2014-7186 and CVE-2014-7187. As was indicated in security bulletin CTX200217, XenServer hosts were potentially impacted, but investigation was continuing. That investigation has been completed and the associated impact is described in security bulletin CTX200223, which also contains patch information for these vulnerabilities.
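As an aside, a commonly cited quick check for the original CVE-2014-6271 issue (it does not cover the follow-on CVEs) can be run against any copy of bash you want to inspect:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

If "vulnerable" is printed, that bash still needs patching; a patched bash prints only the test message.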

Learning about new XenServer hotfixes

When a hotfix is released for XenServer, it will be posted to the Citrix support web site. You can receive alerts from the support site by registering at http://support.citrix.com/profile/watches and following the instructions there. You will need to create an account if you don't have one, but the account is completely free. Whenever a hotfix is released, there will be an accompanying security advisory in the form of a CTX knowledgebase article for it, and those same KB articles will be linked on xenserver.org in the download page.

Patching XenServer hosts

XenServer admins are encouraged to schedule patching of their XenServer installations. Please note that the items contained in the CTX200223 bulletin do impact XenServer 6.2 hosts, and to apply the patch, all XenServer 6.2 hosts will first need to be patched to service pack 1. The complete list of patches can be found on the XenServer download page.     

Continue reading
18086 Hits
0 Comments

Security bulletin covering XSA-108

Over the past week there has been considerable interest in an embargoed Xen Project security advisory known as XSA-108. On October 1st, 2014, the embargo surrounding this advisory was lifted, and coincident with that action, Citrix released a security bulletin covering XSA-108, as well as two additional advisories which impact XenServer releases.

CVE-2014-7188 (XSA-108) Status

CVE-2014-7188, also known as XSA-108, has received significant press. A patch for this was made available on the Citrix support site on October 1st. The patch is available at CTX200218, and also includes remedies for CVE-2014-7155 and CVE-2014-7156.

Learning about new XenServer hotfixes

When a hotfix is released for XenServer, it will be posted to the Citrix support web site. You can receive alerts from the support site by registering at http://support.citrix.com/profile/watches and following the instructions there. You will need to create an account if you don't have one, but the account is completely free. Whenever a hotfix is released, there will be an accompanying security advisory in the form of a CTX knowledge base article for it, and those same KB articles will be linked on xenserver.org in the download page.

Patching XenServer hosts

XenServer admins are encouraged to schedule patching of their XenServer installations at their earliest opportunity. Please note that this bulletin does impact XenServer 6.2 hosts, and to apply the patch, all XenServer 6.2 hosts will first need to be patched to service pack 1. The complete list of patches can be found on the XenServer download page.     

Continue reading
12323 Hits
0 Comments

Debian 7.4 and 7.6 Guest VMs

"Four Debians, Two XenServers"

The purpose of this article is to discuss my own success with virtualizing "four" releases of Debian (7.4/7.6; 32-bit/64-bit) in my own test labs.

For more information about Debian, head on over to Debian.org - specifically the download area - to grab the 7.6 ISO of your choice (I used both the full DVD install ISO and the net install ISO).

Note: If you are utilizing the Debian 7.4 net install ISO, the OS will be updated to 7.6 during the install process.  This is just a "heads up" in the event you are keen to stick with a vanilla Debian 7.4 VM for test purposes: in that case, you will need to download the full install DVD for the 7.4 32-bit/64-bit release instead of the net install ISO.

Getting A New VM Started

Once I had the install media of my choice, I copied it to my ISO repository that both XenServer 6.2 and Creedence utilize in my test environment.

From XenCenter (distributed with Creedence Alpha 4) I selected "New VM".

In both 6.2 and Creedence I chose the "Debian 7.0 (Wheezy) 64-bit" VM template:

I then continued through the "New VM" wizard: specifying processors, RAM, networking, and so forth.  On the last step, I made sure as to select "Start the new VM Automatically" before I pressed "Create Now":

Within a few moments, this familiar view appeared in the console:

I installed a minimal instance with just SSH and the base system.  I also used guided partitioning, simply because I was in quite a hurry.

After championing my way through the installer, as expected, Debian 7.4 and 7.6 both prompted that I reboot:

Since this is a PV install, I have access to the Shutdown, Reboot, and Suspend buttons, but I was curious about the guest tools, as memory consumption and other metrics were not present under each guest's "Performance" tab:

... and the "Network" tab stated "Unknown":

Before I logged in as root - in both XenServer 6.2 and Creedence Alpha 4 - I mounted the xs-tools.iso.  Once in with root access, I executed the following commands to install xs-tools for these guest VMs:


mkdir iso
mount /dev/xvdd iso/
cd iso/Linux/
./install.sh

The output was exactly the same in both VMs and naturally I selected "Y" to install the guest additions:

Detected `Debian GNU/Linux 7.6 (wheezy)' (debian version 7).

The following changes will be made to this Virtual Machine:
  * update arp_notify sysctl.conf.
  * packages to be installed/upgraded:
    - xe-guest-utilities_6.2.0-1137_amd64.deb

Continue? [y/n] y

Selecting previously unselected package xe-guest-utilities.
(Reading database ... 24502 files and directories currently installed.)
Unpacking xe-guest-utilities (from .../xe-guest-utilities_6.2.0-1137_amd64.deb) ...
Setting up xe-guest-utilities (6.2.0-1137) ...
Mounting xenfs on /proc/xen: OK
Detecting Linux distribution version: OK
Starting xe daemon:  OK

You should now reboot this Virtual Machine.

Following the installer's instructions, I rebooted the guest VMs accordingly.

Creedence Alpha 4 Results

As soon as the reboot was complete I was able to see each guest VM's memory performance as well as networking for both IPv4 and IPv6:

XenServer 6.2

With XenServer 6.2, I found that after installing the guest agent - under the "Network" tab - there still was no IPv4 information for my 64-bit Debian 7.4 and 7.6 guest VMs.  This did not apply to the 32-bit Debian 7.4 and 7.6 guest VMs, where the tools installed and worked just fine.

Then I thought about it and realized that by disabling IPv6 - presto - the network information for my IPv4 address would appear.  To accomplish this, I edited the following file (so as to avoid adjusting GRUB parameters):

/etc/sysctl.conf

And at the bottom of this file I added:

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv6.conf.eth0.disable_ipv6 = 1
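(As a side note, the same settings can be applied immediately with the command below; the reboot described next achieves the same end result.)

sudo sysctl -p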

After saving my changes, I rebooted and immediately was able to see my memory usage:

However... I still could not see my IPv4 address under the "Network" tab until I noticed the device ID of the network interface -- it was Device 1 (not 0):

I deleted this interface and re-added a new one from XenCenter.  Instantly, I could see my IPv4 address and the device ID for the network interface was back to 0:

And yes, I tested rebooting -- the address is still shown and memory usage is still measured.  In addition, I did try removing the flags that disable IPv6, but that resulted in seeing "UNKNOWN" again for the 64-bit Debian 7.4 and 7.6 guests.  That just means that on XenServer 6.2 I have kept my changes in /etc/sysctl.conf to ensure my 64-bit Debian 7.4 and 7.6 guests with XenTools' Guest Additions for Linux work just fine.

So, that's that -- something to experiment and test with: Debian 7.4 or 7.6 32-bit/64-bit in a XenServer 6.2 or Creedence Alpha test environment!

 

--jkbs

@xenfomation

Recent comment in this post
JK Benedict
Tested on Creedence Beta, as well. Love it!!!
Thursday, 07 August 2014 18:58
Continue reading
16052 Hits
1 Comment

XenServer Status – January 2014

The release of true hardware GPU sharing and XenServer 6.2 SP1 was a strong finish to 2013, and based on the feedback from the Citrix Partner Summit a few weeks back, we really are a key differentiator for Citrix XenDesktop, which fulfills one of the roles XenServer has in the ecosystem. Of course this also opens the question of how to get the sources for the cool new bits, and I'll cover that in just a little bit. Looking beyond the commercial role for XenServer, we also saw significant growth in the core project, with visitors, page views, downloads and mailing list activity all up at least 20% compared to December. From the perspective of engineering accomplishments, completed work in Q4 included a move to version 4.3 of the Xen Project hypervisor, a move to CentOS 6.4 with Linux kernel 3.10, and significant work towards a 64-bit dom0, upstream support for Windows PV drivers, and blktap3 for storage. All told, this is a fantastic base to build upon in 2014.

Speaking of foundations, as an open source project we have an obligation to our community to provide clear access to the source used to produce XenServer. Unfortunately, it's become apparent that some confusion exists about the state of the project and the source code locations. Fundamentally, we had a miscommunication where many assumed the sources on xenserver.org and posted on GitHub represented XenServer 6.2, and that code changes occurring in the GitHub repositories represented the XenServer 6.2 product direction. In reality, XenServer 6.2 is a fork of XenServer which occurred prior to the creation of xenserver.org, and the code which is part of xenserver.org represents trunk. So what does this mean for those of you looking for code, or for that matter wanting to test your solution against the correct binaries? To solve that I've created this handy little table:

  • XenServer 6.1 and prior: Source is located on citrix.com within the downloads section
  • XenServer 6.2: Source is located on citrix.com within the downloads section, and on the xenserver.org download page
  • XenServer 6.2 hotfixes: Source is located within the zip file containing the hotfix
  • XenServer 6.2 SP1: Source is located within the zip file containing the service pack
  • XenServer trunk: Source is located in the XenServer GitHub repository
  • XenServer nightly builds: Source is located in the XenServer GitHub repository
  • XenCenter 6.1 and prior: Source is not available
  • XenCenter 6.2 and later: Source is located in the XenServer GitHub repository, and all XenCenter 6.2 versions are built from trunk
  • XenServer optional components: Not all optional components are open source. For components which are open source, the source will be available with the component. Note that source code from third parties may require a license from the third party to obtain source (e.g. proprietary headers)

 

So what does this mean for specific feature work, and more importantly the next major version of XenServer? If the work being performed occurs within the XenServer 6.2 branch (for example as a hotfix), then that work will continue to be performed as it always has and source will be posted with that release. The only exception to that is work on XenCenter which is always occurring in trunk. Work for the next major release will occur in trunk as it currently has, but specific feature implementations in trunk shouldn't be considered "ready" until we actually release. In practice that means we may have some proof of concept stuff get committed, and we may decide that proof of concept work isn't compatible with newer work and refactor things before the release. I hope this clears things up a little, and there is now a better understanding of where a given feature can be found.     

Recent Comments
GizmoChicken
Tim, you mention the move to Xen 4.3, the move to CentOS 6.4 with Linux kernel to 3.10, and the significant work towards a 64-bit ... Read More
Sunday, 16 February 2014 06:52
Kristoffer Sheather
Can you provide a roadmap / projected schedule for the next releases of XenServer?
Monday, 03 March 2014 00:49
Continue reading
15477 Hits
3 Comments

XenServer Status – November 2013

The progress towards fulfilling the goal of making XenServer a proper open source project continues, but this month much of the work isn't visible yet.  The big process improvements will hopefully be unveiled in late December or early January, when we get our long-needed wiki and defect trackers online.  The logical question, of course, is why it's taking so long to get them out there.  After all, we obviously do have the content, so why not just make it all public and be done?  Unfortunately, there is no magic wand to remove customer-sensitive information, or to ensure that designs linked to closed source development on other Citrix products, or information provided to Citrix by partners under NDA, aren't accidentally made public.  It's painstaking work and we want to get it right.

In terms of partner announcements, we’ve been focusing on the NVIDIA vGPU work, as well as security efforts.

  • “Kaspersky trusted status” awarded to XenServer Windows Tools: http://blogs.citrix.com/2013/11/14/citrix-xenserver-windows-tools-awarded-kaspersky-trusted-status-plus-a-security-ecosystem-update/

  • SAP 3D Enterprise on XenDesktop on XenServer powered by NVIDIA GRID: http://blogs.citrix.com/2013/11/15/vgpu-sap-3d-visual-enterprise-the-potential-for-mobile-cadplm-xendesktop-on-xenserver-powered-by-nvidia-grid/

  • The XenServer HCL has been expanded to include new servers from HP, Hitachi, Supermicro, Huawei, Lenovo and Fujitsu, storage devices from QNAP, Nexsan and Hitachi Data Systems, storage adapters from IBM and QLogic, plus two CNAs from Emulex.

When I posted the project status last month, we had some significant gains, and this month is no different.  Compared to October:

  • Unique visitors were up 30% to 34,000

  • xenserver.org page views were up 21% to over 110,000

  • Downloads of the XenServer installer were up by 7,000

  • We had over 110 commits to the XenServer repositories.

What's most interesting about these stats isn't the growth, which I do love, but that we're getting to a point where the activity level is starting to feel right for a project of our maturity.  Don't get me wrong: I am still looking for lots more growth, but I'm also looking for sustained activity.  That's why I'm looking more at how XenServer interacts with its community, and what can be done to improve the relationship.  In my Open@Citrix blog, I asked the question "What kind of community do you want?"  In my mind, everyone has a voice; it's just up to you to engage with us.  I'd like to hear what you want from us, both the good and the bad.  If you have a community you'd like us to be involved with, I'd like to hear about that too.

Here is how I define the XenServer community:

The XenServer community is an independent group working to a common purpose, with a goal of leveraging each other to maximize the success of the community.  Members are proud to be associated with the community.

 

We all have a role to play in the future success of XenServer, and while I have the twitter handle of @XenServerArmy, I view my role as supporting you.  If there is something which is preventing you from adopting XenServer, or being as successful with XenServer as you intended, I want to know.  I want to remove as many barriers to adopting XenServer as I can, and I am your voice within the XenServer team at Citrix.  Please be vocal in your support, and vocal with what you need.

Recent Comments
srinivas j
XenServer status? Please post or update the status of XenServer roadmap
Sunday, 05 January 2014 03:31
srinivas j
currently hold XenServer licenses for 10+ hosts and are eagerly waiting for any updates to XenServer roadmap..
Sunday, 05 January 2014 03:32
Continue reading
10406 Hits
2 Comments

XenServer Status - October 2013

This past month saw some significant progress toward our objective of converting XenServer from a closed source product developed within Citrix to an open source project.  This is a process which is considerably more difficult and detailed than simply announcing that we’re now open source, and I’m pleased to announce that in October we completed the publication of all sources making up XenServer.  While there is considerable work left to be done, not only can interested parties view all of the code, but we have also posted nightly snapshots of the last thirty builds from trunk.  For organizations looking to integrate with XenServer, these builds represent an ideal early access program from which to test integrations.

In addition to the code progress, we’ve also been busy building capabilities and supporting a vibrant ecosystem.  Some of the highlights include:

Now no status report would be complete without some metrics, and we’ve got some pretty decent stats as well.  Unique visitors to xenserver.org in October were up 12% to over 26,000.  Downloads of the core XenServer installation ISO directly from xenserver.org were up by over 1000 downloads.  Mailing list activity was up 50% and we had over 80 commits to the XenServer repositories.  What’s even more impressive with these numbers is that XenServer is built from a number of other open source projects, so the real activity level within XenServer is considerably larger.

At the end of the day this is one month, but it is a turning point.  I’ve been associated in one form or another with XenServer since 2008, and even way back then there were many who expected XenServer was unlikely to be around for long.  Five years later there are more competing solutions, but the future for XenServer is as solid as ever.  We’re working through some of the technical issues which have artificially limited XenServer in recent years, but we are making significant progress.  If you are looking for a solid, high performance, open source virtualization platform; then XenServer needs to be on your list.  If you are looking to contain the costs of delivering virtualized infrastructure, the same holds true.  

More important than all these excellent steps forward is how XenServer can benefit the ecosystem of vendors and fellow open source projects which are required to fully deliver virtualized infrastructure at large scale.  Over the next several months I’m going to be reaching out to various constituencies to see what we should be doing to make participating in the ecosystem more valuable.  If you want to be included in that process, please let me know.

Continue reading
12142 Hits
0 Comments

How did we increase VM density in XenServer 6.2? (part 2)

In a previous article, I described how dom0 event channels can cause a hard limitation on VM density scalability.

Event channels were just one hard limit the XenServer engineering team needed to overcome to allow XenServer 6.2 to support up to 500 Windows VMs or 650 Linux VMs on a single host.

In my talk at the 2013 Xen Developer Summit towards the end of October, I spoke about a further six hard limits and some soft limits that we overcame along the way to achieving this goal. This blog article summarises that journey.

Firstly, I'll explain what I mean by hard and soft VM density limits. A hard limit is where you can run a certain number of VMs without any trouble, but you are unable to run one more. Hard limits arise when there is some finite, unsharable resource that each VM consumes a bit of. On the other hand, a soft limit is where performance degrades with every additional VM you have running; there will be a point at which it's impractical to run more than a certain number of VMs because they will be unusable in some sense. Soft limits arise when there is a shared resource that all VMs must compete for, such as CPU time.

Here is a run-down of all seven hard limits, how we mitigated them in XenServer 6.2, and how we might be able to push them even further back in future:

  1. dom0 event channels

    • Cause of limitation: XenServer uses a 32-bit dom0. This means a maximum of 1,024 dom0 event channels.
    • Mitigation for XenServer 6.2: We made a special case for dom0 to allow it up to 4,096 dom0 event channels.
    • Mitigation for future: Adopt David Vrabel's proposed change to the Xen ABI to provide unlimited event channels.
  2. blktap2 device minor numbers

    • Cause of limitation: blktap2 only supports up to 1,024 minor numbers, caused by #define MAX_BLKTAP_DEVICE in blktap.h.
    • Mitigation for XenServer 6.2: We doubled that constant to allow up to 2,048 devices.
    • Mitigation for future: Move away from blktap2 altogether?
  3. aio requests in dom0

    • Cause of limitation: Each blktap2 instance creates an asynchronous I/O context for receiving 402 events; the default system-wide number of aio requests (fs.aio-max-nr) was 444,416 in XenServer 6.1.
    • Mitigation for XenServer 6.2: We set fs.aio-max-nr to 1,048,576 (see the sketch after this list).
    • Mitigation for future: Increase this parameter yet further. It's not clear whether there's a ceiling, but it looks like this would be okay.
  4. dom0 grant references

    • Cause of limitation: Windows VMs used receive-side copy (RSC) by default in XenServer 6.1. In netbk_p1_setup, netback allocates 22 grant-table entries per virtual interface for RSC. But dom0 only had a total of 8,192 grant-table entries in XenServer 6.1.
    • Mitigation for XenServer 6.2: We could have increased the size of the grant-table, but for other reasons RSC is no longer the default for Windows VMs in XenServer 6.2, so this limitation no longer applies.
    • Mitigation for future: Continue to leave RSC disabled by default.
  5. Connections to xenstored

    • Cause of limitation: xenstored uses select(2), which can only listen on up to 1,024 file descriptors; qemu opens 3 file descriptors to xenstored.
    • Mitigation for XenServer 6.2: We made two qemu watches share a connection.
    • Mitigation for future: We could modify xenstored to accept more connections, but in the future we expect to be using upstream qemu, which doesn't connect to xenstored, so it's unlikely that xenstored will run out of connections.
  6. Connections to consoled

    • Cause of limitation: Similarly, consoled uses select(2), and each PV domain opens 3 file descriptors to consoled.
    • Mitigation for XenServer 6.2: We switched consoled to use poll(2) rather than select(2), which has no such limitation.
    • Mitigation for future: Continue to use poll(2).
  7. dom0 low memory

    • Cause of limitation: Each running VM eats about 1 MB of dom0 low memory.
    • Mitigation for future: Using a 64-bit dom0 would remove this limit.
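As referenced in item 3 above, the aio ceiling is simply a sysctl in dom0. The commands below are purely illustrative - XenServer 6.2 already ships with the higher value - and only show how the limit can be inspected or raised:

# Read the current system-wide aio request limit
sysctl fs.aio-max-nr
# Raise it to the value XenServer 6.2 uses out of the box
sysctl -w fs.aio-max-nr=1048576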

Summary of limits

Okay, so what does this all mean in terms of how many VMs you can run on a host? Well, since some of the limits concern your VM configuration, it depends on the type of VM you have in mind.

Let's take the example of Windows VMs with PV drivers, each with 1 vCPU, 3 disks and 1 network interface. Here are the number of those VMs you'd have to run on a host in order to hit each limitation:

Limitation              XS 6.1      XS 6.2      Future
dom0 event channels     150 *       570         no limit
blktap minor numbers    341         682         no limit
aio requests            368         869         no limit
dom0 grant references   372         no limit    no limit
xenstored connections   333         500 *       no limit
consoled connections    no limit    no limit    no limit
dom0 low memory         650         650         no limit
(* marks the first limit reached in that release)

The first limit you'd arrive at in each release is marked with an asterisk. So the overall limit is event channels in XenServer 6.1, limiting us to 150 of these VMs. In XenServer 6.2, it's the number of xenstored connections that limits us to 500 VMs per host. In the future, none of these limits will hit us, but there will surely be an eighth limit when running many more than 500 VMs on a host.

What about Linux guests? Here's where we stand for paravirtualised Linux VMs each with 1 vCPU, 1 disk and 1 network interface:

Limitation              XS 6.1      XS 6.2      Future
dom0 event channels     225 *       1000        no limit
blktap minor numbers    1024        2048        no limit
aio requests            368         869         no limit
dom0 grant references   no limit    no limit    no limit
xenstored connections   no limit    no limit    no limit
consoled connections    341         no limit    no limit
dom0 low memory         650         650 *       no limit
(* marks the first limit reached in that release)

This explains why the supported limit for Linux guests can be as high as 650 in XenServer 6.2. Again, in the future, we'll likely be limited by something else above 650 VMs.

What about the soft limits?

After having pushed the hard limits such a long way out, we then needed to turn our attention towards ensuring that there weren't any soft limits that would make it infeasible to run a large number of VMs in practice.

Felipe Franciosi has already described how qemu's utilisation of dom0 CPUs can be reduced by avoiding the emulation of unneeded virtual devices. The other major change in XenServer 6.2 to reduce dom0 load was to reduce the amount of xenstore traffic. This was achieved by replacing code that polled xenstore with code that registers watches on xenstore and by removing some spurious xenstore accesses from the Windows guest agent.
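To make the polling-versus-watching distinction concrete, here is a rough dom0 illustration using the standard xenstore command-line tools (the key chosen is arbitrary; this is not the toolstack code itself):

# Polling: wakes up and re-reads the key every second, whether or not it changed
while true; do xenstore-read /local/domain/0/name; sleep 1; done

# Watching: blocks until the key changes, then reports it
xenstore-watch /local/domain/0/name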

These things combine to keep dom0 CPU load down to a very low level. This means that VMs can remain healthy and responsive, even when running a very large number of VMs.

Recent comment in this post
Tobias Kreidl
We see xenstored eat anywhere from 30 to 70% of a CPU with something like 80 VMs running under XenServer 6.1. When major updates t... Read More
Wednesday, 13 November 2013 17:10
Continue reading
24058 Hits
1 Comment

How did we increase VM density in XenServer 6.2?

One of the most noteworthy improvements in XenServer 6.2 is the support for a significantly increased number of VMs running on a host: now up to 500 Windows VMs or 650 Linux VMs.

We needed to remove several obstacles in order to achieve this huge step up. Perhaps the most important of the technical changes that led to this was to increase the number of event channels available to dom0 (the control domain) from 1024 to 4096. This blog post is an attempt to shed some light on what these event channels are, and why they play a key role in VM density limits.

What is an event channel?

It's a channel for communications between a pair of VMs. An event channel is typically used by one VM to notify another VM about something. For example, a VM's paravirtualised disk driver would use an event channel to notify dom0 of the presence of newly written data in a region of memory shared with dom0.

Here are the various things that a VM requires an event channel for:

  • one per virtual disk;
  • one per virtual network interface;
  • one for communications with xenstore;
  • for HVM guests, one per virtual CPU (rising to two in XenServer 6.2); and
  • for PV guests, one to communicate with the console daemon.


Therefore a VM will typically require at least four dom0 event channels, depending on its configuration; needing more than ten is not an uncommon configuration.

Why can event channels cause scalability problems when trying to run lots of VMs?

The total number of event channels any domain can use is part of a shared structure in the interface between a paravirtualised VM and the hypervisor; it is fixed at 1024 for 32-bit domains such as XenServer's dom0. Moreover, there are normally around 50-100 event channels used for other purposes, such as physical interrupts; this is normally related to the number of physical devices in your host. This overhead means that in practice there may be only around 900-950 event channels available for VM use. So the number of available event channels becomes a limited resource that can impose a hard limit on the number of VMs you can run on a host.

To take an example: before XenServer 6.2, if each of your VMs required 6 dom0 event channels (e.g. an HVM guest with 3 virtual disks, 1 virtual network interface and 1 virtual CPU), then you'd probably find yourself running out of dom0 event channels if you went much over 150 VMs (roughly 900 available ÷ 6 per VM).

In XenServer 6.2, we have made a special case for our dom0 to allow it to behave differently to other 32-bit domains to allow it to use up to four times the normal number of event channels. Hence there are now a total of 4096 event channels available.

So, on XenServer 6.2 in the same scenario as the example above, even though each VM of this type would now use 7 dom0 event channels, the increased total number of dom0 event channels means you'd have to run over 570 of them before running out.

What happens when I run out of event channels?

On VM startup, the XenServer toolstack will try to plumb all the event channels through from dom0 to the nascent VM. If there are no spare slots, the connection will fail. The exact failure mode depends on which subsystem the event channel was intended for use in, but you may see error messages like these when the toolstack tries to connect up the next event channel after having run out:

error 28 mapping ring-refs and evtchn
message: xenopsd internal error: Device.Ioemu_failed("qemu-dm exited unexpectedly")

In other words, it's not pretty. The VM either won't boot or will run with reduced functionality.

That sounds scary. How can I tell whether there are sufficient spare event channels to start another VM?

XenServer has a utility called "lsevtchn" that allows you to inspect the event channel plumbing.

In dom0, run the following command to see what event channels are connected to a particular domain.

/usr/lib/xen/bin/lsevtchn <domid>

For example, here is the output from a PV domain with domid 36:

[root@xs62 ~]# /usr/lib/xen/bin/lsevtchn 36
   1: VCPU 0: Interdomain (Connected) - Remote Domain 0, Port 51
   2: VCPU 0: Interdomain (Connected) - Remote Domain 0, Port 52
   3: VCPU 0: Virtual IRQ 0
   4: VCPU 0: IPI
   5: VCPU 0: IPI
   6: VCPU 0: Virtual IRQ 1
   7: VCPU 0: IPI
   8: VCPU 0: Interdomain (Connected) - Remote Domain 0, Port 55
   9: VCPU 0: Interdomain (Connected) - Remote Domain 0, Port 53
  10: VCPU 0: Interdomain (Connected) - Remote Domain 0, Port 54
  11: VCPU 0: Interdomain (Connected) - Remote Domain 0, Port 56

You can see that six of this VM's event channels are connected to dom0.

But the domain we are most interested in is dom0. The total number of event channels connected to dom0 can be determined by running

/usr/lib/xen/bin/lsevtchn 0 | wc -l

Before XenServer 6.2, if that number is close to 1024 then your host is on the verge of not being able to run an additional VM. On XenServer 6.2, the number to watch out for is 4096. However, before you'd be able to get enough VMs up and running to approach that limit, there are various other things you might run into depending on configuration and workload. Watch out for further blog posts describing how we have cleared more of these hurdles in XenServer 6.2.
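Putting these pieces together, here is a small dom0 helper script - an illustrative sketch rather than a supported tool - that reports how close dom0 is to its event channel ceiling:

#!/bin/bash
# Report dom0 event channel usage against the ceiling.
LIMIT=4096   # use 1024 on releases prior to XenServer 6.2
USED=$(/usr/lib/xen/bin/lsevtchn 0 | wc -l)
echo "dom0 event channels in use: $USED of $LIMIT"
if [ "$USED" -gt $((LIMIT * 90 / 100)) ]; then
    echo "Warning: more than 90% of dom0 event channels are in use"
fi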

Continue reading
65098 Hits
0 Comments

About XenServer

XenServer is the leading open source virtualization platform, powered by the Xen Project hypervisor and the XAPI toolstack. It is used in the world's largest clouds and enterprises.
 
Commercial support for XenServer is available from Citrix.