Creedence Release Candidate Available

Just in time for the holiday season, we're pleased to announce another tech toy for the geeks of the world to play with. Of course XenServer is serious business, but just like many kids' toys, the arrival of Creedence is eagerly awaited. As I mentioned earlier this week, you'll have to wait a bit longer for the official release, but today you can download the release candidate and see exactly what the world of Creedence should look like. Andy also mentioned last week that we're closing out the alpha/beta program, and as part of that effort the nightly Creedence snapshot page has been removed. You can still access the final beta (beta.3) on the pre-release page, but all prior builds have been removed. The pre-release page is also where you can download the release candidate.

What's in the Release Candidate

Performance tuning

The release candidate contains a number of bug fixes, but it has also had some performance tuning. This tuning is a little different from what we normally talk about, so if you've been benchmarking Creedence, you'll want to double-check your numbers against the release candidate. What we've done is look at the interaction of a variety of system components and put in limits on how hard we'll let you push them. Our first objective is a rock-solid system, and while this work doesn't change any configuration limits (at least not yet; that comes later in our cycle), it could reduce some of the headroom you might have seen with a prior build. It's also possible you'll see better headroom thanks to improved overall system stability, so running a performance test or two isn't a bad idea.

Core bug fixes over beta.3

  • multipath.conf is now preserved as multipath.conf.bak on upgrade
  • The default cpufreq governor is now set to performance
  • Fixes for XSA-109 through XSA-114 inclusive
  • The number of PIRQs has been increased beyond 256 to support large numbers of NICs per host

What we'd like you to do with this build

The two core things we'd like you to do with this build are:

  1. If you've reported any issue at https://bugs.xenserver.org, please validate that we did indeed get the issue addressed.
  2. If you can, run this release candidate through its paces. We think it's nice and solid, and hope you do too.

Lastly, I'd like to take this opportunity to wish everyone in our community a festive end to 2014, and I hope that whatever celebrating you might do is enjoyable. 2014 was an exciting year for XenServer, and that's in large part due to the contributions of everyone reading this blog and working with Creedence. Thank you.

 

-tim     


Comments (15)

Tassos Papadopoulos on Monday, 22 December 2014 07:21

Boot from iSCSI is not supported on XS 6.2; we had to do some hacks to make it possible on Cisco UCS blades. Are you going to support it in the Creedence release?

Guest - Frediano Ziglio on Monday, 22 December 2014 16:27

Yes, boot from iSCSI received a lot of attention in this release.

Frediano (former XenServer developer)

Itaru OGAWA on Monday, 22 December 2014 17:03

Is "xe vm-export" performance improved in Creedence?

From my test on the RC, it looks similar, around 15 MB/sec, even on a 10GbE link:

[root@cloud05 ~]# time xe vm-export uuid=c2ffa2c7-6a86-60e0-7761-46f2602eb121 filename=/mnt/test/normal.xva
Export succeeded

real 3m26.173s
user 0m0.660s
sys 0m3.220s


[root@cloud05 ~]# time xe vm-export compress=false uuid=c2ffa2c7-6a86-60e0-7761-46f2602eb121 filename=/mnt/test/uncompressed.xva
Export succeeded

real 3m27.461s
user 0m0.744s
sys 0m3.164s


[root@cloud05 ~]# time xe vm-export compress=true uuid=c2ffa2c7-6a86-60e0-7761-46f2602eb121 filename=/mnt/test/compressed.xva
Export succeeded

real 4m29.471s
user 0m0.336s
sys 0m1.204s

[root@cloud05 ~]# ls -la /mnt/test/
total 7326850
drwxr-xr-x 2 root root 0 Dec 23 00:07 .
drwxr-xr-x 3 root root 4096 Dec 22 23:59 ..
-rwxr-xr-x 0 root root 823917077 Dec 23 01:38 compressed.xva
-rwxr-xr-x 0 root root 3339386368 Dec 23 01:22 normal.xva
-rwxr-xr-x 0 root root 3339386368 Dec 23 01:28 uncompressed.xva

/mnt/test is an SSD-backed CIFS share.
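
(A quick arithmetic check, for reference: 3,339,386,368 bytes in roughly 206 seconds works out to about 15.4 MiB/s, which is consistent with the ~15 MB/sec figure above.)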

Guest - Nathan Payne on Monday, 22 December 2014 22:51

Will there be an upgrade path from this "Release Candidate" build to the official XenServer release that will be supported by Citrix?

Tim Mackey on Monday, 05 January 2015 19:53

@Nathan,

Since this is pre-release software intended for testing, no official upgrade path exists to the final release. In practice it may well work, but you shouldn't count on it (we don't test that path), and if you ran into an issue down the road, support might not be able to sort out what happened.

Guest - Niklas on Friday, 26 December 2014 19:25

Hello,

I am also wondering about the export performance. Since we're using shell scripts to do backups, we rely heavily on vm-export finishing in time before the next cycle starts.
Our only solution for now is to split the VMs across several smaller pools instead of running one big pool, because the script is just a for loop running vm-export (one export at a time) over a list of VMs with a specific tag, roughly like the sketch below.
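
For illustration, a minimal sketch of that kind of loop (the tag "nightly-backup" and the path /mnt/backup are placeholders; this is not the actual script):

#!/bin/bash
# Export every VM carrying a given tag, one at a time;
# each xe vm-export blocks until that export completes.
TAG="nightly-backup"
DEST="/mnt/backup"
for uuid in $(xe vm-list tags:contains="$TAG" is-control-domain=false --minimal | tr ',' ' '); do
    label=$(xe vm-param-get uuid="$uuid" param-name=name-label)
    xe vm-export uuid="$uuid" filename="$DEST/${label}-$(date +%F).xva"
done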
//Niklas

Tobias Kreidl on Friday, 26 December 2014 22:01

There is still, I believe, a limit of four such export processes per XenServer host, and at one point there was also a limit of consuming no more than 80% of the primary network interface (if that network was being used). Specifying a different network for XenServer operations is one way to address the issue; a sketch follows below. Splitting your VMs into smaller pools is certainly one option, as is using XenMotion to move them onto a XenServer that isn't so busy, specifically for backup purposes. Otherwise you need a backup plan that is more spread out over time. The destination storage can also be a big part of the bottleneck if it is slow, so the network isn't always the main issue. And with multiple exports running you put a lot of I/O pressure on your SRs, so spreading out storage is beneficial as well.
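
For illustration, a minimal sketch of moving the management interface (which carries export traffic) onto a different PIF; the UUID and addresses are placeholders, and the target PIF needs an IP before the switch:

# Find the PIF on the desired network.
xe pif-list params=uuid,device,IP
# Give the chosen PIF a static address if it doesn't have one yet.
xe pif-reconfigure-ip uuid=<pif-uuid> mode=static IP=10.0.1.10 netmask=255.255.255.0
# Repoint the management interface at that PIF.
xe host-management-reconfigure pif-uuid=<pif-uuid>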

Werner Reuser on Saturday, 27 December 2014 14:44

The best approach would be a backup solution where you snapshot on the storage appliance, mount the snapshot directly on the backup server, and work from there. In my experience, backing up through a host, especially now that hosts seem to run more and more VMs, causes performance problems.

Tobias Kreidl on Saturday, 27 December 2014 18:39

@Werner: That is precisely why a number of successful storage companies already provide this capability, not to mention that some support unlimited snapshot chains, the ability to clone and replicate snapshots, the means to duplicate snapshots to other independent storage devices for DR, etc. I agree that even being able to create a VM snapshot onto an external storage device, or at least a separate SR, would make subsequent processing more efficient.

As the number of VMs a XenServer host can support increases, backups are indeed becoming more of a concern. Merely evacuating a XenServer with 100 or so VMs is a very slow process, let alone backing them up, so being able to support 500 VMs per host only becomes attractive if other operations scale along with that enhancement.

Werner Reuser on Tuesday, 30 December 2014 08:43

By the way, I still miss the option in XenServer where we could simply schedule snapshots for groups of VMs. It wasn't ideal, but it was cheap and it worked; a cron-style sketch of the idea follows below. Together with storage-based snapshots it made a basic DR plan possible.

What would be a handy feature is the ability to mount a snapshot of a storage repository to a pool. As far as I know this isn't possible at the moment. When I work in a VMware environment I sometimes use that to restore a VM from a storage-based snapshot.
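
For illustration, a minimal cron-style sketch of the group-snapshot idea (the tag "snap-hourly" is a placeholder, and retention/cleanup is omitted):

# Run hourly from cron, e.g.:  0 * * * * root /usr/local/bin/snap-by-tag.sh
for uuid in $(xe vm-list tags:contains="snap-hourly" is-control-domain=false --minimal | tr ',' ' '); do
    xe vm-snapshot uuid="$uuid" new-name-label="auto-$(date +%F-%H%M)"
done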

Werner Reuser on Tuesday, 30 December 2014 08:35

What you need is a backup application that can read the contents of a XenServer snapshot or volume. Check out Veeam, for example: they create hypervisor-based snapshots (in their case of VMware's VMFS), then snapshot the entire datastore on the underlying storage appliance, mount that snapshot to the backup software, and start backing up from there. They are capable of reading data from the VMFS backup inside the mounted volume. Once the storage appliance's snapshot has been created, the snapshots on the original datastore are deleted. I'm sure software like Alike or Unitrends (isn't that the name these days?) does much the same thing.

I'm sure something like this can be scripted as well, at least to read the data from the LVHD, although it definitely became more complicated with LVHD; with plain LVM it was fairly easy. On top of this you need something that lets you restore the data, which is why the exports come in handy: they carry all the metadata, so they can simply be imported. Of course the paid backup solutions offer a lot more, such as deduplication and the ability to build a replicated environment based on the changes in VMs. But they're also pretty expensive, and with the number of VMs per host growing and growing, I wouldn't be surprised if they abandoned their current per-host licensing policy.

Guest - james on Monday, 29 December 2014 17:19

@Tobias, good feedback on the desired ability of XS to scale to hundreds of VMs and support efficient backup mechanisms.
Is this already part of the XS-next wish list or feature feedback? If not, please post it for inclusion in new features/enhancements.

Guest - Niklas on Saturday, 27 December 2014 16:35

We are pretty sure the performance problem while exporting is due to some kind of limitation in XenServer.
If I run one vm-export I peak at about 40 MByte/s with an average of 38 MByte/s; if I run two vm-exports I peak at about 80 MByte/s with an average of 76 MByte/s, so running multiple exports is one solution, but not a perfect one.
Since I just have a simple bash script without any intelligence, it would be hard to make it run two exports at a time and keep track of which export is running and which is still to be done, at least with my limited shell-scripting knowledge (though a sketch of the two-at-a-time idea follows below).
The pools hosting the VMs are pretty much idle during the night, and so are the SRs; the SR hosts are very much overpowered, but that's for handling the I/O peaks during the daytime.

I've tried PHPBackup (trial version) and it was a lot faster; I guess this is because it uses the XAPI instead of the vm-export command.
For now we'll just have to keep splitting the pools up to handle our nightly backups, but this is definitely something to put on the wish list for next time.
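
For what it's worth, a minimal sketch of running two exports at a time, letting xargs do the bookkeeping (assumes GNU xargs; the tag and destination path are placeholders):

# -P 2 keeps two exports running until the tagged list is exhausted.
xe vm-list tags:contains="nightly-backup" is-control-domain=false --minimal \
  | tr ',' '\n' \
  | xargs -I{} -P 2 sh -c 'xe vm-export uuid={} filename=/mnt/backup/{}.xva'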
//Niklas

Tobias Kreidl on Monday, 29 December 2014 17:43

Fresh benchmarks with iometer and bonnie++ show XenServer 6.2 and XenServer Creedence release candidate VMs to be very close in performance, for both Windows and Linux VMs. Most of the results were within 2.5 sigma of each other (95th percentile). This was with a stock installation, no tweaks, under conditions as near identical as possible. Very encouraging.

Guest - Chris Beasley on Thursday, 08 January 2015 04:44

Hi Tim,

I haven't seen this asked anywhere so far, so please point me in the right direction if this is not the right place. Does XS 6.5 have native support for IP over InfiniBand? I'm doing some home-lab work and want to see how well cheap IB cards do at creating a 10Gbps home network across VMs. I'd like to know whether XS 6.5 can recognise these cards, set up an IPoIB network (using appropriate switching gear), and then present these high-speed connections as 10Gbps NICs to the VMs.

I've read that vSphere supports this, but I'm curious about XS.

Thanks,

Chris

