
XenServer Creedence Alpha 2 Released

We're pleased to announce that XenServer Creedence Alpha 2 has been released. Alpha 2 builds on the capabilities seen in Alpha 1, and we're interested in your feedback on this release. With Alpha 1, we were primarily interested in receiving basic feedback on the stability of the code; with Alpha 2, we're interested in feedback not only on basic operations, but also on storage performance.

The following functional enhancements are contained in Alpha 2:

  • Storage read caching. Boot storm conditions in environments using common templates can create unnecessary IO on shared storage systems. Storage read caching uses free dom0 memory to cache common read IO and reduce the impact of boot storms on storage networks and NAS devices (a simplified sketch of the idea follows this list).
  • DM Multipath storage support. For users of legacy MPP-RDAC, note that this functionality has been deprecated in XenServer Creedence, following storage industry practice. If you are still using MPP-RDAC with XenServer 6.2 or earlier, please file an incident at https://bugs.xenserver.org to record your usage so that we can develop appropriate guidance.
  • Support for Ubuntu 14.04 and CentOS 5.10 as guest operating systems.
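
To give a feel for how storage read caching helps during a boot storm, here is a minimal, hypothetical sketch of a block-level LRU read cache in Python. It is purely illustrative: the class, names and sizes below are invented for this example and do not reflect XenServer's actual dom0 implementation or any of its APIs.

    from collections import OrderedDict

    class ReadCache:
        """Toy LRU read cache for fixed-size blocks (illustration only;
        XenServer's storage read caching lives in dom0 and is transparent
        to guests -- nothing here mirrors its real interfaces)."""

        def __init__(self, backing_store, capacity_blocks=1024):
            self.backing = backing_store        # hypothetical object with read_block(idx) -> bytes
            self.capacity = capacity_blocks     # how many blocks the cache may hold
            self.cache = OrderedDict()          # block index -> block data, in LRU order
            self.hits = 0
            self.misses = 0

        def read_block(self, idx):
            if idx in self.cache:
                self.cache.move_to_end(idx)     # mark block as most recently used
                self.hits += 1
                return self.cache[idx]
            data = self.backing.read_block(idx) # cache miss: read from the (slow) shared storage
            self.misses += 1
            self.cache[idx] = data
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict the least recently used block
            return data

During a boot storm, many guests read the same blocks of a common template, so once the first guest's reads have populated the cache the remaining reads are largely served from memory; that is why pressure on the shared storage network and NAS devices drops.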

The following performance improvements were observed with Alpha 2 compared to Alpha 1, but we'd like to hear about your experiences (a short illustrative calculation follows the list):

  • GRO enabled physical network to guest network performance improved by 65%
  • Aggregate network throughput improved by 50%
  • Disk IO throughput improved by 100%
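
To put these percentages in context, here is a quick back-of-the-envelope calculation showing how a relative improvement maps onto an absolute figure. The baseline numbers are invented purely for illustration; they are not measurements from Alpha 1 or Alpha 2.

    # Hypothetical baseline figures -- for illustration only, not measured results.
    metrics = {
        "GRO-enabled physical-to-guest throughput (Gbit/s)": (4.0, 0.65),   # +65%
        "aggregate network throughput (Gbit/s)":             (10.0, 0.50),  # +50%
        "disk IO throughput (MB/s)":                         (300.0, 1.00), # +100%
    }

    for name, (before, gain) in metrics.items():
        after = before * (1 + gain)
        print(f"{name}: {before:g} -> {after:g}  (+{gain:.0%})")

So, for example, a hypothetical 300 MB/s of disk IO throughput in Alpha 1 would become roughly 600 MB/s with a 100% improvement.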

While these improvements are rather impressive, we do need to be aware that this is alpha code. In practice, this means that once we start looking at overall scalability, the final performance numbers could come down a bit to ensure stable operation. That being said, if you have performance issues with this alpha, we want to hear about them. Please also watch this blog space for updates from our performance engineering team detailing how some of these improvements were measured.

 

Please do download XenServer Creedence Alpha 2, and provide your feedback in our incident database.     


Comments (17)

Tobias Kreidl on Tuesday, 10 June 2014 14:42

In Creedence Alpha 1, we did not see any discernible storage performance difference compared to XS 6.2 SP1, so it will definitely be interesting to see how disk I/O compares in the Alpha 2 release. Keep up the great work!

Guest - Joshua Foster on Friday, 13 June 2014 15:26

Are there plans to use a modern OS as the base for Dom0 like CentOS 7?

James Bulpin on Friday, 13 June 2014 15:44

We'd very much like to move to CentOS 7 when it becomes available. However, the timing of this, and the need to integrate and stabilize, may mean we stick with 5.x for a little longer. At this point, with 7 being so close, we'll likely skip over 6.x.

Bruno de Paula Larini on Thursday, 26 June 2014 12:27

But are there plans to support RHEL7/CentOS 7 guests?

Tobias Kreidl on Friday, 13 June 2014 19:34

James,
Would this mean that XFS would be adopted as the file system of choice for at least dom0 if CentOS 7 were adopted?
-=Tobias

Adriano Criscuolo on Friday, 13 June 2014 20:20

With the "Storage read caching" feature, will it also be possible to use local SSD disks?

Thanks
Adriano

Martin Cerveny on Saturday, 14 June 2014 17:59

Hello.

When will NVIDIA GRID vGPU for "Creedence" be released (e.g. 64-bit drivers & libraries)?
The second release of "Creedence" is here, but the vGPU plugin is intentionally still missing!
(https://wiki.xenserver.org/index.php?title=XenServer_Creedence_Alpha_Release#vGPU)

Thanks, Martin Cerveny

Martin Cerveny on Tuesday, 08 July 2014 10:35
The driver is here (in beta): http://www.nvidia.com/download/driverResults.aspx/76862/en-us
Tim Mackey on Sunday, 15 June 2014 01:07

Martin,

The NVIDIA stuff is actually part of a XenDesktop feature set, so availability and functionality are owned by that team. I can't speak to which parts of vGPU will end up in the XenServer open source code, but I wouldn't expect everything to end up here, nor would I have expectations on future serviceability until a supported release occurs from both teams.

-tim

Guest - Enoch on Monday, 16 June 2014 04:31

I want to know whether Ceph integration (via libvirt) will be available soon. And how about read caching on SSD, which I've heard people mention over and over again? These are the two most interesting features to me.

Tim Mackey on Monday, 16 June 2014 08:16

Enoch,

XenServer uses XAPI for its toolstack, not libvirt, so anything using libvirt as an integration model would be tricky. In terms of read caching, this alpha build uses dom0 memory for read caching. Using local disk (including SSD) as a cache is called IntelliCache and has been in XenServer for a number of years now.

-tim

Guest - Enoch on Tuesday, 17 June 2014 07:01

Thanks Tim for the info about IntelliCache. I am not very sure about the marketing name.

For the Ceph part, I am referring to the tech preview that was released last year. My understanding is that the preview is actually an interface for XAPI to work with libvirt, in which Ceph RBDs are defined.

http://wiki.xenproject.org/wiki/Ceph_and_libvirt_technology_preview

Guest - james on Tuesday, 17 June 2014 06:44

@Tim,
Looking forward to a blog post on the latest XAPI 2.0, as there is a lot of interest in the cloud community about the capabilities and features this brings to the table.
thanks,
James

Guest - MT on Monday, 16 June 2014 18:42

Does the disk performance increase also affect iSCSI performance in any way? Can we expect faster reads/writes over iSCSI in Alpha 2?

Guest - Kai Qian on Thursday, 03 July 2014 07:09

Is it easy to assign more vCPUs to a Windows guest in the new version of XenServer, without needing to type commands to do it?

Lingfei Kong on Wednesday, 24 December 2014 05:14

Hi Tim,
Are RHEL7/RHEL6.6 guests supported on Creedence? If not, what may happen to a RHEL7/RHEL6.6 guest running on Creedence?
Thanks!

Guest - james on Friday, 26 December 2014 03:54

The XS Creedence release candidate is out. I don't know the official word on RHEL 7, but informally they work fine.

Please try the latest RC build.

