Virtualization Blog

Discussions and observations on virtualization.

Tech preview of XenServer + libvirt + ceph

One of the benefits of making XenServer fully open-source is that it’s easier to integrate directly with other great open-source projects. A project that I’m particularly interested in is Ceph: a distributed storage system which is particularly suitable for storing VM disk images in the cloud. The Ceph community has already integrated support directly into two other open-source projects:

  • qemu: the virtual hardware emulator used by Xen and KVM
  • libvirt: an open-source virtualisation toolkit 

All that’s needed to make Ceph work with XenServer is therefore:

  1. to use a newer version of qemu (in Xen jargon this is called “the upstream qemu”)
  2. to integrate libvirt with the XenServer toolstack (i.e. xapi, xenopsd and friends)

After much coffee-fuelled late-night hacking^Wsoftware development, I’m pleased to announce the availability of a “technology preview” of XenServer integrated with Ceph via libvirt. To give it a go, first install a normal CentOS 6.4 x86_64 system. Then log in as root and type:

rpm -ihv http://xenbits.xen.org/djs/xenserver-tech-preview-release-0.3.0-0.x86_64.rpm

This will add the experimental preview software repository based on the Xen code already in CentOS. Next type:

yum install xenserver-core

Notice how easy it is to install the core packages of XenServer using the normal CentOS distro tools. Now that XenServer is fully open-source, expect to see more of this in the future!

It’s now necessary to set up a basic XenServer configuration. The easiest way is to use a simple install “wizard” by typing:

xenserver-install-wizard

After the wizard has done its magic and you’ve rebooted, you should be able to connect XenCenter to it just as you would to a regular XenServer.

Assuming you have already configured a Ceph cluster, you need to configure your machine as a Ceph client, just as you would a regular CentOS host. I recommend having a read of the excellent Ceph documentation. Once that’s complete, you should be able to list the currently available Ceph storage pools:

ceph osd lspools
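
For reference, a minimal client-side /etc/ceph/ceph.conf can look something like this (the monitor address matches my example setup below, and I disabled cephx authentication for simplicity; with cephx enabled you would point the client at a keyring instead):

[global]
    mon host = 10.80.237.208:6789
    auth supported = none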

Next you should create a XenServer “Storage Repository” to allow VM virtual disks to be stored on Ceph -- this is where libvirt comes in. In dom0, create a libvirt “storage pool” XML file, as if you were going to issue a “virsh pool-create”. I created a file “ceph.xml” which looks like the following:

<pool type='rbd'>
  <name>ceph</name>
  <source>
    <name>rbd</name>
    <host name='10.80.237.208' port='6789'/>
  </source>
</pool>
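
If you want to sanity-check the XML before handing it to xapi, you can create the pool with virsh directly (assuming the virsh client is available in dom0):

virsh pool-create ceph.xml
virsh pool-list
virsh pool-destroy ceph

pool-create makes a transient pool, so destroying it afterwards leaves nothing behind.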

Then type:

xe sr-create type=libvirt name-label=ceph device-config:xml-filename=ceph.xml
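
If all went well, the new SR should be visible through the usual CLI queries, for example:

xe sr-list name-label=ceph params=uuid,name-label,type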

You should now have a functioning XenServer Storage Repository which can be managed via the XenAPI and from XenCenter. At this point you should be able to install VMs (both PV and HVM should work) and run them from the Ceph storage.
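
For example, a quick way to exercise the new SR is to install a VM onto it explicitly (the template name here is the generic HVM one; substitute whichever template you prefer):

SR=$(xe sr-list name-label=ceph params=uuid --minimal)
xe vm-install template="Other install media" new-name-label=ceph-test sr-uuid=$SR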

For more in-depth info about how it all works, known issues and ways you could get involved, have a look at the:

 


Comments (26)

Tobias Kreidl on Monday, 08 July 2013 11:03

Dave,
Awesome... Ceph (or something that can be extended to huge storage farms, supports thin provisioning, block and logical I/O, and built-in self-replication options that can be used to increase high availability) is a fantastic step forward. My biggest concern -- having looked over some of the Ceph manuals -- is that if enough GUI-based set-up and management isn't made available, a lot of users will be left very confused and in the dark. What are the longer-term plans to support an intuitive GUI (a la "XenCenter" or similar), as well as a reasonable basic set of management and set-up commands within the XenServer environment?
Best,
--Tobias

Neil Levine on Tuesday, 09 July 2013 16:17

Hi Tobias, I'm the VP Product at Inktank, who sponsor the Ceph project. We are working on a GUI for Ceph which will be available as part of our commercial enterprise subscription, due in Q3. It's initially focused on monitoring and diagnostics for the first release, but we will be adding deployment and management functions shortly thereafter.

Neil

Tobias Kreidl on Wednesday, 10 July 2013 05:01

Hello, Neil:
Thank you for your note. Will Inktank's ceph GUI be integrated with the general XenServer management interface (not sure we can call it "XenCenter" any more) or will this be run separately, and perhaps, integrated into the same XenServer management GUI later? See my notes below regarding an initial list of desiderata as to features and convenience items. FYI, we're always willing to try to squeeze in beta testing here!

Another item I had to add to the list of desired features: pool-based IntelliCache would be a great feature, and since Ceph can leverage thin provisioning, it would be especially beneficial if it could be used:
(1) on dedicated LUNs on a storage array (consisting, for example, of a number of SSD drives) made into pool-aware SRs used directly as IntelliCache from multiple XenServers;
(2) as a secondary IntelliCache used in conjunction with any XenServer's local SR IntelliCache, such that all writes would go to both (as well as, of course, to the real VM's SRs), but reads would come from the local XS's IntelliCache SR. If the VM and its storage were migrated to a different XS within the pool, the local SR IntelliCache would be abandoned, and once the VM and its storage are resident on the new XS, the local SR IntelliCache could be re-populated from the pooled cache on the storage array;
(3) as an overflow for IntelliCache that can no longer fit on a local XS's IntelliCache storage.
I think an extension like this would make for incredible flexibility, and would also make more efficient use of SSD storage, which would not have to remain underutilized on any given local XenServer and could instead become a pool-wide resource.
Best regards,
--Tobias

Guest - Neil Levine on Wednesday, 10 July 2013 19:36

The Ceph GUI will be specific for managing the storage cluster and won't touch the compute side. It will use openly available APIs though, so XenCenter could use these to build its own hooks into Ceph.

Re: IntelliCache -- not sure how the blueprint process works for XenServer development, but it sounds like a good submission idea.

Guy Brunsdon on Monday, 08 July 2013 14:00

Hi Tobias,
We certainly want to ensure the finished product is consumable and usable by the personas we are targeting with the feature/capability. That means the user experience is very important.
What use cases are you looking at, and how would you like to administer it?
Guy

Tobias Kreidl on Monday, 08 July 2013 22:21

@Guy:
Greetings. Ceph is, of course, still in a rapid development stage, so it's hard to not get carried away with desired features to be incorporated into the first released product that makes use of it. As to basics, the obvious ones would parallel what's currently available: create and destroy SRs, ensure coalescing is performed correctly where applicable, support multipathing, provide for basic I/O monitoring, etc.
In addition, the option to specify thin provisioning is very important, especially for XenDesktop users, and it needs to be picked up by XenDesktop connections and recognized as such (in addition to the current ext and NFS devices). IntelliCache should be supported.
Are there plans to accommodate VDI disks > 2 TB in size? Is general ALUA support an option or is this superfluous/impossible because of the nature of combining a wide variety of storage devices?
What about concerns about how data are distributed? This is really important from a security/liability stance, as certain types of data should not be freely spread around. A built-in data encryption option would be another big item of interest in that regard.
Having some control over the storage access speed would also be important, e.g. at least low, medium, high (low could use slower SATA drives, for example, while high could leverage SSD drives).
I also understand that the fall 2013 (October?) release of Ceph plans to add replication groups that would ensure high availability by giving more control over replication and duplication locations (partly also addressing security concerns). That's a great idea, but it may come too late to be incorporated in the first go-around.
I'm sure other thoughts will come to mind as soon as I post this, and I certainly hope others chime in with their wishes.
Best regards,
--Tobias

Guest - Geraint Jones on Thursday, 11 July 2013 05:34

How usable is this code?

Were the changes needed limited in scope, or large and wide-ranging?

We currently use ceph and are looking to deploy XS to a new build and would love to use a ceph backend for it :)

Super User on Thursday, 11 July 2013 14:09

@Geraint the code is experimental, so not suitable for production yet. I think the architecture is fairly solid and it has been working ok in my (simple) environment. The main thing we need to do is to fill in a few corner cases and then test it thoroughly. Having said that, it's still possible to change the design if it turns out that it's sub-optimal. So if you have the time to "kick the tyres", I'd love to hear your feedback!

Guest - Matt on Thursday, 11 July 2013 23:47

That's great. Thanks for your work.

A couple of comments: you have to disable cephx authentication, or else libvirt comes up with an error after "xe sr-create type=libvirt name-label=ceph device-config:xml-filename=ceph.xml".

Setting "auth_supported=none" in ceph.conf did the trick for me.

Also, I can't get the ISO recognised in Xen for some reason. I tried adding the replacement XenCenterMain.exe to my XenCenter installation, but it came up with this error.

2013-07-12 11:39:38,159 ERROR XenAdmin.Program [Named pipe thread] - Exception in Invoke (ControlType=XenAdmin.MainWindow, MethodName=b__12)
System.ObjectDisposedException: Cannot access a disposed object.
Object name: 'MainWindow'.
at System.Windows.Forms.Control.MarshaledInvoke(Control caller, Delegate method, Object[] args, Boolean synchronous)
at System.Windows.Forms.Control.Invoke(Delegate method, Object[] args)
at XenAdmin.Program.Invoke(Control c, MethodInvoker f)

Which version is this supposed to be applied to?

GizmoChicken on Monday, 22 July 2013 21:22

Although the XAPI toolstack has been available on Debian/Ubuntu through Kronos for some time, the XAPI version seems to be a few iterations older than that found in XenServer 6.2. So it will be great to see a (hopefully) up-to-date version of XAPI available on CentOS 6.4!

Will an up-to-date version of XAPI be made available for Debian/Ubuntu (through Kronos or otherwise) any time soon?

James Bulpin on Friday, 26 July 2013 16:20

I found that NetworkManager was setting up DHCP for the backend VIFs in dom0, so I followed the instructions at http://lists.xen.org/archives/html/xen-users/2013-04/msg00150.html to tell NM not to manage VIFs.
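
One way to do that is via NetworkManager's keyfile unmanaged-devices setting (the exact device-specifier syntax depends on the NM version), roughly:

[main]
plugins=ifcfg-rh,keyfile

[keyfile]
unmanaged-devices=interface-name:vif*;interface-name:tap*

followed by a "service NetworkManager restart".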

Mario on Monday, 30 December 2013 17:58

Hi,
I have just tried your tech preview.
I cannot configure the network from XenCenter.
I also tried to add an NFS SR, and it does not work.

Any news about it?

Can you help me?

Thanks,
Mario

Tobias Kreidl on Friday, 14 March 2014 19:31

Is there no new development on Ceph and integration into the Xen Project? I see nothing these days about anything specific to Ceph since July 2013. Is Ceph integration even still being planned???

Tobias Kreidl on Sunday, 30 March 2014 22:24

So, if Ceph integration with XenServer comes out in Q4 2014 or Q1 2015, is it likely to use the current XenServer architecture, would it be combined with the Windsor release, or is that decision still open? In particular, in light of the recent release of Virtual SAN by VMware, it seems there should be some pressure to come up with a supported distributed storage option for XenServer sooner rather than later. Support for IntelliCache and thin provisioning on only very few storage options is starting to become a real cost issue for users who need to support hundreds of VMs and manage their storage efficiently and cost-effectively.

Guest - Paul279 on Tuesday, 12 August 2014 13:36

Is Ceph integration coming? This article is from July 2013 -- is there a statement anywhere about Ceph integration? Will it be integrated in XenServer 6.3?
Thanks for any information

Tim Mackey on Tuesday, 12 August 2014 13:40

@Paul279,

This was a tech preview, but we're seeking feedback on the post-Creedence product makeup here: http://xenserver.org/blog/entry/beyond-creedence-xenserver-2015-planning.html

Alex on Wednesday, 21 January 2015 12:27

Hello. Is it possible to attach Ceph on XenServer 6.5 or a nightly build?

Stefan on Tuesday, 27 January 2015 14:33

@Alex
Good question! In XenServer 6.5 the rbd and libceph kernel modules exist, so I guess we have something to work with, but I'm not sure how to use them.

Guest - Alex on Thursday, 26 March 2015 11:40

It's good news! I can mount the rbd, but I don't understand how to create an SR. XenServer does not have an "rbd" SR type.

Guest - Stefan on Saturday, 28 March 2015 07:32

Ok, that is good news!
Could you please tell us how you managed to do that?
If you mount the rbd manually or via a script, you should be able to use it like any other disk device, with something like:

sr-create device-config:device=/dev/rbd/mycephrbd host-uuid= type=lvm name-label="Ceph rbd SR"

This is just written from memory, so some parameter might be missing, but you should get the idea? :-)
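
Spelled out a bit more fully (the pool/image names and the choice of LVM here are just examples, and this assumes the rbd CLI tool is available in dom0):

rbd map rbd/mycephrbd
xe sr-create host-uuid=$(xe host-list --minimal) type=lvm content-type=user name-label="Ceph rbd SR" device-config:device=/dev/rbd/rbd/mycephrbd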

