Virtualization Blog

Discussions and observations on virtualization.
James is head of technology for the Citrix XenServer product group. He is responsible for XenServer's technical strategy and system architecture. James joined XenSource Inc. at its foundation in 2005. Twitter: @jamesbulpin

Is it really “containers vs. VMs”?

There are some in the Docker and container world who believe there is some kind of competition between Docker and hypervisors; they would have us believe that containers render VMs, and therefore hypervisors, redundant. Is that really true? I think not. I believe that containers and VMs perform complementary roles and add value to each other.

Let's look at what VMs and containers are really all about. First, consider what they have in common: both can be used to encapsulate an application, and therefore both use images containing the application, its libraries and other runtime dependencies (in fact you could argue that a Docker image is conceptually just a VM image without a kernel and init scripts). Hypervisor vendors have been telling us for years to run just one application per OS instance; that's the normal model with AWS AMIs too – again, this looks just like a Docker image.
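To make that "VM image without a kernel and init scripts" point concrete, here is what such an image definition can look like. This is a minimal illustrative sketch, not taken from any particular product: the base image tag and the application name "myapp" are hypothetical.

```shell
# Write a minimal Dockerfile: just the application and its runtime
# dependencies -- no kernel, no init scripts.
# ("myapp" is a hypothetical application name for illustration.)
cat > Dockerfile <<'EOF'
FROM debian:jessie
COPY myapp /usr/local/bin/myapp
CMD ["/usr/local/bin/myapp"]
EOF
```

Everything below the OS userland – kernel, device drivers, boot machinery – is supplied by the host (or, in the hybrid model discussed here, by the VM the container runs in).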

But that's a top-down, application-centric view. Let's now look at it from the infrastructure perspective. The boundary of a container is the boundary of an application: the separation between the internal workings of the application and its external interface. The boundary of a VM is the boundary of the resource allocation, ownership, trust and availability of a piece of abstracted infrastructure.

By separating these application and infrastructure boundaries we get more flexibility than if a single entity tries to implement both:

  • I can put multiple application containers within one VM where they share the same level of trust and only have to worry about protecting the trust boundary once, rather than multiple times for individual applications. VMs' trust and ownership boundaries have long been used to provide multi-tenancy – this isn't just important in public clouds but matters for enterprises that increasingly see applications being provided by individual departments or individual employees.
  • Applications often work together with other applications; that's why Docker has inter-container communication mechanisms such as "links". I can use the application container to keep each app nicely encapsulated, and I can use the VM boundary to put a hard shell around the set of cooperating applications. I can also use this VM boundary to define the unit of resource accounting and reporting.
  • I can put cooperating application containers in a VM to share a common availability boundary; if they're working together then I probably want them to fail and succeed together. Resource isolation boundaries are good for containing faults – I'd rather have the "blast radius" of a faulty container be the VM that contains it and its collaborating applications than an entire server.
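The "links" mechanism mentioned above can be sketched with the Docker CLI. This is illustrative only – the container names and the "example/webapp" image are hypothetical, and a running Docker Engine is assumed:

```shell
# Start a database container, then link an application container to it.
# Docker injects environment variables and an /etc/hosts entry into
# "webapp" so it can reach the database by the alias "db".
docker run -d --name db postgres
docker run -d --name webapp --link db:db example/webapp
```

Both containers here share one trust and availability boundary when placed in the same VM, which is exactly the hybrid arrangement argued for above.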

So am I arguing that VMs are better than containers? Absolutely not. I believe that both mechanisms have a valuable part to play in the deployment of scalable, efficient, secure and flexible systems. That's why we're looking at ways to enhance XenServer to make it a great platform for running containers within VMs. Our recent preview of Docker integration is just the start. As well as requests to support other Docker-optimized Linux distributions (the preview supports CoreOS) we heard that you want to see infrastructure level information made available to higher level management tools for audit and reporting. Stay tuned for more.


Preview of XenServer support for Docker and Container Management

I'm excited to be able to share with you a preview of our new XenServer support for Docker and Container Management. Downloads can be found on the preview page, read on for installation instructions and more details.

Today many Docker applications run in containers within VMs hosted on hypervisors such as XenServer and other distributions of Xen. The synergy between containers as an application isolation mechanism and hypervisors as a secure physical infrastructure virtualization mechanism is something that I'll be blogging more about in the future. I firmly believe that these two technologies add value to each other, especially if they are aware of each other and designed to work together for an even better result.

That's why we've been looking at how we can enhance XenServer to be a great platform for Docker applications and how we can contribute to the Docker ecosystem to best leverage the capabilities and services available from the hypervisor. As a first step in this initiative I'm pleased to announce a preview of our new XenServer support for Docker applications. Those who attended Citrix Summit in January or FOSDEM in February may have seen an earlier version of this support being demo'd.

The preview is designed to work on top of XenServer 6.5 and comes in two parts: a supplemental pack for the servers and a build of XenCenter with the UI changes. XenCenter is installed in the normal Windows manner. The supplemental pack is installed in the same way as other XenServer supp-packs by copying the ISO file to each server in the pool and executing the following command in domain 0:

xe-install-supplemental-pack xscontainer-6.5.0-100205c.iso
mount: xscontainer-6.5.0-100205c.iso is write-protected, mounting read-only
Installing 'XenServer Container Management'...

Preparing...                ########################################### [100%]
   1:guest-templates        ########################################### [ 50%]
Waiting for xapi to signal init complete
Removing any existing built-in templates
Regenerating built-in templates
   2:xscontainer            ########################################### [100%]
Pack installation successful.

So what do you get with this preview? First off, you get support for running CoreOS Linux VMs – CoreOS is a minimal Linux distribution popular for hosting Docker apps. The XenCenter VM installation wizard now includes a template for CoreOS and additional dialogs for setting the VM up (under the hood this creates a cloud-config drive). This process also prepares the VM for management, enabling the main part of the preview's functionality to interact with it.
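For reference, a cloud-config drive carries user-data along the following lines. This is a minimal illustrative sketch – the hostname, SSH key and unit entries are placeholders, and the config XenCenter actually generates may differ:

```shell
# Write a minimal CoreOS-style cloud-config to a local file.
# (All values below are illustrative placeholders.)
cat > user-data <<'EOF'
#cloud-config
hostname: coreos-docker-1
ssh_authorized_keys:
  - ssh-rsa AAAAB3... user@example.com
coreos:
  units:
    - name: docker.service
      command: start
EOF
```

CoreOS reads this user-data at first boot, which is how the wizard can hand a freshly created VM its hostname, credentials and a running Docker service without any manual setup inside the guest.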

[Screenshot: New VM wizard – CoreOS cloud-config settings]

Secondly, and most importantly, XenServer becomes aware of “Container managed” VMs running Docker containers. It queries the VMs to enumerate the application containers running on each and then displays these within XenCenter's infrastructure view. XenCenter also allows interaction with the containers to start, stop and pause them. We want XenServer to be a platform for Docker and complement, not replace, the core part of the Docker application ecosystem, and therefore we expect that the individual Docker Engine instances in the VMs will be managed by one of the many Docker management tools such as Kubernetes, Docker Compose or ShipYard.

[Screenshot: XenCenter infrastructure tree view showing containers]

So what can you do with this preview?

Monitoring and visibility - knowing which VMs are in use for Docker hosting and which containers on them are actually running. Today's interface is more "pets" than "cattle", but we have experience in showing what's going on at greater scale.

Diagnostics - easy access to basic container information such as forwarded network ports and the originating Docker image name. This can help accelerate investigations into problems where either or both of the infrastructure and application layers may be implicated. Going forward we'd like to also provide easy access to the container console.

Performance - spotted a VM that's using a lot of resource? This functionality allows you to see which containers are running on that VM, what processes are running inside them and how much CPU time each has consumed, helping to identify the culprit. In the future we'd like to add per-container resource usage reporting for correlation with the VM-level metrics.
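Inside the VM, the same drill-down can be done with the Docker CLI; this is roughly the information the integration surfaces. The container name "webapp" is hypothetical and a running Docker Engine is assumed:

```shell
docker ps             # list running containers on this VM
docker top webapp     # processes running inside the container
docker stats webapp   # live CPU/memory usage for the container
```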

Control applications - using XenCenter you can start, stop and pause application containers. This feature has a number of use cases in both evaluation and deployment scenarios, including rapidly terminating problematic applications.
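These XenCenter actions correspond to standard Docker Engine operations; from the CLI (hypothetical container name, Docker Engine assumed) they look like:

```shell
docker pause webapp     # freeze the container's processes
docker unpause webapp   # resume them
docker stop webapp      # graceful stop (SIGTERM, then SIGKILL after a timeout)
docker start webapp     # start the container again
```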

We'd love to hear your feedback on this preview: what was useful, what wasn't? What would you like to see that wasn't there? Did you encounter problems or bugs? Please share your feedback using our normal preview feedback mechanism by creating a ticket in the "XenServer Org" (XSO) project at bugs.xenserver.org

This preview is a first step towards a much richer Docker-XenServer mutual awareness and optimization to help bridge the gap between the worlds of the infrastructure administrator and the application developer/administrator. This is just the beginning; we expect to keep improving, extending and enhancing the overall XenServer-Docker experience beyond that. Look out for more blog posts on this topic...

For a detailed guide to using this preview please see this article.


Whatever happened to XenServer's Windsor architecture?

[Slide: XenServer architecture evolution]

At the 2012 Xen Project Developer Summit in San Diego I talked about the evolution of XenServer's architecture, specifically our forward-looking R&D work on a set of architectural changes known as "Windsor". The architecture includes a number of foundational overhauls, such as moving to a 64-bit domain-0 with a PVops kernel and upgrading to the upstream version of qemu (XenServer currently uses a forked Xen Project version and therefore doesn't benefit from new features and improvements made in the more active upstream project). Those of you following the xenserver.org development snapshots will have seen a number of these key component overhauls already.

The more notable changes in the new architecture include various forms of improved modularity within the system including "domain-0 disaggregation" as well as improved intra-component modularity and better internal APIs.

We wanted to do this for various reasons including:

  1. To improve single-host scalability (e.g. the number of VMs and the amount of aggregate I/O the system can sustain) by parallelizing the handling of I/O over a number of driver domains
  2. To enable better multi-host scalability in scale-out cloud environments, primarily by allowing each host to run more independently and therefore reduce the bottleneck effect of the pool master
  3. To create the capability to have additional levels of tenant isolation by having per-tenant driver domains etc.
  4. To allow for possible future third party service VMs (driver domains etc.)


So where are we at with this? In the single-host scalability area, something that Citrix customers care a lot about, we had a parallel effort to improve scale and performance in the short term by scaling up domain-0 (i.e. adding more vCPUs and memory) and tactically removing bottlenecks. We actually did better than we expected with this, so it has reduced the urgency to build the "scale-out" disaggregated solution. Some of this work is described in Jonathan Davies' blog posts: How did we increase VM density in XenServer 6.2? and How did we increase VM density in XenServer 6.2? (part 2)

XenServer today does have some (officially unsupported) mechanisms to run driver domains. These have been used within Citrix in a promising evaluation of storage driver domains for a physical appliance running the Citrix CloudBridge product, performing significant amounts of caching-related I/O to a very large number of local SSDs spread across a number of RAID controllers. This is an area where the scale-out parallelism of Windsor is well suited.

On the multi-host scalability side we've made some changes to both XenServer and Apache CloudStack (the foundation of the Citrix CloudPlatform cloud orchestration product) to reduce the load on the pool master and therefore make it possible to use the maximum resource pool size. For the longer term we're evaluating the overlap between XenServer's pool-based clustering and the various forms of host aggregation offered by orchestration stacks such as CloudStack and OpenStack. With the orchestration stacks' ability to manage a large number of hosts do we really need to indirect all XenServer commands through a pool master?

Disaggregation has taken place in the Xen Project XAPI toolstack used in XenServer. A prerequisite to moving the xapi daemon into a service VM was to split the higher level clustering and policy part of the daemon from the low level VM lifecycle management and hypervisor interface. From XenServer 6.1 the latter function was split into a separate daemon called xenopsd with the original xapi daemon performing the clustering and policy functions. In the network management part of the stack a similar split has been made to separate the network control function into xcp-networkd - this created immediate value by having a better defined internal API but is also a prerequisite for network driver domains. The current development version of the XAPI project has had a number of other modularity clean-ups including various services being split into separate daemons with better build and packaging separation.

[Image: discrete emulator (DEMU) architecture]

We're also using intra-component disaggregation for XenServer's virtual GPU (vGPU) support. A "discrete" emulator (DEMU) provides the glue that allows the GPU vendor's control-plane multiplexer driver in domain-0 to service the control-path parts of vGPU access from the guest VM. This is done by, in effect, disaggregating qemu and having the DEMU take ownership of the I/O ports associated with the device it is emulating. This mechanism is now being added to the Xen Project to allow other virtual devices to be handled by discrete emulators, perhaps even in separate domains. Eventually we'd like to put the DEMUs and GPU driver into a driver domain to decouple the maintenance (particularly the required kernel version) of domain-0 and the GPU driver.

I view Windsor like a concept car, a way to try out new ideas and get feedback on their value and desirability. Like a concept car some of Windsor's ideas have made it into the shipping XenServer releases, some are coming, some are on the wishlist and some will never happen. Having a forward looking technology pipeline helps us to ensure that we keep evolving XenServer to meet users' needs both now and in the future.


Making sense of XenServer vs. xenserver-core vs. Citrix XenServer

So XenServer is now open-source, what does that mean? I look at XenServer as two things: firstly a set of components selected and engineered to work together as a system; and secondly a Linux distribution to provide the base platform to host and execute those components. Of course these two things are tightly coupled because the choice of base Linux distro and the set of packages installed will be part of the story when engineering the system as a whole. However we want to make it such that XenServer's core components, the stuff that does all the virtualization, management, monitoring and so on, can be used on a variety of Linux distros. This means we need to cleanly separate the components from the base distro, e.g. making XenServer components work with any reasonable distro and avoiding making assumptions about particular versions etc.

Let's start with the core components and refer to them collectively as "xenserver-core" (e.g. that could be the name of the meta-package to install them all to a distro "yum install xenserver-core"/"apt-get install xenserver-core" as used in Dave Scott's recent tech preview). These components include the xapi tool stack, storage manager, network daemon and related tools, HA daemon, etc. A second group of core components includes Xen, the Linux kernel, libvirt and qemu; although considered core components it is desirable to be able to use existing distro versions where possible. With suitable package dependencies it should be possible to manage all of the above.

[Diagram: xenserver-core components]

It's important to remember that many of the core components are derived from upstream projects. For example the xapi tool stack is part of the Linux Foundation's Xen Project but is consumed by XenServer; you could think of the XenServer version of xapi as a short-term fork of the upstream code. In practice I don't expect to see much divergence between XenServer's xapi and the upstream xapi because it's the same people working on both and XenServer is the primary consumer of the project. For other components that are more widely used, such as qemu, libvirt and Xen, I would expect short-term divergence as critical features and fixes are ported to the version used by XenServer (just like Linux distros do) but with a rule that all required code is upstreamed to the relevant project to avoid long-term divergence.

OK, we have xenserver-core which can now be installed on top of any reasonable Linux distribution. So when I talk about "XenServer" what do I mean? In general I mean the end result of both aspects of XenServer, the components and the base distro, all wrapped up in an ISO with an installer of some kind. This "appliance" model is how XenServer has been for years and provides a turn-key virtualization platform that does not require Linux sysadmin experience to install. This means we start with xenserver-core packages, choose a particular base distribution and set of packages from it and glue the whole lot together somehow. In a sense this is a distro-customization exercise. XenServer's build system has been doing this since day one albeit in a rather more complex way than described above. As part of the open-sourcing of XenServer we need to clean up this packaging and assembly phase by using standard tools and methods to take the core components and an off-the-shelf Linux distro and put the two together. This tooling, and all the configuration management (which versions of which packages etc.) will become part of the xenserver.org project.

[Diagram: XenServer appliance assembly]

What does Citrix actually release? Citrix XenServer is a particular instance of the XenServer appliance built, packaged, assembled, tested, warranted and certified by Citrix. It is only Citrix XenServer that can be supported by Citrix (remember that Citrix XenServer is free, anyone can use it without paying, but to get support and maintenance a package can be purchased from Citrix).

If you want XenServer you have some choices of how to get it (once all the necessary pieces are published of course):

  1. xenserver-core installed on a distro of your choice (either compile the components yourself or use a distro or XenServer.org binary release)
  2. XenServer appliance - you can assemble one of these from core components and a base Linux distro using the tools on XenServer.org
  3. Citrix XenServer - like option 2 above but let Citrix do the hard work of build and assembly and benefit from the system testing and certification this binary build will get. This also gives you the option to buy support from Citrix.

Evolving XenServer for the cloud

Today I presented at the CloudStack Collaboration Conference 2013 in Santa Clara on evolving XenServer to better meet the needs of large-scale cloud deployments. XenServer started life as "XenEnterprise" in 2006; it was aimed at SMBs and enterprises and targeted Windows IT pros who might not have Linux experience, so a lot of XenServer's Linux underpinnings were hidden. With many cloud shops being Linux shops this hiding is a barrier; for example XenServer's non-standard installation model doesn't fit well with large-scale server deployment and management tools like Chef and Puppet – XenServer is built on a standard Linux OS, so why can't we deploy it like a standard Linux OS?

I outlined some of the initiatives currently underway within Citrix that will continue under the xenserver.org project:

Hyperspace: fixing up the build and packaging to allow individual components to be easily built without complex build system requirements. Making XenServer be a "Linux distro+packages" where we take an off-the-shelf Linux distro, add the XenServer components as packages, and bundle the whole thing as XenServer.

Fusion (I called it "Project Upstream" in the presentation to avoid conflict with a commercial product name): getting libvirt and upstream qemu into XenServer.

Windsor: making XenServer more modular, exploiting dom0 disaggregation.

As we move more of XenServer project planning and development to xenserver.org I look forward to more discussion here on these initiatives and the wider topic of making XenServer the best platform for the cloud.

You can find a copy of my slides at CCC13_EvolvingXenServerForTheCloud_JamesBulpin_20130625.pdf

A video of the talk should be up on buildacloud.org soon.


About XenServer

XenServer is the leading open source virtualization platform, powered by the Xen Project hypervisor and the XAPI toolstack. It is used in the world's largest clouds and enterprises.
 
Commercial support for XenServer is available from Citrix.