Virtualization Blog

Discussions and observations on virtualization.

Introduction to Saving VM Parameters: How I Metadata Backup

XenServer Virtual Machines (VMs) certainly need no introduction, but even if you do not pardon the pun above, they still contain a lot of specialized and individualized information about their sizes, network connections, and myriad other settings that are generally not readily exposed, yet are integral to the operation and functionality of each VM. This blog entry is not intended to take a deep dive into the several hundred parameters that are defined for VMs, but rather just to talk a bit about how to save, extract, and potentially restore VM information based on them.

VM Metadata Backups

The purpose of backing up a VM’s metadata is to help you understand how the VM is configured without having to search through the list of parameters accessible via various “xe” commands or XenAPI calls (which require some programming effort), and also to let you track changes to your VMs without necessitating a full VM export/backup each time. You may not want or need to restore a VM from a full backup, but rather just revert a few parameters back to older values. You might also want to monitor what sorts of changes have taken place over time and relate those to performance or other metrics. In short, a number of reasons to maintain relatively frequent metadata backups of VMs can be justified.

Getting VM Metadata

There are a number of ways to obtain VM metadata settings. One of these is the standard “xe” command, which can extract parameters either individually, as a comma-separated list of several fields, or all at once, such as with:

# xe vm-list uuid|name-label=UUID|NAME-LABEL params=all

Here, either the UUID or name-label can be used to select the VM.
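
For example, to pull just a few specific fields for a single VM (the VM name below is made up; the field names are as they appear in the params=all output):

# xe vm-list name-label=MY-VM params=uuid,power-state,memory-static-max,VCPUs-max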

Some parameters can not only be read via “xe vm-param-get” but also changed, using the complementary “xe vm-param-set” operation. This gives you access to modifying around 30 parameters and reading over 80 of them.
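
As a quick illustration of that read/write pair (the UUID and the new description are placeholders):

# xe vm-param-get uuid=UUID-OF-VM param-name=name-description
# xe vm-param-set uuid=UUID-OF-VM name-description="rebuilt web front end for testing"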

The text-based configuration console on the host itself, xsconsole (also reachable from the Console tab in XenCenter), provides a direct way to back up and restore VM metadata. From the "Backup, Restore and Update" menu, you can navigate to the "Backup Virtual Machine Metadata" option and choose from the available SRs onto which you wish to create a metadata backup of all available VMs. Note, though, that it will evidently only create metadata backups of running VMs! The metadata restore operation, on the other hand, apparently can be applied to both running and halted VMs, but it operates on various subsets of VMs and cannot be performed on an individual VM.
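
If you prefer to script the same kind of SR-based metadata backup rather than drive it through the xsconsole menus, dom0 also ships a helper script for this; as best I recall the invocation looks roughly like the following, though the flags may vary between releases, so treat this as an assumption and check the script's own usage message first:

# xe-backup-metadata -c -u SR-UUID

Here "-c" is assumed to request creation of a new metadata backup on the SR identified by "-u".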

Another option is to make use of the XenAPI library, extracting parameters with XenAPI calls and accessing them using, for example, constructs such as these:

    # vm_object is an opaque reference to the VM, obtained via an authenticated
    # XenAPI session; gather_vm_meta() is the example script's helper that writes
    # out the metadata, and the calls below illustrate the sort of work involved.
    vm_meta_status = gather_vm_meta(vm_object, full_backup_dir)

    vm_record = session.xenapi.VM.get_record(vm_object)
    vm_out = open('%s/vm.cfg' % full_backup_dir, 'w')
    vm_out.write('name_label=%s\n' % vm_record['name_label'])
    vm_out.write('name_description=%s\n' % vm_record['name_description'])
    vm_out.write('memory_dynamic_max=%s\n' % vm_record['memory_dynamic_max'])
    vm_out.write('VCPUs_max=%s\n' % vm_record['VCPUs_max'])
    vm_out.write('VCPUs_at_startup=%s\n' % vm_record['VCPUs_at_startup'])
    vm_out.close()
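
To put that fragment into context, here is a minimal, self-contained sketch of the same idea; the host URL, credentials and VM name are placeholders, and it assumes the XenAPI Python bindings shipped with XenServer are available:

    #!/usr/bin/env python
    import XenAPI

    # Placeholder pool master URL and credentials.
    session = XenAPI.Session('https://my-xenserver-host')
    session.xenapi.login_with_password('root', 'secret')
    try:
        # Look up the VM by its name-label and pull its full record.
        vm_ref = session.xenapi.VM.get_by_name_label('my-test-vm')[0]
        vm_record = session.xenapi.VM.get_record(vm_ref)
        # Write a handful of the several hundred available fields to a file.
        with open('/tmp/my-test-vm.cfg', 'w') as vm_out:
            for key in ('name_label', 'name_description',
                        'memory_dynamic_max', 'VCPUs_max', 'VCPUs_at_startup'):
                vm_out.write('%s=%s\n' % (key, vm_record[key]))
    finally:
        session.xenapi.session.logout()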

 

This can be time-consuming if you want to keep identifying and modifying code to deal with any additions or changes, plus you may periodically have to update your API libraries.

Yet another option is to make use of the not-well-documented features within the “xe” command set associated with vm-export and vm-import utilities. It is possible to export just the metadata from a VM using the following syntax:

# xe vm-export metadata=true uuid=UUID-OF-VM filename=/full_path/OUTPUT_FILE.XVA

 

This will create what is in essence a tar file containing a single file, always named ova.xml, that captures more than 300 parameters! The XML has to be extracted from this tarball with a basic tar command; specifying ova.xml as the file to extract is optional, since it is the one and only file within the archive:

 

# tar -xf OUTPUT_FILE.XVA

tar: ova.xml: implausibly old time stamp 1969-12-31 17:00:00

 

Note that you may get this rather interesting message regarding the timestamp, which can be ignored.  It may also turn out that the output file has absolutely no access permissions set, so you may want to run a “chmod 600 ova.xml” (or 644, etc.) to make it readable. You may also wish to rename it so it’s unambiguous and/or less likely to be overwritten.
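
In other words, something along these lines (the new file name is arbitrary):

# chmod 644 ova.xml
# mv ova.xml MY-VM-metadata.xml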

For exported XVA files that are gzipped, you can extract the ova.xml file in a single operation with:

# tar -xzf OUTPUT_FILE.XVA.gz

tar: ova.xml: implausibly old time stamp 1969-12-31 17:00:00

Once extracted, the ova.xml file takes on a rather “ugly” appearance: the whole document typically comes out as one long, essentially unformatted run of XML.

 

There is a wealth of information in here, but it’s not in a very friendly format. Fortunately, this can be readily rectified with the handy xmllint utility already present on XenServer (at least on 7.X):

# tar -xOf /ubuntu12-xs66-specialchars.XVA | xmllint --format - > /tmp/output_VM.xml

The "-O" flag causes tar to extract the file to stdout, so it can be piped into the xmllint utility, which in turn generates a nicely formatted and properly indented XML file. The "-" argument before the redirection ">" operator tells xmllint to read its input from stdin, that is, from the piped tar output. If desired, the redirection can be dropped, in which case the formatted output will simply appear on the terminal; since it will be several hundred lines long, though, you may as well redirect it into a file where you can more conveniently review that amount of information.

After running it through xmllint, the first part of the file becomes far easier to follow.
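
The exact contents differ between VMs and XenServer versions, but the file follows xapi's XML-RPC-style serialization; heavily abbreviated, and with the structure sketched from memory rather than copied from a real export, the beginning looks something like this:

<value>
  <struct>
    <member>
      <name>version</name>
      <value><struct> ... </struct></value>
    </member>
    <member>
      <name>objects</name>
      <value>
        <array>
          <data>
            <value>
              <struct>
                <member><name>class</name><value>VM</value></member>
                <member><name>id</name><value>Ref:367</value></member>
                <member><name>snapshot</name><value><struct> ... </struct></value></member>
              </struct>
            </value>
            ...
          </data>
        </array>
      </value>
    </member>
  </struct>
</value>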

 

OK, Great -- Now What?

Given the ability to now parse and peruse the XML metadata file associated with a particular VM, one could contemplate creating periodic backups of the VM metadata to have on hand in case one needs to reconstruct something or check if anything had changed. That’s all fine and good, but other than using “xe” commands or other means to change individual parameters, how does having these data help in the event of wanting to reconstruct or restore a VM?

The bottom line is that this feature has limited direct applications, though it does have a few. Consider the case of trying to use an XVA file that contains only the VM’s metadata to restore a VM. Note that the original VM must, of course, still exist: if no version of the VM can be found, there is nothing with which to associate the VM’s storage. However, if it is present, consider the following results:

# xe vm-import preserve=true filename=/ubuntu12-xs66-specialchars.XVA

The VM cannot be imported unforced because it is either the same version or an older version of an existing VM.

vm: 9538882a-c7e4-b8e5-c1f9-0d136f4a81b1 (TST-ubuntu12-vmtst3-xs66)

existing_version: 0

version_to_import: 0

 

Perhaps as anticipated, the import fails: the “preserve=true” flag makes the command first check for a duplicate VM and, upon discovering one, refuse to proceed with what would overwrite the existing VM. That’s a good thing. Leaving off that flag, we next try:

# xe vm-import filename=/ubuntu12-xs66-specialchars.XVA metadata=true

This should yield success, but what kind of success? The import performed with the “metadata=true” flag produces a new copy of the VM with the same name but a different UUID, described as “Created by template provisioner.” What has happened is simply that a fast clone has been created. If you delete a VM created that way, you will see that it has no storage devices of its own associated with it, and XenCenter will therefore not ask whether you also want to delete the associated storage.

This is not entirely without use, however, as you can still make use of this VM and perhaps even compare its characteristics to the original. Furthermore, you can export the VM and import it again as a new VM, in which case it will gain the properties of a full clone; at that point, the dependence on the original no longer exists.
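
A minimal sketch of that round trip (the UUID and file path are placeholders):

# xe vm-export vm=UUID-OF-THE-METADATA-CLONE filename=/exports/full_copy.xva
# xe vm-import filename=/exports/full_copy.xva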

This exercise might be useful in debugging or in checking parameter-based performance or other differences between the original and subsequent metadata modifications. Such metadata snapshots may also be useful simply for tracking the historical use of VMs, checking which IP addresses may have been assigned, and numerous other things.

The Full XVA Export

The discussion up to this point should turn on a mental lightbulb and raise the question: if I restore a full export of a VM, isn’t all this information already in there? It clearly has to be for a vm-import to work properly, and an examination of a full XVA export will indeed reveal that it consists of many fairly small files, at times numbering in the thousands, but always starting with our old friend, ova.xml, as we see from this sample output that lists the contents instead of extracting it:

# tar -tvf  /exports/test-export.xva |less

---------- 0/0           29935 1969-12-31 17:00 ova.xml

---------- 0/0         1048576 1969-12-31 17:00 Ref:367/00000000

---------- 0/0              40 1969-12-31 17:00 Ref:367/00000000.checksum

---------- 0/0         1048576 1969-12-31 17:00 Ref:367/00000001

---------- 0/0              40 1969-12-31 17:00 Ref:367/00000001.checksum

---------- 0/0         1048576 1969-12-31 17:00 Ref:367/00000002

---------- 0/0              40 1969-12-31 17:00 Ref:367/00000002.checksum

---------- 0/0         1048576 1969-12-31 17:00 Ref:367/00000003

---------- 0/0              40 1969-12-31 17:00 Ref:367/00000003.checksum

---------- 0/0         1048576 1969-12-31 17:00 Ref:367/00000004

---------- 0/0              40 1969-12-31 17:00 Ref:367/00000004.checksum

etc.

One nice aspect of the XVA file is that it is self-contained, holding all the metadata as well as the data contents of the VM, which makes this standalone file easy to move around and utilize as a backup.

The other nice aspect is that you can, in fact, use it as its own metadata storage mechanism and extract only that part of it if desired, without needing to create a separate metadata backup (unless, of course, you want to do that more often and independently of a vm-export). To extract just the metadata from the file shown above, all you need to do is specify the name of the embedded file:

# tar -xvf /exports/test-export.xva ova.xml

We now have the same metadata file content we had when running the vm-export command combined with the “metadata=true” option.

In Summary

First off, it cannot be overstated that your XenServer environment should be backed up frequently and fastidiously, including both the pool metadata as well as the individual metadata for VMs. Even if you already have full exports of your VMs, having additional metadata can be useful for auditing purposes, as well as making it possible to check on parameters that are hard or impossible to glean through other means.
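
For the pool-level portion of that, one simple habit is to dump the pool database on a regular schedule, for example with (the target path is arbitrary):

# xe pool-dump-database file-name=/backups/pool-db-backup.dump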

Nobody that I know was ever accused of creating too many backups.


Creating backups with XenServer

Backup is an essential part of the business workflow for many of our customers, be it SMB, Enterprise Server Virtualisation, or Virtual Desktop Infrastructure. Making the backup experience smoother is high on our wishlist at XenServer Engineering, and the delivery of improved VM import/export performance in XS 7.1 shows our commitment to that end. To continue improving our services supporting the backup ecosystem, we would like to better understand how you use backup with XenServer.

 

  • How often do you backup? Do you have multiple jobs for monthly, weekly, daily backups?

  • How do you create your backups?

    • Use VM Export to backup VM metadata + disks

    • Snapshot at the VM level and use transfer/service VM to read off the snapshots

    • Use vdi-export to create differential disks (.vhd)

  • Do you use a third-party vendor for handling your backups?

  • Would support for incremental backups be useful for your use case?

Please leave a comment with your answers and any issues you may have with your backup experience today. We look forward to hearing from you!

Thank you,

Chandrika

 


XenServer High-Availability Alternative HA-Lizard

WHY HA AND WHAT IT DOES

XenServer (XS) contains a native high-availability (HA) option which allows quite a bit of flexibility in determining the state of a pool of hosts and under what circumstances Virtual Machines (VMs) are to be restarted on alternative hosts when a host loses the ability to serve its VMs. HA is a very useful feature that protects VMs from staying down in the event of a server crash or other incident that makes them inaccessible. Allowing an XS pool to help itself maintain the functionality of its VMs plays a large role in sustaining as much uptime as possible, and permitting the servers to deal with fail-overs automatically makes system administration easier and allows for more rapid reaction times to incidents, leading to increased uptime for servers and the applications they run.

XS allows for the designation of three different treatments of Virtual Machines: (1) always restart, (2) restart if possible, and (3) do not restart. The VMs designated with the highest restart priority will be the first to be restarted, and all will be handled provided adequate resources (primarily, host memory) are available. A specific start order can also be established, allowing some VMs to be confirmed as running before others are started. VMs will be automatically distributed among whatever remaining XS hosts are considered active. Note that, where necessary, VMs configured with an expandable (dynamic) memory range will be shrunk down to make room for additional VMs, and those designated to be restarted will also run with reduced memory if need be. If additional capacity exists to run more VMs, those designated as “restart if possible” will be brought online. VMs that are not considered essential will typically be marked as “do not restart” and hence will be left off even if they had been running before; any of those that are desired to be restarted must be started manually, resources permitting.

XS also allows for specifying the number of host failures to tolerate; larger pools that are not overly populated with VMs can readily accommodate even two or more host failures.
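
For reference, these HA settings can all be driven from the command line as well; a brief sketch (UUIDs are placeholders, and the parameter values are worth double-checking against your XenServer version):

# xe pool-ha-enable heartbeat-sr-uuids=SR-UUID
# xe pool-param-set uuid=POOL-UUID ha-host-failures-to-tolerate=1
# xe vm-param-set uuid=VM-UUID ha-restart-priority=restart order=1
# xe vm-param-set uuid=VM-UUID ha-restart-priority=best-effort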

The election of what hosts are “live” and should be considered active members of the pool follows a rather involved process of a combination of network accessibility plus access to an independent designated pooled Storage Repository (SR) that serves as an additional metric. The pooled SR can also be a fiber channel device, being independent of Ethernet connections. A quorum-based algorithm is applied to establish which servers are up and active as members of the pool and which -- in the event of a pool master failure -- should be elected the new pool master.

 

WHEN HA WORKS, IT WORKS GREAT

Without going into more detail, suffice it to say that this methodology works very well; however, it comes with a few prerequisite conditions that need to be taken into consideration. First of all, the mandate that a pooled storage device be available clearly means that a pool consisting of hosts that make use only of local storage is precluded. Second, for a quorum to be possible, a minimum of three hosts is required in the pool, or HA results will be unpredictable because the election of a pool master can become ambiguous. This comes about because of the so-called “split brain” issue (http://linux-ha.org/wiki/Split_Brain), which is endemic in many different operating system environments that employ a quorum as the means of making such a decision. Furthermore, while fencing (the process of isolating the host; see for example http://linux-ha.org/wiki/Fencing) is the typical recourse, the lack of intercommunication can result in a wrong decision being made and hence loss of access to VMs. Having experimented with two-host pools and the native XenServer HA, I would say that an estimate of it working about half the time is about right, and from a statistical viewpoint, pretty much what you would expect.

This limitation is, however, still of immediate concern to those with either no pooled storage and/or only two hosts in a pool. With a little bit of extra network connectivity, a relatively simple and inexpensive answer to the external SR requirement is to make a very small NFS-based SR available. The second condition, however, is not readily rectified without the expense of at least one additional host and all the connectivity associated with it. In some cases, this may simply not be an affordable option.

 

ENTER HA-LIZARD

For a number of years now, an alternative method of providing HA has been available through the program package provided by HA-Lizard (http://www.halizard.com/), a community project that provides a free alternative that neither depends on external SRs nor requires a minimum of three hosts within a pool. In this blog, the focus will be on the standard HA-Lizard version, and because a two-node pool is the particularly hard case to handle, that configuration will be the subject of discussion.

I had been experimenting for some time with HA-Lizard and found in particular that I was able to create failure scenarios that needed some improvement. HA-Lizard’s Salvatore Costantino was more than willing to lend an ear to the cases I had found and this led further to a very productive collaboration on investigating and implementing means to deal with a number of specific cases involving two-host pools. The result of these several months of efforts is a new HA-Lizard release that manages to address a number of additional scenarios above and beyond its earlier capabilities.

It is worthwhile mentioning that there are two ways of deploying HA-Lizard:

1) Most use cases combine HA-Lizard and iSCSI-HA, which creates a two-node pool using local storage while maintaining full VM agility, with VMs able to run on either host. DRBD (http://www.drbd.org/) provides the real-time storage replication in this type of deployment and it works very well.

2) HA-Lizard alone is used with an external Storage Repository (as in this particular case).

Before going into details of the investigation, a few words should go towards a brief explanation of how this works. Note that there is only network connectivity (making use of a heuristic network node, typically the gateway) and no external SR, so how is a split-brain situation then avoidable?

This is how I'd describe the course of action in this two-node situation:

If a node can see the gateway, it assumes it is alive; if it cannot, it assumes it is a candidate for fencing. From there, the two failure directions play out as follows:

  • If the node that cannot see the gateway is the master, it should internally kill any running VMs, surrender its role as master, and fence itself. The slave should then promote itself to master and attempt to restart any missing VMs. Any VMs still registered on the previous master will probably fail to restart at first, because there is no communication with the old master; eventually, the new master will be able to restart them regardless, after a toolstack restart.

  • If the slave is the node that loses network connectivity, then as long as the master still sees the network (and no longer sees the slave), the master can assume the slave will fence itself and kill off its own VMs, and that those VMs should be restarted on the current master. The slave, for its part, needs to realize it cannot communicate out, and therefore kills off any of its VMs and fences itself.
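
As a very rough illustration of the idea (this is not HA-Lizard's actual code; the gateway address and commands below are only a sketch of the decision each node has to make on its own):

    #!/bin/bash
    # Sketch only: each node periodically checks whether it can reach the
    # agreed-upon heuristic node (here, the default gateway).
    GATEWAY=192.168.1.1

    if ping -c 3 -W 2 "$GATEWAY" >/dev/null 2>&1; then
        # We can see the gateway: assume we are healthy. A surviving slave
        # would promote itself and restart any missing VMs at this point, e.g.
        #   xe pool-emergency-transition-to-master
        #   xe vm-start uuid=<missing-vm-uuid>
        :
    else
        # We cannot see the gateway: assume we are the failed node. Kill the
        # local VMs and fence ourselves so the peer can safely take over.
        :
    fi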

Naturally, the trickier part comes with the timing of the various actions, since each node has to blindly assume the other is going to conduct a sequence of events. The key here is that these are all agreed on ahead of time and as long as each follows its own specific instructions, it should not matter that each of the two nodes cannot see the other node. In essence, the lack of communication in this case allows for creating a very specific course of action! If both nodes fail, obviously the case is hopeless, but that would be true of any HA configuration in which no node is left standing.

Test plans were worked out for a variety of failure cases, and the table below elucidates the different test scenarios, what was expected, and what was actually observed. It is very encouraging that the vast majority of these cases can now be properly handled.

 

Particularly tricky here was the case of rebooting the master server from the shell, without first disabling HA-Lizard (something one could readily forget to do). Since the fail-over process takes a while, a large number of VMs cannot be handled before the communication breakdown takes place, hence one is left with a bit of a mess to clean up in the end. Nevertheless, it’s still good to know what happens if something takes place that rightfully shouldn’t!

The other cases, whether intentional or not, are handled predictably and reliably, which is of course the intent. Typically, a two-node pool isn’t going to have a lot of complex VM dependencies, so the lack of a start order of VMs should not be perceived as a big shortcoming. Support for this feature may even be added in a future release.

 

CONCLUSIONS

HA-Lizard is a viable alternative to the native Citrix HA configuration. It’s straightforward to set up and can handle standard failover cases with a selective “restart/do not restart” setting for each VM, or it can be globally configured. There are quite a number of configuration parameters which the reader is encouraged to research in the extensive HA-Lizard documentation. There is also an on-line forum which serves as a source for information and prompt assistance with issues. This most recent release, 2.1.3, is supported on both XenServer 6.5 and 7.0.

Above all, HA-Lizard shines when it comes to handling a non-pooled storage environment and, in particular, all configurations of the dreaded two-node pool. From my direct experience, HA-Lizard now handles the vast majority of issues involved in a two-node pool and can do so more reliably than the unsupported two-node pool using Citrix’s own HA application. It has been possible to conduct a lot of tests covering various cases and, importantly, to do so multiple times to ensure the actions are predictable and repeatable.

I would encourage taking a look at HA-Lizard and giving it a good test run. The software is free (contributions are accepted) and it is in extensive use and has a proven track record.  For a two-host pool, I can frankly not think of a better alternative, especially with these latest improvements and enhancements.

I would also like to thank Salvatore Costantino for the opportunity to participate in this investigation and am very pleased to see the fruits of this collaboration. It has been one way of contributing to the Citrix XenServer user community that many can immediately benefit from.


