Virtualization Blog

Discussions and observations on virtualization.

Introduction to Saving VM Parameters: How I Metadata Backup

XenServer Virtual Machines (VMs) certainly need no introduction, but even if you do not pardon the pun above, they still contain a lot of specialized and individualized information about their sizes, network connections, and myriad other settings that are generally not readily exposed, yet are integral to the operation and functionality of each VM. This blog entry is not intended to take a deep dive into the several hundred parameters that are defined for VMs, but rather just talk a bit about how to save, extract and potentially restore VM information based on them.

VM Metadata Backups

The purpose of backing up the metadata from a VM is to help you understand how it’s configured without having to search through the list of parameters accessible via various “xe” commands or XenAPI calls that require some programming effort, and also to let you track changes to your VMs without necessitating a full VM export/backup each time. You may not want or need to restore a VM from a full backup, but rather just revert a few parameters back to older values. You might also want to monitor what sorts of changes have taken place over time and relate those to performance or other metrics. In short, a number of reasons to maintain relatively frequent metadata backups of VMs can be justified.

Getting VM Metadata

There are a number of ways to obtain VM metadata settings. One of these is with the standard “xe” command to extract parameters, either individually, as a comma-separated string of multiple queries, or all at once, such as with:

# xe vm-list uuid|name-label=UUID|NAME-LABEL params=all

Here, either the UUID or name-label can be used to select the VM.

Some parameters can not only be read via “xe vm-param-get” but also changed, using the complementary “xe vm-param-set” operator. This gives you write access to around 30 parameters and read access to over 80 of them.
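For instance, here is a minimal, hedged illustration of reading and then changing a single writable parameter; the UUID and the new description text are placeholders:

# xe vm-param-get uuid=UUID-OF-VM param-name=name-description

# xe vm-param-set uuid=UUID-OF-VM name-description="updated via vm-param-set"

Re-running the vm-param-get command afterwards is a quick way to confirm the change took effect.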

The xsconsole utility (the text console shown on the host itself, including via XenCenter’s Console tab) provides a direct way to back up and restore VM metadata. From the "Backup, Restore and Update" menu, navigate to the "Backup Virtual Machine Metadata" option and choose the SR onto which you wish to create a metadata backup of all available VMs. Note, though, that it will evidently only create metadata backups of running VMs! The corresponding restore operation apparently can be applied to both running and halted VMs, but it can only be performed on various subsets of VMs and not on an individual VM.

Another option is to make use of the XenAPI library, extracting values with XenAPI calls and accessing them, for example, using constructs such as these:

    # Illustrative fragments: gather_vm_meta(), vm_object, session, and the
    # *_backup_dir variables are assumed to be defined elsewhere in the script.
    vm_meta_status = gather_vm_meta(vm_object, full_backup_dir)

    # Fetch the complete record for the VM and write selected fields to a file:
    vm_record = session.xenapi.VM.get_record(vm_object)
    vm_out = open('%s/vm.cfg' % tmp_full_backup_dir, 'w')
    vm_out.write('name_label=%s\n' % vm_record['name_label'])
    vm_out.write('name_description=%s\n' % vm_record['name_description'])
    vm_out.write('memory_dynamic_max=%s\n' % vm_record['memory_dynamic_max'])
    vm_out.write('VCPUs_max=%s\n' % vm_record['VCPUs_max'])
    vm_out.write('VCPUs_at_startup=%s\n' % vm_record['VCPUs_at_startup'])

This can be time-consuming if you want to keep identifying and modifying code to deal with any additions or changes, plus you may periodically have to update your API libraries.

Yet another option is to make use of the not-well-documented features within the “xe” command set associated with the vm-export and vm-import utilities. It is possible to export just the metadata from a VM using the following syntax:

# xe vm-export metadata=true uuid=UUID-OF-VM filename=/full_path/OUTPUT_FILE.XVA

 

This will create what is in essence a tar file containing a single file, always named ova.xml, that captures more than 300 parameters! The XML has to be extracted from this tarball with a basic tar command; specifying ova.xml as the file to extract is optional, since it is the one and only file within the tar file:

 

# tar -xf OUTPUT_FILE.XVA

tar: ova.xml: implausibly old time stamp 1969-12-31 17:00:00

 

Note that you may get this rather interesting message regarding the timestamp, which can be ignored.  It may also turn out that the output file has absolutely no access permissions set, so you may want to run a “chmod 600 ova.xml” (or 644, etc.) to make it readable. You may also wish to rename it so it’s unambiguous and/or less likely to be overwritten.
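As a small sketch of that cleanup (the new file name, with the VM name and a date stamp, is just an example):

# chmod 600 ova.xml

# mv ova.xml ubuntu12-metadata-2017-09-01.xml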

For exported XVA files that are gzipped, you can extract the ova.xml file in a single operation with:

# tar -xzf OUTPUT_FILE.XVA.gz

tar: ova.xml: implausibly old time stamp 1969-12-31 17:00:00

Once extracted, let’s take a look at the first part of the ova.xml file, which takes on a rather “ugly” appearance:

[Screenshot: the raw, unformatted contents of ova.xml.]

There is a wealth of information in here, but it’s not in a very friendly format. Fortunately, this can be readily rectified with the handy xmllint utility already present on XenServer (at least on 7.X):

# tar -xOf /ubuntu12-xs66-specialchars.XVA | xmllint --format - >/tmp/output_VM.xml

The "-O" flag causes tar to send the extracted file to stdout, and hence it can be piped to the xmllint utility, which in turn generates a very nicely formatted and properly indented XML file. Note that the "-" before the redirection ">" operator tells xmllint to read from stdin (the piped tar output); xmllint itself writes to stdout, so if you leave off the redirection, the formatted output will simply appear on the terminal. It will be several hundred lines long, so you may as well redirect the output into a file where you can more conveniently review that amount of information.
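For instance, the same pipeline without the redirection, handed to a pager instead, keeps everything on the terminal; a minimal example:

# tar -xOf /ubuntu12-xs66-specialchars.XVA | xmllint --format - | less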

Here is what the first part of the formatted file really looks like:

[Screenshot: the xmllint-formatted version of ova.xml.]

OK, Great -- Now What?

Given the ability to now parse and peruse the XML metadata file associated with a particular VM, one could contemplate creating periodic backups of the VM metadata to have on hand in case one needs to reconstruct something or check whether anything has changed. That’s all well and good, but other than using “xe” commands or other means to change individual parameters, how does having this data help in the event of wanting to reconstruct or restore a VM?
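As a rough sketch of what such a periodic backup might look like (assuming a simple cron-able shell script; the backup location and the VM filter are arbitrary choices):

#!/bin/bash
# Hypothetical nightly metadata dump of all non-snapshot, non-dom0 VMs.
BACKUP_DIR=/backup/vm-metadata/$(date +%Y-%m-%d)
mkdir -p "$BACKUP_DIR"
for uuid in $(xe vm-list is-control-domain=false is-a-snapshot=false --minimal | tr ',' ' '); do
    xe vm-export metadata=true uuid="$uuid" filename="$BACKUP_DIR/$uuid.xva"
done

Each run leaves behind one small XVA per VM, which can later be unpacked with the same tar and xmllint steps described above.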

The bottom line is that this feature has limited direct applications, though it does have a few. Consider the case of trying to use an XVA file that only contains the VM’s metadata to restore a VM. Note that the original VM must of course still exist; if no version of it can be found, there will be nothing to associate the VM’s storage with. However, if it is present, consider the following results:

# xe vm-import preserve=true filename=/ubuntu12-xs66-specialchars.XVA

The VM cannot be imported unforced because it is either the same version or an older version of an existing VM.

vm: 9538882a-c7e4-b8e5-c1f9-0d136f4a81b1 (TST-ubuntu12-vmtst3-xs66)

existing_version: 0

version_to_import: 0

 

Perhaps as anticipated, it fails: with the “preserve=true” flag, the import first checks for a duplicate VM and, upon discovering one, refuses to run a command that would overwrite the existing VM. That’s a good thing. Leaving off that flag, we next try:

# xe vm-import filename=/ubuntu12-xs66-specialchars.XVA metadata=true

This should yield success, but what kind of success? The import with the “metadata=true” flag produces a new VM copy with the same name but a different UUID, described as “Created by template provisioner.” What has happened is just that a fast clone has been created. If you delete a VM created this way, it will not show any storage devices associated with it, and XenCenter will therefore not ask whether you want to delete the associated storage.

This is not entirely without use, however, as you can still make use of this VM and perhaps even compare its characteristics to the original. Furthermore, you can export the VM and import it as a new VM, in which case it will gain the properties of a full clone; at that point, dependence on the original no longer exists.

This exercise might be useful in debugging, or in checking parameter-related performance or other differences between the original and subsequent metadata modifications. Such metadata may also be useful simply for tracking historical uses of VMs, checking what IP addresses may have been assigned, and numerous other things.
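As a quick, hedged illustration of that sort of change tracking, assuming two metadata XVAs of the same VM taken a month apart:

# tar -xOf /backup/vm-metadata/2017-08-01/UUID-OF-VM.xva | xmllint --format - > /tmp/old.xml

# tar -xOf /backup/vm-metadata/2017-09-01/UUID-OF-VM.xva | xmllint --format - > /tmp/new.xml

# diff /tmp/old.xml /tmp/new.xml

Any parameter that changed between the two snapshots, an added VIF or an adjusted memory limit, for example, shows up directly in the diff output.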

The Full XVA Export

The discussion up to this point should turn on a mental lightbulb and raise the question: well, if I restore a full export of a VM, isn’t all this information already in there? Clearly it has to be for a vm-import to work properly, and an examination of a full XVA export will indeed reveal that it consists of many fairly small files, at times numbering in the thousands, but always starting with our old friend, ova.xml, as we see from this sample output that lists the contents instead of extracting them:

# tar -tvf /exports/test-export.xva | less
---------- 0/0           29935 1969-12-31 17:00 ova.xml
---------- 0/0         1048576 1969-12-31 17:00 Ref:367/00000000
---------- 0/0              40 1969-12-31 17:00 Ref:367/00000000.checksum
---------- 0/0         1048576 1969-12-31 17:00 Ref:367/00000001
---------- 0/0              40 1969-12-31 17:00 Ref:367/00000001.checksum
---------- 0/0         1048576 1969-12-31 17:00 Ref:367/00000002
---------- 0/0              40 1969-12-31 17:00 Ref:367/00000002.checksum
---------- 0/0         1048576 1969-12-31 17:00 Ref:367/00000003
---------- 0/0              40 1969-12-31 17:00 Ref:367/00000003.checksum
---------- 0/0         1048576 1969-12-31 17:00 Ref:367/00000004
---------- 0/0              40 1969-12-31 17:00 Ref:367/00000004.checksum
etc.

The nice aspect of having the XVA file self-contained, with all the metadata as well as the data contents of the VM, is that this standalone file is easy to move around and utilize as a backup.

The other nice aspect is that you can, in fact, use it as its own metadata storage mechanism and extract only that part of it if desired, without needing to create a separate metadata backup (unless, of course, you want to do that more often and independently of a vm-export). To extract just the metadata from the file shown above, all you need to do is specify the embedded file name:

# tar -xvf /exports/test-export.xva ova.xml

We now have the same metadata file content we had when running the vm-export command combined with the “metadata=true” option.

In Summary

First off, it cannot be overstated that your XenServer environment should be backed up frequently and fastidiously, including both the pool metadata as well as the individual metadata for VMs. Even if you already have full exports of your VMs, having additional metadata can be useful for auditing purposes, as well as making it possible to check on parameters that are hard or impossible to glean through other means.

Nobody that I know was ever accused of creating too many backups.


XSO DB for XS 7.1 Closing Down

Attention XenServer community:

Please be advised that, in alignment with our goal of frequent releases and the XSO database supporting one active release to help resolve issues found in that release while simultaneously preparing for the next release, we are closing down the XSO database for 7.1 issues. As such, users are encouraged to move to XS 7.2 and raise any issues they encounter with that release here. Users who wish to remain on the Long-Term Support Release (LTSR) stream are encouraged to visit Citrix.com.

As always, thank you for your support.

Andy

 


Citrix Announces First Cumulative Update for XenServer 7.1

Citrix has posted a blog announcing the availability of XenServer 7.1 CU1, the first cumulative update for the XenServer 7.1 Long-Term Support Release (LTSR).

CU1 rolls up all hotfixes released on XenServer 7.1 to date. While there are no new features introduced in this update, a number of additional fixes are included to further enhance and simplify XenApp and XenDesktop environments. Information pertaining to these fixes can be found in the release notes.

Of particular significance, this cumulative update provides customers on Long-Term Support Release (LTSR) 7.1 with the latest platform fixes applicable to v7.1. Unlike Current Releases (CR), which provide access to new features on an almost quarterly basis, Long-Term Support Releases are designed for customers who, possessing the features they need to optimize their XenApp and XenDesktop environments, intend to remain on the LTSR for the foreseeable future. For these customers, using a Long-Term Support Release offers stability (Citrix will support the LTSR for up to 10 years) as well as access to fixes as they become available. You can learn more about the differences between Long-Term Support and Current Releases here.

Wondering if XenServer 7.1 Enterprise edition is right for your organization? Click here to download a free trial version.

Cheers!

Andy

 


Changes to Hotfixes and Maintenance for XenServer Releases

XenServer (XS) is changing as a product with more aggressive release cycles as well as the recent introduction of both Current Release (CR) and Long Term Service Release (LTSR) options. Citrix recently announced changes to the way XenServer updates (hotfixes) will be regulated and this article summarizes the way these will be dealt with looking forward.

To first define a few terms and clarify some points: the CR model is intended for those who always want to install the latest and greatest versions, and these are being targeted for roughly quarterly releases. The LTSR model is primarily intended for installations that are required to run with a fixed release. These installations are by definition required to be under a service contract in order to gain access to periodic updates, known as Cumulative Updates (CUs); the first of these will be CU1. Since Citrix is targeting support for LTSR versions for ten years, which incurs quite a bit of cost over time and is much longer than a general CR cycle is maintained, active support is required.

The support for those running XS as part of licensed XenApp/XenDesktop deployments will remain the same, as these CR licenses are provided as part of the XenApp/XenDesktop purchase. There will still be a free version of XenServer; the only change at this point is that on-demand hotfixes for a given CR will no longer be available once the next CR is released. Instead, updating to the quarterly releases will be needed to stay on top of hotfixes. More will be said about this later on.

End-of-life timelines should help customers in planning for these changes. Licensed versions of XenServer need to fall under active Customer Success Services (CSS) contracts to be eligible for access to the corresponding types of updates and releases that fall outside of current product cycles.

 

At a Glance

The summary of how various XS versions are treated looking forward is presented in the following table, which provides an overview at a glance.

XenServer Version | Active Hotfix Cut-Off  | Action Required
6.2 & 6.5 SP1     | EOL June 2018          | None. Hotfixes will continue to be available as before until EOL.
7.0               | December 2017          | Upgrade or buy CSS.
7.1               | December 2017          | Upgrade or buy CSS (to stay on the LTSR stream by deploying CU1).
7.2               | At release of next CR  | Upgrade to latest CR, or buy CSS to obtain 4 more months of hotfixes (and thus skip one CR).

 

In the future, the only way to get hotfixes for XS 7.2 without paying for a CSS will be to upgrade to the most current CR once released. Customers who do pay for CSS will also be able to access hotfixes on 7.2 for a further four months after the next CR is released. This would in principle allow you to skip a release and transition to the next one beyond it.

 

What Stays the Same

While the changes are fairly significant, especially for users of the free edition of XS, some aspects will not change. These include:

• There will still be free versions of XenServer released as part of the quarterly CR cycle.

• The free version of XS can still be updated with all released hotfixes until the next CR becomes generally available.

• XS code still remains open source.

• Any hotfixes that have already been released will remain publicly available.

• There will be no change to XS 6.2 through 6.5 SP1. They will only receive security hotfixes at this point, anyway, until being EOLed in June 2018.

 

In Summary 

First off, stay calm and keep on maintaining XS pretty much as before except for the cut-offs on how long hotfixes will be available for non-paid-for versions. The large base of free XS users should not fear that all access to hotfixes is getting completely cut off. You will still be able to maintain your installation exactly as before, up to the point of a new CR. This will take place roughly every quarter. This is a pretty small amount of extra effort to keep up, especially given how much easier updating has become, in particular for pools. Most users would do well to keep up with current releases to benefit from improvements and to not have to apply huge numbers of hotfixes whenever installing a base version; the CR releases will include all hotfixes accumulated up to that point, which will be more convenient.

 

What To Do 

Any customers who already have active CSS contracts will be unaffected by any of these changes. This includes servers that support actively licensed XenApp/XenDesktop instances, which, by the way, get all the XS Enterprise features incorporated into the latest XS releases, for all editions of XA/XD. Customers on XS 7.0 and 7.1 LTSR who have allowed their software maintenance to lapse and have not renewed/upgraded to CSS subscriptions should seriously consider that route.

Citrix will provide additional information as these changes are put into place.


From the Field: XenServer 7.X Pool Upgrades, Part 2: Updates and Aftermath

The XenServer upgrade process with XenServer 7 can be a bit more involved than with past upgrades, particularly if upgrading from a version prior to 7.0. In this series I’ll discuss various approaches to upgrading XS pools to 7.X and some of the issues you might encounter during the process based on my experiences in the field.

Part 1 dealt with some of the preliminary actions that need to be taken into consideration and the planning process, as well as the case for a clean installation. You can go back and review it here. Part 2 deals with the alternative, namely a true upgrade procedure, and what issues may arise during and after the procedure.

In-Place Upgrades to XS 7

An in-place upgrade to XS 7.X is a whole different beast compared to a clean installation in which the OS is overwritten from scratch. Mostly, it depends on whether you wish to retain the original partition scheme or go with the newer, larger, and more flexible disk partition layout. The bottom line is that, despite the added work and potential issues, you may as well go through with the change, as it will make life easier further down the road.

Your choices here are to retain the current disk partitioning layout or to switch to the newer partition layout. If you want to know why the new XS disk layout changed, review the “Why was the XenServer 7 Disk Layout Changed?” section in Part 1 of this blog series.

In my experience, issues found with in-place upgrades cover five areas – I’m going to cover them as follows: 

  1. Pre-XS 7 Upgrade, Staying the Course with the Original Partition Layout
  2. The 72% Solution
  3. The (In)famous Dell Utility Partition
  4. XS 7.X to XS 7.X Maintaining the Current Partition Layout
  5. XS 6.X or 7.X to XS 7.X With a Change to the New Partition Layout (plus possible issues on Dell Servers with iDRAC modules)

Pre-XS 7 Upgrade, Staying the Course with the Original Partition Layout

If you choose to stick with the original partition layout, the rest of your installation experience – whether you go with the rolling pool upgrade, or conventional upgrade – will be pretty much the same as before.

As with any such upgrade, make sure to check ahead of time that the pool is ready for this undertaking: 

  • XenCenter has been upgraded.
  • Proper backups of VMs, metadata, etc. have been performed.
  • All VMs are agile or, in the case of a rolling pool upgrade, any VMs on local SRs are at least shut down.
  • All hosts are not only running the same version of XenServer, but have the identical list of hotfixes applied.

Depending on the various ages of the hosts and how hotfixes were applied, you could run into the situation where there is a discrepancy. This can be particularly frustrating if an older hotfix has since been superseded and the older version is no longer available, yet shows up in the applied hotfix list. Though I’d only recommend this in dire circumstances (such as this, where the XenServer version is going to be replaced by a new one anyway), there are ways to make XenServer hosts believe they have had the same specific patches applied by manipulating the contents of the /var/update/applied directory (see, for example, this forum thread).
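One quick way to spot such a discrepancy (a sketch, assuming root ssh access from one pool member to another) is to compare the contents of that directory across hosts:

# diff <(ls -1 /var/update/applied | sort) <(ssh root@other-host 'ls -1 /var/update/applied | sort')

No output means the two hosts agree on which hotfixes have been applied.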

One thing you might run into is the following issue, covered in the next section, which incidentally appears to be independent of whichever partition layout you use.

The 72% Solution

This is not really so much a solution as a warning about what you may encounter, what you can do beforehand to reduce the chance of running into it, and what to do afterwards should you encounter it.

This issue apparently crops up frequently during the latter part of the installation process and is characterized by a long wait with a screen showing 72% completion of the installation that can last anywhere from about five minutes to over an hour in extreme cases. 

One apparent cause of this is having an exceedingly large number of message files on your current version of XS. If you have the chance, it is worthwhile taking the time to examine the area where these are stored, under /var/lib/xcp/blobs/ (most of them reside in the “messages” and “refs” subdirectories), and manually clean up any ancient entries, paying attention also to the symbolic links.
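A quick way to gauge whether this applies to you is simply to count the entries before deciding whether any cleanup is warranted, for example:

# ls /var/lib/xcp/blobs/messages | wc -l

# ls /var/lib/xcp/blobs/refs | wc -l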

If you do get stuck in the midst of a long-lasting installation, you can escape out by pressing ALT+F2 to get into a shell and check the installer logs under /tmp/install-log to see if it has thrown any errors (thanks, Nekta -- @nektasingh -- for that tip!). If all looks OK, continue to wait until the process completes.
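Once at that shell, something as simple as the following can reveal whether the installer has logged any obvious problems:

# grep -iE 'error|fail' /tmp/install-log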

The (In)famous Dell Utility Partition

Using the rolling pool upgrade for the first time, I ran into a terrible situation in which the upgrade proceeded all the way to the very end and, just as the final reboot was about to happen, an error popped up on the console, which read:

An unrecoverable error has occurred. The error was:

Failed to install bootloader: installing for i386-pc platform.

/usr/sbin/grub-install: warning: your embedding area is unusually small. core.img won’t fit in it..

/usr/sbin/grub-install: warning: Embedding is not possible. GRUB can only be installed in this setup by using blocklists. However blocklists are UNRELIABLE and their use is discouraged..

/usr/sbin/grub-install: error: will not proceed with blocklists.

You can imagine the reaction.

What caused this and what can you do about it? After all, this was something I’d never encountered before in any previous XS upgrades.
 
 

It turns out that the reason for this is the Dell utility partition (see, for example, this link). The Dell utility partition is a five-block partition that Dell puts on for its own purposes at the beginning of many of its servers as they ship. This did not interfere with any installs up to and including XS 6.5 SP1, hence it came as a total surprise when I first encountered it during an XS 7 upgrade.

And while this wasn't an issue initially, or in one particular upgrade which for whatever reason managed to squeak by without any errors being reported whatsoever, under most circumstances when installing XS 7 it leaves too little room for the initial boot configuration (the core.img mentioned in the error above) that the installer needs to embed.

What's bad is that the XenServer installation doesn't apparently perform any pre-checks to see if that first partition on the disk is big enough. This is the case for both UEFI and Legacy/BIOS boots.

The solution was simply to delete that sda1 partition altogether using fdisk and re-install. Deleting the partition can be performed live on the host prior to the installation process.
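For reference, the interactive sequence looks roughly like this (an illustrative sketch only; double-check that sda1 really is the small utility partition before deleting anything):

# fdisk /dev/sda
Command (m for help): d
Partition number (1-4): 1
Command (m for help): w

A reboot, or a run of partprobe if it is available, ensures the kernel re-reads the partition table before you launch the installer.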

You can then successfully bypass this issue. I have performed this surgical procedure on a number of hosts with success and have not experienced any adverse effects; you do not even require a vorpal sword to accomplish this task.

Possibly future upgrade processes will be more accommodating and perform a pre-check for this, along with looking at other potential inconsistencies such as a lack of VM agility or non-uniformity of applied hotfixes.

XS 6.X or 7.X to XS 7.X With a Change to the New Partition Layout 

This is the trickiest area where most, including me, seem to encounter issues.

One point that should be clarified right away is: 

Any local partition on or partly contained on the system disk to be upgraded is going to get destroyed in the process. 

There are no two ways about this. Hence, plan ahead: either 

  • Storage Xenmotion any local VMs to pooled storage, or
  • Export them. 

If you want or need to preserve your existing local storage SRs, you’ll have to stay with the original partition scheme.

Should you decide to move to the new partition scheme, you will need to perform the following action before the upgrade:

# touch /var/preserve/safe2upgrade
The upgrade process (to preserve the pool) will need to be performed using the rolling pool upgrade. The caveat is that if anything goes wrong during any part of the upgrade process, you will have to exit and try to start over. How far you can recover varies, from it being simple to resume where you left off, to having things left in a broken state, depending on the circumstances. The pre-checks performed are supposed to catch issues beforehand and allow you to take care of them before launching into the upgrade process, but they do not always trap everything! Above all, make sure that:
  • All VMs are agile, i.e. there are no VMs running or even resident on any local storage
  • High Availability (HA) and Workload Balancing (WLB) are disabled (see the example commands just after this list)
  • You have created metadata backups to at least one pooled storage device or exported them to an external location
  • You have plenty of space to hold VMs on whatever hosts remain in your pool, figuring you’ll have one host out at any given point and that initially, only the master will potentially be able to take on VMs from whichever host is in the process of being upgraded
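A few hedged examples of how some of these checks can be handled from the CLI (the UUIDs are placeholders):

# xe pool-ha-disable
# xe pool-param-get uuid=POOL-UUID param-name=wlb-enabled
# xe vm-list resident-on=HOST-UUID power-state=running

The first command disables HA pool-wide, the second reports whether WLB is currently enabled, and the third lists the running VMs resident on a given host so you can gauge what will need to be migrated.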
Preferably, also have recent backups of all your VMs. It’s generally also a good idea to keep good notes about your various host configurations, including all physical and virtual networks, VLANs, iSCSI and NFS connections, and the like. I’d recommend doing an “export resource data” for the pool, which has been available as a utility since XS 6.5 and can be run from XenCenter. To export resource data:
  1. In the XenCenter Navigation pane, click Infrastructure and then click on the pool. 
  2. From the XenCenter menu, click Pool and then select Export Resource Data. 
  3. Browse to a location where you would like to save the report and then click Save.
You can choose between XLS or CSV output. Note that this feature is only available in XenCenter with a paid-for, licensed version of XenServer. However, it can also be run via the CLI for any (including free) version of XS 6.5 or newer using:
# xe pool-dump-database file-name=target-output-file-name
Being prepared now for the upgrade, you may still run into issues along the way, in particular if you run out of space to move VMs to a different server or if the upgrade process hangs. If the master server makes it through the upgrade and the upgrade process fails at some point later, one of the first consequences is that you will no longer be able to add any external hosts to the pool, because the pool will be in a mixed state of hosts running different XS versions. This is not a good situation and can, under some circumstances, be very difficult to recover from.
 

Recommended Process for Changing XenServer Partition Layout

Through various discussions on the XenServer discussion forum as well as some of my own experimentation, this is what I recommend as the most solid and reliable way of upgrading to XS 7.X with the new partition layout when coming from a version that still has the old layout. It will take quite a bit longer, but is far less likely to leave you with a broken pool.

In addition to all the preparatory steps listed above:

  • You will need to eject all hosts from the pool at one point or another, so make sure you have carefully recorded the various network and other settings, in particular those that are not retained as such in the metadata exports, such as the individual iSCSI network connections or NFS connections.
  • Copy your /etc/fstab file and also be sure to check for any other customizations you may have done, such as cron jobs or additions to /etc/rc.local (and also note that rc.local does not run on its own under XS 7.X, so you will need to manually enable it to do so – see the section “Enabling rc.local to Run On Its Own” below).

Once your enhanced preparation is complete: 

  1. Start with your pool master and do a rolling pool upgrade to it. Do not attempt to switch to the new partition layout at this stage! After it reboots, it should still have retained the role of pool master. Note that the rolling pool upgrade will not allow you to pick which host it will upgrade next beyond the master, so be certain that any of these hosts can have all of its VMs fit on the pool master. If necessary, move some or all of the VMs on the pool master onto other hosts within the pool before you commence the upgrade procedure.
  2. Pick another host and migrate all its VMs to the pool master.
  3. Follow this procedure for all hosts until the pool is completely upgraded to the identical version of XS 7.X on all pool members. You can either continue to use the rolling pool upgrade or if desired, switch to manual upgrade mode.
  4. Making sure you’ve carefully recorded all important storage and network information on that host, migrate all VMs off a non-master host and eject it from the pool.
  5. Touch the file /var/preserve/safe2upgrade and plan on the local SR being wiped and re-created. Then shut down the host and perform a standalone installation on that just-ejected host. It will have the new partition layout.
  6. Reduce the network settings on this host to just a single primary management interface NIC, one that matches the same one it had initially. Rejoin this just upgraded host back into the pool. Many of the pool metadata settings should automatically get recreated, in particular any of the pooled resources. Update any host-specific network settings as well as other customizations.
  7. Continue this process for the remainder of the non-master hosts.
  8. When these have all been completed, designate a new pool master using the command “xe pool-designate-new-master host-uuid=new-master-uuid”. Make sure the transition to the new pool master is properly completed. 
  9. Eject what was the original pool master and perform the standalone upgrade and rejoin the host to the pool as before.
  10. Celebrate your hard work!
While this process will easily take twice as long as a standard upgrade, at least you know that things should not break so badly that you have to spend hours of additional time making things right. As a side benefit (if you want to consider it as such), it will also force you to take stock of how your pool is configured and require you to take careful inventory. In the event of a disaster, you will be grateful to have gone through this process, as it may be very close to what you would have to go through under less controlled circumstances! I have done this myself several times and had it work correctly each time.
 

An Additional Item: If the Console Shows No Network Configured on Dell Servers with iDRAC modules

This condition can show up unexpectedly, and while normally something like this can be handled, if the host is found to be in emergency mode, with an “xe pool-recover-slaves” command, that’s not always the case. Even more oddly, if instead you ssh in and run xsconsole from the CLI, all looks perfectly normal, including all the network settings, which appear present and correct and also match the settings visible in XenCenter. This condition, as far as I know, seems unique to Dell servers and was seen with iDRAC 7, 8 and 9 modules. Here's what it looks like:

[Screenshot: the local console display on the affected host showing no network configured.]

The issue here appears to have been a change in behavior that kicked in starting with XenServer 7.1 and hence may not even be evident in XS 7.0.

The fix turned out to be upgrading all the BIOS/firmware and iDRAC configurations. In this particular case, I made use of the method I described in this blog entry and that took care of it. Note that this still does not seem to consistently address the issue, in particular with some older hardware (e.g., an R715 with an iDRAC 6, even after updating to DSU 1.4.2 and BIOS 3.2.1, but being stuck at iDRAC version 1.97, which apparently cannot be upgraded further).

Enabling rc.local To Run On Its Own

The rc.local file is not automatically executed by default under RHEL/CentOS 7 installations, and since XS 7 is based on CentOS 7, it is no exception. If you wish to customize your environment and ensure this file is executed at boot time, you will have to manually enable /etc/rc.local to run automatically on reboot under XenServer 7.X; to do so, run these two commands:

# chmod u+x /etc/rc.d/rc.local
# systemctl start rc-local

You can verify that rc.local is now running with the following command, which in turn should produce output similar to what is shown below:

# systemctl status rc-local

   rc-local.service - /etc/rc.d/rc.local Compatibility
 Loaded: loaded (/usr/lib/systemd/system/rc-local.service; static; vendor preset: disabled)
Active: active (exited) since Sun 2017-06-18 09:09:50 MST; 1 weeks 6 days ago
Process: 4649 ExecStart=/etc/rc.d/rc.local start (code=exited, status=0/SUCCESS)

A Final Word

Once again, I will reiterate that feedback is always appreciated and the XenServer forum is one good option. Errors, on the other hand, should be reported to the XenServer bugs site with as much information as possible to help the engineers understand and reproduce the issues.

I sincerely hope some of the information in this series will have been useful to some degree.

I would like to most gratefully acknowledge Andrew Wood, fellow Citrix Technology Professional, for the review of and constructive additions to this blog.

 


About XenServer

XenServer is the leading open source virtualization platform, powered by the Xen Project hypervisor and the XAPI toolstack. It is used in the world's largest clouds and enterprises.
 
Commercial support for XenServer is available from Citrix.