Virtualization Blog

Discussions and observations on virtualization.

History and Syslog Tweaks

Introduction

As XenServer Administrators already know (or will know), there is one user "to rule them all"... and that user is root.  Be it an SSH connection or command-line interaction with DOM0 via XenCenter, while you may be typing commands in RING3 (user space), you are doing it as the root user.

This is quite appropriate for XenServer's architecture, as once the bare metal is powered on, one is not booting into the latest "re-spin" of some well-known (or completely obscure) Linux distribution.  Quite the opposite.  One is actually booting into the virtualization layer: dom0, or the Control Domain.  This is where separation of Guest VMs (domUs) and user space programmes (ping, fsck, and even XE) begins... even at the command line for root.

In summary, it is not uncommon for many Administrators to require root access to a XenServer at any one time.  Thus, this article will show my own means of adding granularity to the HISTORY command as well as logging (via Syslog) each and every root user session.

Assumptions

As BASH is the default shell, this article assumes that one has knowledge of BASH, things "BASH", Linux-based utilities, and so forth.  If one isn't familiar with BASH, how BASH leverages global and local scripts to set up a user environment, and so on, I have provided the following resources:

  • BASH login scripts : http://www.linuxfromscratch.org/blfs/view/6.3/postlfs/profile.html
  • Terminal Colors : http://www.tldp.org/HOWTO/Bash-Prompt-HOWTO/x329.html
  • HISTORY command : http://www.tecmint.com/history-command-examples/

Purpose

The purpose I wanted to achieve was not just a cleaner way to look at the history command, but also to log each root user's session information: recording their access means, what commands they ran, and WHEN.


In short, we go from this:

To this (plus record of each command in /var/log/user.log | /var/log/messages):

What To Do?

First, we want to back up /etc/bashrc to /etc/backup.bashrc in the event one would like to revert to the original HISTORY method, etc.  This can be done via the command-line of the XenServer:

cp /etc/bashrc /etc/backup.bashrc
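Should one ever need to revert, the restore is simply the reverse copy (followed by logging out and back in):

cp /etc/backup.bashrc /etc/bashrc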

Secondly, the following addition should be appended to the end of /etc/bashrc:

##[ HISTORY LOGGING ]#######################################################
#
# ADD USER LOGGING AND HISTORY COMMAND CONTEXT FOR SOME AUDITING
# DEC 2014, JK BENEDICT
# @xenfomation
#
#########################################################################

# Grab current user's name
export CURRENT_USER_NAME=`id -un`

# Grab current user's level of access: pts/tty/or SSH
export CURRENT_USER_TTY="local `tty`"
checkSSH=`set | grep "^SSH_CONNECTION" | wc -l`

# SET THE PROMPT
if [ "$checkSSH" == "1" ]; then
     export CURRENT_USER_TTY="ssh `set | grep "^SSH_CONNECTION" | awk {' print $1 '} | sed -rn "s/.*?='//p"`"
     export PROMPT_COMMAND='history -a >(tee -a ~/.bash_history | logger -t "HISTORY for $CURRENT_USER_NAME[$$] via $SSH_CONNECTION : ")'
else
     export CURRENT_USER_TTY
     export PROMPT_COMMAND='history -a >(tee -a ~/.bash_history | logger -t "HISTORY for $CURRENT_USER_NAME[$$] via $CURRENT_USER_TTY : ")'
fi

# SET HISTORY SETTINGS
# Lines to retain, ignore dups, time stamp, and user information
# For date variables, check out http://www.computerhope.com/unix/udate.htm
export HISTSIZE=5000
export HISTCONTROL=ignoredups
export HISTTIMEFORMAT=`echo -e "\e[1;31m$CURRENT_USER_NAME\e[0m[$$] via \e[1;35m$CURRENT_USER_TTY\e[0m on \e[0;36m%d-%m-%y %H:%M:%S%n\e[0m       "`

A file providing this addition can be downloaded from https://github.com/xenfomation/bash-history-tweak

What Next?

Well, with the changes added and saved to /etc/bashrc, exit the command-line prompt or SSH session, then log back in to test the changes:

exit

hostname
whoami
history
tail -f /var/log/user.log

... And that is that.  So, while there are 1,000,000 more sophisticated ways to achieve this, I thought I'd share what I have used for a long time... have fun and enjoy!

--jkbs | @xenfomation


XenServer Support Options

Now that Creedence has been released as XenServer 6.5, I'd like to take this opportunity to highlight where to obtain what level of support for your installation.

Commercial Support

Commercial support is available from Citrix and many of its partners. A commercial support contract is appropriate if you're running XenServer in a production environment, particularly if downtime is a critical component of your SLA. It's important to note that commercial support is only available if the deployment follows the Citrix deployment guidelines, uses third party components from the Citrix Ready Marketplace, and is operated in accordance with the terms of the commercial EULA. Of course, since your deployment might not precisely follow these guidelines, commercial support may not be able to resolve all issues and that's where community support comes in.

Community Support

Community support is available from the Citrix support forums. The people on the forum include both Citrix support engineers and your fellow system administrators. They are generally quite knowledgeable and enthusiastic to help someone be successful with XenServer. It's important to note that while the product and engineering teams may monitor the support forums from time to time, engineering level support should not be expected on the community forums.

Developer Support

Developer level support is available from the xs-devel list. This is your traditional development mailing list and really isn't appropriate for general support questions. Many of the key engineers are part of this list, and do engage on topics related to performance, feature development and code level issues. It's important to remember that the XenServer software is actually built from many upstream components, so the best source of information might be an upstream developer list and not xs-devel.

Self-support tool

Citrix maintains a self-support tool called Citrix Insight Services, formerly known as Tools-as-a-Service (TaaS). Insight Services takes a XenServer status report, and analyzes it to determine if there are any operational issues present in the deployment. A best practice is to upload a report after installing a XenServer host to determine if any issues are present which can result in latent performance or stability problems. CIS is used extensively by the Citrix support teams, but doesn't require a commercial support contract for end users.

Submitting Defects

If you believe you have encountered a defect or limitation in the XenServer software, simply using one of these support options isn't sufficient for the incident to be added to the defect queue for evaluation. Commercial support users will need to have their case triaged and potentially escalated, with the result potentially being a hotfix. All other users will need to submit an incident report via bugs.xenserver.org. Please be as detailed as possible with any defect reports so that they can be reproduced, and it doesn't hurt to include the URL of any forum discussion or the TaaS ID in your report. Also, please be aware that while the issue may be urgent for you, any potential fix may take some time to be created. If your issue is urgent, you are strongly encouraged to follow the commercial support route as Citrix escalation engineers have the ability to prioritize customer issues.

Additionally, it's important to point out that submitting a defect or incident report doesn't guarantee it'll be fixed. Some things simply work the way they do for very important reasons, other things may behave the way they do due to the way components interact. XenServer is tuned to provide a highly scalable virtualization platform, and if an incident would require destabilizing that platform, it's unlikely to be changed.


Creedence Release Candidate Available

Just in time for the holiday season - we're pleased to announce another tech toy for the geeks of the world to play with. Now of course XenServer is serious business, but just like many kids toys, the arrival of Creedence is eagerly awaited by many. As I mentioned earlier this week, you'll have to wait a bit longer for the official release, but today you can download the release candidate and see exactly what the world of Creedence should look like. Andy also mentioned last week that we're closing out the alpha/beta program, and as part of that effort the nightly Creedence snapshot page has been removed. You can still access the final beta (beta.3) on the pre-release page, but all prior builds have been removed. The pre-release page is also where you can download the release candidate.

What's in the Release Candidate

Performance tuning

The release candidate contains a number of bug fixes, but also has had some performance tuning done on it. This performance tuning is a little bit different than what we normally talk about, so if you've been benchmarking Creedence, you'll want to double check with the release candidate. What we've done is take a look at the interaction of a variety of system components and put in some limits on how hard we'll let you push them. Our first objective is a rock solid system, and while this work doesn't result in any configuration limit changes (at least not yet - that comes later in our cycle), it could reduce some of the headroom you might have experienced with a prior build. It's also possible that you could experience better headroom due to an overall improvement in system stability, so doing a performance test or two isn't a bad idea.

Core bug fixes over beta.3

  • multipath.conf is now preserved as multipath.conf.bak on upgrade
  • The default cpufreq governor is now set to performance
  • Fixes for XSA-109 through XSA-114 inclusive
  • Increase the number of PIRQs to more than 256 to support large quantities of NICs per host

What we'd like you to do with this build

The core two things we'd like you to do with this build are:

  1. If you've reported any issue at https://bugs.xenserver.org, please validate that we did indeed get the issue addressed.
  2. If you can, run this release candidate through its paces. We think it's nice and solid, and hope you do too.

Lastly, I'd like to take this opportunity to wish everyone in our community a festive end to 2014 and hope that whatever celebrating you might do is enjoyable. 2014 was an exciting year for XenServer, and that's due in large part to the contributions of everyone reading this blog and working with Creedence. Thank you.

 

-tim     


Basic Network Testing with IPERF

Purpose

I am often asked how one can perform simple network testing within, outside, and into XenServer.  This is a great question as – by itself – it is simple enough to answer.  However, depending on what one desires out of “network testing” the answer can quickly become more complex.

As such, I have decided to answer this question using a long-standing, free utility called IPERF (well, IPERF2).  It is a rather simple, straightforward, but powerful utility I have used over many, many years.  Links to IPERF will be provided - along with documentation on its use - as it will serve in this guide as a way to:


- Test bandwidth between two or more points

- Determine bottlenecks

- Assist with black box testing or "what happens if" scenarios

- Use a tool that runs on both Linux and Windows

- And more…

IPERF: A Visual Breakdown

IPERF has to be installed on at least two separate end points.  One point acts as a server/receiver and the other acts as a client/transmitter.  This way, network testing can be done on anything from a simple subnet to a complex, routed network: end-to-end, using TCP or UDP generated traffic:

The visual shows an IPERF client transmitting data over IPv4 to an IPERF receiver.  Packets traverse the network - from wireless routers and through firewalls - from the client side to the server side over port 5001.

IPERF and XenServer

The key to network testing is in remembering that any device which is connected to a network infrastructure – Virtual or Physical – is a node, host, target, end point, or just simply … a networked device.

With regards to virtual machines, XenServer obviously supports Windows and Linux operating systems.  IPERF can be used to test virtual-to-virtual networking as well as virtual-to-physical networking.  If we stack virtual machines in a box to our left and stack physical machines in a box to our right – despite a common subnet or routed network – we can quickly see the permutations of how "Virtual and Physical Network Testing" can be achieved with IPERF transmitting data from one point to another:

And if one wanted, they could just as easily test networking for this:

Requirements

To illustrate a basic server/client model with IPERF, the following will be required:

- A Windows 7 VM that will act as an IPERF client

- A CentOS 5.x VM that will act as a receiver.

- IPERF2 (the latest version of IPERF, "IPERF3", can be found at https://github.com/esnet/iperf or, more specifically, at http://downloads.es.net/pub/iperf/)

The reason for using IPERF2 is quite simple: portability and compatibility across two of the most popular operating systems that I know are virtualized.  In addition, the same steps for installing IPERF2 on these hosts can be carried out on physical systems running similar operating systems as well. 

The remainder of this article - regarding IPERF2 - will require use of the MS-DOS command-line as well as the Linux shell (of choice).  I will carefully explain all commands, so if you are "strictly a GUI" person, you should fit right in.

Disclaimer

When utilizing IPERF2, keep in mind that this is a traffic generator.  While one can control the quantity and duration of traffic, it is still network traffic.

So, consider testing during non-peak hours or after hours so as to not interfere with production-based network activity.
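If the generated load must be constrained, IPERF2's UDP mode caps the transmission rate via the "-b" option.  A minimal sketch (x.x.x.48 is just a placeholder address, matching the one used later in this article):

# Receiver: listen for UDP instead of TCP
iperf -s -u

# Client: generate only 10 Mbits/sec of UDP traffic for 30 seconds
iperf -c x.x.x.48 -u -b 10M -t 30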

Windows and IPERF

The Windows port of IPERF 2.0.5 requires Windows XP (or greater) and can be downloaded from:

http://sourceforge.net/p/iperf/patches/_discuss/thread/20d4a4b0/5c44/attachment/Iperf.zip

Within the .zip file you will find two directories.  One is labeled DEBUG and the other is labeled RELEASE.  Extract the Iperf.exe program to a directory you will remember, such as C:\iperf\

Now, accessing the command line (cmd.exe), navigate to C:\iperf\ and execute:

iperf

The following output should appear:

Linux and IPERF

If you have additional repos already configured for CentOS, you can simply execute (as root):

yum install iperf

If that fails, one will need to download the Fedora/RedHat EPEL-Release RPM file for the version of CentOS being used.  To do this (as root), execute:

wget  http://dl.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm
rpm -Uvh epel-release-5-4.noarch.rpm

 

*** Note that the above EPEL-Release RPM file is just an example (a working one) ***

 

Once epel-release-5-4.noarch.rpm is installed, execute:

yum install iperf

And once complete, as root execute iperf and one should see the following output:


Notice that it is the same output as what is being displayed from Windows.  IPERF2 is expecting a "-s" (server) or "-c" (client) command-line option with additional arguments.

IPERF Command-Line Arguments

On either Windows or Linux, a complete list of options for IPERF2 can be listed by executing:

iperf --help

A few good resources of examples to use IPERF2 options for the server or client can be referenced at:

http://www.slashroot.in/iperf-how-test-network-speedperformancebandwidth

http://samkear.com/networking/iperf-commands-network-troubleshooting

http://www.techrepublic.com/blog/data-center/handy-iperf-commands-for-quick-network-testing/

For now, we will focus on the options needed for our server and client:

-f, --format    [kmKM]   format to report: Kbits, Mbits, KBytes, MBytes
-m, --print_mss          print TCP maximum segment size (MTU - TCP/IP header)
-i, --interval  #        seconds between periodic bandwidth reports
-s, --server             run in server mode
-c, --client    <host>   run in client mode, connecting to <host>
-t, --time      #        time in seconds to transmit for (default 10 secs)

Lastly, there is the TCP/IP window setting.  This goes beyond the scope of this document as it relates to the TCP frame/windowing of data.  I highly recommend reading either of the two following links - especially for Linux - as there has always been some debate as to what is "best to be used":

https://kb.doit.wisc.edu/wiscnet/page.php?id=11779

http://kb.pert.geant.net/PERTKB/IperfTool

Running An IPERF Test

So, we have IPERF2 installed on Windows 7 and on CentOS 5.10.  Before one performs any testing, ensure any antivirus software does not block iperf.exe from running, and that port 5001 is open across the network.

Again, another port can be specified, but the default port IPERF2 uses for both client and server is 5001.
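For instance, to use port 5002 instead (an arbitrary choice), the same "-p" option must be supplied on both ends:

# Server/receiver side
iperf -s -p 5002

# Client/transmitter side
iperf -c x.x.x.48 -p 5002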

Server/Receiver Side

The Server/Receiver side will be on the CentOS VM.

Following the commands above, we want to execute the following on the CentOS VM to run IPERF2 as a server/receiver, awaiting traffic from our Windows 7 client machine:

iperf -s -f M -m -i 10

The output should show:

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 0.08 MByte (default)
------------------------------------------------------------

The TCP window size has been previously commented on and the server is now ready to accept connections (press Control+C or Control+Z to exit).

Client/Transmission Side

Let us now focus on the client side to start sending data from the Windows 7 VM to the CentOS VM.

From Windows 7, the command line to start transmitting data for 30 seconds to our CentOS host (x.x.x.48) is:

iperf -c x.x.x.48 -t 30 -f M

Pressing enter, the traffic flow begins and the output from the client side looks like this:

From the server side, the output looks something like this:

And there we have it – a first successful test from a Windows 7 VM (located on one XenServer) to a CentOS 5.10 VM (located on another XenServer).

Understanding the Results

From either the client side or server side, results are shown by time and average.  The key item to look for from either side is:

0.0-30.0 sec  55828 MBytes  1861 MBytes/sec

Why?  This shows the average over the course of 0.0 to 30.0 seconds in terms of total megabytes transmitted as well as average megabytes of data sent per second.  In addition, since the "-f M" argument was passed as a command-line option, the output is calculated in megabytes accordingly.

In this particular case, we simply illustrated that from one VM to another VM, we transferred data at 1861 megabytes per second.
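For those who think in bits rather than bytes, a quick conversion: 1861 MBytes/sec x 8 = 14,888 Mbits/sec, or roughly 14.9 Gbits/sec - a helpful figure when comparing results against NIC line rates, which are quoted in bits.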

*** Note that this test was performed in a local lab with lower-end hardware than what you probably have! ***

--jkbs | @xenfomation

 


Increasing Ubuntu's Resolution

Maximizing Desktop Real-estate with Ubuntu

With the addition of Ubuntu (and the likes) to Creedence, you may have noticed that the default resolution is 1024x768.  I certainly noticed it and, with much work on 6.2 and Creedence Beta, I have a quick solution for maximizing the screen resolution for you.

The thing to consider is that a virtual frame buffer is what is essentially being used.  You can re-invent X configs all day, but the shortest path is to - first - ensure that the following packages are installed on your Ubuntu guest VM:

sudo apt-get install xvfb xfonts-100dpi xfonts-75dpi xfstt

Once that is all done installing, the next step is to edit Grub -- specifically /etc/default/grub:

sudo vi /etc/default/grub

Considering your monitor's maximum resolution (or not, if you want to remote into Ubuntu using XRDP), look for the variable GRUB_GFXMODE.  This is where you can specify the desired BOOT resolutions that we will instruct the guest VM to SUSTAIN into user space:

GRUB_GFXMODE=1280x960,1280x800,1280x720,1152x768,1152x700,1024x768,800x600

Next, adjust the variable GRUB_GFXPAYLOAD_LINUX to equal keep, or:

GRUB_GFXPAYLOAD_LINUX=keep

Save the changes and be certain to execute the following:

sudo update-grub
sudo reboot

Now, you will notice that even during the boot phase the resolution is larger, and this will carry into user space: LightDM, Xfce, and the likes.
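To verify the active resolution from within the guest's desktop session, one quick check - assuming the xrandr utility is present in your X environment - is:

xrandr | grep '\*'

The mode marked with an asterisk is the resolution currently in use.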

Finally, I would highly suggest installing XRDP for your Guest VM.  It allows you to access that Ubuntu/Xubuntu/etc desktop remotely.  Specific details regarding this can be found through Ubuntu's forum:

http://askubuntu.com/questions/449785/ubuntu-14-04-xrdp-grey


Enjoy!

--jkbs | @xenfomation

 

 


VGA over Cirrus in XenServer 6.2

Achieve Higher Resolution and 32Bpp

For many reasons - not exclusive to XenServer - the Cirrus video driver has been a staple where a basic/somewhat agnostic video driver is needed.  When one creates a VM within XenServer (specifically 6.2 and previous versions), the Cirrus video driver is used by default for video... and it does the job.

I had been working on a project with my mentor related to an eccentric OS, but I needed a way to get more real estate to test a HID pointing device by increasing the screen resolution.  This led me to find that at some point in our upstream code there were platform (virtual machine metadata) options that allowed one to "ditch" Cirrus and 1024x768 resolution for higher resolutions and color depth via a standard VGA driver.

This is not tied into GPU passthrough, nor is it a hack.  It is a valuable way to achieve 32bpp color in Guest VMs with video support as well as obtaining higher resolutions.

Windows 7: A Before and After Example

To show the difference between "default Cirrus" and the Standard VGA driver (which I will discuss how to switch to shortly), Windows 7 Enterprise had the following resolution to offer me with Cirrus:


Now, after switching to standard VGA for the same Guest VM and rebooting, I now had the following resolution options within Windows 7 Enterprise:

Switching a Guest for VGA

After you create your VM – Windows, Linux, etc – perform the following steps to enable the VGA adapter:

 

  • Halt the Guest VM
  • From the command line, find the UUID of your VM:
 xe vm-list name-label="Name of your VM"
  • Taking the UUID value, run the following two commands:
 xe vm-param-set uuid=<UUID of your VM> platform:vga=std
 xe vm-param-set uuid=<UUID of your VM> platform:videoram=4
  • Finally, start your VM and one should be able to achieve higher resolution at 32bpp.

 

It is worth noting that the max amount of "videoram" that can be specified is 16 (megabytes).
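To avoid hand-copying UUIDs, the lookup and both settings can be combined; a minimal sketch, assuming a Guest VM named "Windows 7" (substitute your own name-label):

# Capture the UUID, then apply the VGA settings
VM_UUID=$(xe vm-list name-label="Windows 7" params=uuid --minimal)
xe vm-param-set uuid=$VM_UUID platform:vga=std
xe vm-param-set uuid=$VM_UUID platform:videoram=4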

Switching Back to Cirrus

If - for one reason or another - you want to reset/remove these settings to stick with the Cirrus driver, run the following commands:

 xe vm-param-remove uuid=<UUID of your VM> param-name=platform param-key=vga
 xe vm-param-remove uuid=<UUID of your VM> param-name=platform param-key=videoram

Again, reboot your Guest VM and with the lack of VGA preference, the default Cirrus driver will be used.

What is the Catch?

There is no catch and no performance hit.  The VGA driver's "videoram" specification is carved out of the virtual memory allocated to the Guest VM.  So, for example, if you have 4GB allocated to a Guest VM, subtract at max 16 megabytes from 4GB.  Needless to say, that is a pittance and does not impact performance.

Speaking of performance, my own personal tests were simple and repeated several times:

 

  • Utilize a tool that will remain anonymous
  • Use various operating systems with Cirrus and resolution at 1024 x 768
  • Run 2D graphic test suite
  • Write down Product X, Y, or Z’s magic number that represents good or bad performance
  • Apply the changes to the VM to use VGA (keeping the resolution at 1024 x 768 for some kind of balance)
  • Run the same volley of 2D tests after a reboot
  • Write down Product X, Y or Z’s magic number that represents good or bad performance

 

In the end, I personally found from my experience that there was a very minor, but noticeable, difference between Cirrus and VGA.  Cirrus usually came in 10-40 points below VGA at the 1024 x 768 level.  Based on the test suite used, this is nothing spectacular, but it is certainly a benefit as I found no degraded performance across XenServer (other Guests), etc.

I hope this helps and as always: questions and comments are welcomed!

 

--jkbs | @xenfomation

 


Security bulletin covering "Shellshock"

Over the past several weeks, there has been considerable interest in a series of vulnerabilities in bash with the attention grabbing name of "shellshock". These bash vulnerabilities are more properly known as CVE-2014-6271, CVE-2014-6277, CVE-2014-6278, CVE-2014-7169, CVE-2014-7186 and CVE-2014-7187. As was indicated in security bulletin CTX200217, XenServer hosts were potentially impacted, but investigation was continuing. That investigation has been completed and the associated impact is described in security bulletin CTX200223, which also contains patch information for these vulnerabilities.

Learning about new XenServer hotfixes

When a hotfix is released for XenServer, it will be posted to the Citrix support web site. You can receive alerts from the support site by registering at http://support.citrix.com/profile/watches and following the instructions there. You will need to create an account if you don't have one, but the account is completely free. Whenever a hotfix is released, there will be an accompanying security advisory in the form of a CTX knowledgebase article for it, and those same KB articles will be linked on xenserver.org in the download page.

Patching XenServer hosts

XenServer admins are encouraged to schedule patching of their XenServer installations. Please note that the items contained in the CTX200223 bulletin do impact XenServer 6.2 hosts, and to apply the patch, all XenServer 6.2 hosts will first need to be patched to service pack 1. The complete list of patches can be found on the XenServer download page.     


Creedence: Debian 7.x and PVHVM Testing

Introduction

On my own time and on my own testing equipment, I have been able to run many Guest VMs in PVHVM containers - before Creedence and after its release to the public back in June.  With last week's broadcast of Creedence Beta 3's release, I was naturally excited to see Tim's spotlight on PVHVM, and the following article's intent is to show - in a test environment only - how I was able to run Debian 7.x (64-bit) in the same fashion.

For more information regarding PV + HVM as to establish a PVHVM container, Tim linked a great article in his Creedence Beta 3 post last Monday that I highly recommend you read as the finer details are out of scope for this article's intent and purpose.

Why is this important to me?  Quite simply we can go from this....

... to this ...

So now, let's make a PVHVM container for a Debian 7.x (64-Bit) Guest VM within XenCenter!

Requirements

1.  Creedence Beta 3 and XenCenter

2.  The full installation ISO for Debian 7.x (from https://www.debian.org/CD/http-ftp/#stable )

3.  Any changes mentioned below should not be applied to any of the stock Debian templates

4.  This should not be performed on your production environment

Creating A Default Template

With XenCenter open, ensure that from the View options one has "XenServer Templates" selected:

We should now see the default templates that XenServer installs:

1.  Right-click on the "Debian Wheezy 7 (64-bit)" template and save it as "Debian 7":

 

2.  This will produce a "custom template" - highlight it and copy the UUID of the custom template:

3.  The remainder of this configuration will take place from the command-line.

4.  To make the changes to the custom template easier, export the UUID of the custom template we created to avoid copy/paste errors:

export myTemp="af84ad43-8caf-4473-9c4d-8835af818335"
echo $myTemp
af84ad43-8caf-4473-9c4d-8835af818335

5.  With the $myTemp variable created, let us first convert this custom template to a default template by executing:

xe template-param-set uuid=$myTemp other-config:default_template=true

xe template-param-remove uuid=$myTemp param-name=other-config param-key=base_template_name

6.  Now configure the template's "platform" variable to leverage VGA graphics:

xe template-param-set uuid=$myTemp platform:viridian=false platform:device_id=0001 platform:vga=std platform:videoram=16

7.  Due to how some distros work with X, clear the PV-args and set a "vga=792" flag:

xe template-param-set uuid=$myTemp PV-args="vga=792"

8.  Disable the PV-bootloader:

xe template-param-set uuid=$myTemp PV-bootloader=""

9.  Specify that the template uses an HVM-style bootloader (DVD/CD first, then hard drive, and then network):

xe template-param-set uuid=$myTemp HVM-boot-policy="BIOS order"
xe template-param-set uuid=$myTemp HVM-boot-params:order="dcn"
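For repeatability, steps 5 through 9 can be rolled into a single snippet; a sketch, assuming the custom template has been created in XenCenter and its UUID captured in $myTemp as in step 4:

# Promote the copy to a default template
xe template-param-set uuid=$myTemp other-config:default_template=true
xe template-param-remove uuid=$myTemp param-name=other-config param-key=base_template_name
# VGA graphics, cleared PV options, and an HVM-style boot order
xe template-param-set uuid=$myTemp platform:viridian=false platform:device_id=0001 platform:vga=std platform:videoram=16
xe template-param-set uuid=$myTemp PV-args="vga=792" PV-bootloader=""
xe template-param-set uuid=$myTemp HVM-boot-policy="BIOS order" HVM-boot-params:order="dcn"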

 

Now, before creating a Debian 7.x Guest VM, one should now see in XenCenter that "Debian 7" is listed as a "default template":

 

Lastly, for the VGA flag and what it means to most distros, the following table explains the VGA flag bit settings used to achieve an X-by-Y resolution at a given color depth:

VGA Resolution and Color Depth reference Chart:

Depth    800×600    1024×768   1152×864   1280×1024   1600×1200
8 bit    vga=771    vga=773    vga=353    vga=775     vga=796
16 bit   vga=788    vga=791    vga=355    vga=794     vga=798
24 bit   vga=789    vga=792    -          vga=795     vga=799
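Reading the chart: to have the template boot at 1280×1024 with 24-bit color instead of the 1024x768 (vga=792) used in step 7, one would simply swap the PV-args value:

xe template-param-set uuid=$myTemp PV-args="vga=795"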

Create A New Debian Guest

Now, one should be able to create a new Guest VM using the template we have just created and walk through the entire install:

Post installation, tools can be installed as well!

Enjoy and happy testing!

 

jkbs | @xenfomation


Before Electing a New Pool Master

Overview

The following is a reminder of specific steps to take before electing a new pool master - especially in High Availability-enabled deployments.  Admittedly, there are circumstances where this will happen automatically due to High Availability (by design) or in an emergency situation, but nevertheless, the following steps should be taken when electing a new pool master where High Availability is enabled.

Disable High Availability

Before electing a new master, one must disable High Availability.  The reason is quite simple:

If a new host is designated as master with HA enabled, the subsequent processes and transition time can lead HA to see that a pool member is down.  HA is doing what it is supposed to do in the "mathematical" sense, but in "reality" it is actually confused.

The end result is that HA could either recover with some time or fence as it attempts to apply fault tolerance in contradiction to the desire to "simply elect a new master".

It is also worth noting that upon recovery - if any Guests which had a mounted ISO are rebooted on another host - "VDI not found" errors can appear although this is not the case.  The mounted ISO image is seen as a VDI, and if that resource is not available on the other host, the Guest VM will fail to resume: presenting the generic VDI error.

Steps to Take

HA must be disabled and, as safe practice, I always recommend ejecting all mounted ISO images.  The latter can be accomplished by executing the following from the pool master:

xe vm-cd-eject --multiple

As for HA, it can be disabled in two ways: via the command-line or from XenCenter.

From the command line of the current pool master, execute:

xe pool-ha-disable
xe pool-sync-database

If desired - just for safe guarding one's work - those commands can be executed on every other pool member.

As for XenCenter, one can select the Pool/Pool Master icon in question and, from the "HA" tab, select the option to disable HA for the pool.

Workload Balancing

For versions of XenServer utilizing Workload Balancing, it is not necessary to halt Workload Balancing for this procedure.

Now that HA is disabled, switch pool masters, and when all servers are in an active state, re-enable HA from XenCenter or from the command line:

xe pool-recover-slaves
xe pool-ha-enable
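As for the election itself on a healthy pool, the promotion is performed with "xe pool-designate-new-master"; a sketch, where the UUID is that of the member being promoted:

# Find the UUID of the host to promote
xe host-list
# Promote it
xe pool-designate-new-master host-uuid=<UUID of the new master>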

I hope this is helpful and as always: questions and comments are welcomed!

 

--jkbs | @xenfomation


Log Rotation and Syslog Forwarding

A Continuation of Root Disk Management

First, this article is applicable to any sized XenServer deployment, and secondly, it is a continuation of my previous article regarding XenServer root disk maintenance.  The difference is that - for all XenServer deployments - the topic revolves specifically around Syslog: from tuning log rotation, specifying the amount of logs to retain, and leveraging compression, to - of course - Syslog forwarding.

All of this is an effort to share tips with new (or seasoned) XenServer Administrators on the options available to ensure necessary Syslog data does not fill a XenServer root disk while ensuring - for certain industry-specific requirements - that log-specific data is retained without sacrifice.

Syslog: A Quick Introduction

So, what is this Syslog?  In short, it can be compared to the Unix/Linux equivalent of the Windows Event Log (along with other logging mechanisms popular to specific applications/Operating Systems). 

The slightly longer explanation is that Syslog is not only a daemon, but also a protocol: established long ago for Unix systems to record system and application messages to local disk, as well as offering the ability to forward the same log information to its peers for redundancy, concentration, and to conserve disk space on highly active systems.  For more detailed information on the finer details of the Syslog protocol and daemon, one can review the IETF's specification at http://tools.ietf.org/html/rfc5424.

On a stand-alone XenServer, the Syslog daemon is started on boot, and its configuration for handling the source, severity, and types of logs - and where to store them - is defined in /etc/syslog.conf.  It is highly recommended that one does not alter this file unless necessary and one knows what one is doing.  From boot to reboot, information is stored in various files found under the root disk's /var/log directory.

Taken from a fresh installation of XenServer, the following shows various log files that store information specific to a purpose.  Note that the items in "brown" are sub-directories:

For those seasoned in administering XenServer, it is visible that between the kernel level and user space there are not many distinct log files.  However, XenServer is verbose about logging for a very simple reason: collection, analysis, and troubleshooting if an issue should arise.

So for a lone XenServer (by default) logs are essentially received by the Syslog daemon and based on /etc/syslog.conf - as well as the source and type of message - stored on the local root file system as discussed:

Within a pooled XenServer environment things are pretty much the same: for the most part.  As a pool has a master server, log data for the Storage Manager (as a quick example) is trickled up to the master server.  This is to ensure that while each pool member is recording log data specific to itself, the master server has the aggregate log data needed to promote troubleshooting of the entire pool from one point.

Log Rotation

Log rotation, or "logrotate", is what ensures that Syslog files in /var/log do not grow out of hand.  Much like Syslog, logrotate utilizes a configuration file to dictate how often, at what size, and if compression should be used when archiving a particular Syslog file.  The term "archive" truly just means rotating out the current log so a fresh, current log can take its place.

Post XenServer installation and before usage, one can measure the amount of free root disk space by executing the following command:

df -h

The output will be similar to the following and the line one should be most concerned with is in bold font:

Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             4.0G  1.9G  2.0G  49% /
none                  381M   16K  381M   1% /dev/shm
/opt/xensource/packages/iso/XenCenter.iso
                       52M   52M     0 100% /var/xen/xc-install

One can see from the example that only 49% of the root disk on this XenServer host has been used.  Repeating this process as implementation ramps up, an administrator should be able to measure how best to tune logrotate's configuration file.  After install, /etc/logrotate.conf should resemble the following:

# see "man logrotate" for details
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 4
# create new (empty) log files after rotating old ones
create
# uncomment this if you want your log files compressed
#compress
# RPM packages drop log rotation information into this directory
include /etc/logrotate.d
# no packages own wtmp -- we'll rotate them here
/var/log/wtmp {
    monthly
    minsize 1M
    create 0664 root utmp
    rotate 1
}
/var/log/btmp {
    missingok
    monthly
    minsize 1M
    create 0600 root utmp
    rotate 1
}
# system-specific logs may be also be configured here.

In previous versions, /etc/logrotate.conf was set up to retain 999 archived/rotated logs, but as of 6.2 the configuration above is standard. 

Before covering the basic premise and purpose of this configuration file, one can see this exact configuration file explained in more detail at http://www.techrepublic.com/article/manage-linux-log-files-with-logrotate/

The options declared in the default configuration are conditions that, when met, rotate logs accordingly:

  1. The first option specifies when to invoke log rotation.  By default this is set to weekly and may need to be adjusted to "daily".  This only swaps log files out for new ones; it does not delete log files.
  2. The second option specifies how many archived/rotated log files to keep on disk.  The default of "rotate 4" keeps four archives (four weeks' worth at the weekly setting) before deleting the oldest.
  3. The third option specifies what to do after rotating a log file out.  The default - which should not be changed - is to create a new/fresh log after rotating out its older counterpart.
  4. The fourth option - which is commented out - specifies what to do with the archived log files.  It is highly recommended to remove the comment mark so that archived log files are compressed: saving on disk space.
  5. A fifth option, which is not present in the default conf, is the "size" option.  This specifies how to handle logs that reach a certain size, such as "size 15M".  This option should be employed - especially if an administrator has SNMP logs that grow exponentially or notices that a particular XenServer's Syslog files are growing faster than logrotate can rotate and dispose of archived files (see the example after this list).
  6. The "include" option specifies a sub-directory wherein unique logrotate configurations can be specified for individual log files.
  7. The remaining portion should be left as is.
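As an illustration of options 5 and 6 working together, a drop-in file under /etc/logrotate.d can override the global settings for a single troublesome log.  A minimal sketch - the snmpd log name here is purely hypothetical, so substitute whichever log is growing:

# /etc/logrotate.d/snmpd
/var/log/snmpd.log {
    # rotate as soon as the file reaches 15 megabytes
    size 15M
    # keep four archives before deleting the oldest
    rotate 4
    # gzip the archives to save root disk space
    compress
    # do not error if the log file is absent
    missingok
}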


In summary for logrotate, one is advised to measure use of the root disk using "df -h" and to tune logrotate.conf as needed to ensure Syslog does not inadvertently consume all available disk space.

And Now: Syslog Forwarding

Again, this is a long-standing feature and one I have been looking forward to explaining, highlighting, and providing examples for.  However, I have had a kind of writer's block for many reasons: mainly that it ties into Syslog, logrotate, and XenCenter, but also that there is a tradeoff.

I mentioned before that Syslog can forward messages to other hosts.  Furthermore, it can forward Syslog messages to other hosts without writing a copy of the log to local disk.  What this means is that a single XenServer or a pool of XenServers can send their log data to a "Syslog Aggregator".

The tradeoff is that one cannot generate a server status report via XenCenter, but must instead gather the logs from the Syslog aggregation server and manually submit them for review.  That being said, one can ensure that low root disk space is not nearly as high a concern on the "Admin Todo List" and can retain vast amounts of log data for a deployment of any size: based on dictated industry practices or, sarcastically, for nostalgic purposes.

The principles of Syslog and logrotate.conf will apply to the Syslog aggregator as well, for what good is a Syslog server if it is not configured properly to ensure it does not fill itself up?  The requirements to instantiate a Syslog aggregation server, configure the forwarding of Syslog messages, and so forth are quite simple:

  1. Port 514 must be opened on the network
  2. The Syslog aggregation server must be reachable - either by being on the same network segment or not - by each XenServer host
  3. The Syslog aggregation server can be a virtual or physical machine; Windows or Linux-based with either a native Syslog daemon configured to receive external host messages or using a Windows-based Syslog solution offering the same "listening" capabilities.
  4. The Syslog aggregation server must have a static IP assigned to it
  5. The Syslog aggregation server should be monitored and tuned just as if it were Syslog/logrotate on a XenServer
  6. For support purposes, logs should be easily copied/compressed from the Syslog aggregation server - such as using WinSCP, scp, or other tools to copy log data for support's analysis

The quickest means to establish a simple virtual or physical Syslog aggregation server - in my opinion - is to reference the following two links.  These describe the installation of a base Debian-based system with specific intent to leverage Rsyslog for the recording of remote Syslog messages sent to it over UDP port 514 from one's XenServers:

http://www.aboutdebian.com/syslog.htm

http://www.howtoforge.com/centralized-rsyslog-server-monitoring

Alternatively, the following is an all-in-one guide (using Debian) with Syslog-NG:

http://www.binbert.com/blog/2010/04/syslog-server-installation-configuration-debian/
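Whichever guide is followed, the heart of the aggregator's configuration is simply enabling a UDP listener.  On an rsyslog-based system, that amounts to two directives (shown in rsyslog's legacy syntax); restart the rsyslog service after adding them:

# /etc/rsyslog.conf on the aggregator
# Load the UDP input module
$ModLoad imudp
# Listen for remote Syslog messages on UDP port 514
$UDPServerRun 514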

Once the server is instantiated and ready to record remote Syslog messages, it is time to open XenCenter.  Right click on a pool master or stand-alone XenServer and select "Properties":


In the window that appears - in the lower left-hand corner - is an option for "Log Destination":

To the right, one should notice the default option selected is "Local".  From there, select the "Remote" option and enter the IP address (or FQDN) of the remote Syslog aggregate server as follows:

Finally, select "OK" and the stand-alone XenServer (or pool) will update its Syslog configuration, or more specifically, /var/lib/syslog.conf.  The reason for this is so Elastic Syslog can take over the normal duties of Syslog: forwarding messages to the Syslog aggregator accordingly.

For example, once configured, the local /var/log/kern.log file will state:

Sep 18 03:20:27 bucketbox kernel: Kernel logging (proc) stopped.
Sep 18 03:20:27 bucketbox kernel: Kernel log daemon terminating.
Sep 18 03:20:28 bucketbox exiting on signal 15

Certain logs will still continue to be recorded on the host, so it may be desirable to edit /var/lib/syslog.conf and comment out the lines where a "-/var/log/some_filename" destination is specified, as the lines with "@x.x.x.x" dictate forwarding to the Syslog aggregator.  As an example, I have marked in bold the lines showing where comments should be added to prevent further logging to the local disk:

# Save boot messages also to boot.log
local7.*             @10.0.0.1
# local7.*         /var/log/boot.log

# Xapi rbac audit log echoes to syslog local6
local6.*             @10.0.0.1
# local6.*         -/var/log/audit.log

# Xapi, xenopsd echo to syslog local5
local5.*             @10.0.0.1
# local5.*         -/var/log/xensource.log

After one - the Administrator - has decided what logs to keep and what logs to forward, Elastic Syslog can be restarted so the changes take effect by executing:

/etc/init.d/syslog restart

Since Elastic Syslog - a part of XenServer - is being utilized, the init script will ensure that Elastic Syslog is bounced and that it is responsible for handling Syslog forwarding, etc.

 

So, with this - I hope you find it useful and as always: feedback and comments are welcomed!

 

--jkbs | @xenfomation

 

 

 


XenServer Root Disk Maintenance

The Basis for a Problem

UPDATE 21-MAR-2015: Thanks to feedback from our community, I have added key notes and additional information to this article.

For all that it does, XenServer has a tiny installation footprint: 1.2 GB (roughly).  That is the modern-day equivalent of a 1.44MB floppy disk, really.  While the installation footprint is tiny, well, so is the "root/boot" partition that the XenServer installer creates: 4GB in size - no more, no less, and don't alter it! 

The same is also true - during the install process - for the secondary partition that XenServer uses for upgrades and backups:

The point is that this amount of space does not leave much room for log retention, patch files, and other content.  As such, it is highly important to tune, monitor, and perform clean-up operations on a periodic basis.  Without attention, over time hotfix files, syslog files, temporary log files, and other forms of data can accumulate to the point where the root disk becomes full.

UPDATE: If you are wondering where the swap partition is, wonder no more.  For XenServer, swap is file-based and is instantiated during the boot process of XenServer.  As for the 4GB partitions, never alter their size; upgrades, etc. will re-align the partitions to match upstream XenServer release specifications.

One does not want a XenServer (or any server for that matter) to have a full root disk, as this will lead to a full stop of processes as well as virtualization, for the full disk will go "read only".  Common symptoms are:

  • VMs appear to be running, but one cannot manage a XenServer host with XenCenter
  • One can ping the XenServer host, but cannot SSH into it
  • If one can SSH into the box, one cannot write or create files: "read only file system" is reported
  • xsconsole can be used, but it returns errors when "actions" are selected

So, while there is a basis for a problem, the following article offers the basis for a solution (with emphasis on regular administration).

Monitoring the Root Disk

Shifting into the first person, I am often asked how I monitor my XenServer root disks.  In short, I utilize tools that are built into XenServer along with my own "Administrative Scripts".  The most basic way to see how much space is available on a XenServer's root disk is to execute the following:

df -h

This command will show you "disk file systems" and the "-h" means "human readable", ie Gigs, Megs, etc.  The output should resemble the following and I have made the line we care about in bold font:

Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             4.0G  1.9G  1.9G  51% /
none                  299M   28K  299M   1% /dev/shm
/opt/xensource/packages/iso/XenCenter.iso
                       56M   56M     0 100% /var/xen/xc-install

A more "get to the point" way is to run:

df -h | grep "/$" | head -n 1

Which produces the line we are concerned with:

/dev/sda1             4.0G  1.9G  1.9G  51% /

The end result is that we know 51% of the root partition is used.  Not bad, really.  Still, I am a huge fan of automation and will now discuss a simple way that this task can be ran - automatically - for each of your XenServers.

What I am providing is essentially a simple BASH script that checks a XenServer's local disk.  If the local disk use exceeds a threshold (which you can change), it will send an alert to XenCenter so the tactics described further in this document can be employed for the assurance of as much free space as possible.

Using nano or VI, create a file in the /root/ (root's home) directory called "diskmonitor" and paste in the following content:

#!/bin/bash
# Quick And Dirty Disk Monitoring Utility
# Get this host's UUID
thisUUID=`xe host-list name-label=$HOSTNAME params=uuid --minimal`
# Threshold of disk usage to report on
threshold=75    # an example of how much disk can be used before alerting
# Get disk usage
diskUsage=`df -h | grep "/$" | head -n 1 | awk {' print $5 '} | sed -n -e "s/%//p"`
# Check
if [ $diskUsage -gt $threshold ]; then
     xe message-create host-uuid=$thisUUID name="ROOT DISK USAGE" body="Disk space use has reached ${diskUsage}% on $HOSTNAME!" priority="1"
fi

After saving this file be sure to make it executable:

chmod +x /root/diskmonitor

The "#!/bin/bash" at the start of this script now becomes imperative as it tells the user space (when called upon) to use the BASH interpreter.

UPDATE: To execute this script manually, one can execute the following command if in the same directory as this script:

./diskmonitor

This convention is used so that scripts can be executed just as if they were a binary/compiled piece of code.  If the "./" prefix is an annoyance, move /root/diskmonitor to /sbin/ -- this will ensure that one can execute diskmonitor without the "dot forward-slash" prefix while in other directories:

mv /root/diskmonitor /sbin/
# Now you should be able to execute diskmonitor from anywhere
diskmonitor

If you move the diskmonitor script make note of where you placed it as this directory will be needed for the cron entry.

For automation of the diskmonitor script, one can now leverage cron: adding an entry to root's "crontab" and specifying a recurring time at which diskmonitor should be executed (behind the scenes). 

The following is a basic outline of how to leverage cron so that diskmonitor will be executed four times per day.  Now, if you are looking for more information regarding cron, what it does, and how to configure it for other automation-based tasks, then visit http://www.thegeekstuff.com/2009/06/15-practical-crontab-examples/ for more detailed examples and explanations.

1.  From the XenServer host command-line execute the following to add an entry to crontab for root:

crontab -e

2.  This will open root's crontab in VI or nano (text editors) where one will want to add one of the following lines based on where diskmonitor has been moved to or if it is still located in the /root/ directory:

# If diskmonitor is still located in /root/
00 00,06,12,18 * * * /root/diskmonitor
# OR if it has been moved to the /sbin/ directory
00 00,06,12,18 * * * diskmonitor

3.  After saving this, we now have a cron entry that runs diskmonitor at midnight, six in the morning, noon, and six in the evening (military time: 00, 06, 12, 18) every day of every week of every month.  If the script detects that the root drive on a XenServer is > 75% "used" (you can adjust this), it will send an alert to XenCenter, where one can further leverage built-in tools for email notifications, etc. 

The following is an example of the output of diskmonitor, but it is apropos to note that the following test was done using a threshold of 50% -- yes, in Creedence there is a bit more free space!  Kudos to Dev!

One can expand upon the script (and XenCenter), but let's focus on a few areas where root disk usage can be slowly consumed.

Removing Old Hotfixes

After applying one or more hotfixes to XenServer, copies of each decompressed hotfix are stored in /var/patch.  The main reason for this - in short - is that in pooled environments, hotfixes are distributed from a host master to each host slave to eliminate the need to repetitively download one hotfix multiplied by the number of hosts in a pool. 

The more complex reason is for consistency, for if a host becomes the master of the pool, it must reflect the same content and configuration as its predecessor did and this includes hotfixes.

The following is an example of what the /var/patch/ directory can look like after the application of one or more hotfixes:

Notice the /applied sub-directory?  We never want to remove that. 

UPDATE 21-MAR-2015:  Thanks to Tim, the Community Comments, and my Senior Lead for validating I was not "crazy" in my findings before composing this article: "xe patch-destroy" did not do its job as many commented.  It has been resolved post 6.2, so I thank everyone - especially Dev - for addressing this.

APPROPRIATE REMOVAL:

To appropriately remove these patch files, one should utilize the "xe patch-destroy" command.  While I do not have a "clever" command-line example to take care of all files at once, the following should be run against each file that has a UUID-based naming convention:

cd /var/patch/

xe patch-destroy uuid=<FILENAME, SUCH AS 4d2caa35-4771-ea0e-0876-080772a3c4a7>
(repeat "xe patch-destroy uuid=" command for each file with the UUID convention)

While this is not optimal, especially to run per-host in a pool, it is the prescribed method, and once I have a more automated/controlled solution, I will naturally document it.
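In the meantime, a simple loop can at least save some typing.  This is only a sketch, assuming (per the above) that each UUID-named file in /var/patch corresponds to a patch UUID known to the host:

cd /var/patch/
# Loop over files whose names match the UUID pattern only;
# the /var/patch/applied sub-directory is not touched
for f in ????????-????-????-????-????????????; do
     xe patch-destroy uuid="$f"
done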

EMERGENCY SITUATIONS:

In the event that removal of the other contents discussed in this article does not resolve a full root disk issue, the following can be used to remove these patch files.  However, it must be emphasized that a situation could arise wherein the lack of these files will require a re-download and reinstall of said patches:

# Removes only the UUID-named patch files; /var/patch/applied is left intact
find /var/patch -maxdepth 1 -type f | grep "[0-9]" | xargs rm -f

Finally, if you are in the middle of applying hotfixes, do not perform the removal procedure (above) until all hosts are rebooted, fully patched, and verified as in working order.  This applies especially to pools, where a missing patch file could throw off XenCenter's perspective of which hotfixes have yet to be installed and for which host.

The /tmp Directory

Plain and simple, the /tmp directory is truly meant for just that: holding temporary data.  Pre-Creedence, one can access a XenServer's command line and execute the following to see the quantity of ".log" files that accumulate there:

cd /tmp
ls

As visualized, over time one can see an accumulation of many, many log files.  Albeit small from the individual-file perspective, collectively... they take up space.

UPDATE 21-MAR-2015:  Again, thanks to everyone, as these logs were always intended to be "removed" automatically once a Guest VM was started.  As of 6.5 and beyond -- this section is irrelevant!  For earlier releases, the following will clear out the accumulated log files:

cd /tmp/
rm -f *.log

This will remove only ".log" files, so any driver ISO images stored in /tmp (or elsewhere) should be manually addressed.

Compressed Syslog Files

The last item is to remove all compressed Syslog files stored under /var/log.  These usually consume the most disk space and as such, I will be authoring an article shortly to explain how one can tune logrotate and even forward these messages to a Syslog aggregator.

UPDATE:  As a word of advice, we are only looking to clear "*.gz" (compressed/archived) log files.  Once these are deleted, they are gone.  Naturally, this means a server status report gathered for collection will lack historical information, so one may consider copying these off to another host (using scp or WinSCP) before following the next steps to remove them under a full root disk scenario.
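For example, to copy the archives off first (the destination host and path here are purely hypothetical):

scp /var/log/*.gz root@archive-host:/srv/xenserver-logs/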

In the meantime, just as before, one can execute the following command to keep current Syslog files intact but remove old, compressed log files:

cd /var/log/
rm -f *.gz
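Afterwards, a quick check will confirm the reclaimed space:

df -h /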

So For Now...

It is at this point one has a tool to know when a disk has hit capacity, and methods with which to clean up specific items.  These can be run in an automated or manual fashion -- it is truly up to the admin's style of work.

Please be on the lookout for my next article involving Syslog forwarding, log rotation, and so forth, as this will help any size deployment of XenServer: especially where log retention is a strict regulatory requirement.

Feel free to post any questions, suggestions, or methods you may even use to ensure XenServer's root disk does not fill up.

 

--jkbs | @xenfomation

 

 


Post-Creedence details to be presented at Xen Project User Summit

Last month I posted seeking feedback from you, our community, on what the post-Creedence world should look like. The response was impressive, and we've started incorporating what you want in a virtualization platform into our plans for the next release of XenServer, occurring after Creedence. While it's a bit early to divulge those details, I plan as part of my session on Creedence at the Xen Project User Summit to give you a roadmap for what to expect, what the code name will be for the project, and how you can help move the project forward. There will also be a few other surprises at the event for XenServer attendees, so if you are able to be in New York City on September 15th, please do try and join us. Not only will you be able to see what the new XenServer has to offer, but you'll see what the core hypervisor community is up to and can potentially push them to help deliver features you feel are valuable.


XenServer Creedence Reaches Beta Stage

In May we announced to the world that the next version of XenServer would be code named Creedence, and officially opened a public alpha program around this new version of XenServer. That alpha program was more successful than I'd expected with well over a thousand participants. Drawing these participants to Creedence was a combination of enthusiasm for XenServer as a virtualization platform, and a desire to see us make significant improvements in that platform. What greeted them was a full platform refresh including a 64 bit dom0, modern Linux kernel and the most recent Xen Project hypervisor. Over the following weeks we invited this enthusiast community to test three additional builds, each with increasing capabilities and performance.

As the audience for the XenServer Creedence message increased, we heard from some that they wanted to wait until we exited alpha mode and entered a beta phase. I'm happy to report that we've done just that with the release on August 5th, 2014 of beta.1 for XenServer Creedence. This means we're largely feature complete, and are looking seriously at the overall performance and stability of the platform. It also means we want you to start stressing the platform with your real-world workloads. Everything is fair game at this point, and we want to know where the breaking points are. Over the coming weeks you'll see blog posts covering some of the key performance and scalability improvements that we've achieved internally, but that's internally. Your experiences will vary, and we want to know about them.

We want you to push XenServer and tell us where your expectations weren't met. Here's how to do just that:

  1. Download beta.1 and install it on your favorite hardware. If it doesn't install, we want to know. If it didn't detect the devices you have, we want to know. Oddly enough we also want to know if it does!!
  2. Create any number of VMs using your preferred operating systems, or alternatively use your favorite provisioning solution and provision some VMs. If something goes wrong here, let us know.
  3. Install into those VMs the applications you care about. Again if something goes wrong here, let us know.
  4. Exercise your applications, verify if the performance you see is what you'd expect. If it isn't let us know.
  5. While your applications are running, test core virtualization functions like live migration, storage migration, high availability, etc. Everything is fair game. If something doesn't behave as you expect, let us know.

How do you let us know of issues, you ask? Simply create a new account at https://bugs.xenserver.org and report an incident (or incidents). It's that simple. If you're looking to discuss at a deeper technical level why something might be behaving a certain way, or are seeking debugging help, then our developer list might be helpful. You can subscribe to it at: https://lists.xenserver.org/sympa/info/xs-devel, but please understand that the developers are working hard to build XenServer and aren't product support folks.

 

We want Creedence to be the best XenServer release ever, and with your input it can be.     


Debian 7.4 and 7.6 Guest VMs

"Four Debians, Two XenServers"

The purpose of this article is to discuss my success with virtualizing "four" releases of Debian (7.4/7.6; 32-bit/64-bit) in my own test labs.

For more information about Debian, head on over to Debian.org - specifically here to download the 7.6 ISO of your choice ( I used both the full DVD install ISO as well as the net install ISO ).

Note: If you are utilizing the Debian 7.4 net install ISO, the OS will be updated to 7.6 during the install process.  This is just a "heads up" in the event you are keen to stick with a vanilla Debian 7.4 VM for test purposes; in that case, you will need to download the full install DVD for the 7.4 32-bit/64-bit release instead of the net install ISO.

Getting A New VM Started

Once I had the install media of my choice, I copied it to my ISO repository that both XenServer 6.2 and Creedence utilize in my test environment.

From XenCenter (distributed with Creedence Alpha 4) I selected "New VM".

In both 6.2 and Creedence I chose the "Debian 7.0 (Wheezy) 64-bit" VM template:

I then continued through the "New VM" wizard: specifying processors, RAM, networking, and so forth.  On the last step, I made sure as to select "Start the new VM Automatically" before I pressed "Create Now":

Within a few moments, this familiar view appeared in the console:

I installed a minimum instance of both: SSH and BASE system.  I also used guided partitioning just because I was in quite a hurry.

After championing my way through the installer, as expected, Debian 7.4 and 7.6 both prompted that I reboot:

Since this is a PV install, I have access to the Shutdown, Reboot, and Suspend buttons, but I was curious about tools, as memory consumption, etc. were not present under each guest's "Performance" tab:

... and the "Network" tab stated "Unknown":

Before I logged in as root - in both XenServer 6.2 and Creedence Alpha 4 - I mounted the xs-tools.iso.  Once in with root access, I executed the following commands to install xs-tools for these guest VMs:


mkdir iso
mount /dev/xvdd iso/
cd iso/Linux/
./install.sh

The output was exactly the same in both VMs and naturally I selected "Y" to install the guest additions:

Detected `Debian GNU/Linux 7.6 (wheezy)' (debian version 7).

The following changes will be made to this Virtual Machine:
  * update arp_notify sysctl.conf.
  * packages to be installed/upgraded:
    - xe-guest-utilities_6.2.0-1137_amd64.deb

Continue? [y/n] y

Selecting previously unselected package xe-guest-utilities.
(Reading database ... 24502 files and directories currently installed.)
Unpacking xe-guest-utilities (from .../xe-guest-utilities_6.2.0-1137_amd64.deb) ...
Setting up xe-guest-utilities (6.2.0-1137) ...
Mounting xenfs on /proc/xen: OK
Detecting Linux distribution version: OK
Starting xe daemon:  OK

You should now reboot this Virtual Machine.

Following the installer's instructions, I rebooted the guest VMs accordingly.

Creedence Alpha 4 Results

As soon as the reboot was complete I was able to see each guest VM's memory performance as well as networking for both IPv4 and IPv6:

XenServer 6.2

With XenServer 6.2, I found that after installing the guest agent - under the "Network" tab - there still was no IPv4 information for my 64-bit Debian 7.4 and 7.6 guest VMs.  This does not apply to 32-bit Debian 7.4 and 7.6 guest VMs, as the tools installed just fine.

Then I thought about it and tried disabling IPv6 -- presto, the network information appeared for my IPv4 address.  To accomplish this, I edited the following file (so as to avoid adjusting GRUB parameters):

/etc/sysctl.conf

And at the bottom of this file I added:

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv6.conf.eth0.disable_ipv6 = 1
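If a reboot is not immediately convenient, the same settings can, in principle, be loaded in place (I rebooted regardless, as described next):

sysctl -p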

After saving my changes, I rebooted and immediately was able to see my memory usage:

However... I still could not see my IPv4 address under the "Network" tab until I noticed the device ID of the network interface -- it was Device 1 (not 0):

I deleted this interface and re-added a new one from XenCenter.  Instantly, I could see my IPv4 address and the device ID for the network interface was back to 0:

And yes, I tested rebooting -- the address is still shown and memory usage is still measured.  I also tried removing the flags that disable IPv6, but that resulted in seeing "UNKNOWN" again for 64-bit Debian 7.4 and 7.6 guests.  That just means that in XenServer 6.2 I have kept my changes in /etc/sysctl.conf so as to ensure my 64-bit Debian 7.4 and 7.6 guests with XenTools' Guest Additions for Linux work just fine.

So, that's that -- something to experiment and test with: Debian 7.4 or 7.6 32-bit/64-bit in a XenServer 6.2 or Creedence Alpha test environment!

 

--jkbs

@xenfomation


Resetting Lost Root Password in XenServer 6.2

The Situation

Bad things can happen... badly.  In this case the root password to manage a XenServer (version 6.2) was... lost.

Physical or remote login to the XenServer 6.2 host failed authentication, naturally, and XenCenter had been disconnected from the host: requiring an administrator to provide these precious credentials, but in vain.

An Alternate Situation

Had XenCenter been left open (offering command-line access to the XenServer host in question), the following command could have been used from the XenServer's command line to initiate a root password reset:

passwd

Once the root user's password has been changed, the connection from XenCenter to the XenServer host will need to be re-established: using the root username and "new" password.

Once connected, the remainder of this article becomes irrelevant; otherwise, you may very well need to...

Boot into Linux Single User Mode

Be it forgetfulness, a change of guard, another administrator changing the password, or simply a typo in company documentation, the core problem being addressed via this post is that one cannot connect to a XenServer 6.2 host as the root password is... lost or forgotten.

As a secondary problem, one has lost patience and has obtained physical or iLO/iDRAC access to the XenServer in question, but still the root password is not accepted:

 

The Shortest Solution: Breaking The Law of Physical Security

I am not encouraging hacking, but physical interaction with the XenServer in question and altering the boot to "linux single user mode" is the last solution to this problem.  To do this, one will need to have/understand:

  • Physical Access, iLO, iDRAC, etc.
  • A reboot of the XenServer in question will be required

With disclaimers aside, I now highly recommend reading and reviewing the steps outlined below before going through the motions.

Some steps are time sensitive, so being prepared is merely a part of the overall plan.

  1. After gaining physical or iLO/iDRAC access to the XenServer in question, reboot it!  With iLO and iDRAC, there are options to hard or soft reset a system and either option is fine.
  2. Burn the following image into your mind for after the server reboots and runs through hardware BIOS/POST tests, you will see the following for 5 seconds (or so):
  3. Immediately grab the keyboard and enter the following:
    menu.c32 (press enter)
  4.  The menu.c32 boot prompt will appear and, again, you will only have 5 or so seconds to select the "XE" entry and press tab to edit boot options:
  5. Now, at the bottom of the screen one will see the boot entry information.  Don't worry, you have time so make sure it is similar to the following:
  6.  Near the end of the line, one should see "console=tty0 quiet vga=785 splash quiet": replace "quiet vga=785 splash" with "linux single".  More specifically - without the quotes - such as:
    linux single
  7.  With that completed, simply press enter to boot into Linux's single user mode.  You should eventually be dropped into a command line prompt (as illustrated below):
  8. Finally, we can reset the root password to something one can remember by executing the Linux command:
    passwd

  9. When prompted, enter the new root user password: you will be asked to verify it and upon success you should see the following:
  10. Now, enter the following command to reboot the XenServer in question:
    reboot
  11. Obviously, this will reboot the XenServer as illustrated below:
  12.  Let the system fully reboot and present the xsconsole.  To verify that the new password has taken effect, select "Local Command Shell" from xsconsole.  This will require you to authenticate as the root user:
  13. If successful you will be dropped to the local command shell and this also means you can reconnect and manage this XenServer via XenCenter with the new root password!

The reality of a XenServer 64 bit dom0

One of the key improvements in XenServer Creedence is the introduction of a 64 bit control domain (dom0). This is something I've heard requests for over the past several years, and while there are many reasons to move to a 64 bit dom0, there are some equally good reasons for us to have waited this long to make the change. It's also important to distinguish between a 64 bit hypervisor, which XenServer has always used, and a 64 bit control domain which is the new bit.

Isn't XenServer 64 bit bare-metal virtualization?

The good news for people who think XenServer is 64 bit bare metal virtualization is they're not wrong. All versions of XenServer use the Xen Project hypervisor, and all versions of XenServer have configured that hypervisor to run in 64 bit mode. In a Xen Project based environment, the first thing the hypervisor does is load its control domain (dom0). For all versions of XenServer prior to Creedence, that control domain was 32 bit.

The goodness of 64 bit

A 64 bit Linux dom0 allows us to remove a number of bottlenecks which were arbitrarily present with a 32bit Linux dom0.

Goodbye low memory

One of the biggest limiting factors with a 32bit Linux dom0 is the concept of "low memory". On 32 bit computers, the maximum directly addressable memory is 4GB. There are a variety of extensions available to address beyond the 4GB limit, but they all come with a performance and complexity cost. Further, as 32bit Linux evolved, a concept of "Low" memory and "High" memory was created, with most everything related to the kernel using low memory and userspace processes running in high memory. Since kernel operations consume low memory, any kernel memory maps and kernel drivers also consume low memory, and you can see how low memory quickly becomes a precious commodity. Low memory is also increasingly consumed as more total memory is made available to a 32bit Linux dom0. In a typical XenServer installation this value is only 752MB, and with the move to a 64bit dom0 this will soon be a thing of the past.

Improved driver stability

In a 32bit BIOS, MMIO regions are always placed between the 1GB and 3GB physical address space, while with a 64bit BIOS they are always placed at the top of physical memory. If an MMIO hole is created, dom0 can choose to re-map that memory so that it is shadowed in virtual address space. The kernel must map a roughly equal amount of memory to the size of the MMIO holes, which in a 32bit kernel must be done in low memory. Many drivers map kernel memory to MMIO holes on demand, resulting in boot-time success but potential instability in the driver if there is insufficient low memory to satisfy the dynamic request for an MMIO hole remap. Additionally, while XenServer currently supports PAE and can address more than 4GB, if a driver insists on having its memory allocated in the 32bit physical address space, it will fail.

Support for more modern drivers

Hardware vendors want to ensure the broadest adoption for their devices, and with modern computers shipping with 64bit processors and memory configurations well in excess of 4GB, the majority of those drivers have been authored for 64bit operating systems, and 64bit Linux is no different. Moving to a 64bit dom0 gives us an opportunity to incorporate those newer devices and potentially have fewer restrictions on the quantity of devices in a system due to kernel memory usage.

Improved computational performance

During the initial wave of operating systems moving from 32bit to 64bit configurations, one of the often cited benefits was the computational improvements offered from such a migration. We fully intend for dom0 to take advantage of general purpose improvements offered from the processor no longer needing to operate in what is today more of a legacy configuration. One of the most immediate benefits is with a 64bit dom0, we can use 64bit compiler settings and take advantage of modern processor extensions. Additionally, there are several performance concessions (such as the quantity of event channels) and code paths which can be optimized when compiled for a 64bit system versus a 32bit system.

Challenges do exist, though

While there are clear benefits to a 64 bit migration, the move isn't without its issues. The most significant relates to XenServer pool communications. In a XenServer pool, all hosts have historically been required to be at the same version and patch level, except during the process of upgrading the pool. This restriction serves to ensure that any data passed between hosts, any configuration information replicated in xenstore, and all operations are consistent. With the introduction of Storage XenMotion in XenServer 6.1, this requirement was also extended to cross-pool operations when a VDI is being migrated.

 

In essence, you can think of the version requirement as insurance against bad things happening to your precious VM cargo. Unfortunately, that requirement assumes dom0 has the same architecture, and our migration from 32bit to 64bit complicates things. This is because the various data structures used in pool operations were designed with a consistent view of a world based on a 32bit operating system. That view of the world is directly challenged by a pool consisting of both 32bit and 64bit XenServer hosts, as would be present during an upgrade. It's also challenged during cross-pool storage live migration if one pool is 32bit while the second is 64bit. We are working to resolve this problem, at least for the 32bit to 64bit upgrade, but it will be something we want very active participation from our user community to test once we have completed our implementation efforts. After all, we do subscribe to the notion that your virtual machines are pets and not cattle, regardless of the scale of your virtualized infrastructure.


XenServer Driver Disks

Why do we need them

A matter of importance for any OS to consider is how to support hardware. One part of that support is the management of device drivers used to talk to network cards, storage controllers, and other such devices.

For each XenServer release, Citrix works with hardware vendors to try and get the 'latest and greatest' (and obviously stable!) drivers for supporting upcoming hardware inbox - which is for the most part great. Customers can successfully install XenServer on their new hardware, and they don't need to think about drivers at all.

This is all well and good except for the fact that sometimes release schedules don't align so perfectly; several months after a release, an OEM may start shipping new hardware that requires updated drivers. Or perhaps a customer discovers an issue with their system which is caused by a bug in the inbox device driver.

In either case, we need a mechanism to be able to provide the customer with a new driver: enter driver disks.

What are they

A 'Driver Disk' is a particular instance of a 'Supplemental Pack' which is the mechanism that XenServer uses for supporting the installation of RPMs in Dom0.

The format of a driver disk is simply a collection of RPMs, along with some metadata regarding the pack's contents and pack dependencies (obviously RPMs include their own dependency metadata too).

These driver disks are built by hardware vendors using the 'Driver Development Kit' (DDK), which contains a Dom0 kernel for building drivers against.

The vendor simply:

  1. Creates a Makefile/specfile for generating a driver RPM.
  2. Uses build-supplemental-pack.sh to compile the driver RPM, and build an ISO with the appropriate metadata.

The vendor can then use the ISO to install the new drivers on an appropriate XenServer version. Once hardware has been certified (using our self-certification kits) with the new drivers, the vendor can go ahead and submit the drivers to Citrix, who will make them available for customers to download from support.citrix.com.

Respins

The astute among you who are familiar with the Citrix support pages may have noticed that we don't just release a single driver disk for any given XenServer release; we actually release multiple.

The reason for this is that each driver is built against a specific kernel Application Binary Interface (ABI), which has a particular version number. When this ABI version is bumped, because of modifications to the kernel in a kernel hotfix, the new kernel will no longer load a driver module built against the old ABI.

This is why customers might encounter the following scenario:

  1. Customer installs XenServer 6.2 with driver foo x.y.z
  2. Customer installs a driver disk for foo x.y.z+1
  3. Customer installs a kernel hotfix
  4. Customer notices that XenServer is now using foo x.y.z (!)

In this scenario, because the customer has installed a kernel hotfix on 6.2 without installing a re-spun driver disk for foo x.y.z+1, the new kernel shipped with the hotfix will load the driver included in the GA version of 6.2. This is by design, in order to prevent customers from having to take driver updates with kernel hotfixes.
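If in doubt as to which build of a driver the running kernel has actually loaded, this can be checked from Dom0. A minimal check, using the placeholder driver name "foo" from the scenario above:

uname -r                                    # the running (hotfixed) kernel
modinfo foo | grep -iE "version|vermagic"   # version of the loaded module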

So for each kernel hotfix, Citrix will re-spin driver disks (previously released for that product version), and post them on support.citrix.com for customers to install. This gives customers maximum flexibility in being able to choose which kernel hotfixes/drivers they want to take, independently from one another.

More info

If you want to find out more about the packaging of driver disks, or in fact how to build supplemental packs, then please check out the official documentation here:

 

http://docs.vmd.citrix.com/XenServer/6.2.0/1.0/en_gb/supplemental_pack_ddk.html     

