All Things Xen

General ramblings regarding Citrix XenServer & its open source counterpart.

A Dad, A Husband, Artist, Musician, Curious Individual

XenServer Hotfix XS65ESP1035 Released

News Flash: XenServer Hotfix XS65ESP1035 Released

Indeed, I was alerted early this morning (06:00 EST) via email that Citrix has released hotfix XS65ESP1035 for XenServer 6.5 SP1.  The official release and its contents are filed under CTX216249, which can be found here: http://support.citrix.com/article/CTX216249

As of the writing of this article, this hotfix has not yet been added to CTX138115 (entitled "Recommended Updates for XenServer Hotfixes") or, as we like to call it, "The Fastest Way to Patch A Vanilla XenServer With One or Two Reboots!"  I imagine that resource will be updated to reflect XS65ESP1035 soon.

Personally and professionally, I will be installing this hotfix as, per CTX216249, I am excited by what it addresses and fixes:

  • Duplicate entry for XS65ESP1021 was created when both XS65ESP1021 and XS65ESP1029 were applied.
  • When BATMAP (Block Allocation Map) in Xapi database contains erroneous data, the parent VHD (Virtual Hard Disk) does not get inflated causing coalesce failures and ENOSPC errors.
  • After deleting a snapshot on a pool member that is not the pool master, a coalesce operation may not succeed. In such cases, the coalesce process can constantly retry to complete the operation, resulting in the creation of multiple RefCounts that can consume a lot of space on the pool member.
In addition, this hotfix contains the following improvement:
  • This fix lets users set a custom retrans value for their NFS SRs thereby giving them more fine-grained control over how they want NFS mounts to behave in their environment.

(Source: http://support.citrix.com/article/CTX216249)

So....

This is a storage-based hotfix and while we can create VMs all day, we rely on the storage substrate to hold our precious VHDs, so plan accordingly to deploy it!

Applying The Patch Manually

As a disclaimer of sorts, always plan your patching during a maintenance window to prevent any production outages.  For me, I am currently up-to-date and will be rebooting my XenServer host(s) in a few hours, so I manually applied this patch.

Why?  If you look in XenCenter for updates, you won't see this hotfix listed (yet).  If it were available in XenCenter, checks and balances would inform me I need to suspend, migrate, or shut down VMs.  For a standalone host, I really can't do that.  In my pool, I can't reboot for a few hours, but I need this patch installed, so I simply do the following on my XenServer stand-alone server OR XenServer primary/master server:

Using the command line in XenCenter, I make a directory in /root/ called "ups" and then descend into that directory because I plan to use wget (Web Get) to download the patch via its link in http://support.citrix.com/article/CTX216249:

[root@colossus ~]# mkdir ups
[root@colossus ~]# cd ups

Now, using wget I specify what to download over port 80 and to save it as "hf35.zip":

[root@colossus ups]# wget "http://support.citrix.com/supportkc/filedownload?uri=/filedownload/CTX216249/XS65ESP1035.zip" -O hf35.zip

We then see the usual wget progress bar and once it is complete, I can unzip the file "hf35.zip":

HTTP request sent, awaiting response... 200 OK
Length: 110966324 (106M) [application/zip]
Saving to: `hf35.zip'

100%[======================================>] 110,966,324 1.89M/s   in 56s    
2016-08-25 11:06:32 (1.90 MB/s) - `hf35.zip' saved [110966324/110966324]
[root@colossus ups]# unzip hf35.zip 
Archive:  hf35.zip
  inflating: XS65ESP1035.xsupdate   
  inflating: XS65ESP1035-src-pkgs.tar.bz2

I'm a big fan of using shortcuts - especially where UUIDs are involved.  Now that I have the patch ready to expand onto my XenServer master/stand-alone server, I want to create some kind of variable so I don't have to remember my host's UUID or the patch's UUID. 

For the host, I can simply source in a file that contains the XenServer primary/master server's INSTALLATION_UUID (better known as the host's UUID):

[root@colossus ups]# source /etc/xensource-inventory 
[root@colossus ups]# echo $INSTALLATION_UUID
207cd7c1-da20-479b-98bc-e84cac64d0c0

With the variable $INSTALLATION_UUID set, I can now expand the patch and capture its own UUID:

[root@colossus ups]# patchUUID=`xe patch-upload file-name=XS65ESP1035.xsupdate`
[root@colossus ups]# echo $patchUUID
cdf9eb54-c3da-423d-88ca-841b864f926b

NOW, I apply the patch to the host (yes, it still needs to be rebooted, but within a few hours) using both variables in the following command:

[root@colossus ups]# xe patch-apply uuid=$patchUUID host-uuid=$INSTALLATION_UUID
   
Preparing...                ##################################################
kernel                      ##################################################
unable to stat /sys/class/block//var/swap/swap.001: No such file or directory
Preparing...                ##################################################
sm                          ##################################################
Preparing...                ##################################################
blktap                      ##################################################
Preparing...                ##################################################
kpartx                      ##################################################
Preparing...                ##################################################
device-mapper-multipath-libs##################################################
Preparing...                ##################################################
device-mapper-multipath     ##################################################

At this point, I can back out of the "ups" directory and remove it.  Likewise, I can also check to see if the patch UUID is listed in the XAPI database:

[root@colossus ups]# cd ..
[root@colossus ~]# rm -rf ups/
[root@colossus ~]# ls
support.tar.bz2
[root@colossus ~]# xe patch-list uuid=$patchUUID
uuid ( RO)                    : cdf9eb54-c3da-423d-88ca-841b864f926b
              name-label ( RO): XS65ESP1035
        name-description ( RO): Public Availability: fixes to Storage
                    size ( RO): 21958176
                   hosts (SRO): 207cd7c1-da20-479b-98bc-e84cac64d0c0
    after-apply-guidance (SRO): restartHost

So, nothing really special -- just a quick way to apply patches to a XenServer primary/master server.  In the same manner, you can substitute the $INSTALLATION_UUID with other host UUIDs in a pool configuration, etc.
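
For pooled deployments there is also a single command that applies an uploaded patch to every host in the pool at once; a minimal sketch, reusing the $patchUUID variable from above (each member still needs its own reboot per the after-apply guidance):

[root@colossus ~]# xe patch-pool-apply uuid=$patchUUID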

Well, off to reboot and thanks for reading!

 

-jkbs | @xenfomation | My Citrix Blog

To receive updates about the latest XenServer Software Releases, log in or sign up to pick and choose the content you need from http://support.citrix.com/customerservice/

 


Sources

Citrix Support Knowledge Center: http://support.citrix.com/article/CTX216249

Citrix Support Knowledge Center: http://support.citrix.com/customerservice/

Citrix Profile/RSS Feeds: http://support.citrix.com/profile/watches/



Resetting Lost Root Password in XenServer 7.0

XenServer 7.0, Grub2, and a Lost Root Password

In a previous article I detailed how one could reset a lost root password for XenServer 6.2.  While that article is not limited to 6.2 (it works just as well for 6.5, 6.1, and 6.0.2), this article is dedicated to XenServer 7.0, as grub2 has been brought in to replace extlinux.

As such, if the local root user's (LRU) password for a XenServer 7.0 host is forgotten, physical (or "lights out") access to the host and a reboot will be required.  The contrast comes with grub2: the method to boot the XenServer 7.0 host into single-user mode and how to reset the root password to a known token.

The Grub Boot Screen

Once physical or "lights out" access to the XenServer 7.0 host in question is obtained, the following screen will appear on reboot:

It is important to note that once this screen appears, you only have four seconds to take action before the host proceeds to boot the kernel.

As should be default, the XenServer kernel is highlighted.  One will want to immediately press the "e" key (for edit).

This will then refresh the grub interface - stopping any count-down-to-boot timers - and reveal the boot entry.  It is within this window (using the up, down, left, and right keys) that one will want to navigate to around line four or five and isolate "ro nolvm":

 

Next, one will want to remove (or backspace/delete) the "ro" characters and type in "rw init=/sysroot/bin/sh", or as illustrated:

 

Don't worry if the directive is not on one line!

 

With this change made, press Control and X at the same time; this will boot the XenServer kernel into single-user-style mode, better known as Emergency Mode:

How to Change Root's Password

From the Emergency Mode prompt, execute the following command:

chroot /sysroot

Now, one can execute the "passwd" command to change root's credentials:
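
For reference, the full sequence from the Emergency Mode prompt looks like this (prompt text paraphrased):

chroot /sysroot
passwd
# New password: <enter the new root password>
# Retype new password: <confirm it>
exit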

Finally....

Now that root's credentials have been changed, utilize Control+Alt+Delete to reboot the XenServer 7.0 host and one will find via SSH, XenCenter, or directly that the root password has been changed: the host is ready to be managed again.

 


iSCSI and Jumbo Frames

So, you either just setup iSCSI or are having performance issues with your current iSCSI device. Here are some pointers to ensure "networking" is not the limiting factor:

1. Are my packets even making it to the iSCSI target?
Always check in XenCenter that the NICs responsible for storage are pointing to the correct target IPs. If they are, ensure you can ping these targets from within XenServer's command line:

ping x.x.x.x

If you cannot ping the target, that may be the issue.

Use the 'route' command to show if XenServer has a device and target to hit on the iSCSI target's subnet. If route shows nothing related to your iSCSI target IPs or takes a long time to show the target's IP/Route information, revisit your network configuration: working from the iSCSI device config, switch ports, and all the way up to the storage interface defined for your XenServer(s).
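
As a quick sketch - assuming, purely for illustration, that the iSCSI network is 10.10.10.0/24 - the check looks like:

route -n
# or narrow the output to just the storage subnet:
route -n | grep "10.10.10."

The -n flag skips DNS lookups, which also avoids the slow output mentioned above.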

Odds are the packets are trying to route out via another interface, or there is a cable mismatch/VLAN tag mismatch.  Or, at worst, the network cable is bad!

2. Is your network really setup for Jumbo Frames?
If you can ping your iSCSI targets but are having performance issues with Jumbo Frames (9000- or 4500-byte MTU, depending on vendor), ensure your storage interface on XenServer is configured to leverage this MTU size.

One can also execute a ping command to see if there is fragmentation or support enabled for the larger MTUs:

ping x.x.x.x -M do -s 8972

This tells XenServer to ping your iSCSI target, without fragmenting frames, at a 9000-byte MTU (the remaining 28 bytes come from the IP and ICMP headers, so use 8972).
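
For vendors using a 4500-byte MTU, the same arithmetic applies (4500 minus the 28 bytes of IP and ICMP headers):

ping x.x.x.x -M do -s 4472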

If this returns fragmentation or other errors, check the cabling from XenServer along with the switch settings AND the iSCSI setup. Sometimes these attributes can be reset after firmware updates to the iSCSI-enabled, managed storage device.

3. Always make sure your network firmware and drivers are up to date!

And these are but three simple ways to isolate issues with iSCSI connectivity/performance.  The rest, well, more to come...



--jkbs | @xenfomation | XenServer.org Blog


History and Syslog Tweaks

Introduction

As XenServer Administrators already know (or will know), there is one user "to rule them all"... and that user is root.  Be it an SSH connection or command-line interaction with DOM0 via XenCenter, while you may be typing commands in RING3 (user space), you are doing it as the root user.

This is quite appropriate for XenServer's architecture, as once the bare-metal is powered on, one is not booting into the latest "re-spin" of some well-known (or completely obscure) Linux distribution.  Quite the opposite.  One is actually booting into the virtualization layer: dom0, or the Control Domain.  This is where separation of Guest VMs (domUs) and user space programmes (ping, fsck, and even XE) begins... even at the command line for root.

In summary, it is not uncommon for many Administrators to require root access to a XenServer... at one time.  Thus, this article will show my own means of adding granularity to the HISTORY command as well as logging (via Syslog) of each and every root user session.

Assumptions

As BASH is the default shell, this article assumes that one has knowledge of BASH, things "BASH", Linux-based utilities, and so forth.  If one isn't familiar with BASH - how BASH leverages global and local scripts to set up a user environment, etc. - I have provided the following resources:

  • BASH login scripts : http://www.linuxfromscratch.org/blfs/view/6.3/postlfs/profile.html
  • Terminal Colors : http://www.tldp.org/HOWTO/Bash-Prompt-HOWTO/x329.html
  • HISTORY command : http://www.tecmint.com/history-command-examples/

Purpose

The purpose I wanted to achieve was not just a cleaner way to look at the history command, but also to log the root user's session information: recording their access means, what command they ran, and WHEN.


In short, we go from this:

To this (plus record of each command in /var/log/user.log | /var/log/messages):

What To Do?

First, we want to backup /etc/bashrc to /etc/backup.bashrc in the event one would like to revert to the original HISTORY method, etc.  This can be done via the command-line of the XenServer:

cp /etc/bashrc /etc/backup.bashrc

Secondly, the following addition should be added to the end of /etc/bashrc:

##[ HISTORY LOGGING ]#######################################################
#
# ADD USER LOGGING AND HISTORY COMMAND CONTEXT FOR SOME AUDITING
# DEC 2014, JK BENEDICT
# @xenfomation
#
#########################################################################

# Grab current user's name
export CURRENT_USER_NAME=`id -un`

# Grab current user's level of access: pts/tty/or SSH
export CURRENT_USER_TTY="local `tty`"
checkSSH=`set | grep "^SSH_CONNECTION" | wc -l`

# SET THE PROMPT
if [ "$checkSSH" == "1" ]; then
     export CURRENT_USER_TTY="ssh `set | grep "^SSH_CONNECTION" | awk {' print $1 '} | sed -rn "s/.*?='//p"`"
     export PROMPT_COMMAND='history -a >(tee -a ~/.bash_history | logger -t "HISTORY for $CURRENT_USER_NAME[$$] via $SSH_CONNECTION : ")'
else
     export CURRENT_USER_TTY
     export PROMPT_COMMAND='history -a >(tee -a ~/.bash_history | logger -t "HISTORY for $CURRENT_USER_NAME[$$] via $CURRENT_USER_TTY : ")'
fi

# SET HISTORY SETTINGS
# Lines to retain, ignore dups, time stamp, and user information
# For date variables, check out http://www.computerhope.com/unix/udate.htm
export HISTSIZE=5000
export HISTCONTROL=ignoredups
export HISTTIMEFORMAT=`echo -e "\e[1;31m$CURRENT_USER_NAME\e[0m[$$] via \e[1;35m$CURRENT_USER_TTY\e[0m on \e[0;36m%d-%m-%y %H:%M:%S%n\e[0m       "`

A file providing this addition can be downloaded from https://github.com/xenfomation/bash-history-tweak

What Next?

Well, with the changes added and saved to /etc/bashrc, exit the command-line prompt or SSH session, then log back in to test the changes:

exit

hostname
whoami
history
tail -f /var/log/user.log

... And that is that.  So, while there are 1,000,000 more sophisticated ways to achieve this, I thought I'd share what I have used for a long time... have fun and enjoy!

--jkbs | @xenfomation


Basic Network Testing with IPERF

Purpose

I am often asked how one can perform simple network testing within, outside, and into XenServer.  This is a great question as – by itself – it is simple enough to answer.  However, depending on what one desires out of “network testing” the answer can quickly become more complex.

As such, I have decided to answer this question using a long-standing, free utility called IPERF (well, IPERF2).  It is a rather simple, straightforward, but powerful utility I have used over many, many years.  Links to IPERF will be provided - along with documentation on its use - as it will serve in this guide as a way to:


- Test bandwidth between two or more points

- Determine bottlenecks

- Assist with black box testing or “what happens if” scenarios

- Use a tool that runs on both Linux and Windows

- And more…

IPERF: A Visual Breakdown

IPERF has to be installed on at least two separate end points.  One point acts as a server/receiver and the other acts as a client/transmitter.  This way, network testing can be done on anything from a simple subnet to a complex, routed network: end-to-end, using TCP- or UDP-generated traffic:

The visual shows an IPERF client transmitting data over IPv4 to an IPERF receiver.  Packets traverse the network - from wireless routers and through firewalls - from the client side to the server side over port 5001.

IPERF and XenServer

The key to network testing is in remembering that any device which is connected to a network infrastructure – Virtual or Physical – is a node, host, target, end point, or just simply … a networked device.

With regards to virtual machines, XenServer obviously supports Windows and Linux operating systems.  IPERF can be used to test virtual-to-virtual networking as well as virtual-to-physical networking.  If we stack virtual machines in a box to our left and stack physical machines in a box to our right – despite a common subnet or routed network – we can quickly see the permutations of how "Virtual and Physical Network Testing" can be achieved with IPERF transmitting data from one point to another:

And if one wanted, they could just as easily test networking for this:

Requirements

To illustrate a basic server/client model with IPERF, the following will be required:

- A Windows 7 VM that will act as an IPERF client

- A CentOS 5.x VM that will act as a receiver.

- IPERF2 (the latest version of IPERF, or "IPERF3", can be found at https://github.com/esnet/iperf or, more specifically, http://downloads.es.net/pub/iperf/)

The reason for using IPERF2 is quite simple: portability and compatibility on two of the most popular operating systems that I know are virtualized.  In addition, the same steps to installing IPERF2 on these hosts can be carried out on physical systems running similar operating systems, as well. 

The remainder of this article - regarding IPERF2 - will require use of the MS-DOS command-line as well as the Linux shell (of choice).  I will carefully explain all commands, so if you are “strictly a GUI” person, you should fit right in.

Disclaimer

When utilizing IPERF2, keep in mind that this is a traffic generator.  While one can control the quantity and duration of traffic, it is still network traffic.

So, consider testing during non-peak hours or after hours as to not interfere with production-based network activity.

Windows and IPERF

The Windows port of IPERF 2.0.5 requires Windows XP (or greater) and can be downloaded from:

http://sourceforge.net/p/iperf/patches/_discuss/thread/20d4a4b0/5c44/attachment/Iperf.zip

Within the .zip file you will find two directories: one labeled DEBUG and the other labeled RELEASE.  Extract the Iperf.exe program to a directory you will remember, such as C:\iperf\

Now, accessing the command line (cmd.exe), navigate to C:\iperf\ and execute:

iperf

The following output should appear:

Linux and IPERF

If you have additional repos already configured for CentOS, you can simply execute (as root):

yum install iperf

If that fails, one will need to download the Fedora/RedHat EPEL-Release RPM file for the version of CentOS being used.  To do this (as root), execute:

wget  http://dl.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm
rpm -Uvh epel-release-5-4.noarch.rpm

 

*** Note that the above EPEL-Release RPM file is just an example (a working one) ***

 

Once epel-release-5-4.noarch.rpm is installed, execute:

yum install iperf

And once complete, as root execute iperf and one should see the following output:


Notice that it is the same output as what is being displayed from Windows.  IPERF2 is expecting a "-s" (server) or "-c" (client) command-line option with additional arguments.

IPERF Command-Line Arguments

On either Windows or Linux, a complete list of options for IPERF2 can be listed by executing:

iperf --help

A few good resources of examples to use IPERF2 options for the server or client can be referenced at:

http://www.slashroot.in/iperf-how-test-network-speedperformancebandwidth

http://samkear.com/networking/iperf-commands-network-troubleshooting

http://www.techrepublic.com/blog/data-center/handy-iperf-commands-for-quick-network-testing/

For now, we will focus on the options needed for our server and client:

-f, --format    [kmKM]   format to report: Kbits, Mbits, KBytes, MBytes
-m, --print_mss          print TCP maximum segment size (MTU - TCP/IP header)
-i, --interval  #        seconds between periodic bandwidth reports
-s, --server             run in server mode
-c, --client    <host>   run in client mode, connecting to <host>
-t, --time      #        time in seconds to transmit for (default 10 secs)

Lastly, there is a TCP/IP window setting.  This goes beyond the scope of this document as it relates to the TCP frame/windowing of data.  I highly recommend reading either of the two following links - especially for Linux - as there has always been some debate as to what is “best to be used”:

https://kb.doit.wisc.edu/wiscnet/page.php?id=11779

http://kb.pert.geant.net/PERTKB/IperfTool

Running An IPERF Test

So, we have IPERF2 installed on Windows 7 and on CentOS 5.10.  Before one performs any testing, ensure any antivirus software does not block iperf.exe from running and that port 5001 is open across the network.

Again, another port can be specified, but the default port IPERF2 uses for both client and server is 5001.
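
If port 5001 cannot be opened, both sides can agree on an alternate port with the -p option; for example, using port 5002 on each end:

iperf -s -p 5002
iperf -c x.x.x.48 -p 5002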

Server/Receiver Side

The Server/Receiver side will be on the CentOS VM.

Following the commands above, we want to execute the following to run IPERF2 as a server/receiver on our CentOS VM:

iperf -s -f M -m -i 10

The output should show:

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 0.08 MByte (default)
------------------------------------------------------------

The TCP window size has been previously commented on and the server is now ready to accept connections (press Control+C or Control+Z to exit).

Client/Transmission Side

Let us now focus on the client side to start sending data from the Windows 7 VM to the CentOS VM.

From Windows 7, the command line to start transmitting data for 30 seconds to our CentOS host (x.x.x.48) is:

iperf -c x.x.x.48 -t 30 -f M

Pressing enter, the traffic flow begins and the output from the client side looks like this:

From the server side, the output looks something like this:

And there we have it – a first successful test from a Windows 7 VM (located on one XenServer) to a CentOS 5.10 VM (located on another XenServer).

Understanding the Results

From either the client side or server side, results are shown by time and average.  The key item to look for from either side is:

0.0-30.0 sec  55828 MBytes  1861 MBytes/sec

Why?  This shows the average over the course of 0.0 to 30.0 seconds in terms of total megabytes transmitted as well as average megabytes of data sent per second.  In addition, since the "-f M" argument was passed as a command-line option, the output is calculated in megabytes accordingly.

In this particular case, we simply illustrated that from one VM to another VM, we transferred data at 1861 megabytes per second.
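
As a quick sanity check of that math: 55828 MBytes divided by 30 seconds is roughly 1861 MBytes/sec, which works out to approximately 14.9 gigabits per second.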

*** Note that this test was performed in a local lab with lower-end hardware than what you probably have! ***

--jkbs | @xenfomation

 


Increasing Ubuntu's Resolution

Maximizing Desktop Real-estate with Ubuntu

With the addition of Ubuntu (and the likes) to Creedence, you may have noticed that the default resolution is 1024x768.  I certainly noticed it and, with much work on 6.2 and Creedence Beta, I have a quick solution for maximizing the screen resolution for you.

The thing to consider is that a virtual frame buffer is what is essentially being used.  You can re-invent X configs all day, but the shortest path is to first ensure that the following packages are installed on your Ubuntu guest VM:

sudo apt-get install xvfb xfonts-100dpi xfonts-75dpi xfstt

Once that is all done installing, the next step is to edit Grub -- specifically /etc/default/grub:

sudo vi /etc/default/grub

Considering your monitor's maximum resolution (or not, if you want to remote into Ubuntu using XRDP), look for the variable GRUB_GFXMODE.  This is where you can specify the desired BOOT resolutions that we will instruct the guest VM to SUSTAIN into user-space:

GRUB_GFXMODE=1280x960,1280x800,1280x720,1152x768,1152x700,1024x768,800x600

Next, adjust the variable GRUB_GFXPAYLOAD_LINUX to equal keep, or:

GRUB_GFXPAYLOAD_LINUX=keep

Save the changes and be certain to execute the following:

sudo update-grub
sudo reboot

Now, you will notice that even during the boot phase that the resolution is large and this will carry into user space: Lightdm, Xfce, and the likes.
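
Once logged in, a quick way to confirm which resolution carried into the X session (assuming an xrandr-capable desktop such as Xfce) is:

xrandr | head -n 2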

Finally, I would highly suggest installing XRDP for your Guest VM.  It allows you to access that Ubuntu/Xubuntu/etc. desktop remotely.  Specific details regarding this can be found through Ubuntu's forum:

http://askubuntu.com/questions/449785/ubuntu-14-04-xrdp-grey


Enjoy!

--jkbs | @xenfomation

 

 


VGA over Cirrus in XenServer 6.2

Achieve Higher Resolution and 32Bpp

For many reasons - not exclusive to XenServer - the Cirrus video driver has been a staple where a basic, somewhat hardware-agnostic video driver is needed.  When one creates a VM within XenServer (specifically 6.2 and previous versions), the Cirrus video driver is used by default for video... and it does the job.

I had been working on a project with my mentor related to an eccentric OS, but I needed a way to get more real estate to test a HID pointing device by increasing the screen resolution.  This led me to find that at some point in our upstream code there were platform (virtual machine metadata) options that allowed one to "ditch" Cirrus and 1024x768 resolution for higher resolutions and color depth via a standard VGA driver.

This is not tied into GPU pass-through, nor is it a hack.  It is a valuable way to achieve 32bpp color in Guest VMs with video support, as well as to obtain higher resolutions.

Windows 7: A Before and After Example

To show the difference between "default Cirrus" and the Standard VGA driver (which I will discuss how to switch to shortly), Windows 7 Enterprise had the following resolution to offer me with Cirrus:


Now, after switching to standard VGA for the same Guest VM and rebooting, I now had the following resolution options within Windows 7 Enterprise:

Switching a Guest for VGA

After you create your VM – Windows, Linux, etc – perform the following steps to enable the VGA adapter:

 

  • Halt the Guest VM
  • From the command line, find the UUID of your VM:
 xe vm-list name-label="Name of your VM"
  • Taking the UUID value, run the following two commands:
 xe vm-param-set uuid=<UUID of your VM> platform:vga=std
 xe vm-param-set uuid=<UUID of your VM> platform:videoram=4
  •  Finally, start your VM and one should be able to achieve higher resolution at 32bpp.

 

It is worth noting that the max amount of "videoram" that can be specified is 16 (megabytes).
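
To verify the settings took hold, one can read the VM's platform map back; a quick sketch:

 xe vm-param-get uuid=<UUID of your VM> param-name=platform

The output should include vga: std and videoram: 4 alongside the other platform keys.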

Switching Back to Cirrus

If – for one reason or another – you want to reset/remove these settings as to stick with the Cirrus driver, run the following commands:

 xe vm-param-remove uuid=<UUID of your VM> param-name=platform param-key=vga
 xe vm-param-remove uuid=<UUID of your VM> param-name=platform param-key=videoram

Again, reboot your Guest VM and with the lack of VGA preference, the default Cirrus driver will be used.

What is the Catch?

There is no catch and no performance hit.  The VGA driver's "videoram" specification is carved out of the virtual memory allocated to the Guest VM.  So, for example, if you have 4GB allocated to a Guest VM, subtract at max 16 megabytes from 4GB.  Needless to say, that is a pittance and does not impact performance.

Speaking of performance, my own personal tests were simple and repeated several times:

 

  • Utilized a tool that will remain anonymous
  • Use various operating systems with Cirrus and resolution at 1024 x 768
  • Run 2D graphic test suite
  • Write down Product X, Y, or Z’s magic number that represents good or bad performance
  • Apply the changes to the VM to use VGA (keeping the resolution at 1024 x 768 for some kind of balance)
  • Run the same volley of 2D tests after a reboot
  • Write down Product X, Y or Z’s magic number that represents good or bad performance

 

In the end, I personally found from my experience that there was a very minor, but noticeable difference in Cirrus versus VGA.  Cirrus usually came in 10-40 points below VGA at the 1024 x 768 level.  Based on the test suite used, this is nothing spectacular, but it is certainly a benefit as I found no degraded performance across XenServer (other Guests), etc.

I hope this helps and as always: questions and comments are welcomed!

 

--jkbs | @xenfomation

 


Creedence: Debian 7.x and PVHVM Testing

Introduction

On my own time and on my own testing equipment, I have been able to run many Guest VMs in PVHVM containers - both before Creedence and after its release to the public back in June.  With last week's broadcast of Creedence Beta 3's release, I was naturally excited to see Tim's spotlight on PVHVM, and the following article's intent is to show - in a test environment only - how I was able to run Debian 7.x (64-bit) in the same fashion.

For more information regarding combining PV and HVM to establish a PVHVM container, Tim linked a great article in his Creedence Beta 3 post last Monday that I highly recommend you read, as the finer details are out of scope for this article's intent and purpose.

Why is this important to me?  Quite simply we can go from this....

... to this ...

So now, let's make a PVHVM container for a Debian 7.x (64-Bit) Guest VM within XenCenter!

Requirements

1.  Creedence Beta 3 and XenCenter

2.  The full installation ISO for Debian 7.x (from https://www.debian.org/CD/http-ftp/#stable )

3.  Any changes mentioned below should not be applied to any of the stock Debian templates

4.  This should not be performed on your production environment

Creating A Default Template

With XenCenter open, ensure that from the View options one has "XenServer Templates" selected:

We should now see the default templates that XenServer installs:

1.  Right-click on the "Debian Wheezy 7 (64-bit)" template and save it as "Debian 7":

 

2.  This will produce a "custom template" - highlight it and copy the UUID of the custom template:

3.  The remainder of this configuration will take place from the command-line.

4.  To make the changes to the custom template easier, export the UUID of the custom template we created to avoid copy/paste errors:

export myTemp="af84ad43-8caf-4473-9c4d-8835af818335"
echo $myTemp
af84ad43-8caf-4473-9c4d-8835af818335

5.  With the $myTemp variable created, let us first convert this custom template to a default template by executing:

xe template-param-set uuid=$myTemp other-config:default_template=true

xe template-param-remove uuid=$myTemp param-name=other-config param-key=base_template_name

6.  Now configure the template's "platform" variable to leverage VGA graphics:

xe template-param-set uuid=$myTemp platform:viridian=false platform:device_id=0001 platform:vga=std platform:videoram=16

7.  Due to how some distros work with X, clear the PV-args and set a "vga=792" flag:

xe template-param-set uuid=$myTemp PV-args="vga=792"

8.  Disable the PV-bootloader:

xe template-param-set uuid=$myTemp PV-bootloader=""

9.  Specify that the template uses an HVM-style bootloader (DVD/CD first, then hard drive, and then network):

xe template-param-set uuid=$myTemp HVM-boot-policy="BIOS order"
xe template-param-set uuid=$myTemp HVM-boot-params:order="dcn"
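
Before moving on, the boot-related fields can be double-checked against the template; a sketch:

xe template-param-list uuid=$myTemp | grep -E "HVM-boot|PV-args|PV-bootloader"

HVM-boot-policy should read "BIOS order" and PV-bootloader should be empty.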

 

Now, before creating a Debian 7.x Guest VM, one should now see in XenCenter that "Debian 7" is listed as a "default template":

 

Lastly, for the VGA flag and what it means to most distros, the following table explains the VGA flag bit settings used to achieve an X-by-Y resolution at a given color depth:

VGA Resolution and Color Depth reference Chart:

Depth    800×600    1024×768   1152×864   1280×1024   1600×1200
8 bit    vga=771    vga=773    vga=353    vga=775     vga=796
16 bit   vga=788    vga=791    vga=355    vga=794     vga=798
24 bit   vga=789    vga=792    -          vga=795     vga=799

Create A New Debian Guest

From here, one should be able to create a new Guest VM using the template we have just created and walk through the entire install:

Post installation, tools can be installed as well!

Enjoy and happy testing!

 

jkbs | @xenfomation


Before Electing a New Pool Master

Overview

The following is a reminder of specific steps to take before electing a new pool master - especially in High Availability-enabled deployments.  There are circumstances where this will happen automatically due to High Availability (by design) or in an emergency situation, but nevertheless, the following steps should be taken when electing a new pool master where High Availability is enabled.

Disable High Availability

Before electing a new master one must disable High Availability.  The reason is quite simple:

If a new host is designated as master with HA enabled, the subsequent processes and transition time can lead HA to see that a pool member is down.  It is doing what it is supposed to do in the "mathematical" sense, but in "reality" it is actually confused.

The end result is that HA could either recover with some time or fence as it attempts to apply fault tolerance in contradiction to the desire to "simply elect a new master".

It is also worth noting that upon recovery - if any Guests which had a mounted ISO are rebooted on another host - "VDI not found" errors can appear, although this is not the case.  The mounted ISO image is seen as a VDI and, if that resource is not available on another host, the Guest VM will fail to resume: presenting the generic VDI error.

Steps to Take

HA must be disabled and for safe practice, I always recommend ejecting all mounted ISO images.  The latter can be accomplished by executing the following from the pool master:

xe vm-cd-eject --multiple

As for HA it can be disabled in two ways: via the command-line or from XenCenter.

From the command line of the current pool master, execute:

xe pool-ha-disable
xe pool-sync-database

If desired - just for safeguarding one's work - those commands can be executed on every other pool member.

As for XenCenter one can select the Pool/Pool Master icon in question and from the "HA" tab, select the option to disable HA for the pool.

Workload Balancing

For versions of XenServer utilizing Workload Balancing it is not necessary to halt this.

Now that HA is disabled, switch Pool Masters and when all servers are in an active state: re-enable HA from XenCenter or from the command line:

xe pool-recover-slaves
xe pool-ha-enable
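
For reference, the election itself - the "switch Pool Masters" step above - can also be performed from the command line; a sketch, assuming the UUID of the desired new master is known:

xe pool-designate-new-master host-uuid=<UUID of the new master>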

I hope this is helpful and as always: questions and comments are welcomed!

 

--jkbs | @xenfomation


Log Rotation and Syslog Forwarding

A Continuation of Root Disk Management

First, this article is applicable to any sized XenServer deployment and, secondly, it is a continuation of my previous article regarding XenServer root disk maintenance.  The difference is that - for all XenServer deployments - the topic revolves specifically around Syslog: from tuning log rotation, specifying the number of logs to retain, and leveraging compression, to, of course... Syslog forwarding.

All of this is an effort to share tips with new (or seasoned) XenServer Administrators on the options available to ensure necessary Syslog data does not fill a XenServer root disk while ensuring - for certain industry-specific requirements - that log-specific data is retained without sacrifice.

Syslog: A Quick Introduction

So, what is this Syslog?  In short it can be compared to the Unix/Linux equivalent of Windows Event Log (along with other logging mechanisms popular to specific applications/Operating Systems). 

The slightly longer explanation is that Syslog is not only a daemon, but also a protocol: established long ago for Unix systems to record system and application messages to local disk, as well as offering the ability to forward the same log information to its peers for redundancy, concentration, and to conserve disk space on highly active systems.  For more detailed information on the finer points of the Syslog protocol and daemon, one can review the IETF's specification at http://tools.ietf.org/html/rfc5424.

On a stand-alone XenServer, the Syslog daemon is started on boot and its configuration file for handling source, severity, types of logs, and where to store them are defined in /etc/syslog.conf.  It is highly recommended that one does not alter this file unless necessary and if one knows what they are doing.  From boot to reboot, information is stored in various files: found under the root disk's /var/log directory.

Taken from a fresh installation of XenServer, the following shows various log files that store information specific to a purpose.  Note that the items in "brown" are sub-directories:

For those seasoned in administering XenServer it is visible that from the kernel-level and user-space level there are not many log files.  However, XenServer is verbose about logging for a very simple reason: collection, analysis, and troubleshooting if an issue should arise.

So for a lone XenServer (by default) logs are essentially received by the Syslog daemon and based on /etc/syslog.conf - as well as the source and type of message - stored on the local root file system as discussed:

Within a pooled XenServer environment things are pretty much the same: for the most part.  As a pool has a master server, log data for the Storage Manager (as a quick example) is trickled up to the master server.  This is to ensure that while each pool member is recording log data specific to itself, the master server has the aggregate log data needed to promote troubleshooting of the entire pool from one point.

Log Rotation

Log rotation, or "logrotate", is what ensures that Syslog files in /var/log do not grow out of hand.  Much like Syslog, logrotate utilizes a configuration file to dictate how often, at what size, and whether compression should be used when archiving a particular Syslog file.  The term "archive" really means rotating out a current log so a fresh, current log can take its place.

Post XenServer installation and before usage, one can measure the amount of free root disk space by executing the following command:

df -h

The output will be similar to the following and the line one should be most concerned with is in bold font:

Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             4.0G  1.9G  2.0G  49% /
none                  381M   16K  381M   1% /dev/shm
/opt/xensource/packages/iso/XenCenter.iso
                       52M   52M     0 100% /var/xen/xc-install

One can see from the example that only 49% of the root disk on this XenServer host has been used.  Repeating this process as implementation ramps up, an administrator should be able to measure how best to tune logrotate's configuration file.  After install, /etc/logrotate.conf should resemble the following:

# see "man logrotate" for details
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 4
# create new (empty) log files after rotating old ones
create
# uncomment this if you want your log files compressed
#compress
# RPM packages drop log rotation information into this directory
include /etc/logrotate.d
# no packages own wtmp -- we'll rotate them here
/var/log/wtmp {
    monthly
    minsize 1M
    create 0664 root utmp
    rotate 1
}
/var/log/btmp {
    missingok
    monthly
    minsize 1M
    create 0600 root utmp
    rotate 1
}
# system-specific logs may be also be configured here.

In previous versions, /etc/logrotate.conf was set up to retain 999 archived/rotated logs, but as of 6.2 the configuration above is standard.

Before covering the basic premise and purpose of this configuration file, one can see this exact configuration file explained in more detail at http://www.techrepublic.com/article/manage-linux-log-files-with-logrotate/

The options declared in the default configuration are conditions that, when met, rotate logs accordingly:

  1. The first option specifies when to invoke log rotation.  By default this is set to weekly and may need to be adjusted to "daily".  This will only swap log files out for new ones and will not delete log files.
  2. The second option specifies how long to keep archived/rotated log files on the disk.  The default is to remove archived/rotated log files after four rotations (with weekly rotation, four weeks' worth).  This will delete log files that reach this age.
  3. The third option specifies what to do after rotating a log file out.  The default - which should not be changed - is to create a new/fresh log after rotating out its older counterpart.
  4. The fourth option - which is commented out - specifies another action, but this time for the archived log files.  It is highly recommended to remove the comment mark so that archived log files are compressed: saving on disk space.
  5. A fifth option, which is not present in the default configuration, is the "size" option.  This specifies how to handle logs that reach a certain size, such as "size 15M".  This option should be employed - especially if an administrator has SNMP logs that grow exponentially or notices that a particular XenServer's Syslog files are growing faster than logrotate can rotate and dispose of archived files.
  6. The "include" option specifies a sub-directory wherein unique logrotate configurations can be specified for individual log files.
  7. The remaining portion should be left as is.


In summary for logrotate, one is advised to measure use of the root disk using "df -h" and to tune logrotate.conf as needed to ensure Syslog does not inadvertently consume available disk space.
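
Pulling those options together, a tuned /etc/logrotate.conf might resemble the following - a sketch only, with daily rotation, compression enabled, and a "size" cap added; adjust the values to your own measurements (the wtmp/btmp stanzas from the default file would remain as-is):

# rotate log files daily
daily
# keep 7 days worth of backlogs
rotate 7
# create new (empty) log files after rotating old ones
create
# compress archived logs to save root disk space
compress
# rotate any log early once it reaches 15 megabytes
size 15M
# RPM packages drop log rotation information into this directory
include /etc/logrotate.d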

And Now: Syslog Forwarding

Again, this is a long-standing feature and one I have been looking forward to explaining, highlighting, and providing examples for.  However, I have had a kind of writer's block for many reasons: mainly that it ties into Syslog, logrotate, and XenCenter, but also that there is a tradeoff.

I mentioned before that Syslog can forward messages to other hosts.  Furthermore, it can forward Syslog messages to other hosts without writing a copy of the log to local disk.  What this means is that a single XenServer or a pool of XenServers can send their log data to a "Syslog Aggregator".

The trade-off is that one cannot generate a server status report via XenCenter, but must instead gather the logs from the Syslog aggregate server and manually submit them for review.  That being said, one can ensure that low root disk space is not nearly as high a concern on the "Admin Todo List" and can retain vast amounts of log data for a deployment of any size: based on dictated industry practices or, sarcastically, for nostalgic purposes.

The principles of Syslog and logrotate.conf apply to the Syslog Aggregator as well, for what good is a Syslog server if not configured properly to ensure it does not fill itself up?  The requirements to instantiate a Syslog aggregation server, configure the forwarding of Syslog messages, and so forth are quite simple:

  1. Port 514 must be opened on the network
  2. The Syslog aggregation server must be reachable - either by being on the same network segment or not - by each XenServer host
  3. The Syslog aggregation server can be a virtual or physical machine; Windows or Linux-based with either a native Syslog daemon configured to receive external host messages or using a Windows-based Syslog solution offering the same "listening" capabilities.
  4. The Syslog aggregation server must have a static IP assigned to it
  5. The Syslog aggregation server should be monitored and tuned just as if it were Syslog/logrotate on a XenServer
  6. For support purposes, logs should be easily copied/compressed from the Syslog aggregation server - such as using WinSCP, scp, or other tools to copy log data for support's analysis

The quickest means to establish a simple virtual or physical Syslog aggregation server - in my opinion - is to reference the following two links.  These describe the installation of a base Debian-based system with specific intent to leverage Rsyslog for the recording of remote Syslog messages sent to it over UDP port 514 from one's XenServers:

http://www.aboutdebian.com/syslog.htm

http://www.howtoforge.com/centralized-rsyslog-server-monitoring

Alternatively, the following is an all-in-one guide (using Debian) with Syslog-NG:

http://www.binbert.com/blog/2010/04/syslog-server-installation-configuration-debian/
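
Whichever guide is followed, the heart of the receiving side is simply enabling UDP reception in the aggregator's Syslog daemon.  For a legacy rsyslog configuration (the syntax shipped with Debian releases of that era), the relevant lines in /etc/rsyslog.conf are:

# provides UDP syslog reception on port 514
$ModLoad imudp
$UDPServerRun 514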

Once the server is instantiated and ready to record remote Syslog messages, it is time to open XenCenter.  Right click on a pool master or stand-alone XenServer and select "Properties":


In the window that appears - in the lower left-hand corner - is an option for "Log Destination":

To the right, one should notice the default option selected is "Local".  From there, select the "Remote" option and enter the IP address (or FQDN) of the remote Syslog aggregate server as follows:

Finally, select "OK" and the stand-alone XenServer (or pool) will update its Syslog configuration, or more specifically, /var/lib/syslog.conf.  The reason for this is so Elastic Syslog can take over the normal duties of Syslog: forwarding messages to the Syslog aggregator accordingly.

For example, once configured, the local /var/log/kern.log file will state:

Sep 18 03:20:27 bucketbox kernel: Kernel logging (proc) stopped.
Sep 18 03:20:27 bucketbox kernel: Kernel log daemon terminating.
Sep 18 03:20:28 bucketbox exiting on signal 15

Certain logs will still continue to be recorded on the host, so it may be desirable to edit /var/lib/syslog.conf and comment out lines where a "-/var/log/some_filename" is specified, as lines with "@x.x.x.x" dictate forwarding to the Syslog aggregator.  As an example, I have marked the lines in bold to show where comments should be added to prevent further logging to the local disk:

# Save boot messages also to boot.log
local7.*             @10.0.0.1
# local7.*         /var/log/boot.log

# Xapi rbac audit log echoes to syslog local6
local6.*             @10.0.0.1
# local6.*         -/var/log/audit.log

# Xapi, xenopsd echo to syslog local5
local5.*             @10.0.0.1
# local5.*         -/var/log/xensource.log

After one - The Administrator - has decided what logs to keep and what logs to forward, Elastic Syslog can be restarted so the changes take effect by executing:

/etc/init.d/syslog restart

Since Elastic Syslog - a part of XenServer - is being utilized, the init script will ensure that Elastic Syslog is bounced and that it is responsible for handling Syslog forwarding, etc.

 

So, with this - I hope you find it useful and as always: feedback and comments are welcomed!

 

--jkbs | @xenfomation

 

 

 


XenServer Root Disk Maintenance

The Basis for a Problem

UPDATE 21-MAR-2015: Thanks to feedback from our community, I have added key notes and additional information to this article.

For all that it does, XenServer has a tiny installation footprint: 1.2 GB (roughly).  That is the modern-day equivalent of a 1.44MB floppy disk, really.  While the installation footprint is tiny, well, so is the "root/boot" partition that the XenServer installer creates: 4GB in size - no more, no less, and don't alter it!

The same is also true - during the install process - for the secondary partition that XenServer uses for upgrades and backups:

The point is that this amount of space does not facilitate much room for log retention, patch files, and other content.  As such, it is highly important to tune, monitor, and perform clean-up operations on a periodic basis.  Without attention over time all hotfix files, syslog files, temporary log files, and other forms of data can accumulate until the point with which the root disk will become full.

UPDATE: If you are wondering where the swap partition is, wonder no more.  For XenServer, swap is file-based and is instantiated during the boot process of XenServer.  As for the 4GB partitions, never alter their size; upgrades, etc. will re-align the partitions to match upstream XenServer release specifications.

One does not want a XenServer (or any server for that matter) to have a full root disk, as this will lead to a full stop of processes as well as virtualization, for the full disk will go "read only".  Common symptoms are:

  • VMs appear to be running, but one cannot manage a XenServer host with XenCenter
  • One can ping the XenServer host, but cannot SSH into it
  • If one can SSH into the box, one cannot write or create files: "read only file system" is reported
  • xsconsole can be used, but it returns errors when "actions" are selected

So, while there is a basis for a problem, the following article offers the basis for a solution (with emphasis on regular administration).

Monitoring the Root Disk

Shifting into the first person, I am often asked how I monitor my XenServer root disks.  In short, I utilize tools that are built into XenServer along with my own "Administrative Scripts".  The most basic way to see how much space is available on a XenServer's root disk is to execute the following:

df -h

This command will show you "disk file systems" and the "-h" means "human readable", ie Gigs, Megs, etc.  The output should resemble the following and I have made the line we care about in bold font:

Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             4.0G  1.9G  1.9G  51% /
none                  299M   28K  299M   1% /dev/shm
/opt/xensource/packages/iso/XenCenter.iso
                       56M   56M     0 100% /var/xen/xc-install

A more "get to the point" way is to run:

df -h | grep "/$" | head -n 1

Which produces the line we are concerned with:

/dev/sda1             4.0G  1.9G  1.9G  51% /

The end result is that we know 51% of the root partition is used.  Not bad, really.  Still, I am a huge fan of automation and will now discuss a simple way that this task can be ran - automatically - for each of your XenServers.

What I am providing is essentially a simple BASH script that checks a XenServer's local disk.  If the local disk use exceeds a threshold (which you can change), it will send an alert to XenCenter so the tactics described further in this document can be employed for the assurance of as much free space as possible.

Using nano or VI, create a file in the /root/ (root's home) directory called "diskmonitor" and paste in the following content:

#!/bin/bash
# Quick And Dirty Disk Monitoring Utility
# Get this host's UUID
thisUUID=`xe host-list name-label=$HOSTNAME params=uuid --minimal`
# Threshold of disk usage to report on
threshold=75    # an example of how much disk can be used before alerting
# Get disk usage
diskUsage=`df -h | grep "/$" | head -n 1 | awk {' print $5 '} | sed -n -e "s/%//p"`
# Check
if [ $diskUsage -gt $threshold ]; then
     xe message-create host-uuid=$thisUUID name="ROOT DISK USAGE" body="Disk usage has reached ${diskUsage}% on $HOSTNAME!" priority="1"
fi

After saving this file be sure to make it executable:

chmod +x /root/diskmonitor

The "#!/bin/bash" at the start of this script now becomes imperative as it tells the user space (when called upon) to use the BASH interpreter.

UPDATE: To execute this script manually, one can execute the following command if in the same directory as this script:

./diskmonitor

This convention is used so that scripts can be executed just as if they were a binary/compiled piece of code.  If the "./" prefix is an annoyance, move /root/diskmonitor to /sbin/ -- this will ensure that one can execute diskmonitor without the "dot forward-slash" prefix while in other directories:

mv /root/diskmonitor /sbin/
# Now you should be able to execute diskmonitor from anywhere
diskmonitor

If you move the diskmonitor script make note of where you placed it as this directory will be needed for the cron entry.

For automation of the diskmonitor script, one can now leverage cron: adding an entry to root's "crontab" and specifying a recurring time at which diskmonitor should be executed (behind the scenes).

The following is a basic outline of how to leverage cron so that diskmonitor is executed four times per day.  If you are looking for more information regarding cron, what it does, and how to configure it for other automation-based tasks, then visit http://www.thegeekstuff.com/2009/06/15-practical-crontab-examples/ for more detailed examples and explanations.

1.  From the XenServer host command-line execute the following to add an entry to crontab for root:

crontab -e

2.  This will open root's crontab in VI or nano (text editors) where one will want to add one of the following lines based on where diskmonitor has been moved to or if it is still located in the /root/ directory:

# If diskmonitor is still located in /root/
00 00,06,12,18 * * * /root/diskmonitor
# OR if it has been moved to the /sbin/ directory
00 00,06,12,18 * * * diskmonitor

3.  After saving this, we now have a cron entry that runs diskmonitor at midnight, 06:00, noon, and 18:00 (military time) every day of every week of every month.  If the script detects that the root drive on a XenServer is > 75% "used" (you can adjust this), it will send an alert to XenCenter, where one can further leverage built-in tools for email notifications, etc. (see the example below).
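As an aside, alerts raised this way can also be reviewed from the host's command line; a minimal sketch, filtering on the message name used in the script above:

# List any alerts diskmonitor has raised on this host
xe message-list name="ROOT DISK USAGE"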

The following is an example of the output of diskmonitor; it is apropos to note that this test was done using a threshold of 50% -- yes, in Creedence there is a bit more free space!  Kudos to Dev!

One can expand upon the script (and XenCenter), but let's focus on a few areas where root disk space can be slowly consumed.

Removing Old Hotfixes

After applying one or more hotfixes to XenServer, copies of each decompressed hotfix are stored in /var/patch.  The main reason for this - in short - is that in pooled environments, hotfixes are distributed from the pool master to each slave host, eliminating the need to repeatedly download the same hotfix for every host in a pool.

The more complex reason is consistency: if a host becomes the master of the pool, it must reflect the same content and configuration as its predecessor, and this includes hotfixes.

The following is an example of what the /var/patch/ directory can look like after the application of one or more hotfixes:
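A representative sketch of such a listing follows (the UUID-named files here are hypothetical; yours will differ):

ls /var/patch
# Hypothetical output: UUID-named patch files alongside the applied/ sub-directory
#   19e2a1c2-7e44-4efc-a0a5-6d31f1d1f3a9  4d2caa35-4771-ea0e-0876-080772a3c4a7  applied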

Notice the /applied sub-directory?  We never want to remove that. 

UPDATE 21-MAR-2015:  Thanks to Tim, the Community Comments, and my Senior Lead for validating I was not "crazy" in my findings before composing this article: "xe patch-destroy" did not do its job as many commented.  It has been resolved post 6.2, so I thank everyone - especially Dev - for addressing this.

APPROPRIATE REMOVAL:

To appropriately remove these patch files, one should utilize the "xe patch-destroy" command.  While I do not have a "clever" command-line example to take care of all files at once, the following should be run against each file that has a UUID-based naming convention:

cd /var/patch/

xe patch-destroy uuid=<FILENAME, SUCH AS 4d2caa35-4771-ea0e-0876-080772a3c4a7>
(repeat "xe patch-destroy uuid=" command for each file with the UUID convention)

While this is not optimum, especially to run per-host in a pool, it is the prescribed method and as I have a more automated/controlled solution, I will naturally document it.
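In the meantime, for those comfortable with a short loop, here is a minimal sketch that feeds each UUID-named file in /var/patch to "xe patch-destroy".  It assumes every such file corresponds to a patch record; any that do not will simply produce an error.  Test in a lab first:

cd /var/patch/
# Iterate over UUID-named files only (the glob skips the applied/ sub-directory)
for f in *-*-*-*-*; do
    [ -f "$f" ] && xe patch-destroy uuid="$f"
done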

EMERGENCY SITUATIONS:

In the event that removal of other contents discussed in this article does not resolve a full root disk issue, the following can be used to remove these patch files.  However, it must be emphasized that a situation could arise wherein the lack of these files will require a re-download and install of said patches:

find /var/patch -maxdepth 1 | grep "[0-9]" | xargs rm -f

Finally, if you are in the middle of applying hotfixes do not perform the removal procedure (above) until all hosts are rebooted, fully patched, and verified as in working order.  This applies for pools - especially - where a missing patch file could throw off XenCenter's perspective of what hotfixes have yet to be installed and for which host.

The /tmp Directory

Plain and simple, the /tmp directory is truly meant for just that: holding temporary data.  Pre-Creedence, one can access a XenServer's command-line and execute the following to see a quantity of ".log" files:

cd /tmp
ls

As visualized (and over time), one can see an accumulation of many, many log files.  Individually these files are small, but collectively... they take up space.
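To quantify just how much space they consume, a quick sketch using du:

# Report the combined size of all .log files in /tmp (human readable)
du -ch /tmp/*.log | tail -n 1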

UPDATE 21-MAR-2015:  Again, thanks to everyone as these logs were always intended to be "removed" automatically once a Guest VM was started.  So, as of 6.5 and beyond -- this section is irrelevant!

For pre-6.5 installations, the accumulated log files can be removed as follows:

cd /tmp/
rm -f *.log

This will remove only ".log" files, so any driver ISO images stored in /tmp (or elsewhere) should be manually addressed.

Compressed Syslog Files

The last item is to remove all compressed Syslog files stored under /var/log.  These usually consume the most disk space and as such, I will be authoring an article shortly to explain how one can tune logrotate and even forward these messages to a Syslog aggregator.

UPDATE:  As a word of advice, we are only looking to clear "*.gz" (compressed/archived) log files.  Once these are deleted, they are gone.  Naturally, this means a server status report gathered for collection will lack historical information, so one may consider copying these off to another host (using scp or WinSCP) before following the next steps to remove them under a full root disk scenario.
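For instance, a minimal sketch of copying the archives off first (the destination host and path are hypothetical):

# Copy the compressed logs to another host before deleting them locally
scp /var/log/*.gz admin@archivehost:/backups/xenserver-logs/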

In the meantime, just as before, one can execute the following command to keep current syslog files intact but remove old, compressed log files:

cd /var/log/
rm -f *.gz

So For Now...

It is at this point one has a tool to know when a disk is approaching capacity, along with methods to clean up specific items.  These can be run in an automated or manual fashion; it is truly up to the admin's style of work.

Please be on the lookout for my next article involving Syslog forwarding, log rotation, and so forth, as this will help any size deployment of XenServer: especially where regulations for log retention are a strict requirement.

Feel free to post any questions, suggestions, or methods you may even use to ensure XenServer's root disk does not fill up.

 

--jkbs | @xenfomation

 

 


Debian 7.4 and 7.6 Guest VMs

"Four Debians, Two XenServers"

The purpose of this article is to discuss my own success with virtualizing "four" releases of Debian (7.4/7.6; 32-bit/64-bit) in my own test labs.

For more information about Debian, head on over to Debian.org to download the 7.6 ISO of your choice (I used both the full DVD install ISO as well as the net install ISO).

Note: If you are utilizing the Debian 7.4 net install ISO, the OS will be updated to 7.6 during the install process.  This is just a "heads up" in the event you are keen to stick with a vanilla Debian 7.4 VM for test purposes; if so, you will need to download the full install DVD for the 7.4 32-bit/64-bit release instead of the net install ISO.

Getting A New VM Started

Once I had the install media of my choice, I copied it to my ISO repository that both XenServer 6.2 and Creedence utilize in my test environment.

From XenCenter (distributed with Creedence Alpha 4) I selected "New VM".

In both 6.2 and Creedence I chose the "Debian 7.0 (Wheezy) 64-bit" VM template:

I then continued through the "New VM" wizard: specifying processors, RAM, networking, and so forth.  On the last step, I made sure as to select "Start the new VM Automatically" before I pressed "Create Now":

Within a few moments, this familiar view appeared in the console:

I installed a minimum instance of both: SSH and BASE system.  I also used guided partitioning just because I was in quite a hurry.

After championing my way through the installer, as expected, Debian 7.4 and 7.6 both prompted that I reboot:

Since this is a PV install, I have access to the Shutdown, Reboot, and Suspend buttons, but I was curious about the tools, as memory consumption, etc. was not present under each guest's "Performance" tab:

... and the "Network" tab stated "Unknown":

Before I logged in as root - in both XenServer 6.2 and Creedence Alpha 4 - I mounted the xs-tools.iso.  Once in with root access, I executed the following commands to install xs-tools for these guest VMs:


mkdir iso
mount /dev/xvdd iso/
cd iso/Linux/
./install.sh

The output was exactly the same in both VMs and naturally I selected "Y" to install the guest additions:

Detected `Debian GNU/Linux 7.6 (wheezy)' (debian version 7).

The following changes will be made to this Virtual Machine:
  * update arp_notify sysctl.conf.
  * packages to be installed/upgraded:
    - xe-guest-utilities_6.2.0-1137_amd64.deb

Continue? [y/n] y

Selecting previously unselected package xe-guest-utilities.
(Reading database ... 24502 files and directories currently installed.)
Unpacking xe-guest-utilities (from .../xe-guest-utilities_6.2.0-1137_amd64.deb) ...
Setting up xe-guest-utilities (6.2.0-1137) ...
Mounting xenfs on /proc/xen: OK
Detecting Linux distribution version: OK
Starting xe daemon:  OK

You should now reboot this Virtual Machine.

Following the installer's instructions, I rebooted the guest VMs accordingly.
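As a hedged aside, once a guest is back up one can also confirm from the host's command line that the agent is reporting data (the VM name-label below is hypothetical):

# The networks field should now list the guest's IP addresses
xe vm-list name-label=debian76-test params=name-label,networks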

Creedence Alpha 4 Results

As soon as the reboot was complete I was able to see each guest VM's memory performance as well as networking for both IPv4 and IPv6:

XenServer 6.2

With XenServer 6.2, I found that after installing the guest agent - under the "Network" tab - there still was no IPv4 information for my 64-bit Debian 7.4 and 7.6 guest VMs.  This does not apply to 32-Bit Debian 7.4 and 7.6 guest VMs as the tools installed just fine.

Then I thought about it and realized that disabling IPv6 might help; presto - the network information appeared for my IPv4 address.  To accomplish this, I edited the following file (so as to avoid adjusting GRUB parameters):

/etc/sysctl.conf

And at the bottom of this file I added:

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv6.conf.eth0.disable_ipv6 = 1
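As an aside, on most Linux distributions these settings can also be applied without a reboot (though, as noted next, I rebooted anyway):

# Re-read /etc/sysctl.conf and apply the settings immediately
sysctl -p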

After saving my changes, I rebooted and immediately was able to see my memory usage:

However... I still could not see my IPv4 address under the "Network" tab until I noticed the device ID of the network interface -- it was Device 1 (not 0):

I deleted this interface and re-added a new one from XenCenter.  Instantly, I could see my IPv4 address and the device ID for the network interface was back to 0:
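For reference, the same delete-and-re-add can be performed from the host's command line; a minimal sketch with hypothetical UUIDs and VM name (test in a lab first):

# Find the VIF attached to the VM and note its uuid plus the network-uuid
xe vif-list vm-name-label=debian76-test
# Detach and destroy the old interface, then re-create it as device 0
xe vif-unplug uuid=<OLD VIF UUID>
xe vif-destroy uuid=<OLD VIF UUID>
xe vif-create vm-uuid=<VM UUID> network-uuid=<NETWORK UUID> device=0 mac=random
xe vif-plug uuid=<NEW VIF UUID>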

And yes, I tested rebooting -- the address is still shown and memory usage is still measured.  In addition, I did try removing the flags that disable IPv6, but that resulted in seeing "UNKNOWN" - again - for 64-Bit Debian 7.4 and 7.6 guests.  So, in XenServer 6.2 I have kept my changes in /etc/sysctl.conf to ensure my 64-Bit Debian 7.4 and 7.6 guests work just fine with XenTools' Guest Additions for Linux.

So, that's that -- something to experiment and test with: Debian 7.4 or 7.6 32-bit/64-bit in a XenServer 6.2 or Creedence Alpha test environment!

 

--jkbs

@xenfomation


Running Scientific Linux Guest VMs on XenServer

Running Scientific Linux Guest VMs on XenServer

What is Scientific Linux?

In short, Scientific Linux is a customized RedHat/CentOS Linux distribution provided by CERN and Fermilab: popular in educational institutions as well as laboratory environments.  More can be read about Scientific Linux here: https://www.scientificlinux.org/

From my own long-term testing - before XenServer 6.2 and our pre-release/Alpha, Creedence - I have run both Scientific Linux 5 and Scientific Linux 6 without issues.  This article's scope is to show how one can install Scientific Linux and, more specifically, ensure the XenTools Guest Additions for Linux are installed, as these do not require any form of "Xen-ified" kernel.

XenServer and Creedence

The following are my own recommendations to run Scientific Linux in XenServer:

  1. I recommend using XenServer 6.1 through any of the Alpha releases due to improvements with XenTools
  2. I recommend using Scientific Linux 5 or Scientific Linux 6
  3. The XenServer VM template one will need to use will be either CentOS 5 or CentOS 6: whether 32 or 64 bit depends on the release of Scientific Linux you will be using

One will also require a URL to install Scientific Linux from its repository, found at http://ftp.scientificlinux.org/linux/scientific/

The following are URLs I recommend for use during the Guest Installation process (discussed later):

Scientific Linux 5 or 6 Guest VM Installation

With XenCenter, the process of installing Scientific Linux 5.x or Scientific Linux 6 uses the same principles.  You need to create a new VM, select the appropriate CentOS template, and define the VM parameters for disk, RAM, and networking:

1.  In XenCenter, select "New VM":

2.  When prompted for the new VM Template, select the appropriate CentOS-based template (5 or 6, 32 or 64 bit):

3.  Follow the wizard to add processors, disk, and networking information

4.  From the console, follow the steps as to install Scientific Linux 5 or 6 based on your preferences.

5.  After rebooting, login as root and execute the following command within the Guest VM:

yum update

6.  Once yum has applied any updates, reboot the Scientific Linux 5 or 6 Guest VM by executing the following within the Guest VM:

reboot

7.  With the Guest VM back up, login as root and mount the xs-tools.iso within XenCenter:

8.  From the command line, execute the following commands to mount xs-tools.iso within the Guest VM as well as to run the install.sh utility:

cd ~
mkdir tools
mount /dev/xvdd tools/
cd tools/Linux/
./install.sh

9.  With Scientific Linux 5 you will be prompted to install the XenTools Guest Additions - select yes and, when complete, reboot the VM:

reboot

10.  With Scientific Linux 6 you will notice the following output:

Fatal Error: Failed to determine Linux distribution and version.

11.  This is not truly a fatal error, but an error induced because the distro build and revision are not presented as expected.  This means you will need to manually install the XenTools Guest Additions by executing the following commands and rebooting (see the note after these commands for locating the exact file names):

rpm -ivh xe-guest-utilities-xenstore-<version number here>.x86_64.rpm
rpm -ivh xe-guest-utilities-<version number here>.x86_64.rpm
reboot
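If the exact version numbers are unclear, they can be listed from the mounted tools directory first; a quick sketch (the version shown is hypothetical):

cd ~/tools/Linux/
ls xe-guest-utilities*.rpm
# e.g. xe-guest-utilities-xenstore-6.2.0-1137.x86_64.rpm
#      xe-guest-utilities-6.2.0-1137.x86_64.rpm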

Finally, after the last reboot (post guest-addition install), one will notice from XenCenter that the network address, stats, and so forth are available (including the ability to migrate the VM):

 

I hope this article helps any of you out there and feedback is always welcomed!

--jkbs

@xenfomation

 


Resetting Lost Root Password in XenServer 6.2

The Situation

Bad things can happen... badly.  In this case the root password to manage a XenServer (version 6.2) was... lost.

Physical and remote logins to the XenServer 6.2 host failed authentication, naturally, and XenCenter had been disconnected from the host: requiring an administrator to provide these precious credentials, but in vain.

An Alternate Situation

Had XenCenter been left open (offering command line access to the XenServer host in question), the following command could have been used from the XenServer's command line to initiate a root password reset:

passwd

Once the root user's password has been changed, the connection from XenCenter to the XenServer host will need to be reestablished: using the root username and "new" password.

Once connected, the remainder of this article becomes irrelevant; otherwise, you may very well need to...

Boot into Linux Single User Mode

Be it forgetfulness, change of guard, another administrator changing the password, or simply a typo in company documentation, the core problem being addressed via this post is that one cannot connect to XenServer 6.2 as the root password is... lost or forgotten.

As a secondary problem, one has lost patience and has obtained physical or iLO/iDRAC access to the XenServer in question, but still the root password is not accepted:

 

The Shortest Solution: Breaking The Law of Physical Security

I am not encouraging hacking; rather, physical interaction with the XenServer in question and altering the boot into "linux single user mode" is the last resort for this problem.  To do this, one will need to have/understand:

  • Physical Access, iLO, iDRAC, etc
  • A reboot of the XenServer in question will be required

With disclaimers aside, I now highly recommend reading and reviewing the steps outlined below before going through the motions.

Some steps are time sensitive, so being prepared is merely a part of the overall plan.

  1. After gaining physical or iLO/iDRAC access to the XenServer in question, reboot it!  With iLO and iDRAC, there are options to hard or soft reset a system and either option is fine.
  2. Burn the following image into your mind: after the server reboots and runs through hardware BIOS/POST tests, you will see the following for 5 seconds (or so):
  3. Immediately grab the keyboard and enter the following:
    menu.c32 (press enter)
  4. The menu.c32 boot prompt will appear and again, you will only have 5 or so seconds to select the "XE" entry and press tab to edit boot options:
  5. Now, at the bottom of the screen one will see the boot entry information.  Don't worry, you have time so make sure it is similar to the following:
  6. Near the end of the line, one should see "console=tty0 quiet vga=785 splash quiet": replace "quiet vga=785 splash" with "linux single" (see the sketch after this list).  More specifically - without the quotes - such as:
    linux single
  7. With that completed, simply press enter to boot into Linux's single user mode.  You should eventually be dropped into a command line prompt (as illustrated below):
  8. Finally, we can reset the root password to something one can remember by executing the Linux command:
    passwd

  9. When prompted, enter the new root user password: you will be asked to verify it and upon success you should see the following:
  10. Now, enter the following command to reboot the XenServer in question:
    reboot
  11. Obviously, this will reboot the XenServer as illustrated below:
  12. Let the system fully reboot and present the xsconsole.  To verify that the new password has taken effect, select "Local Command Shell" from xsconsole.  This will require you to authenticate as the root user:
  13. If successful you will be dropped to the local command shell and this also means you can reconnect and manage this XenServer via XenCenter with the new root password!
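To make step 6 concrete, here is a hedged before-and-after sketch of the kernel boot line edit (your exact parameters may differ):

# Before (example):
#   ... console=tty0 quiet vga=785 splash quiet
# After replacing "quiet vga=785 splash" with "linux single":
#   ... console=tty0 linux single quiet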

About XenServer

XenServer is the leading open source virtualization platform, powered by the Xen Project hypervisor and the XAPI toolstack. It is used in the world's largest clouds and enterprises.
 
Technical support for XenServer is available from Citrix.