Virtualization Blog

Discussions and observations on virtualization.

Configuring XenApp to use two NVIDIA GRID engines

SUMMARY

The configuration of a XenApp virtual machine (VM) hosted on XenServer with two concurrent graphics processing engines in passthrough mode is shown to work reliably, giving a single XenApp VM more flexibility than spreading access to the engines over two separate XenApp VMs. This can in turn save operating system licensing costs and, ostensibly, could be extended to incorporate additional GPU engines.

INTRODUCTION

A XenApp virtual machine (VM) that supports two or more concurrent graphics processing units (GPUs) has a number of advantages over running separate VM instances, each with its own GPU engine. For one, if users happen to be unevenly relegated to particular XenApp instances, some XenApp VMs may idle while others are overloaded, to the detriment of users associated with the busy instances. It is also simpler to add capacity to such a VM than to build and license yet another Windows Server VM. This study made use of an NVIDIA GRID K2 (driver release 340.66), comprising two Kepler GK104 engines and 8 GB of GDDR5 RAM (4 GB per GPU). It is hosted in a Dell R720 with dual Intel Xeon E5-2680 v2 CPUs (40 VCPUs total, hyperthreaded) running XenServer 6.2 SP1, which hosts XenApp 7.6 as a VM with 16 VCPUs and 16 GB of memory on Windows Server 2012 R2 Datacenter.

PROCEDURE

It is important to note that these steps constitute changes that are not officially supported by Citrix or NVIDIA and are to be regarded as purely experimental at this stage.

Registry changes to XenApp were made according to the instructions provided in the Citrix Product Documentation.

On the XenServer, first list devices and look for GRID instances:
# lspci|grep -i nvid
44:00.0 VGA compatible controller: NVIDIA Corporation GK104GL [GRID K2] (rev a1)
45:00.0 VGA compatible controller: NVIDIA Corporation GK104GL [GRID K2] (rev a1)

Next, get the UUID of the VM:
# xe vm-list
uuid ( RO)           : 0c8a22cf-461f-0030-44df-2e56e9ac00a4
     name-label ( RW): TST-Win7-vmtst1
    power-state ( RO): running
uuid ( RO)           : 934c889e-ebe9-b85f-175c-9aab0628667c
     name-label ( RW): DEV-xapp
    power-state ( RO): running

Get the address of the existing GPU engine, if one is currently associated:
# xe vm-param-get param-name=other-config uuid=934c889e-ebe9-b85f-175c-9aab0628667c
vgpu_pci: 0/0000:44:00.0; pci: 0/0000:44:0.0; mac_seed: d229f84d-73cc-e5a5-d105-f5a3e87b82b7; install-methods: cdrom; base_template_name: Windows Server 2012 (64-bit)
(Note: ignore any vgpu_pci parameters; they are irrelevant to this process but may be left over from earlier procedures and experiments.)

Dissociate the GPU via XenCenter or via the CLI by setting the GPU type to “none”.
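If you prefer the CLI for this step, a minimal sketch (assuming the VM UUID from above; the vGPU UUID on your system will differ) is to list any vGPU object attached to the VM and then destroy it:

# xe vgpu-list vm-uuid=934c889e-ebe9-b85f-175c-9aab0628667c
# xe vgpu-destroy uuid=<vgpu-uuid-from-previous-command>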
Then, add both GPU engines following the recommendations in assigning multiple GPUs to a VM in XenServer using the other-config:pci parameter:
# xe vm-param-set uuid=934c889e-ebe9-b85f-175c-9aab0628667c
   other-config:pci=0/0000:44:0.0,0/0000:45:0.0
In other words, do not use the vgpu_pci parameter at all.
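If a stale vgpu_pci entry needs to be cleaned out of other-config, the analogous param-remove call should do it (a sketch, reusing the same VM UUID):

# xe vm-param-remove uuid=934c889e-ebe9-b85f-175c-9aab0628667c param-name=other-config param-key=vgpu_pci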

Check if the new parameters took hold:
# xe vm-param-get param-name=other-config uuid=934c889e-ebe9-b85f-175c-9aab0628667c params=all
vgpu_pci: 0/0000:44:00.0; pci: 0/0000:44:0.0,0/0000:45:0.0; mac_seed: d229f84d-73cc-e5a5-d105-f5a3e87b82b7; install-methods: cdrom; base_template_name: Windows Server 2012 (64-bit)
Next, turn GPU passthrough back on for the VM in XenCenter or via the CLI and start up the VM.
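Via the CLI, starting the VM is a single standard command (same UUID as above):

# xe vm-start uuid=934c889e-ebe9-b85f-175c-9aab0628667c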

On the XenServer you should now see no GPUs available:
# nvidia-smi
Failed to initialize NVML: Unknown Error
This is good, as both K2 engines now have been allocated to the XenApp server.
On the XenServer you can also run "xn -v pci-list 934c889e-ebe9-b85f-175c-9aab0628667c" (the UUID of the VM) and should see the same two PCI devices allocated:
# xn -v pci-list 934c889e-ebe9-b85f-175c-9aab0628667c
id         pos bdf
0000:44:00.0 2   0000:44:00.0
0000:45:00.0 1   0000:45:00.0
More information can be gleaned from the “xn diagnostics” command.

Next, log onto the XenApp VM and check settings using nvidia-smi.exe. The output will resemble that of the image in Figure 1.

 

Figure 1. Output from the nvidia-smi utility, showing the allocation of both K2 engines.


Note the output correctly shows that 4096 MiB of memory is allocated for each of the two engines in the K2, totaling its full capacity of 8192 MiB. XenCenter will still show only one GPU engine allocated (see Figure 2), since it is not aware that both are allocated to the XenApp VM and currently has no way of making that distinction.

 

Figure 2. XenCenter GPU allocation, showing just one engine (all XenServer is currently capable of displaying).

 

So, how can you tell if it is really using both GRID engines? If you run the nvidia-smi.exe program on the XenApp VM itself, you will see it has two GPUs configured in passthrough mode (see the earlier screenshot in Figure 1). Depending on how apps are launched, you will see one or the other or both of them active. As a test, we ran two concurrent Unigine "Heaven" benchmark instances: both came out with metrics within 1% of each other, and within 1% of a run with just a single instance, and both engines showed as being active. Displayed in Figure 3 is a sample screenshot of the Unigine "Heaven" benchmark running with one active instance; note that it sees both K2 engines present, even though the process is making use of just one.


Figure 3. A sample Unigine "Heaven" benchmark frame. Note the two sets of K2 engine metrics displayed in the upper right corner.


It is evident from the display in the upper right-hand corner that one engine has allocated memory and is working, as evidenced by the correspondingly higher temperature reading and memory frequency. The result of a benchmark using OpenGL at a 1024x768 pixel resolution is shown in Figure 4. Note again the difference between what is shown for the two engines, in particular the memory and temperature parameters.

Figure 4. Outcome of the benchmark. Note the higher memory and temperature on the second K2 engine.

 

When another instance is running concurrently, you see its memory and temperature also rise accordingly in addition to the load evident on the first engine, as well as activity on both engines in the output from the nvidia-smi.exe utility (Figure 5).


Figure 5. Two simultaneous benchmarks running, using both GRID K2 engines, and the nvidia-smi output.

You can also see with two instances running concurrently how the load is affected. Note in the performance graphs from XenCenter shown in Figure 6 how one copy of the “Heaven” benchmark impacts the server and then about halfway across the graphs, a second instance is launched.

Figure 6. XenCenter performance metrics of first one, then a second concurrent Unigine "Heaven" benchmark.


CONCLUSIONS

The combination of two GRID K2 engines associated with a single, hefty XenApp VM works well for providing adequate capacity to support a number of concurrent users in GPU passthrough mode without the need to host additional XenApp instances. As there is a fair amount of leeway in the allocation of CPUs and memory to a virtualized instance under XenServer (up to 16 VCPUs and 128 GB of memory under XenServer 6.2 when these tests were run), one XenApp VM should be able to handle a reasonably large number of tasks. As many as six concurrent sessions of this high-demand benchmark at 800x600 resolution with high-quality settings have been tested without saturating the GPUs. A more typical application, like Google Earth, consumes around 3 to 5% of the cycles of a GRID K2 engine per instance during active use, depending on the activity and size of the window, so the load is fairly minimal. In other words, twenty or more sessions could be handled by each engine, or potentially 40 or more for the entire GRID K2 with a single XenApp VM, provided of course that the XenApp VM's memory and CPU resources are not overly taxed.

XenServer 6.2 already supports as many as eight physical GPUs per host, so as servers expand, one could envision having even more engines available to associate with a particular VM. Under some circumstances, passthrough mode affords more flexibility and makes better use of resources than creating specific vGPU assignments. Windows Server 2012 R2 Datacenter supports up to 64 sockets and 4 TB of memory, and hence should be able to support a significantly larger number of associated GPUs; XenServer 6.2 SP1 imposes a limit of 16 VCPUs and 128 GB of virtual memory per VM. XenServer 6.5, officially released in January 2015, supports up to four GRID K2 cards in some physical servers and up to 192 GB of RAM per VM for some guest operating systems, as does the newer release documented in the XenServer 6.5 SP1 User's Guide, so there is a lot of potential processing capacity available. Hence, a very large XenApp VM could be created that delivers a lot of raw power with substantial Microsoft server licensing savings. The performance meter shown above clearly indicates that VCPUs are the primary limiting factor in this XenApp configuration: with just two concurrent "Heaven" sessions running, about a fourth of the available CPU capacity is consumed, compared to less than 3 GB of RAM, which is only a small additional amount of memory above that allocated by the first session.

These same tests were run after upgrading to XenServer 6.5 and with newer versions of the NVIDIA GRID drivers and continue to work as before. At various times, this configuration was run for many weeks on end with no stability issues or errors detected during the entire time.

ACKNOWLEDGEMENTS

I would like to thank my co-worker at NAU, Timothy Cochran, for assistance with the configuration of the Windows VMs used in this study. I am also indebted to Rachel Berry, Product Manager of HDX Graphics at Citrix and her team, as well as Thomas Poppelgaard and also Jason Southern of the NVIDIA Corporation for a number of stimulating discussions. Finally, I would like to greatly thank Will Wade of NVIDIA for making available the GRID K2 used in this study.


XenServer Pre-Release Programme

A very big thank you to everyone who participated in the Creedence Alpha/Beta programme!
The programme was very successful and raised a total of 177 issues, of which 138 were resolved during the Alpha/Beta period.  We are reviewing how the pre-release process can be improved and streamlined going forward. 

The Creedence Alpha/Beta programme has now come to an end with the focus of nightly snapshots moving on to the next version of XenServer.   

The Creedence Alpha/Beta source code remains available and can be accessed here: 
http://xenserver.org/component/content/article/24-product/creedence/143-xs-2014-development-snapshots.html

Creedence Alpha/Beta bugs may still be reported on https://bugs.xenserver.org

Work is already progressing on the next version of XenServer and the nightly snapshots are available here:
http://xenserver.org/component/content/article/2-uncategorised/115-development-snapshots.html

As this work is new and still expected to be unstable, please do not raise any Creedence Alpha/Beta bugs against it.


Basic Network Testing with IPERF

Purpose

I am often asked how one can perform simple network testing within, outside, and into XenServer.  This is a great question as – by itself – it is simple enough to answer.  However, depending on what one desires out of “network testing” the answer can quickly become more complex.

As such, I have decided to answer this question using a long-standing, free utility called IPERF (well, IPERF2). It is a rather simple, straightforward, but powerful utility I have used over many, many years. Links to IPERF will be provided - along with documentation on its use - as it will serve in this guide as a way to:


- Test bandwidth between two or more points

- Determine bottlenecks

- Assist with black box testing or “what happens if” scenarios

- Use a tool that runs on both Linux and Windows

- And more…

IPERF: A Visual Breakdown

IPERF has to be installed on at least two separate end points. One point acts as a server/receiver and the other acts as a client/transmitter. This way, network testing can be done on anything from a simple subnet to a complex, routed network: end-to-end, using TCP or UDP generated traffic:

The visual shows an IPERF client transmitting data over IPv4 to an IPERF receiver. Packets traverse the network - from wireless routers and through firewalls - from the client side to the server side over port 5001.

IPERF and XenServer

The key to network testing is in remembering that any device which is connected to a network infrastructure – Virtual or Physical – is a node, host, target, end point, or just simply … a networked device.

With regards to virtual machines, XenServer obviously supports Windows and Linux operating systems.  IPERF can be used to test virtual-to-virtual networking as well as virtual-to-physical networking.  If we stack virtual machines in a box to our left and stack physical machines in a box to our right – despite a common subnet or routed network – we can quickly see the permutations of how "Virtual and Physical Network Testing" can be achieved with IPERF transmitting data from one point to another:

And if one wanted, they could just as easily test networking for this:

Requirements

To illustrate a basic server/client model with IPERF, the following will be required:

- A Windows 7 VM that will act as an IPERF client

- A CentOS 5.x VM that will act as a receiver.

- IPERF2 (the latest version of IPERF, "IPERF3", can be found at https://github.com/esnet/iperf or, more specifically, http://downloads.es.net/pub/iperf/)

The reason for using IPERF2 is quite simple: portability and compatibility across two of the most popular operating systems that I know to be virtualized. In addition, the same steps for installing IPERF2 on these hosts can be carried out on physical systems running similar operating systems as well.

The remainder of this article - regarding IPERF2 - will require use of the MS-DOS command line as well as the Linux shell (of choice). I will carefully explain all commands, so if you are “strictly a GUI” person, you should fit right in.

Disclaimer

When utilizing IPERF2, keep in mind that this is a traffic generator. While one can control the quantity and duration of traffic, it is still network traffic.

So, consider testing during non-peak hours or after hours so as not to interfere with production-based network activity.

Windows and IPERF

The Windows port of IPERF 2.0.5 requires Windows XP (or greater) and can be downloaded from:

http://sourceforge.net/p/iperf/patches/_discuss/thread/20d4a4b0/5c44/attachment/Iperf.zip

Within the .zip file you will find two directories. One is labeled DEBUG and the other is labeled RELEASE. Extract the Iperf.exe program to a directory you will remember, such as C:\iperf\

Now, accessing the command line (cmd.exe), navigate to C:\iperf\ and execute:

iperf

Without any arguments, iperf prints its usage output, indicating that it expects a "-s" (server) or "-c" (client) command-line option.

Linux and IPERF

If you have additional repos already configured for CentOS, you can simply execute (as root):

yum install iperf

If that fails, one will need to download the Fedora/RedHat EPEL-Release RPM file for the version of CentOS being used.  To do this (as root), execute:

wget  http://dl.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm
rpm -Uvh epel-release-5-4.noarch.rpm

 

*** Note that the above EPEL-Release RPM file is just an example (a working one) ***

 

Once epel-release-5-4.noarch.rpm is installed, execute:

yum install iperf

And once complete, as root, execute iperf. One should see the same usage output as was displayed on Windows: IPERF2 expects a "-s" (server) or "-c" (client) command-line option with additional arguments.

IPERF Command-Line Arguments

On either Windows or Linux, a complete list of options for IPERF2 can be listed by executing:

iperf --help

A few good resources of examples to use IPERF2 options for the server or client can be referenced at:

http://www.slashroot.in/iperf-how-test-network-speedperformancebandwidth

http://samkear.com/networking/iperf-commands-network-troubleshooting

http://www.techrepublic.com/blog/data-center/handy-iperf-commands-for-quick-network-testing/

For now, we will focus on the options needed for our server and client:

-f, --format    [kmKM]   format to report: Kbits, Mbits, KBytes, MBytes
-m, --print_mss          print TCP maximum segment size (MTU - TCP/IP header)
-i, --interval  #        seconds between periodic bandwidth reports
-s, --server             run in server mode
-c, --client    <host>   run in client mode, connecting to <host>
-t, --time      #        time in seconds to transmit for (default 10 secs)

Lastly, there is a TCP/IP Window setting.  This goes beyond the scope of this document as it relates to the TCP frame/windowing of data.  I highly recommend reading either of the two following links – especially for Linux – as there has always been some debate as what is “best to be used”:

https://kb.doit.wisc.edu/wiscnet/page.php?id=11779

http://kb.pert.geant.net/PERTKB/IperfTool

Running An IPERF Test

So, we have IPERF2 installed on Windows 7 and on CentOS 5.10. Before performing any testing, ensure that any antivirus software does not block iperf.exe from running and that port 5001 is open across the network.

Again, another port can be specified, but the default port IPERF2 uses for both client and server is 5001.
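For example, to use port 5002 instead, both sides would take the -p option (a sketch; the port number is an arbitrary choice):

iperf -s -p 5002           (server/receiver side)
iperf -c x.x.x.48 -p 5002  (client/transmitter side)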

Server/Receiver Side

The Server/Receiver side will be on the CentOS VM.

Following the commands above, we want to execute the following to run IPERF2 as a server/receiver on our CentOS VM:

iperf -s -f M -m -i 10

The output should show:

————————————————————
Server listening on TCP port 5001
TCP window size: 0.08 MByte (default)
————————————————————

The TCP window size has been previously commented on and the server is now ready to accept connections (press Control+C or Control+Z to exit).

Client/Transmission Side

Let us now focus on the client side to start sending data from the Windows 7 VM to the CentOS VM.

From Windows 7, the command line to start transmitting data for 30 seconds to our CentOS host (x.x.x.48) is:

iperf -c x.x.x.48 -t 30 -f M

Pressing enter, the traffic flow begins; once the 30 seconds are up, both the client side and the server side print a summary of the transfer.

And there we have it – a first successful test from a Windows 7 VM (located on one XenServer) to a CentOS 5.10 VM (located on another XenServer).

Understanding the Results

From either the client side or server side, results are shown by time and average.  The key item to look for from either side is:

0.0-30.0 sec  55828 MBytes  1861 MBytes/sec

Why? This shows the average over the course of 0.0 to 30.0 seconds in terms of total megabytes transmitted as well as average megabytes of data sent per second: 55828 MBytes / 30 seconds ≈ 1861 MBytes/sec. In addition, since the "-f M" argument was passed as a command-line option, the output is reported in megabytes accordingly.

In this particular case, we simply illustrated that from one VM to another VM, we transferred data at 1861 megabytes per second.

*** Note that this test was performed in a local lab with lower-end hardware than what you probably have! ***
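If you want to push the link harder than a single TCP stream allows, IPERF2 also supports parallel streams via the -P option; a 30-second, four-stream run against the same receiver would look like this (a sketch based on the client command above):

iperf -c x.x.x.48 -t 30 -f M -P 4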

--jkbs | @xenfomation

 


Increasing Ubuntu's Resolution

Maximizing Desktop Real-estate with Ubuntu

With the addition of Ubuntu (and the likes) to Creedence, you may have noticed that the default resolution is 1024x768. I certainly noticed it, and after much work on 6.2 and Creedence Beta, I have a quick solution for maximizing the screen resolution for you.

The thing to consider is that a virtual frame buffer is what is essentially being used. You can re-invent X configs all day, but the shortest path is to first ensure that the following packages are installed on your Ubuntu guest VM:

sudo apt-get install xvfb xfonts-100dpi xfonts-75dpi xfstt

Once that is all done installing, the next step is to edit Grub -- specifically /etc/default/grub:

sudo vi /etc/default/grub

Considering your monitor's maximum resolution (or not, if you want to remote into Ubuntu using XRDP), look for the variable GRUB_GFXMODE. This is where you specify the desired boot resolutions that the guest VM will be instructed to sustain into user space:

GRUB_GFXMODE=1280x960,1280x800,1280x720,1152x768,1152x700,1024x768,800x600

Next, adjust the variable GRUB_GFXPAYLOAD_LINUX to equal keep, or:

GRUB_GFXPAYLOAD_LINUX=keep

Save the changes and be certain to execute the following:

sudo update-grub
sudo reboot

Now, you will notice that even during the boot phase that the resolution is large and this will carry into user space: Lightdm, Xfce, and the likes.

Finally, I would highly suggest installing XRDP for your guest VM. It allows you to access that Ubuntu/Xubuntu/etc. desktop remotely. Specific details regarding this can be found through Ubuntu's forum:

http://askubuntu.com/questions/449785/ubuntu-14-04-xrdp-grey
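If you want to give XRDP a try, the base installation is a single package on Ubuntu (assuming the default repositories; see the link above for configuration details):

sudo apt-get install xrdp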


Enjoy!

--jkbs | @xenfomation

 

 


Creedence: Debian 7.x and PVHVM Testing

Introduction

On my own time and on my own testing equipment, I have been able to run many guest VMs in PVHVM containers - both before Creedence and after its release to the public back in June. With last week's broadcast of Creedence Beta 3's release, I was naturally excited to see Tim's spotlight on PVHVM, and the following article's intent is to show - in a test environment only - how I was able to run Debian 7.x (64-bit) in the same fashion.

For more information regarding how PV + HVM combine to establish a PVHVM container, Tim linked a great article in his Creedence Beta 3 post last Monday that I highly recommend you read, as the finer details are out of scope for this article's intent and purpose.

Why is this important to me?  Quite simply we can go from this....

... to this ...

So now, let's make a PVHVM container for a Debian 7.x (64-Bit) Guest VM within XenCenter!

Requirements

1.  Creedence Beta 3 and XenCenter

2.  The full installation ISO for Debian 7.x (from https://www.debian.org/CD/http-ftp/#stable )

3.  Any changes mentioned below should not be applied to any of the stock Debian templates

4.  This should not be performed on your production environment

Creating A Default Template

With XenCenter open, ensure that from the View options one has "XenServer Templates" selected:

We should now see the default templates that XenServer installs:

1.  Right-click on the "Debian Wheezy 7 (64-bit)" template and save it as "Debian 7":

 

2.  This will produce a "custom template" - highlight it and copy the UUID of the custom template:

3.  The remainder of this configuration will take place from the command-line.

4.  To make the changes to the custom template easier, export the UUID of the custom template we created to avoid copy/paste errors:

export myTemp="af84ad43-8caf-4473-9c4d-8835af818335"
echo $myTemp
af84ad43-8caf-4473-9c4d-8835af818335

5.  With the $myTemp variable created, let us first convert this custom template to a default template by executing:

xe template-param-set uuid=$myTemp other-config:default_template=true

xe template-param-remove uuid=$myTemp param-name=other-config param-key=base_template_name

6.  Now configure the template's "platform" variable to leverage VGA graphics:

xe template-param-set uuid=$myTemp platform:viridian=false platform:device_id=0001 platform:vga=std platform:videoram=16

7.  Due to how some distros work with X, clear the PV-args and set a "vga=792" flag:

xe template-param-set uuid=$myTemp PV-args="vga=792"

8.  Disable the PV-bootloader:

xe template-param-set uuid=$myTemp PV-bootloader=""

9.  Specify that the template uses an HVM-style bootloader (DVD/CD first, then hard drive, and then network):

xe template-param-set uuid=$myTemp HVM-boot-policy="BIOS order"
xe template-param-set uuid=$myTemp HVM-boot-params:order="dcn"
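For convenience, here is the whole sequence of template changes from the steps above collected into one block that can be pasted into Dom0 (a sketch; substitute the UUID of your own custom template):

export myTemp="af84ad43-8caf-4473-9c4d-8835af818335"   # your custom template UUID
xe template-param-set uuid=$myTemp other-config:default_template=true
xe template-param-remove uuid=$myTemp param-name=other-config param-key=base_template_name
xe template-param-set uuid=$myTemp platform:viridian=false platform:device_id=0001 platform:vga=std platform:videoram=16
xe template-param-set uuid=$myTemp PV-args="vga=792"
xe template-param-set uuid=$myTemp PV-bootloader=""
xe template-param-set uuid=$myTemp HVM-boot-policy="BIOS order"
xe template-param-set uuid=$myTemp HVM-boot-params:order="dcn"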

 

Now, before creating a Debian 7.x Guest VM, one should see in XenCenter that "Debian 7" is listed as a "default template":

 

Lastly, regarding the VGA flag and what it means to most distros, the following table explains the VGA flag bit settings needed to achieve an XxY resolution at a given color depth:

VGA Resolution and Color Depth reference Chart:

Depth    800×600   1024×768  1152×864  1280×1024  1600×1200
8 bit    vga=771   vga=773   vga=353   vga=775    vga=796
16 bit   vga=788   vga=791   vga=355   vga=794    vga=798
24 bit   vga=789   vga=792   n/a       vga=795    vga=799

Create A New Debian Guest

From here, one should be able to create a new guest VM using the template we have just created and walk through the entire install:

Post installation, tools can be installed as well!

Enjoy and happy testing!

 

jkbs | @xenfomation


Debian 7.4 and 7.6 Guest VMs

"Four Debians, Two XenServers"

The purpose of this article is to discuss my own success with virtualizing "four" releases of Debian (7.4/7.6; 32-bit/64-bit) in my own test labs.

For more information about Debian, head on over to Debian.org - specifically here to download the 7.6 ISO of your choice ( I used both the full DVD install ISO as well as the net install ISO ).

Note: If you are utilizing the Debian 7.4 net install ISO, the OS will be updated to 7.6 during the install process. This is just a "heads up" in the event you are keen to stick with a vanilla Debian 7.4 VM for test purposes; if so, you will need to download the full install DVD for the 7.4 32-bit/64-bit release instead of the net install ISO.

Getting A New VM Started

Once I had the install media of my choice, I copied it to my ISO repository that both XenServer 6.2 and Creedence utilize in my test environment.

From XenCenter (distributed with Creedence Alpha 4) I selected "New VM".

In both 6.2 and Creedence I chose the "Debian 7.0 (Wheezy) 64-bit" VM template:

I then continued through the "New VM" wizard: specifying processors, RAM, networking, and so forth.  On the last step, I made sure as to select "Start the new VM Automatically" before I pressed "Create Now":

Within a few moments, this familiar view appeared in the console:

I installed a minimum instance of both: SSH and BASE system.  I also used guided partitioning just because I was in quite a hurry.

After championing my way through the installer, as expected, Debian 7.4 and 7.6 both prompted that I reboot:

Since this is a PV install, I have access to the Shutdown, Reboot, and Suspend buttons, but I was curious about tools, as memory consumption and other metrics were not present under each guest's "Performance" tab:

... and the "Network" tab stated "Unknown":

Before I logged in as root - in both XenServer 6.2 and Creedence Alpha 4 - I mounted the xs-tools.iso.  Once in with root access, I executed the following commands to install xs-tools for these guest VMs:


mkdir iso
mount /dev/xvdd iso/
cd iso/Linux/
./install.sh

The output was exactly the same in both VMs and naturally I selected "Y" to install the guest additions:

Detected `Debian GNU/Linux 7.6 (wheezy)' (debian version 7).

The following changes will be made to this Virtual Machine:
  * update arp_notify sysctl.conf.
  * packages to be installed/upgraded:
    - xe-guest-utilities_6.2.0-1137_amd64.deb

Continue? [y/n] y

Selecting previously unselected package xe-guest-utilities.
(Reading database ... 24502 files and directories currently installed.)
Unpacking xe-guest-utilities (from .../xe-guest-utilities_6.2.0-1137_amd64.deb) ...
Setting up xe-guest-utilities (6.2.0-1137) ...
Mounting xenfs on /proc/xen: OK
Detecting Linux distribution version: OK
Starting xe daemon:  OK

You should now reboot this Virtual Machine.

Following the installer's instructions, I rebooted the guest VMs accordingly.

Creedence Alpha 4 Results

As soon as the reboot was complete I was able to see each guest VM's memory performance as well as networking for both IPv4 and IPv6:

XenServer 6.2

With XenServer 6.2, I found that after installing the guest agent there still was no IPv4 information under the "Network" tab for my 64-bit Debian 7.4 and 7.6 guest VMs. This does not apply to 32-bit Debian 7.4 and 7.6 guest VMs, where the tools installed just fine.

Then I thought about it and realized that by disabling IPv6 - presto - the network information appeared for my IPv4 address. To accomplish this, I edited the following file (so as to avoid adjusting GRUB parameters):

/etc/sysctl.conf

And at the bottom of this file I added:

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv6.conf.eth0.disable_ipv6 = 1
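To apply these settings without a full reboot, the standard sysctl reload should also work (I rebooted anyway, as described below):

sysctl -p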

After saving my changes, I rebooted and immediately was able to see my memory usage:

However... I still could not see my IPv4 address under the "Network" tab until I noticed the device ID of the network interface -- it was Device 1 (not 0):

I deleted this interface and re-added a new one from XenCenter.  Instantly, I could see my IPv4 address and the device ID for the network interface was back to 0:

And yes, I tested rebooting -- the address is still shown and memory usage is still measured. In addition, I did try removing the flags that disable IPv6, but that resulted in seeing "UNKNOWN" again for the 64-bit Debian 7.4 and 7.6 guests. That just means that on XenServer 6.2 I have kept my changes in /etc/sysctl.conf to ensure my 64-bit Debian 7.4 and 7.6 hosts work just fine with XenTools' Guest Additions for Linux.

So, that's that -- something to experiment and test with: Debian 7.4 or 7.6 32-bit/64-bit in a XenServer 6.2 or Creedence Alpha test environment!

 

--jkbs

@xenfomation


Running Scientific Linux Guest VMs on XenServer

What is Scientific Linux?

In short, Scientific Linux is a customized RedHat/CentOS Linux distribution provided by CERN and Fermilab; it is popular in educational institutions as well as laboratory environments. More can be read about Scientific Linux here: https://www.scientificlinux.org/

From my own long-term testing - before XenServer 6.2 and our pre-release/Alpha, Creedence - I have run both Scientific Linux 5 and Scientific Linux 6 without issues. This article's scope is to show how one can install Scientific Linux and, more specifically, ensure the XenTools Guest Additions for Linux are installed, as these do not require any form of "Xen-ified" kernel.

XenServer and Creedence

The following are my own recommendations to run Scientific Linux in XenServer:

  1. I recommend using XenServer 6.1 through any of the Alpha releases due to improvements with XenTools
  2. I recommend using Scientific Linux 5 or Scientific Linux 6
  3. The XenServer VM template one will need will be either CentOS 5 or CentOS 6; whether 32 or 64 bit depends on the release of Scientific Linux you will be using

One will also require a URL to install Scientific Linux from its repository, found at http://ftp.scientificlinux.org/linux/scientific/

The following are URLs I recommend for use during the Guest Installation process (discussed later):

Scientific Linux 5 or 6 Guest VM Installation

With XenCenter, the process of installing Scientific Linux 5.x or Scientific Linux 6 uses the same principles.  You need to create a new VM, select the appropriate CentOS template, and define the VM parameters for disk, RAM, and networking:

1.  In XenCenter, select "New VM":

2.  When prompted for the new VM Template, select the appropriate CentOS-based template (5 or 6, 32 or 64 bit):

3.  Follow the wizard to add processors, disc, and networking information

4.  From the console, follow the steps as to install Scientific Linux 5 or 6 based on your preferences.

5.  After rebooting, login as root and execute the following command within the Guest VM:

yum update

6.  Once yum has applied any updates, reboot the Scientific Linux 5 or 6 Guest VM by executing the following within the Guest VM:

reboot

7.  With the Guest VM back up, login as root and mount the xs-tools.iso within XenCenter:

8.  From the command line, execute the following commands to mount xs-tools.iso within the Guest VM as well as to run the install.sh utility:

cd ~
mkdir tools
mount /dev/xvdd tools/
cd tools/Linux/
./install.sh

9.  With Scientific Linux 5 you will be prompted to install the XenTools Guest Additions - select yes and, when complete, reboot the VM:

reboot

10.  With Scientific Linux 6 you will notice the following output:

Fatal Error: Failed to determine Linux distribution and version.

11.  This is not a fatal error, but one induced because the distro build and revision are not presented as expected. It means you will need to install the XenTools Guest Additions manually by executing the following commands and rebooting:

rpm -ivh xe-guest-utilities-xenstore-<version number here>.x86_64.rpm
rpm -ivh xe-guest-utilities-<version number here>.x86_64.rpm
reboot
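To confirm both packages landed before rebooting, a quick query against the RPM database should suffice (standard rpm usage):

rpm -qa | grep -i xe-guest-utilities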

Finally, after the last reboot (post guest-addition install), one will notice from XenCenter that the network address, stats, and so forth are available (including the ability to migrate the VM):

 

I hope this article helps any of you out there and feedback is always welcomed!

--jkbs

@xenfomation

 


Overview of the Performance Improvements between XenServer 6.2 and Creedence Alpha 2

The XenServer Creedence Alpha 2 has been released, and one of the main focuses in Alpha 2 was the inclusion of many performance improvements that build on the architectural improvements seen in Alpha 1. This post will give you an overview of these performance improvements in Creedence, and will start a series of in-depth blog posts with more details about the most important ones.

Creedence Alpha 1 introduced several architectural improvements that aim to improve performance and fix a series of scalability limits found in XenServer 6.2:

  • A new 64-bit Dom0 Linux kernel. The 64-bit kernel will remove the cumbersome low/high-memory division present in the previous 32-bit Dom0 kernel, which limited the maximum amount of memory that Dom0 could use and which added memory access penalties in a Dom0 with more than 752MB RAM. This means that the Dom0 memory can now be arbitrarily scaled up to cope with memory demands of the latest vGPU, disk and network drivers, support for more VMs and internal caches to speed up disk access (see, for instance, the Read-caching section below).

  • Dom0 Linux kernel 3.10 with native support for the Xen Project hypervisor. Creedence Alpha 1 adopted a very recent long-term Linux kernel. This modern Linux kernel contains many concurrency, multiprocessing and architectural improvements over the old xen-Linux 2.6.32 kernel used previously in XenServer 6.2. It contains pvops features to run natively on the Xen Project hypervisor, and streamlined virtualization features used to increase datapath performance, such as a grant memory device that allows Dom0 user space processes to access memory from a guest (as long as the guest agrees in advance). Additionally, the latest drivers from hardware manufacturers containing performance improvements can be adopted more easily.

  • Xen Project hypervisor 4.4. This is the latest Xen Project hypervisor version available, and it improves on the previous version 4.1 on many accounts. It vastly increases the number of virtual event channels available for Dom0 -- from 1023 to 131071 -- which can translate into a correspondingly larger number of VMs per host and larger numbers of virtual devices that can be attached to them. XenServer 6.2 was using a special interim change that provided 4096 channels, which was enough for around 500 VMs per host with a few virtual devices in each VM. With the extra event channels in version 4.4, Creedence Alpha 1 can have each of these VMs endowed with a richer set of virtual devices. The Xen Project hypervisor 4.4 also handles grant-copy locking requests more efficiently, improving aggregate network and disk throughput; it facilitates future increases to the supported amount of host memory and CPUs; and it adds many other helpful scalability improvements.

  • Tapdisk3. The latest Dom0 disk backend design has been enabled by default for all the guest VBDs. While the previous tapdisk2 in XenServer 6.2 would establish a datapath to the guest in a circuitous way via a Dom0 kernel component, tapdisk3 in Creedence Alpha 1 establishes a datapath connected directly to the guest (via the grant memory device in the new kernel), minimizing latency and using less CPU. This results in big improvements in concurrent disk access and a much larger total aggregate disk throughput for the VBDs. We have measured aggregate disk throughput improvements of up to 100% on modern disks and machines accessing large blocksizes with large number of threads and observed local SSD arrays being maxed out when enough VMs and VBDs were used.

  • GRO enabled by default. The Generic Receive Offload is now enabled by default for all PIFs available to Dom0. This means that for GRO-capable NICs, incoming network packets will be transparently merged by the NIC and Dom0 will be interrupted less often to process the incoming data, saving CPU cycles and scaling much better with 10Gbps and 40Gbps networks. We have observed incoming single-stream network throughput improvements of 300% on modern machines. (A quick way to check GRO status on a PIF is shown just after this list.)

  • Netback thread per VIF. Previously, XenServer 6.2 would have one netback thread for each existing Dom0 VCPU and a VIF would be permanently associated with one Dom0 VCPU. In the worst case, it was possible to end up with many VIFs forcibly sharing the same Dom0 VCPU thread, while other Dom0 VCPU threads were idle but unable to help. Creedence Alpha 2 improves this design and gives each VIF its own Dom0 netback thread that can run on any Dom0 VCPU. Therefore, the VIF load will now be spread evenly across all Dom0 VCPUs in all cases.
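As referenced in the GRO item above, you can verify whether GRO is active on a given Dom0 interface with ethtool (a sketch; replace eth0 with the interface of interest):

ethtool -k eth0 | grep generic-receive-offload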

Creedence Alpha 2 then introduced a series of extra performance enhancements on top of the architecture improvements of Creedence Alpha 1:

  • Read-caching. In some situations, several VMs are all cloned from the same base disk so share much of their data while the few different blocks they write are stored in differencing-disks unique to each VM. In this case, it would be useful to be able to cache the contents of the base disk in memory, so that all the VMs can benefit from very fast access to the contents of the base disk, reducing the amount of I/O going to and from physical storage. Creedence Alpha 2 introduces this read caching feature enabled by default, which we expect to yield substantial performance improvements in the time it takes to boot VMs and other desktop and server applications where the VMs are mostly sharing a single base disk.

  • Grant-mapping on the network datapath. The pvops-Linux 3.10 kernel used in Alpha 1 had a VIF datapath that would need to copy the guest's network data into Dom0 before transmitting it to another guest or host. This memory copy operation was expensive and it would saturate the Dom0 VCPUs and limit the network throughput. A new design was introduced in Creedence Alpha 2, which maps the guest's network data into Dom0's memory space instead of copying it. This saves substantial Dom0 VCPU resources that can be used to increase the single-stream and aggregate network throughput even more. With this change, we have measured network throughput improvements of 250% for single-stream and 200% for aggregate stream over XenServer 6.2 on modern machines. 

  • OVS 2.1. An openvswitch network flow is a match between a network packet header and an action such as forward or drop. In OVS 1.4, present in XenServer 6.2, the flow had to have an exact match for the header. A typical server VM could have hundreds or more connections to clients, and OVS would need to have a flow for each of these connections. If the host had too many such VMs, the OVS flow table in the Dom0 kernel would become full and would cause many round-trips to the OVS userspace process, degrading significantly the network throughput to and from the guests. Creedence Alpha 2 has the latest OVS 2.1, which supports megaflows. Megaflows are simply a wildcarded language for the flow table allowing OVS to express a flow as group of matches, therefore reducing the number of required entries in the flow table for the most common situations and improving the scalability of Dom0 to handle many server VMs connected to a large number of clients.

Our goal is to make Creedence the most scalable and fastest XenServer release yet. You can help us in this goal by testing the performance features above and verifying if they boost the performance you can observe in your existing hardware.

Debug versus non-debug mode in Creedence Alpha

The Creedence Alpha releases use by default a version of the Xen Project hypervisor with debugging mode enabled to facilitate functional testing. When testing the performance of these releases, you should first switch to using the corresponding non-debugging version of the hypervisor, so that you can unleash its full potential suitable for performance testing. So, before you start any performance measurements, please run in Dom0:

cd /boot
ln -sf xen-*-xs?????.gz xen.gz   #points to the non-debug version of the Xen Project hypervisor in /boot

Double-check that the resulting xen.gz symlink is pointing to a valid file and then reboot the host.
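A quick way to perform that double-check with standard tools (the exact file name will vary with your build):

ls -l /boot/xen.gz
readlink -f /boot/xen.gz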

You can check if the hypervisor debug mode is currently on or off by executing in Dom0:

xl dmesg | fgrep "Xen version"

and checking if the resulting line has debug=y or debug=n. It should be debug=n for performance tests.

You can reinstate the hypervisor debugging mode by executing in Dom0:

cd /boot
ln -sf xen-*-xs?????-d.gz xen.gz   #points to the debug (-d) version of the Xen Project hypervisor in /boot

and then rebooting the host.

Please report any improvements and regressions you observe on your hardware to the xs-devel mailing list. And keep an eye out for the next installments of this series!


XenServer Creedence Alpha 2 Released

We're pleased to announce that XenServer Creedence Alpha 2 has been released. Alpha 2 builds on the capabilities seen in Alpha 1, and we're interested in your feedback on this release. With Alpha 1, we were primarily interested in receiving basic feedback on the stability of the code; with Alpha 2, we're interested in feedback not only on basic operations, but also on storage performance.

The following functional enhancements are contained in Alpha 2.

  • Storage read caching. Boot storm conditions in environments using common templates can create unnecessary IO on shared storage systems. Storage read caching uses free dom0 memory to cache common read IO and reduce the impact of boot storms on storage networks and NAS devices.
  • DM Multipath storage support. For users of legacy MPP-RDAC, this functionality has been deprecated in XenServer Creedence following storage industry practices. If you are still using MPP-RDAC with XenServer 6.2 or prior, please enter an incident in https://bugs.xenserver.org to record your usage such that we can develop appropriate guidance.
  • Support for Ubuntu 14.04 and CentOS 5.10 as guest operating systems

The following performance improvements were observed with Alpha 2 compared to Alpha 1, but we'd like to hear your experiences.

  • GRO enabled physical network to guest network performance improved by 65%
  • Aggregate network throughput improved by 50%
  • Disk IO throughput improved by 100%

While these improvements are rather impressive, we do need to be aware this is alpha code. What this means in practice is that when we start looking at overall scalability the true performance numbers could go down a bit to ensure stable operations. That being said, if you have performance issues with this alpha we want to hear about them. Please also look to this blog space for updates from our performance engineering team detailing how some of these improvements were measured.

 

Please do download XenServer Creedence Alpha 2, and provide your feedback in our incident database.     


Validation of the Creedence Alpha

On Monday, May 19th, early access to XenServer Creedence builds started from xenserver.org. The xenserver.org community has access to XenServer pre-release installation media of alpha quality and is invited to provide feedback on it.

This blog describes the validation and system testing performed on the first alpha build.

Test Inventory

The XenServer development process incorporates daily automated regression testing complemented by various additional layers of testing, both automated and manual, that are executed less frequently.

In outline, these are the test suites and test cycles executed during XenServer development.

  • Automated short-cycle regression testing (“BVT”) for fast feedback to developers – on every build on every branch.

  • Automated medium-cycle regression testing (“BST”) to maintain quality on team branches.

  • Automated long-cycle system regression test (“Nightly”) – on select builds on select branches, aimed at providing wide regression coverage on a daily basis.

  • Automated performance regression test, measuring several hundred key performance indicators – run on select builds on select branches on a regular basis.

  • Automated stress test (huge numbers of lifecycle operations on single hosts) – run once per week on average.

  • Automated pool stress test (huge numbers of lifecycle and storage operations on XS pools) – run once per week on average.

  • Automated long-cycle system regression test (“Full regression”) – on select builds on select branches, aimed at providing extensive test coverage; this comprises a huge number of tests and takes several days to run, so it is run once every two weeks on average.

  • Automated large-scale stability test (huge numbers of VMs on large XS pools, boot storms and other key ‘scale’ use-cases) – run on demand, usually several times in the run-up to a product release and ahead of key internal milestones, including deliveries to other Citrix product groups.

  • Automated soak test – run on demand, usually several times in the run-up to a product release and ahead of key internal milestones; this comprises long-running tests aimed at validating XS over an extended time period.

  • Automated upgrade test – run ahead of key milestones and deliveries to validate upgrade procedures for new releases.

  • Manual test – exploratory testing using XenCenter, aimed principally at testing edge cases and scenarios that are not well covered by automation; cycles of manual testing are carried out on a regular basis and ahead of key milestones and deliveries.

Exit Criteria

Each stage of a XenServer release project requires different test suites to have been run “successfully” (usually meaning a particular pass-rate has been achieved and/or failures are understood and deemed acceptable).

However, test pass rates are only a barometer of quality – if one test out of a hundred fails, that may not matter; but on the other hand, what if that one test case failure represents a high-impact problem affecting a common use-case? For this reason we also use defect counts and impact analyses as part of the exit criteria.

XenServer engineering maintains a high quality bar throughout the release cycle – the “Nightly” automated regression suite, comprising several thousand test cases, must always achieve a pass rate of over 95%. If it does not, then new feature development stops while bugs are fixed and code is reverted until a high pass rate is restored.

The Alpha.next release is a drop from the Creedence project branch that has achieved the following pass rates on the following test suites:

  • Nightly regression – 96.5%
  • Stress – passed (no pass rate for this suite)
  • Pool stress – passed (no pass rate for this suite)
  • General regression – 91.3%

Drops later in the project lifecycle (e.g. Tech Preview) will be subjected to more testing with more stringent exit criteria.

More Info

For more information on the automation framework used for these tests, please read my blog about XenRT!


XenServer.next Alpha Available for Download

XenServer.next Alpha Available

The XenServer engineering team is pleased to announce the availability of an alpha of the next release of XenServer, code named “Creedence”. XenServer Creedence is intended to represent the latest capabilities in XenServer, with a target release date determined by feature completeness. Several key areas have been improved over XenServer 6.2; significantly, we have also introduced a 64-bit control domain architecture and updated the Xen Project hypervisor to version 4.4. Due to these changes, we are requesting that tests using this alpha be limited to core functionality such as the installation process and basic operations like VM creation, start and stop. Performance and scalability tests should be deferred until a later build is nominated to alpha or beta status.

This is pre-release code and as such isn’t appropriate for production use, and is unlikely to function properly with provisioning solutions such as Citrix XenDesktop and Citrix CloudPlatform. It is expected that users of Citrix XenDesktop and Citrix CloudPlatform will be able to begin testing Creedence within the XenServer Tech Preview time-frame announced at Citrix Synergy. In preparation for the Tech Preview, all XenServer users, including those running XenDesktop, are encouraged to validate if Creedence is able to successfully install on their chosen hardware.

Key Questions

When does the alpha period start?

The alpha period starts on May 19th 2014

When does the alpha period end?

There is no pre-defined end to the alpha period. Instead, we're providing access to nightly builds, and from those nightly builds we'll periodically promote builds to "alpha.x" status. The promotion will occur as key features are incorporated and stability targets are reached. As we progress, the alpha period will naturally transition into a beta or Tech Preview stage, ultimately ending with a XenServer release. Announcements will be made on xenserver.org when a new build is promoted.

Where do I get the build?

The build can be downloaded from xenserver.org at: http://xenserver.org/index.php?option=com_content&view=article&layout=edit&id=142

If I encounter a defect, how do I enter it?

Defects and incidents are expected with this alpha, and they can be entered at https://bugs.xenserver.org. Users wishing to submit or report issues are advised to review our submission guidelines to ensure they are collecting enough information for us to resolve any issues.

Where can I find more information on Creedence?

We are pleased to announce a public wiki has been created at https://wiki.xenserver.org to contain key architectural information about XenServer; including details about Creedence.

How do I report compatibility information?

The defect system offers Hardware and Vendor compatibility projects to collect information about your environment. Please report both successes and failures for our review.

What about upgrades?

The alpha will not upgrade any previous version of XenServer, including nightly builds from trunk, and there should be no expectation the alpha can be upgraded.

Do I need a new XenCenter?

Yes, XenCenter has been updated to work with the alpha and can be installed from the installation ISO.

Will I need a new SDK?

If you are integrating with XenServer, the SDK has also been updated. Please obtain the SDK for the alpha from the download page.

Where can I ask questions?

Since the Creedence alpha is being posted to xenserver.org and managed by the xenserver.org team, questions asked on Citrix Support Forums are likely to go unanswered. Those forums are intended for released and supported versions of XenServer. Instead, we are inviting questions on the xs-devel mailing list and via Twitter to @XenServerArmy. In order to post questions, you will need to subscribe to the mailing list, which can be done here: http://xenserver.org/discuss-virtualization/mailing-lists.html. Please note that the xs-devel mailing list is monitored by the engineering team, but really isn't intended as a general support mechanism. If your question is more general purpose and would apply to any XenServer version, please validate whether the issue being experienced is also present with XenServer 6.2, and if so, ask the question on the Citrix support forums. We've also created some guidelines for submitting incidents.


Introducing Open Source XenRT

As a follow-up activity to the open sourcing of XenServer, Citrix is pleased to announce the open sourcing of its automated test platform, XenRT.

XenRT ("Xen Regression Test") is a test automation framework, written in Python, providing abstractions for the various components under test (pool, host, VM, storage, network etc). The library code which makes up these abstractions simplifies the process of writing tests, allowing quite complex operations to be performed in a single method call.

In a full deployment, XenRT handles all aspects of the testing process - it will schedule a test job onto a host, bootstrap it (via DHCP/PXE), install the build to be tested, carry out the testing, and collect all necessary logs for troubleshooting, without any user interaction required.

In addition to basic functional, regression, and stress testing, XenRT has suites of tests that are used for testing performance, scalability, and interoperability.

Within Citrix, XenRT is used with a distributed lab comprised of an extremely wide range of hardware, and is developed and maintained by a team of some 25 developers. Tests are also written and executed directly by the wider XenServer engineering team, in a true "Test-as-a-Service" platform - see this post on the Citrix blog for more information.

XenRT has been open sourced to leverage Citrix's experience and resources in test automation to help improve the quality of open source Xen and XenServer releases, to benefit the entire community.

To get started with XenRT, follow the links below to the code and a README document (which contains getting started instructions - further documentation will follow in the near future). For discussion a mailing list has been created - information about this can be found at https://lists.xenserver.org/sympa/info/xenrt-users.

Download links:
README document
Main XenRT tarball
Third party test resource tarball
Source for third party resources (not required for normal operation)


About XenServer

XenServer is the leading open source virtualization platform, powered by the Xen Project hypervisor and the XAPI toolstack. It is used in the world's largest clouds and enterprises.
 
Technical support for XenServer is available from Citrix.