All Things Xen

General ramblings regarding Citrix XenServer & its open source counterpart.

iSCSI and Jumbo Frames

So, you either just set up iSCSI or are having performance issues with your current iSCSI device. Here are some pointers to ensure "networking" is not the limiting factor:

1. Are my packets even making it to the iSCSI target?
Always check in XenCenter that your NICs responsible for storage are pointing to the correct target IPs. If they are, ensure you can ping these targets from XenServer's command line:

ping x.x.x.x

If you cannot ping the target, that may be the issue.

Use the 'route' command to show whether XenServer has an interface and route to reach the iSCSI target's subnet. If route shows nothing related to your iSCSI target IPs, or takes a long time to show the target's IP/route information, revisit your network configuration: working from the iSCSI device's configuration, through the switch ports, all the way up to the storage interface defined for your XenServer(s).
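
For example, treating 10.10.10.50 below as a stand-in for your actual iSCSI target IP, you can ask dom0 which route and interface it would use:

route -n
ip route get 10.10.10.50

If the second command reports an interface other than your storage NIC, the packets are heading out the wrong interface.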

Odds are the packets are trying to route out via another interface, or there is a cabling or VLAN tag mismatch. Or, at worst, the network cable is bad!

2. Is your network really setup for Jumbo Frames?
If you can ping your iSCSI targets but are having performance issues with Jumbo Frames (9000 or 4500 MTU size, depending on vendor), ensure your storage interface on XenServer is configured to use this MTU size.
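
As a quick check from dom0 (eth2 below is a placeholder for your storage NIC, and the UUID is whatever 'xe network-list' reports for your storage network), you can confirm the MTU actually in effect and, if needed, raise it on the network object:

ip link show eth2 | grep mtu
xe pif-list params=device,MTU
xe network-param-set uuid=<storage-network-uuid> MTU=9000

The new MTU typically takes effect once the PIFs on that network are re-plugged or the host is rebooted.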

One can also execute a ping command to see whether the path supports the larger MTU without fragmentation:

ping x.x.x.x -M do -s 8972

This tells XenServer to ping your iSCSI target without fragmenting frames, using a payload sized for a 9000-byte MTU: 8972 bytes of ICMP payload plus 8 bytes of ICMP header and 20 bytes of IP header adds up to 9000, so use 8972 rather than 9000.

If this returns fragmentation or other errors, check the cabling from XenServer along with the switch settings AND the iSCSI setup. Sometimes these attributes can be reset after firmware updates to the iSCSI-enabled, managed storage device.

3. Always make sure your network firmware and drivers are up to date!
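
A quick way to see what you are currently running (again, eth2 is a placeholder for your storage NIC) is ethtool's driver query, which you can then compare against your NIC vendor's latest release:

ethtool -i eth2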

And these are but three simple ways to isolate issues with iSCSI connectivity/performance.  The rest, well, more to come...



--jkbs | @xenfomation | XenServer.org Blog

Comments 3

Tobias Kreidl on Saturday, 18 April 2015 05:23

Thanks for posting this, Jesse. There are of course numerous tweaks possible to improve stock network settings, published by Citrites and others, which interact in complex ways, and as always, experimentation is best carried out on non-production test environments.

One interesting point is that when using Jumbo Frames, I have seen in several places the recommendation for Linux hosts to set tcp_mtu_probing = 1 to help avoid the problem of so-called "MTU black holes" (see, for example, this link for more information: http://kb.pert.geant.net/PERTKB/PathMTU). As far as I know, the standard setting is "0" on most Linux distributions, including XenServer.
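
For reference, checking and enabling this on a Linux host is a standard sysctl exercise (the last line below assumes the usual /etc/sysctl.conf mechanism for persistence):

sysctl net.ipv4.tcp_mtu_probing
sysctl -w net.ipv4.tcp_mtu_probing=1
echo "net.ipv4.tcp_mtu_probing = 1" >> /etc/sysctl.conf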

JK Benedict on Monday, 20 April 2015 14:20

Quite welcome, Tobias, and it is always great to hear from you!

The article you sent is, well, quite amazing. I have seen traditional 1500-based MTU networks (and Jumbo Frame capable networks) generate these so-called black holes (and the messages that Linux, etc. returns). I half wonder if the probing is disabled so as to prevent any extra overhead across the network, or being mixed into a stream of traffic? Of course, this is a wild idea and I would need to break down a probe packet to determine its contents, its ability to "shard/fragment" (as any other TCP-based frame should), as well as any remote possibility it could aggravate the "Too Big" issue.

As for the default setting per Linux-ish kernels, indeed 0 is set so that means one thing... time to exploit my own personal lab to see what I can trace!!!

Cheers, sir!

--jkbs

Guest - Thomas Williams on Tuesday, 28 April 2015 21:48

I liked your short article on iSCSI and jumbo frames. Another thing that I have found useful for iSCSI is to increase the receive buffer size on the server interface. It improves performance and reduces retransmits.

First check the current settings...

[root@localhost ~]# ethtool -g em2
Ring parameters for em2:
Pre-set maximums:
RX: 2047
RX Mini: 0
RX Jumbo: 1023
TX: 511
Current hardware settings:
RX: 200
RX Mini: 0
RX Jumbo: 100
TX: 511

[root@localhost ~]#

Then increase the receive buffers...

[root@localhost ~]# ethtool -G em2 rx 2047 rx-jumbo 1023
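
Note that 'ethtool -G' changes do not survive a reboot on their own; assuming a stock dom0 setup, one simple approach is to re-apply the same command at boot, for example from /etc/rc.local:

ethtool -G em2 rx 2047 rx-jumbo 1023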

Enjoy :-)

-Thomas

