Virtualization Blog

Discussions and observations on virtualization.

Debugging Neutron Networking in XenServer

One of the tasks I was assigned was to fix the code that prevented XenServer with Neutron from working properly. This configuration used to work well, but support broke as more and more changes were made to Neutron, and the lack of a CI environment with XenServer hid the problem. I began getting XenServer with Neutron back to a working state by following the outline in the Quantum with Grizzly blog post from a few years ago. It's important to note that with the Havana release, Quantum was renamed to Neutron, and we'll use Neutron throughout this post. During my work, I needed to debug why instances were not obtaining IP addresses. This blog post covers the workflow I used, and I hope you'll find it helpful.

Environment

  • XenServer: 6.5
  • OpenStack: September 2015 master code
  • Network: ML2 plugin, OVS driver, VLAN type
  • Single Box installation

I had made some changes to the DevStack script to let XenServer with Neutron be installed and run properly. The following is the debugging process I followed when newly launched VMs could not automatically get an IP from the Neutron DHCP agent.

Brief description of the DHCP process

When guest VMs boot, they send a DHCP request as a broadcast message within their network broadcast domain and then wait for a DHCP server's reply. In OpenStack Neutron, the DHCP server, or DHCP agent, is responsible for allocating IP addresses. If VMs cannot get IP addresses, our first priority is to check whether the packets from the VMs are actually received by the DHCP server.

 

[Picture 1: XenServer with Neutron network topology]

 

Dump traffic in Network Node

Since I used DevStack with a single box installation, all OpenStack nodes reside in the same DomU (VM). Perform the following steps:

1. Check namespace that DHCP agent uses

In the DevStack VM, execute:

    sudo ip netns

The output will look something like this:

    qrouter-17bdbe51-93df-4bd8-93fd-bb399ed3d4c1
    qdhcp-49a623fd-c168-4f27-ad82-946bfb6df3d7

Note: qdhcp-xxx is the namespace used by the DHCP agent.

2. Check which interface the DHCP agent uses for L3 packets

In the DevStack VM, execute:

    sudo ip netns exec \
qdhcp-49a623fd-c168-4f27-ad82-946bfb6df3d7 ifconfig

The results will look something like the following, and the "tapYYYY" entry is the one we care about.

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    tap7b39ecad-81 Link encap:Ethernet  HWaddr fa:16:3e:e3:46:c1
              inet addr:10.0.0.2  Bcast:10.0.0.255  Mask:255.255.255.0
              inet6 addr: fe80::f816:3eff:fee3:46c1/64 Scope:Link
              inet6 addr: fdff:631:9696:0:f816:3eff:fee3:46c1/64 Scope:Global
              UP BROADCAST RUNNING  MTU:1500  Metric:1
              RX packets:42606 errors:0 dropped:0 overruns:0 frame:0
              TX packets:38 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:4687150 (4.6 MB)  TX bytes:4867 (4.8 KB)

3. Monitor traffic on the DHCP agent's interface tapYYYY

In the DevStack VM, monitor the traffic on the tapYYYY interface by executing this command:

    sudo ip netns exec \
qdhcp-49a623fd-c168-4f27-ad82-946bfb6df3d7 \
tcpdump -i tap7b39ecad-81 -s0 -w dhcp.cap

Theoretically, when launching a new instance, you should see DHCP request and reply messages like this:

    16:29:40.710953 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:f9:f6:b0 (oui Unknown), length 302
16:29:40.713625 IP 172.20.0.1.bootps > 172.20.0.10.bootpc: BOOTP/DHCP, Reply, length 330

Dump traffic in Compute Node

Meanwhile, you will definitely want to dump traffic in the OpenStack compute node, which with XenServer is Dom0.

When a new instance is launched, a new virtual interface named "vifX.Y" is created. 'X' is the domain ID of the new VM and 'Y' is the ID of the VIF as defined in XAPI. Domain IDs are sequential: if the latest interface is vif20.0, the next one will most likely be vif21.0. You can then try tcpdump -i vif21.0. Note that it may fail at first if the virtual interface hasn't been created yet, but once it is created, you can monitor the packets. In theory you should see DHCP request and reply packets in Dom0, just as you see on the DHCP agent side.

Note: If you cannot catch the packets at the instance's launch time, you can also log in to the instance via XenCenter and run ifup eth0; this will also trigger the instance to send a DHCP request.
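
For reference, here is a minimal way to find an instance's domain ID and capture its DHCP traffic in Dom0. The VM name-label and interface name below are examples only; substitute the ones from your environment.

    # In Dom0: look up the domain ID of the instance (name-label is an example)
    xe vm-list name-label=instance-00000001 params=dom-id
    # If dom-id is 21, its first VIF is vif21.0; capture only DHCP traffic on it
    tcpdump -i vif21.0 -s0 -vv port 67 or port 68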

Check DHCP request goes out of the compute node

In most cases, you should see the DHCP request packets sent out from Dom0. This means the VM itself is OK: it has sent out its DHCP request.

Note: Some images keep sending DHCP requests periodically until they get a response. Others only try a few times (e.g. three) and then give up, so in some scenarios the instance loses its chance to send further DHCP requests. That's why some people on the Internet suggest changing images when a launched instance cannot get an IP address via DHCP.

Check DHCP request arrives at the DHCP server side

When I was first testing, I didn't see any DHCP requests on the DHCP agent side. Where did the request packets go? Were they dropped? If so, what dropped them, and why?

Thinking about it a bit more, the packets must have been dropped at either L2 or L3. With this in mind, we can check each in turn. For L3/L4, I didn't have a firewall set up and the security group's default rule was to let all packets through, so I didn't spend much effort on this part. For L2, since we use OVS, I began by checking the OVS rules. If you are not familiar with OVS, this can take some time; I certainly spent a lot of time completely understanding the mechanism and the rules.

The main aim is to check all existing rules in Dom0 and DomU and find out which rule caused the packets to be dropped.

Check OVS flow rules

OVS flow rules in Network Node

To get the port information on the integration bridge "br-int", execute the following in the DevStack VM:

    sudo ovs-ofctl show br-int 
  
    stack@DevStackOSDomU:~$ sudo ovs-ofctl show br-int
    OFPT_FEATURES_REPLY (xid=0x2): dpid:0000ba78580d604a
    n_tables:254, n_buffers:256
    capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
    actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
     1(int-br-eth1): addr:1a:2d:5f:48:64:47
         config:     0
         state:      0
         speed: 0 Mbps now, 0 Mbps max
     2(tap7b39ecad-81): addr:00:00:00:00:00:00
         config:     PORT_DOWN
         state:      LINK_DOWN
         speed: 0 Mbps now, 0 Mbps max
     3(qr-78592dd4-ec): addr:00:00:00:00:00:00
         config:     PORT_DOWN
         state:      LINK_DOWN
         speed: 0 Mbps now, 0 Mbps max
     4(qr-55af50c7-32): addr:00:00:00:00:00:00
         config:     PORT_DOWN
         state:      LINK_DOWN
         speed: 0 Mbps now, 0 Mbps max
     LOCAL(br-int): addr:9e:04:94:a4:95:bb
         config:     PORT_DOWN
         state:      LINK_DOWN
         speed: 0 Mbps now, 0 Mbps max
    OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

To get the flow rules, execute:

    sudo ovs-ofctl dump-flows br-int
  
    stack@DevStackOSDomU:~$ sudo ovs-ofctl dump-flows br-int
    NXST_FLOW reply (xid=0x4):
      cookie=0x9bf3d60450c2ae94, duration=277625.02s, table=0, n_packets=31, n_bytes=4076, idle_age=15793, hard_age=65534, priority=3,in_port=1,dl_vlan=1041 actions=mod_vlan_vid:1,NORMAL
      cookie=0x9bf3d60450c2ae94, duration=277631.928s, table=0, n_packets=2, n_bytes=180, idle_age=65534, hard_age=65534, priority=2,in_port=1 actions=drop
      cookie=0x9bf3d60450c2ae94, duration=277632.116s, table=0, n_packets=42782, n_bytes=4706099, idle_age=1, hard_age=65534, priority=0 actions=NORMAL
      cookie=0x9bf3d60450c2ae94, duration=277632.103s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
      cookie=0x9bf3d60450c2ae94, duration=277632.09s, table=24, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop

These rules in DomU look normal, so let's move on to Dom0 and try to find out more.

OVS flow rules in Compute Node

Looking at the traffic flow in picture 1, the traffic from the VM to the DHCP server goes xapiX -> xapiY in Dom0, then br-eth1 -> br-int in DomU. So perhaps some OVS rule filtered the packets at layer 2. I suspected xapiY, although I could not give a specific reason why.

To determine the xapiY in your environment, execute:

    xe network-list

In the results, look for the "bridge" which matches the name-label for your network. In our case, it was xapi3, so to determine the port information, execute:

   ovs-ofctl show xapi3
  
   [root@rbobo ~]# ovs-ofctl show xapi3
   OFPT_FEATURES_REPLY (xid=0x2): dpid:00008ec00170b013
   n_tables:254, n_buffers:256
   capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
   actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
    1(vif15.1): addr:fe:ff:ff:ff:ff:ff
      config:     0
      state:      0
      speed: 0 Mbps now, 0 Mbps max
    2(phy-xapi3): addr:d6:37:17:1d:01:ee
      config:     0
      state:      0
      speed: 0 Mbps now, 0 Mbps max
    LOCAL(xapi3): addr:5a:46:65:a2:3b:4f
      config:     0
      state:      0
      speed: 0 Mbps now, 0 Mbps max
   OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

Execute ovs-ofctl dump-flows xapi3 to get the flow rules:

  [root@rbobo ~]# ovs-ofctl dump-flows xapi3
  NXST_FLOW reply (xid=0x4):
    cookie=0x0, duration=278700.004s, table=0, n_packets=42917, n_bytes=4836933, idle_age=0, hard_age=65534, priority=0 actions=NORMAL
    cookie=0x0, duration=276117.558s, table=0, n_packets=31, n_bytes=3976, idle_age=16859, hard_age=65534, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:1041,NORMAL
    cookie=0x0, duration=278694.945s, table=0, n_packets=7, n_bytes=799, idle_age=65534, hard_age=65534, priority=2,in_port=2 actions=drop

Pay attention to port 2 (phy-xapi3); it has two specific rules:

  • The higher-priority rule (priority=4) is matched first: if dl_vlan=1, it rewrites the VLAN tag to 1041 and then continues with NORMAL processing, which lets the flow through.
  • The lower-priority rule (priority=2) is matched otherwise, and it drops the flow. So will our flows be dropped? If a flow does not have dl_vlan=1, it will definitely be dropped.

Note:

(1) dl_vlan=1 is the VLAN tag ID, which corresponds to the port's tag in OVS.
(2) For a long time I didn't realize the problem was a missing tag on the newly launched instance's port, due to my limited understanding of OVS, so I didn't know to check the port's tag first. Next time we hit this problem, we can check that part first (see the sketch below).
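
As a quick check, you can query a port's tag directly from OVS in Dom0. This is only a sketch; vif16.0 is the port from this environment, so substitute your own. An empty result ([]) means no tag is set.

    # In Dom0: show the VLAN tag assigned to the instance's port
    ovs-vsctl get Port vif16.0 tag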

With this question in mind, I checked the newly launched instance's port information by running ovs-vsctl show in Dom0, which gives results like these:

Bridge "xapi5"
fail_mode: secure
Port "xapi5"
Interface "xapi5"
type: internal
Port "vif16.0"
Interface "vif16.0"
Port "int-xapi3"
Interface "int-xapi3"
type: patch
options: {peer="phy-xapi3"}

 

Port vif16.0 really doesn't have a tag with value 1, so its traffic will unconditionally be dropped.

Note: When a new instance is launched under XenServer, it gets a virtual network interface named vifX.0, and from OVS's point of view a corresponding port is created and bound to that interface.

Check why the tag is not set

The next step was to find out why the newly launched instance's port doesn't get a tag in OVS. There were no obvious findings for a newcomer like me; I just read the code over and over, made assumptions, tested, and so on. After trying various ideas, I did find that each time I restarted neutron-openvswitch-agent (q-agt) in the Compute Node, the VM could get an IP once I executed the ifup eth0 command. So there must be something that is done when q-agt restarts but not when a new instance is launched. With this information, I could focus my code inspection. Finally I found that, with XenServer, when a new instance is launched, q-agt does not detect the newly added port and therefore never adds a tag to it.

That left the question of why q-agt cannot detect port changes. We have a session from DomU to Dom0 to monitor port changes, and it seemed not to work as expected. With this in mind, I first ran the command ovsdb-client monitor Interface name,ofport in Dom0, which produces output like this:

   [root@rbobo ~]# ovsdb-client monitor Interface name,ofport
    row                                  action  name        ofport
    ------------------------------------ ------- ----------- ------
    54bcda61-de64-4d0e-a1c8-d339a2cabb50 initial "eth1"      1     
    987be636-b352-47a3-a570-8118b59c7bbc initial "xapi3"     65534 
    bb6a4f70-9f9c-4362-9397-010760f85a06 initial "xapi5"     65534 
    9ddff368-0be5-4f23-a03c-7940543d0ccc initial "vif15.2"   1     
    ba3af0f5-e8ed-4bdb-8c3d-67a638b81091 initial "phy-xapi3" 2     
    b57284cf-1dcd-4a10-bee1-42516afe2573 initial "eth0"      1     
    38a0dd37-173f-421c-9aba-3e03a5b8c900 initial "vif16.0"   2     
    58b83fe4-5f33-40f3-9dd9-d5d4b3f25981 initial "xenbr0"    65534 
    6c792964-3930-477c-bafa-5415259dea96 initial "int-xapi3" 1     
    caa52d63-59ed-4917-9ec3-1ea957470d5e initial "vif15.1"   1     
    d8805d05-bbd2-40cb-b219-eb9177c217dc initial "vif15.0"   6     
    8131dcd2-69ea-401a-a65e-4d4a17203e0c initial "xapi4"     65534 
    086e6e3a-1ab2-469f-9604-56bbd4c2fe86 initial "xenbr1"    65534 

Then I launched a new instance to see whether the OVS monitor would report the new port, and I did get output like this:

    row                                  action name      ofport
    ------------------------------------ ------ --------- ------
    249c424a-4c9a-47b4-991a-bded9ec63ada insert "vif17.0" []

    row                                  action name      ofport
    ------------------------------------ ------ --------- ------
    249c424a-4c9a-47b4-991a-bded9ec63ada old              []
                                         new    "vif17.0" 3

So the OVS monitor itself works well! There may instead be an error in the code that consumes the monitor's output. It seems I'm getting closer to the root cause :)

Finally, I found that with XenServer our current implementation cannot read the OVS monitor's output, so q-agt doesn't know a new port has been added. Luckily, the L2 agent provides another way of detecting port changes, so we can use that instead.

Setting minimize_polling=false in the L2 agent's configuration file ensures the agent does not rely on ovsdb-client monitor, which means the new port is detected and the tag gets added properly!
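
As a sketch, the setting lives in the [agent] section of the OVS agent's configuration file; the exact file name depends on your deployment (ml2_conf.ini or openvswitch_agent.ini are common):

    [agent]
    # Fall back to periodic polling instead of relying on ovsdb-client monitor
    minimize_polling = False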

In this case, that was all that was needed for the instance to get an IP address, and everything else worked normally. I hope the process I went through in debugging this problem will be beneficial to others.


Integrating XenServer, RDO and Neutron

XenServer is a great choice of hypervisor for OpenStack based clouds, but there is no native integration between it and Red Hat's RDO packages. This means that setting up an integrated environment using XenServer and RDO is more difficult than it should be. This blog post aims to resolve that by giving a method where CentOS can be set up easily to use XenServer as the hypervisor.

Environment

  • Hypervisor: XenServer: 6.5
  • Guest: CentOS 7.0
  • OpenStack: Liberty
  • Network: Neutron, ML2 plugin, OVS, VLAN

Install XenServer

The XenServer integration with OpenStack has some optimizations which mean that only EXT3 storage is supported. Make sure when installing your XenServer that you select "Optimized for XenDesktop" when prompted. Use XenCenter to check that the SR type is EXT3, as fixing it after creating the VMs will require deleting the VMs and starting again.
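
As a quick sanity check from dom0, you can list the storage repositories and their types; the local storage SR should show type ext:

    # In dom0: list storage repositories and their types
    xe sr-list params=name-label,type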

Install OpenStack VM

With XenServer, the Nova Compute service must run in a virtual machine on the hypervisor that it will be controlling. As we're using CentOS 7.0 for this environment, create a VM using the CentOS 7.0 template in XenCenter. If you want to copy and paste the scripts from the rest of the blog, use the name "CentOS_RDO" for this VM. Install the CentOS 7.0 VM, but shut it down before installing RDO.

Create network for OpenStack VM

In a single box environment, we need three networks: "Integration network", "External network", and "VM network". If you have appropriate networks for the above (e.g. a network that gives you external access), rename the existing networks to have the appropriate name-labels. Note that the helper script rdo_xenserver_helper.sh provided for some of the later steps in this blog relies on these specific name-labels, so if you choose not to use them, please also update the helper script.

You can do this via XenCenter or run the following commands in dom0:

    xe network-create name-label=openstack-int-network
    xe network-create name-label=openstack-ext-network
    xe network-create name-label=openstack-vm-network
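
You can confirm the networks were created and note their bridge names, which will be useful later when configuring the OVS agents:

    xe network-list params=name-label,bridge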

Create virtual network interfaces for OpenStack VM

This step requires the VM to be shut down, as it's modifying the network setup and the PV tools have not been installed in the guest.

    vm_uuid=$(xe vm-list name-label=CentOS_RDO minimal=true)
    vm_net_uuid=$(xe network-list name-label=openstack-vm-network minimal=true)
    next_device=$(xe vm-param-get uuid=$vm_uuid param-name=allowed-VIF-devices | cut -d';' -f1)
    vm_vif_uuid=$(xe vif-create device=$next_device network-uuid=$vm_net_uuid vm-uuid=$vm_uuid)
    xe vif-plug uuid=$vm_vif_uuid
    ext_net_uuid=$(xe network-list name-label=openstack-ext-network minimal=true)
    next_device=$(xe vm-param-get uuid=$vm_uuid param-name=allowed-VIF-devices | cut -d';' -f1)
    ext_vif_uuid=$(xe vif-create device=$next_device network-uuid=$ext_net_uuid vm-uuid=$vm_uuid)
    xe vif-plug uuid=$ext_vif_uuid

You can also choose to use the helper script to do this in dom0.

    source rdo_xenserver_helper.sh 
create_vif
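
Either way, you can verify that the new VIFs exist before booting the VM (a quick check; $vm_uuid is the VM's UUID used above):

    xe vif-list vm-uuid=$vm_uuid params=device,network-uuid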

Configure OpenStack VM/Hypervisor communications

Use the HIMN tool (a plugin for XenCenter) to add the host internal management network to the OpenStack VM. This effectively performs the following operations, which could also be performed manually in dom0 or via rdo_xenserver_helper.sh.

    source rdo_xenserver_helper.sh
create_himn

Note: If using the commands manually, they should be run when the OpenStack VM is shut down.

Set up DHCP on the HIMN network for the OpenStack VM, allowing the OpenStack VM to access its own hypervisor on the static address 169.254.0.1. Run the helper script in domU.

    source rdo_xenserver_helper.sh 
active_himn_interface
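
Once the HIMN interface is up, a quick sanity check from inside the OpenStack VM is to ping the hypervisor's static HIMN address:

    ping -c 3 169.254.0.1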

Install RDO

Follow the RDO Quickstart installation guide step by step; this post only points out the steps that you must pay particular attention to during installation.

Run Packstack to install OpenStack

Rather than running packstack immediately, we need to generate an answerfile so that we can tweak the configuration.

Generate answer file:

    packstack --gen-answer-file=

Install OpenStack services:

    packstack --answer-file=

These items in the answer file should be changed as below:

    CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vlan 
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vlan

These items in the answer file should be changed according to your environment:

    CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=

NOTE:

CONFIG_NEUTRON_ML2_VLAN_RANGES: physnet1 is the physical network name for VLAN provider and tenant networks; 1000:1050 is the range of VLAN tags on each physical network available for allocation to tenant networks.
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS: br-eth1 is the OVS bridge for the VM network; br-ex is the OVS bridge for the External network, which the neutron L3 agent uses for external traffic.
CONFIG_NEUTRON_OVS_BRIDGE_IFACES: eth1 is the OpenStack VM's NIC connected to the VM network; eth2 is the OpenStack VM's NIC connected to the External network.
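
Putting the notes together, the three answer-file entries might look like the following. These are example values only, not the exact values from the original environment; adjust the VLAN range, bridges, and interfaces to your own setup:

    CONFIG_NEUTRON_ML2_VLAN_RANGES=physnet1:1000:1050
    CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1
    CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eth1:eth1,br-ex:eth2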

Configure Nova and Neutron

Copy Nova and Neutron plugins to XenServer host.

    source rdo_xenserver_helper.sh 
install_dom0_plugins

Edit /etc/nova/nova.conf to switch the compute driver to XenServer.

    [DEFAULT] 
compute_driver=xenapi.XenAPIDriver

[xenserver]
connection_url=http://169.254.0.1
connection_username=root
connection_password=
vif_driver=nova.virt.xenapi.vif.XenAPIOpenVswitchDriver
ovs_int_bridge=

NOTE:

The ovs_int_bridge value above can be found from dom0:
    xe network-list name-label=openstack-int-network params=bridge
169.254.0.1 is dom0's address, which the OpenStack VM can reach via HIMN.
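
For example, if that command returns xapi3 (the integration bridge name used later in this post), the environment-specific line would read:

    ovs_int_bridge=xapi3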

Install XenAPI Python XML RPC lightweight bindings.

    yum install -y python-pip 
pip install xenapi
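
To sanity-check that the bindings were installed into the Python environment Nova uses (assuming the package is importable as XenAPI, as the PyPI xenapi package is):

    python -c "import XenAPI; print('XenAPI bindings available')"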

Configure Neutron

Edit /etc/neutron/rootwrap.conf to support using XenServer remotely.

    [xenapi] 
# XenAPI configuration is only required by the L2 agent if it is to
# target a XenServer/XCP compute host's dom0.
xenapi_connection_url=http://169.254.0.1
xenapi_connection_username=root
xenapi_connection_password=

Restart Nova and Neutron Services

    for svc in api cert conductor compute scheduler; do 
service openstack-nova-$svc restart;
done
service neutron-openvswitch-agent restart

Launch another neutron-openvswitch-agent to talk with dom0

XenServer has a separation of dom0 and domU, and all instances' VIFs are actually managed by dom0; their corresponding OVS ports are created in dom0. Thus, we should manually start another OVS agent which is in charge of these ports and talks to dom0 (refer to the xenserver_neutron picture).

Create ovs configuration file

    cp /etc/neutron/plugins/ml2/openvswitch_agent.ini /etc/neutron/plugins/ml2/openvswitch_agent.ini.dom0
    [ovs] 
integration_bridge = xapi3
bridge_mappings = physnet1:xapi2

[agent]
root_helper = neutron-rootwrap-xen-dom0 /etc/neutron/rootwrap.conf
root_helper_daemon =
minimize_polling = False

[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver

NOTE:

xapi3, the integration bridge, is xapiX in the graph; xapi2 is the VM network bridge, xapiY in the graph. Their names can be found with:
    xe network-list name-label=openstack-int-network params=bridge
    xe network-list name-label=openstack-vm-network params=bridge

Launch neutron-openvswitch-agent

    /usr/bin/python2 /usr/bin/neutron-openvswitch-agent \
--config-file /usr/share/neutron/neutron-dist.conf \
--config-file /etc/neutron/neutron.conf --config-file \
/etc/neutron/plugins/ml2/openvswitch_agent.ini.dom0 \
--config-dir /etc/neutron/conf.d/neutron-openvswitch-agent \
--log-file /var/log/neutron/openvswitch-agent.log.dom0 &

Replace the cirros guest with one set up to work for XenServer

    nova image-delete cirros
wget http://ca.downloads.xensource.com/OpenStack/cirros-0.3.4-x86_64-disk.vhd.tgz

glance image-create --name cirros --container-format ovf \
--disk-format vhd --property vm_mode=xen --visibility public \
--file cirros-0.3.4-x86_64-disk.vhd.tgz

Launch an instance and test its connectivity

    source keystonerc_demo

[root@localhost ~(keystone_demo)]# glance image-list
+--------------------------------------+--------+
| ID | Name |
+--------------------------------------+--------+
| 5c227c8e-3cfa-4368-963c-6ebc2f846ee1 | cirros |
+--------------------------------------+--------+

 

    [root@localhost ~(keystone_demo)]# neutron net-list
+--------------------------------------+---------+--------------------------------------------------+
| id | name | subnets |
+--------------------------------------+---------+--------------------------------------------------+
| 91c0f6ac-36f2-46fc-b075-6213a241fc2b | private | 3a4eebdc-6727-43e3-b5fe-8760d64c00fb 10.0.0.0/24 |
| 7ccf5c93-ca20-4962-b8bb-bff655e29788 | public | 4e023f19-dfdd-4d00-94cc-dbea59b31698 |
+--------------------------------------+---------+--------------------------------------------------+
    nova boot --flavor m1.tiny --image cirros --nic \
net-id=91c0f6ac-36f2-46fc-b075-6213a241fc2b demo-instance
    [root@localhost ~(keystone_demo)]# neutron floatingip-create public
    Created a new floatingip:
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | |
| floating_ip_address | 172.24.4.228 |
| floating_network_id | 7ccf5c93-ca20-4962-b8bb-bff655e29788 |
| id | 2f0e7c1e-07dc-4c7e-b9a6-64f312e7f693 |
| port_id | |
| router_id | |
| status | DOWN |
| tenant_id | 838ec33967ff4f659b808e4a593e7085 |
+---------------------+--------------------------------------+
    nova add-floating-ip demo-instance 172.24.4.228

After the above steps, we have successfully booted an instance with a floating IP; "nova list" will show the instances:

    [root@localhost ~(keystone_demo)]# nova list
+--------------------------------------+---------------+--------+------------+-------------+--------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------------+--------+------------+-------------+--------------------------------+
| ac82fcc8-1609-4d34-a4a7-80e5985433f7 | demo-inst1 | ACTIVE | - | Running | private=10.0.0.3, 172.24.4.227 |
| f302a03f-3761-48e6-a786-45b324182545 | demo-instance | ACTIVE | - | Running | private=10.0.0.4, 172.24.4.228 |
+--------------------------------------+---------------+--------+------------+-------------+--------------------------------+

Test connectivity via the floating IP: run "ping 172.24.4.228" on the OpenStack VM, and you should get output like:

    [root@localhost ~(keystone_demo)]# ping 172.24.4.228
    PING 172.24.4.228 (172.24.4.228) 56(84) bytes of data.
64 bytes from 172.24.4.228: icmp_seq=1 ttl=63 time=1.76 ms
64 bytes from 172.24.4.228: icmp_seq=2 ttl=63 time=0.666 ms
64 bytes from 172.24.4.228: icmp_seq=3 ttl=63 time=0.284 ms

Citrix Joins OpenStack Foundation

Some of you might have noticed that Citrix joined the OpenStack Foundation yesterday and may be wondering what this means for two key technologies I've been closely involved with: Apache CloudStack and XenServer. The first, and arguably most important, thing to note is that, as Steve Wilson has stated, we're embracing both OpenStack and CloudStack to help further innovation. Nand Mulchandani also highlights that a culture of "anyness" is a core part of Citrix. With all the noise in the market about the various IaaS cloud solutions, supporting user choice is an important point to be clear on. So with that as backdrop, what does this really mean?

The XenServer Perspective on OpenStack

As I mentioned in my blog about OpenStack Summit, I really want XenServer to be a first class citizen within OpenStack. I tried to further that objective through submission of presentations to OpenStack Summit, but if you look at the schedule you'll note that no XenServer related talks were accepted. That's unfortunate, and really speaks to the challenge we face within a community when we're not the obvious or default choice. Obviously we can raise our profile through contributions and simply showing up at OpenStack events, but there is also a pretty important and easy thing we can change.

When a vendor evaluates a technology, they look at the ecosystem around it. OpenStack technology has a ton of buzz. If you look on job boards, you'll see many postings for OpenStack positions. If you search for cloud technologies, key supporters of OpenStack will be listed. Importantly, when selecting a technology suite, you'll look at which vendors support their technology with that suite and put them on your short list. Until today, it was unclear whether Citrix actively supported the use of XenServer within OpenStack. Our joining the OpenStack Foundation is one way of signaling to those who prefer OpenStack that Citrix is supportive of their efforts. So if you've been quietly using XenServer in an OpenStack environment, I want to learn more about it. I want to learn what works, and where the pain points are so they might be addressed. If you've ever questioned whether XenServer can receive production support when used with OpenStack, the answer is yes, and here's a link to buy support (hard sell over)!

The XenServer Perspective on CloudStack

For those of you who have adopted XenServer for your CloudStack clouds, nothing has changed and you should feel nothing change. XenServer will remain a first class citizen in CloudStack, and we'll continue to improve all aspects of XenServer operation within CloudStack such that XenServer remains an obvious choice. You'll continue to see XenServer content proposed to CloudStack events, and I hope you'll continue to accept those talks. I promise to continue to work on cool things like the Packer work I presented at CloudStack Day Austin which showed a method to migrate legacy infrastructure running on XenServer to a CloudStack cloud powered by XenServer; all without the users even noticing the migration happened. My hope is that the OpenStack community will want some of those same cool things, but that will take time and can't be forced.

So in the end this really isn't a commentary about which cloud solution is better, but a case of allowing customer choice. OpenStack has mindshare, and it only makes sense for Citrix and its technology suite to have a seat at the table. With Citrix openly supporting its technologies when deployed with OpenStack, everyone has the freedom to choose which solution works best.     


XenServer at OpenStack Summit

It's coming up on time for OpenStack Summit Vancouver where OpenStack developers and administrators will come together to discuss what it means and takes to run a successful cloud based on OpenStack technologies. As in past Summits, there will be a realistic focus on KVM based deployments due to KVM, or more precisely libvirt, having "Group A" status within the compute driver test matrix. XenServer currently has "Group B" status, and when you note that the distinction between A and B really boils down to which can gate a commit, there is no logical reason why XenServer shouldn't be a more prevalent option.

Having XenServer be thought of as completely appropriate for OpenStack deployments is something I'm looking to increase, and I'm asking for your help. The OpenStack Summit organizers want to ensure the content matches the needs of the community. In order to help ensure this, they invite their community to vote on the potential merit of all proposals. This is pretty cool since it helps ensure that the audience gets what they want, but it also makes it a bit harder if you're not part of the "mainstream". That's where I reach out to you in the XenServer community. If you're interested in seeing XenServer have greater mindshare within OpenStack, then please vote for one or both of my submissions. If your personal preference is for another cloud solution, I hope that you agree with me that increasing our install base strengthens both our community and XenServer, and will still take the time to vote. Note that you may be required to create an account, and that voting closes on February 23rd.

Packaging GPU intensive applications for OpenStack

If you'd like to see the GPU capabilities of XenServer materialize within OpenStack, please vote for this session using this link: https://www.openstack.org/vote-vancouver/Presentation/packaging-gpu-intensive-applications-for-openstack. The session will encompass some of the Packer work I've been involved with, and also the GPU work XenServer is leading on with NVIDIA.

Avoiding the 1000 dollar VM in your first cloud

This session covers the paradigm shifts involved when an organization decides to move from traditional data center operations to "the cloud". Since this is a technology talk, it's not strictly XenServer oriented, but XenServer examples are present. To vote for this session, use this link: https://www.openstack.org/vote-vancouver/Presentation/avoiding-the-1000-dollar-vm-in-your-first-cloud

Thank you to everyone who decides to support this effort.


About XenServer

XenServer is the leading open source virtualization platform, powered by the Xen Project hypervisor and the XAPI toolstack. It is used in the world's largest clouds and enterprises.
 
Commercial support for XenServer is available from Citrix.