XLcloud Blog

Category: OpenStack (4 posts)

Oct 28 2013

Call for common work for GPU support in OpenStack and hypervisors

GPU support in OpenStack and the hypervisors will let high-performance clouds share the power of a single GPU between several VMs. ...

Jul 02 2013

We played Doom 3 at the first Rhone-Alpes OpenStack Meet-UP

You may rightfully wonder what on earth Doom 3 has to do with OpenStack. Well, it does, and this is what the XLcloud Project tried to demonstrate last week at the first Rhone-Alpes OpenStack Meet-up organized by Dave Neary. ...

Jun 28 2013

Devstack with GRE tunnels in Havana

This post is an update to the Devstack in a multi-node configuration tutorial.

The latest version of the Quantum/Neutron code has a cool feature that enables the OVS plugin to use VXLAN tunnels instead of GRE. If you want to test it, just make sure that you're running Open vSwitch >= 1.10.
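
A quick way to check which Open vSwitch version is installed is the --version flag of ovs-vsctl:

$ ovs-vsctl --version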

To select which encapsulation to use with devstack, new parameters have been added:

  • Q_SRV_EXTRA_OPTS
  • Q_AGENT_EXTRA_AGENT_OPTS
  • Q_AGENT_EXTRA_OVS_OPTS

Check the README for details.

This also means that, until Kyle's patch gets merged, you have to select the encapsulation method explicitly, as shown in the following localrc files.

localrc file of the controller node:

# Network settings
#
FLAT_INTERFACE=eth0
ENABLE_TENANT_TUNNELS=True
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=gre)
Q_AGENT_EXTRA_OVS_OPTS=(tenant_network_type=gre)
Q_SRV_EXTRA_OPTS=(tenant_network_type=gre)
Q_USE_NAMESPACE=True
Q_USE_SECGROUP=True

#
# Other parameters omitted for simplicity

localrc file of the compute node(s):

# Network settings
#
FLAT_INTERFACE=eth0
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=gre)
Q_AGENT_EXTRA_OVS_OPTS=(tenant_network_type=gre)
Q_USE_NAMESPACE=True
Q_USE_SECGROUP=True

#
# Other parameters omitted for simplicity
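
For the record, switching to VXLAN should simply be a matter of swapping the values in the same options (a sketch, assuming Open vSwitch >= 1.10 and that these options accept vxlan the same way they accept gre):

# Hypothetical VXLAN variant (untested here)
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=vxlan)
Q_AGENT_EXTRA_OVS_OPTS=(tenant_network_type=vxlan)
Q_SRV_EXTRA_OPTS=(tenant_network_type=vxlan)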

Happy devstacking!

Apr 08 2013

Devstack with Quantum in a multi-node configuration

This blog post will show you how to run devstack with Quantum and the Open vSwitch plugin in a multi-node deployment.

The OpenStack testbed will be composed of 2 nodes:

  • 1 controller node running Nova (including nova-compute) + Quantum + Glance + Keystone services.
  • 1 compute node running only the nova-compute service + the Quantum agent.

This tutorial has been tested on Ubuntu. It should be quite easy to adapt to other distros but, as usual, YMMV.

Setup of the nodes

I run the nodes as virtual machines using libvirt+KVM, but any other hypervisor would do (VirtualBox, for instance).

Let's define 2 networks in libvirt:

  • the management network that will be used to SSH to the nodes and for the management traffic between the OpenStack nodes.
  • the transport network that will be used only for the traffic between the virtual machines.

[Figure: devstack_quantum_setup.png, the two-node setup with the management and transport networks]

Here is the XML definition of the management network (NAT mode)

<network>
  <name>management</name>
  <uuid>0d2ef087-4bcc-6fd2-4e69-6e355d2ae9d1</uuid>
  <forward mode='nat'/>
  <bridge name='virbr2' stp='on' delay='0' />
  <mac address='52:54:00:FE:6D:7F'/>
  <ip address='192.168.1.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.1.128' end='192.168.1.254' />
    </dhcp>
  </ip>
</network>

And the transport network (isolated mode)

<network>
  <name>transport</name>
  <uuid>ff591b52-6352-9306-440f-4a7130f17232</uuid>
  <bridge name='virbr1' stp='on' delay='0' />
  <mac address='52:54:00:9D:60:D8'/>
</network>
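
If you prefer the command line over virt-manager for creating the networks, virsh should do it (assuming the XML above is saved as management.xml and transport.xml):

$ virsh net-define management.xml && virsh net-start management
$ virsh net-define transport.xml && virsh net-start transport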

With virsh or virt-manager, create one virtual machine connected to the 2 networks: the first interface (eth0) is connected to management and the second (eth1) to transport.

Boot the VM with an Ubuntu Precise ISO image and proceed with the OS installation.

Once the machine has rebooted, install the openvswitch + git packages

$ sudo apt-get install openvswitch-switch openvswitch-datapath-dkms git

Add this snippet to the /etc/network/interfaces file

auto eth1
iface eth1 inet manual
up ip link set $IFACE up
down ip link set $IFACE down
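
ifup should bring eth1 up right away, without waiting for a reboot:

$ sudo ifup eth1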

Once this is done, create a copy of that VM for the second node and boot both nodes.
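
One way to make that copy is with virt-clone, which also generates fresh MAC addresses for the clone (node1 and node2 are hypothetical VM names, adjust to yours):

$ sudo virt-clone --original node1 --name node2 --auto-clone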

Run devstack

Install devstack on both nodes

$ git clone git://github.com/openstack-dev/devstack.git

Edit the localrc file in the devstack directory on the controller

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=tokentoken

SCHEDULER=nova.scheduler.simple.SimpleScheduler
LOGFILE=/opt/stack/data/stack.log
SCREEN_LOGDIR=/opt/stack/data/log
RECLONE=yes

# Network settings
FLAT_INTERFACE=eth1
# Use VLAN to segregate the virtual networks
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=1000:1999
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-eth1

# Use Quantum instead of nova-network
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service quantum

Edit the localrc file in the devstack directory on the compute node

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=tokentoken

ENABLED_SERVICES=n-cpu,rabbit,quantum,q-agt

LOGFILE=/opt/stack/data/stack.log
SCREEN_LOGDIR=/opt/stack/data/log
RECLONE=yes

HOST_IP=192.168.1.82 # replace this with the IP address of the compute node

# Openstack services running on controller node
SERVICE_HOST=192.168.1.94 # replace this with the IP address of the controller node
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292

# Network settings
FLAT_INTERFACE=eth1
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=1000:1999
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-eth1

On both nodes, configure the OVS bridge that will be used for the inter-VM traffic

$ sudo ovs-vsctl add-br br-eth1
$ sudo ovs-vsctl add-port br-eth1 eth1
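
You can verify the result with ovs-vsctl show, which should list br-eth1 with eth1 attached as a port:

$ sudo ovs-vsctl show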

You can now start devstack on both nodes

$ ./stack.sh

Be patient: it can take up to 10 minutes for the controller node to be ready...
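
When stack.sh completes, devstack leaves the OpenStack services running in a screen session named stack; attaching to it is a quick sanity check (Ctrl-a d detaches without stopping anything):

$ screen -ls
$ screen -x stack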

Test it

Connect to the controller node and set up the OpenStack credentials (we'll use the admin account because some operations require administrator rights)

$ . openrc admin

devstack has created 2 networks:

  • private: private virtual network for the demo tenant.
  • public: virtual network for the floating IP addresses.

$ quantum net-list

+--------------------------------------+---------+------------------------------------------------------+
| id                                   | name    | subnets                                              |
+--------------------------------------+---------+------------------------------------------------------+
| 781a4073-d908-4128-b0c1-aac6547f0ff9 | private | 98d520c5-715b-4bfa-bbb9-fbdf6a3267e1 10.0.0.0/24     |
| e3f35b9e-962a-40d6-8c73-baec18c070ae | public  | 7cbeff79-ed64-4800-90bd-7a943c84a148                 |
+--------------------------------------+---------+------------------------------------------------------+
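
If you want more detail on the private subnet (allocation pool, gateway, DHCP flag), quantum subnet-show on the subnet id listed above should give it:

$ quantum subnet-show 98d520c5-715b-4bfa-bbb9-fbdf6a3267e1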

Import your public key so you can log in to the virtual instances via SSH

$ nova keypair-add --pub-key ~/.id_rsa.pub myKey

Add 2 rules to the default security group to allow ping and SSH access to the VMs from any IP address

$ quantum security-group-rule-create --protocol icmp --direction ingress --remote-ip-prefix 0.0.0.0/0 default
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | 61d0bd9f-a4f4-49f3-be50-48c53807e2f2 |
| port_range_max    |                                      |
| port_range_min    |                                      |
| protocol          | icmp                                 |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | 8cb9ee3c-609e-463f-8246-408d9e5b9cad |
| tenant_id         | 9b0f588ee12948ceaf8a7fcc7eaab53e     |
+-------------------+--------------------------------------+

$ quantum security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress --remote-ip-prefix 0.0.0.0/0 default
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | 9abc5bae-018f-4669-9a17-5d5ed1ce5fa0 |
| port_range_max    | 22                                   |
| port_range_min    | 22                                   |
| protocol          | tcp                                  |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | 8cb9ee3c-609e-463f-8246-408d9e5b9cad |
| tenant_id         | 9b0f588ee12948ceaf8a7fcc7eaab53e     |
+-------------------+--------------------------------------+
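
To double-check that both rules made it into the default security group, the quantum client can list them:

$ quantum security-group-rule-list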

Boot a VM attached to the private network on the compute node.

$ nova boot --image cirros-0.3.1-x86_64-uec --flavor m1.micro --availability-zone nova:compute --nic net-id=781a4073-d908-4128-b0c1-aac6547f0ff9 --key-name myKey vm1

Boot a VM attached to the private network on the controller node

$ nova boot --image cirros-0.3.1-x86_64-uec --flavor m1.micro --availability-zone nova:controller --nic net-id=781a4073-d908-4128-b0c1-aac6547f0ff9 --key-name myKey vm2

After a few seconds, the VMs should be up and running

$ nova list
+--------------------------------------+------+--------+------------------+
| ID                                   | Name | Status | Networks         |
+--------------------------------------+------+--------+------------------+
| ebf2e0db-f3a6-4fa0-9f2f-b200ddd0da5b | vm1  | ACTIVE | private=10.0.0.3 |
| a341b43c-28f8-46f9-9aeb-3c207048bb5c | vm2  | ACTIVE | private=10.0.0.4 |
+--------------------------------------+------+--------+------------------+
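
Since we are running as admin, nova show also reports the OS-EXT-SRV-ATTR fields, a quick way to confirm that vm1 really landed on the compute node and vm2 on the controller:

$ nova show vm1 | grep OS-EXT-SRV-ATTR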

Our VMs are connected to the 10.0.0.0/24 network and received their IP addresses from a DHCP server that runs on the controller and is managed by the DHCP agent. For proper isolation, that DHCP server runs in a separate network namespace.

To test connectivity to vm1 and vm2, run ping from the network namespace where the DHCP server is running

$ ip netns list # Note that the namespace id is derived from the network UUID
qdhcp-781a4073-d908-4128-b0c1-aac6547f0ff9
qrouter-7377bf5c-0409-4f6d-bb51-7c512fb77631

$ sudo ip netns exec qdhcp-781a4073-d908-4128-b0c1-aac6547f0ff9 ping -c 1 10.0.0.3
PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
64 bytes from 10.0.0.3: icmp_req=1 ttl=64 time=9.40 ms

--- 10.0.0.3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 9.401/9.401/9.401/0.000 ms

$ sudo ip netns exec qdhcp-781a4073-d908-4128-b0c1-aac6547f0ff9 ping -c 1 10.0.0.4
PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
64 bytes from 10.0.0.4: icmp_req=1 ttl=64 time=7.70 ms

--- 10.0.0.4 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 7.701/7.701/7.701/0.000 ms

Similarly, we can SSH into vm2 and ping vm1

$ sudo ip netns exec qdhcp-781a4073-d908-4128-b0c1-aac6547f0ff9 ssh -l cirros -i ./test.pem 10.0.0.4 ping -c 1 10.0.0.3
PING 10.0.0.3 (10.0.0.3): 56 data bytes
64 bytes from 10.0.0.3: seq=0 ttl=64 time=0.792 ms

--- 10.0.0.3 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.792/0.792/0.792 ms

That's it for now!

