XLcloud Blog News Feeds


Nov 05 2014

Public launch & quotes from partners

Here is a list of quotes provided by all XLcloud project members.

Jun 02 2014

XLcloud Management Service version 1.0.0 released!

The XLcloud team is proud to announce the first public release of XLcloud Management Service (XMS), version 1.0.0.


XLcloud Management Service gives users the ability to manage the XLcloud platform. XMS works atop an OpenStack infrastructure, introducing Platform-as-a-Service (PaaS) capabilities to the OpenStack IaaS. Its main features include:
- support for complex stack topologies backed by Heat stack templates (which means support for AWS-compatible Heat resources)
- stack lifecycle management, including (but not limited to) automatic and manual layer scaling
- stack suspend/resume operations
- automated middleware and software installation and configuration (throughout the entire stack lifecycle)
- precise network topology management with support for logical separation of stack layers
- the possibility of scheduling bare-metal reservations and running stacks on dedicated hardware
- catalogs of predefined blueprints and cookbooks that greatly simplify the stack composition process
- advanced identity and access management entirely based on open standards (such as SAML 2.0 for cross-domain SSO or OAuth 2.0 with fine-grained entitlements for REST API authorization).
Please visit this page for more information and details about XMS, provided by Tomek Adamczewski of AMG.net.

May 27 2014

The hen, the egg and the open source

When we are coding or debugging, open source can feel like an old companion. It is true that open source was born a few decades ago, but only a few. Proprietary software, on the other hand, has a longer history, because it has been supported from the beginning by the companies that own it. Open source has a completely different state of mind: it is a shared effort between individuals and companies that makes the work lighter for each participant and more efficient for all.

But companies still have to do business, even with open source, and here comes a dilemma: should a company keep part of its software proprietary, go fully open source and sell services, or do both? Keeping part of the code proprietary implies keeping that part compatible with the open source part at every release. That means people involved in development, testing, validation, support and so on, and it has a cost. Staying fully open source makes it easier to upgrade from one release to the next, and the resources can be put into developing the common code, the services and the support of customers.

OpenStack is an interesting example because releases come twice a year, and each new version brings new functionality, hence new code, as well as significant refactoring of the previous version. So if you keep part of the code as your own, you have to put in a lot of effort to keep it working with the open part. If, on the contrary, you stay on the common track, you can focus your efforts on developing the parts of the common code you are interested in. In the end, the choice between keeping the code fully open source or not can be seen as not so important.

The point is that using open source makes you an actor in the community, and the spirit of open source is that when you use it, it is only fair to contribute back. Companies that do not play that way are not only unfair to the community, they are also ineffective, because they do not maintain their expertise.

Mar 06 2014

HPC Cloud enablement using XenServer, OpenStack and NVIDIA GRID GPUs – XLcloud

Following the first blog post published on February 7th on the Citrix blog, we are glad to relay this second post, written by Rachel Berry on March 6th. Please click here to read the full post.

"A few weeks my colleague Bob, blogged here about how GPU-passthrough on XenServer under OpenStack had been achieved by the xlcloud project. This is a delightful demonstration of Citrix’s open technology stance. Although Citrix has heavily invested in CloudStack, over the last few years we have remained active in OpenStack development. You can find out more about building products using XenServer as the hypervisor using both OpenStack and CloudStack, here.

XLcloud – a fascinating project

XLcloud is an open sourced project, with sound financial backing ($Millions) and technological commitment from some heavyweight companies including Bull SAS.

The XLcloud project strives to establish the demonstration of a High Performance Cloud Computing (HPCC) platform based on OpenStack that is designed to run a representative set of compute intensive workloads, including more specifically interactive games, interactive simulations and 3D graphics. XLcloud is a three-year long collaborative project funded by the French FSN (Fonds national pour la Société Numérique) programme. It’s one of those great projects where state funding has enabled both commercial and academic organisations to collaborate to define large scale technologies. They have some great names involved in the consortium (see here) with a wealth of experience in networking, hosting and graphics. ...".

Feb 27 2014

Running an OpenGL application on a GPU-accelerated Nova Instance (Part 2)

I ended Part 1 with the promise that I would show you a real 3D application running in a GPU-accelerated instance.

And so, here it is: screencast.

In this screencast you will see the bootstrap of an instance in devstack that has a GPU attached to it. From within the instance, I run an OpenGL benchmark called Heaven from Unigine. The screencast shows, thanks to a Ganglia monitoring dashboard, that the GPU load is real. The software at work in this demo comprises the NVIDIA GRID GPU driver, TurboVNC, VirtualGL, the Unigine benchmark and Ganglia to visualize the ongoing workload.

As the cherry on the cake, the VM is deployed with Heat and Chef Solo using a template you can download from [1].

I hope you will enjoy the Unigine video. It's pretty cool.

Feb 25 2014

How we plan to manage autoscaling using the new notification alarming service of Ceilometer

In this post, I'd like to describe how we plan to use the new alarming capabilities offered in Heat and Ceilometer to be notified of stack state changes resulting from an autoscaling operation. Indeed, with Icehouse, it will be possible to specify a new type of alarm whereby you can associate a user-land webhook with an autoscaling notification.

There are three different types of autoscaling notifications you will be able to subscribe to.

  • orchestration.autoscaling.start
  • orchestration.autoscaling.error
  • orchestration.autoscaling.end

The first two notifications are self-explanatory. The third one, orchestration.autoscaling.end, is sent by Heat when an auto-scaling-group resize has completed successfully. More specifically, it is sent when the state of the (hidden) stack associated with an autoscaling group has effectively transitioned from UPDATE_IN_PROGRESS to UPDATE_COMPLETE.

The Ceilometer blueprint which introduces the feature in Icehouse is here.

We tested it, and it seems to work fine as shown in the screen scraping below.

The CLI looks like this:

ceilometer --debug alarm-notification-create  --name foo --enabled True --alarm-action "http://localhost:9998?action=UP" --notification-type  "orchestration.autoscaling.end" -q "capacity>0"

Then the curl equivalent:

curl -i -X POST -H 'X-Auth-Token: a-very-long-string' -H 'Content-Type: application/json' -H 'Accept: application/json' -H 'User-Agent: python-ceilometerclient' -d '{"alarm_actions": ["http://localhost:9998?action=UP"], "name": "foo", "notification_rule": {"query": [{"field": "capacity", "type": "", "value": "0", "op": "gt"}], "period": 0, "notification_type": "orchestration.autoscaling.end"}, "enabled": true, "repeat_actions": false, "type": "notification"}'

And the callback handling:

nc -l 9998

POST /?action=UP HTTP/1.1
Host: localhost:9998
Content-Length: 1650
Accept-Encoding: gzip, deflate, compress
Accept: */*
User-Agent: python-requests/2.2.1 CPython/2.7.3 Linux/3.2.0-48-virtual

{"current": "alarm", "alarm_id": "e7dafd2d-18a3-4c9d-a4af-efe927007ae6", "reason": "Transition to alarm from insufficient data due to notification matching the defined condition for alarm  foo.end with type orchestration.autoscaling.start and period 0", "reason_data": {"_context_request_id": "req-480768ed-c5a2-46f6-b720-8ac2542e3eb8", "event_type": "orchestration.autoscaling.start", "_context_auth_token": null, "_context_user_id": null, "payload": {"state_reason": "Stack create completed successfully", "adjustment": 1, "user_id": "admin", "stack_identity": "arn:openstack:heat::6db81240677b4326b94a595c0159baa5:stacks/AS4/5eef9488-5274-4305-bedb-91f5ed45cdd6", "stack_name": "AS4", "tenant_id": "6db81240677b4326b94a595c0159baa5", "adjustment_type": "ChangeInCapacity", "create_at": "2014-02-19T10:35:59Z", "groupname": "AS4-ASGroup-nyywzf4x5hif", "state": "CREATE_COMPLETE", "capacity": 1, "message": "Start resizing the group AS4-ASGroup-nyywzf4x5hif", "project_id": null}, "_context_username": "admin", "_context_show_deleted": false, "_context_trust_id": null, "priority": "INFO", "_context_is_admin": false, "_context_user": "admin", "publisher_id": "orchestration.ds-swann-precise-node-s3fbwjntypxv", "message_id": "738be905-1ec3-47e3-811a-ab7975426567", "_context_roles": [], "_context_auth_url": "http://172.16.0.46:5000/v2.0", "timestamp": "2014-02-19 10:42:23.960329", "_unique_id": "a32ff8a1a8144532b6312ef36790acec", "_context_tenant_id": "6db81240677b4326b94a595c0159baa5", "_context_password": "password", "_context_trustor_user_id": null, "_context_aws_creds": null, "_context_tenant": "demo"}, "previous": "insufficient data"}

At first glance, it may seem like a minor feature, but it's not. Hence this post. For us, it is a significant stride toward closing the implementation gap we used to have with the integrated lifecycle management operations we want to support for the clusters we deploy on our platform. To help with the explanation, I sketched a diagram that shows how we handle the deployment orchestration and configuration management automation workflow (which I will call contextualization for short) that takes place when an autoscaling condition occurs. The use case of choice is the remote rendering cluster that we already used in some cool cloud gaming demos.

Figure 1: Remote Rendering Cluster contextualization workflow upon autoscaling (RRVC auto-scaling workflow)

The XLcloud Management Service (XMS) sits on top of OpenStack. It is responsible for supporting the seamless integration between resource deployment orchestration and configuration management automation. Autoscaling is just one example of a state-changing condition that may occur in the platform. There are other state-changing conditions, such as deploying a new application onto the cluster or upgrading its software, that we handle using the same contextualization mechanism. Note that a cluster, as we call it, is nothing more than a relatively complex multi-tiered Heat stack whose lifecycle management operations are handled by XMS throughout its lifespan.

In (1) the deployment of the cluster is initiated by XMS, which in turn delegates the deployment orchestration to Heat. The cluster is created by submitting a master template which itself references embedded templates we call layers. Also, not shown here, a layer can benefit from interesting capabilities such as being attached to a specific subnet. Layers can be chosen from a catalog. They are used as blueprints of purpose-built instances to compose a given stack. The remote rendering cluster is therefore a stack composed of layers, including in particular an auto-scaling-group layer made of GPU-accelerated rendering node instances. They are all created and configured using the same parameters and set of Chef recipes. There are two types of alarm resources we specify in the rendering nodes layer template.

  • The OS::Ceilometer::Alarm resource type, introduced in Havana, which allows an alarm to be associated with an auto-scaling-group policy
  • The OS::Ceilometer::Notification resource type, which will allow an alarm to be associated with a notification.

Note that OS::Ceilometer::Notification is a new resource type proposal. It doesn't exist yet. It is intended to declaratively represent an alarm that is triggered by Ceilometer when a notification matching certain criteria is received; in our particular use case, when Ceilometer receives the orchestration.autoscaling.end notification that Heat sends when an auto-scaling-group resize has completed successfully. The alarm specification allows us to distinguish between scale-up (capacity > 0) and scale-down (capacity < 0).

Here is an example of how it would be used:

autoscaling-alarm-up:
  Type: OS::Ceilometer::Notification
  Properties:
    description: Send an alarm when Ceilometer receives a scale up notification
    notification_type: orchestration.autoscaling.end
    capacity: '0'
    comparison_operator: gt
    alarm_actions:
    - { a user-land webhook URL... }
    matching_metadata: {'metadata.user_metadata.groupName': {'Ref': 'compute-nodes-layer'}}
autoscaling-alarm-down:
  Type: OS::Ceilometer::Notification
  Properties:
    description: Send an alarm when Ceilometer receives a scale down notification
    notification_type: orchestration.autoscaling.end
    capacity: '0'
    comparison_operator: lt
    alarm_actions:
    - { a user-land webhook URL... }
    matching_metadata: {'metadata.user_metadata.groupName': {'Ref': 'compute-nodes-layer'}}

In (2) Heat creates these two alarms through the Ceilometer Alarming Service API.

In (3) all the instances of the cluster execute their initial setup recipes. The role of the initial setup is to bring the cluster into a state in which it can be remotely managed by XMS. That is, download all the cookbooks from their respective repositories (resolving their dependencies along the way), and install the MCollective Agent, Chef Solo and the rendering engine middleware. During the setup phase, the metadata associated with the stack are exposed as ohai facts through the MCollective Agent. Certain ohai facts, such as the stack id, will be used as MCollective filters to selectively reach a particular instance, a layer or the entire cluster.
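For example, once the agents are up, the reachable nodes of a given stack can be discovered with MCollective's fact filters. This is only a sketch: stack_id is the fact we expose, and the stack id value below is simply the one from the payload shown earlier.

# list the nodes that belong to the stack
mco find --with-fact stack_id=5eef9488-5274-4305-bedb-91f5ed45cdd6
# show the distribution of the stack_id fact across all reachable nodes
mco facts stack_id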

In (4) a workload is generated against the cluster by gamers who want to play. A cloud gaming session load balancer running in the Virtual Cluster Agent takes those requests and dispatches them across the auto-scaling-group of the cluster. Gaming sessions are dispatched according to their processing requirements, which may vary quite a lot depending on the game being played and the viewing resolution.

In (5) a gmond daemon, which runs on every GPU-accelerated instance, uses a specific Ganglia GPU module to monitor the GPU(s) attached to the rendering instances via PCI passthrough. In another layer of the cluster, a Ganglia Collector, which runs gmetad, collects the GPU usage metrics and passes them to a Ganglia pollster we developed for that cluster, which in turn pushes them (after some local processing) as Ceilometer samples. You can observe that we have chosen not to use the cfn-push-stats helper within the monitored instances, relying instead on the Ganglia monitoring framework and the Ceilometer API. A direct benefit of this is that we get a nice Ganglia monitoring dashboard.
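The samples pushed by the pollster are ordinary Ceilometer samples. For illustration, the CLI equivalent of what the pollster does for one GPU metric would look something like the following sketch (the meter name and the resource id are just examples, not the actual names we use):

ceilometer sample-create --resource-id <rendering-instance-uuid> \
  --meter-name gpu.temperature --meter-type gauge \
  --meter-unit C --sample-volume 78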

In (6) the Alarm Service of Ceilometer detects that a resource usage alarm condition caused by the current workload is met. We found, for example, that an increase in GPU temperature is very representative of the ongoing GPU load. As a result, Ceilometer calls the webhook in the Auto Scaling Service of Heat that was defined in the OS::Ceilometer::Alarm resource, which in turn initiates a scale-up operation.

In (7) Heat spawns one or several new instances in the auto-scaling-group of the cluster.

In (8) the new instance(s) execute the initial setup as above. Once the setup is complete, the auto-scaling-group enters the UPDATE_COMPLETE state, which makes the Auto Scaling Service of Heat emit an orchestration.autoscaling.end notification.

In (9) the Alarm Service of Ceilometer detects that an autoscaling alarm condition is met. Ceilometer calls the webhook in XMS that was defined in the 'autoscaling-alarm-up' resource of the template.

In (10) XMS makes an MCollective RPC call directing the instances of the cluster (except those not concerned by the contextualization) to execute the recipes associated with an autoscaling event, which we refer to in the template as the 'configure' recipes.
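On the wire, that RPC call would look roughly like the sketch below; the agent name (chefsolo), the action (converge) and the recipes argument are illustrative, not the actual names used by XMS:

# run the 'configure' recipes on the nodes of the stack concerned by the event
mco rpc chefsolo converge recipes=configure \
  --with-fact stack_id=5eef9488-5274-4305-bedb-91f5ed45cdd6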

In (11) the load balancer can now dispatch new incoming gaming sessions to the newly provisioned instance(s).

Note that the same workflow would roughly apply for a scale-down notification.

Do not hesitate to leave a note if you have a comment or suggestion to make.

Feb 10 2014

A GPU in your instance with Xen hypervisor

This is just a link to Running an OpenGL application on a GPU-accelerated Nova Instance (Part 1)

NOTE: We use this indirection because I didn't notice the dot at the end of the original URL. So, to make the URL reachable without a trailing dot without breaking everything, I added this ugly entry.

Feb 07 2014

Accelerated GPU with XenServer and OpenStack

We are very proud that XLcloud was mentioned on Citrix's blog on February 7th, 2014.

Please find below an extract of this blog post, written by Bob Ball. To see the entire post on the Citrix blog, click here.

"I’m thrilled to have been assisting the folk over at Bull SAS who have been working on a fascinating project called XLcloud.

XLcloud is a French collaboration project between a number of organisations aiming to create a reference architecture for a High Performance Cloud Computing system based on OpenStack. Of particular interest to Citrix is the requirement to use XenServer as the hypervisor due to its stable support for GPU passthrough. I’m very pleased that the ability to add a GPU to your instance under OpenStack is now up for review, ...".

Feb 03 2014

Running an OpenGL application on a GPU-accelerated Nova Instance (Part 1)

Have you ever dreamed of running graphic apps in OpenStack? Now, it's possible!

About a month ago we published a blueprint [1] to enable the support of PCI-passthrough in the XenAPI driver of Nova [2]. Our primary objective was to enable GPU-accelerated instances but we nonetheless scoped the blueprint with the intent to support "any" kind of PCI device. Since then, we published two patches [3] [4] that you can readily try using the trunk version of Nova. I would like to say that this work couldn't have been done without the help of the OpenStack and Xen communities.

In Part 1 of this post, I will go through step-by-step instructions showing how to boot a Nova instance that has direct access to a GPU under Xen virtualization. In our particular setup we used an NVIDIA K2 graphics card, but it should work equally well for other NVIDIA GPUs, like the K520 or M2070Q, which we also booted successfully in our lab.

First you need a working devstack in a domU. To do this, you must install XenServer 6.2 on the machine that has the GPU installed, then boot a domU with Ubuntu Saucy (other distributions should work as well) and install an all-in-one devstack in it. When you boot the dom0, you need to prepare the device for PCI passthrough. You do this by adding "pciback.hide=(87:00.0)(88:00.0)" to the dom0 Linux kernel command line. This will assign the pciback driver to the devices with BDF 87:00.0 and 88:00.0. Information about PCI passthrough with Xen is available on the Xen wiki [5].
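After rebooting dom0, you can quickly check that the hide worked: the hidden devices should now be bound to the pciback driver. A quick sanity check (the BDFs are the ones from our setup):

  # ls /sys/bus/pci/drivers/pciback/
  (you should see 0000:87:00.0 and 0000:88:00.0 listed among the entries)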

The next step is to download the code for the PCI passthrough.

  # cd /opt/stack/nova
  # git review -d 67125

This will download the two patches that are needed and will switch to the correct git branch. Before restarting the nova services you need to configure the nova scheduler and the compute node to be able to use PCI passthrough. For further information, check the wiki [6].

On the compute node you need to select which devices are eligible for passthrough. In our case we added the K2 cards. You do this by adding those devices to a list in /etc/nova/nova.conf:

  # cat /etc/nova/nova.conf
  ...
  pci_passthrough_whitelist = [{"vendor_id":"10de","product_id":"11bf"}]
  ...

The vendor ID and the product ID of the K2 GPU are respectively 10de and 11bf. Thus we need to configure the scheduler as follows:

  # cat /etc/nova/nova.conf
  ...
  pci_alias={"vendor_id":"10de","product_id":"11bf","name":"k2"}
  scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
  scheduler_available_filters=nova.scheduler.filters.all_filters
  scheduler_available_filters=nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
  scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,Compute
  ...

The pci_alias is used to match the extra parameters of a flavor with the selected PCI device. Hence, you need to configure a flavor that will be associated with the PCI devices that you want to attach:

  # nova flavor-key  m1.small set "pci_passthrough:alias"="k2:1"

Last but not least, you need to copy the plugin files from /opt/stack/nova/plugins/xenserver/xenapi/etc/xapi.d/ of your devstack installation into the /etc/xapi.d/plugins/ directory of dom0. Overlooking this step would most probably result in plugin errors.
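Assuming the devstack domU can reach dom0 over SSH, this boils down to something like the sketch below (the dom0 hostname is a placeholder, and the plugins are assumed to live in the standard plugins/ subdirectory of the Nova tree):

  # scp /opt/stack/nova/plugins/xenserver/xenapi/etc/xapi.d/plugins/* root@dom0:/etc/xapi.d/plugins/
  # ssh root@dom0 'chmod a+x /etc/xapi.d/plugins/*'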

Restart the nova services. On your n-cpu screen you should see your PCI resources reported as available, as shown below:

    2014-01-30 19:20:48.340 DEBUG nova.compute.resource_tracker [-] Hypervisor: assignable PCI devices: [{"status": "available", "dev_id": "pci_87:00.0", "product_id": "11bf", "dev_type": "type-PCI", "vendor_id": "10de", "label": "label_10de_11bf", "address": "87:00.0"}, {"status": "available", "dev_id": "pci_88:00.0", "product_id": "11bf", "dev_type": "type-PCI", "vendor_id": "10de", "label": "label_10de_11bf", "address": "88:00.0"}] from (pid=10444) _report_hypervisor_resource_view /opt/stack/nova/nova/compute/resource_tracker.py:429

If that is not the case, check that your nova.conf file is correctly configured as described above.

Now, when you boot an instance using the flavor m1.small, one K2 will be attached to this instance. Note that the resource tracker keeps track of the PCI devices attached to your instances, so creating a new GPU-accelerated instance will return an error once those resources are exhausted on all the compute nodes.

Now, everything should be ready to boot a GPU-accelerated instance:

  # nova boot --flavor m1.small --image centos6 --key-name mykey testvm1
  xlcloud@devstackvm1:~$ nova list
  +--------------+---------+--------+------------+-------------+--------------------+
  | ID           | Name    | Status | Task State | Power State | Networks           |
  +--------------+---------+--------+------------+-------------+--------------------+
  | 92f4...f081a | testvm1 | ACTIVE | -          | Running     | private=10.11.12.2 |
  +--------------+---------+--------+------------+-------------+--------------------+

Log into your instance to check which PCI devices are available:

  xlcloud@devstackvm1:~$ ssh  -l cloud-user 10.11.12.2
  Last login: Thu Jan 30 18:26:32 2014 from 10.11.12.1
  [cloud-user@testvm1 ~]$ lspci 
  00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
  00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
  00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
  00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
  00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 01)
  00:02.0 VGA compatible controller: Cirrus Logic GD 5446
  00:03.0 SCSI storage controller: XenSource, Inc. Xen Platform Device (rev 01)
  00:05.0 VGA compatible controller: NVIDIA Corporation GK104GL [GRID K2] (rev a1)

As you can see in the output above, my instance is attached to the K2 GPU. The next step, running an actual graphics application (since in the end that's what we want to do), requires installing the drivers of your graphics card manufacturer in your GPU-accelerated instance (in this case, the NVIDIA driver for the K2).

  [cloud-user@testvm1 ~]$ nvidia-smi 
  Mon Feb  3 08:51:42 2014       
  +------------------------------------------------------+                       
  | NVIDIA-SMI 331.38     Driver Version: 331.38         |                       
  |-------------------------------+----------------------+----------------------+
  | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
  | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
  |===============================+======================+======================|
  |   0  GRID K2             Off  | 0000:00:05.0     Off |                  Off |
  | N/A   32C    P0    37W / 117W |      9MiB /  4095MiB |      0%      Default |
  +-------------------------------+----------------------+----------------------+
                                                                                   
  +-----------------------------------------------------------------------------+
  | Compute processes:                                               GPU Memory |
  |  GPU       PID  Process name                                     Usage      |
  |=============================================================================|
  |  No running compute processes found                                         |
  +-----------------------------------------------------------------------------+

Okay, that's probably enough for today. In Part 2 of this post, I will show you how to set up a GPU-accelerated Nova instance to run an OpenGL application like the Unigine benchmark [7].

Guillaume Thouvenin XLcloud R&D

Sep 11 2013

Baremetal Driver and the Devstack.

You may know that the Baremetal driver is quite experimental and planned to be replaced by the Ironic project. That said, recent improvements from the community have made the baremetal driver still very interesting to test. In order to get the latest updates, I tried to configure a Devstack for provisioning real baremetal hosts. Here are my notes from the install, which I hope can help some of you.

Configure your devstack

$ git clone https://github.com/openstack-dev/devstack.git
$ cd devstack

Edit your localrc as below. Make sure to change the network and baremetal settings to match your own environment, of course.

# Credentials
ADMIN_PASSWORD=yourpassword
MYSQL_PASSWORD=yourpassword
RABBIT_PASSWORD=yourpassword
SERVICE_PASSWORD=yourpassword
SERVICE_TOKEN=yourtoken

# Logging
LOGFILE=/opt/stack/data/stack.log
 
# Services
disable_service n-net
enable_service q-svc
enable_service q-agt
disable_service q-dhcp
disable_service q-l3
disable_service q-meta
enable_service neutron
ENABLED_SERVICES+=,baremetal
ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
 
#Network 
HOST_IP=10.0.0.30
FIXED_RANGE=10.0.0.0/24
FIXED_NETWORK_SIZE=256
NETWORK_GATEWAY=10.0.0.1
PUBLIC_INTERFACE=eth0
 
#Neutron settings if FlatNetwork for Baremetal
PHYSICAL_NETWORK=ctlplane
OVS_PHYSICAL_BRIDGE=br-ctlplane
ALLOCATION_POOL="start=10.0.0.31,end=10.0.0.35"
 
# Baremetal Network settings
BM_DNSMASQ_IFACE=br-ctlplane
BM_DNSMASQ_RANGE=10.0.0.31,10.0.0.35

# Global baremetal settings for real nodes
BM_POWER_MANAGER=nova.virt.baremetal.ipmi.IPMI
VIRT_DRIVER=baremetal
BM_DNSMASQ_DNS=8.8.8.8

# Change at least BM_FIRST_MAC to match the MAC address of the baremetal node to deploy
BM_FIRST_MAC=AA:BB:CC:DD:EE:FF
BM_SECOND_MAC=11:22:33:44:55:66

# IPMI credentials for the baremetal node to deploy
BM_PM_ADDR=10.0.1.102
BM_PM_USER=yourlogin
BM_PM_PASS=yourpass

# Make sure to match your Devstack hostname
BM_HOSTNAME=bm-devstack

Start the stack.

$ ./stack.sh

Prevent dnsmasq from serving other leases

As there is probably another DHCP server in the same subnet, we need to make sure the local dnsmasq won't serve other PXE or DHCP requests. One workaround is to deploy 75-filter-bootps-cronjob and filter-bootps from TripleO, which use iptables to blacklist all DHCP requests except the ones set up by the baremetal driver.
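The idea behind those elements is roughly the following sketch (hand-written, not the actual TripleO scripts): only answer bootp/DHCP requests coming from the MACs of the baremetal nodes we manage, and drop everything else reaching the provisioning bridge.

# rough equivalent of what filter-bootps sets up (MAC taken from BM_FIRST_MAC above)
$ sudo iptables -N BOOTPS
$ sudo iptables -I INPUT -i br-ctlplane -p udp --dport 67 -j BOOTPS
$ sudo iptables -A BOOTPS -m mac --mac-source AA:BB:CC:DD:EE:FF -j RETURN
$ sudo iptables -A BOOTPS -j DROP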

Create a single Ubuntu image with a few additions and add it to Glance

As Devstack only provides a CirrOS image, there is much benefit in deploying a custom Ubuntu image. Thanks to diskimage-builder, again provided by the TripleO folks (thanks by the way!), we can add as many elements as we want.

 
$ git clone https://github.com/openstack/diskimage-builder.git
$ git clone https://github.com/openstack/tripleo-image-elements.git
$ export ELEMENTS_PATH=~/tripleo-image-elements/elements
$ diskimage-builder/bin/disk-image-create -u base local-config stackuser heat-cfntools -o ubuntu_xlcloud
$ diskimage-builder/bin/disk-image-get-kernel -d ./ -o ubuntu_xlcloud -i $(pwd)/ubuntu_xlcloud.qcow2
$ glance image-create --name ubuntu_xlcloud-vmlinuz --public --disk-format aki < ubuntu_xlcloud-vmlinuz
$ glance image-create --name ubuntu_xlcloud-initrd --public --disk-format ari < ubuntu_xlcloud-initrd
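# (sketch) capture the UUIDs of the two images just registered so they can be referenced below;
# adjust the parsing to your glance client output if needed
$ export UBUNTU_XLCLOUD_VMLINUZ_UUID=$(glance image-show ubuntu_xlcloud-vmlinuz | awk '/ id /{print $4}')
$ export UBUNTU_XLCLOUD_INITRD_UUID=$(glance image-show ubuntu_xlcloud-initrd | awk '/ id /{print $4}')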
$ glance image-create --name ubuntu_xlcloud --public --disk-format qcow2 --container-format bare \
--property kernel_id=$UBUNTU_XLCLOUD_VMLINUZ_UUID --property ramdisk_id=$UBUNTU_XLCLOUD_INITRD_UUID < ubuntu_xlcloud.qcow2

Boot the stack!

Of course, we could provide a Heat template, but let's keep it simple for now:

$ nova keypair-add --pub-key ~/.ssh/id_rsa.pub sylvain
$ nova boot --flavor bm.small --image ubuntu_xlcloud --key-name sylvain mynewhost

Sep 11 2013

XLcloud Blog and News has changed home

This is the new home of the XLcloud Project Blog and News. We had to move because our former blog server had some RSS feed problems, limitations with regard to spam detection, and it wasn't possible to post comments either. Those impediments are now fixed. We are looking forward to reading your comments and suggestions. Our previous OpenStack-related posts are still accessible on xlcloud.org.
