HOWTO: Automated Openstack deployment on Debian GNU/Linux wheezy with razor

This HOWTO provides guidelines to automate the installation and setup of a multi-node Openstack-Compute (aka Nova) environment with razor.

* THIS HOWTO IS UNDER CONSTRUCTION, DON'T USE IT YET *

This environment will include 4 hosts (more if additional compute nodes are added):

  • 1 puppetmaster/razor node:
    • puppet: pubnet@eth0=10.142.6.200 / privnet@eth1=192.168.100.200
  • 2 compute nodes or more:
    • compute1: pubnet@eth0=10.142.6.31 / privnet@eth1=192.168.100.31
    • compute2: pubnet@eth0=10.142.6.32 / privnet@eth1=192.168.100.32
    • computeX: pubnet@eth0=10.142.6.3X / privnet@eth1=192.168.100.3X
  • 1 proxy/controller node:
    • controller: pubnet@eth0=10.142.6.100 / privnet@eth1=192.168.100.100

Choices:

  • Virtualization technology: kvm/libvirt
  • Networking mode: VlanManager + multi_host

Services on puppet node:

  • razor
  • tftpd
  • dhcpd
  • puppet master
  • puppet agent

On compute* nodes:

  • puppet agent
  • nova-compute
  • nova-network
  • nova-api (metadata only)

On controller node:

  • puppet agent
  • mysql database
  • keystone
  • glance (local storage)
  • nova-api
  • nova-scheduler
  • nova-novncproxy

DOCUMENT CONVENTIONS

In formatted blocks:

  • command lines starting with a # must be run as root.

  • values between < and > must be replaced by your values.

PREREQUISITES

Things to prepare beforehand:

  • Machines:
    • They should have two network interfaces to ensure security: if only one interface is used, the private part is more exposed to attacks coming from the public part.
      • a "public" one to communicate with the outside world
      • a "private" one for the guest VLans
  • Disk space: there *must* be at least as much free disk space as RAM in / on the controller host (see https://labs.enovance.com/issues/374 for the rationale); a quick check is sketched after this list.

  • Network:
    • public network
    • private network. If the machines are not on a LAN, create one with OpenVPN.

    • fixed IP range for guests
    • number of networks for guests
    • network size for guests
    • public "floating" IPs (optional)
  • Base distribution:
    • Debian GNU/Linux squeeze (will be upgraded to wheezy)
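
To check the disk space prerequisite on the controller host, the free space in / can be compared to the installed RAM with standard tools (a minimal sketch, not part of the original procedure):

free_disk_mb=$(df -Pm / | awk 'NR==2{print $4}')
ram_mb=$(free -m | awk '/^Mem:/{print $2}')
echo "free disk in /: ${free_disk_mb} MB, RAM: ${ram_mb} MB"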

IMPORTANT

This HOWTO is valid for the OpenStack Nova, Glance and Keystone packages labelled 2012.1, currently available in Debian Wheezy and might need some adjustments with later versions.

Upgrade to Wheezy

Edit /etc/apt/sources.list to read:

deb http://ftp.fr.debian.org/debian/ wheezy main
deb-src http://ftp.fr.debian.org/debian/ wheezy main

Then :

# apt-get update
# apt-get dist-upgrade -y
# reboot

DNS notes

The puppet host should be resolvable by DNS. If you run a DNS server, add an entry for <puppet.hostname>. If you don't, just add the following to /etc/hosts:

192.168.100.200 <puppet.hostname>

and install dnsmasq:

apt-get install -y dnsmasq

Installation

nodejs

As long as razor depends on /usr/bin/node instead of /usr/bin/nodejs, install the transitional package providing the alias:

apt-get install nodejs-legacy
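
To confirm the alias is in place after the install (a simple check, not part of the original HOWTO):

ls -l /usr/bin/node
node --version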

Puppet

Install puppet agent and master on the puppet node:

# apt-get install -y puppet augeas-tools puppetmaster sqlite3 libsqlite3-ruby libactiverecord-ruby git mysql-server mysql-client rubygems libmysql-ruby curl

Note: leave the mysql password empty (if you do not, you will have to adapt the rest of the HOWTO)

Ensure ruby 1.9 is *not* used by default

update-alternatives --set gem  /usr/bin/gem1.8
update-alternatives --set ruby  /usr/bin/ruby1.8
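
To confirm the switch took effect (a quick check, not part of the original instructions):

ruby --version     # should report ruby 1.8.x
ls -l /etc/alternatives/ruby /etc/alternatives/gem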

Configure Puppet

On the puppet node:

  • Enable storeconfigs and configure the database

augtool << EOT
set /files/etc/puppet/puppet.conf/master/storeconfigs true
set /files/etc/puppet/puppet.conf/master/dbadapter mysql
set /files/etc/puppet/puppet.conf/master/dbname puppet
set /files/etc/puppet/puppet.conf/master/dbuser puppet
set /files/etc/puppet/puppet.conf/master/dbpassword password
set /files/etc/puppet/puppet.conf/master/dbserver localhost
set /files/etc/puppet/puppet.conf/master/dbsocket /var/run/mysqld/mysqld.sock
set /files/etc/puppet/puppet.conf/agent/pluginsync true
set /files/etc/puppet/puppet.conf/agent/server <puppet.hostname>
save
EOT

Note: make sure <puppet.hostname> is either the hostname(1) of the puppet server or one of the entries returned when doing a reverse DNS lookup on the primary interface (i.e. eth0 in most cases).
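
A quick way to check this on the puppet node (a minimal sketch, using the placeholder from this HOWTO):

hostname -f                         # the hostname of the puppet server
getent hosts <puppet.hostname>      # should resolve to the address of the primary interface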

  • Create the mysql database:

mysqladmin create puppet
mysql -e "grant all on puppet.* to 'puppet'@'localhost' identified by 'password';"
  • Setup autosigning:

echo '*' > /etc/puppet/autosign.conf
  • Create a dummy site manifest

cat > /etc/puppet/manifests/site.pp << EOT
node default {
  notify { "Hey ! It works !": }
}
EOT
  • Restart puppetmaster:

    service puppetmaster restart
  • Test the puppet agent:

puppet agent -vt

There should be no errors and you should see a message saying "Hey ! It works !".
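
Optionally, check that storeconfigs populated the mysql database created earlier (a rough check; the exact table names depend on the puppet version):

mysql puppet -e 'show tables;'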

Install the openstack modules

Get the modules

cd /etc/puppet/modules
git clone git://git.labs.enovance.com/puppet.git .
git checkout openstack
git submodule init
git submodule update

Add razor modules

# Remove conflicting modules
git rm -rf  sudo
sed -i '/nodejs/d' .gitmodules
git rm --cached nodejs
rm -rf nodejs
rm -rf .git/modules/nodejs

# Add the new ones
git submodule add https://github.com/puppetlabs/puppetlabs-mongodb.git mongodb
git submodule add https://github.com/puppetlabs/puppetlabs-dhcp dhcp
git submodule add https://github.com/puppetlabs/puppetlabs-tftp.git tftp
git submodule add https://github.com/puppetlabs/puppetlabs-apt.git apt
git submodule add https://github.com/puppetlabs/puppetlabs-ruby ruby
git submodule add https://github.com/puppetlabs/puppetlabs-nodejs nodejs
git submodule add https://github.com/saz/puppet-sudo.git sudo
git submodule add https://github.com/puppetlabs/puppetlabs-razor razor
git submodule add https://github.com/attachmentgenie/puppet-module-network.git network



(cd sudo && git checkout v2.0.0)
(cd mongodb && git checkout 0.1.0)
(cd dhcp && git checkout 1.1.0)
(cd tftp && git checkout 0.2.1)
(cd apt && git checkout 0.0.4)
(cd ruby && git checkout 0.0.2)
(cd nodejs && git checkout 0.2.0)
cd razor 
git checkout ba9503d805d788d44291b1f3fbf142c044bd2e02 # the master commit tested for the howto

Import all the puppet providers needed to fully control razor with puppet, taken from the future 0.2.2 version
(see pull request https://github.com/puppetlabs/puppetlabs-razor/pull/48):

curl https://github.com/puppetlabs/puppetlabs-razor/pull/48.patch | git am

This pull request is not yet finished; this additional patch may also be needed (as of 1 October 2012):

patch -p1 <<'EOF'
diff --git a/lib/puppet/type/rz_broker.rb b/lib/puppet/type/rz_broker.rb
index f5647ba..023fbc9 100644
--- a/lib/puppet/type/rz_broker.rb
+++ b/lib/puppet/type/rz_broker.rb
@@ -14,6 +14,11 @@ EOT
     newvalues(/\w+/)
   end
 
+  newproperty(:description) do
+    desc "The broker description."
+    newvalues(/\w+/)
+  end
+
   newproperty(:plugin) do
     desc "The broker plugin."
     newvalues(/\w+/)
EOF

Setup the gateway (only if your private network is not connected to the internet)

If your private network is not connected to the internet, the razor server can be used as a gateway by:

  • changing the gateway in the dhcp::pool to 192.168.100.200
  • running the following commands and also adding them to /etc/rc.local:

sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s '192.168.100.0/24' ! -d '192.168.100.0/24' -j MASQUERADE
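
To verify the gateway configuration (a quick check of the settings applied above):

sysctl net.ipv4.ip_forward          # should report net.ipv4.ip_forward = 1
iptables -t nat -S POSTROUTING      # should list the MASQUERADE rule added above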

Nodejs/npm installation

Note: this paragraph should be removed in the future, once the nodejs and npm packages are properly installed by puppet.

As of 06/09/2012, the npm package is only available in debian sid, but is actually not installable
(the node-semver dependency needs to be updated, see http://lists.alioth.debian.org/pipermail/pkg-javascript-devel/2012-August/004071.html).

To allow puppet to install npm from the sid archive successfully, do the following:

apt-get -y install libc-ares-dev libev-dev libssl-dev libv8-dev libv8-3.8.9.20 libev4 libc-ares2 zlib1g-dev libssl-doc

wget https://github.com/sileht/vagrant-razor-env/raw/master/nodejs_0.6.19%7Edfsg1-2_amd64.deb
wget https://github.com/sileht/vagrant-razor-env/raw/master/nodejs-dev_0.6.19%7Edfsg1-2_amd64.deb
dpkg -i nodejs_0.6.19~dfsg1-2_amd64.deb nodejs-dev_0.6.19~dfsg1-2_amd64.deb  
echo 'nodejs hold' | dpkg --set-selections
echo 'nodejs-dev hold' | dpkg --set-selections
apt-get install -f -y

Apply this patch to the puppet nodejs module (/etc/puppet/modules/nodejs):

diff --git a/manifests/init.pp b/manifests/init.pp
index ee90e54..8eb8b20 100644
--- a/manifests/init.pp
+++ b/manifests/init.pp
@@ -57,7 +57,7 @@ class nodejs(
 
   package { 'nodejs':
     name    => $nodejs::params::node_pkg,
-    ensure  => present,
+    ensure  => held,
     require => Anchor['nodejs::repo']
   }
 
@@ -70,7 +70,7 @@ class nodejs(
   if $dev_package and $nodejs::params::dev_pkg {
     package { 'nodejs-dev':
       name    => $nodejs::params::dev_pkg,
-      ensure  => present,
+      ensure  => held,
       require => Anchor['nodejs::repo']
     }
   }

Build the manifest

The general idea is to instruct razor to configure one machine as the OpenStack controller (i.e. running all services) and all the others as compute nodes (i.e. can only run VM). This is done by providing the MAC address of the machine designated to be the controller. All machines with a different MAC address are compute nodes.

For <puppet.hostname>, add the following to the site manifest /etc/puppet/manifests/site.pp.

node "<puppet.hostname>" {
        # dhcpd
    class { 'dhcp':
            dnsdomain   => [
                    'razor.lan',
                    '100.168.192.in-addr.arpa',
                    ],
            nameservers => ['8.8.8.8'],
            interfaces  => ['eth1'],
            ntpservers  => ['us.pool.ntp.org'],
            pxeserver   => '192.168.100.200',
            pxefilename => 'pxelinux.0',
    }
    dhcp::pool{ 'razor.lan':
            network => '192.168.100.0',
            mask    => '255.255.255.0',
            range   => '192.168.100.180 192.168.100.199',
            gateway => '192.168.100.1',
    }

    # razor 
    class { 'sudo':
            config_file_replace => false,
    }
    class { 'razor':
            address => $ipaddress_eth1,
            mk_name => "rz_mk_prod-image.0.9.0.5.iso",
            mk_source => "https://github.com/downloads/puppetlabs/Razor-Microkernel/rz_mk_prod-image.0.9.0.5.iso",
    }

    # the provider is quite recent: for now, no error is reported if a field is wrong or if the name does not end in .iso
    rz_image { "debian-wheezy-netboot-amd64.iso": 
            ensure  => present,
            type    => 'os',  
            version => '7.0b1',  
            source  => "http://ftp.debian.org/debian/dists/wheezy/main/installer-amd64/current/images/netboot/mini.iso",
    }

    rz_model { 'controller_model':
      ensure      => present,
      description => 'Controller Wheezy Model',
      image       => 'debian-wheezy-netboot-amd64.iso',
      metadata    => {'domainname' => 'razor.lan', 'hostname_prefix' => 'controller', 'root_password' => 'password'},
      template    => 'debian_wheezy',
    }

    rz_model { 'compute_model':
      ensure      => present,
      description => 'Compute Wheezy Model',
      image       => 'debian-wheezy-netboot-amd64.iso',
      metadata    => {'domainname' => 'razor.lan', 'hostname_prefix' => 'compute', 'root_password' => 'password'},
      template    => 'debian_wheezy',
    }

    rz_broker { 'puppet_broker':
      ensure      => present,
      plugin      => 'puppet',
      description => 'puppet',
      servers     => [ '<puppet.hostname>' ]
    }

    # a tag to identify my <controller.hostname>
    rz_tag { "mac_eth1_of_the_controller":
        tag_label   => "mac_eth1_of_the_controller",
        tag_matcher => [ {
                        'key'     => 'mk_hw_nic1_serial',
                        'compare' => 'equal',
                        'value'   => "08:00:27:64:9b:22",
                    } ],
    }

    # a tag to identify my <compute?.hostname>
    rz_tag { "not_mac_eth1_of_the_controller":
        tag_label   => "not_mac_eth1_of_the_controller",
        tag_matcher => [ {
                        'key'     => 'mk_hw_nic1_serial',
                        'compare' => 'equal',
                        'value'   => "08:00:27:64:9b:22",
                        'inverse' => "yes",
                    } ],
    }
   
    rz_policy { 'controller_policy':
      ensure  => present,
      broker  => 'puppet_broker',
      model   => 'controller_model',
      enabled => 'true',
      tags    => ['mac_eth1_of_the_controller'],
      template => 'linux_deploy',
      maximum => 1,
    }

    rz_policy { 'compute_policy':
      ensure  => present,
      broker  => 'puppet_broker',
      model   => 'compute_model',
      enabled => 'true',
      tags    => ['not_mac_eth1_of_the_controller'],
      template => 'linux_deploy',
      maximum => 3,
    }

}
  • Replace <puppet.hostname> with the actual hostname of the puppet node

  • Replace 08:00:27:64:9b:22 with the MAC address of the interface carrying the 192.168.100.X IP on <controller.hostname> (eth1).
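
One way to read that MAC address on <controller.hostname> (a small sketch; adjust the interface name if it differs):

cat /sys/class/net/eth1/address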

NB: as of 6/09/2012, the puppet provider for razor only supports add/remove, *not* update.

Razor installation and configuration

Simply run:

puppet agent -vt

Update the "not_the_mac_eth1_of_the_controller" (the puppet module don't handle the field invert correctly)

razor tag matcher update $(razor tag get $(razor tag | awk '/not_/{print $3}') | awk '/mk_hw_nic1_serial/{print $1}') invert=yes

Rebuild the ipxe configuration (the one provided by puppet is broken: check https://github.com/puppetlabs/puppetlabs-razor/issues/43 and the pull request https://github.com/puppetlabs/puppetlabs-razor/pull/63 for updates)

razor config ipxe > /srv/tftp/razor.ipxe 
/etc/init.d/xinetd restart

Note: this should be done each time you rerun puppet on the razor server
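
To make that easier, the commands can be grouped into a small helper script on the razor server (purely a convenience sketch, not part of the original HOWTO; the path /root/razor-rerun.sh is arbitrary):

cat > /root/razor-rerun.sh << 'EOT'
#!/bin/sh
# rerun puppet, then regenerate the ipxe configuration served over tftp
puppet agent -vt
razor config ipxe > /srv/tftp/razor.ipxe
/etc/init.d/xinetd restart
EOT
chmod +x /root/razor-rerun.sh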

Note: because the razor puppet module can't update razor resources, if the razor configuration needs to be updated, do:

razor policy remove all
razor tag remove all
razor broker remove all
razor model remove all
razor image get
# remove every image listed, with: razor image remove UUID
razor active_model get all
# remove every active model listed
puppet agent -vt
razor config ipxe > /srv/tftp/razor.ipxe
/etc/init.d/xinetd restart

Configure preseed

To choose the Debian GNU/Linux mirror to be used for deployment (for instance ftp.fr.debian.org):

perl -pi -e 's/ftp.us.debian.org/ftp.fr.debian.org/' /opt/razor/lib/project_razor/model/debian/wheezy/preseed.erb

To change the disk (for instance /dev/vda):

perl -pi -e 's|/dev/sda|/dev/vda|' /opt/razor/lib/project_razor/model/debian/wheezy/preseed.erb
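
To confirm the substitutions were applied (a quick check of the same file):

grep -nE 'ftp\.fr\.debian\.org|/dev/vda' /opt/razor/lib/project_razor/model/debian/wheezy/preseed.erb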

Check razor is working

# razor policy | grep -c eth1
2

If the result is 2 (one controller policy and one compute policy), razor has been successfully installed and configured.

Add openstack configuration to the manifest

Copy the example manifest into the one used by the puppetmaster

cat /etc/puppet/modules/examples/openstack.pp >> /etc/puppet/manifests/site.pp

Edit /etc/puppet/manifests/site.pp to change the following lines:

The actual private IPs of the controller and compute hosts (see the beginning of this HOWTO):

$db_host = '192.168.100.100' # IP address of the host on which the database will be installed (the controller for instance)
$db_allowed_hosts = ['192.168.100.%'] # IP addresses for all compute hosts : they need access to the database

The FQDN of the host providing the API server, which must be the same as the <controller.hostname> used above:

# The public fqdn of the controller host
$public_server = '<controller.hostname>'

# The internal fqdn of the controller host
$api_server = '<controller.hostname>'

If the interface used for the private network is not eth1, replace eth1 with the actual interface on which the 192.168.100.0/24 IPs are found (for instance br0).
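
To find out which interface actually carries the private addresses (a minimal sketch):

ip -o -4 addr show | grep '192\.168\.100\.'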

By default, the network is configured via DHCP by razor; change it to a static configuration with puppet by adding this class to the site manifest:

class openstack_network {
        class { "network::interfaces":
          interfaces => {
            "eth0" => {
              "method" => "static",
              "address" => $ipaddress_eth0,
              "netmask" => "255.255.255.0",
            },
            "eth1" => {
              "method" => "static",
              "address" => $ipaddress_eth1,
              "netmask" => "255.255.255.0",
              "gateway" => "192.168.100.1"
            },
          },
          auto => ["eth0", "eth1"],
        }
}

And add the network configuration at the top of each kind of node definition:

node /controller/ inherits controller {
        $ipaddress_eth0 = "10.142.6.100"
        $ipaddress_eth1 = "192.168.100.100"
        $ipaddress = $ipaddress_eth0

        exec{"killall dhclient": onlyif => "pidof dhclient" }
        class {"openstack_network": }

}

node /compute/ inherits compute {
        $nodeid = split($hostname, 'compute')
        $ipaddress_eth0 = "10.142.6.3$nodeid"
        $ipaddress_eth1 = "192.168.100.3$nodeid"
        $ipaddress = $ipaddress_eth0

        exec{"killall dhclient": onlyif => "pidof dhclient" }
        class {"openstack_network": }

}

Install the openstack cluster

Boot the other nodes via PXE and wait a bit: the nodes will appear and the installation will start.

Razor starts to set up these nodes; watch the progress with:

# watch -n5 -- 'razor node ; echo ; razor active_model'
    Every 5,0s: razor node ; echo ; razor active_model

Discovered Nodes
         UUID           Last Checkin  Status                   Tags
5HoZYY8oZMwbT6gUFzORzj  3.2 min       B       [not_mac_eth1_of_the_controller,memsize_1015MiB,virtualbox_vm,nics_2]  
7fvQgY2Y2gT7YGeWH89ZMN  1.1 min       B       [mac_eth1_of_the_controller,memsize_1511MiB,virtualbox_vm,nics_2]  

Active Models:
      Label          State           Node UUID            Broker      Bind #           UUID           
compute_policy     preinstall  5HoZYY8oZMwbT6gUFzORzj  puppet_broker  1       5ODuPdXzKLXSb42Rk5Ktpf  
controller_policy  preinstall  7fvQgY2Y2gT7YGeWH89ZMN  puppet_broker  1       7ioI7dvlg9DZCCoCH8sqAZ  

The installation is finished when the state is broker_success.

As of 9 August 2012, the broker is a bit broken and the last reported state can be wrong: razor may report broker_failed even though the puppet agent ran successfully.

Checking if it really works

On the controller node:

The required services are advertised in the database:

root@controller:~# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth controller1                          nova             enabled    :-)   2012-05-03 08:56:29
nova-scheduler   controller1                          nova             enabled    :-)   2012-05-03 08:56:31
nova-cert        controller1                          nova             enabled    :-)   2012-05-03 08:56:32
nova-compute     compute1                             nova             enabled    :-)   2012-05-03 08:56:50
nova-network     compute1                             nova             enabled    :-)   2012-05-03 08:56:49
nova-compute     compute2                             nova             enabled    :-)   2012-05-03 08:56:47
nova-network     compute2                             nova             enabled    :-)   2012-05-03 08:56:48

A file named 'openrc.sh' has been created in /root on the controller node. Source it and check that the nova API works:

root@controller1:~# source openrc.sh
root@controller1:~# nova list
+----+------+--------+----------+
| ID | Name | Status | Networks |
+----+------+--------+----------+
+----+------+--------+----------+
root@controller1:~# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+
| ID |    Name   | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+-----------+------+-------+-------------+
| 1  | m1.tiny   | 512       | 0    | 0         |      | 1     | 1.0         |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         |
+----+-----------+-----------+------+-----------+------+-------+-------------+
root@controller:~# nova image-list
+----+------+--------+--------+
| ID | Name | Status | Server |
+----+------+--------+--------+
+----+------+--------+--------+

The openstack cluster is quite empty and useless like this; let's upload an image into glance:

root@controller1:~# wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
root@controller1:~# glance add name="CirrOS 0.3" disk_format=qcow2 container_format=ovf < cirros-0.3.0-x86_64-disk.img
Uploading image 'CirrOS 0.3'
================================================================[100%] 7.73M/s, ETA  0h  0m  0s
Added new image with ID: 949bbc5c-e6fa-4ec3-91cb-65cbb6123c54
root@controller:~# glance index
ID                                   Name         Disk Format   Container Format  Size
------------------------------------ ------------ ------------- ----------------- ----------
949bbc5c-e6fa-4ec3-91cb-65cbb6123c54 CirrOS 0.3   qcow2         ovf                  9761280

Does it show up in nova?

root@controller1:~# nova image-list
+--------------------------------------+-----------------+--------+--------+
|                  ID                  |       Name      | Status | Server |
+--------------------------------------+-----------------+--------+--------+
| 949bbc5c-e6fa-4ec3-91cb-65cbb6123c54 | CirrOS 0.3      | ACTIVE |        |
+--------------------------------------+-----------------+--------+--------+

The nova network puppet module creates a private network for the VMs to use. Check that it has been created:

root@controller1:~# nova-manage network list
id    IPv4                IPv6            start address   DNS1            DNS2            VlanID          project         uuid
1     169.254.200.0/24    None            169.254.200.3   None            None            2000            None            71681e09-c072-4281-b5b4-37f26ddc97bf

Add some floating (public) IPs (choose an IP range addressable on your network):

root@controller:~# nova-manage floating create --ip_range 10.142.6.224/27
root@controller:~# nova-manage floating list
None  10.142.6.225  None  nova  eth0
None  10.142.6.226  None  nova  eth0

Now create a keypair (for ssh access) and save the output in a file

root@controller1:~# nova keypair-add test_keypair > test_keypair.pem
root@controller1:~# chmod 600 test_keypair.pem

Boot an instance and get the console log

root@controller1:~# nova boot --image 949bbc5c-e6fa-4ec3-91cb-65cbb6123c54 --flavor 1 --key_name test_keypair FirstTest --poll
+-------------------------------------+--------------------------------------+
|               Property              |                Value                 |
+-------------------------------------+--------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                               |
| OS-EXT-SRV-ATTR:host                | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                 |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000001                    |
| OS-EXT-STS:power_state              | 0                                    |
| OS-EXT-STS:task_state               | scheduling                           |
| OS-EXT-STS:vm_state                 | building                             |
| accessIPv4                          |                                      |
| accessIPv6                          |                                      |
| adminPass                           | yab49fMqVHJf                         |
| config_drive                        |                                      |
| created                             | 2012-05-03T10:09:00Z                 |
| flavor                              | m1.tiny                              |
| hostId                              |                                      |
| id                                  | 06dd6129-f94a-488d-9670-7171491899e5 |
| image                               | CirrOS 0.3                           |
| key_name                            | meh                                  |
| metadata                            | {}                                   |
| name                                | FirstTest                            |
| progress                            | 0                                    |
| status                              | BUILD                                |
| tenant_id                           | d1c9085272d542eda98f7e08a1a779d6     |
| updated                             | 2012-05-03T10:09:00Z                 |
| user_id                             | cd04222b81004af5b0ff20c840fb629e     |
+-------------------------------------+--------------------------------------+
root@controller1:~# nova console-log FirstTest

Allocate a floating IP and associate it with the instance:

root@controller1:~# nova floating-ip-create
+--------------+-------------+----------+------+
|      Ip      | Instance Id | Fixed Ip | Pool |
+--------------+-------------+----------+------+
| 10.142.6.225 | None        | None     | nova |
+--------------+-------------+----------+------+
root@controller:~# nova add-floating-ip FirstTest 10.142.6.225

Update the rules for the default security group (allow icmp and ssh):

root@controller1:~# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port |  IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
root@controller:~# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port |  IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

We should now be able to ping the instance:

root@controller1:~# ping -c 1 10.142.6.225
PING 10.142.6.225 (10.142.6.225) 56(84) bytes of data.
64 bytes from 10.142.6.225: icmp_req=1 ttl=63 time=0.626 ms
 --- 10.142.6.225 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms

And ssh into it with the keypair we created before:

root@controller1:~# ssh -i test_keypair.pem cirros@10.142.6.225
$ uname -a
Linux cirros 3.0.0-12-virtual #20-Ubuntu SMP Fri Oct 7 18:19:02 UTC 2011 x86_64 GNU/Linux
$ exit
Connection to 10.142.6.225 closed.

Et voilà !