HOWTO: Automated OpenStack deployment on Debian GNU/Linux wheezy with razor

This HOWTO aims to provide guidelines to automate the installation and setup of a multi-node OpenStack-Compute (aka Nova) environment with razor.

* THIS HOWTO IS UNDER CONSTRUCTION, DON'T USE IT YET *

This environment will include 4 hosts:

Choices:

Services on puppet node:

On compute* nodes:

On controller node:

DOCUMENT CONVENTIONS

In formatted blocks:

PREREQUISITES

Things to prepare beforehand:

IMPORTANT

This HOWTO is valid for the OpenStack Nova, Glance and Keystone packages labelled 2012.1, currently available in Debian Wheezy, and might need some adjustments with later versions.

Upgrade to Wheezy

Edit /etc/apt/sources.list to read:

deb http://ftp.fr.debian.org/debian/ wheezy main
deb-src http://ftp.fr.debian.org/debian/ wheezy main

deb http://security.debian.org/ wheezy/updates main
deb-src http://security.debian.org/ wheezy/updates main

# squeeze-updates, previously known as 'volatile'
deb http://ftp.fr.debian.org/debian/ squeeze-updates main
deb-src http://ftp.fr.debian.org/debian/ squeeze-updates main

Then:

# apt-get update
# apt-get dist-upgrade -y
# reboot

Installation

Puppet

Install puppet agent and master on the puppet node:

# apt-get install -y puppet augeas-tools puppetmaster sqlite3 libsqlite3-ruby libactiverecord-ruby git 

Install the razor dependencies (this needs to be fixed in the razor puppet module):

# apt-get install -y ruby1.9.1-dev ruby-daemons rubygems
# gem install uuid

Ensure ruby 1.9 is *not* used by default:

update-alternatives --set gem  /usr/bin/gem1.8
update-alternatives --set ruby  /usr/bin/ruby1.8
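
A quick sanity check that the switch took effect:

ruby --version                                  # should report ruby 1.8.x
update-alternatives --display ruby | head -n 2  # should show /usr/bin/ruby1.8 as the current link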

Configure Puppet

On the puppet node:

augtool << EOT
set /files/etc/puppet/puppet.conf/master/storeconfigs true
set /files/etc/puppet/puppet.conf/master/dbadapter sqlite3
set /files/etc/puppet/puppet.conf/master/dblocation /var/lib/puppet/server_data/storeconfigs.sqlite
set /files/etc/puppet/puppet.conf/agent/pluginsync true
set /files/etc/puppet/puppet.conf/agent/server <controller.hostname>
save
EOT
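
To double-check what augtool wrote, the relevant subtrees can be printed back (purely a verification step):

augtool print /files/etc/puppet/puppet.conf/master
augtool print /files/etc/puppet/puppet.conf/agent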

cat > /etc/puppet/manifests/site.pp << EOT
node default {
  notify { "Hey ! It works !": }
}
EOT

service puppetmaster restart

puppet agent -vt

There should be no errors, and you should see the message "Hey ! It works !".

⚠ Warning ⚠: With sqlite3 as the database backend, only one puppet agent can run at a time.

Install the OpenStack modules

Get the modules

cd /etc/puppet/modules
git clone git://git.labs.enovance.com/puppet.git .
git checkout openstack
git submodule init
git submodule update

Add razor modules

# Remove conflicting modules
git rm -rf sudo
sed -i '/nodejs/d' .gitmodules
git rm --cached nodejs
rm -rf nodejs
rm -rf .git/modules/nodejs

# Add the new ones
git submodule add https://github.com/puppetlabs/puppetlabs-mongodb.git mongodb
git submodule add https://github.com/puppetlabs/puppetlabs-dhcp dhcp
git submodule add https://github.com/puppetlabs/puppetlabs-tftp.git tftp
git submodule add https://github.com/puppetlabs/puppetlabs-apt.git apt
git submodule add https://github.com/puppetlabs/puppetlabs-ruby ruby
git submodule add https://github.com/puppetlabs/puppetlabs-nodejs nodejs
git submodule add https://github.com/saz/puppet-sudo.git sudo
git submodule add https://github.com/puppetlabs/puppetlabs-razor razor
git submodule add https://github.com/attachmentgenie/puppet-module-network.git network

(cd sudo && git checkout v2.0.0)
(cd mongodb && git checkout 0.1.0)
(cd dhcp && git checkout 1.1.0)
(cd tftp && git checkout 0.2.0)
(cd apt && git checkout 0.0.4)
(cd ruby && git checkout 0.0.2)
(cd nodejs && git checkout 0.2.0)
(cd razor && git checkout 0.2.1)
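
A quick way to confirm that every module is checked out where expected:

git submodule status   # one line per module: <commit> <path> (<tag>)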

* Apply this patch to the razor module (/etc/puppet/modules/razor); the bug is present in 0.2.1 and in the master branch as of 9 August (https://github.com/puppetlabs/puppetlabs-razor/pull/51):

diff --git a/lib/puppet/provider/rz_image/default.rb b/lib/puppet/provider/rz_image/default.rb
index 9aed99e..d46a735 100644
--- a/lib/puppet/provider/rz_image/default.rb
+++ b/lib/puppet/provider/rz_image/default.rb
@@ -60,12 +60,12 @@ Puppet::Type.type(:rz_image).provide(:default) do
       else
         source = resource[:source]
       end
-      case resource[:type]
+      case resource[:type].to_s
       when 'os'
-        Puppet.debug "razor image add #{resource[:type]} #{resource[:source]} #{resource[:name]} #{resource[:version]}"
+        Puppet.debug "razor image add #{resource[:type]} #{source} #{resource[:name]} #{resource[:version]}"
         razor 'image', 'add', resource[:type], source, resource[:name], resource[:version]
       else
-        Puppet.debug "razor image add #{resource[:type]} #{resource[:source]}"
+        Puppet.debug "razor image add #{resource[:type]} #{source}"
         razor 'image', 'add', resource[:type], source
       end
     ensure

* Apply this patch to the tftp module (/etc/puppet/modules/tftp); the bug is present in 0.2.0 and in the master branch as of 7 August (https://github.com/puppetlabs/puppetlabs-tftp/pull/17):

diff --git a/manifests/init.pp b/manifests/init.pp
index 4fe22be..f91763e 100644
--- a/manifests/init.pp
+++ b/manifests/init.pp
@@ -58,9 +58,10 @@ class tftp (
     xinetd::service { 'tftp':
       port        => $port,
       protocol    => 'udp',
-      server_args => "${options} ${directory}",
+      server_args => "${options} -u ${username} ${directory}",
       server      => $binary,
-      user        => $username,
+      user        => 'root',
+      group       => 'root',
       bind        => $address,
       socket_type => 'dgram',
       cps         => '100 2',
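
A possible way to apply the two diffs above, assuming each one has been saved to a file first (the /tmp filenames here are only examples):

(cd /etc/puppet/modules/razor && git apply /tmp/razor-pr51.diff)
(cd /etc/puppet/modules/tftp && git apply /tmp/tftp-pr17.diff)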

Note: The nodejs packages are installed from the Debian sid repository. As of 7 August, the nodejs packages in sid break npm. The last working version of nodejs compatible with npm on Debian sid is nodejs_0.6.19~dfsg1-2.

Build the manifest

The following manifest sets up the DHCP server and razor on the puppet node:

node "<puppet.hostname>" {
        # dhcpd
        class { 'dhcp':
                dnsdomain   => [
                        'razor.lan',
                        '100.168.192.in-addr.arpa',
                        ],
                nameservers => ['8.8.8.8'],
                interfaces  => ['eth1'],
                ntpservers  => ['us.pool.ntp.org'],
                pxeserver   => '192.168.100.200',
                pxefilename => 'pxelinux.0',
        }
        dhcp::pool{ 'razor.lan':
                network => '192.168.100.0',
                mask    => '255.255.255.0',
                range   => '192.168.100.180 192.168.100.199',
                gateway => '192.168.100.1',
        }

        # razor 
        class { 'sudo':
                config_file_replace => false,
        }
        class { 'razor':
                address => $ipaddress_eth1
        }

        # The provider is quite recent; for now, no error is reported if a field is wrong or if the name doesn't end in .iso
        rz_image { "debian-wheezy-netboot-amd64.iso": 
                ensure  => present,
                type    => 'os',  
                version => '7.0b1',  
                source  => "http://ftp.debian.org/debian/dists/testing/main/installer-amd64/current/images/netboot/mini.iso",
                require => [ Class['razor'], Service['razor'] ],
        }
}

Note: Warning, the netboot ISO is *not* the same as the netinstall ISO; only the netboot ISO works with razor.

And then apply the configuration

puppet agent -vt

If the following error occurs, you have the broken version of npm (as of 9 August 2012, the node-* and npm packages are being migrated to use the nodejs binary instead of node: http://packages.debian.org/changelogs/pool/main/n/nodejs/nodejs_0.6.19~dfsg1-4/changelog):

err: /Stage[main]/Nodejs/Package[npm]/ensure: change from purged to present failed: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install npm' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 npm : Depends: nodejs but it is not going to be installed
       Depends: nodejs-dev but it is not going to be installed
...
       Depends: node-which but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

To fix it, follow these three steps:

* Downgrade to the last working version (the .deb files can be retrieved from snapshot.debian.org):

dpkg -i nodejs_0.6.19~dfsg1-2_amd64.deb nodejs-dev_0.6.19~dfsg1-2_amd64.deb  
apt-get install -f

* Apply this patch to the puppet nodejs module (/etc/puppet/modules/nodejs):

diff --git a/manifests/init.pp b/manifests/init.pp
index ee90e54..8eb8b20 100644
--- a/manifests/init.pp
+++ b/manifests/init.pp
@@ -57,7 +57,7 @@ class nodejs(
 
   package { 'nodejs':
     name    => $nodejs::params::node_pkg,
-    ensure  => present,
+    ensure  => held,
     require => Anchor['nodejs::repo']
   }
 
@@ -70,7 +70,7 @@ class nodejs(
   if $dev_package and $nodejs::params::dev_pkg {
     package { 'nodejs-dev':
       name    => $nodejs::params::dev_pkg,
-      ensure  => present,
+      ensure  => held,
       require => Anchor['nodejs::repo']
     }
   }

* And then reapply the puppet configuration:

puppet agent -vt

The problem should be resolved.
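
Optionally, the downgraded packages can also be put on hold at the dpkg level, so that an apt-get upgrade run outside of puppet does not pull the broken version back in:

echo nodejs hold | dpkg --set-selections
echo nodejs-dev hold | dpkg --set-selections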

Razor configuration

razor is not yet fully configurable via puppet, so we must change some configuration files by hand. By default, the razor daemon is configured to serve the REST API and OS images on eth0.

To change it, edit /opt/razor/conf/razor_server.conf and set the "mk_uri" and "image_svc_host" fields; the two modified values are shown below (https://github.com/puppetlabs/puppetlabs-razor/pull/46):

...
force_mk_uuid: ""
image_svc_host: 192.168.100.200
image_svc_path: /opt/razor/image
...
mk_tce_mirror_uri: http://localhost:2157/tinycorelinux
mk_uri: http://192.168.100.200:8026
node_expire_timeout: 300
...
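
After editing, a quick grep confirms that both fields now point at the provisioning address:

grep -E 'image_svc_host|mk_uri' /opt/razor/conf/razor_server.conf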

Apply this patch to razor (/opt/razor); it fixes the preseed file for Debian wheezy and removes the prompt asking which network interface to use for the installation (https://github.com/puppetlabs/Razor/pull/170, fixed in the master branch):

diff --git a/lib/project_razor/model/debian/wheezy/kernel_args.erb b/lib/project_razor/model/debian/wheezy/kernel_args.erb
index dfef1f8..2a61ed3 100644
--- a/lib/project_razor/model/debian/wheezy/kernel_args.erb
+++ b/lib/project_razor/model/debian/wheezy/kernel_args.erb
@@ -1,4 +1,4 @@
-<% if @node.dhcp_mac %>BOOTIF=<%= @node.dhcp_mac %>  DEBCONF_DEBUG=5 install auto=true url=<%= "#{api_svc_uri}/policy/callback/#{policy_uuid}/preseed/file" %> debian-installer=en_US locale=en_US kbd-chooser/method=us netcfg/get_hostname=wheezy netcfg/get_domain=razor.lan fb=false debconf/frontend=noninteractive console-setup/ask_detect=false console-keymaps-at/keymap=us
+<% if @node.dhcp_mac %>BOOTIF=<%= @node.dhcp_mac %>  DEBCONF_DEBUG=5 install auto=true url=<%= "#{api_svc_uri}/policy/callback/#{policy_uuid}/preseed/file" %> debian-installer=en_US locale=en_US kbd-chooser/method=us netcfg/get_hostname=wheezy netcfg/get_domain=razor.lan fb=false debconf/frontend=noninteractive console-setup/ask_detect=false console-keymaps-at/keymap=us interface=auto
 <% else %>DEBCONF_DEBUG=5 install auto=true url=<%= "#{api_svc_uri}/policy/callback/#{policy_uuid}/preseed/file" %> debian-installer=en_US locale=en_US kbd-chooser/method=us netcfg/get_hostname=wheezy netcfg/get_domain=razor.lan fb=false debconf/frontend=noninteractive console-setup/ask_detect=false console-keymaps-at/keymap=us
 <% end %>

diff --git a/lib/project_razor/model/debian/wheezy/preseed.erb b/lib/project_razor/model/debian/wheezy/preseed.erb
index 50a2349..c80a55c 100644
--- a/lib/project_razor/model/debian/wheezy/preseed.erb
+++ b/lib/project_razor/model/debian/wheezy/preseed.erb
@@ -40,7 +40,9 @@ d-i passwd/root-password password <%= @root_password %>
 d-i passwd/root-password-again password <%= @root_password %>
 d-i user-setup/allow-password-weak boolean true
 #d-i apt-setup/restricted boolean true
-#d-i pkgsel/include string ruby openssh-server build-essential curl
+d-i pkgsel/include string ruby openssh-server build-essential curl
+d-i pkgsel/upgrade select dist-upgrade
+tasksel tasksel/first multiselect standard
 d-i grub-installer/only_debian boolean true
 d-i grub-installer/with_other_os boolean true
 popularity-contest popularity-contest/participate boolean false

Currently, razor does not handle the SSH connection correctly; as a quick fix, add this to /root/.ssh/config (https://github.com/puppetlabs/Razor/issues/161 and https://github.com/puppetlabs/Razor/pull/163, fixed in the master branch):

Host * 
        UserKnownHostsFile=/dev/null
        StrictHostKeyChecking=no
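
Make sure the file has strict permissions; ssh refuses to use a configuration file that is writable by other users:

chmod 600 /root/.ssh/config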

Then restart the razor daemon:

/opt/razor/bin/razor_daemon.rb restart
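
Check that it came back up (the status subcommand comes from the ruby daemons gem the script is built on; a plain process listing works as well):

/opt/razor/bin/razor_daemon.rb status
ps aux | grep [r]azor_daemon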

Rebuild the iPXE configuration (the one provided by puppet is completely broken; https://github.com/puppetlabs/puppetlabs-razor/issues/43):

razor config ipxe > /srv/tftp/razor.ipxe 

Check that razor is working:

# razor image get
Images
 UUID =>  5pdTSJNIj159VM0dUeW8Yp
 Type =>  OS Install
 ISO Filename =>  debian-wheezy-netboot-amd64.iso
 Path =>  /opt/razor/image/os/5pdTSJNIj159VM0dUeW8Yp
 Status =>  Valid
 OS Name =>  debian-wheezy-netboot-amd64.iso
 OS Version =>  7.0b1

 UUID =>  5wlj4lpU0QAuZ1lAsV8zTb
 Type =>  MicroKernel Image
 ISO Filename =>  rz_mk_prod-image.0.9.0.4.iso
 Path =>  /opt/razor/image/mk/5wlj4lpU0QAuZ1lAsV8zTb
 Status =>  Valid
 Version =>  0.9.0.4
 Built Time =>  Wed Jul 04 00:49:49 +0200 2012

Note: we must have two images.

Boot the *other nodes* via PXE, wait a bit, and watch the nodes appear:

# watch razor node

Razor preparation

On the puppet node:

# razor image get
Images
 UUID =>  5pdTSJNIj159VM0dUeW8Yp
 Type =>  OS Install
 ISO Filename =>  debian-wheezy-netboot-amd64.iso
 Path =>  /opt/razor/image/os/5pdTSJNIj159VM0dUeW8Yp
 Status =>  Valid
 OS Name =>  debian-wheezy-netboot-amd64.iso
 OS Version =>  7.0b1
...

Create 2 models, one for the controller node and one for the compute nodes:

# razor model add template=debian_wheezy label="install_openstack_controller" image_uuid=5pdTSJNIj159VM0dUeW8Yp
--- Building Model (debian_wheezy): 

Please enter node hostname prefix (will append node number) (example: node) 
default: node
(QUIT to cancel)
 > controller
Please enter root password (> 8 characters) (example: P@ssword!) 
default: test1234
(QUIT to cancel)
 > password
Please enter local domain name (will be used in /etc/hosts file) (example: example.com) 
default: localdomain
(QUIT to cancel)
 > razor.lan
Model created
 Label =>  install_openstack_controller
 Template =>  linux_deploy
 Description =>  Debian Wheezy Model
 UUID =>  2gr0aVxNFoGDNiJO9TNpS3
 Image UUID =>  5pdTSJNIj159VM0dUeW8Yp

# razor model add template=debian_wheezy label="install_openstack_compute" image_uuid=5pdTSJNIj159VM0dUeW8Yp
--- Building Model (debian_wheezy): 

Please enter local domain name (will be used in /etc/hosts file) (example: example.com) 
default: localdomain
(QUIT to cancel)
 > razor.lan
Please enter root password (> 8 characters) (example: P@ssword!) 
default: test1234
(QUIT to cancel)
 > password
Please enter node hostname prefix (will append node number) (example: node) 
default: node
(QUIT to cancel)
 > compute
Model created
 Label =>  install_openstack_compute
 Template =>  linux_deploy
 Description =>  Debian Wheezy Model
 UUID =>  3N30NpzhmdF0ve3Mc53r7H
 Image UUID =>  5pdTSJNIj159VM0dUeW8Yp

Note: to list all available razor model templates: razor model get template

Add openstack configuration to the manifest

Copy the example manifest into the one used by the puppetmaster:

cat /etc/puppet/modules/examples/openstack_compute_multihost.pp >> /etc/puppet/manifests/site.pp

Note: as of 9 August, the file openstack_compute_multihost.pp is not up to date; take the attached one as an example instead.

Edit /etc/puppet/manifests/site.pp to change the following lines:

The actual private IPs of the controller and compute hosts (see the beginning of this HOWTO):

$db_host = '192.168.100.100' # IP address of the host on which the database will be installed (the controller for instance)
$db_allowed_hosts = ['192.168.100.%'] # IP addresses for all compute hosts: they need access to the database

The FQDN of the host providing the API server, which must be the same as the <controller.hostname> used above:

# The fqdn of the controller host
$api_server = '<controller.hostname>'

If the interface used for the private network is not eth1, replace eth1 with the actual interface on which the IPs 192.168.100.0/24 are found (for instance br0).

Currently, the network is configured via DHCP; switch it to a static configuration managed by puppet:

* Add this class:

class openstack_network {
        class { "network::interfaces":
          interfaces => {
            "eth0" => {
              "method" => "static",
              "address" => $ipaddress_eth0,
              "netmask" => "255.255.255.0",
            },
            "eth1" => {
              "method" => "static",
              "address" => $ipaddress_eth1,
              "netmask" => "255.255.255.0",
              "gateway" => "192.168.100.1"
            },
          },
          auto => ["eth0", "eth1"],
        }
}

* And add the network configuration at the top of each node:

node /mgmt/ {
        $ipaddress_eth0 = "10.142.6.100"
        $ipaddress_eth1 = "192.168.100.100"
        $ipaddress = $ipaddress_eth0

        class {"openstack_network": }
...
}

node /compute/ {
        $nodeid = split($hostname, 'compute')
        $ipaddress_eth0 = "10.142.6.3$nodeid"
        $ipaddress_eth1 = "192.168.100.3$nodeid"
        $ipaddress = $ipaddress_eth0
        class {"openstack_network": }
...
}

Create razor policies for automatic deployment

List the available models:

# razor model
Models                        Label         Template        Description               UUID           
install_openstack_compute     linux_deploy  Debian Wheezy Model  3N30NpzhmdF0ve3Mc53r7H  
install_openstack_controller  linux_deploy  Debian Wheezy Model  2gr0aVxNFoGDNiJO9TNpS3  

Create a broker to automatically start puppet after the OS installation:

# razor broker add plugin=puppet name=puppet description=puppet servers="<puppet.hostname>"

 Name =>  puppet
 Description =>  puppet
 Plugin =>  puppet
 Servers =>  [<puppet.hostname>]
 UUID =>  7bmB3rPI62RuH7X9Klx5At

Create the policies; we need 2 compute nodes with 1 GiB of memory and two NICs, and 1 controller node with 512 MB of memory and two NICs:

# razor policy add --template=linux_deploy --label="Openstack_Compute_nodes" --broker-uuid 7bmB3rPI62RuH7X9Klx5At --model-uuid=3N30NpzhmdF0ve3Mc53r7H --tags=memsize_1015MiB,nics_2 --maximum 2 --enabled
# razor policy add --template=linux_deploy --label="Openstack_Controller_nodes" --broker-uuid 7bmB3rPI62RuH7X9Klx5At --model-uuid=2gr0aVxNFoGDNiJO9TNpS3 --tags=memsize_500MiB,nics_2 --maximum 1 --enabled
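
List the policies to confirm that both were registered and enabled:

# razor policy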

Razor starts to set up these three nodes:

# watch -n5 -- 'razor node ; echo ; razor active_model'
    Every 5,0s: razor node ; echo ; razor active_model

Discovered Nodes
         UUID           Last Checkin  Status                   Tags
3Olux9oaMQ2MBeeYveO37n  9.4 min       B       [memsize_1015MiB,virtualbox_vm,nics_2]
2SDW5Q7so8cv2xuksi9SQR  3.1 min       B       [memsize_500MiB,virtualbox_vm,nics_2]

Active Models:
          Label                State           Node UUID         Broker  Bind #           UUID
Openstack_Compute_Nodes     broker_wait  3Olux9oaMQ2MBeeYveO37n  puppet  3       3WLuWiog6Ln64TqxMPYtqR
Openstack_Controller_Nodes  preinstall   2SDW5Q7so8cv2xuksi9SQR  puppet  2       2Y8MAweS32tpcJvXmoh6QZ

When the state of the active model is broker_wait, run the following on the puppet node:

# puppetca list --all
  "6fb2ed90c426012ffa7f0800275db40f" (CF:2B:BC:D7:39:8E:2E:8D:E9:26:52:5D:2A:03:DC:96)
+ "puppet2.razor.lan"                (BB:6F:03:EF:61:FE:A9:94:C7:76:46:DA:7C:8F:05:D1) (alt names: "DNS:puppet", "DNS:puppet.razor.lan", "DNS:puppet.razor.lan")
# puppetca sign 6fb2ed90c426012ffa7f0800275db40f
notice: Signed certificate request for 6fb2ed90c426012ffa7f0800275db40f
notice: Removing file Puppet::SSL::CertificateRequest 6fb2ed90c426012ffa7f0800275db40f at '/var/lib/puppet/ssl/ca/requests/6fb2ed90c426012ffa7f0800275db40f.pem'
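
Signing each certificate by hand gets tedious with several nodes; puppet's autosign feature can accept all pending requests automatically (convenient in a lab, insecure anywhere else):

echo '*' > /etc/puppet/autosign.conf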

As of 9 August 2012, the broker is a bit broken: the state always stays stuck at broker_wait.

Checking if it really works

On the controller node:

The required services should be advertised in the database:

root@controller:~# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth controller1                          nova             enabled    :-)   2012-05-03 08:56:29
nova-scheduler   controller1                          nova             enabled    :-)   2012-05-03 08:56:31
nova-cert        controller1                          nova             enabled    :-)   2012-05-03 08:56:32
nova-compute     compute1                             nova             enabled    :-)   2012-05-03 08:56:50
nova-network     compute1                             nova             enabled    :-)   2012-05-03 08:56:49
nova-compute     compute2                             nova             enabled    :-)   2012-05-03 08:56:47
nova-network     compute2                             nova             enabled    :-)   2012-05-03 08:56:48

A file named 'openrc.sh' has been created in /root on the controller node. Source it and check that the Nova API works:

root@controller1:~# source openrc.sh
root@controller1:~# nova list
+----+------+--------+----------+
| ID | Name | Status | Networks |
+----+------+--------+----------+
+----+------+--------+----------+
root@controller1:~# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+
| ID |    Name   | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+-----------+------+-------+-------------+
| 1  | m1.tiny   | 512       | 0    | 0         |      | 1     | 1.0         |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         |
+----+-----------+-----------+------+-----------+------+-------+-------------+
root@controller:~# nova image-list
+----+------+--------+--------+
| ID | Name | Status | Server |
+----+------+--------+--------+
+----+------+--------+--------+

The OpenStack cluster is quite empty and useless like this; let's upload an image into glance:

root@controller1:~# wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
root@controller1:~# glance add name="CirrOS 0.3" disk_format=qcow2 container_format=ovf < cirros-0.3.0-x86_64-disk.img
Uploading image 'CirrOS 0.3'
================================================================[100%] 7.73M/s, ETA  0h  0m  0s
Added new image with ID: 949bbc5c-e6fa-4ec3-91cb-65cbb6123c54
root@controller:~# glance index
ID                                   Name         Disk Format   Container Format  Size
------------------------------------ ------------ ------------- ----------------- ----------
949bbc5c-e6fa-4ec3-91cb-65cbb6123c54 CirrOS 0.3   qcow2         ovf                  9761280

Does it show up in nova?

root@controller1:~# nova image-list
+--------------------------------------+-----------------+--------+--------+
|                  ID                  |       Name      | Status | Server |
+--------------------------------------+-----------------+--------+--------+
| 949bbc5c-e6fa-4ec3-91cb-65cbb6123c54 | CirrOS 0.3      | ACTIVE |        |
+--------------------------------------+-----------------+--------+--------+

The nova network puppet module creates a private network for the VMs to use. Check that it has been created:

root@controller1:~# nova-manage network list
id    IPv4                IPv6            start address   DNS1            DNS2            VlanID          project         uuid
1     169.254.200.0/24    None            169.254.200.3   None            None            2000            None            71681e09-c072-4281-b5b4-37f26ddc97bf

And create some floating (public) IPs (choose an IP range addressable on your network):

root@controller:~# nova-manage floating create --ip_range 10.142.6.224/27
root@controller:~# nova-manage floating list
None  10.142.6.225  None  nova  eth0
None  10.142.6.226  None  nova  eth0

Now create a keypair (for SSH access) and save the output in a file:

root@controller1:~# nova keypair-add test_keypair > test_keypair.pem
root@controller1:~# chmod 600 test_keypair.pem

Boot an instance and get the console log:

root@controller1:~# nova boot --image 949bbc5c-e6fa-4ec3-91cb-65cbb6123c54 --flavor 1 --key_name test_keypair FirstTest --poll
+-------------------------------------+--------------------------------------+
|               Property              |                Value                 |
+-------------------------------------+--------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                               |
| OS-EXT-SRV-ATTR:host                | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                 |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000001                    |
| OS-EXT-STS:power_state              | 0                                    |
| OS-EXT-STS:task_state               | scheduling                           |
| OS-EXT-STS:vm_state                 | building                             |
| accessIPv4                          |                                      |
| accessIPv6                          |                                      |
| adminPass                           | yab49fMqVHJf                         |
| config_drive                        |                                      |
| created                             | 2012-05-03T10:09:00Z                 |
| flavor                              | m1.tiny                              |
| hostId                              |                                      |
| id                                  | 06dd6129-f94a-488d-9670-7171491899e5 |
| image                               | CirrOS 0.3                           |
| key_name                            | test_keypair                         |
| metadata                            | {}                                   |
| name                                | FirstTest                            |
| progress                            | 0                                    |
| status                              | BUILD                                |
| tenant_id                           | d1c9085272d542eda98f7e08a1a779d6     |
| updated                             | 2012-05-03T10:09:00Z                 |
| user_id                             | cd04222b81004af5b0ff20c840fb629e     |
+-------------------------------------+--------------------------------------+
root@controller1:~# nova console-log FirstTest

Allocate a floating IP and associate it with the instance:

root@controller1:~# nova floating-ip-create
+--------------+-------------+----------+------+
|      Ip      | Instance Id | Fixed Ip | Pool |
+--------------+-------------+----------+------+
| 10.142.6.225 | None        | None     | nova |
+--------------+-------------+----------+------+
root@controller:~# nova add-floating-ip FirstTest 10.142.6.225

Update the rules for the default security group (allow ICMP and SSH):

root@controller1:~# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port |  IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
root@controller:~# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port |  IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

We should now be able to ping the instance:

root@controller1:~# ping -c 1 10.142.6.225
PING 10.142.6.225 (10.142.6.225) 56(84) bytes of data.
64 bytes from 10.142.6.225: icmp_req=1 ttl=63 time=0.626 ms
 --- 10.142.6.225 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms

And SSH into it with the keypair we created before:

root@controller1:~# ssh -i test_keypair.pem cirros@10.142.6.225
$ uname -a
Linux cirros 3.0.0-12-virtual #20-Ubuntu SMP Fri Oct 7 18:19:02 UTC 2011 x86_64 GNU/Linux
$ exit
Connection to 10.142.6.225 closed.

Et voilà!