HOWTO: Automated OpenStack deployment on Debian GNU/Linux wheezy with Razor
This HOWTO provides guidelines to automate the installation and setup of a multi-node OpenStack Compute (aka Nova) environment with Razor.
* THIS HOWTO IS UNDER CONSTRUCTION, DON'T USE IT YET *
This environment will include 4 hosts:
- 1 puppetmaster/razor node:
- puppet: pubnet@eth0=10.142.6.200 / privnet@eth1=192.168.100.200
- 2 compute nodes or more:
- compute1: pubnet@eth0=10.142.6.31 / privnet@eth1=192.168.100.31
- compute2: pubnet@eth0=10.142.6.32 / privnet@eth1=192.168.100.32
- computeX: pubnet@eth0=10.142.6.3X / privnet@eth1=192.168.100.3X
- 1 proxy/controller node:
- controller: pubnet@eth0=10.142.6.100 / privnet@eth1=192.168.100.100
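For reference, the private-network part of this address plan can be summarised in /etc/hosts form (the razor.lan domain is the one configured later in this HOWTO):

```
# private network (eth1)
192.168.100.200  puppet.razor.lan      puppet
192.168.100.100  controller.razor.lan  controller
192.168.100.31   compute1.razor.lan    compute1
192.168.100.32   compute2.razor.lan    compute2
```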
Choices:
- Virtualization technology: kvm/libvirt
- Networking mode: VlanManager + multi_host
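For context, with the 2012.1 packages this networking choice eventually translates into nova.conf flags such as the following on the compute nodes. This is only an illustration; the Puppet manifests used later in this HOWTO generate the real configuration:

```
# nova.conf (illustrative fragment)
network_manager=nova.network.manager.VlanManager
multi_host=True
vlan_interface=eth1
```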
Services on puppet node:
- razor
- tftpd
- dhcpd
- puppet master
- puppet agent
On compute* nodes:
- puppet agent
- nova-compute
- nova-network
- nova-api (metadata only)
On controller node:
- puppet agent
- mysql database
- keystone
- glance (local storage)
- nova-api
- nova-scheduler
- nova-novncproxy
DOCUMENT CONVENTIONS
In formatted blocks:
- command lines starting with a # must be run as root.
- values between < and > must be replaced with your own values.
PREREQUISITES
Things to prepare beforehand:
- Machines:
  - They should have two network interfaces to ensure security. If only one interface is used, the private part is more exposed to attacks coming from the public part:
    - a "public" one to communicate with the outside world
    - a "private" one for the guest VLans
  - Disk space: there *must* be at least as much free disk space as RAM in / on the controller host (see https://labs.enovance.com/issues/374 for the rationale).
- Network :
- public network
  - private network: if the machines are not on a LAN, create one with OpenVPN.
- fixed ip range for guests
- number of networks for guests
- network size for guests
- public "floating" IPs (optional)
- Base distribution :
- Debian GNU/Linux squeeze (will be upgraded to wheezy)
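The disk-space prerequisite above can be verified with a small helper that compares the free space in / with the amount of RAM. This is a sketch; the function name check_disk_vs_ram is ours, not part of any tool:

```shell
# check_disk_vs_ram FREE_KB RAM_KB: prints OK when free space >= RAM.
check_disk_vs_ram() {
    if [ "$1" -ge "$2" ]; then
        echo "OK"
    else
        echo "WARNING"
    fi
}

# On the controller, feed it live values:
#   free_kb=$(df -kP / | awk 'NR==2 {print $4}')
#   ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
#   check_disk_vs_ram "$free_kb" "$ram_kb"
check_disk_vs_ram 8388608 4194304   # 8 GiB free vs 4 GiB RAM -> OK
```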
IMPORTANT
This HOWTO is valid for the OpenStack Nova, Glance and Keystone packages labelled 2012.1, currently available in Debian Wheezy, and might need some adjustments with later versions.
Upgrade to Wheezy
Edit /etc/apt/sources.list to read:
deb http://ftp.fr.debian.org/debian/ wheezy main
deb-src http://ftp.fr.debian.org/debian/ wheezy main
deb http://security.debian.org/ wheezy/updates main
deb-src http://security.debian.org/ wheezy/updates main
# squeeze-updates, previously known as 'volatile'
deb http://ftp.fr.debian.org/debian/ squeeze-updates main
deb-src http://ftp.fr.debian.org/debian/ squeeze-updates main
Then :
# apt-get update
# apt-get dist-upgrade -y
# reboot
DNS notes
The puppet node must be resolvable through DNS. If you run a DNS server, add an entry for "puppet.razor.lan". If you don't, just add the following to your /etc/hosts:
192.168.100.200 puppet.razor.lan
and install dnsmasq:
apt-get install -y dnsmasq
Installation
Puppet
Install puppet agent and master on the puppet node:
# apt-get install -y puppet augeas-tools puppetmaster sqlite3 libsqlite3-ruby libactiverecord-ruby git mysql-server mysql-client rubygems
Ensure Ruby 1.9 is *not* used by default:
update-alternatives --set gem /usr/bin/gem1.8
update-alternatives --set ruby /usr/bin/ruby1.8
Configure Puppet
On the puppet node:
- Enable storedconfigs and configure the database:
augtool << EOT
set /files/etc/puppet/puppet.conf/master/storeconfigs true
set /files/etc/puppet/puppet.conf/master/dbadapter mysql
set /files/etc/puppet/puppet.conf/master/dbname puppet
set /files/etc/puppet/puppet.conf/master/dbuser puppet
set /files/etc/puppet/puppet.conf/master/dbpassword password
set /files/etc/puppet/puppet.conf/master/dbserver localhost
set /files/etc/puppet/puppet.conf/master/dbsocket /var/run/mysqld/mysqld.sock
set /files/etc/puppet/puppet.conf/agent/pluginsync true
set /files/etc/puppet/puppet.conf/agent/server <controller.hostname>
save
EOT
- Create the mysql database:
mysqladmin create puppet
mysql -e "grant all on puppet.* to 'puppet'@'localhost' identified by 'password';"
- Set up autosigning:
echo '*' > /etc/puppet/autosign.conf # not really safe, but this is the only way available for now.
- Create a dummy site manifest:
cat > /etc/puppet/manifests/site.pp << EOT
node default {
  notify { "Hey ! It works !": }
}
EOT
- Restart puppetmaster:
service puppetmaster restart
- Test the puppet agent:
puppet agent -vt
There should be no error and you should see a message saying "Hey ! It works !"
Install the openstack modules
Get the modules
cd /etc/puppet/modules
git clone git://git.labs.enovance.com/puppet.git .
git checkout openstack
git submodule init
git submodule update
Add razor modules
# Remove conflicting modules
git rm -rf sudo
sed -i '/nodejs/d' .gitmodules
git rm --cached nodejs
rm -rf nodejs
rm -rf .git/modules/nodejs
# Add new ones
git submodule add https://github.com/puppetlabs/puppetlabs-mongodb.git mongodb
git submodule add https://github.com/puppetlabs/puppetlabs-dhcp dhcp
git submodule add https://github.com/puppetlabs/puppetlabs-tftp.git tftp
git submodule add https://github.com/puppetlabs/puppetlabs-apt.git apt
git submodule add https://github.com/puppetlabs/puppetlabs-ruby ruby
git submodule add https://github.com/puppetlabs/puppetlabs-nodejs nodejs
git submodule add https://github.com/saz/puppet-sudo.git sudo
git submodule add https://github.com/puppetlabs/puppetlabs-razor razor
git submodule add https://github.com/attachmentgenie/puppet-module-network.git network
(cd sudo && git checkout v2.0.0)
(cd mongodb && git checkout 0.1.0)
(cd dhcp && git checkout 1.1.0)
(cd tftp && git checkout 0.2.1)
(cd apt && git checkout 0.0.4)
(cd ruby && git checkout 0.0.2)
(cd nodejs && git checkout 0.2.0)
cd razor
git checkout master   # or commit ba9503d805d788d44291b1f3fbf142c044bd2e02, tested for this HOWTO
Import all the Puppet providers needed to fully control Razor with Puppet, taken from the upcoming 0.2.2 version:
curl https://github.com/puppetlabs/puppetlabs-razor/pull/48.patch | git am
The pull request is not yet merged, so this additional patch may be needed (found on 3/07/12):
diff --git a/lib/puppet/type/rz_broker.rb b/lib/puppet/type/rz_broker.rb
index f5647ba..023fbc9 100644
--- a/lib/puppet/type/rz_broker.rb
+++ b/lib/puppet/type/rz_broker.rb
@@ -14,6 +14,11 @@ EOT
     newvalues(/\w+/)
   end

+  newproperty(:description) do
+    desc "The broker description."
+    newvalues(/\w+/)
+  end
+
   newproperty(:plugin) do
     desc "The broker plugin."
     newvalues(/\w+/)
Note: the nodejs packages are installed from the Debian sid repository. On 7 August, the nodejs packages in sid broke npm. The last working version of nodejs compatible with npm on Debian sid is nodejs_0.6.19~dfsg1-2.
Build the manifest
The following manifest sets up the DHCP server and Razor on the puppet node:
node "<puppet.hostname>" {
  # dhcpd
  class { 'dhcp':
    dnsdomain   => [
      'razor.lan',
      '100.168.192.in-addr.arpa',
    ],
    nameservers => ['8.8.8.8'],
    interfaces  => ['eth1'],
    ntpservers  => ['us.pool.ntp.org'],
    pxeserver   => '192.168.100.200',
    pxefilename => 'pxelinux.0',
  }

  dhcp::pool { 'razor.lan':
    network => '192.168.100.0',
    mask    => '255.255.255.0',
    range   => '192.168.100.180 192.168.100.199',
    gateway => '192.168.100.1',
  }

  # razor
  class { 'sudo':
    config_file_replace => false,
  }

  class { 'razor':
    address   => $ipaddress_eth1,
    mk_name   => "rz_mk_prod-image.0.9.0.5.iso",
    mk_source => "https://github.com/downloads/puppetlabs/Razor-Microkernel/rz_mk_prod-image.0.9.0.5.iso",
  }

  # The provider is quite recent: for now, no error is reported if a field
  # is wrong or if the name does not end in .iso
  rz_image { "debian-wheezy-netboot-amd64.iso":
    ensure  => present,
    type    => 'os',
    version => '7.0b1',
    source  => "http://ftp.debian.org/debian/dists/wheezy/main/installer-amd64/current/images/netboot/mini.iso",
  }

  rz_model { 'controller_model':
    ensure      => present,
    description => 'Controller Wheezy Model',
    image       => 'debian-wheezy-netboot-amd64.iso',
    metadata    => {'domainname' => 'razor.lan', 'hostname_prefix' => 'controller', 'root_password' => 'password'},
    template    => 'debian_wheezy',
  }

  rz_model { 'compute_model':
    ensure      => present,
    description => 'Compute Wheezy Model',
    image       => 'debian-wheezy-netboot-amd64.iso',
    metadata    => {'domainname' => 'razor.lan', 'hostname_prefix' => 'compute', 'root_password' => 'password'},
    template    => 'debian_wheezy',
  }

  rz_broker { 'puppet_broker':
    ensure  => present,
    plugin  => 'puppet',
    servers => [ 'puppet2.razor.lan' ],
  }

  rz_policy { 'controller_policy':
    ensure   => present,
    broker   => 'puppet_broker',
    model    => 'controller_model',
    enabled  => 'true',
    tags     => ['memsize_500MiB','nics_2'],
    template => 'linux_deploy',
    maximum  => 1,
  }

  rz_policy { 'compute_policy':
    ensure   => present,
    broker   => 'puppet_broker',
    model    => 'compute_model',
    enabled  => 'true',
    tags     => ['memsize_1015MiB','nics_2'],
    template => 'linux_deploy',
    maximum  => 3,
  }
}
Note: the netboot ISO is *not* the same as the netinstall ISO; only the netboot ISO works with Razor.
Note: if your private network is not connected to the internet, the Razor server can be used as a gateway by:
* changing the gateway in the dhcp::pool to 192.168.100.200
* typing, and adding to /etc/rc.local, the following:
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s '192.168.100.0/24' ! -d '192.168.100.0/24' -j MASQUERADE
And then apply the configuration
puppet agent -vt
If the following error occurs, you have the broken version of npm (on 9 August 2012, nodejs is migrating the node-* and npm packages to use the nodejs binary instead of node: http://packages.debian.org/changelogs/pool/main/n/nodejs/nodejs_0.6.19~dfsg1-4/changelog):
err: /Stage[main]/Nodejs/Package[npm]/ensure: change from purged to present failed: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install npm' returned 100:
Reading package lists...
Building dependency tree...
Reading state information...
Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
 npm : Depends: nodejs but it is not going to be installed
       Depends: nodejs-dev but it is not going to be installed
       ...
       Depends: node-which but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
To fix it, follow these three steps:
* Downgrade to the latest working version (or rebuild a new version of node-semver with this patch: http://lists.alioth.debian.org/pipermail/pkg-javascript-devel/2012-August/004071.html):
dpkg -i nodejs_0.6.19~dfsg1-2_amd64.deb nodejs-dev_0.6.19~dfsg1-2_amd64.deb
echo 'nodejs hold' | dpkg --set-selections
echo 'nodejs-dev hold' | dpkg --set-selections
apt-get install -f -y
apt-get install -y npm
* Apply this patch to the puppet nodejs module (/etc/puppet/modules/nodejs):
diff --git a/manifests/init.pp b/manifests/init.pp
index ee90e54..8eb8b20 100644
--- a/manifests/init.pp
+++ b/manifests/init.pp
@@ -57,7 +57,7 @@ class nodejs(
   package { 'nodejs':
     name    => $nodejs::params::node_pkg,
-    ensure  => present,
+    ensure  => held,
     require => Anchor['nodejs::repo']
   }
@@ -70,7 +70,7 @@ class nodejs(
   if $dev_package and $nodejs::params::dev_pkg {
     package { 'nodejs-dev':
       name    => $nodejs::params::dev_pkg,
-      ensure  => present,
+      ensure  => held,
       require => Anchor['nodejs::repo']
     }
   }
* And then reapply puppet configuration:
puppet agent -vt
The problem should be resolved.
Razor configuration
Rebuild the iPXE configuration, as the one provided by Puppet is completely broken (see https://github.com/puppetlabs/puppetlabs-razor/issues/43 and pull request https://github.com/puppetlabs/puppetlabs-razor/pull/63):
razor config ipxe > /srv/tftp/razor.ipxe
Check that Razor is working:
# razor image
Images
 UUID =>  6KJyat5iiwPFZ5n7Zl9iO7
 Type =>  MicroKernel Image
 ISO Filename =>  rz_mk_prod-image.0.9.0.5.iso
 Path =>  /opt/razor/image/mk/6KJyat5iiwPFZ5n7Zl9iO7
 Status =>  Valid
 Version =>  0.9.0.5
 Built Time =>  Mon Aug 13 19:30:14 +0200 2012

 UUID =>  eZEB9Jzk5LIsCgtKKomUR
 Type =>  OS Install
 ISO Filename =>  debian-wheezy-netboot-amd64.iso
 Path =>  /opt/razor/image/os/eZEB9Jzk5LIsCgtKKomUR
 Status =>  Valid
 OS Name =>  debian-wheezy-netboot-amd64.iso
 OS Version =>  7.0b1

# razor model
Models
 Label             Template      Description          UUID
 controller_model  linux_deploy  Debian Wheezy Model  h11IAW65zRocBkBwgNpFv
 compute_model     linux_deploy  Debian Wheezy Model  g41if8FWuTKqTJ6fPFkUR

# razor broker
Broker Targets:
 Name =>  puppet_broker
 Description =>  puppet
 Plugin =>  puppet
 Servers =>  [puppet2.razor.lan]
 UUID =>  4qhk9fYz33Yzt9iw7Fiwzn

# razor policy
Policies
 #  Enabled  Label              Tags                      Model Label       #/Max  Counter  UUID
 0  true     controller_policy  [memsize_500MiB,nics_2]   controller_model  0/1    0        4t2xedmwRWqrCHSADtelod
 1  true     compute_policy     [memsize_1015MiB,nics_2]  compute_model     0/3    0        4wRWuGts6uS68fkJWz3aX9
Note: there must be two images listed (the microkernel and the OS install ISO).
Add openstack configuration to the manifest
Copy the example manifest into the one used by the puppetmaster:
cat /etc/puppet/modules/examples/openstack_compute_multihost.pp >> /etc/puppet/manifests/site.pp
Note: as of 9 August, the file openstack_compute_multihost.pp is not up to date; use the attached one as an example instead.
Edit /etc/puppet/manifests/site.pp to change the following lines:
The actual private IPs of the controller and compute hosts (see at the beginning of this HOWTO):
$db_host          = '192.168.100.100'  # IP address of the host on which the database will be installed (the controller for instance)
$db_allowed_hosts = ['192.168.100.%']  # IP addresses of all the compute hosts: they need access to the database
The FQDN of the host providing the API server, which must be the same as the <controller.hostname> used above:
# The fqdn of the controller host
$api_server = '<controller.hostname>'
If the interface used for the private network is not eth1, replace eth1 with the actual interface on which the IPs 192.168.100.0/24 are found (for instance br0).
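To identify that interface, you can filter the output of ip for the private prefix. The helper name find_iface_for_prefix is ours, not part of any tool:

```shell
# Print the first interface whose IPv4 address matches a prefix regex,
# reading `ip -4 -o addr show` style lines on stdin.
find_iface_for_prefix() {
    awk -v p="$1" '$4 ~ p { print $2; exit }'
}

# On a live host: ip -4 -o addr show | find_iface_for_prefix '^192\.168\.100\.'
# Demonstration on canned output:
printf '%s\n' \
  '2: eth0    inet 10.142.6.100/24 brd 10.142.6.255 scope global eth0' \
  '3: eth1    inet 192.168.100.100/24 brd 192.168.100.255 scope global eth1' \
  | find_iface_for_prefix '^192\.168\.100\.'
```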
Currently, the network is configured via DHCP. Change it to a static configuration with Puppet by:
* adding this class:
class openstack_network {
  class { "network::interfaces":
    interfaces => {
      "eth0" => {
        "method"  => "static",
        "address" => $ipaddress_eth0,
        "netmask" => "255.255.255.0",
      },
      "eth1" => {
        "method"  => "static",
        "address" => $ipaddress_eth1,
        "netmask" => "255.255.255.0",
        "gateway" => "192.168.100.1",
      },
    },
    auto => ["eth0", "eth1"],
  }
}
* and adding the network configuration at the top of each node definition:
node /mgmt/ {
  $ipaddress_eth0 = "10.142.6.100"
  $ipaddress_eth1 = "192.168.100.100"
  $ipaddress      = $ipaddress_eth0
  class { "openstack_network": }
  ...
}

node /compute/ {
  $nodeid         = split($hostname, 'compute')
  $ipaddress_eth0 = "10.142.6.3$nodeid"
  $ipaddress_eth1 = "192.168.100.3$nodeid"
  $ipaddress      = $ipaddress_eth0
  class { "openstack_network": }
  ...
}
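As a reading aid for the node definitions above: split($hostname, 'compute') extracts the numeric suffix of the hostname, so compute1 ends up with 10.142.6.31/192.168.100.31, compute2 with .32, and so on. The same derivation in shell terms (illustrative only):

```shell
# Derive per-node addresses from hostnames like compute1, compute2, ...
for h in compute1 compute2; do
    id=${h#compute}   # strip the "compute" prefix, leaving "1", "2", ...
    echo "$h -> pubnet 10.142.6.3$id / privnet 192.168.100.3$id"
done
```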
Install the OpenStack cluster
Boot the other nodes over PXE, wait a bit, and watch the nodes appear and get installed automatically.
Razor starts to set up the nodes:
# watch -n5 -- 'razor node ; echo ; razor active_model'
Every 5,0s: razor node ; echo ; razor active_model

Discovered Nodes
 UUID                    Last Checkin  Status  Tags
 3Olux9oaMQ2MBeeYveO37n  9.4 min       B       [memsize_1015MiB,virtualbox_vm,nics_2]
 2SDW5Q7so8cv2xuksi9SQR  3.1 min       B       [memsize_500MiB,virtualbox_vm,nics_2]

Active Models:
 Label                       State        Node UUID               Broker  Bind #  UUID
 Openstack_Compute_Nodes     broker_wait  3Olux9oaMQ2MBeeYveO37n  puppet  3       3WLuWiog6Ln64TqxMPYtqR
 Openstack_Controller_Nodes  preinstall   2SDW5Q7so8cv2xuksi9SQR  puppet  2       2Y8MAweS32tpcJvXmoh6QZ
When the state of the active model is broker_wait, on the "puppet node" do:
# puppetca list --all
  "6fb2ed90c426012ffa7f0800275db40f" (CF:2B:BC:D7:39:8E:2E:8D:E9:26:52:5D:2A:03:DC:96)
+ "puppet2.razor.lan" (BB:6F:03:EF:61:FE:A9:94:C7:76:46:DA:7C:8F:05:D1) (alt names: "DNS:puppet", "DNS:puppet.razor.lan", "DNS:puppet.razor.lan")
# puppetca sign 6fb2ed90c426012ffa7f0800275db40f
notice: Signed certificate request for 6fb2ed90c426012ffa7f0800275db40f
notice: Removing file Puppet::SSL::CertificateRequest 6fb2ed90c426012ffa7f0800275db40f at '/var/lib/puppet/ssl/ca/requests/6fb2ed90c426012ffa7f0800275db40f.pem'
As of 9 August 2012, the broker is a bit broken: the state often stays stuck at broker_wait.
Checking if it really works
On the controller node:
The required services are advertised in the database:
root@controller:~# nova-manage service list
Binary            Host         Zone  Status   State  Updated_At
nova-consoleauth  controller1  nova  enabled  :-)    2012-05-03 08:56:29
nova-scheduler    controller1  nova  enabled  :-)    2012-05-03 08:56:31
nova-cert         controller1  nova  enabled  :-)    2012-05-03 08:56:32
nova-compute      compute1     nova  enabled  :-)    2012-05-03 08:56:50
nova-network      compute1     nova  enabled  :-)    2012-05-03 08:56:49
nova-compute      compute2     nova  enabled  :-)    2012-05-03 08:56:47
nova-network      compute2     nova  enabled  :-)    2012-05-03 08:56:48
A file named 'openrc.sh' has been created in /root on the controller node. Source it and check that the Nova API works:
root@controller1:~# source openrc.sh
root@controller1:~# nova list
+----+------+--------+----------+
| ID | Name | Status | Networks |
+----+------+--------+----------+
+----+------+--------+----------+
root@controller1:~# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+-----------+------+-------+-------------+
| 1  | m1.tiny   | 512       | 0    | 0         |      | 1     | 1.0         |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         |
+----+-----------+-----------+------+-----------+------+-------+-------------+
root@controller:~# nova image-list
+----+------+--------+--------+
| ID | Name | Status | Server |
+----+------+--------+--------+
+----+------+--------+--------+
The OpenStack cluster is quite empty and useless like this; let's upload an image into Glance:
root@controller1:~# wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
…
root@controller1:~# glance add name="CirrOS 0.3" disk_format=qcow2 container_format=ovf < cirros-0.3.0-x86_64-disk.img
Uploading image 'CirrOS 0.3'
================================================================[100%] 7.73M/s, ETA 0h 0m 0s
Added new image with ID: 949bbc5c-e6fa-4ec3-91cb-65cbb6123c54
root@controller:~# glance index
ID                                   Name        Disk Format  Container Format  Size
------------------------------------ ----------- ------------ ----------------- ----------
949bbc5c-e6fa-4ec3-91cb-65cbb6123c54 CirrOS 0.3  qcow2        ovf               9761280
Does it show up in Nova?
root@controller1:~# nova image-list
+--------------------------------------+------------+--------+--------+
| ID                                   | Name       | Status | Server |
+--------------------------------------+------------+--------+--------+
| 949bbc5c-e6fa-4ec3-91cb-65cbb6123c54 | CirrOS 0.3 | ACTIVE |        |
+--------------------------------------+------------+--------+--------+
The nova-network Puppet module creates a private network for the VMs to use. Check that it has been created:
root@controller1:~# nova-manage network list
id  IPv4              IPv6  start address  DNS1  DNS2  VlanID  project  uuid
1   169.254.200.0/24  None  169.254.200.3  None  None  2000    None     71681e09-c072-4281-b5b4-37f26ddc97bf
And create some floating (public) IPs (choose an IP range addressable on your network):
root@controller:~# nova-manage floating create --ip_range 10.142.6.224/27
root@controller:~# nova-manage floating list
None  10.142.6.225  None  nova  eth0
None  10.142.6.226  None  nova  eth0
…
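As a quick sanity check on the pool size: a /27 leaves 5 host bits, i.e. 32 addresses (10.142.6.224 through 10.142.6.255). In shell arithmetic:

```shell
# Number of addresses in a /27 pool such as 10.142.6.224/27.
prefix=27
echo $(( 1 << (32 - prefix) ))   # 2^5 = 32
```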
Now create a keypair (for SSH access) and save the output in a file:
root@controller1:~# nova keypair-add test_keypair > test_keypair.pem
root@controller1:~# chmod 600 test_keypair.pem
Boot an instance and get the console log:
root@controller1:~# nova boot --image 949bbc5c-e6fa-4ec3-91cb-65cbb6123c54 --flavor 1 --key_name test_keypair FirstTest --poll
+-------------------------------------+--------------------------------------+
| Property                            | Value                                |
+-------------------------------------+--------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                               |
| OS-EXT-SRV-ATTR:host                | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                 |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000001                    |
| OS-EXT-STS:power_state              | 0                                    |
| OS-EXT-STS:task_state               | scheduling                           |
| OS-EXT-STS:vm_state                 | building                             |
| accessIPv4                          |                                      |
| accessIPv6                          |                                      |
| adminPass                           | yab49fMqVHJf                         |
| config_drive                        |                                      |
| created                             | 2012-05-03T10:09:00Z                 |
| flavor                              | m1.tiny                              |
| hostId                              |                                      |
| id                                  | 06dd6129-f94a-488d-9670-7171491899e5 |
| image                               | CirrOS 0.3                           |
| key_name                            | meh                                  |
| metadata                            | {}                                   |
| name                                | FirstTest                            |
| progress                            | 0                                    |
| status                              | BUILD                                |
| tenant_id                           | d1c9085272d542eda98f7e08a1a779d6     |
| updated                             | 2012-05-03T10:09:00Z                 |
| user_id                             | cd04222b81004af5b0ff20c840fb629e     |
+-------------------------------------+--------------------------------------+
root@controller1:~# nova console-log FirstTest
…
Allocate a floating IP and associate it with the instance:
root@controller1:~# nova floating-ip-create
+--------------+-------------+----------+------+
| Ip           | Instance Id | Fixed Ip | Pool |
+--------------+-------------+----------+------+
| 10.142.6.225 | None        | None     | nova |
+--------------+-------------+----------+------+
root@controller:~# nova add-floating-ip FirstTest 10.142.6.225
…
Update the rules for the default security group (allow ICMP & SSH):
root@controller1:~# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
root@controller:~# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
We should now be able to ping the instance:
root@controller1:~# ping -c 1 10.142.6.225
PING 10.142.6.225 (10.142.6.225) 56(84) bytes of data.
64 bytes from 10.142.6.225: icmp_req=1 ttl=63 time=0.626 ms

--- 10.142.6.225 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms
And SSH into it with the identity we created before:
root@controller1:~# ssh -i test_keypair.pem cirros@10.142.6.225
$ uname -a
Linux cirros 3.0.0-12-virtual #20-Ubuntu SMP Fri Oct 7 18:19:02 UTC 2011 x86_64 GNU/Linux
$ exit
Connection to 10.142.6.225 closed.
Et voilà!