Installing Openstack Folsom on Debian GNU/Linux

Scope of this howto

This page explains how to install Openstack Folsom (Folsom is the version 2012.2 of Openstack, released early in autumn 2012). The Openstack Folsom packages are still a work in progress, but this howto is constantly reworked to match the installation procedure using them.

The current focus is on a subset of the possible setups: KVM and nova-network. Quantum and Xen are kept for later. The goal is to make this page, and the experimental branches of the packages, evolve in parallel until "it works": errors in the howto will be fixed, and bugs in the packages will be fixed too.

Wheezy vs SID

This howto focuses on installing Folsom on top of Debian Wheezy, though since the next Stable is currently frozen, installing it on SID should work equally well (at the moment, due to a mistake by the maintainers of libvirt, you need to install either libnetcf1 from Wheezy or libvirt0 from Experimental on SID, but otherwise there isn't any problem).

On Debian Wheezy though, a few packages are missing: the Debian source packages nodejs, less.js and python-setuptools-git have to be backported from SID to Wheezy. We expect these to land in the official Debian backports (e.g. backports.debian.org), but this will unfortunately not happen before Wheezy is out: the backports FTP masters have decided that stable backports can't be opened before a stable release is out. If, like me, you would like this to change, and backports to be available during the freeze, get in touch with the Debian backports FTP masters directly. As a consequence, our scripts temporarily also create a small Wheezy backports repository.

Final result goal

This howto aims to provide guidelines to install and set up a multi-node Openstack Compute (aka Nova) environment. It doesn't aim at documenting how to install Swift (the Openstack object storage), which shall be documented elsewhere.

In order to keep things simple, this howto assumes that you will be running a single "proxy node", which will hold all the Openstack API server components. Later on, if your single proxy node gets too much load, you can migrate some of these components to another physical server. More servers (compute or volume nodes) can also be added to join the cloud and make it scale.

As of today, this implies that your proxy node will run:

  • nova-api (compute API)

  • nova-scheduler

  • glance (api and registry: that's the Openstack image storage)

  • keystone (the Openstack authentication server)

  • mysql (used by all daemons)

  • rabbitmq

  • memcached

  • openstack-dashboard (otherwise called horizon: the web GUI)

  • cinder-api

  • quantum-server using the openvswitch plugin (Quantum manages the network in Openstack)

  • ceilometer metering (api, collector and agent-central)

These packages will be installed through a meta-package.

Note that it is also possible to use nova-network, in which case you wouldn't use Quantum.

Technical Choices

We will be using :

DOCUMENT CONVENTIONS

In formatted blocks :

  • command lines starting with a # must be run as root.

  • values between < and > must be replaced by your values.

  • replace <mgmt.host> with the actual hostname of the machine chosen to be the management node.

Before Installing Debian Openstack Folsom : building and configuration

Debian install

Install a base Debian Wheezy. Make sure you have enough space in your /tmp (dozens of GB) so that it can store files with the size of an operating system image. Your /var should also be big enough. If you plan on using cinder to store some VM partitions, make sure to use LVM and to leave enough free space on your volume group.

It might be a good idea to install a mail server, so that you can receive messages for root:

# apt-get install postfix
# echo "root: mymailbox@example.com" >>/etc/aliases
# newaliases
# /etc/init.d/postfix reload

Building the packages

Wheezy backports

There is a small shell script which copies packages from a Debian SID repository and creates a small Debian repository out of them. Just do this:

# git clone http://git.debian.org/git/openstack/openstack-auto-builder.git
# cd openstack-auto-builder
# ./build_backports

Then you can add your newly created repository to your sources.list:

# echo "deb file://$(pwd)/backports/debian wheezy-backports main" >>/etc/apt/sources.list
# apt-get update

Openstack and dependency packages

There are many packages, and their build-time and run-time dependencies are complex, so building all 25+ packages by hand, in the correct order, can be quite painful. At the time of writing, the Folsom packages aren't available in Debian yet; they are only available through our Git repositories on alioth.debian.org. There is however an automatic way to build them all, using the "openstack-auto-builder" script, also available on Alioth. Simply do the following steps to build:

# git clone git://git.debian.org/git/openstack/openstack-auto-builder.git
# cd openstack-auto-builder
# ./build_openstack

This script will automatically install the necessary build-dependencies, git clone the current Experimental packaging trees from Alioth and build all the packages. You might need to set URL=git://anonscm.debian.org/git/openstack if you don't have ssh access to Alioth, and set a GnuPG signing key under GIT_BUILD_OPT, so that the packages and the repository are signed with the key of your choice. Building happens in the "sources" folder, at the same level as the build_openstack script and your Wheezy backports Debian repository.
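For example, an anonymous (non-committer) build with a custom signing key could be launched roughly like this; the key ID is purely illustrative, and the exact option string passed through GIT_BUILD_OPT depends on your git-buildpackage setup:

```shell
# Anonymous Git URL, as described above (no ssh account on Alioth needed)
export URL=git://anonscm.debian.org/git/openstack
# Illustrative GnuPG key ID used to sign packages and the repository
export GIT_BUILD_OPT="-k0xDEADBEEF"
echo "building from $URL"
# ./build_openstack
```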

Note that a few packages may fail to build, due to problems in their unit tests. To work around that, go into such a package's directory and build without running the tests. For example:

# cd source/glance/glance
# DEB_BUILD_OPTIONS=nocheck git-buildpackage

Once built, you can go back to the rest of the building process:

# cd ../../..
# ./build_openstack

Once building is done, the build_openstack script will create a Debian repository for you in the folder named "repo". You can use it in your apt/sources.list:

echo "deb file:///home/username/openstack-auto-builder/repo/debian experimental main" >>/etc/apt/sources.list

Before Installing the proxy node

PREREQUISITES

Things to prepare beforehand :

  • Servers:
    • should have two network interfaces to ensure security. If only one interface is used, the private part is more exposed to attacks coming from the public side.
      • a _public_ one to communicate with the outside world

      • a _private_ one for the guests VLans

  • Network :
    • public network
    • private network. If the machines are not on a LAN, create one with OpenVPN.

    • fixed ip range for guests
    • number of networks for guests
    • network size for guests
    • public “floating” IPs (optional)
    • echo 1 > /proc/sys/net/ipv4/ip_forward

    • echo "nbd max_part=65" >> /etc/modules # to enable key-file, network & metadata injection into instances images

  • Distribution :
    • Debian GNU/Linux wheezy
    • Add experimental to sources.list to use the OpenStack Folsom packages

    • apt-get update
    • Make sure /tmp has enough space to accommodate snapshotting (i.e. you might want to add "/tmp none none none 0 0" in /etc/fstab to disable tmpfs on /tmp)

dbconfig-common

If dbconfig-common isn't installed before the setup of your server, some important questions might only be asked late in the process. It will still work, but it is more convenient to set up dbconfig-common by hand beforehand:

# apt-get install dbconfig-common
# dpkg-reconfigure dbconfig-common

dbconfig-common has the following configuration screens:

01_dbconfig-common_keep_pass.png 02_dbconfig-common_remote_db.png

The dbconfig-common choice is an important one if you plan on using a remote MySQL server: the same choice will have to be made on all of your compute nodes.

Installing

Proxy node install

openstack-proxy-node meta-package

After you have added both the backports and the Openstack Folsom repositories to your sources.list, and run apt-get update, simply do:

# apt-get install openstack-proxy-node

With this single command, all the necessary components for controlling your Openstack cloud will be installed on your server. Altogether, that's more than 240 packages, and a lot of debconf questions will be asked (nearly 100). Here are a few screenshots so that you know what to answer. Yes, that really is a lot of debconf questions, but remember that:

  • debconf answers can be preseeded (and eventually fully preseeded, so the installation can be completely automated)

  • you would otherwise have to configure everything by hand in the configuration files, so this is really a time saver rather than a series of boring, useless questions.

Absolutely all of what is asked with debconf is required to have a working proxy node.
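To give an idea of the preseeding mechanism mentioned above, here is a minimal sketch. The question names below are purely illustrative, not the real ones; you can dump the real names and current answers from an already-configured machine with debconf-get-selections (from the debconf-utils package):

```shell
# The preseed file format is: package  question  type  value
# (question names here are hypothetical examples)
cat > /tmp/openstack-preseed.txt <<'EOF'
keystone keystone/admin-user string admin
keystone keystone/admin-password password changeme
EOF
# Load the answers before installing (needs root):
# debconf-set-selections /tmp/openstack-preseed.txt
wc -l < /tmp/openstack-preseed.txt
```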

General consideration about answering to debconf prompts

A number of packages need the same kinds of answers. For example, glance-common, nova-common, keystone, cinder-common (etc.) all need to use a database, and will ask for the connection information. In this howto, we use glance-common (for the keystone communication), glance-api (for registering the endpoint) and cinder-api (for setting up the database) as examples, but the same applies to the other packages as well. It is important that you understand what you are doing when you see each of the questions, otherwise your proxy node will not work. So we give here detailed explanations of what you should answer (together with screenshots).

Because of the way debconf is currently designed, it isn't (to the best of my knowledge) possible to control the order in which the questions are prompted to the user: the packages will ask for their configuration in a fairly random order. For example, you may be asked to configure quantum and its API endpoint (see below for what this is) before configuring keystone. Do not worry, the packages really will be installed in the correct order. Since it didn't make sense to explain the answers in the order the debconf questions happen to appear, it is left as an exercise to the reader to unscramble them.

MySQL server

The first debconf screen you will see is for setting up the MySQL server root password, as follows:

03_mysql_password.png 04_mysql_password_repeat.png

Keystone

Keystone is not only an auth server for all the Openstack components, but also a catalog of services that the Openstack clients use to know where to contact each service. Both roles need to be configured in order to use Keystone. Keystone uses what it calls an AUTH_TOKEN as a kind of master password for special administrative tasks (like creating an admin user). This AUTH_TOKEN is stored in /etc/keystone/keystone.conf, and is configured through debconf as follows:

41_keystone_auth_token.png

Make sure you use a strong enough password here (it is a good idea to generate one), and remember it, because you will need it when setting up the other components of Openstack. Next, you need to configure a first super admin:
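One simple way to generate such a token, assuming openssl is installed (any strong random-string generator will do just as well):

```shell
# Generate a 32-character random hex token to paste into the debconf prompt
TOKEN=$(openssl rand -hex 16)
echo "$TOKEN"
```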

52_keystone_admin_user_name.png 53_keystone_admin_user_email.png 54_keystone_admin_user_pass.png 55_keystone_admin_user_pass_confirm.png

By default, "admin" is used as tenant name, and "admin" as super user. You will also need to remember the tenant name, admin name and password, because other packages (like glance-common, nova-common, etc.) will need these to talk to keystone.

Keystone also needs to be registered as an endpoint (see below), so that it can be accessed and used by the cloud users.

56_keystone_register_endpoint.png

So you also need to enter the public IP address that the cloud users will contact to reach your keystone instance:

57_keystone_endpoing_ip.png

Finally, enter the region name (see below for what this means):

58_keystone_region_name.png

dbconfig-common

For each package that needs access to a database (i.e. cinder-common, glance-common, keystone, nova-common and quantum-plugin-openvswitch), you will be asked for a database name, an SQL username and a password, plus the SQL root (admin) password (if you choose MySQL or PostgreSQL) so that the database can be created if it doesn't exist. Here is an example with cinder (you will be asked the same questions for the other packages listed above):

67_cinder-common_dbconfig.png 68_cinder-common_db_type.png 69_cinder-common_con_type.png 70_cinder-common_db_admin_pass.png 71_cinder-common_app_pass.png 72_cinder-common_app_pass_confirm.png

The answers to these questions will form an SQL connection directive as follows:

{{
sql_connection = mysql://user:pass@server-hostname:port/dbname
}}

You can also edit this by hand in the different configuration files.
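To make the mapping concrete, here is how the debconf answers assemble into the directive; the values below are example placeholders, not your real credentials:

```shell
# Example values standing in for the debconf answers
DB_USER=cinder; DB_PASS=secret; DB_HOST=localhost; DB_PORT=3306; DB_NAME=cinderdb
# The resulting directive, as it would appear in the configuration file
LINE="sql_connection = mysql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$LINE"
```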

Keystone communication

Most Openstack services need to communicate with Keystone. To do so, they need the service administrator tenant name, username and password. This information is stored in each service configuration file. For example:

{{
/etc/nova/api-paste.ini
/etc/glance/glance-api-paste.ini
/etc/glance/glance-registry.conf
/etc/cinder/api-paste.ini
/etc/quantum/api-paste.ini
}}

Here's an example of debconf prompts asking for such keystone credentials:

26_glance-common_auth_server_hostname.png 27_glance-common_tenant_name.png 28_glance-common_auth_server_username.png 29_glance-common_auth_server_pass.png

These should match your setup of Keystone (explained above).
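For illustration, the keystone-related section these answers end up in (here, glance's api-paste.ini) typically looks something like the following. Treat this as a sketch rather than a reference: the exact filter_factory line and field names can differ between releases, and the password shown is a placeholder.

```ini
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = admin
admin_user = admin
admin_password = changeme
```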

Registering an endpoint

Keystone isn't only for auth: it is also a catalog of services, so that your users can tell which IP address to use when contacting one of the Openstack services. Therefore, each service has to be registered in keystone using the keystone client. The Debian packages automate this task using debconf.

Answer yes to this one:

31_glance-api_register_endpoint.png

Enter here the address of your keystone server (if you used the metapackage openstack-proxy-node, then 127.0.0.1 will work):

32_glance-api_keystone_ip.png

Enter here the AUTH_TOKEN value stored in /etc/keystone/keystone.conf, which you configured using debconf when installing keystone:

33_glance-api_keystone_auth_token.png

Enter the public IP address used to reach your service:

34_glance-api_endpoint_ip.png

Openstack has the concept of regions; enter the name of yours here (if you have only one Openstack cloud, then any name is fine, as long as it is consistent across all the Openstack services):

35_glance-api_region_name.png
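Under the hood, the maintainer scripts drive the keystone client with these answers. A rough sketch of what gets run, with illustrative values (the AUTH_TOKEN and the service id from service-create stay as placeholders):

```shell
KEYSTONE_IP=127.0.0.1    # keystone server address (proxy node: localhost)
PUBLIC_IP=192.0.2.10     # example public address of the service
REGION=regionOne         # example region name
# keystone --token <AUTH_TOKEN> --endpoint http://${KEYSTONE_IP}:35357/v2.0/ \
#     service-create --name glance --type image --description "Glance Image Service"
# keystone --token <AUTH_TOKEN> --endpoint http://${KEYSTONE_IP}:35357/v2.0/ \
#     endpoint-create --region ${REGION} --service_id <id from service-create> \
#     --publicurl http://${PUBLIC_IP}:9292
EP="http://${PUBLIC_IP}:9292"
echo "glance endpoint: $EP (region ${REGION})"
```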

Package specific Debconf questions: glance

Glance-common will ask you which pipeline flavor you want. Choose keystone.

25_glance-common_pipeline_flavor.png

Package specific Debconf questions: nova

Package specific Debconf questions: cinder

Post configuration

Configuring MySQL server

A number of hosts will need access to your MySQL server over the network. For example, all of your nova-compute hosts will need access to this central database. By default in Debian, a MySQL server is only accessible from localhost, so we need to change that. In /etc/mysql/my.cnf, modify the bind-address value to read:

bind-address            = 0.0.0.0

And restart the mysql server :

# /etc/init.d/mysql restart
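The same change can be done as a sed one-liner; it is demonstrated here on a throwaway sample file so the sketch is side-effect free — on the real server, run the sed against /etc/mysql/my.cnf as root before restarting MySQL:

```shell
# Build a sample my.cnf with the Debian default binding
printf '[mysqld]\nbind-address = 127.0.0.1\n' > /tmp/my.cnf.sample
# Rewrite the bind-address line to listen on all interfaces
sed -i 's/^bind-address.*/bind-address = 0.0.0.0/' /tmp/my.cnf.sample
grep '^bind-address' /tmp/my.cnf.sample
```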

Nova

When installing nova-api, you will be prompted for which services you want to enable. Choose osapi_compute for the nova-api management host. If you are running everything on a single server, you also need the metadata service (but if it is a proxy node without a compute service running, then you do not want to activate the metadata service).

In the file /etc/nova/nova.conf, edit the following directives to match:

  • Add these configuration options:

## Network config
# A nova-network on each compute node
multi_host=true
# VLAN manager
network_manager=nova.network.manager.VlanManager
vlan_interface=<the private interface e.g. eth1>
public_interface=<the interface on which the public IP addresses are bound e.g. eth0>
# My IP
my_ip=<the current machine's public IP address>
# Dmz & metadata things
dmz_cidr=169.254.169.254/32
ec2_dmz_host=169.254.169.254
metadata_host=169.254.169.254
## Glance
image_service=nova.image.glance.GlanceImageService
glance_api_servers=<mgmt.host>:9292
## More general things
# The RabbitMQ host
rabbit_host=<mgmt.host>
use-syslog=true
ec2_host=<mgmt.host>

Restart nova services :

# /etc/init.d/nova-api restart
# /etc/init.d/nova-scheduler restart

Now bootstrap nova :

# nova-manage network create private --fixed_range_v4=<10.1.0.0/16> --network_size=<256> --num_networks=<100>
# nova-manage floating create --ip_range=<192.168.0.224/28>
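The nova-manage network parameters must be mutually consistent: num_networks × network_size addresses have to fit inside fixed_range_v4. A quick sanity check for the example values above (a sketch, not part of the installation):

```shell
PREFIX=16; NUM_NETWORKS=100; NETWORK_SIZE=256
# Total addresses available in a /16
RANGE=$((1 << (32 - PREFIX)))
# Addresses consumed by the requested guest networks
NEED=$((NUM_NETWORKS * NETWORK_SIZE))
[ "$NEED" -le "$RANGE" ] && echo "fits: $NEED <= $RANGE"
```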

You should be able to see that nova-scheduler is running (the OK state is ':-)', a failed one shows 'XXX'):

# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-scheduler   openstack04                          nova             enabled    :-)   2012-01-13 17:29:48

openstack-dashboard

Edit /etc/openstack-dashboard/local_settings.py and add

QUANTUM_ENABLED = False

Restart apache:

service apache2 restart

Point your browser to http://<mgmt.host>/, and you'll see the dashboard. You can log in as <admin_user> with password <secret>.

Install the VNC console. Add the following lines to /etc/nova/nova.conf

novncproxy_base_url=http://<mgmt.host>:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=127.0.0.1

Note: <mgmt.host> will be exposed in horizon and must be a name that resolves from the client machine. It cannot be a name that only resolves on the nodes used to run OpenStack.

apt-get install nova-console novnc

Compute nodes

Note that the <mgmt.host> node can also be a compute node; there is no obligation for it to be a separate physical machine. Install the packages required to run instances:

apt-get install -y openstack-compute-node

Make sure to select only the metadata service for nova-api.

Checking that it works

Restart services :

# /etc/init.d/nova-api restart
# /etc/init.d/nova-network restart
# /etc/init.d/nova-compute restart

On the proxy, check that all is running :

# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-scheduler   <mgmt.host>                          nova             enabled    :-)   2012-01-16 12:29:53
nova-compute     compute.host                         nova             enabled    :-)   2012-01-16 12:29:52
nova-network     compute.host                         nova             enabled    :-)   2012-01-16 12:29:49

Using it

To use the nova CLI, you will need to export some environment variables:

export OS_USERNAME=<admin_user>
export OS_PASSWORD=<secret>
export OS_TENANT_NAME=<admin_project>
export OS_AUTH_URL=http://<mgmt.host>:5000/v2.0/
export OS_VERSION=1.1
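It is convenient to keep these in a small rc file and source it before using the CLI; the file name and values here are illustrative, so substitute your own:

```shell
# Write the credentials once (example values)...
cat > /tmp/openrc <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=secret
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://mgmt.example.com:5000/v2.0/
export OS_VERSION=1.1
EOF
# ...then source the file in any shell where you want to use nova
. /tmp/openrc
echo "$OS_USERNAME"
```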

You can now use the nova command line interface :

nova list
+----+------+--------+----------+
| ID | Name | Status | Networks |
+----+------+--------+----------+
+----+------+--------+----------+
# nova image-list
+----+------+--------+--------+
| ID | Name | Status | Server |
+----+------+--------+--------+
+----+------+--------+--------+
# nova flavor-list
+----+-----------+-----------+------+----------+-------+-------------+
| ID |    Name   | Memory_MB | Swap | Local_GB | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+----------+-------+-------------+
| 1  | m1.tiny   | 512       |      | 0        | 1     | 1.0         |
| 2  | m1.small  | 2048      |      | 20       | 1     | 1.0         |
| 3  | m1.medium | 4096      |      | 40       | 2     | 1.0         |
| 4  | m1.large  | 8192      |      | 80       | 4     | 1.0         |
| 5  | m1.xlarge | 16384     |      | 160      | 8     | 1.0         |
+----+-----------+-----------+------+----------+-------+-------------+
# nova keypair-list
+------+-------------+
| Name | Fingerprint |
+------+-------------+
+------+-------------+

There are no instances, no images, and a few default flavors. First we need to get an image and upload it to glance:

# wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
[...]
# glance add name="cirrOS-0.3.0-x86_64" is_public=true container_format=bare disk_format=qcow2 distro="cirrOS-0.3.0-x86_64" < cirros-0.3.0-x86_64-disk.img

To later connect to the instance via ssh, we will need to upload an ssh public key:

# nova keypair-add --pub_key <your_public_key_file.pub> <key_name>
# nova keypair-list
+--------+-------------------------------------------------+
| Name   | Fingerprint                                     |
+--------+-------------------------------------------------+
| my_key | 79:40:46:87:74:3a:0e:01:f4:59:00:1b:3a:94:71:72 |
+--------+-------------------------------------------------+

We can now boot an instance with this image:

# nova boot --poll --flavor 1 --image 78651eea-02f6-4750-945a-4524a77f7da9 --key_name my_key my_first_instance
+------------------------+--------------------------------------+
|        Property        |                Value                 |
+------------------------+--------------------------------------+
| OS-EXT-STS:power_state | 0                                    |
| OS-EXT-STS:task_state  | scheduling                           |
| OS-EXT-STS:vm_state    | building                             |
| RAX-DCF:diskConfig     | MANUAL                               |
| accessIPv4             |                                      |
| accessIPv6             |                                      |
| adminPass              | HMs5tLK3bPCG                         |
| config_drive           |                                      |
| created                | 2012-01-16T14:14:20Z                 |
| flavor                 | m1.tiny                              |
| hostId                 |                                      |
| id                     | 677750ea-0dd4-43c3-8ae0-ef54cb29915f |
| image                  | Ubuntu 11.10 clouding amd64          |
| key_name               | pubkey                               |
| metadata               | {}                                   |
| name                   | my_first_instance                    |
| progress               | None                                 |
| status                 | BUILD                                |
| tenant_id              | 1                                    |
| updated                | 2012-01-16T14:14:20Z                 |
| user_id                | 1                                    |
+------------------------+--------------------------------------+

And after few seconds :

# nova show my_first_instance
+------------------------+----------------------------------------------------------+
|        Property        |                          Value                           |
+------------------------+----------------------------------------------------------+
| OS-EXT-STS:power_state | 1                                                        |
| OS-EXT-STS:task_state  | None                                                     |
| OS-EXT-STS:vm_state    | active                                                   |
| RAX-DCF:diskConfig     | MANUAL                                                   |
| accessIPv4             |                                                          |
| accessIPv6             |                                                          |
| config_drive           |                                                          |
| created                | 2012-01-16T14:14:20Z                                     |
| flavor                 | m1.tiny                                                  |
| hostId                 | 9750641c8c79637e01b342193cfa1efd5961c300b7865dc4a5658bdd |
| id                     | 677750ea-0dd4-43c3-8ae0-ef54cb29915f                     |
| image                  | Ubuntu 11.10 clouding amd64                              |
| key_name               | pubkey                                                   |
| metadata               | {}                                                       |
| name                   | my_first_instance                                        |
| private_0 network      | 10.1.0.3                                                 |
| progress               | None                                                     |
| status                 | ACTIVE                                                   |
| tenant_id              | 1                                                        |
| updated                | 2012-01-16T14:14:37Z                                     |
| user_id                | 1                                                        |
+------------------------+----------------------------------------------------------+

To see the instance console, go to the compute node and look at the file /var/lib/nova/instances/instance-00000001/console.log (if this is the first instance you created; otherwise change 00000001 to the latest available in the folder).
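Picking the latest instance directory can be automated; the sketch below demonstrates the idea on dummy directories so it runs anywhere — on a real compute node, point it at /var/lib/nova/instances instead:

```shell
# Create dummy instance directories standing in for /var/lib/nova/instances
mkdir -p /tmp/instances/instance-00000001 /tmp/instances/instance-00000002
# Zero-padded names sort lexically, so the last one is the newest
ls -d /tmp/instances/instance-* | sort | tail -n 1
```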

We can activate ssh access, create a floating ip, attach it to our instance and ssh into it (with user ubuntu for UEC images):

# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# nova floating-ip-create
+--------------+-------------+----------+
|      Ip      | Instance Id | Fixed Ip |
+--------------+-------------+----------+
| 172.24.4.224 | None        | None     |
+--------------+-------------+----------+
# nova add-floating-ip my_first_instance 172.24.4.224
# ssh -i my_key ubuntu@172.24.4.224
The authenticity of host '172.24.4.224 (172.24.4.224)' can't be established.
RSA key fingerprint is 55:bf:2e:7f:60:ef:ea:72:b4:af:2a:33:6b:2d:8c:62.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.24.4.224' (RSA) to the list of known hosts.
Welcome to Ubuntu 11.10 (GNU/Linux 3.0.0-14-virtual x86_64)

 * Documentation:  https://help.ubuntu.com/

System information as of Mon Jan 16 14:58:15 UTC 2012

System load:  0.59              Processes:           59
Usage of /:   32.6% of 1.96GB   Users logged in:     0
Memory usage: 6%                IP address for eth0: 10.1.0.5
Swap usage:   0%

Graph this data and manage this system at https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest
http://www.ubuntu.com/business/services/cloud

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

/usr/bin/xauth:  file /home/ubuntu/.Xauthority does not exist
To run a command as administrator (user 'root'), use 'sudo <command>'.
See "man sudo_root" for details.

ubuntu@my-first-instance:~$ 

If ssh does not work, check the logs in the horizon "Logs" tab associated with the instance. If it fails to find the metadata with an error that looks like:

DataSourceEc2.py[WARNING]: 'http://169.254.169.254' failed: url error [[Errno 111] Connection refused]

just try restarting the services:

/etc/init.d/nova-compute restart
/etc/init.d/nova-api restart
/etc/init.d/nova-scheduler restart
/etc/init.d/nova-cert restart

The source of the problem is probably that the services were not restarted after a modification of the configuration files, so the changes were not taken into account.

nova-volume

Note: as of September 22nd, 2012, the iscsitarget-dkms package must be installed from SID (http://packages.qa.debian.org/i/iscsitarget/news/20120920T101826Z.html) until it is accepted into Wheezy.

The following instructions must be run on the <mgmt.host> node.

apt-get install lvm2 nova-volume iscsitarget iscsitarget-dkms euca2ools

Installing the guestmount package requires a patch until the corresponding packaging bug is fixed (http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=669246):

apt-get install guestmount

When it fails, apply the following patch.

root@osc2:~# diff -uNr /etc/init.d/zfs-fuse*
--- /etc/init.d/zfs-fuse        2012-02-06 00:04:24.000000000 -0500
+++ /etc/init.d/zfs-fuse.mod    2012-05-16 05:57:35.000000000 -0400
@@ -1,8 +1,8 @@
 #! /bin/bash
 ### BEGIN INIT INFO
 # Provides:          zfs-fuse
-# Required-Start:    fuse $remote_fs
-# Required-Stop:     fuse $remote_fs
+# Required-Start:    $remote_fs
+# Required-Stop:     $remote_fs
 # Default-Start:     S
 # Default-Stop:      0 6
 # Short-Description: Daemon for ZFS support via FUSE

After applying the patch, install again.

apt-get install guestmount

Assuming /dev/<sda3> is an unused disk partition, create a volume group:

pvcreate /dev/<sda3>
vgcreate nova-volumes /dev/<sda3>
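You can then verify that the volume group exists; the guard lets this sketch run even on a machine where lvm2 is not installed:

```shell
# show the nova-volumes volume group, or explain why we cannot
if command -v vgs >/dev/null 2>&1; then
    vgs nova-volumes || echo "volume group nova-volumes not found"
else
    echo "lvm2 tools not installed"
fi
```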

Add the following lines to /etc/nova/nova.conf

iscsi_ip_prefix=192.168.
volume_group=nova-volumes
iscsi_helper=iscsitarget
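Adding these flags can be scripted idempotently. This sketch writes to a scratch file; point CONF at /etc/nova/nova.conf on the real node:

```shell
# append each flag only if that exact line is not already present
CONF=nova.conf.sample          # stand-in for /etc/nova/nova.conf
touch "$CONF"
for flag in 'iscsi_ip_prefix=192.168.' \
            'volume_group=nova-volumes' \
            'iscsi_helper=iscsitarget'; do
    grep -qxF "$flag" "$CONF" || echo "$flag" >> "$CONF"
done
cat "$CONF"
```

Running it a second time adds nothing, so it is safe in a provisioning script.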

Apply the following patch to /etc/init.d/nova-volume to cope with the fact that --volume_group is not accepted as an option by the nova-volume command line.

diff --git a/init.d/nova-volume b/init.d/nova-volume
index 0cdda1b..1d6fa62 100755
--- a/init.d/nova-volume
+++ b/init.d/nova-volume
@@ -45,9 +47,9 @@ do_start()
        fi
 
        # Adds what has been configured in /etc/default/nova-volume
-       if [ -n ${nova_volume_group} ] ; then
-               DAEMON_ARGS="${DAEMON_ARGS} --volume_group=${nova_volume_group}"
-       fi
+#      if [ -n ${nova_volume_group} ] ; then
+#              DAEMON_ARGS="${DAEMON_ARGS} --volume_group=${nova_volume_group}"
+#      fi
 
        start-stop-daemon --start --quiet --background --chuid ${NOVA_USER}:nova --make-pidfile --pidfile $PIDFILE --startas $DAEMON --test > /dev/null \
                || return 1

Fix an absolute path problem in /usr/share/pyshared/nova/rootwrap/volume.py

perl -pi -e 's|/sbin/iscsiadm|/usr/bin/iscsiadm|' /usr/share/pyshared/nova/rootwrap/volume.py

Edit /etc/default/iscsitarget and set

ISCSITARGET_ENABLE=true
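Setting the flag can also be done non-interactively with sed. This sketch works on a scratch copy; point F at /etc/default/iscsitarget on the real node:

```shell
# flip ISCSITARGET_ENABLE to true whatever its current value is
F=iscsitarget.default          # stand-in for /etc/default/iscsitarget
echo 'ISCSITARGET_ENABLE=false' > "$F"
sed -i 's/^ISCSITARGET_ENABLE=.*/ISCSITARGET_ENABLE=true/' "$F"
cat "$F"   # → ISCSITARGET_ENABLE=true
```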

Start the iSCSI services:

service iscsitarget start
service open-iscsi start

Start the nova-volume service

/etc/init.d/nova-volume start

Check that the service is registered (give it about 10 seconds):

nova-manage service list

The output should include a line looking like this:

nova-volume      openstack                            nova             enabled    :-)   2012-05-16 09:38:26
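The health state can also be checked programmatically. This sketch parses the sample line above; on a live node, pipe the real `nova-manage service list` output through the same awk filter:

```shell
# field 5 is the state column: ":-)" means alive, "XXX" means dead
line='nova-volume      openstack      nova      enabled    :-)   2012-05-16 09:38:26'
echo "$line" | awk '$1 == "nova-volume" && $5 == ":-)" { print "nova-volume is up" }'
# prints "nova-volume is up"
```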

Go to the dashboard and you will be able to create a volume and attach it to a running instance. If anything goes wrong, check /var/log/nova/nova-volume.log and /var/log/nova/nova-compute.log first for errors. If you would like to try the euca2ools commands instead of the dashboard, you can use the examples shown at http://docs.openstack.org/trunk/openstack-compute/admin/content/managing-volumes.html (as of May 16th, 2012). Before running these commands, do the following:

log in to the dashboard as <admin_user>
go to Settings
click on "EC2 Credentials"
click on "Download EC2 Credentials"
unzip the downloaded file
source ec2rc.sh

This will define the environment variables necessary for commands such as

euca-describe-volumes

to display the list of active volumes as follows

root@openstack:~/euca2ools# euca-describe-volumes 
VOLUME  vol-00000002     1              nova    available (67af2aec0bb94cc29a43c5bca21ce3d4, openstack, None, None)     2012-05-16T09:54:23.000Z
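The same output can be consumed by scripts. This sketch extracts the volume id and state from a sample line like the one above; on a live system, pipe real `euca-describe-volumes` output instead:

```shell
# VOLUME lines: field 2 is the volume id, field 5 is the state
echo 'VOLUME  vol-00000002     1              nova    available (...)     2012-05-16T09:54:23.000Z' \
    | awk '$1 == "VOLUME" { print $2, $5 }'
# prints "vol-00000002 available"
```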

swift nodes

We assume three machines installed with squeeze, the primary node being the openstack mgmt.host node, and no puppet or puppetmaster installed yet.

swift primary node

apt-get install libmysql-ruby ruby-activerecord-2.3 sqlite3 puppetmaster puppet ruby-sqlite3

Puppet configuration:

diff --git a/puppet/puppet.conf b/puppet/puppet.conf
index b18fae3..ce4ed22 100644
--- a/puppet/puppet.conf
+++ b/puppet/puppet.conf
@@ -7,6 +7,8 @@ factpath=$vardir/lib/facter
 templatedir=$confdir/templates
 prerun_command=/etc/puppet/etckeeper-commit-pre
 postrun_command=/etc/puppet/etckeeper-commit-post
+pluginsync=true
+storeconfigs=true
 
 [master]
 # These are needed when the puppetmaster is run by passenger

Configure database storage for storeconfigs:

diff --git a/puppet/puppet.conf b/puppet/puppet.conf
index ce4ed22..af220e9 100644
--- a/puppet/puppet.conf
+++ b/puppet/puppet.conf
@@ -9,10 +9,19 @@ prerun_command=/etc/puppet/etckeeper-commit-pre
 postrun_command=/etc/puppet/etckeeper-commit-post
 pluginsync=true
 storeconfigs=true
+server=mgmt.host
 
 [master]
 # These are needed when the puppetmaster is run by passenger
 # and can safely be removed if webrick is used.
 ssl_client_header = SSL_CLIENT_S_DN 
 ssl_client_verify_header = SSL_CLIENT_VERIFY
+storeconfigs=true
 
+# Needed for storeconfigs=true
+dbadapter=mysql
+dbname=puppet
+dbuser=puppet
+dbpassword=password
+dbserver=localhost
+dbsocket=/var/run/mysqld/mysqld.sock

Setup mysql for puppet:

mysqladmin create puppet
mysql -e "grant all on puppet.* to 'puppet'@'localhost' identified by 'password';"

Install openstack modules for puppet:

cd /etc/puppet
git clone git://git.labs.enovance.com/openstack-puppet-modules.git modules && cd modules && git submodule init && git submodule update
cp /etc/puppet/modules/swift/examples/multi.pp /etc/puppet/manifests/site.pp

Use tap0 for the LAN instead of eth0:

diff --git a/puppet/manifests/site.pp b/puppet/manifests/site.pp
index a915aea..9b890b0 100644
--- a/puppet/manifests/site.pp
+++ b/puppet/manifests/site.pp
@@ -28,7 +28,7 @@
 $swift_shared_secret='changeme'
 # assumes that the ip address where all of the storage nodes
 # will communicate is on eth1
-$swift_local_net_ip = $ipaddress_eth0
+$swift_local_net_ip = $ipaddress_tap0
 
 Exec { logoutput => true }

Enable puppet autosign for all hosts:

echo '*' > /etc/puppet/autosign.conf

Deploy swift configuration on the proxy:

chown -R puppet:puppet /var/lib/puppet/
puppet agent --certname=swift_storage_1 --server=mgmt.host --verbose --debug --test
/etc/init.d/xinetd reload

swift secondary nodes

Add the following lines to /etc/apt/sources.list:

deb http://ftp.fr.debian.org/debian/ wheezy main 
deb http://ftp.fr.debian.org/debian/ sid main 

apt-get install  python2.7=2.7.2-8  python2.7-minimal=2.7.2-8 libpython2.7=2.7.2-8
echo libpython2.7 hold |  dpkg --set-selections
echo python2.7 hold |  dpkg --set-selections
echo python2.7-minimal hold |  dpkg --set-selections
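You can verify that the holds took effect. This sketch filters sample `dpkg --get-selections` output; run the real command on the node and pipe it through the same filter:

```shell
# print only the packages whose selection state is "hold"
printf '%s\n' 'python2.7	hold' 'libpython2.7	hold' 'bash	install' \
    | awk '$2 == "hold" { print $1 }'
```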

apt-get install puppet ruby-sqlite3

puppet agent --certname=swift_storage_3 --server=openstack-online-0001.dedibox.fr --verbose --debug --test

create swift ring

puppet agent --certname=swift_proxy --server=openstack-online-0001.dedibox.fr --verbose --debug --test

propagate the swift configuration

puppet agent --certname=swift_storage_1 --server=openstack-online-0001.dedibox.fr --verbose --debug --test

puppet agent --certname=swift_storage_2 --server=openstack-online-0001.dedibox.fr --verbose --debug --test

puppet agent --certname=swift_storage_3 --server=openstack-online-0001.dedibox.fr --verbose --debug --test

check that it works

On the proxy / mgmt.host node:

# cd /etc/puppet/modules/swift/ext
# ruby swift.rb
getting credentials: curl -k -v -H "X-Storage-User: test:tester" -H 'X-Storage-Pass: testing' http://127.0.0.1:8080/auth/v1.0
verifying connection auth:  curl -k -v -H "X-Auth-Token: AUTH_tk5d5a63abdf90414eafd890ed710d357b" http://127.0.0.1:8080/v1/AUTH_test
Testing swift: swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat
found containers/objects: 0/0
Uploading file to swift with command: swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing upload my_container /tmp/foo1
tmp/foo1
Downloading file with command: swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing download my_container
tmp/foo1

It actually works: we can upload and download files!

horizon

Edit /etc/keystone/default_catalog.templates like this:

catalog.RegionOne.object-store.publicURL = http://mgmt.host:8080/v1/AUTH_$(tenant_id)s
catalog.RegionOne.object-store.adminURL = http://mgmt.host:8080/
catalog.RegionOne.object-store.internalURL = http://mgmt.host:8080/v1/AUTH_$(tenant_id)s
catalog.RegionOne.object-store.name = 'Object Store Service'
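To see what keystone's templated catalog hands back to clients, here is how the $(tenant_id)s placeholder expands (the tenant id is a made-up example value):

```shell
# substitute a concrete tenant id into the publicURL template
template='http://mgmt.host:8080/v1/AUTH_$(tenant_id)s'
tenant=67af2aec0bb94cc29a43c5bca21ce3d4
echo "$template" | sed "s/\$(tenant_id)s/$tenant/"
# prints http://mgmt.host:8080/v1/AUTH_67af2aec0bb94cc29a43c5bca21ce3d4
```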

Edit the swift proxy configuration (/etc/swift/proxy-server.conf) to authenticate against keystone:

diff --git a/swift/proxy-server.conf b/swift/proxy-server.conf
index 83dda1e..8364fe7 100644
--- a/swift/proxy-server.conf
+++ b/swift/proxy-server.conf
@@ -7,7 +7,8 @@ user = swift

 [pipeline:main]
 # ratelimit?
-pipeline = healthcheck cache tempauth proxy-server
+#pipeline = healthcheck cache tempauth proxy-server
+pipeline = healthcheck cache  tokenauth keystone  proxy-server

 [app:proxy-server]
 use = egg:swift#proxy
@@ -28,3 +29,17 @@ use = egg:swift#healthcheck
 use = egg:swift#memcache
 # multi-proxy config not supported
 memcache_servers = 127.0.0.1:11211
+
+[filter:tokenauth]
+paste.filter_factory = keystone.middleware.auth_token:filter_factory
+service_port = 5000
+service_protocol = http
+service_host = 127.0.0.1
+auth_port = 35357
+auth_protocol = http
+auth_host = 127.0.0.1
+admin_token = ADMIN
+
+[filter:keystone]
+paste.filter_factory = keystone.middleware.swift_auth:filter_factory
+operator_roles = admin, swiftoperator, projectmanager

Then restart the swift proxy:

/etc/init.d/swift-proxy restart

swift command line

apt-get install swift
swift -U $OS_TENANT_NAME:$OS_USERNAME list