Openstack Folsom on Debian GNU/Linux

This page deals with the Folsom release of Openstack; it started as a fork of OpenStackHowto and is meant to be a temporary workspace until the Folsom work is moved to sid. As such, it also describes the work in progress and serves as a synchronization point for involved developers.

The current focus is on a subset of the possible setups: KVM and nova-network. Quantum and Xen are kept for later. The goal is to make this page, and the experimental branches of the packages, evolve in parallel until "it works": errors in the HOWTO will be fixed in the HOWTO, and bugs in the packages will be fixed there.

Building the packages

The packages are many, and their build-time and run-time dependencies are complex. If you wish to rebuild absolutely all of the current Folsom set of packages, the best approach is to use our autobuilder script. Simply do:

git clone git://anonscm.debian.org/git/openstack/openstack-auto-builder.git

Inside this repository, you will find a build-openstack script. Edit the start of it to your liking. In particular, you might need to set URL=git://anonscm.debian.org/git/openstack if you don't have ssh access to Alioth, and set a GnuPG signing key under GIT_BUILD_OPT.

This script will take the current Experimental tree on Alioth and build all packages.

Here's a plan (incomplete) of how to build them and in which order.

Working:

  • python-warlock (builds by itself)
  • python-tox (builds by itself)
  • python-swiftclient (builds by itself)
  • python-keystoneclient (builds by itself)
  • python-novaclient (builds by itself)
  • python-glanceclient (requires python-warlock)
  • swift (requires python-swiftclient)

In progress:

  • python-quantumclient (requires python-cliff which is currently sitting in NEW)
  • nova (requires python-glanceclient, python-tox, python-quantumclient)
  • keystone (requires python-nova, python-swift, python-keystoneclient)
  • glance (requires python-swiftclient, python-swift, python-keystone)

HOWTO: Openstack on Debian GNU/Linux unstable (sid+experimental)

This howto aims to provide guidelines to install & set up a multi-node Openstack-Compute (aka Nova) environment.

The environment includes the following software:

  • a “proxy” or "management" node (host name <mgmt.host>) with the following services :

    • nova-api
    • nova-scheduler
    • glance
    • keystone
    • mysql
    • rabbitmq
    • memcached
    • openstack-dashboard
    • nova-volume
  • one or more pure “compute” (host name <computeNN.host>) nodes with the following services :

    • nova-compute
    • nova-network
    • nova-api (with only the metadata api enabled)

DOCUMENT CONVENTIONS

In formatted blocks :

  • command lines starting with a # must be run as root.

  • values between < and > must be replaced by your values.

PREREQUISITES

Things to prepare beforehand :

  • Servers:
    • should have two network interfaces to ensure security. If only one interface is used the private part is more exposed to attacks coming from the public part.
      • a _public_ one to communicate with the outside world

      • a _private_ one for the guests VLans

  • Network :
    • public network
    • private network. If the machines are not on a LAN, create one with OpenVPN.

    • fixed ip range for guests
    • number of networks for guests
    • network size for guests
    • public “floating” IPs (optional)
    • echo 1 > /proc/sys/net/ipv4/ip_forward

    • echo "nbd max_part=65" >> /etc/modules # to enable key-file, network & metadata injection into instance images

  • Distribution :
    • Debian GNU/Linux wheezy
    • Add experimental to sources.list to use the OpenStack Folsom packages

    • apt-get update
    • Make sure /tmp has enough space to accommodate snapshotting (i.e. you might want to add /tmp none none none 0 0 in /etc/fstab to disable tmpfs on /tmp)
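The two-interface server setup listed above can be sketched in /etc/network/interfaces; the interface names and addresses below are illustrative examples, to be adjusted to your hardware and addressing plan:

```
# Public interface, reachable from the outside world
auto eth0
iface eth0 inet static
    address 203.0.113.10
    netmask 255.255.255.0
    gateway 203.0.113.1

# Private interface, used for the guest VLans
auto eth1
iface eth1 inet static
    address 10.0.0.10
    netmask 255.255.255.0
```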

IMPORTANT

This HOWTO is valid for the OpenStack Nova, Glance, Volume and Keystone packages labelled 2012.2, currently in various stages of availability in Debian experimental, and might need some adjustments with later versions.

Technical Choices

We will be using :

Installation

proxy node:

Hostname

In the following replace <mgmt.host> with the actual hostname of the machine chosen to be the management node.

Packages installation

Install dependencies:

# apt-get install -y mysql-server rabbitmq-server memcached

Note : do not set a MySQL password; if you do set one, add the -p option to all mysql-related commands below.

In /etc/mysql/my.cnf modify the bind-address value to read :

bind-address            = 0.0.0.0

(or better, instead of 0.0.0.0, the IP address of a private interface on which other compute nodes can join the proxy.)

And restart the mysql server :

# /etc/init.d/mysql restart

Now install some OpenStack packages :

# apt-get install -y nova-api nova-scheduler keystone

Answer the debconf questions and choose the proposed defaults.

Configuration

Keystone

The packages create an admin user and grant it the necessary credentials (roles in the openstack parlance) to perform administrative actions. They also ask for an administrative token that can be used to authenticate further commands (the token is stored in the admin_token line of /etc/keystone/keystone.conf).

Set the variables used by the keystone command line to connect to the keystone server with the proper credentials:

export SERVICE_ENDPOINT=http://127.0.0.1:35357/v2.0/
export SERVICE_TOKEN=<TOKEN>
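Rather than copying the token by hand, it can be read back out of the configuration file. A sketch, with a hypothetical helper named get_admin_token, assuming the admin_token = <value> line format used in /etc/keystone/keystone.conf:

```shell
# Hypothetical helper: extract the admin_token value from a
# keystone.conf-style file; usage:
#   export SERVICE_TOKEN=$(get_admin_token /etc/keystone/keystone.conf)
get_admin_token() {
    sed -n 's/^[[:space:]]*admin_token[[:space:]]*=[[:space:]]*//p' "$1"
}
```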

In the file /etc/keystone/keystone.conf, the variable

template_file = /etc/keystone/default_catalog.templates

points to the catalog template file in use. The content of this file must be edited to match the local configuration by substituting localhost with <mgmt.host>.
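The substitution is a simple text replacement; it is demonstrated here on one sample catalog line (the exact line contents in your template may differ):

```shell
# Demonstrate the localhost -> management-host substitution on a sample
# line; in practice, back up the template file and run
#   sed -i 's/localhost/<mgmt.host>/g' /etc/keystone/default_catalog.templates
echo 'catalog.RegionOne.identity.publicURL = http://localhost:$(public_port)s/v2.0' \
  | sed 's/localhost/mgmt.example.com/g'
```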

Then restart keystone to make sure these values are taken into account:

/etc/init.d/keystone restart

Then export the credentials of the admin user for use by the command-line clients:

export OS_USERNAME=<admin_user>
export OS_PASSWORD=<secret>
export OS_TENANT_NAME=<admin_project>
export OS_AUTH_URL=http://<mgmt.host>:5000/v2.0/
export OS_VERSION=1.1

Glance

# apt-get install -y glance

Glance-common will ask you which pipeline flavor you want. Choose keystone. Then it will ask you what the auth server URL is; answer with http://<mgmt.host>:5000. Then paste the service token (or admin token) set in /etc/keystone/keystone.conf (i.e. <ADMIN>) when debconf asks for it.

In the files /etc/glance/glance-api-paste.ini and /etc/glance/glance-registry-paste.ini, comment out

#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%

and add

admin_token = <ADMIN>

And restart the services

/etc/init.d/glance-api restart
/etc/init.d/glance-registry restart
Note: if you have made a mistake in this step, running "# dpkg-reconfigure glance-common" will give you another chance.

Nova

In the file /etc/nova/api-paste.ini :

  • Look for the filter:authtoken section and replace 127.0.0.1 with <mgmt.host> and

admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%

with

admin_token = <ADMIN>

In the file /etc/nova/nova.conf :

  • Add these configuration options :

##  Network config
# A nova-network on each compute node
multi_host=true
# VLAN manager
network_manager=nova.network.manager.VlanManager
vlan_interface=<the private interface eg. eth1>
# My ip
my-ip=<the current machine public IP address>
public_interface=<the interface on which the public IP addresses are bound eg. eth0>
# Dmz & metadata things
dmz_cidr=169.254.169.254/32
ec2_dmz_host=169.254.169.254
metadata_host=169.254.169.254
## More general things
# The RabbitMQ host
rabbit_host=<mgmt.host>
## Glance
image_service=nova.image.glance.GlanceImageService
glance_api_servers=<mgmt.host>:9292
use-syslog=true
ec2_host=<mgmt.host>

Restart nova services :

# /etc/init.d/nova-api restart
# /etc/init.d/nova-scheduler restart

Now bootstrap nova :

# nova-manage db sync
# nova-manage network create private --fixed_range_v4=<10.1.0.0/16> --network_size=<256> --num_networks=<100>
# nova-manage floating create --ip_range=<192.168.0.224/28>
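The example values passed to nova-manage network create should be consistent with each other: the fixed range must be large enough to hold num_networks networks of network_size addresses each. A quick sanity check on the example numbers (they are illustrations, not requirements):

```shell
# 10.1.0.0/16 holds 2^(32-16) = 65536 addresses; 100 networks of
# 256 addresses each need 25600, so the example values fit.
range_size=$(( 1 << (32 - 16) ))
needed=$(( 100 * 256 ))
if [ "$needed" -le "$range_size" ]; then echo "fits"; else echo "too small"; fi
```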

You should be able to see that nova-scheduler is running (the OK state is :-), the failed state is XXX) :

# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-scheduler   openstack04                          nova             enabled    :-)   2012-01-13 17:29:48

openstack-dashboard

# apt-get install -y openstack-dashboard openstack-dashboard-apache

Edit /etc/openstack-dashboard/local_settings.py and add

QUANTUM_ENABLED = False

The dashboard will attempt to create files in /var/www, so give www-data ownership of it:

chown www-data /var/www/

Restart apache:

service apache2 restart

Point your browser to http://<mgmt.host>:8080/, and you'll see the dashboard. You can log in as <admin_user> with password <secret>.

Install the VNC console. Add the following lines to /etc/nova/nova.conf

novncproxy_base_url=http://<mgmt.host>:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=127.0.0.1

Note: <mgmt.host> will be exposed in horizon and must be a name that resolves from the client machine. It cannot be a name that only resolves on the nodes used to run OpenStack.

apt-get install nova-console novnc

compute nodes:

Note that <mgmt.host> can also be a compute node. There is no obligation for it to be a separate physical machine.

Install the packages required to run instances :

apt-get install -y nova-compute nova-api nova-network nova-cert

Compute only nodes

The proxy can be installed as a compute node, in which case no additional configuration is necessary. However, if a new node is installed and will only run instances, the following configuration must be done.

The file /etc/nova/api-paste.ini can be copied verbatim from the proxy host. The file /etc/nova/nova.conf can be copied from the proxy host and modified as follows:

  • The IP of the machine

my-ip=<the current machine ip address>

  • Only load the metadata API on compute-only nodes (the other APIs only need to run on one node of the cluster).

enabled_apis=metadata

Checking that it works

Restart services :

# /etc/init.d/nova-api restart
# /etc/init.d/nova-network restart
# /etc/init.d/nova-compute restart

On the proxy, check that all is running :

# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-scheduler   <mgmt.host>                          nova             enabled    :-)   2012-01-16 12:29:53
nova-compute     compute.host                         nova             enabled    :-)   2012-01-16 12:29:52
nova-network     compute.host                         nova             enabled    :-)   2012-01-16 12:29:49

Using it

To use the nova cli, you will need to export some environment variables :

export OS_USERNAME=<admin_user>
export OS_PASSWORD=<secret>
export OS_TENANT_NAME=<admin_project>
export OS_AUTH_URL=http://<mgmt.host>:5000/v2.0/
export OS_VERSION=1.1

You can now use the nova command line interface :

# nova list
+----+------+--------+----------+
| ID | Name | Status | Networks |
+----+------+--------+----------+
+----+------+--------+----------+
# nova image-list
+----+------+--------+--------+
| ID | Name | Status | Server |
+----+------+--------+--------+
+----+------+--------+--------+
# nova flavor-list
+----+-----------+-----------+------+----------+-------+-------------+
| ID |    Name   | Memory_MB | Swap | Local_GB | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+----------+-------+-------------+
| 1  | m1.tiny   | 512       |      | 0        | 1     | 1.0         |
| 2  | m1.small  | 2048      |      | 20       | 1     | 1.0         |
| 3  | m1.medium | 4096      |      | 40       | 2     | 1.0         |
| 4  | m1.large  | 8192      |      | 80       | 4     | 1.0         |
| 5  | m1.xlarge | 16384     |      | 160      | 8     | 1.0         |
+----+-----------+-----------+------+----------+-------+-------------+
# nova keypair-list
+------+-------------+
| Name | Fingerprint |
+------+-------------+
+------+-------------+

There are no instances, no images, and a few predefined flavors. First we need to get an image and upload it to glance :

# wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
[...]
#  glance add name="cirrOS-0.3.0-x86_64" is_public=true      container_format=bare disk_format=qcow2      distro="cirrOS-0.3.0-x86_64" < cirros-0.3.0-x86_64-disk.img

To later connect to the instance via ssh, we will need to upload an ssh public key :

# nova keypair-add --pub_key <your_public_key_file.pub> <key_name>
# nova keypair-list
+--------+-------------------------------------------------+
| Name   | Fingerprint                                     |
+--------+-------------------------------------------------+
| my_key | 79:40:46:87:74:3a:0e:01:f4:59:00:1b:3a:94:71:72 |
+--------+-------------------------------------------------+

We can now boot an instance from this image :

# nova boot --poll --flavor 1 --image 78651eea-02f6-4750-945a-4524a77f7da9 --key_name my_key my_first_instance
+------------------------+--------------------------------------+
|        Property        |                Value                 |
+------------------------+--------------------------------------+
| OS-EXT-STS:power_state | 0                                    |
| OS-EXT-STS:task_state  | scheduling                           |
| OS-EXT-STS:vm_state    | building                             |
| RAX-DCF:diskConfig     | MANUAL                               |
| accessIPv4             |                                      |
| accessIPv6             |                                      |
| adminPass              | HMs5tLK3bPCG                         |
| config_drive           |                                      |
| created                | 2012-01-16T14:14:20Z                 |
| flavor                 | m1.tiny                              |
| hostId                 |                                      |
| id                     | 677750ea-0dd4-43c3-8ae0-ef54cb29915f |
| image                  | Ubuntu 11.10 clouding amd64          |
| key_name               | pubkey                               |
| metadata               | {}                                   |
| name                   | my_first_instance                    |
| progress               | None                                 |
| status                 | BUILD                                |
| tenant_id              | 1                                    |
| updated                | 2012-01-16T14:14:20Z                 |
| user_id                | 1                                    |
+------------------------+--------------------------------------+

And after a few seconds :

# nova show my_first_instance
+------------------------+----------------------------------------------------------+
|        Property        |                          Value                           |
+------------------------+----------------------------------------------------------+
| OS-EXT-STS:power_state | 1                                                        |
| OS-EXT-STS:task_state  | None                                                     |
| OS-EXT-STS:vm_state    | active                                                   |
| RAX-DCF:diskConfig     | MANUAL                                                   |
| accessIPv4             |                                                          |
| accessIPv6             |                                                          |
| config_drive           |                                                          |
| created                | 2012-01-16T14:14:20Z                                     |
| flavor                 | m1.tiny                                                  |
| hostId                 | 9750641c8c79637e01b342193cfa1efd5961c300b7865dc4a5658bdd |
| id                     | 677750ea-0dd4-43c3-8ae0-ef54cb29915f                     |
| image                  | Ubuntu 11.10 clouding amd64                              |
| key_name               | pubkey                                                   |
| metadata               | {}                                                       |
| name                   | my_first_instance                                        |
| private_0 network      | 10.1.0.3                                                 |
| progress               | None                                                     |
| status                 | ACTIVE                                                   |
| tenant_id              | 1                                                        |
| updated                | 2012-01-16T14:14:37Z                                     |
| user_id                | 1                                                        |
+------------------------+----------------------------------------------------------+

To see the instance console, go to the compute node and look at the file /var/lib/nova/instances/instance-00000001/console.log (if this is the first instance you created; otherwise change 00000001 to the highest number available in the folder).
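Picking the highest-numbered instance directory can be scripted; a sketch, with a hypothetical helper named latest_instance that assumes the standard /var/lib/nova/instances layout:

```shell
# Hypothetical helper: print the path of the highest-numbered instance
# directory; usage:
#   cat "$(latest_instance)/console.log"
latest_instance() {
    ls -d "${1:-/var/lib/nova/instances}"/instance-* 2>/dev/null | sort | tail -n 1
}
```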

We can activate ssh access, create a floating ip, attach it to our instance and ssh into it (with user ubuntu for UEC images):

# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# nova floating-ip-create
+--------------+-------------+----------+
|      Ip      | Instance Id | Fixed Ip |
+--------------+-------------+----------+
| 172.24.4.224 | None        | None     |
+--------------+-------------+----------+
# nova add-floating-ip my_first_instance 172.24.4.224
# ssh -i my_key ubuntu@172.24.4.224
The authenticity of host '172.24.4.224 (172.24.4.224)' can't be established.
RSA key fingerprint is 55:bf:2e:7f:60:ef:ea:72:b4:af:2a:33:6b:2d:8c:62.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.24.4.224' (RSA) to the list of known hosts.
Welcome to Ubuntu 11.10 (GNU/Linux 3.0.0-14-virtual x86_64)

 * Documentation:  https://help.ubuntu.com/

System information as of Mon Jan 16 14:58:15 UTC 2012

System load:  0.59              Processes:           59
Usage of /:   32.6% of 1.96GB   Users logged in:     0
Memory usage: 6%                IP address for eth0: 10.1.0.5
Swap usage:   0%

Graph this data and manage this system at https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest
http://www.ubuntu.com/business/services/cloud

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

/usr/bin/xauth:  file /home/ubuntu/.Xauthority does not exist
To run a command as administrator (user 'root'), use 'sudo <command>'.
See &quot;man sudo_root&quot; for details.

ubuntu@my-first-instance:~$ 

If ssh does not work, check the logs in the horizon "Logs" tab associated with the instance. If it fails to find the metadata with an error that looks like:

DataSourceEc2.py[WARNING]: 'http://169.254.169.254' failed: url error [[Errno 111] Connection refused]

just restart the services :

/etc/init.d/nova-compute restart
/etc/init.d/nova-api restart
/etc/init.d/nova-scheduler restart
/etc/init.d/nova-cert restart

The source of the problem is probably that the services were not restarted after a modification of the configuration files, so the changes were not taken into account.

nova-volume

Note: as of September 22nd, 2012, the iscsitarget-dkms package must be installed from sid http://packages.qa.debian.org/i/iscsitarget/news/20120920T101826Z.html until it is accepted in wheezy.

The following instructions must be run on the <mgmt.host> node.

apt-get install lvm2 nova-volume iscsitarget iscsitarget-dkms euca2ools

Installing the guestmount package requires a patch until the corresponding packaging bug is fixed http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=669246

apt-get install guestmount

When it fails, apply the following patch.

root@osc2:~# diff -uNr /etc/init.d/zfs-fuse*
--- /etc/init.d/zfs-fuse        2012-02-06 00:04:24.000000000 -0500
+++ /etc/init.d/zfs-fuse.mod    2012-05-16 05:57:35.000000000 -0400
@@ -1,8 +1,8 @@
 #! /bin/bash
 ### BEGIN INIT INFO
 # Provides:          zfs-fuse
-# Required-Start:    fuse $remote_fs
-# Required-Stop:     fuse $remote_fs
+# Required-Start:    $remote_fs
+# Required-Stop:     $remote_fs
 # Default-Start:     S
 # Default-Stop:      0 6
 # Short-Description: Daemon for ZFS support via FUSE

After applying the patch, install again.

apt-get install guestmount

Assuming /dev/<sda3> is an unused disk partition, create a volume group:

pvcreate /dev/<sda3>
vgcreate nova-volumes /dev/<sda3>

Add the following lines to /etc/nova/nova.conf

iscsi_ip_prefix=192.168.
volume_group=nova-volumes
iscsi_helper=iscsitarget

Apply the following patch to cope with the fact that --volume-group is not accepted as an option by the nova-volume command line.

diff --git a/init.d/nova-volume b/init.d/nova-volume
index 0cdda1b..1d6fa62 100755
--- a/init.d/nova-volume
+++ b/init.d/nova-volume
@@ -45,9 +47,9 @@ do_start()
        fi
 
        # Adds what has been configured in /etc/default/nova-volume
-       if [ -n ${nova_volume_group} ] ; then
-               DAEMON_ARGS="${DAEMON_ARGS} --volume_group=${nova_volume_group}"
-       fi
+#      if [ -n ${nova_volume_group} ] ; then
+#              DAEMON_ARGS="${DAEMON_ARGS} --volume_group=${nova_volume_group}"
+#      fi
 
        start-stop-daemon --start --quiet --background --chuid ${NOVA_USER}:nova --make-pidfile --pidfile $PIDFILE --startas $DAEMON --test > /dev/null \
                || return 1

Fix an absolute path problem in /usr/share/pyshared/nova/rootwrap/volume.py

perl -pi -e 's|/sbin/iscsiadm|/usr/bin/iscsiadm|' /usr/share/pyshared/nova/rootwrap/volume.py

Edit /etc/default/iscsitarget and set

ISCSITARGET_ENABLE=true

Run the iscsi services :

service iscsitarget start
service open-iscsi start

Start the nova-volume service

/etc/init.d/nova-volume start

Check that the service registers (give it about 10 seconds):

nova-manage service list

The output should contain a line looking like this:

nova-volume      openstack                            nova             enabled    :-)   2012-05-16 09:38:26

Go to the dashboard and you will be able to create a volume and attach it to a running instance. If anything goes wrong, check the /var/log/nova/nova-volume.log and /var/log/nova/nova-compute.log files first for errors. If you would like to try the euca2ools commands instead of the dashboard you can use the examples shown at http://docs.openstack.org/trunk/openstack-compute/admin/content/managing-volumes.html (as of May 16th, 2012). Before running these commands you need to do the following:

  • log in to the dashboard as <admin_user>
  • go to Settings
  • click on "EC2 Credentials"
  • click on "Download EC2 Credentials"
  • unzip the downloaded file
  • source ec2rc.sh

This will define the environment variables necessary for commands such as

euca-describe-volumes

to display the list of active volumes as follows

root@openstack:~/euca2ools# euca-describe-volumes 
VOLUME  vol-00000002     1              nova    available (67af2aec0bb94cc29a43c5bca21ce3d4, openstack, None, None)     2012-05-16T09:54:23.000Z

swift nodes:

Assuming three machines installed with squeeze, the primary node being the openstack mgmt.host node, and no puppet or puppetmaster installed yet.

swift primary node

apt-get install libmysql-ruby ruby-activerecord-2.3 sqlite3 puppetmaster puppet ruby-sqlite3

Puppet configuration:

diff --git a/puppet/puppet.conf b/puppet/puppet.conf
index b18fae3..ce4ed22 100644
--- a/puppet/puppet.conf
+++ b/puppet/puppet.conf
@@ -7,6 +7,8 @@ factpath=$vardir/lib/facter
 templatedir=$confdir/templates
 prerun_command=/etc/puppet/etckeeper-commit-pre
 postrun_command=/etc/puppet/etckeeper-commit-post
+pluginsync=true
+storeconfigs=true
 
 [master]
 # These are needed when the puppetmaster is run by passenger

commit 507105065306433eec8f03dd72ab52ccaf268ccc
Author: root <root@sd-16961.dedibox.fr>
Date:   Mon Apr 2 15:04:53 2012 +0200

    configure database storage

diff --git a/puppet/puppet.conf b/puppet/puppet.conf
index ce4ed22..af220e9 100644
--- a/puppet/puppet.conf
+++ b/puppet/puppet.conf
@@ -9,10 +9,19 @@ prerun_command=/etc/puppet/etckeeper-commit-pre
 postrun_command=/etc/puppet/etckeeper-commit-post
 pluginsync=true
 storeconfigs=true
+server=mgmt.host
 
 [master]
 # These are needed when the puppetmaster is run by passenger
 # and can safely be removed if webrick is used.
 ssl_client_header = SSL_CLIENT_S_DN 
 ssl_client_verify_header = SSL_CLIENT_VERIFY
+storeconfigs=true
 
+# Needed for storeconfigs=true
+dbadapter=mysql
+dbname=puppet
+dbuser=puppet
+dbpassword=password
+dbserver=localhost
+dbsocket=/var/run/mysqld/mysqld.sock

Setup mysql for puppet:

mysqladmin create puppet
mysql -e "grant all on puppet.* to 'puppet'@'localhost' identified by 'password';"

Install openstack modules for puppet:

cd /etc/puppet
git clone git://git.labs.enovance.com/openstack-puppet-modules.git modules && cd modules && git submodule init && git submodule update
cp /etc/puppet/modules/swift/examples/multi.pp /etc/puppet/manifests/site.pp

commit 8eb77223e25bfff1284612417efedd228e0c6696
Author: root <root@sd-16961.dedibox.fr>
Date:   Mon Apr 2 15:37:19 2012 +0200

    use tap0 for lan

diff --git a/puppet/manifests/site.pp b/puppet/manifests/site.pp
index a915aea..9b890b0 100644
--- a/puppet/manifests/site.pp
+++ b/puppet/manifests/site.pp
@@ -28,7 +28,7 @@
 $swift_shared_secret='changeme'
 # assumes that the ip address where all of the storage nodes
 # will communicate is on eth1
-$swift_local_net_ip = $ipaddress_eth0
+$swift_local_net_ip = $ipaddress_tap0
 
 Exec { logoutput => true }

Enable puppet autosign for all hosts:

echo '*' > /etc/puppet/autosign.conf

Deploy swift configuration on the proxy:

chown -R puppet:puppet /var/lib/puppet/
puppet agent --certname=swift_storage_1 --server=mgmt.host --verbose --debug --test
/etc/init.d/xinetd reload

swift secondary nodes

Add the following lines to /etc/apt/sources.list :

deb http://ftp.fr.debian.org/debian/ wheezy main
deb http://ftp.fr.debian.org/debian/ sid main

apt-get install  python2.7=2.7.2-8  python2.7-minimal=2.7.2-8 libpython2.7=2.7.2-8
echo libpython2.7 hold |  dpkg --set-selections
echo python2.7 hold |  dpkg --set-selections
echo python2.7-minimal hold |  dpkg --set-selections

apt-get install puppet ruby-sqlite3

puppet agent --certname=swift_storage_3 --server=openstack-online-0001.dedibox.fr --verbose --debug --test

create swift ring

puppet agent --certname=swift_proxy --server=openstack-online-0001.dedibox.fr --verbose --debug --test

propagate the swift configuration

puppet agent --certname=swift_storage_1 --server=openstack-online-0001.dedibox.fr --verbose --debug --test

puppet agent --certname=swift_storage_2 --server=openstack-online-0001.dedibox.fr --verbose --debug --test

puppet agent --certname=swift_storage_3 --server=openstack-online-0001.dedibox.fr --verbose --debug --test

check that it works

On proxy / mgmt.host :

# cd /etc/puppet/modules/swift/ext
# ruby swift.rb
getting credentials: curl -k -v -H "X-Storage-User: test:tester" -H 'X-Storage-Pass: testing' http://127.0.0.1:8080/auth/v1.0
verifying connection auth:  curl -k -v -H "X-Auth-Token: AUTH_tk5d5a63abdf90414eafd890ed710d357b" http://127.0.0.1:8080/v1/AUTH_test
Testing swift: swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat
found containers/objects: 0/0
Uploading file to swift with command: swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing upload my_container /tmp/foo1
tmp/foo1
Downloading file with command: swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing download my_container
tmp/foo1

Dude!!!! It actually seems to work, we can upload and download files!!!!

horizon

Edit /etc/keystone/default_catalog.templates like this:

catalog.RegionOne.object-store.publicURL = http://mgmt.host:8080/v1/AUTH_$(tenant_id)s
catalog.RegionOne.object-store.adminURL = http://mgmt.host:8080/
catalog.RegionOne.object-store.internalURL = http://mgmt.host:8080/v1/AUTH_$(tenant_id)s
catalog.RegionOne.object-store.name = 'Object Store Service'

diff --git a/swift/proxy-server.conf b/swift/proxy-server.conf
index 83dda1e..8364fe7 100644
--- a/swift/proxy-server.conf
+++ b/swift/proxy-server.conf
@@ -7,7 +7,8 @@ user = swift

 [pipeline:main]
 # ratelimit?
-pipeline = healthcheck cache tempauth proxy-server
+#pipeline = healthcheck cache tempauth proxy-server
+pipeline = healthcheck cache  tokenauth keystone  proxy-server

 [app:proxy-server]
 use = egg:swift#proxy
@@ -28,3 +29,17 @@ use = egg:swift#healthcheck
 use = egg:swift#memcache
 # multi-proxy config not supported
 memcache_servers = 127.0.0.1:11211
+
+[filter:tokenauth]
+paste.filter_factory = keystone.middleware.auth_token:filter_factory
+service_port = 5000
+service_protocol = http
+service_host = 127.0.0.1
+auth_port = 35357
+auth_protocol = http
+auth_host = 127.0.0.1
+admin_token = ADMIN
+
+[filter:keystone]
+paste.filter_factory = keystone.middleware.swift_auth:filter_factory
+operator_roles = admin, swiftoperator, projectmanager

/etc/init.d/swift-proxy restart

swift command line

apt-get install swift
swift -U $OS_TENANT_NAME:$OS_USERNAME list