= HOWTO: OpenStack on Debian GNU/Linux testing =

This HOWTO aims to provide guidelines to install and set up a multi-node OpenStack-Compute (aka Nova) environment. It covers OpenStack Essex (i.e. v2012.1) running on Debian testing. If you are looking for the HOWTO about Folsom, please go here: https://wiki.debian.org/OpenStackHowto/Folsom

The environment includes the following software:

 * a “proxy” or "management" node (host name '''<mgmt.host>''') with the following services:
  * nova-api
  * nova-scheduler
  * glance
  * keystone
  * mysql
  * rabbitmq
  * memcached
  * openstack-dashboard
  * nova-volume
 * one or more pure “compute” nodes (host name '''<computeNN.host>''') with the following services:
  * nova-compute
  * nova-network
  * nova-api (with only the metadata API enabled)

== Document conventions ==

 * Command lines starting with a '''#''' must be run as root.
 * Values between '''<''' and '''>''' must be replaced by your values.

== Prerequisites ==

Things to prepare beforehand:

 * Servers:
  * should have two network interfaces to ensure security. If only one interface is used, the private part is more exposed to attacks coming from the public part:
   * a _public_ one to communicate with the outside world
   * a _private_ one for the guests' VLans
 * Network:
  * public network
  * private network. If the machines are not on a LAN, [[https://labs.enovance.com/projects/openstack/wiki/L2-openvpn|create one with OpenVPN]].
  * fixed IP range for guests
  * number of networks for guests
  * network size for guests
  * public “floating” IPs (optional)

{{{
echo 1 > /proc/sys/net/ipv4/ip_forward
echo "nbd max_part=65" >> /etc/modules # to enable key-file, network & metadata injection into instances images
}}}
 * Distribution:
  * Debian GNU/Linux Squeeze (there are no OpenStack packages for Squeeze as of October 24th, 2012, but backports may be available in the future)
  * Make sure /tmp has enough space to accommodate snapshotting (i.e. you might want to add {{{/tmp none none none 0 0}}} in /etc/fstab to disable tmpfs on /tmp)
  * Add wheezy in the /etc/apt/sources.list
  * apt-get update
 * As of May 15th, 2012 you must do the following, because the most recent python-prettytable is partly broken:

{{{
apt-get install python-prettytable=0.5-1
echo python-prettytable hold | dpkg --set-selections
}}}

Note from zigo: I think that the python-prettytable issue has been fixed, so using v0.6 from Wheezy should be OK.

== Important ==

This HOWTO is valid for the OpenStack Nova, Glance, Volume and Keystone packages labelled 2012.1, currently available in Debian testing (Wheezy), and might need some adjustments with later versions.
== Technical Choices ==
We will be using:
 * [[http://docs.openstack.org/diablo/openstack-compute/admin/content/networking-options.html|Multi-host VLan networking mode]]
 * KVM as hypervisor
 * MySql as database backend (for nova)
== Installation ==
=== Proxy Node ===
==== Hostname ====
In the following, replace '''<mgmt.host>''' with the actual hostname of the machine chosen to be the management node.
==== Packages installation ====

Install dependencies:

{{{# apt-get install -y mysql-server rabbitmq-server memcached}}}

Note: do not set a MySQL root password; if you do, add the -p option to all mysql-related commands below.
In '''/etc/mysql/my.cnf''' modify the '''bind-address''' value to be 0.0.0.0:

{{{# sed -i "s/127.0.0.1/0.0.0.0/" /etc/mysql/my.cnf}}}
(Or better: instead of 0.0.0.0, use the IP address of a private interface on which the other compute nodes can reach the proxy.)
And restart the MySQL server:
{{{# /etc/init.d/mysql restart}}}
Now install some OpenStack packages:

{{{# apt-get install -y nova-api nova-scheduler keystone}}}

Answer the debconf questions and choose the proposed defaults.
==== Configuration ====

===== Keystone =====
An admin user is created and given the necessary credentials (roles, in OpenStack parlance) to perform administrative actions.

Edit /etc/keystone/keystone.conf, replace {{{admin_token=ADMIN}}} with a secret admin token {{{<ADMIN>}}} of your choosing, and restart keystone:

{{{
# sed -i 's/ADMIN/<ADMIN>/' /etc/keystone/keystone.conf
# service keystone restart
}}}

Set the variables used by the keystone command line to connect to the keystone server with the proper credentials:

{{{
export SERVICE_ENDPOINT=http://127.0.0.1:35357/v2.0/
export SERVICE_TOKEN=<ADMIN>
}}}

Many keystone arguments require numerical IDs that are impractical to remember. The following function retrieves the numerical ID and stores it in a variable:

{{{
function get_id () {
    echo `$@ | awk '/ id / { print $4 }'`
}
}}}

Create a tenant:

{{{ADMIN_TENANT=$(get_id keystone tenant-create --name <admin_project>)}}}

Create a user with its password & email:

{{{ADMIN_USER=$(get_id keystone user-create --name <admin_user> --pass <secret> --email <admin@example.com>)}}}

Create roles for admins:

{{{
keystone role-create --name admin
keystone role-create --name KeystoneAdmin
keystone role-create --name KeystoneServiceAdmin
}}}

Grant admin rights to <admin_user> on tenant <admin_project>:

{{{
ADMIN_ROLE=$(keystone role-list|awk '/ admin / { print $2 }')
keystone user-role-add --user $ADMIN_USER --role $ADMIN_ROLE --tenant_id $ADMIN_TENANT
KEYSTONEADMIN_ROLE=$(keystone role-list|awk '/ KeystoneAdmin / { print $2 }')
keystone user-role-add --user $ADMIN_USER --role $KEYSTONEADMIN_ROLE --tenant_id $ADMIN_TENANT
KEYSTONESERVICEADMIN_ROLE=$(keystone role-list|awk '/ KeystoneServiceAdmin / { print $2 }')
keystone user-role-add --user $ADMIN_USER --role $KEYSTONESERVICEADMIN_ROLE --tenant_id $ADMIN_TENANT
}}}

In {{{/etc/keystone/keystone.conf}}}, the variable

{{{template_file = /etc/keystone/default_catalog.templates}}}

points to the catalog template file currently in use.
The content of this file must be edited to match the local configuration by substituting '''localhost''' with <mgmt.host>; then restart keystone to make sure these values are taken into account:

{{{
# sed -i 's/localhost/<mgmt.host>/' /etc/keystone/default_catalog.templates
# /etc/init.d/keystone restart
}}}

{{{
export OS_USERNAME=<admin_user>
export OS_PASSWORD=<secret>
export OS_TENANT_NAME=<admin_project>
export OS_AUTH_URL=http://<mgmt.host>:5000/v2.0/
export OS_VERSION=1.1
}}}

===== Glance =====

{{{# apt-get install -y glance}}}

Glance-common will ask you which pipeline flavor you want: choose ''keystone''. Then it will ask you what the ''auth server URL'' is: answer with ''http://<mgmt.host>:5000''. Then paste the service token (or admin token) set in /etc/keystone/keystone.conf (i.e. '''<ADMIN>''') when debconf asks for it.

In '''BOTH''' of these files, '''/etc/glance/glance-api-paste.ini''' and '''/etc/glance/glance-registry-paste.ini''', comment out

{{{
#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%
}}}

and add

{{{
admin_token = <ADMIN>
}}}

And restart the services:

{{{
/etc/init.d/glance-api restart
/etc/init.d/glance-registry restart
}}}
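Going back to the {{{get_id}}} helper defined in the Keystone section: it can be exercised without a live keystone server by faking the tabular output it parses. In the sketch below, the {{{fake_keystone}}} function and the hex ID are made up for illustration; only the awk extraction is the real mechanism:

```shell
# Fake a `keystone tenant-create`-style Property/Value table
# (illustrative output only; a real keystone command prints the same shape)
fake_keystone() {
  cat <<'EOF'
+----------+----------------------------------+
| Property | Value                            |
+----------+----------------------------------+
| id       | 0a1b2c3d4e5f67890a1b2c3d4e5f6789 |
| name     | demo                             |
+----------+----------------------------------+
EOF
}

# Same helper as in the HOWTO: grab field 4 of the line containing " id "
function get_id () {
  echo `$@ | awk '/ id / { print $4 }'`
}

DEMO_ID=$(get_id fake_keystone)
echo "$DEMO_ID"   # prints: 0a1b2c3d4e5f67890a1b2c3d4e5f6789
```

The same {{{awk '/ id / { print $4 }'}}} trick works for any keystone command that prints a Property/Value table, because the {{{id}}} row always puts the value in the fourth whitespace-separated field.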
If you have made a mistake in this step, running {{{# dpkg-reconfigure glance-common}}} will give you one more chance.
In '''/etc/nova/api-paste.ini''':

 * Look for the '''filter:authtoken''' section and comment out

{{{
#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%
}}}

 and add

{{{
admin_token = <ADMIN>
}}}

 * Change the instances of 127.0.0.1 to <mgmt.host>:

{{{# sed -i 's/127.0.0.1/<mgmt.host>/' /etc/nova/api-paste.ini}}}
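After these edits, the authtoken-related part of '''/etc/nova/api-paste.ini''' should look roughly like this (a sketch only: the factory line and surrounding keys follow the file shipped with the Essex package and may differ slightly on your system):

```ini
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_host = <mgmt.host>
auth_host = <mgmt.host>
auth_uri = http://<mgmt.host>:5000/
#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%
admin_token = <ADMIN>
```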
In '''/etc/nova/nova.conf''' set:

{{{
multi_host=true
network_manager=nova.network.manager.VlanManager
vlan_interface=<the private interface, e.g. eth1>
my-ip=<the current machine's public IP address>
public_interface=<the interface on which the public IP addresses are bound, e.g. eth0>
dmz_cidr=169.254.169.254/32
ec2_dmz_host=169.254.169.254
metadata_host=169.254.169.254
rabbit_host=<mgmt.host>
image_service=nova.image.glance.GlanceImageService
glance_api_servers=<mgmt.host>:9292
use-syslog=true
ec2_host=<mgmt.host>
}}}

 * Change the '''localhost''' in the sql_connection line to <mgmt.host>.

Create/sync the nova-manage database, a prerequisite for starting nova-scheduler:

{{{
# nova-manage db sync
# nova-manage floating create --ip_range=<192.168.0.224/28>
}}}
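To recap, the options accumulated in '''/etc/nova/nova.conf''' over this section look roughly like this (a sketch: the placeholder values are this HOWTO's examples, and the sql_connection URL is written in the standard SQLAlchemy form that debconf generates):

```ini
multi_host=true
network_manager=nova.network.manager.VlanManager
vlan_interface=<eth1>
my-ip=<public IP of this machine>
public_interface=<eth0>
dmz_cidr=169.254.169.254/32
ec2_dmz_host=169.254.169.254
metadata_host=169.254.169.254
sql_connection=mysql://<nova_user>:<nova_secret>@<mgmt.host>/nova
rabbit_host=<mgmt.host>
image_service=nova.image.glance.GlanceImageService
glance_api_servers=<mgmt.host>:9292
use-syslog=true
ec2_host=<mgmt.host>
```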
===== openstack-dashboard =====

{{{# apt-get install -y openstack-dashboard openstack-dashboard-apache}}}

Edit '''/etc/openstack-dashboard/local_settings.py''' and add

{{{QUANTUM_ENABLED = False}}}

The panel will attempt to create files in /var/www:

{{{
chown www-data /var/www/
}}}

Edit '''/etc/apache2/ports.conf''' and add

{{{
NameVirtualHost *:8080
Listen 8080
}}}

Restart apache:

{{{service apache2 restart}}}

Point your browser to http://<mgmt.host>:8080/ and you'll see the dashboard. You can log in as '''<admin_user>''' with password '''<secret>'''.

Install the VNC console. Add the following lines to /etc/nova/nova.conf:

{{{
novncproxy_base_url=http://<mgmt.host>:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=127.0.0.1
}}}

Note: <mgmt.host> will be exposed in horizon and must be a name that resolves from the client machine. It cannot be a name that only resolves on the nodes used to run OpenStack.

{{{
apt-get install nova-console novnc
}}}
Note that '''<mgmt.host>''' can also be a compute node; there is no obligation for it to be a separate physical machine.

Install the packages required to run instances:

{{{apt-get install -y nova-compute nova-api nova-network nova-cert}}}

===== Compute-only nodes =====

The proxy can be installed as a compute node, in which case no additional configuration is necessary. However, if a new node is installed and only runs instances, the following configuration must be done.
{{{my-ip=<the current machine IP address>}}}

 * Only load the metadata API on compute-only nodes (the other APIs need only exist on one node of the cluster):

{{{enabled_apis=metadata}}}

===== Checking that it works =====
On the proxy, check that all services are running:
nova-scheduler <mgmt.host> nova enabled :-) 2012-01-16 12:29:53
{{{
export OS_USERNAME=<admin_user>
export OS_PASSWORD=<secret>
export OS_TENANT_NAME=<admin_project>
export OS_AUTH_URL=http://<mgmt.host>:5000/v2.0/
export OS_VERSION=1.1
}}}
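A trivial sanity check on the auth URL shape can save a round of confusing client tracebacks. This is a plain string test that contacts no server; the host name below is a made-up example standing in for <mgmt.host>:

```shell
# Pure string check: this HOWTO uses auth URLs of the form http://<host>:5000/v2.0/
OS_AUTH_URL='http://mgmt.example.com:5000/v2.0/'
case "$OS_AUTH_URL" in
  http://*:5000/v2.0/) echo "auth URL looks right" ;;
  *)                   echo "unexpected auth URL format" ;;
esac
```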
Download an image and register it in Glance:

{{{
# wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
# glance add name="cirrOS-0.3.0-x86_64" is_public=true container_format=bare disk_format=qcow2 distro="cirrOS-0.3.0-x86_64" < cirros-0.3.0-x86_64-disk.img
}}}

On completion, this command will output an image ID needed later.

To be able to connect to the instance later via ssh, we will need to upload an ssh public key:
We can now boot an instance of the image, specifying the image ID obtained earlier from Glance.

/!\ The next step may hang if rabbitmq does not have 1GB free space in /var/lib/rabbitmq (its default disk_free_limit setting).

{{{
# nova boot --poll --flavor 1 --image 78651eea-02f6-4750-945a-4524a77f7da9 --key_name my_key my_first_instance
}}}
If ssh does not work, check the logs in the horizon "Logs" tab associated with the instance. If it fails to find the metadata with an error that looks like:

{{{
DataSourceEc2.py[WARNING]: 'http://169.254.169.254' failed: url error [[Errno 111] Connection refused]
}}}

just restart the services:

{{{
/etc/init.d/nova-compute restart
/etc/init.d/nova-api restart
/etc/init.d/nova-scheduler restart
/etc/init.d/nova-cert restart
}}}

The source of the problem is probably that they were not restarted after a modification of the configuration files, so the changes were not taken into account.

=== nova-volume ===

The following instructions must be run on the <mgmt.host> node.

{{{
apt-get install lvm2 nova-volume iscsitarget iscsitarget-dkms euca2ools guestmount
}}}

Assuming {{{/dev/<sda3>}}} is an unused disk partition, create a volume group:

{{{
pvcreate /dev/<sda3>
vgcreate nova-volumes /dev/<sda3>
}}}

Add the following lines to {{{/etc/nova/nova.conf}}}:

{{{
iscsi_ip_prefix=192.168.
volume_group=nova-volumes
iscsi_helper=iscsitarget
}}}

Apply the following patch to cope with the fact that --volume_group is not accepted as an option by the nova-volume command line:

{{{
diff --git a/init.d/nova-volume b/init.d/nova-volume
index 0cdda1b..1d6fa62 100755
--- a/init.d/nova-volume
+++ b/init.d/nova-volume
@@ -45,9 +47,9 @@ do_start()
     fi
     # Adds what has been configured in /etc/default/nova-volume
-    if [ -n ${nova_volume_group} ] ; then
-        DAEMON_ARGS="${DAEMON_ARGS} --volume_group=${nova_volume_group}"
-    fi
+#    if [ -n ${nova_volume_group} ] ; then
+#        DAEMON_ARGS="${DAEMON_ARGS} --volume_group=${nova_volume_group}"
+#    fi
     start-stop-daemon --start --quiet --background --chuid ${NOVA_USER}:nova --make-pidfile --pidfile $PIDFILE --startas $DAEMON --test > /dev/null \
         || return 1
}}}

Fix an absolute path problem in {{{/usr/share/pyshared/nova/rootwrap/volume.py}}}:

{{{
perl -pi -e 's|/sbin/iscsiadm|/usr/bin/iscsiadm|' /usr/share/pyshared/nova/rootwrap/volume.py
}}}

Edit ''/etc/default/iscsitarget'' and set

{{{
ISCSITARGET_ENABLE=true
}}}

Run the iscsi services:

{{{
service iscsitarget start
service open-iscsi start
}}}

Start the nova-volume service:

{{{
/etc/init.d/nova-volume start
}}}

Check that it shows up (give it 10 seconds) with

{{{
nova-manage service list
}}}

which should show a line looking like this:

{{{
nova-volume openstack nova enabled :-) 2012-05-16 09:38:26
}}}

Go to the dashboard and you will be able to create a volume and attach it to a running instance. If anything goes wrong, check the {{{/var/log/nova/nova-volume.log}}} and {{{/var/log/nova/nova-compute.log}}} files first for errors.

If you would like to try the euca2ools commands instead of the dashboard, you can use the examples shown at http://docs.openstack.org/trunk/openstack-compute/admin/content/managing-volumes.html (as of May 16th, 2012).
Before running these commands you need to do the following:

{{{
login to the dashboard as <admin_user>
go to Settings
click on "EC2 Credentials"
click on "Download EC2 Credentials"
unzip the downloaded file
source ec2rc.sh
}}}

This will define the environment variables necessary for commands such as

{{{
euca-describe-volumes
}}}

to display the list of active volumes as follows:

{{{
root@openstack:~/euca2ools# euca-describe-volumes
VOLUME vol-00000002 1 nova available (67af2aec0bb94cc29a43c5bca21ce3d4, openstack, None, None) 2012-05-16T09:54:23.000Z
}}}

=== swift nodes ===

Assuming three machines installed with Squeeze, the primary node being the OpenStack <mgmt.host> node, and no puppet or puppetmaster installed.

==== swift primary node ====

{{{
apt-get install libmysql-ruby ruby-activerecord-2.3 sqlite3 puppetmaster puppet ruby-sqlite3
}}}

Puppet configuration:

{{{
diff --git a/puppet/puppet.conf b/puppet/puppet.conf
index b18fae3..ce4ed22 100644
--- a/puppet/puppet.conf
+++ b/puppet/puppet.conf
@@ -7,6 +7,8 @@
 factpath=$vardir/lib/facter
 templatedir=$confdir/templates
 prerun_command=/etc/puppet/etckeeper-commit-pre
 postrun_command=/etc/puppet/etckeeper-commit-post
+pluginsync=true
+storeconfigs=true
 [master]
 # These are needed when the puppetmaster is run by passenger

commit 507105065306433eec8f03dd72ab52ccaf268ccc
Author: root <root@sd-16961.dedibox.fr>
Date:   Mon Apr 2 15:04:53 2012 +0200

    configure database storage

diff --git a/puppet/puppet.conf b/puppet/puppet.conf
index ce4ed22..af220e9 100644
--- a/puppet/puppet.conf
+++ b/puppet/puppet.conf
@@ -9,10 +9,19 @@
 prerun_command=/etc/puppet/etckeeper-commit-pre
 postrun_command=/etc/puppet/etckeeper-commit-post
 pluginsync=true
 storeconfigs=true
+server=mgmt.host
 [master]
 # These are needed when the puppetmaster is run by passenger
 # and can safely be removed if webrick is used.
 ssl_client_header = SSL_CLIENT_S_DN
 ssl_client_verify_header = SSL_CLIENT_VERIFY
+storeconfigs=true
+# Needed for storeconfigs=true
+dbadapter=mysql
+dbname=puppet
+dbuser=puppet
+dbpassword=password
+dbserver=localhost
+dbsocket=/var/run/mysqld/mysqld.sock
}}}

Setup mysql for puppet:

{{{
mysqladmin create puppet
mysql -e "grant all on puppet.* to 'puppet'@'localhost' identified by 'password';"
}}}

Install openstack modules for puppet:

{{{
cd /etc/puppet
git clone git://git.labs.enovance.com/openstack-puppet-modules.git modules && cd modules && git submodule init && git submodule update
cp /etc/puppet/modules/swift/examples/multi.pp /etc/puppet/manifests/site.pp
}}}

{{{
commit 8eb77223e25bfff1284612417efedd228e0c6696
Author: root <root@sd-16961.dedibox.fr>
Date:   Mon Apr 2 15:37:19 2012 +0200

    use tap0 for lan

diff --git a/puppet/manifests/site.pp b/puppet/manifests/site.pp
index a915aea..9b890b0 100644
--- a/puppet/manifests/site.pp
+++ b/puppet/manifests/site.pp
@@ -28,7 +28,7 @@
 $swift_shared_secret='changeme'
 # assumes that the ip address where all of the storage nodes
 # will communicate is on eth1
-$swift_local_net_ip = $ipaddress_eth0
+$swift_local_net_ip = $ipaddress_tap0
 Exec { logoutput => true }
}}}

Enable puppet autosign for all hosts:

{{{
echo '*' > /etc/puppet/autosign.conf
}}}

Deploy swift configuration on the proxy:

{{{
chown -R puppet:puppet /var/lib/puppet/
puppet agent --certname=swift_storage_1 --server=mgmt.host --verbose --debug --test
/etc/init.d/xinetd reload
}}}

==== swift secondary nodes ====

{{{
deb http://ftp.fr.debian.org/debian/ wheezy main
apt-get install python2.7=2.7.2-8 python2.7-minimal=2.7.2-8 libpython2.7=2.7.2-8
echo libpython2.7 hold | dpkg --set-selections
echo python2.7 hold | dpkg --set-selections
echo python2.7-minimal hold | dpkg --set-selections
apt-get install puppet ruby-sqlite3
puppet agent --certname=swift_storage_3 --server=openstack-online-0001.dedibox.fr --verbose --debug --test
}}}

==== create swift ring ====

{{{
puppet agent --certname=swift_proxy --server=openstack-online-0001.dedibox.fr --verbose --debug --test
}}}

==== propagate the swift configuration ====

{{{
puppet agent --certname=swift_storage_1 --server=openstack-online-0001.dedibox.fr --verbose --debug --test
puppet agent --certname=swift_storage_2 --server=openstack-online-0001.dedibox.fr --verbose --debug --test
puppet agent --certname=swift_storage_3 --server=openstack-online-0001.dedibox.fr --verbose --debug --test
}}}

==== check that it works ====

On the proxy / <mgmt.host>:

{{{
# cd /etc/puppet/modules/swift/ext
# ruby swift.rb
getting credentials: curl -k -v -H "X-Storage-User: test:tester" -H 'X-Storage-Pass: testing' http://127.0.0.1:8080/auth/v1.0
verifying connection auth: curl -k -v -H "X-Auth-Token: AUTH_tk5d5a63abdf90414eafd890ed710d357b" http://127.0.0.1:8080/v1/AUTH_test
Testing swift: swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat
found containers/objects: 0/0
Uploading file to swift with command: swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing upload my_container /tmp/foo1
tmp/foo1
Downloading file with command: swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing download my_container
tmp/foo1
Dude!!!! It actually seems to work, we can upload and download files!!!!
}}}

==== horizon ====

Edit '''/etc/keystone/default_catalog.templates''' like this:

{{{
catalog.RegionOne.object-store.publicURL = http://mgmt.host:8080/v1/AUTH_$(tenant_id)s
catalog.RegionOne.object-store.adminURL = http://mgmt.host:8080/
catalog.RegionOne.object-store.internalURL = http://mgmt.host:8080/v1/AUTH_$(tenant_id)s
catalog.RegionOne.object-store.name = 'Object Store Service'
}}}

{{{
diff --git a/swift/proxy-server.conf b/swift/proxy-server.conf
index 83dda1e..8364fe7 100644
--- a/swift/proxy-server.conf
+++ b/swift/proxy-server.conf
@@ -7,7 +7,8 @@
 user = swift
 [pipeline:main]
 # ratelimit?
-pipeline = healthcheck cache tempauth proxy-server
+#pipeline = healthcheck cache tempauth proxy-server
+pipeline = healthcheck cache tokenauth keystone proxy-server
 [app:proxy-server]
 use = egg:swift#proxy
@@ -28,3 +29,17 @@
 use = egg:swift#healthcheck
 use = egg:swift#memcache
 # multi-proxy config not supported
 memcache_servers = 127.0.0.1:11211
+
+[filter:tokenauth]
+paste.filter_factory = keystone.middleware.auth_token:filter_factory
+service_port = 5000
+service_protocol = http
+service_host = 127.0.0.1
+auth_port = 35357
+auth_protocol = http
+auth_host = 127.0.0.1
+admin_token = ADMIN
+
+[filter:keystone]
+paste.filter_factory = keystone.middleware.swift_auth:filter_factory
+operator_roles = admin, swiftoperator, projectmanager
}}}

{{{
/etc/init.d/swift-proxy restart
}}}

==== swift command line ====

{{{
apt-get install swift
swift -U $OS_TENANT_NAME:$OS_USERNAME list
}}}
OpenStack on Debian GNU/Linux testing
This HOWTO aims to provide guidelines to install & set up a multi-node Openstack-Compute (aka Nova) environment.
This HOWTO is for running Openstack Essex (eg: v2012.1) running on Debian Testing. If you are looking for the HOWTO about Folsom, please go here: https://wiki.debian.org/OpenStackHowto/Folsom
The environment includes the following software:
a “proxy” or "management" node (host name <mgmt.host>) with the following services :
- nova-api
- nova-scheduler
- glance
- keystone
- mysql
- rabbitmq
- memcached
- openstack-dashboard
- nova-volume
one or more pure “compute” (host name <computeNN.host>) nodes with the following services :
- nova-compute
- nova-network
- nova-api (with only the metadata API enabled)
Document conventions
Command lines starting with a # must be run as root.
Values between < and > must be replaced by your values.
Prerequisites
Things to prepare beforehand:
- Servers:
- should have two network interfaces to ensure security. If only one interface is used the private part is more exposed to attacks coming from the public part.
a _public_ one to communicate with the outside world
a _private_ one for the guests' VLans
- should have two network interfaces to ensure security. If only one interface is used the private part is more exposed to attacks coming from the public part.
- Network :
- public network
private network. If the machines are not on a LAN, create one with OpenVPN.
- fixed IP range for guests
- number of networks for guests
- network size for guests
- public “floating” IPs (optional)
echo 1 > /proc/sys/net/ipv4/ip_forward
echo "nbd max_part=65" >> /etc/modules # to enable key-file, network & metadata injection into instances images
- Distribution:
- Debian GNU/Linux Squeeze (there are no openstack packages for Squeeze as of October 24th, 2012, but backports may be available in future)
- Make sure /tmp has enough space to accomodate for snapshoting ( i.e. you might want to add /tmp none none none 0 0 in /etc/fstab to disable tmpfs on /tmp )
- Add wheezy in the /etc/apt/sources.list
- apt-get update
- As of May 15th, 2012 you must do the following because the most recent python-prettytable is partly broken
apt-get install python-prettytable=0.5-1 echo python-prettytable hold | dpkg --set-selections
Note from zigo: I think that the python-prettytable issue has been fixed, so using v 0.6 from Wheezy should be ok.
Important
This HOWTO is valid for the OpenStack Nova, Glance, Volume and Keystone packages labelled 2012.1, currently available in Debian testing (Wheezy) and might need some adjustments with later versions.
Technical Choices
We will be using:
"Multi-host VLan networking mode":http://docs.openstack.org/diablo/openstack-compute/admin/content/networking-options.html
- KVM as hypervisor
MySql as database backend (for nova)
Installation
Proxy Node
Hostname
In the following replace <mgmt.host> with the actual hostname of the machine chosen to be the management node.
Packages installation
Install dependencies:
# apt-get install -y mysql-server rabbitmq-server memcached
Note: do not set a MySQL root password; otherwise you will have to add the -p option to all mysql-related commands below.

In /etc/mysql/my.cnf, change the bind-address value to 0.0.0.0:
{{{
# sed -i "s/127.0.0.1/0.0.0.0/" /etc/mysql/my.cnf
}}}
(or better, instead of 0.0.0.0, use the IP address of a private interface through which the other compute nodes can reach the proxy.)
Then restart the MySQL server:
{{{
# /etc/init.d/mysql restart
}}}
Now install some OpenStack packages:
{{{
# apt-get install -y nova-api nova-scheduler keystone
}}}
Answer the debconf questions and choose the proposed defaults.
==== Configuration ====
===== Keystone =====
An admin user is created and given the necessary credentials (roles, in OpenStack parlance) to perform administrative actions.

Edit /etc/keystone/keystone.conf, replace admin_token=ADMIN with a secret admin token <ADMIN> of your choosing, and restart keystone:
{{{
# sed -i 's/ADMIN/<ADMIN>/' /etc/keystone/keystone.conf
# service keystone restart
}}}
Set the variables used by the keystone command line to connect to the keystone server with the proper credentials:
{{{
export SERVICE_ENDPOINT=http://127.0.0.1:35357/v2.0/
export SERVICE_TOKEN=<ADMIN>
}}}
Many keystone arguments require numerical IDs that are impractical to remember. The following shell function retrieves the numerical ID so it can be stored in a variable:
{{{
get_id () {
    echo `"$@" | awk '/ id / { print $4 }'`
}
}}}
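As a quick illustration of what `get_id` extracts, here it is run against a mocked-up table in the layout the keystone client prints. `mock_keystone` and the ID value are purely illustrative stand-ins for a real `keystone tenant-create` call:

```shell
# get_id as defined above, with "$@" quoted so arguments survive word splitting
get_id () {
    echo `"$@" | awk '/ id / { print $4 }'`
}

# Stand-in for `keystone tenant-create ...`: mimics the prettytable output
mock_keystone () {
  printf '%s\n' \
    '+----------+----------------------------------+' \
    '| Property |              Value               |' \
    '+----------+----------------------------------+' \
    '| id       | 0b68e5b843174ae4a1e0cedb0dbcda11 |' \
    '| name     | admin_project                    |' \
    '+----------+----------------------------------+'
}

# The awk pattern matches the " id " row and prints its value column
ADMIN_TENANT=$(get_id mock_keystone)
echo "$ADMIN_TENANT"
```

On a real system you would call `get_id keystone tenant-create --name <admin_project>` exactly as in the next step.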
Create a tenant:
{{{
ADMIN_TENANT=$(get_id keystone tenant-create --name <admin_project>)
}}}
Create a user with a password & email:
{{{
ADMIN_USER=$(get_id keystone user-create --name <admin_user> --pass <secret> --email <admin@example.com>)
}}}
Create the admin roles:
{{{
keystone role-create --name admin
keystone role-create --name KeystoneAdmin
keystone role-create --name KeystoneServiceAdmin
}}}
Grant admin rights to <admin_user> on tenant <admin_project>:
{{{
ADMIN_ROLE=$(keystone role-list | awk '/ admin / { print $2 }')
keystone user-role-add --user $ADMIN_USER --role $ADMIN_ROLE --tenant_id $ADMIN_TENANT
KEYSTONEADMIN_ROLE=$(keystone role-list | awk '/ KeystoneAdmin / { print $2 }')
keystone user-role-add --user $ADMIN_USER --role $KEYSTONEADMIN_ROLE --tenant_id $ADMIN_TENANT
KEYSTONESERVICEADMIN_ROLE=$(keystone role-list | awk '/ KeystoneServiceAdmin / { print $2 }')
keystone user-role-add --user $ADMIN_USER --role $KEYSTONESERVICEADMIN_ROLE --tenant_id $ADMIN_TENANT
}}}
In the file /etc/keystone/keystone.conf, the variable
{{{
template_file = /etc/keystone/default_catalog.templates
}}}
points to the currently used template file. Its content must be edited to match the local configuration by substituting localhost with <mgmt.host>; restart keystone to make sure these values are taken into account:
{{{
# sed -i 's/localhost/<mgmt.host>/' /etc/keystone/default_catalog.templates
# /etc/init.d/keystone restart
}}}
Finally, export the credentials used by the OpenStack command line clients:
{{{
export OS_USERNAME=<admin_user>
export OS_PASSWORD=<secret>
export OS_TENANT_NAME=<admin_project>
export OS_AUTH_URL=http://<mgmt.host>:5000/v2.0/
export OS_VERSION=1.1
}}}
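Since these exports are needed in every new shell, it can be convenient to keep them in a small rc file and `source` it later. This is only a sketch: the file name `openstackrc` is an arbitrary choice, and the placeholder values must be replaced with yours:

```shell
# Write the credentials to an rc file in the current directory
# (pick any path you like; ~/openstackrc is a common choice).
RC=openstackrc
cat > "$RC" <<'EOF'
export OS_USERNAME=<admin_user>
export OS_PASSWORD=<secret>
export OS_TENANT_NAME=<admin_project>
export OS_AUTH_URL=http://<mgmt.host>:5000/v2.0/
export OS_VERSION=1.1
EOF

# Count the export lines as a quick sanity check
grep -c '^export OS_' "$RC"
```

Then `source openstackrc` in any shell where you want to use the nova/glance clients.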
===== Glance =====
{{{
# apt-get install -y glance
}}}
glance-common will ask you which pipeline flavor you want: choose keystone. It will then ask for the auth server URL: answer http://<mgmt.host>:5000. Finally, paste the service token (i.e. the admin token <ADMIN> set in /etc/keystone/keystone.conf) when debconf asks for it.
In BOTH of these files:
 * /etc/glance/glance-api-paste.ini
 * /etc/glance/glance-registry-paste.ini
comment out
{{{
#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%
}}}
and add
{{{
admin_token = <ADMIN>
}}}
Then restart the services:
{{{
# /etc/init.d/glance-api restart
# /etc/init.d/glance-registry restart
}}}
'''NOTE:''' if you have made a mistake in this step, running "# dpkg-reconfigure glance-common" will give you another chance.
===== Nova =====
In the file /etc/nova/api-paste.ini, look for the filter:authtoken section and comment out
{{{
#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%
}}}
and add
{{{
admin_token = <ADMIN>
}}}
Change the instances of 127.0.0.1 to <mgmt.host>:
{{{
# sed -i 's/127.0.0.1/<mgmt.host>/' /etc/nova/api-paste.ini
}}}
In the file /etc/nova/nova.conf, add these configuration options:
{{{
## Network config
# A nova-network on each compute node
multi_host=true
# VLAN manager
network_manager=nova.network.manager.VlanManager
vlan_interface=<the private interface, e.g. eth1>
# My IP
my-ip=<the current machine's public IP address>
public_interface=<the interface on which the public IP addresses are bound, e.g. eth0>
# DMZ & metadata things
dmz_cidr=169.254.169.254/32
ec2_dmz_host=169.254.169.254
metadata_host=169.254.169.254

## More general things
# The RabbitMQ host
rabbit_host=<mgmt.host>

## Glance
image_service=nova.image.glance.GlanceImageService
glance_api_servers=<mgmt.host>:9292
use-syslog=true
ec2_host=<mgmt.host>
}}}
Also change localhost in the sql_connection line to <mgmt.host>.
Create/sync the nova database, a prerequisite for starting nova-scheduler:
{{{
# nova-manage db sync
}}}
Restart the nova services:
{{{
# /etc/init.d/nova-api restart
# /etc/init.d/nova-scheduler restart
}}}
Now bootstrap nova:
{{{
# nova-manage network create private --fixed_range_v4=<10.1.0.0/16> --network_size=<256> --num_networks=<100> 
# nova-manage floating create --ip_range=<192.168.0.224/28>
}}}
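A quick sanity check of the arithmetic behind those flags, using the example values above: a network_size of 256 addresses carves one /24 subnet per network out of the /16 fixed range, so 100 networks span 10.1.0.0/24 through 10.1.99.0/24 and consume well under the 65536 addresses available:

```shell
# Values mirror the example flags above
NETWORK_SIZE=256    # addresses per guest network => one /24 each
NUM_NETWORKS=100
CAPACITY=65536      # addresses in a /16 fixed range

LAST=$((NUM_NETWORKS - 1))
TOTAL=$((NUM_NETWORKS * NETWORK_SIZE))
echo "first guest network: 10.1.0.0/24"
echo "last guest network:  10.1.${LAST}.0/24"
echo "consumed $TOTAL of $CAPACITY addresses in the fixed range"
```

Adjust the numbers before running nova-manage if your range cannot hold num_networks * network_size addresses.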
You should be able to see that nova-scheduler is running (the OK state is :-), KO is XXX):
{{{
# nova-manage service list
Binary           Host         Zone   Status    State  Updated_At
nova-scheduler   openstack04  nova   enabled   :-)    2012-01-13 17:29:48
}}}
==== openstack-dashboard ====
{{{
# apt-get install -y openstack-dashboard openstack-dashboard-apache
}}}
Edit /etc/openstack-dashboard/local_settings.py and add
{{{
QUANTUM_ENABLED = False
}}}
The panel will attempt to create files in /var/www:
{{{
# chown www-data /var/www/
}}}
Edit /etc/apache2/ports.conf and add
{{{
NameVirtualHost *:8080
Listen 8080
}}}
Restart apache:
{{{
# service apache2 restart
}}}
Point your browser to http://<mgmt.host>:8080/ and you will see the dashboard. You can log in as <admin_user> with password <secret>.
Install the VNC console. Add the following lines to /etc/nova/nova.conf:
{{{
novncproxy_base_url=http://<mgmt.host>:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=127.0.0.1
}}}
Note: <mgmt.host> will be exposed in horizon and must be a name that resolves from the client machine. It cannot be a name that only resolves on the nodes used to run OpenStack.
{{{
# apt-get install nova-console novnc
}}}
=== Compute nodes ===
Note that <mgmt.host> can also be a compute node; there is no obligation for it to be a separate physical machine.

Install the packages required to run instances:
{{{
# apt-get install -y nova-compute nova-api nova-network nova-cert
}}}
==== Compute-only nodes ====
The proxy can be installed as a compute node, in which case there is no additional configuration necessary. However, if a new node is installed that only runs instances, the following configuration must be done.

The file /etc/nova/api-paste.ini can be copied verbatim from the proxy host. The file /etc/nova/nova.conf can be copied from the proxy host and modified as follows:
 * The IP of the machine:
{{{
my-ip=<the current machine's IP address>
}}}
 * Only load the metadata API on compute-only nodes (the other APIs need only exist on one node of the cluster):
{{{
enabled_apis=metadata
}}}
==== Checking that it works ====
Restart the services:
{{{
# /etc/init.d/nova-api restart
# /etc/init.d/nova-network restart
# /etc/init.d/nova-compute restart
}}}
On the proxy, check that everything is running:
{{{
# nova-manage service list
Binary           Host          Zone   Status    State  Updated_At
nova-scheduler   <mgmt.host>   nova   enabled   :-)    2012-01-16 12:29:53
nova-compute     compute.host  nova   enabled   :-)    2012-01-16 12:29:52
nova-network     compute.host  nova   enabled   :-)    2012-01-16 12:29:49
}}}
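Once the cluster grows, scanning that list for XXX by eye gets tedious. Here is a small sketch that flags dead services; the column layout is assumed from the example output above, and `mock_service_list` is an illustrative stand-in for the real `nova-manage service list` command:

```shell
# Stand-in for `nova-manage service list`; replace with the real
# command on a live system.
mock_service_list () {
  printf '%s\n' \
    'Binary           Host          Zone   Status    State  Updated_At' \
    'nova-scheduler   mgmt.host     nova   enabled   :-)    2012-01-16 12:29:53' \
    'nova-compute     compute.host  nova   enabled   :-)    2012-01-16 12:29:52' \
    'nova-network     compute.host  nova   enabled   XXX    2012-01-16 11:01:07'
}

# Column 5 is the State column; XXX marks a service that stopped
# reporting in.  Print "binary@host" for each dead service.
DEAD=$(mock_service_list | awk '$5 == "XXX" { print $1 "@" $2 }')
echo "dead services: ${DEAD:-none}"
```

With the real command substituted in, an empty result means all services are checking in.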
=== Using it ===
To use the nova CLI, you will need to export some environment variables:
{{{
export OS_USERNAME=<admin_user>
export OS_PASSWORD=<secret>
export OS_TENANT_NAME=<admin_project>
export OS_AUTH_URL=http://<mgmt.host>:5000/v2.0/
export OS_VERSION=1.1
}}}
You can now use the nova command line interface:
{{{
# nova list
+----+------+--------+----------+
| ID | Name | Status | Networks |
+----+------+--------+----------+
+----+------+--------+----------+
# nova image-list
+----+------+--------+--------+
| ID | Name | Status | Server |
+----+------+--------+--------+
+----+------+--------+--------+
# nova flavor-list
+----+-----------+-----------+------+----------+-------+-------------+
| ID | Name      | Memory_MB | Swap | Local_GB | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+----------+-------+-------------+
| 1  | m1.tiny   | 512       |      | 0        | 1     | 1.0         |
| 2  | m1.small  | 2048      |      | 20       | 1     | 1.0         |
| 3  | m1.medium | 4096      |      | 40       | 2     | 1.0         |
| 4  | m1.large  | 8192      |      | 80       | 4     | 1.0         |
| 5  | m1.xlarge | 16384     |      | 160      | 8     | 1.0         |
+----+-----------+-----------+------+----------+-------+-------------+
# nova keypair-list
+------+-------------+
| Name | Fingerprint |
+------+-------------+
+------+-------------+
}}}
There are no instances, no images, and some flavors. First we need to get an image and upload it to glance:
{{{
# wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
[...]
# glance add name="cirrOS-0.3.0-x86_64" is_public=true container_format=bare disk_format=qcow2 distro="cirrOS-0.3.0-x86_64" < cirros-0.3.0-x86_64-disk.img
}}}
On completion, this command outputs an image ID that will be needed later.
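Rather than copying the image ID by hand, it can be captured into a shell variable by parsing the image list. The sketch below mocks the listing (the column layout is assumed, and the UUID is the one from the boot example further down, not a value you will actually get); on a live system you would pipe the real `glance index` output instead:

```shell
# Stand-in for `glance index`; assumed column layout: ID, Name,
# Disk Format, Container Format, Size.
mock_glance_index () {
  printf '%s\n' \
    'ID                                   Name                 Disk Format  Container Format  Size' \
    '------------------------------------ -------------------- ------------ ----------------- --------' \
    '78651eea-02f6-4750-945a-4524a77f7da9 cirrOS-0.3.0-x86_64  qcow2        bare              9761280'
}

# Match on the image name (column 2) and keep its ID (column 1)
IMAGE_ID=$(mock_glance_index | awk '$2 == "cirrOS-0.3.0-x86_64" { print $1 }')
echo "$IMAGE_ID"
```

The variable can then be passed straight to `nova boot --image $IMAGE_ID ...`.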
To be able to connect to the instance later via ssh, we need to upload an ssh public key:
{{{
# nova keypair-add --pub_key <your_public_key_file.pub> <key_name>
# nova keypair-list
+--------+-------------------------------------------------+
| Name   | Fingerprint                                     |
+--------+-------------------------------------------------+
| my_key | 79:40:46:87:74:3a:0e:01:f4:59:00:1b:3a:94:71:72 |
+--------+-------------------------------------------------+
}}}
We can now boot an instance of the image, specifying the image ID obtained earlier from Glance.

Note: the next step may hang if rabbitmq does not have 1GB of free space in /var/lib/rabbitmq (its default disk_free_limit setting).
{{{
# nova boot --poll --flavor 1 --image 78651eea-02f6-4750-945a-4524a77f7da9 --key_name my_key my_first_instance
+------------------------+--------------------------------------+
| Property               | Value                                |
+------------------------+--------------------------------------+
| OS-EXT-STS:power_state | 0                                    |
| OS-EXT-STS:task_state  | scheduling                           |
| OS-EXT-STS:vm_state    | building                             |
| RAX-DCF:diskConfig     | MANUAL                               |
| accessIPv4             |                                      |
| accessIPv6             |                                      |
| adminPass              | HMs5tLK3bPCG                         |
| config_drive           |                                      |
| created                | 2012-01-16T14:14:20Z                 |
| flavor                 | m1.tiny                              |
| hostId                 |                                      |
| id                     | 677750ea-0dd4-43c3-8ae0-ef54cb29915f |
| image                  | Ubuntu 11.10 clouding amd64          |
| key_name               | pubkey                               |
| metadata               | {}                                   |
| name                   | my_first_instance                    |
| progress               | None                                 |
| status                 | BUILD                                |
| tenant_id              | 1                                    |
| updated                | 2012-01-16T14:14:20Z                 |
| user_id                | 1                                    |
+------------------------+--------------------------------------+
}}}
And after a few seconds:
{{{
# nova show my_first_instance
+------------------------+----------------------------------------------------------+
| Property               | Value                                                    |
+------------------------+----------------------------------------------------------+
| OS-EXT-STS:power_state | 1                                                        |
| OS-EXT-STS:task_state  | None                                                     |
| OS-EXT-STS:vm_state    | active                                                   |
| RAX-DCF:diskConfig     | MANUAL                                                   |
| accessIPv4             |                                                          |
| accessIPv6             |                                                          |
| config_drive           |                                                          |
| created                | 2012-01-16T14:14:20Z                                     |
| flavor                 | m1.tiny                                                  |
| hostId                 | 9750641c8c79637e01b342193cfa1efd5961c300b7865dc4a5658bdd |
| id                     | 677750ea-0dd4-43c3-8ae0-ef54cb29915f                     |
| image                  | Ubuntu 11.10 clouding amd64                              |
| key_name               | pubkey                                                   |
| metadata               | {}                                                       |
| name                   | my_first_instance                                        |
| private_0 network      | 10.1.0.3                                                 |
| progress               | None                                                     |
| status                 | ACTIVE                                                   |
| tenant_id              | 1                                                        |
| updated                | 2012-01-16T14:14:37Z                                     |
| user_id                | 1                                                        |
+------------------------+----------------------------------------------------------+
}}}
To see the instance console, go to the compute node and look at the file /var/lib/nova/instances/instance-00000001/console.log (if this is the first instance you created; otherwise change 00000001 to the last available in the folder).
We can activate ssh access, create a floating IP, attach it to our instance and ssh into it (with user ubuntu for UEC images):
{{{
# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# nova floating-ip-create
+--------------+-------------+----------+
| Ip           | Instance Id | Fixed Ip |
+--------------+-------------+----------+
| 172.24.4.224 | None        | None     |
+--------------+-------------+----------+
# nova add-floating-ip my_first_instance 172.24.4.224
# ssh -i my_key ubuntu@172.24.4.224
The authenticity of host '172.24.4.224 (172.24.4.224)' can't be established.
RSA key fingerprint is 55:bf:2e:7f:60:ef:ea:72:b4:af:2a:33:6b:2d:8c:62.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.24.4.224' (RSA) to the list of known hosts.
Welcome to Ubuntu 11.10 (GNU/Linux 3.0.0-14-virtual x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Mon Jan 16 14:58:15 UTC 2012

  System load:  0.59              Processes:           59
  Usage of /:   32.6% of 1.96GB   Users logged in:     0
  Memory usage: 6%                IP address for eth0: 10.1.0.5
  Swap usage:   0%

  Graph this data and manage this system at https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest
    http://www.ubuntu.com/business/services/cloud

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

/usr/bin/xauth:  file /home/ubuntu/.Xauthority does not exist
To run a command as administrator (user 'root'), use 'sudo <command>'.
See "man sudo_root" for details.

ubuntu@my-first-instance:~$
}}}
If ssh does not work, check the logs in the horizon "Logs" tab associated with the instance. If the instance fails to find the metadata with an error that looks like:
{{{
DataSourceEc2.py[WARNING]: 'http://169.254.169.254' failed: url error [[Errno 111] Connection refused]
}}}
try restarting the services:
{{{
# /etc/init.d/nova-compute restart
# /etc/init.d/nova-api restart
# /etc/init.d/nova-scheduler restart
# /etc/init.d/nova-cert restart
}}}
The source of the problem is probably that a service was not restarted after a modification of the configuration files, so the changes were not taken into account.
=== nova-volume ===
The following instructions must be run on the <mgmt.host> node.
{{{
# apt-get install lvm2 nova-volume iscsitarget iscsitarget-dkms euca2ools guestmount
}}}
Assuming /dev/<sda3> is an unused disk partition, create a volume group:
{{{
# pvcreate /dev/<sda3>
# vgcreate nova-volumes /dev/<sda3>
}}}
Add the following lines to /etc/nova/nova.conf:
{{{
iscsi_ip_prefix=192.168.
volume_group=nova-volumes
iscsi_helper=iscsitarget
}}}
Apply the following patch to cope with the fact that --volume_group is not accepted as an option by the nova-volume command line:
{{{
diff --git a/init.d/nova-volume b/init.d/nova-volume
index 0cdda1b..1d6fa62 100755
--- a/init.d/nova-volume
+++ b/init.d/nova-volume
@@ -45,9 +47,9 @@ do_start()
     fi

     # Adds what has been configured in /etc/default/nova-volume
-    if [ -n ${nova_volume_group} ] ; then
-        DAEMON_ARGS="${DAEMON_ARGS} --volume_group=${nova_volume_group}"
-    fi
+#    if [ -n ${nova_volume_group} ] ; then
+#        DAEMON_ARGS="${DAEMON_ARGS} --volume_group=${nova_volume_group}"
+#    fi

     start-stop-daemon --start --quiet --background --chuid ${NOVA_USER}:nova --make-pidfile --pidfile $PIDFILE --startas $DAEMON --test > /dev/null \
         || return 1
}}}
Fix an absolute path problem in /usr/share/pyshared/nova/rootwrap/volume.py:
{{{
# perl -pi -e 's|/sbin/iscsiadm|/usr/bin/iscsiadm|' /usr/share/pyshared/nova/rootwrap/volume.py
}}}
Edit /etc/default/iscsitarget and set
{{{
ISCSITARGET_ENABLE=true
}}}
Start the iscsi services:
{{{
# service iscsitarget start
# service open-iscsi start
}}}
Start the nova-volume service:
{{{
# /etc/init.d/nova-volume start
}}}
Check that it shows up (give it 10 seconds):
{{{
# nova-manage service list
}}}
The output should include a line looking like this:
{{{
nova-volume  openstack  nova  enabled  :-)  2012-05-16 09:38:26
}}}
Go to the dashboard and you will be able to create a volume and attach it to a running instance. If anything goes wrong, check the /var/log/nova/nova-volume.log and /var/log/nova/nova-compute.log files first for errors. If you would like to try the euca2ools commands instead of the dashboard, you can use the examples shown at http://docs.openstack.org/trunk/openstack-compute/admin/content/managing-volumes.html (as of May 16th, 2012). Before running these commands you need to do the following:
 * log in to the dashboard as <admin_user>
 * go to Settings
 * click on "EC2 Credentials"
 * click on "Download EC2 Credentials"
 * unzip the downloaded file
 * source ec2rc.sh
This defines the environment variables necessary for commands such as
{{{
euca-describe-volumes
}}}
to display the list of active volumes as follows:
{{{
root@openstack:~/euca2ools# euca-describe-volumes
VOLUME  vol-00000002  1  nova  available (67af2aec0bb94cc29a43c5bca21ce3d4, openstack, None, None)  2012-05-16T09:54:23.000Z
}}}
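From a script, the volume IDs in the "available" state can be pulled out of that listing. The sketch below mocks the command (the whitespace-separated field layout is taken from the example output above); on a live system you would pipe the real `euca-describe-volumes` output instead:

```shell
# Stand-in for `euca-describe-volumes`; fields: keyword, volume id,
# size (GB), zone, state, ..., creation time.
mock_euca_describe_volumes () {
  printf '%s\n' \
    'VOLUME  vol-00000002  1  nova  available (67af2aec0bb94cc29a43c5bca21ce3d4, openstack, None, None)  2012-05-16T09:54:23.000Z'
}

# Keep the id (field 2) of every VOLUME line whose state (field 5)
# is "available".
AVAILABLE=$(mock_euca_describe_volumes | awk '$1 == "VOLUME" && $5 == "available" { print $2 }')
echo "available volumes: ${AVAILABLE:-none}"
```

An id obtained this way can then be fed to euca-attach-volume or euca-delete-volume.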
=== Swift nodes ===
Assuming three machines installed with Squeeze, the primary node being the OpenStack <mgmt.host> node, and no puppet or puppetmaster installed.

==== Swift primary node ====
{{{
# apt-get install libmysql-ruby ruby-activerecord-2.3 sqlite3 puppetmaster puppet ruby-sqlite3
}}}
Puppet configuration:
{{{
diff --git a/puppet/puppet.conf b/puppet/puppet.conf
index b18fae3..ce4ed22 100644
--- a/puppet/puppet.conf
+++ b/puppet/puppet.conf
@@ -7,6 +7,8 @@
 factpath=$vardir/lib/facter
 templatedir=$confdir/templates
 prerun_command=/etc/puppet/etckeeper-commit-pre
 postrun_command=/etc/puppet/etckeeper-commit-post
+pluginsync=true
+storeconfigs=true

 [master]
 # These are needed when the puppetmaster is run by passenger
}}}
{{{
commit 507105065306433eec8f03dd72ab52ccaf268ccc
Author: root <root@sd-16961.dedibox.fr>
Date:   Mon Apr 2 15:04:53 2012 +0200

    configure database storage

diff --git a/puppet/puppet.conf b/puppet/puppet.conf
index ce4ed22..af220e9 100644
--- a/puppet/puppet.conf
+++ b/puppet/puppet.conf
@@ -9,10 +9,19 @@
 prerun_command=/etc/puppet/etckeeper-commit-pre
 postrun_command=/etc/puppet/etckeeper-commit-post
 pluginsync=true
 storeconfigs=true
+server=mgmt.host

 [master]
 # These are needed when the puppetmaster is run by passenger
 # and can safely be removed if webrick is used.
 ssl_client_header = SSL_CLIENT_S_DN
 ssl_client_verify_header = SSL_CLIENT_VERIFY
+storeconfigs=true
+# Needed for storeconfigs=true
+dbadapter=mysql
+dbname=puppet
+dbuser=puppet
+dbpassword=password
+dbserver=localhost
+dbsocket=/var/run/mysqld/mysqld.sock
}}}
Set up MySQL for puppet:
{{{
# mysqladmin create puppet
# mysql -e "grant all on puppet.* to 'puppet'@'localhost' identified by 'password';"
}}}
Install the OpenStack modules for puppet:
{{{
# cd /etc/puppet
# git clone git://git.labs.enovance.com/openstack-puppet-modules.git modules && cd modules && git submodule init && git submodule update
# cp /etc/puppet/modules/swift/examples/multi.pp /etc/puppet/manifests/site.pp
}}}
{{{
commit 8eb77223e25bfff1284612417efedd228e0c6696
Author: root <root@sd-16961.dedibox.fr>
Date:   Mon Apr 2 15:37:19 2012 +0200

    use tap0 for lan

diff --git a/puppet/manifests/site.pp b/puppet/manifests/site.pp
index a915aea..9b890b0 100644
--- a/puppet/manifests/site.pp
+++ b/puppet/manifests/site.pp
@@ -28,7 +28,7 @@
 $swift_shared_secret='changeme'

 # assumes that the ip address where all of the storage nodes
 # will communicate is on eth1
-$swift_local_net_ip = $ipaddress_eth0
+$swift_local_net_ip = $ipaddress_tap0

 Exec { logoutput => true }
}}}
Enable puppet autosign for all hosts:
{{{
# echo '*' > /etc/puppet/autosign.conf
}}}
Deploy the swift configuration on the proxy:
{{{
# chown -R puppet:puppet /var/lib/puppet/
# puppet agent --certname=swift_storage_1 --server=mgmt.host --verbose --debug --test
# /etc/init.d/xinetd reload
}}}
==== Swift secondary nodes ====
Add wheezy to the APT sources, then install and hold python2.7 at version 2.7.2-8 before installing puppet:
{{{
deb http://ftp.fr.debian.org/debian/ wheezy main
}}}
{{{
# apt-get install python2.7=2.7.2-8 python2.7-minimal=2.7.2-8 libpython2.7=2.7.2-8
# echo libpython2.7 hold | dpkg --set-selections
# echo python2.7 hold | dpkg --set-selections
# echo python2.7-minimal hold | dpkg --set-selections
# apt-get install puppet ruby-sqlite3
# puppet agent --certname=swift_storage_3 --server=openstack-online-0001.dedibox.fr --verbose --debug --test
}}}
==== Create the swift ring ====
{{{
# puppet agent --certname=swift_proxy --server=openstack-online-0001.dedibox.fr --verbose --debug --test
}}}
==== Propagate the swift configuration ====
{{{
# puppet agent --certname=swift_storage_1 --server=openstack-online-0001.dedibox.fr --verbose --debug --test
# puppet agent --certname=swift_storage_2 --server=openstack-online-0001.dedibox.fr --verbose --debug --test
# puppet agent --certname=swift_storage_3 --server=openstack-online-0001.dedibox.fr --verbose --debug --test
}}}
==== Check that it works ====
On the proxy / <mgmt.host>:
{{{
# cd /etc/puppet/modules/swift/ext
# ruby swift.rb
getting credentials: curl -k -v -H "X-Storage-User: test:tester" -H 'X-Storage-Pass: testing' http://127.0.0.1:8080/auth/v1.0
verifying connection auth: curl -k -v -H "X-Auth-Token: AUTH_tk5d5a63abdf90414eafd890ed710d357b" http://127.0.0.1:8080/v1/AUTH_test
Testing swift: swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat
found containers/objects: 0/0
Uploading file to swift with command: swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing upload my_container /tmp/foo1
tmp/foo1
Downloading file with command: swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing download my_container
tmp/foo1
Dude!!!! It actually seems to work, we can upload and download files!!!!
}}}
==== Horizon ====
Edit /etc/keystone/default_catalog.templates like this:
{{{
catalog.RegionOne.object-store.publicURL = http://mgmt.host:8080/v1/AUTH_$(tenant_id)s
catalog.RegionOne.object-store.adminURL = http://mgmt.host:8080/
catalog.RegionOne.object-store.internalURL = http://mgmt.host:8080/v1/AUTH_$(tenant_id)s
catalog.RegionOne.object-store.name = 'Object Store Service'
}}}
Then modify the swift proxy configuration to authenticate against keystone:
{{{
diff --git a/swift/proxy-server.conf b/swift/proxy-server.conf
index 83dda1e..8364fe7 100644
--- a/swift/proxy-server.conf
+++ b/swift/proxy-server.conf
@@ -7,7 +7,8 @@
 user = swift

 [pipeline:main]
 # ratelimit?
-pipeline = healthcheck cache tempauth proxy-server
+#pipeline = healthcheck cache tempauth proxy-server
+pipeline = healthcheck cache tokenauth keystone proxy-server

 [app:proxy-server]
 use = egg:swift#proxy
@@ -28,3 +29,17 @@
 use = egg:swift#healthcheck
 use = egg:swift#memcache
 # multi-proxy config not supported
 memcache_servers = 127.0.0.1:11211
+
+[filter:tokenauth]
+paste.filter_factory = keystone.middleware.auth_token:filter_factory
+service_port = 5000
+service_protocol = http
+service_host = 127.0.0.1
+auth_port = 35357
+auth_protocol = http
+auth_host = 127.0.0.1
+admin_token = ADMIN
+
+[filter:keystone]
+paste.filter_factory = keystone.middleware.swift_auth:filter_factory
+operator_roles = admin, swiftoperator, projectmanager
}}}
Restart the swift proxy:
{{{
# /etc/init.d/swift-proxy restart
}}}
==== Swift command line ====
{{{
# apt-get install swift
# swift -U $OS_TENANT_NAME:$OS_USERNAME list
}}}