HOWTO: Openstack on Debian GNU/Linux unstable (sid)
This howto aims to provide guidelines for installing and setting up a multi-node Openstack-Compute (aka Nova) environment.
This environment will include:
one “proxy” or “management” node (host name <mgmt.host>) with the following services :
- nova-api
- nova-scheduler
- glance
- keystone
- mysql
- rabbitmq
- memcached
- openstack-dashboard
one or more pure “compute” nodes (host name <computeNN.host>) with the following services :
- nova-compute
- nova-network
- nova-api (with only the metadata api enabled)
CONVENTIONS
In formatted blocks :
command lines starting with a # must be run as root.
values between < and > must be replaced by your values.
PREREQUISITES
Things to prepare beforehand :
- Machines :
- They should have two network interfaces to ensure security. If only one interface is used, the private part is more exposed to attacks coming from the public part:
a _public_ one to communicate with the outside world
a _private_ one for the guests VLans
- Network :
- public network
- private network. If the machines are not on a LAN, create one with OpenVPN.
- fixed ip range for guests
- number of networks for guests
- network size for guests
- public “floating” IPs (optional)
On each node, enable IP forwarding and the nbd module:
echo 1 > /proc/sys/net/ipv4/ip_forward
echo "nbd max_part=65" >> /etc/modules # to enable key-file, network & metadata injection into instances images
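A quick way to verify that these two settings took effect; the check_prereqs helper below is our own convenience, not part of any OpenStack package:

```shell
# Report whether IP forwarding is on and whether nbd is configured to load at boot.
check_prereqs () {
    if [ "$(cat /proc/sys/net/ipv4/ip_forward 2>/dev/null)" = "1" ]; then
        echo "ip_forward: OK"
    else
        echo "ip_forward: DISABLED"
    fi
    if grep -q '^nbd' /etc/modules 2>/dev/null; then
        echo "nbd module: configured"
    else
        echo "nbd module: missing from /etc/modules"
    fi
}
check_prereqs
```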
- Distribution :
- Debian GNU/Linux squeeze (currently unsupported, but backports may become available in the future)
- Add wheezy and sid to /etc/apt/sources.list
- apt-get update
- As of April 2nd, 2012, you must do the following, because the most recent python2.7 is partly broken:
apt-get install python2.7=2.7.2-8 python2.7-minimal=2.7.2-8 libpython2.7=2.7.2-8
echo libpython2.7 hold | dpkg --set-selections
echo python2.7 hold | dpkg --set-selections
echo python2.7-minimal hold | dpkg --set-selections
IMPORTANT
This HOWTO is valid for the OpenStack Nova, Glance and Keystone packages labelled 2012.1~rc1, currently available in Debian unstable (sid) and might need some adjustments with later versions.
Technical Choices
We will be using :
- "Multi-host VLan networking mode":http://docs.openstack.org/diablo/openstack-compute/admin/content/networking-options.html
- KVM as hypervisor
- MySQL as database backend (for nova)
Installation
proxy node:
Hostname
In the following replace <mgmt.host> with the actual hostname of the machine chosen to be the management node.
Packages installation
Install dependencies:
# apt-get install -y mysql-server rabbitmq-server memcached
Note : do not set a MySQL root password; if you do, add the -p option to all mysql-related commands below.
In /etc/mysql/my.cnf modify the bind-address value to read :
bind-address = 0.0.0.0
(or better, instead of 0.0.0.0, the IP address of a private interface on which other compute nodes can join the proxy.)
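If you prefer scripting the change, a sed one-liner does it; the sketch below rehearses the edit on a scratch copy (swap in /etc/mysql/my.cnf, as root, for the real thing):

```shell
# Rehearse on a temporary copy of a minimal my.cnf.
cnf=$(mktemp)
printf '[mysqld]\nbind-address = 127.0.0.1\n' > "$cnf"

# The actual edit: point bind-address at all interfaces (or a private IP).
sed -i 's/^bind-address.*/bind-address = 0.0.0.0/' "$cnf"

grep '^bind-address' "$cnf"   # bind-address = 0.0.0.0
rm -f "$cnf"
```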
And restart the mysql server :
# /etc/init.d/mysql restart
Now install some OpenStack packages :
# apt-get install -y nova-api nova-scheduler keystone
Answer the debconf questions and choose the proposed defaults.
Configuration
Keystone
An admin user must be created and given the necessary credentials (roles, in OpenStack parlance) to perform administrative actions.
Edit /etc/keystone/keystone.conf and replace admin_token=ADMIN with a secret admin token of your choosing: admin_token=<ADMIN>
Set the variables used by the keystone command line to connect to the keystone server with the proper credentials:
export SERVICE_ENDPOINT=http://127.0.0.1:35357/v2.0/
export SERVICE_TOKEN=<ADMIN>
Many keystone arguments require numerical IDs that are impractical to remember. The following function retrieves the numerical ID so that it can be stored in a variable.
function get_id () {
    echo `$@ | awk '/ id / { print $4 }'`
}
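To see what get_id actually extracts, here it is run against a fake command that mimics the table layout of keystone tenant-create (the ID below is made up for the demonstration):

```shell
get_id () {
    echo `"$@" | awk '/ id / { print $4 }'`
}

# Mimics the output shape of `keystone tenant-create` (values are fabricated).
fake_tenant_create () {
    cat <<'EOF'
+----------+----------------------------------+
| Property | Value                            |
+----------+----------------------------------+
| id       | 3a77cc5fe8d14f3cbb0d3958eb9bfb0b |
| name     | admin_project                    |
+----------+----------------------------------+
EOF
}

# awk matches the row containing " id " and prints the 4th whitespace field,
# which is the bare ID between the pipes.
ADMIN_TENANT=$(get_id fake_tenant_create)
echo "$ADMIN_TENANT"   # 3a77cc5fe8d14f3cbb0d3958eb9bfb0b
```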
Create a tenant
ADMIN_TENANT=$(get_id keystone tenant-create --name <admin_project>)
Create a user with its password & email
ADMIN_USER=$(get_id keystone user-create --name <admin_user> --pass <secret> --email <admin@example.com>)
Grant admin rights to <admin_user> on tenant <admin_project>
ADMIN_ROLE=$(keystone role-list | awk '/ admin / { print $2 }')
keystone user-role-add --user $ADMIN_USER --role $ADMIN_ROLE --tenant_id $ADMIN_TENANT
KEYSTONEADMIN_ROLE=$(keystone role-list | awk '/ KeystoneAdmin / { print $2 }')
keystone user-role-add --user $ADMIN_USER --role $KEYSTONEADMIN_ROLE --tenant_id $ADMIN_TENANT
KEYSTONESERVICEADMIN_ROLE=$(keystone role-list | awk '/ KeystoneServiceAdmin / { print $2 }')
keystone user-role-add --user $ADMIN_USER --role $KEYSTONESERVICEADMIN_ROLE --tenant_id $ADMIN_TENANT
Update the endpoint templates: in the file referred to by the template_file variable in /etc/keystone/keystone.conf (default: /etc/keystone/default_catalog.templates), modify the URLs (one will typically want to change localhost to the proxy's public hostname).
export OS_USERNAME=<admin_user>
export OS_PASSWORD=<secret>
export OS_TENANT_NAME=<admin_project>
export OS_AUTH_URL=http://<mgmt.host>:5000/v2.0/
export OS_VERSION=1.1
Glance
# apt-get install -y glance
Glance-registry will ask you to configure the database. You can use sqlite3, which is sufficient.
Glance-common will ask which pipeline flavor you want: choose keystone. It will then ask for the auth server URL: answer http://<mgmt.host>:5000. When debconf asks for it, paste the <service_token> you got from Keystone in the previous step.
In the files /etc/glance/glance-api-paste.ini and /etc/glance/glance-registry-paste.ini, comment out
#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%
and add
admin_token = <ADMIN>
And restart the services
# /etc/init.d/glance-api restart
# /etc/init.d/glance-registry restart
NOTE: if you have made a mistake in this step, running "# dpkg-reconfigure glance-common" will give you one more chance.
Nova
In the file /etc/nova/api-paste.ini :
Look for the filter:authtoken section and replace 127.0.0.1 with <mgmt.host> and
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
with
admin_token = <ADMIN>
In the file /etc/nova/nova.conf :
- Add these configuration options :
## Network config
# A nova-network on each compute node
multi_host=true
# VLan manager
network_manager=nova.network.manager.VlanManager
vlan_interface=<the private interface eg. eth1>
# My ip
my-ip=<the current machine ip address>
public_interface=<the public interface eg. eth0>
# Dmz & metadata things
dmz_cidr=169.254.169.254/32
ec2_dmz_host=169.254.169.254
metadata_host=169.254.169.254
## More general things
# The RabbitMQ host
rabbit_host=<mgmt.host>
## Glance
image_service=nova.image.glance.GlanceImageService
glance_api_servers=<mgmt.host>:9292
use-syslog=true
ec2_host=<mgmt.host>
Restart nova services :
# /etc/init.d/nova-api restart
# /etc/init.d/nova-scheduler restart
Now bootstrap nova :
# nova-manage db sync
# nova-manage network create private --fixed_range_v4=<10.1.0.0/16> --network_size=<256> --num_networks=<100>
# nova-manage floating create --ip_range=<192.168.0.224/28>
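The values passed to nova-manage network create should be consistent: num_networks × network_size must fit inside fixed_range_v4. A quick arithmetic sanity check with the example values above:

```shell
# fixed_range 10.1.0.0/16, 100 networks of 256 addresses each.
prefix=16
num_networks=100
network_size=256

available=$(( 1 << (32 - prefix) ))        # 2^(32-16) = 65536 addresses
needed=$(( num_networks * network_size ))  # 100 * 256  = 25600 addresses

if [ "$needed" -le "$available" ]; then
    echo "OK: $needed of $available addresses used"
else
    echo "ERROR: fixed range too small ($needed > $available)"
fi
```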
You should be able to see that nova-scheduler is running (the OK state is :-), a dead one shows XXX) :
# nova-manage service list
Binary           Host         Zone  Status   State  Updated_At
nova-scheduler   openstack04  nova  enabled  :-)    2012-01-13 17:29:48
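The smiley column is easy to read but clumsy to script against; the dead_services filter below (our own helper, not part of nova-manage) prints only services whose State column shows XXX:

```shell
# Reads `nova-manage service list` output on stdin and prints
# "binary host" for every service whose State field is XXX (dead).
dead_services () {
    awk 'NR > 1 && $5 == "XXX" { print $1, $2 }'
}

# Usage on a live system:
#   nova-manage service list | dead_services
```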
openstack-dashboard
# apt-get install -y openstack-dashboard libapache2-mod-wsgi
# a2enmod wsgi
Edit /etc/openstack-dashboard/local_settings.py and add
QUANTUM_ENABLED = False
change the line starting with CACHE_BACKEND to read:
CACHE_BACKEND='memcached://localhost:11211'
Create and edit /etc/apache2/conf.d/openstack-dashboard.conf to write this:
WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
WSGIDaemonProcess horizon user=www-data group=www-data processes=3 threads=10
Alias /static/horizon /usr/share/pyshared/horizon/static/horizon
Alias /static /usr/share/openstack-dashboard/openstack_dashboard/static
The panel will attempt to create files in /var/www, so give www-data ownership of it:
# chown www-data /var/www/
Restart apache:
service apache2 restart
Point your browser to this server, and you'll see the dashboard.
Install the VNC console. Add the following lines to /etc/nova/nova.conf
novncproxy_base_url=http://<the current machine ip address>:6080/vnc_auto.html
vncserver_listen=127.0.0.1
vncserver_proxyclient_address=127.0.0.1
apt-get install nova-console novnc
Restart the console service so that the novncproxy_base_url setting is taken into account:
/etc/init.d/nova-console restart
compute nodes:
Packages installation
Now install Openstack packages :
# apt-get install -y nova-compute nova-api nova-network nova-cert
Configuration
Nova
The file /etc/nova/api-paste.ini can be copied verbatim from the proxy host. The file /etc/nova/nova.conf can be copied from the proxy host and modified as follows:
- The IP of the machine
my-ip=<the current machine ip address>
- Only load the metadata API on compute-only nodes (the other APIs need only exist on one node of the cluster).
enabled_apis=metadata
Restart services :
# /etc/init.d/nova-api restart
# /etc/init.d/nova-network restart
# /etc/init.d/nova-compute restart
On the proxy, check that all is running :
# nova-manage service list
Binary           Host           Zone  Status   State  Updated_At
nova-scheduler   <mgmt.host>    nova  enabled  :-)    2012-01-16 12:29:53
nova-compute     compute.host   nova  enabled  :-)    2012-01-16 12:29:52
nova-network     compute.host   nova  enabled  :-)    2012-01-16 12:29:49
Using it
To use the nova cli, you will need to export some environment variables :
export OS_USERNAME=<admin_user>
export OS_PASSWORD=<secret>
export OS_TENANT_NAME=<admin_project>
export OS_AUTH_URL=http://<mgmt.host>:5000/v2.0/
export OS_VERSION=1.1
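Retyping these exports gets old quickly; a common pattern is to keep them in an rc file and source it in each shell. A sketch with placeholder values (quoted so the file stays sourceable; substitute your real credentials):

```shell
# Write the credentials once (placeholder values shown; edit before real use).
cat > "$HOME/.novarc" <<'EOF'
export OS_USERNAME='admin_user'
export OS_PASSWORD='secret'
export OS_TENANT_NAME='admin_project'
export OS_AUTH_URL='http://mgmt.host:5000/v2.0/'
export OS_VERSION='1.1'
EOF

# Then, in every new shell:
. "$HOME/.novarc"
echo "$OS_USERNAME"   # admin_user
```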
You can now use the nova command line interface :
# nova list
+----+------+--------+----------+
| ID | Name | Status | Networks |
+----+------+--------+----------+
+----+------+--------+----------+
# nova image-list
+----+------+--------+--------+
| ID | Name | Status | Server |
+----+------+--------+--------+
+----+------+--------+--------+
# nova flavor-list
+----+-----------+-----------+------+----------+-------+-------------+
| ID | Name      | Memory_MB | Swap | Local_GB | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+----------+-------+-------------+
| 1  | m1.tiny   | 512       |      | 0        | 1     | 1.0         |
| 2  | m1.small  | 2048      |      | 20       | 1     | 1.0         |
| 3  | m1.medium | 4096      |      | 40       | 2     | 1.0         |
| 4  | m1.large  | 8192      |      | 80       | 4     | 1.0         |
| 5  | m1.xlarge | 16384     |      | 160      | 8     | 1.0         |
+----+-----------+-----------+------+----------+-------+-------------+
# nova keypair-list
+------+-------------+
| Name | Fingerprint |
+------+-------------+
+------+-------------+
There is no instance, no image and some flavors. First we need to get an image and upload it to glance :
# wget http://uec-images.ubuntu.com/releases/11.10/release/ubuntu-11.10-server-cloudimg-amd64-disk1.img
[...]
# glance add name="ubuntu" disk_format=raw container_format=ovf < ubuntu-11.10-server-cloudimg-amd64-disk1.img
Added new image:
# nova image-list
+--------------------------------------+--------+--------+--------+
| ID                                   | Name   | Status | Server |
+--------------------------------------+--------+--------+--------+
| 82948e00-905c-4969-89b1-1f4ddb81c6bd | ubuntu | ACTIVE |        |
+--------------------------------------+--------+--------+--------+
To later connect to the instance via ssh, we will need to upload an ssh public key :
# nova keypair-add --pub_key <your_public_key_file.pub> <key_name>
# nova keypair-list
+--------+-------------------------------------------------+
| Name   | Fingerprint                                     |
+--------+-------------------------------------------------+
| my_key | 79:40:46:87:74:3a:0e:01:f4:59:00:1b:3a:94:71:72 |
+--------+-------------------------------------------------+
We can now boot an instance with this image :
# nova boot --flavor 1 --image 78651eea-02f6-4750-945a-4524a77f7da9 --key_name my_key my_first_instance
+------------------------+--------------------------------------+
| Property               | Value                                |
+------------------------+--------------------------------------+
| OS-EXT-STS:power_state | 0                                    |
| OS-EXT-STS:task_state  | scheduling                           |
| OS-EXT-STS:vm_state    | building                             |
| RAX-DCF:diskConfig     | MANUAL                               |
| accessIPv4             |                                      |
| accessIPv6             |                                      |
| adminPass              | HMs5tLK3bPCG                         |
| config_drive           |                                      |
| created                | 2012-01-16T14:14:20Z                 |
| flavor                 | m1.tiny                              |
| hostId                 |                                      |
| id                     | 677750ea-0dd4-43c3-8ae0-ef54cb29915f |
| image                  | Ubuntu 11.10 clouding amd64          |
| key_name               | pubkey                               |
| metadata               | {}                                   |
| name                   | my_first_instance                    |
| progress               | None                                 |
| status                 | BUILD                                |
| tenant_id              | 1                                    |
| updated                | 2012-01-16T14:14:20Z                 |
| user_id                | 1                                    |
+------------------------+--------------------------------------+
And after a few seconds :
# nova show my_first_instance
+------------------------+----------------------------------------------------------+
| Property               | Value                                                    |
+------------------------+----------------------------------------------------------+
| OS-EXT-STS:power_state | 1                                                        |
| OS-EXT-STS:task_state  | None                                                     |
| OS-EXT-STS:vm_state    | active                                                   |
| RAX-DCF:diskConfig     | MANUAL                                                   |
| accessIPv4             |                                                          |
| accessIPv6             |                                                          |
| config_drive           |                                                          |
| created                | 2012-01-16T14:14:20Z                                     |
| flavor                 | m1.tiny                                                  |
| hostId                 | 9750641c8c79637e01b342193cfa1efd5961c300b7865dc4a5658bdd |
| id                     | 677750ea-0dd4-43c3-8ae0-ef54cb29915f                     |
| image                  | Ubuntu 11.10 clouding amd64                              |
| key_name               | pubkey                                                   |
| metadata               | {}                                                       |
| name                   | my_first_instance                                        |
| private_0 network      | 10.1.0.3                                                 |
| progress               | None                                                     |
| status                 | ACTIVE                                                   |
| tenant_id              | 1                                                        |
| updated                | 2012-01-16T14:14:37Z                                     |
| user_id                | 1                                                        |
+------------------------+----------------------------------------------------------+
To see the instance console, go to the compute node and look at the file /var/lib/nova/instances/instance-00000001/console.log (if this is the first instance you created; otherwise change 00000001 to the last available in the folder).
We can activate ssh access, create a floating ip, attach it to our instance and ssh into it (with user ubuntu for UEC images):
# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# nova floating-ip-create
+--------------+-------------+----------+
| Ip           | Instance Id | Fixed Ip |
+--------------+-------------+----------+
| 172.24.4.224 | None        | None     |
+--------------+-------------+----------+
# nova add-floating-ip my_first_instance 172.24.4.224
# ssh -i my_key ubuntu@172.24.4.224
The authenticity of host '172.24.4.224 (172.24.4.224)' can't be established.
RSA key fingerprint is 55:bf:2e:7f:60:ef:ea:72:b4:af:2a:33:6b:2d:8c:62.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.24.4.224' (RSA) to the list of known hosts.
Welcome to Ubuntu 11.10 (GNU/Linux 3.0.0-14-virtual x86_64)

 * Documentation: https://help.ubuntu.com/

  System information as of Mon Jan 16 14:58:15 UTC 2012

  System load:  0.59             Processes:           59
  Usage of /:   32.6% of 1.96GB  Users logged in:     0
  Memory usage: 6%               IP address for eth0: 10.1.0.5
  Swap usage:   0%

  Graph this data and manage this system at https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest
    http://www.ubuntu.com/business/services/cloud

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

/usr/bin/xauth: file /home/ubuntu/.Xauthority does not exist
To run a command as administrator (user 'root'), use 'sudo <command>'.
See "man sudo_root" for details.

ubuntu@my-first-instance:~$
swift nodes:
Assuming three machines installed with squeeze, the primary node being the openstack mgmt.host node and no puppet or puppetmaster installed.
swift primary node
apt-get install libmysql-ruby ruby-activerecord-2.3 sqlite3 puppetmaster puppet ruby-sqlite3
Puppet configuration:
diff --git a/puppet/puppet.conf b/puppet/puppet.conf
index b18fae3..ce4ed22 100644
--- a/puppet/puppet.conf
+++ b/puppet/puppet.conf
@@ -7,6 +7,8 @@
 factpath=$vardir/lib/facter
 templatedir=$confdir/templates
 prerun_command=/etc/puppet/etckeeper-commit-pre
 postrun_command=/etc/puppet/etckeeper-commit-post
+pluginsync=true
+storeconfigs=true
 [master]
 # These are needed when the puppetmaster is run by passenger

commit 507105065306433eec8f03dd72ab52ccaf268ccc
Author: root <root@sd-16961.dedibox.fr>
Date:   Mon Apr 2 15:04:53 2012 +0200

    configure database storage

diff --git a/puppet/puppet.conf b/puppet/puppet.conf
index ce4ed22..af220e9 100644
--- a/puppet/puppet.conf
+++ b/puppet/puppet.conf
@@ -9,10 +9,19 @@
 prerun_command=/etc/puppet/etckeeper-commit-pre
 postrun_command=/etc/puppet/etckeeper-commit-post
 pluginsync=true
 storeconfigs=true
+server=mgmt.host
 [master]
 # These are needed when the puppetmaster is run by passenger
 # and can safely be removed if webrick is used.
 ssl_client_header = SSL_CLIENT_S_DN
 ssl_client_verify_header = SSL_CLIENT_VERIFY
+storeconfigs=true
+# Needed for storeconfigs=true
+dbadapter=mysql
+dbname=puppet
+dbuser=puppet
+dbpassword=password
+dbserver=localhost
+dbsocket=/var/run/mysqld/mysqld.sock
Setup mysql for puppet:
mysqladmin create puppet
mysql -e "grant all on puppet.* to 'puppet'@'localhost' identified by 'password';"
Install openstack modules for puppet:
cd /etc/puppet
git clone git://git.labs.enovance.com/openstack-puppet-modules.git modules && cd modules && git submodule init && git submodule update
cp /etc/puppet/modules/swift/examples/multi.pp /etc/puppet/manifests/site.pp
Adapt site.pp to the local network interface (here tap0 instead of eth0):

commit 8eb77223e25bfff1284612417efedd228e0c6696
Author: root <root@sd-16961.dedibox.fr>
Date:   Mon Apr 2 15:37:19 2012 +0200

    use tap0 for lan

diff --git a/puppet/manifests/site.pp b/puppet/manifests/site.pp
index a915aea..9b890b0 100644
--- a/puppet/manifests/site.pp
+++ b/puppet/manifests/site.pp
@@ -28,7 +28,7 @@
 $swift_shared_secret='changeme'
 # assumes that the ip address where all of the storage nodes
 # will communicate is on eth1
-$swift_local_net_ip = $ipaddress_eth0
+$swift_local_net_ip = $ipaddress_tap0
 Exec { logoutput => true }
Enable puppet autosign for all hosts:
echo '*' > /etc/puppet/autosign.conf
Deploy swift configuration on the proxy:
chown -R puppet:puppet /var/lib/puppet/
puppet agent --certname=swift_storage_1 --server=mgmt.host --verbose --debug --test
/etc/init.d/xinetd reload
swift secondary nodes
deb http://ftp.fr.debian.org/debian/ wheezy main
deb http://ftp.fr.debian.org/debian/ sid main

apt-get install python2.7=2.7.2-8 python2.7-minimal=2.7.2-8 libpython2.7=2.7.2-8
echo libpython2.7 hold | dpkg --set-selections
echo python2.7 hold | dpkg --set-selections
echo python2.7-minimal hold | dpkg --set-selections
apt-get install puppet ruby-sqlite3
puppet agent --certname=swift_storage_3 --server=openstack-online-0001.dedibox.fr --verbose --debug --test
create swift ring
puppet agent --certname=swift_proxy --server=openstack-online-0001.dedibox.fr --verbose --debug --test
propagate the swift configuration
puppet agent --certname=swift_storage_1 --server=openstack-online-0001.dedibox.fr --verbose --debug --test
puppet agent --certname=swift_storage_2 --server=openstack-online-0001.dedibox.fr --verbose --debug --test
puppet agent --certname=swift_storage_3 --server=openstack-online-0001.dedibox.fr --verbose --debug --test
check that it works
On proxy / mgmt.host :
# cd /etc/puppet/modules/swift/ext
# ruby swift.rb
getting credentials: curl -k -v -H "X-Storage-User: test:tester" -H 'X-Storage-Pass: testing' http://127.0.0.1:8080/auth/v1.0
verifying connection auth: curl -k -v -H "X-Auth-Token: AUTH_tk5d5a63abdf90414eafd890ed710d357b" http://127.0.0.1:8080/v1/AUTH_test
Testing swift: swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat
found containers/objects: 0/0
Uploading file to swift with command: swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing upload my_container /tmp/foo1
tmp/foo1
Downloading file with command: swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing download my_container
tmp/foo1
Dude!!!! It actually seems to work, we can upload and download files!!!!
horizon
Edit /etc/keystone/default_catalog.templates like this:
catalog.RegionOne.object-store.publicURL = http://mgmt.host:8080/v1/AUTH_$(tenant_id)s
catalog.RegionOne.object-store.adminURL = http://mgmt.host:8080/
catalog.RegionOne.object-store.internalURL = http://mgmt.host:8080/v1/AUTH_$(tenant_id)s
catalog.RegionOne.object-store.name = 'Object Store Service'
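Keystone expands the $(tenant_id)s marker in these catalog templates at request time. A rough illustration of the substitution (simplified, not keystone's actual implementation, and the tenant ID is made up):

```shell
template='http://mgmt.host:8080/v1/AUTH_$(tenant_id)s'
tenant_id='3a77cc5fe8d14f3cbb0d3958eb9bfb0b'

# Replace the literal $(tenant_id)s marker with the tenant's real ID.
url=$(printf '%s\n' "$template" | sed "s/\$(tenant_id)s/$tenant_id/")
echo "$url"   # http://mgmt.host:8080/v1/AUTH_3a77cc5fe8d14f3cbb0d3958eb9bfb0b
```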
Edit /etc/swift/proxy-server.conf to authenticate through keystone instead of tempauth:

diff --git a/swift/proxy-server.conf b/swift/proxy-server.conf
index 83dda1e..8364fe7 100644
--- a/swift/proxy-server.conf
+++ b/swift/proxy-server.conf
@@ -7,7 +7,8 @@
 user = swift
 [pipeline:main]
 # ratelimit?
-pipeline = healthcheck cache tempauth proxy-server
+#pipeline = healthcheck cache tempauth proxy-server
+pipeline = healthcheck cache tokenauth keystone proxy-server
 [app:proxy-server]
 use = egg:swift#proxy
@@ -28,3 +29,17 @@
 use = egg:swift#healthcheck
 use = egg:swift#memcache
 # multi-proxy config not supported
 memcache_servers = 127.0.0.1:11211
+
+[filter:tokenauth]
+paste.filter_factory = keystone.middleware.auth_token:filter_factory
+service_port = 5000
+service_protocol = http
+service_host = 127.0.0.1
+auth_port = 35357
+auth_protocol = http
+auth_host = 127.0.0.1
+admin_token = ADMIN
+
+[filter:keystone]
+paste.filter_factory = keystone.middleware.swift_auth:filter_factory
+operator_roles = admin, swiftoperator, projectmanager
/etc/init.d/swift-proxy restart
swift command line
apt-get install swift
swift -U $OS_TENANT_NAME:$OS_USERNAME list