HOWTO: Install OpenStack on Debian GNU/Linux testing (wheezy)
Nova
This HOWTO provides guidelines to install and set up a multi-node OpenStack Compute (aka Nova) environment.
This environment will include three hosts:
- 2 compute nodes:
- compute1: pubnet@eth0=10.142.6.31 / privnet@eth1=192.168.66.1
- compute2: pubnet@eth0=10.142.6.32 / privnet@eth1=192.168.66.2
- 1 master/proxy/controller node (named controller in the following):
- controller: pubnet@eth0=10.142.6.33 / privnet@eth1=192.168.66.100
Choices:
- Virtualization technology: kvm/libvirt
- Networking mode: VlanManager + multi_host
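For reference, these choices roughly translate into nova.conf settings along the following lines (illustrative only; the puppet modules used below generate the actual configuration):
network_manager=nova.network.manager.VlanManager
multi_host=True
connection_type=libvirt
libvirt_type=kvm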
Services on compute* nodes:
- puppet agent
- nova-compute
- nova-network
- nova-api (metadata only)
On controller node:
- puppet master
- puppet agent
- mysql database
- keystone
- glance (local storage)
- nova-api
- nova-scheduler
- nova-novncproxy
PREREQUISITES
Things to prepare beforehand:
- Machines:
- They should have two network interfaces to ensure security; if only one interface is used, the private network is more exposed to attacks coming from the public side:
- a "public" one to communicate with the outside world
- a "private" one for the guest VLANs
- Disk space:
- there *must* be at least as much free disk space as RAM in / on the controller host (see https://labs.enovance.com/issues/374 for the rationale)
- there must be enough space in /tmp to accommodate a copy of the largest primary disk for an instance snapshot (see the patch that fixes this in Folsom). Alternatively, it is possible to set the temporary directory using the TMP variable. This should be done after nova-compute is installed:
echo export TMP=/var/lib/nova/tmp >> /etc/default/nova-common
- there must be enough space in /var/lib/glance to store all glance images
- there must be enough space in /var/lib/nova to store all instance disk images (not volumes)
- Network (these values are used when the guest networks are created; see the sketch after this list):
- public network
- private network. If the machines are not on a LAN, create one with OpenVPN.
- fixed IP range for guests
- number of networks for guests
- network size for guests
- public "floating" IPs (optional)
- Base distribution:
- Debian GNU/Linux squeeze (will be upgraded to wheezy)
- As of June 8th, 2012 you must do the following because the most recent python-prettytable is partly broken:
apt-get install python-prettytable=0.5-1
echo python-prettytable hold | dpkg --set-selections
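To give an idea of how the network planning values above are used: the fixed guest networks and the floating range end up being created with commands along these lines (a sketch with made-up values; in this HOWTO the fixed network is actually created by the puppet module and the floating range is added later with nova-manage floating create):
nova-manage network create --label=private --fixed_range_v4=192.168.200.0/24 --num_networks=3 --network_size=256
nova-manage floating create --ip_range=10.142.6.224/27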
IMPORTANT
This HOWTO is valid for the OpenStack Nova, Glance and Keystone packages labelled 2012.1, currently available in Debian unstable (sid) and might need some adjustments with later versions.
Upgrade to Wheezy
Edit /etc/apt/sources.list to read:
deb http://ftp.fr.debian.org/debian/ wheezy main
deb-src http://ftp.fr.debian.org/debian/ wheezy main
deb http://security.debian.org/ wheezy/updates main
deb-src http://security.debian.org/ wheezy/updates main
# squeeze-updates, previously known as 'volatile'
deb http://ftp.fr.debian.org/debian/ squeeze-updates main
deb-src http://ftp.fr.debian.org/debian/ squeeze-updates main
Then:
apt-get update
apt-get dist-upgrade -y
reboot
Puppet
Install puppet agent (on the three nodes):
apt-get install -y puppet augeas-tools
Install puppetmaster (only on the controller node)
apt-get install -y puppetmaster sqlite3 libsqlite3-ruby libactiverecord-ruby git
Ensure ruby 1.9 is *not* installed (the following command should return nothing):
dpkg -l | grep ' ruby1.9'
Configure the puppet agents
On all the nodes
Enable pluginsync & configure the hostname of the puppet master:
augtool << EOT
set /files/etc/puppet/puppet.conf/agent/pluginsync true
set /files/etc/puppet/puppet.conf/agent/server <hostname of the puppet master>
save
EOT
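After this, the [agent] section of /etc/puppet/puppet.conf should contain entries along these lines (illustrative, assuming the puppet master is controller.example.com):
[agent]
pluginsync=true
server=controller.example.com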
Configure the Puppet Master
On the controller node only.
- Enable storeconfigs and configure the database
augtool << EOT
set /files/etc/puppet/puppet.conf/master/storeconfigs true
set /files/etc/puppet/puppet.conf/master/dbadapter sqlite3
set /files/etc/puppet/puppet.conf/master/dblocation /var/lib/puppet/server_data/storeconfigs.sqlite
save
EOT
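The [master] section of /etc/puppet/puppet.conf should then look something like this (illustrative):
[master]
storeconfigs=true
dbadapter=sqlite3
dblocation=/var/lib/puppet/server_data/storeconfigs.sqlite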
- Create a dummy site manifest
cat > /etc/puppet/manifests/site.pp << EOT
node default {
  notify { "Hey ! It works !": }
}
EOT
- Restart puppetmaster
service puppetmaster restart
Test the puppet agents
⚠ Warning ⚠:
- With sqlite3 as the database backend, only one puppet agent can run at a time.
- Make sure hostname(1) and /etc/hostname agree with each other, because this is the name under which the node will be known to the puppet master.
Send a request to the puppet master, asking it to accept a certificate for the machine, with:
puppet agent -vt --waitforcert 60
And while the puppet agent is waiting, on the master/controller run:
puppetca sign -a
There should be no error and you should see a message saying "Hey ! It works !"
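If the certificate request never shows up on the master, two quick checks can help (a sketch; puppetca is the same tool used above):
# on the agent: both commands must print the same name
hostname
cat /etc/hostname
# on the master: list the pending certificate requests
puppetca list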
On the puppet master, install the openstack modules and adapt the sample manifest
Get the modules
cd /etc/puppet/modules
git clone git://git.labs.enovance.com/puppet.git .
git checkout openstack
git submodule init
git submodule update
Copy the example manifest for use by the puppetmaster
cp /etc/puppet/modules/examples/openstack.pp /etc/puppet/manifests/site.pp
Edit /etc/puppet/manifests/site.pp to change the following lines:
The actual private IPs of the controller and compute hosts (see the beginning of this HOWTO):
$db_host = '192.168.66.100' # IP address of the host on which the database will be installed (the controller for instance)
$db_allowed_hosts = ['192.168.66.%'] # IP addresses of all compute hosts: they need access to the database
The FQDN of the host providing the API server, which must be the same as the <controller.hostname> used above.
# The public fqdn of the controller host
$public_server = '<controller.hostname>'
# The internal fqdn of the controller host
$api_server = '<controller.hostname>'
If the interface used for the private network is not eth1, replace eth1 with the actual interface on which the IPs 192.168.66.0/24 are found (for instance br0).
Append at the end of the file:
node /<controller.hostname>/ inherits controller {}
node /compute/ inherits compute {}
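Optionally, check that the edited manifest still parses (a sketch; puppet parser validate is available with puppet >= 2.7):
puppet parser validate /etc/puppet/manifests/site.pp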
Installation
Run the puppet agent on each node matched by one of the node /fqdn/ {} stanzas written above.
puppet agent -vt
There should be no errors, and once the run completes the services should be running.
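A quick way to check that the expected daemons are up (a sketch using plain ps):
# on the controller
ps -ef | grep -E 'nova|glance|keystone' | grep -v grep
# on a compute node
ps -ef | grep -E 'nova-(compute|network|api)' | grep -v grep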
Checking if it really works
The required services are advertised in the database
root@controller:~# nova-manage service list
Binary            Host        Zone   Status    State   Updated_At
nova-consoleauth  controller  nova   enabled   :-)     2012-05-03 08:56:29
nova-scheduler    controller  nova   enabled   :-)     2012-05-03 08:56:31
nova-cert         controller  nova   enabled   :-)     2012-05-03 08:56:32
nova-compute      compute1    nova   enabled   :-)     2012-05-03 08:56:50
nova-network      compute1    nova   enabled   :-)     2012-05-03 08:56:49
nova-compute      compute2    nova   enabled   :-)     2012-05-03 08:56:47
nova-network      compute2    nova   enabled   :-)     2012-05-03 08:56:48
A file named 'openrc.sh' has been created in /root on the controller node. Source it and check that the nova API works:
root@controller:~# source openrc.sh
root@controller:~# nova list
+----+------+--------+----------+
| ID | Name | Status | Networks |
+----+------+--------+----------+
+----+------+--------+----------+
root@controller:~# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+-----------+------+-------+-------------+
| 1  | m1.tiny   | 512       | 0    | 0         |      | 1     | 1.0         |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         |
+----+-----------+-----------+------+-----------+------+-------+-------------+
root@controller:~# nova image-list
+----+------+--------+--------+
| ID | Name | Status | Server |
+----+------+--------+--------+
+----+------+--------+--------+
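For reference, an openrc.sh file of this kind typically exports the standard OpenStack environment variables; the values below are only illustrative, not the actual generated content:
export OS_AUTH_URL=http://<controller.hostname>:5000/v2.0/
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=<password generated during the installation>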
The OpenStack cluster is quite empty and useless like this; let's upload an image into glance:
root@controller:~# wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
…
root@controller:~# glance add name="CirrOS 0.3" disk_format=qcow2 container_format=ovf < cirros-0.3.0-x86_64-disk.img
Uploading image 'CirrOS 0.3'
================================================================[100%] 7.73M/s, ETA 0h 0m 0s
Added new image with ID: 949bbc5c-e6fa-4ec3-91cb-65cbb6123c54
root@controller:~# glance index
ID                                   Name        Disk Format   Container Format   Size
------------------------------------ ----------- ------------- ------------------ ----------
949bbc5c-e6fa-4ec3-91cb-65cbb6123c54 CirrOS 0.3  qcow2         ovf                9761280
Does it show up in nova?
root@controller:~# nova image-list
+--------------------------------------+-----------------+--------+--------+
| ID                                   | Name            | Status | Server |
+--------------------------------------+-----------------+--------+--------+
| 949bbc5c-e6fa-4ec3-91cb-65cbb6123c54 | CirrOS 0.3      | ACTIVE |        |
+--------------------------------------+-----------------+--------+--------+
The nova network puppet module creates a private network for the VMs to use. Check that it has been created:
root@controller:~# nova-manage network list
id   IPv4               IPv6   start address   DNS1   DNS2   VlanID   project   uuid
1    169.254.200.0/24   None   169.254.200.3   None   None   2000     None      71681e09-c072-4281-b5b4-37f26ddc97bf
And create some floating (public) IPs (choose an IP range addressable on your network):
root@controller:~# nova-manage floating create --ip_range 10.142.6.224/27
root@controller:~# nova-manage floating list
None 10.142.6.225 None nova eth0
None 10.142.6.226 None nova eth0
…
Now create a keypair (for ssh access) and save the output in a file
root@controller:~# nova keypair-add test_keypair > test_keypair.pem
root@controller:~# chmod 600 test_keypair.pem
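A quick check that the keypair has been registered (sketch):
root@controller:~# nova keypair-list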
Boot an instance and get the console log
root@controller:~# nova boot --image 949bbc5c-e6fa-4ec3-91cb-65cbb6123c54 --flavor 1 --key_name test_keypair FirstTest --poll
+-------------------------------------+--------------------------------------+
| Property                            | Value                                |
+-------------------------------------+--------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                               |
| OS-EXT-SRV-ATTR:host                | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                 |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000001                    |
| OS-EXT-STS:power_state              | 0                                    |
| OS-EXT-STS:task_state               | scheduling                           |
| OS-EXT-STS:vm_state                 | building                             |
| accessIPv4                          |                                      |
| accessIPv6                          |                                      |
| adminPass                           | yab49fMqVHJf                         |
| config_drive                        |                                      |
| created                             | 2012-05-03T10:09:00Z                 |
| flavor                              | m1.tiny                              |
| hostId                              |                                      |
| id                                  | 06dd6129-f94a-488d-9670-7171491899e5 |
| image                               | CirrOS 0.3                           |
| key_name                            | test_keypair                         |
| metadata                            | {}                                   |
| name                                | FirstTest                            |
| progress                            | 0                                    |
| status                              | BUILD                                |
| tenant_id                           | d1c9085272d542eda98f7e08a1a779d6     |
| updated                             | 2012-05-03T10:09:00Z                 |
| user_id                             | cd04222b81004af5b0ff20c840fb629e     |
+-------------------------------------+--------------------------------------+
root@controller:~# nova console-log FirstTest
…
Allocate a floating IP and associate it with the instance:
root@controller:~# nova floating-ip-create
+--------------+-------------+----------+------+
| Ip           | Instance Id | Fixed Ip | Pool |
+--------------+-------------+----------+------+
| 10.142.6.225 | None        | None     | nova |
+--------------+-------------+----------+------+
root@controller:~# nova add-floating-ip FirstTest 10.142.6.225
…
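The association can be verified with nova show, where the floating IP should appear among the instance's networks (sketch):
root@controller:~# nova show FirstTest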
Update the rules for the default security group (allow ICMP & SSH):
root@controller:~# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
root@controller:~# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
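The resulting rules can be reviewed with (sketch):
root@controller:~# nova secgroup-list-rules default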
We should now be able to ping the instance:
root@controller:~# ping -c 1 10.142.6.225
PING 10.142.6.225 (10.142.6.225) 56(84) bytes of data.
64 bytes from 10.142.6.225: icmp_req=1 ttl=63 time=0.626 ms
--- 10.142.6.225 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms
And ssh into it with the keypair we created before:
root@controller:~# ssh -i test_keypair.pem cirros@10.142.6.225
$ uname -a
Linux cirros 3.0.0-12-virtual #20-Ubuntu SMP Fri Oct 7 18:19:02 UTC 2011 x86_64 GNU/Linux
$ exit
Connection to 10.142.6.225 closed.
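When the test is over, the instance and floating IP can be cleaned up along these lines (a sketch using the names chosen above):
root@controller:~# nova remove-floating-ip FirstTest 10.142.6.225
root@controller:~# nova floating-ip-delete 10.142.6.225
root@controller:~# nova delete FirstTest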
Swift
Assuming three machines installed with squeeze, the primary node being the OpenStack management host (mgmt.host), and no puppet or puppetmaster installed yet.
swift primary node
apt-get install libmysql-ruby ruby-activerecord-2.3 sqlite3 puppetmaster puppet ruby-sqlite3
Puppet configuration:
diff --git a/puppet/puppet.conf b/puppet/puppet.conf
index b18fae3..ce4ed22 100644
--- a/puppet/puppet.conf
+++ b/puppet/puppet.conf
@@ -7,6 +7,8 @@ factpath=$vardir/lib/facter
 templatedir=$confdir/templates
 prerun_command=/etc/puppet/etckeeper-commit-pre
 postrun_command=/etc/puppet/etckeeper-commit-post
+pluginsync=true
+storeconfigs=true
 
 [master]
 # These are needed when the puppetmaster is run by passenger

commit 507105065306433eec8f03dd72ab52ccaf268ccc
Author: root <root@sd-16961.dedibox.fr>
Date:   Mon Apr 2 15:04:53 2012 +0200

    configure database storage

diff --git a/puppet/puppet.conf b/puppet/puppet.conf
index ce4ed22..af220e9 100644
--- a/puppet/puppet.conf
+++ b/puppet/puppet.conf
@@ -9,10 +9,19 @@
 prerun_command=/etc/puppet/etckeeper-commit-pre
 postrun_command=/etc/puppet/etckeeper-commit-post
 pluginsync=true
 storeconfigs=true
+server=mgmt.host
 
 [master]
 # These are needed when the puppetmaster is run by passenger
 # and can safely be removed if webrick is used.
 ssl_client_header = SSL_CLIENT_S_DN
 ssl_client_verify_header = SSL_CLIENT_VERIFY
+storeconfigs=true
+# Needed for storeconfigs=true
+dbadapter=mysql
+dbname=puppet
+dbuser=puppet
+dbpassword=password
+dbserver=localhost
+dbsocket=/var/run/mysqld/mysqld.sock
Set up mysql for puppet:
mysqladmin create puppet
mysql -e "grant all on puppet.* to 'puppet'@'localhost' identified by 'password';"
Install openstack modules for puppet:
cd /etc/puppet
git clone git://git.labs.enovance.com/openstack-puppet-modules.git modules && cd modules && git submodule init && git submodule update
cp /etc/puppet/modules/swift/examples/multi.pp /etc/puppet/manifests/site.pp
commit 8eb77223e25bfff1284612417efedd228e0c6696
Author: root <root@sd-16961.dedibox.fr>
Date:   Mon Apr 2 15:37:19 2012 +0200

    use tap0 for lan

diff --git a/puppet/manifests/site.pp b/puppet/manifests/site.pp
index a915aea..9b890b0 100644
--- a/puppet/manifests/site.pp
+++ b/puppet/manifests/site.pp
@@ -28,7 +28,7 @@
 $swift_shared_secret='changeme'
 
 # assumes that the ip address where all of the storage nodes
 # will communicate is on eth1
-$swift_local_net_ip = $ipaddress_eth0
+$swift_local_net_ip = $ipaddress_tap0
 
 Exec { logoutput => true }
Enable puppet autosign for all hosts:
echo '*' > /etc/puppet/autosign.conf
Deploy swift configuration on the proxy:
chown -R puppet:puppet /var/lib/puppet/
puppet agent --certname=swift_storage_1 --server=mgmt.host --verbose --debug --test
/etc/init.d/xinetd reload
swift secondary nodes
In /etc/apt/sources.list:
deb http://ftp.fr.debian.org/debian/ wheezy main
deb http://ftp.fr.debian.org/debian/ sid main
Then pin and hold the required package versions, install puppet and run the agent:
apt-get install python2.7=2.7.2-8 python2.7-minimal=2.7.2-8 libpython2.7=2.7.2-8 python-prettytable=0.5-1
echo libpython2.7 hold | dpkg --set-selections
echo python2.7 hold | dpkg --set-selections
echo python2.7-minimal hold | dpkg --set-selections
echo python-prettytable hold | dpkg --set-selections
apt-get install puppet ruby-sqlite3
puppet agent --certname=swift_storage_3 --server=openstack-online-0001.dedibox.fr --verbose --debug --test
create swift ring
puppet agent --certname=swift_proxy --server=openstack-online-0001.dedibox.fr --verbose --debug --test
propagate the swift configuration
puppet agent --certname=swift_storage_1 --server=openstack-online-0001.dedibox.fr --verbose --debug --test
puppet agent --certname=swift_storage_2 --server=openstack-online-0001.dedibox.fr --verbose --debug --test
puppet agent --certname=swift_storage_3 --server=openstack-online-0001.dedibox.fr --verbose --debug --test
check that it works
On proxy / mgmt.host :
# cd /etc/puppet/modules/swift/ext
# ruby swift.rb
getting credentials: curl -k -v -H "X-Storage-User: test:tester" -H 'X-Storage-Pass: testing' http://127.0.0.1:8080/auth/v1.0
verifying connection auth: curl -k -v -H "X-Auth-Token: AUTH_tk5d5a63abdf90414eafd890ed710d357b" http://127.0.0.1:8080/v1/AUTH_test
Testing swift: swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat
found containers/objects: 0/0
Uploading file to swift with command: swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing upload my_container /tmp/foo1
tmp/foo1
Downloading file with command: swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing download my_container tmp/foo1
Dude!!!! It actually seems to work, we can upload and download files!!!!
Swift/Horizon
Edit /etc/keystone/default_catalog.templates like this:
catalog.RegionOne.object-store.publicURL = http://mgmt.host:8080/v1/AUTH_$(tenant_id)s
catalog.RegionOne.object-store.adminURL = http://mgmt.host:8080/
catalog.RegionOne.object-store.internalURL = http://mgmt.host:8080/v1/AUTH_$(tenant_id)s
catalog.RegionOne.object-store.name = 'Object Store Service'
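Keystone probably needs to be restarted for the new catalog entries to be taken into account (assuming the Debian init script is named keystone):
service keystone restart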
Then switch the swift proxy from tempauth to the keystone middleware in /etc/swift/proxy-server.conf:
diff --git a/swift/proxy-server.conf b/swift/proxy-server.conf
index 83dda1e..8364fe7 100644
--- a/swift/proxy-server.conf
+++ b/swift/proxy-server.conf
@@ -7,7 +7,8 @@ user = swift
 
 [pipeline:main]
 # ratelimit?
-pipeline = healthcheck cache tempauth proxy-server
+#pipeline = healthcheck cache tempauth proxy-server
+pipeline = healthcheck cache tokenauth keystone proxy-server
 
 [app:proxy-server]
 use = egg:swift#proxy
@@ -28,3 +29,17 @@ use = egg:swift#healthcheck
 use = egg:swift#memcache
 # multi-proxy config not supported
 memcache_servers = 127.0.0.1:11211
+
+[filter:tokenauth]
+paste.filter_factory = keystone.middleware.auth_token:filter_factory
+service_port = 5000
+service_protocol = http
+service_host = 127.0.0.1
+auth_port = 35357
+auth_protocol = http
+auth_host = 127.0.0.1
+admin_token = ADMIN
+
+[filter:keystone]
+paste.filter_factory = keystone.middleware.swift_auth:filter_factory
+operator_roles = admin, swiftoperator, projectmanager
/etc/init.d/swift-proxy restart
swift command line
apt-get install swift
swift -U $OS_TENANT_NAME:$OS_USERNAME list
Running the puppet agent daemons
Edit /etc/default/puppet and set:
START=yes
and run the daemon:
/etc/init.d/puppet start