Contents
- Installing Openstack Folsom on Debian GNU/Linux
- Why choose Debian and not Ubuntu to run Openstack?
- Before Installing Debian Openstack Folsom: building and configuration
- Installing
- dbconfig-common
- Proxy node install
- openstack-proxy-node meta-package
- General considerations about answering debconf prompts
- MySQL server
- Keystone
- dbconfig-common
- Keystone communication
- Registering an endpoint
- Package specific Debconf questions: glance
- Package specific Debconf questions: nova
- Package specific Debconf questions: cinder
- Post configuration
- compute nodes:
- Using Openstack
- nova-volume
- Swift
Installing Openstack Folsom on Debian GNU/Linux
Scope of this howto
This page explains how to install Openstack Folsom (Folsom is version 2012.2 of Openstack, released in early autumn 2012). The Openstack Folsom packages are still a work in progress, but this howto is constantly reworked to match the installation procedure using these packages.
The current focus is on a subset of the possible setups: KVM and nova-network. Quantum and Xen are kept for later. The goal is to make this page, and the experimental branches of the packages, evolve in parallel until "it works": errors in the HOWTO will be fixed, and bugs in the packages will be fixed too.
Wheezy vs SID
This howto focuses on installing Folsom on top of Debian Wheezy, though since the next Stable is currently frozen, installing it on SID should work equally well. (Currently in SID, due to a mistake by the maintainers of libvirt, you need to install either libnetcf1 from Wheezy or libvirt0 from Experimental, but otherwise there isn't any problem.)
On Debian Wheezy though, a few packages are missing. Namely, the Debian source packages nodejs, less.js and python-setuptools-git have to be backported from SID to Wheezy. We expect these to appear in the official Debian backports (e.g. backports.debian.org), but this will unfortunately not happen before Wheezy is out: the backports FTP masters have decided that stable backports can't be opened before a stable release. If, like me, you would like this to change, and want backports to be available during the freeze, get in touch with the Debian backports FTP masters directly. As a consequence, our scripts temporarily also create a small Wheezy backports repository.
Final result goal
This howto aims to provide guidelines to install & set up a multi-node Openstack-Compute (aka Nova) environment. It doesn't aim at documenting how to install Swift (eg: Openstack object storage), which shall be documented elsewhere.
To keep things simple, this howto assumes that you will be running a single "proxy node", which holds all the Openstack API server components. Later on, if there is too much load on your single proxy node, you can migrate some of them to another physical server. More servers (compute or volume nodes) can also be added to join the cloud and make it scale.
As of today, this implies that your proxy node will run:
- nova-api (compute API)
- nova-scheduler
- glance (api and registry: that's the Openstack image storage)
- keystone (the Openstack authentication server)
- mysql (used by all daemons)
- rabbitmq
- memcached
- openstack-dashboard (otherwise called horizon: the web GUI)
- cinder-api
- quantum-server using the openvswitch plugin (Quantum manages networking in Openstack)
- ceilometer metering (api, collector and agent-central)
These packages will be installed through a meta-package.
Note that it is also possible to use nova-network, in which case you wouldn't use Quantum.
Technical Choices
We will be using:
- "Multi-host VLan networking mode":http://docs.openstack.org/diablo/openstack-compute/admin/content/networking-options.html
- KVM as hypervisor
- MySQL as database backend
Note that if you use PGSQL, you will not be able to use the automation that dbconfig-common provides for all daemons that will need to connect to a remote database. This problem is in dbconfig-common itself, not in the packaging of Openstack Folsom.
Document Conventions
In formatted blocks:
- command lines starting with a # must be run as root.
- values between < and > must be replaced by your values.
- replace <mgmt.host> with the actual hostname of the machine chosen to be the management node.
Why choose Debian and not Ubuntu to run Openstack?
Ubuntu packages
Most developers working on Openstack use Ubuntu. Some core developers come from Canonical, and Openstack is maintained as much as possible within Ubuntu. So, why use plain Debian?
Well, the Ubuntu packages may be of good quality, but they lack some automation. For example, the Ubuntu packages don't have any automation for creating and maintaining database connections. In Ubuntu, you are supposed to create a database for each package by hand, create access passwords for it, and then run the various db-sync programs to initialize it with data. This is both error prone (you could well make a mistake when setting up access rights in MySQL, for example) and inconvenient.
Also, the Ubuntu people are pressured by the release schedule of Ubuntu: every 6 months, a new version of Openstack has to be out, following the Ubuntu release schedule. That's not at all what we do in Debian. We used to say "it's going to be ready ... when it's going to be ready" when talking about Debian releases. That's also the approach we took for the Openstack packages and this Folsom release: we started working on it when it was released in October, and took our time in order to get the packages in as good a shape as we could, without strong release time constraints.
Debian automation and meta packages
Under Debian, the Openstack packaging team tried to add as much automation as possible, in order to make it both easy and fast to set up your Openstack based cloud. This is done mainly using Debconf and dbconfig-common (note: debconf is the blue screen thing which asks you questions when you install a package, and dbconfig-common is a system using debconf to set up the databases which programs may need).
The result we aim at is that something like "apt-get install openstack-proxy-node" should be enough for anyone to set up a controller node, with nothing more to configure. We are not there yet, but we are rapidly approaching this goal. It is already a lot easier to install Openstack than it used to be.
Scripting the install
Since the Debian packages of Openstack use debconf and dbconfig-common, it should be quite trivial to "preseed" any kind of setup (note: preseeding means that you pre-fill your system with answers to debconf, so that you don't have to type anything at the Debconf prompts). That's another cool feature we aim at: validating pre-written ways to configure your Openstack cloud, with proven scripts, so that you don't have to write a 10 000 line setup script.
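As a rough sketch of what preseeding looks like: you write "package template type value" lines to a file and load them before installing. The template names and values below are made-up examples, not the actual templates shipped by the Folsom packages; the real ones can be listed with debconf-get-selections (from the debconf-utils package) on an already-configured node.

```shell
# Write a preseed file with answers to debconf questions.
# NOTE: the template names below are hypothetical examples only.
cat > openstack-preseed.txt <<'EOF'
keystone keystone/admin-password password MySecretAdminPass
glance-common glance/pipeline-flavor select keystone
EOF

# Load the answers before installing, so no prompt appears (run as root):
# debconf-set-selections openstack-preseed.txt
# apt-get install openstack-proxy-node
```

With such a file loaded, the corresponding debconf questions are silently answered during package installation.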
Limits to this approach
Unfortunately, it isn't possible to cover all cases. For example, nova comes with a nova.conf already filled with values which we found relevant, but which might not match your needs. It isn't really possible to have both a Quantum and a nova-network configuration working together on a single nova.conf. So, even with our approach, some level of manual edit of your configuration files will be needed, even though we are aiming to reduce it to the strict minimum (and have a working default setup).
Before Installing Debian Openstack Folsom: building and configuration
Hardware requirement
- The proxy node, containing all the Openstack API servers, can run with about 2GB of RAM.
- You need at least 2 network interfaces on the proxy node, and even preferably 3 if you want to physically separate management traffic and VM traffic.
- 2 NICs are prefered on compute nodes for the same reason (eg: if you want to physically separate management traffic and VM traffic).
- Make sure /tmp has enough space to accommodate snapshotting (i.e. you might want to add "/tmp none none none 0 0" in /etc/fstab to disable tmpfs on /tmp, or even use a separate partition for your /tmp)
After that, we let the reader decide how much RAM and HDD space is needed to run virtual machines.
Network needs
- A public network access and at least one public IP address so your Openstack cloud APIs can be reached from outside.
- Multiple private networks: one for management, and probably multiple private LANs for your VMs. If the machines are not on a LAN, create one with OpenVPN.
- A fixed IP range for guests, in order to give them public IP addresses to run services. These are called public, or "floating" IPs (optional).
Debian install
Install a base Debian Wheezy. Make sure you have enough space in your /tmp (dozens of GB) so that it can store files with the size of an operating system image. Your /var should also be big enough. If you plan on using cinder to store some VM partitions, make sure to use LVM and to leave enough free space on your volume group.
It might be a good idea to install a mail server, so that you can receive messages for root:
# apt-get install postfix
# echo "root: mymailbox@example.com" >>/etc/aliases
# newaliases
# /etc/init.d/postfix reload
# echo "nbd max_part=65" >>/etc/modules # to enable key-file, network & metadata injection into instance images
Networking setup on the proxy node
Your proxy node will be the only machine connected directly to the Internet, through its public IP address. The other physical servers (compute nodes, cinder machines, swift machines, etc.) will connect through your proxy node using standard Linux NAT. As for your virtual machines, they will be connected to the Internet through quantum-server, which provides the networking for your virtual machines. Quantum dynamically creates networks, on demand, for the users of the cloud.
So, very pragmatically, here is how to setup networking on your proxy node.
First, install openvswitch and create 2 bridges. Call the first one "br-ex": all connectivity to the outside world will go through this bridge. Assuming the physical interface connected to the Internet is eth0, you can type:
# ovs-vsctl add-br br-ex
# ovs-vsctl br-set-external-id br-ex bridge-id br-ex
# ovs-vsctl add-port br-ex eth0
Then create a 2nd bridge called "br-int", which will provide all the connectivity for your compute nodes.
# ovs-vsctl add-br br-int
Normally, after the above, you should see the following output:
# ovs-vsctl show
    Bridge br-ex
        Port "eth0"
            Interface "eth0"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
Since you want to provide Internet connectivity to your compute, volume and object nodes, you probably want to add ip_forwarding to your /etc/sysctl.conf, and activate the changes:
# sed -i -e 's/^[ \t#]*net\.ipv4\.ip_forward[ \t]*=.*/net.ipv4.ip_forward=1/' /etc/sysctl.conf
# sysctl -p
Assuming the public IP for your eth0 is 1.2.3.4/24, and that you would like to use 192.168.128.0/18 for your LAN (that means a maximum of 16 384 physical machines can be part of your cloud, each reachable on a single IP on the LAN), your /etc/network/interfaces may look close to this:
auto lo
iface lo inet loopback

auto br-ex
iface br-ex inet static
    address 1.2.3.4
    netmask 255.255.255.0
    network 1.2.3.0
    broadcast 1.2.3.255
    gateway 1.2.3.1

auto eth1
iface eth1 inet static
    address 192.168.128.1
    netmask 255.255.192.0
    network 192.168.128.0
    broadcast 192.168.191.255

auto eth2
iface eth2 inet static
    address 10.0.0.4
    netmask 255.255.0.0
    network 10.0.0.0
    broadcast 10.0.255.255
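The NAT mentioned above is not set up by the packages; a minimal sketch of the masquerading rule, assuming the 192.168.128.0/18 management LAN and the br-ex external bridge from the example, could be:

```shell
# Masquerade traffic from the management LAN out through the external bridge.
# Run as root on the proxy node; to make it persistent you could put it in an
# up-script or use a tool such as iptables-persistent (an assumption, not
# something the Openstack packages do for you).
iptables -t nat -A POSTROUTING -s 192.168.128.0/18 -o br-ex -j MASQUERADE
```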
Networking setup on the compute nodes
Typically, you would install a DHCP server running on the eth1 of your proxy node, so that your compute nodes take a random IP address on the 192.168.128.0/18 network. Since nova has a "my_ip" parameter in /etc/nova/nova.conf, it is important that your compute nodes always keep the same IP. So providing a "static DHCP" address depending on the MAC address is a good idea.
It is best if you can keep the virtual machine traffic separated from the management traffic. To continue with our example, you would assign a DHCP IP from the 192.168.128.0/18 LAN to eth0 of each compute node, and have eth1 bridged to "br-int".
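A sketch of such a "static DHCP" entry, assuming isc-dhcp-server on the proxy node (any DHCP server with static host support will do; the MAC address and IP below are made-up examples). In real life the fragment would be appended to /etc/dhcp/dhcpd.conf:

```shell
# Append a pinned lease for one compute node (example file, example values).
cat >> dhcpd.conf.example <<'EOF'
subnet 192.168.128.0 netmask 255.255.192.0 {
  range 192.168.128.100 192.168.128.200;
  option routers 192.168.128.1;
}
host compute01 {
  hardware ethernet 00:16:3e:aa:bb:cc;   # MAC of the compute node's eth0
  fixed-address 192.168.128.11;          # IP it will always receive
}
EOF
```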
Building the packages
Wheezy backports
There is a small shell script which copies packages from a Debian SID repository and creates a small Debian repository out of them. Just do this:
# git clone http://git.debian.org/git/openstack/openstack-auto-builder.git
# cd openstack-auto-builder
# ./build_backports
Then you can add your newly created repository to your sources.list:
# echo "deb file://`pwd`/backports/debian wheezy-backports main" >>/etc/apt/sources.list
# apt-get update
WARNING!!! This script is completely stupid, and an ugly hack: it only guesses the base URL for a given source package name, then copies everything, even if the official repositories contain multiple versions of a package. You've been warned! Don't expect this to be fixed, and unless you can provide a patch, don't complain.
Openstack and dependency packages
There are many packages, and their build-time and run-time dependencies are complex, so building all of the 25+ packages by hand, in the correct order, can be quite painful. At the time of writing, the Folsom packages aren't available in Debian yet; they are only available through our Git repositories on alioth.debian.org. There is, however, an automated way to build them all using the "openstack-auto-builder" script, also available on Alioth. Simply do the following steps to build:
# git clone git://git.debian.org/git/openstack/openstack-auto-builder.git
# cd openstack-auto-builder
# ./build_openstack
This script will automatically install the necessary build-depends, git clone the current Experimental packaging trees from Alioth and build all packages. You might need to set URL=git://anonscm.debian.org/git/openstack if you don't have ssh access to Alioth, and set a GnuPG signing key in GIT_BUILD_OPT, so that the packages and the repository are signed with the key of your choice. Building will be done in the "sources" folder at the same level as the build_openstack script and your Wheezy backports Debian repository.
Note that a few packages will fail to build, due to problems in their unit tests. To work around this, go into such a package's directory and build without running the tests. For example:
# cd source/glance/glance
# DEB_BUILD_OPTIONS=nocheck git-buildpackage
Once built, you can go back to the rest of the building process:
# cd ../../..
# ./build_openstack
Once building is done, the build_openstack script will create a Debian repository for you in the folder named "repo". You can use that one in your /etc/apt/sources.list:
# echo "deb file:///home/username/openstack-auto-builder/repo/debian experimental main" >>/etc/apt/sources.list
Using the unofficial Debian repository
Before all these packages get uploaded to Debian, we maintain a temporary Debian repository which is updated every so often. Feel free to use it, but please don't consider these packages ready for an upload to Debian just yet. There are 2 repositories: one is a wheezy-backports repository (made with the stupid script above, which doesn't even rebuild anything...), and the other is the Openstack Folsom repository:
# echo "deb http://ftp.gplhost.com/debian/ wheezy-backports main" >>/etc/apt/sources.list
# echo "deb http://ftp.gplhost.com/debian/ openstack main" >>/etc/apt/sources.list
Please use the closest GPLHost mirror for better throughput:
Malaysia | deb http://601.apt-proxy.gplhost.com/debian openstack main
Singapore | deb http://qala-sg.apt-proxy.gplhost.com/debian openstack main
Seattle | deb http://seattle.apt-proxy.gplhost.com/debian openstack main
Atlanta | deb http://ftparchive.gplhost.com/debian openstack main
Atlanta (ipv6) | deb http://ipv6-ftp.gplhost.com/debian openstack main
London | deb http://ftp.gplhost.co.uk/debian openstack main
Paris | deb http://33.apt-proxy.gplhost.com/debian openstack main
Barcelona | deb http://34.apt-proxy.gplhost.com/debian openstack main
Zurich | deb http://601.apt-proxy.gplhost.com/debian openstack main
Haifa | deb http://972.apt-proxy.gplhost.com/debian openstack main
These mirrors also have the wheezy-backports repository. If you wish, you can install the gplhost-archive-keyring to get the gnupg key:
# wget http://ftparchive.gplhost.com/debian/pool/squeeze/main/g/gplhost-archive-keyring/gplhost-archive-keyring_20100926-1_all.deb
# dpkg -i gplhost-archive-keyring_20100926-1_all.deb
Installing
dbconfig-common
If dbconfig-common isn't installed before the setup of your server, important questions might be delayed. Everything will still work, but it is more convenient to set up dbconfig-common by hand beforehand:
# apt-get install dbconfig-common
# dpkg-reconfigure dbconfig-common
dbconfig-common has the following configuration screens:
The dbconfig-common settings matter especially if you plan on using a remote MySQL server: the same choice will have to be made on each of your compute nodes.
Proxy node install
openstack-proxy-node meta-package
After you have both the backports and the Openstack Folsom repositories in your sources.list, and have run apt-get update, simply do:
# apt-get install openstack-proxy-node
In this single command, all the necessary components for controlling your Openstack cloud will be installed on your server. Altogether, that's more than 240 packages. A lot of debconf questions will be asked (nearly 100). Here are a few screenshots so that you know what to answer. Yes, that really is a lot of Debconf questions, but remember that:
* debconf answers can be preseeded (and eventually fully preseeded, so the installation can be fully automated)
* you would otherwise have to configure everything by hand in config files, so this really is a time saver rather than boring useless questions.
Absolutely all of what is asked with debconf is required to have a working proxy node.
If you would like to use nova-network instead of Quantum, then you can prevent the installation of Quantum by not installing recommended packages:
# apt-get install --no-install-recommends openstack-proxy-node
General considerations about answering debconf prompts
A number of packages need the same kinds of answers. For example, glance-common, nova-common, keystone and cinder-common (etc.) all need to use a database, and will ask for connection information. In this howto, we use glance-common (for the keystone communication), glance-api (for registering the endpoint) and cinder-api (for setting up the database) as examples, but this applies to the other packages as well. It is important that you understand what you are doing when you see each of these questions, otherwise your proxy node will not work. So we give detailed explanations here of what you should answer (together with screenshots).
Because of the way debconf is currently designed, it isn't (to the best of my knowledge) possible to control the order in which questions are prompted to the user: the packages will ask for configuration in a fairly random order. For example, you will be asked to configure quantum and its API endpoint (see below for what this is) before configuring keystone. Do not worry, the packages really will be installed in the correct order. To be able to explain what to enter, it didn't make sense to follow the order in which the debconf questions are prompted; therefore, it is left as an exercise to the reader to unscramble all this.
MySQL server
The first Debconf screen you will see will be for setting-up the password of MySQL server as follow:
Keystone
Keystone is not only an auth server for all the Openstack packages, but also a catalog of services that the Openstack clients use to know where to contact each service. Both will need to be configured in order to use Keystone. Keystone uses what it calls an AUTH_TOKEN as a kind of master password to do special administrative tasks (like creating an admin user). This AUTH_TOKEN is stored in /etc/keystone/keystone.conf, and is configured through debconf as follows:
Make sure you use a strong enough password here (it is a good idea to generate one), and remember that password, because you will need it when setting-up the other components of Openstack. Next, you need to configure a first super admin:
By default, "admin" is used as tenant name, and "admin" as super user. You will also need to remember the tenant name, admin name and password, because other packages (like glance-common, nova-common, etc.) will need these to talk to keystone.
Keystone also needs to be registered as an endpoint (see below), so that it can be accessed and used by the cloud users.
So you also need to enter the public IP address that the cloud users will contact to reach your keystone instance:
Finally, enter the region name (see below for what this means):
dbconfig-common
For each package that needs access to a database (e.g. cinder-common, glance-common, keystone, nova-common and quantum-plugin-openvswitch), you will be asked for a database name, a SQL username, and a password, plus the SQL root admin password (if you choose MySQL or PGSQL) in order to create the database (if it doesn't exist). Here is an example with cinder (you will be asked the same questions for the other packages listed above):
The answers to these questions will form an SQL connection directive as follows:
{{ sql_connection = mysql://user:pass@server-hostname:port/dbname }}
You can also edit this by hand in the different configuration files.
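To make the mapping concrete, here is how the debconf answers assemble into that directive, with made-up example values (dbconfig-common writes the equivalent line for you):

```shell
# Hypothetical example answers -- use whatever you entered in debconf.
DB_USER=cinder
DB_PASS=secretpass
DB_HOST=mgmt.example.com
DB_PORT=3306
DB_NAME=cinderdb

# The resulting directive, as it would appear in the configuration file:
SQL_CONNECTION="mysql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "sql_connection = ${SQL_CONNECTION}"
```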
Keystone communication
Most Openstack services need to communicate with Keystone. To do so, they need the service administrator tenant name, username and password. This information is stored in each service configuration file. For example:
{{ /etc/nova/api-paste.ini /etc/glance/glance-api-paste.ini /etc/glance/glance-registry.conf /etc/cinder/api-paste.ini /etc/quantum/api-paste.ini }}
Here's an example of debconf prompts asking for such keystone credentials:
These should match your setup of Keystone (explained above). The screen asking about the admin tenant name, and the one asking about the admin username will not be shown unless you set your debconf priority to medium.
Registering an endpoint
Keystone isn't only for auth, it's also a catalog of services, so that your users can tell which IP address to use when contacting one of the Openstack services. Therefore, each service has to be registered in keystone using the keystone client. The Debian packages automate this task using debconf.
Since it is quite tedious to enter the IP address of your API server each time, the IP address will be guessed by the config script, and set as default answer to debconf. The following script is used internally:
{{{
DEFROUTE_IF=`LC_ALL=C /sbin/route | grep default | awk -- '{ print $8 }'`
DEFROUTE_IP=`LC_ALL=C ip addr show "${DEFROUTE_IF}" | grep inet | head -n 1 | awk '{print $2}' | cut -d/ -f1 | grep -E '^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$'`
}}}
Both the API endpoint IP address and the region name are set with debconf priority "medium", which means that by default, it will not be prompted to the user, unless you do:
dpkg-reconfigure debconf
before installing any of the API servers. If this debconf priority for the API IP address annoys anyone, please let the PKG Openstack team know; we are open to discussion. But I (eg: zigo) believe that this is a nice default, so I've set it this way. Note that another way to set the IP address manually is to first install the packages with debconf priority high, then run (for example for glance):
dpkg-reconfigure glance-api
Here are a few screenshots of the endpoint creation prompts. Answer yes to this one if you wish to create an API endpoint:
Enter here the address of your keystone server. If you used the metapackage openstack-proxy-node, then 127.0.0.1 will work. Note that this question will not be asked unless you have set debconf priority to medium.
Enter here the AUTH_TOKEN value stored in /etc/keystone/keystone.conf, which you configured using debconf when installing keystone:
Enter the public IP address used to reach your service. Note that this IP address will be guessed by the packages (see above), and that you will not see this question unless you set debconf priority to medium.
Openstack has the concept of regions (sometimes loosely called availability zones); enter the name of yours (if you have only one Openstack cloud, then any name is fine, as long as it is consistent across all the Openstack services):
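For reference, what the packages automate is roughly equivalent to the following manual keystone client calls (a sketch based on the Folsom-era keystone CLI; glance is used as an example, and the region, IP and service id are placeholders):

```shell
# Register the glance image service by hand. Requires the SERVICE_TOKEN and
# SERVICE_ENDPOINT environment variables to be set (see "Using Openstack" below).
keystone service-create --name=glance --type=image \
    --description="Glance Image Service"

# Then register its endpoint, using the service id printed by the command above:
keystone endpoint-create --region=<region_name> --service-id=<service_id> \
    --publicurl=http://<public_ip>:9292 \
    --internalurl=http://<public_ip>:9292 \
    --adminurl=http://<public_ip>:9292
```

If the debconf-driven registration did what you wanted, none of this is needed; it is only useful to fix or inspect the catalog afterwards (keystone endpoint-list shows the current state).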
Package specific Debconf questions: glance
Glance-common will ask you which pipeline flavor you want. Choose keystone.
Package specific Debconf questions: nova
Describe here how to configure nova.
Package specific Debconf questions: cinder
Describe here how to configure cinder.
Post configuration
Configuring MySQL server
A number of hosts will need access to your MySQL server over the network. For example, all of your nova-compute hosts will need access to this central database. By default in Debian, a MySQL server is only accessible from localhost, so we need to change that. In /etc/mysql/my.cnf, modify the bind-address value to read:
bind-address = 0.0.0.0
And restart the MySQL server:
# /etc/init.d/mysql restart
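Opening bind-address alone may not be enough: the database users created during installation may only be allowed to connect from localhost. A hedged sketch of granting remote access follows; the database name, user, password and LAN pattern are examples, so adjust them to what you chose during installation. Feed the file to mysql as root (mysql -u root -p < nova-remote-access.sql):

```shell
# Write the grant statements to a file (example names and password).
cat > nova-remote-access.sql <<'EOF'
GRANT ALL PRIVILEGES ON novadb.* TO 'nova'@'192.168.128.%' IDENTIFIED BY 'novadbpass';
FLUSH PRIVILEGES;
EOF
```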
Nova
When installing nova-api, you will be prompted for the services you want to enable. Choose osapi_compute for the nova-api management host. If you are running on a single server, you also need the metadata service (but on a proxy node without a compute service running, you do not want to activate the metadata service).
In the file /etc/nova/nova.conf, edit the following directive to match this:
- Add these configuration options:

vlan_interface=<the private interface, e.g. eth1>
public_interface=<the interface on which the public IP addresses are bound, e.g. eth0>
- In some cases, you may also want to edit these IPs:
{{
dmz_cidr=169.254.169.254/32
ec2_dmz_host=169.254.169.254
metadata_host=169.254.169.254
ec2_host=<mgmt.host>
}}
Restart nova services :
# /etc/init.d/nova-api restart
# /etc/init.d/nova-scheduler restart
openstack-dashboard
If you do not wish to use Quantum (and use nova-network instead), edit /etc/openstack-dashboard/local_settings.py and add:
QUANTUM_ENABLED = False
Then restart your apache web server:
# service apache2 restart
Point your browser to http://<mgmt.host>/, and you'll see the dashboard. You can log in as <admin_user> with password <secret>.
VNC console
Add the following lines to /etc/nova/nova.conf:
novncproxy_base_url=http://<mgmt.host>:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=127.0.0.1
Note: <mgmt.host> will be exposed in horizon and must be a name that resolves from the client machine. It cannot be a name that only resolves on the nodes used to run OpenStack.
compute nodes:
openstack-compute-node meta-package
Note that the <mgmt.node> can also be a compute node. There is no obligation for it to be a separate physical machine. Install the packages required to run instances:
# apt-get install -y openstack-compute-node
Make sure to select only the metadata service for nova-api when prompted by debconf. Remember that Nova should use a single database shared across all of your Openstack servers, so you really should be setting up a remote MySQL database connection on each compute node (see the note about dpkg-reconfigure dbconfig-common above, and how to set up MySQL databases on remote hosts).
If you would like to use nova-network instead of Quantum, then you should explicitly install it:
# apt-get install openstack-compute-node nova-network
Using Openstack
Setting up your environment
In your .bashrc in your home directory, add the following:
export SERVICE_ENDPOINT=http://<mgmt.host>:35357/v2.0/
export SERVICE_TOKEN=<AUTH_TOKEN value>
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=<keystone-admin-password>
export OS_AUTH_URL=http://<mgmt.host>:5000/v2.0/
export OS_VERSION=1.1
export OS_NO_CACHE=1
Once you have done that, you can contact all of your running Openstack services (keystone, nova, glance, cinder, quantum, etc.).
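A quick sanity check that the credentials work, assuming the variables above are loaded in your current shell:

```shell
# Reload the environment, then query the services; both commands should
# succeed without asking for credentials.
. ~/.bashrc
keystone tenant-list    # should list at least the "admin" tenant
nova list               # should print an empty instance table on a fresh cloud
```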
Configuring your first network
Now bootstrap nova:
# nova-manage network create private --fixed_range_v4=<10.1.0.0/16> --network_size=<256> --num_networks=<100>
# nova-manage floating create --ip_range=<192.168.0.224/28>
Checking that the compute services are running as expected
You should be able to see that nova-scheduler is running (a healthy service shows ":-)" in the State column; a dead one shows "XXX"):
# nova-manage service list
Binary           Host            Zone  Status   State  Updated_At
nova-console     <mgmt.host>     nova  enabled  :-)    2012-11-30 14:48:42
nova-consoleauth <mgmt.host>     nova  enabled  :-)    2012-11-30 14:48:43
nova-scheduler   <mgmt.host>     nova  enabled  :-)    2012-11-30 14:48:43
nova-cert        <compute.host>  nova  enabled  :-)    2012-11-30 14:48:44
nova-compute     <compute.host>  nova  enabled  :-)    2012-11-30 14:48:43
nova-network     <compute.host>  nova  enabled  :-)    2012-01-16 12:29:49
Using Glance
Before you can launch virtual machine instances, you need to upload some virtual machine images. Openstack can use the same formats as AWS (Amazon Web Services), e.g. ARI (ramdisk image), AKI (linux kernel) and AMI (HDD image, also linked to the ARI and AKI objects). Here is a simplistic example script showing how to upload TTY Linux:
KERNEL_ID=`glance image-create --name="tty-linux-kernel" --disk-format=aki --container-format=aki < ttylinux-uec-amd64-12.1_2.6.35-22_1-vmlinuz | awk '/ id / { print $4 }'`
INITRD_ID=`glance image-create --name="tty-linux-ramdisk" --disk-format=ari --container-format=ari < ttylinux-uec-amd64-12.1_2.6.35-22_1-loader | awk '/ id / { print $4 }'`
glance image-create --name="tty-linux" --disk-format=ami --container-format=ami --property kernel_id=${KERNEL_ID} --property ramdisk_id=${INITRD_ID} < ttylinux-uec-amd64-12.1_2.6.35-22_1.img
After running this, you should have the following output:
# glance image-list
+--------------------------------------+-------------------+-------------+------------------+----------+--------+
| ID                                   | Name              | Disk Format | Container Format | Size     | Status |
+--------------------------------------+-------------------+-------------+------------------+----------+--------+
| 6d45238c-edba-48e5-b3df-105e54f4e357 | tty-linux-ramdisk | ari         | ari              | 96629    | active |
| 77f1bb15-1131-4abb-8504-49c8ac9c03a1 | tty-linux         | ami         | ami              | 25165824 | active |
| c0832a95-a63e-4390-8957-f994b3d2939d | tty-linux-kernel  | aki         | aki              | 4404752  | active |
+--------------------------------------+-------------------+-------------+------------------+----------+--------+
Using nova
You can now use the nova command line interface:
# nova list
+----+------+--------+----------+
| ID | Name | Status | Networks |
+----+------+--------+----------+
+----+------+--------+----------+
# nova image-list
+--------------------------------------+-------------------+--------+--------+
| ID                                   | Name              | Status | Server |
+--------------------------------------+-------------------+--------+--------+
| 77f1bb15-1131-4abb-8504-49c8ac9c03a1 | tty-linux         | ACTIVE |        |
| c0832a95-a63e-4390-8957-f994b3d2939d | tty-linux-kernel  | ACTIVE |        |
| 6d45238c-edba-48e5-b3df-105e54f4e357 | tty-linux-ramdisk | ACTIVE |        |
+--------------------------------------+-------------------+--------+--------+
# nova flavor-list
+----+-----------+-----------+------+----------+-------+-------------+
| ID | Name      | Memory_MB | Swap | Local_GB | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+----------+-------+-------------+
| 1  | m1.tiny   | 512       |      | 0        | 1     | 1.0         |
| 2  | m1.small  | 2048      |      | 20       | 1     | 1.0         |
| 3  | m1.medium | 4096      |      | 40       | 2     | 1.0         |
| 4  | m1.large  | 8192      |      | 80       | 4     | 1.0         |
| 5  | m1.xlarge | 16384     |      | 160      | 8     | 1.0         |
+----+-----------+-----------+------+----------+-------+-------------+
# nova keypair-list
+------+-------------+
| Name | Fingerprint |
+------+-------------+
+------+-------------+
There are no instances, one AMI image, and some flavors.
To later connect to the instance via ssh, we need to upload an ssh public key:
# nova keypair-add --pub_key <your_public_key_file.pub> <key_name>
# nova keypair-list
+--------+-------------------------------------------------+
| Name   | Fingerprint                                     |
+--------+-------------------------------------------------+
| my_key | 79:40:46:87:74:3a:0e:01:f4:59:00:1b:3a:94:71:72 |
+--------+-------------------------------------------------+
We can now boot an instance from this image:
# nova boot --poll --flavor 1 --image 77f1bb15-1131-4abb-8504-49c8ac9c03a1 --key_name <key_name> <my_first_instance_name>
+------------------------+--------------------------------------+
| Property               | Value                                |
+------------------------+--------------------------------------+
| OS-EXT-STS:power_state | 0                                    |
| OS-EXT-STS:task_state  | scheduling                           |
| OS-EXT-STS:vm_state    | building                             |
| RAX-DCF:diskConfig     | MANUAL                               |
| accessIPv4             |                                      |
| accessIPv6             |                                      |
| adminPass              | HMs5tLK3bPCG                         |
| config_drive           |                                      |
| created                | 2012-01-16T14:14:20Z                 |
| flavor                 | m1.tiny                              |
| hostId                 |                                      |
| id                     | 677750ea-0dd4-43c3-8ae0-ef54cb29915f |
| image                  | tty-linux                            |
| key_name               | pubkey                               |
| metadata               | {}                                   |
| name                   | my_first_instance                    |
| progress               | None                                 |
| status                 | BUILD                                |
| tenant_id              | 1                                    |
| updated                | 2012-01-16T14:14:20Z                 |
| user_id                | 1                                    |
+------------------------+--------------------------------------+
And after a few seconds:
# nova show my_first_instance
+------------------------+----------------------------------------------------------+
| Property               | Value                                                    |
+------------------------+----------------------------------------------------------+
| OS-EXT-STS:power_state | 1                                                        |
| OS-EXT-STS:task_state  | None                                                     |
| OS-EXT-STS:vm_state    | active                                                   |
| RAX-DCF:diskConfig     | MANUAL                                                   |
| accessIPv4             |                                                          |
| accessIPv6             |                                                          |
| config_drive           |                                                          |
| created                | 2012-01-16T14:14:20Z                                     |
| flavor                 | m1.tiny                                                  |
| hostId                 | 9750641c8c79637e01b342193cfa1efd5961c300b7865dc4a5658bdd |
| id                     | 677750ea-0dd4-43c3-8ae0-ef54cb29915f                     |
| image                  | tty-linux                                                |
| key_name               | pubkey                                                   |
| metadata               | {}                                                       |
| name                   | my_first_instance                                        |
| private_0 network      | 10.1.0.3                                                 |
| progress               | None                                                     |
| status                 | ACTIVE                                                   |
| tenant_id              | 1                                                        |
| updated                | 2012-01-16T14:14:37Z                                     |
| user_id                | 1                                                        |
+------------------------+----------------------------------------------------------+
To see the instance console, log in to the compute node and look at the file /var/lib/nova/instances/instance-00000001/console.log (if this is the first instance you created; otherwise replace 00000001 with the highest number present in that folder).
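If you create instances often, a small helper saves hunting for the right directory. This is a sketch, assuming the default nova state path of /var/lib/nova/instances; the show_console name and its optional base-directory argument are our own invention:

```shell
# show_console: print the console.log of the newest instance directory
# under the given nova instances path (default: /var/lib/nova/instances)
show_console() {
    base="${1:-/var/lib/nova/instances}"
    # instance directories are named instance-XXXXXXXX, so a plain
    # lexical sort puts the most recently created one last
    latest=$(ls -d "$base"/instance-* 2>/dev/null | sort | tail -n 1)
    [ -n "$latest" ] && cat "$latest/console.log"
}

# usage: show_console            (default path)
#        show_console /srv/nova  (custom state path)
```
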
We can enable SSH access, create a floating IP, attach it to our instance and SSH into it (log in as user ubuntu for UEC images):
# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# nova floating-ip-create
+--------------+-------------+----------+
| Ip           | Instance Id | Fixed Ip |
+--------------+-------------+----------+
| 172.24.4.224 | None        | None     |
+--------------+-------------+----------+
# nova add-floating-ip my_first_instance 172.24.4.224
# ssh -i my_key debian@172.24.4.224
The authenticity of host '172.24.4.224 (172.24.4.224)' can't be established.
RSA key fingerprint is 55:bf:2e:7f:60:ef:ea:72:b4:af:2a:33:6b:2d:8c:62.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.24.4.224' (RSA) to the list of known hosts.
.
.
.
debian@my-first-instance:~$
If SSH does not work, check the Horizon "Logs" tab associated with the instance. If the instance fails to reach the metadata service with an error that looks like:
DataSourceEc2.py[WARNING]: 'http://169.254.169.254' failed: url error [[Errno 111] Connection refused]
just restart the nova services:

/etc/init.d/nova-compute restart
/etc/init.d/nova-api restart
/etc/init.d/nova-scheduler restart
/etc/init.d/nova-cert restart

The likely cause is that they were not restarted after a change to the configuration files, so the new settings were never taken into account.
nova-volume
Cinder vs Nova Volume
As of the Folsom release, Cinder replaces Nova Volume, which is now deprecated. If you still want to run nova-volume anyway, read the rest of this section.
Outdated howto for nova-volume
Note: as of September 22nd, 2012, the iscsitarget-dkms package must be installed from sid http://packages.qa.debian.org/i/iscsitarget/news/20120920T101826Z.html until it is accepted in wheezy.
The following instructions must be run on the <mgmt.host> node.
apt-get install lvm2 nova-volume iscsitarget iscsitarget-dkms euca2ools
Installing the guestmount package requires a patch until the corresponding packaging bug is fixed http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=669246
apt-get install guestmount
If it fails, apply the following patch to /etc/init.d/zfs-fuse:
root@osc2:~# diff -uNr /etc/init.d/zfs-fuse*
--- /etc/init.d/zfs-fuse        2012-02-06 00:04:24.000000000 -0500
+++ /etc/init.d/zfs-fuse.mod    2012-05-16 05:57:35.000000000 -0400
@@ -1,8 +1,8 @@
 #! /bin/bash
 ### BEGIN INIT INFO
 # Provides:          zfs-fuse
-# Required-Start:    fuse $remote_fs
-# Required-Stop:     fuse $remote_fs
+# Required-Start:    $remote_fs
+# Required-Stop:     $remote_fs
 # Default-Start:     S
 # Default-Stop:      0 6
 # Short-Description: Daemon for ZFS support via FUSE
After applying the patch, install again.
apt-get install guestmount
Assuming /dev/<sda3> is an unused disk partition, create a volume group:
pvcreate /dev/<sda3>
vgcreate nova-volumes /dev/<sda3>
Add the following lines to /etc/nova/nova.conf
iscsi_ip_prefix=192.168.
volume_group=nova-volumes
iscsi_helper=iscsitarget
Apply the following patch to /etc/init.d/nova-volume, to cope with the fact that --volume_group is not accepted as an option by the nova-volume command line.
diff --git a/init.d/nova-volume b/init.d/nova-volume
index 0cdda1b..1d6fa62 100755
--- a/init.d/nova-volume
+++ b/init.d/nova-volume
@@ -45,9 +47,9 @@ do_start()
         fi

         # Adds what has been configured in /etc/default/nova-volume
-        if [ -n ${nova_volume_group} ] ; then
-                DAEMON_ARGS="${DAEMON_ARGS} --volume_group=${nova_volume_group}"
-        fi
+#        if [ -n ${nova_volume_group} ] ; then
+#                DAEMON_ARGS="${DAEMON_ARGS} --volume_group=${nova_volume_group}"
+#        fi

         start-stop-daemon --start --quiet --background --chuid ${NOVA_USER}:nova --make-pidfile --pidfile $PIDFILE --startas $DAEMON --test > /dev/null \
                 || return 1
Fix an absolute path problem in /usr/share/pyshared/nova/rootwrap/volume.py
perl -pi -e 's|/sbin/iscsiadm|/usr/bin/iscsiadm|' /usr/share/pyshared/nova/rootwrap/volume.py
Edit /etc/default/iscsitarget and set
ISCSITARGET_ENABLE=true
Start the iSCSI services:
service iscsitarget start
service open-iscsi start
Start the nova-volume service
/etc/init.d/nova-volume start
Check that the service registered (give it about 10 seconds):
nova-manage service list
The output should contain a line looking like this:
nova-volume openstack nova enabled :-) 2012-05-16 09:38:26
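Since the State column shows ":-)" for live services and "XXX" for dead ones, that output can be turned into an exit code for scripting. This is a sketch, assuming the column layout shown above (State in column 5); the check_services name is our own invention:

```shell
# check_services: read "nova-manage service list" output on stdin and
# exit non-zero (printing the offending lines) if any service reports
# XXX in the State column (column 5) instead of :-)
check_services() {
    awk '$5 == "XXX" { bad = 1; print } END { exit bad }'
}

# usage: nova-manage service list | check_services
```
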
Go to the dashboard and you will be able to create a volume and attach it to a running instance. If anything goes wrong, check the /var/log/nova/nova-volume.log and /var/log/nova/nova-compute.log files first for errors. If you would like to try the euca2ools commands instead of the dashboard you can use the examples shown at http://docs.openstack.org/trunk/openstack-compute/admin/content/managing-volumes.html (as of May 16th, 2012). Before running these commands you need to do the following:
login to the dashboard as <admin_user>
go to Settings
click on "EC2 Credentials"
click on "Download EC2 Credentials"
unzip the downloaded file
source ec2rc.sh
This will define the environment variables necessary for commands such as
euca-describe-volumes
to display the list of active volumes as follows
root@openstack:~/euca2ools# euca-describe-volumes
VOLUME  vol-00000002  1  nova  available (67af2aec0bb94cc29a43c5bca21ce3d4, openstack, None, None)  2012-05-16T09:54:23.000Z
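Building on the output format above, a small filter can extract just the IDs of volumes in the "available" state, which is handy before attaching one in a script. A sketch, assuming the whitespace-separated VOLUME lines shown above; the available_volumes name is our own invention:

```shell
# available_volumes: read euca-describe-volumes output on stdin and
# print the IDs (field 2) of volumes whose status (field 5) is "available"
available_volumes() {
    awk '$1 == "VOLUME" && $5 == "available" { print $2 }'
}

# usage: euca-describe-volumes | available_volumes
```
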
Swift
swift nodes:
Assume three machines installed with Squeeze, the primary node being the OpenStack mgmt.host node, and no puppet or puppetmaster installed yet.
swift primary node
apt-get install libmysql-ruby ruby-activerecord-2.3 sqlite3 puppetmaster puppet ruby-sqlite3
Puppet configuration:
diff --git a/puppet/puppet.conf b/puppet/puppet.conf
index b18fae3..ce4ed22 100644
--- a/puppet/puppet.conf
+++ b/puppet/puppet.conf
@@ -7,6 +7,8 @@
 factpath=$vardir/lib/facter
 templatedir=$confdir/templates
 prerun_command=/etc/puppet/etckeeper-commit-pre
 postrun_command=/etc/puppet/etckeeper-commit-post
+pluginsync=true
+storeconfigs=true
 [master]
 # These are needed when the puppetmaster is run by passenger

commit 507105065306433eec8f03dd72ab52ccaf268ccc
Author: root <root@sd-16961.dedibox.fr>
Date:   Mon Apr 2 15:04:53 2012 +0200

    configure database storage

diff --git a/puppet/puppet.conf b/puppet/puppet.conf
index ce4ed22..af220e9 100644
--- a/puppet/puppet.conf
+++ b/puppet/puppet.conf
@@ -9,10 +9,19 @@
 prerun_command=/etc/puppet/etckeeper-commit-pre
 postrun_command=/etc/puppet/etckeeper-commit-post
 pluginsync=true
 storeconfigs=true
+server=mgmt.host
 [master]
 # These are needed when the puppetmaster is run by passenger
 # and can safely be removed if webrick is used.
 ssl_client_header = SSL_CLIENT_S_DN
 ssl_client_verify_header = SSL_CLIENT_VERIFY
+storeconfigs=true
+# Needed for storeconfigs=true
+dbadapter=mysql
+dbname=puppet
+dbuser=puppet
+dbpassword=password
+dbserver=localhost
+dbsocket=/var/run/mysqld/mysqld.sock
Setup mysql for puppet:
mysqladmin create puppet
mysql -e "grant all on puppet.* to 'puppet'@'localhost' identified by 'password';"
Install openstack modules for puppet:
cd /etc/puppet
git clone git://git.labs.enovance.com/openstack-puppet-modules.git modules && cd modules && git submodule init && git submodule update
cp /etc/puppet/modules/swift/examples/multi.pp /etc/puppet/manifests/site.pp
commit 8eb77223e25bfff1284612417efedd228e0c6696
Author: root <root@sd-16961.dedibox.fr>
Date:   Mon Apr 2 15:37:19 2012 +0200

    use tap0 for lan

diff --git a/puppet/manifests/site.pp b/puppet/manifests/site.pp
index a915aea..9b890b0 100644
--- a/puppet/manifests/site.pp
+++ b/puppet/manifests/site.pp
@@ -28,7 +28,7 @@
 $swift_shared_secret='changeme'

 # assumes that the ip address where all of the storage nodes
 # will communicate is on eth1
-$swift_local_net_ip = $ipaddress_eth0
+$swift_local_net_ip = $ipaddress_tap0

 Exec { logoutput => true }
Enable puppet autosign for all hosts:
echo '*' > /etc/puppet/autosign.conf
Deploy swift configuration on the proxy:
chown -R puppet:puppet /var/lib/puppet/
puppet agent --certname=swift_storage_1 --server=mgmt.host --verbose --debug --test
/etc/init.d/xinetd reload
swift secondary nodes
deb http://ftp.fr.debian.org/debian/ wheezy main
deb http://ftp.fr.debian.org/debian/ sid main

apt-get install python2.7=2.7.2-8 python2.7-minimal=2.7.2-8 libpython2.7=2.7.2-8
echo libpython2.7 hold | dpkg --set-selections
echo python2.7 hold | dpkg --set-selections
echo python2.7-minimal hold | dpkg --set-selections
apt-get install puppet ruby-sqlite3
puppet agent --certname=swift_storage_3 --server=openstack-online-0001.dedibox.fr --verbose --debug --test
create swift ring
puppet agent --certname=swift_proxy --server=openstack-online-0001.dedibox.fr --verbose --debug --test
propagate the swift configuration
puppet agent --certname=swift_storage_1 --server=openstack-online-0001.dedibox.fr --verbose --debug --test
puppet agent --certname=swift_storage_2 --server=openstack-online-0001.dedibox.fr --verbose --debug --test
puppet agent --certname=swift_storage_3 --server=openstack-online-0001.dedibox.fr --verbose --debug --test
check that it works
On the proxy / mgmt.host node:
# cd /etc/puppet/modules/swift/ext
# ruby swift.rb
getting credentials: curl -k -v -H "X-Storage-User: test:tester" -H 'X-Storage-Pass: testing' http://127.0.0.1:8080/auth/v1.0
verifying connection auth: curl -k -v -H "X-Auth-Token: AUTH_tk5d5a63abdf90414eafd890ed710d357b" http://127.0.0.1:8080/v1/AUTH_test
Testing swift: swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat
found containers/objects: 0/0
Uploading file to swift with command: swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing upload my_container /tmp/foo1
tmp/foo1
Downloading file with command: swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing download my_container
tmp/foo1
Dude!!!! It actually seems to work, we can upload and download files!!!!
To authenticate swift requests against keystone instead of tempauth, modify /etc/swift/proxy-server.conf as follows:

diff --git a/swift/proxy-server.conf b/swift/proxy-server.conf
index 83dda1e..8364fe7 100644
--- a/swift/proxy-server.conf
+++ b/swift/proxy-server.conf
@@ -7,7 +7,8 @@ user = swift

 [pipeline:main]
 # ratelimit?
-pipeline = healthcheck cache tempauth proxy-server
+#pipeline = healthcheck cache tempauth proxy-server
+pipeline = healthcheck cache tokenauth keystone proxy-server

 [app:proxy-server]
 use = egg:swift#proxy
@@ -28,3 +29,17 @@ use = egg:swift#healthcheck
 use = egg:swift#memcache
 # multi-proxy config not supported
 memcache_servers = 127.0.0.1:11211
+
+[filter:tokenauth]
+paste.filter_factory = keystone.middleware.auth_token:filter_factory
+service_port = 5000
+service_protocol = http
+service_host = 127.0.0.1
+auth_port = 35357
+auth_protocol = http
+auth_host = 127.0.0.1
+admin_token = ADMIN
+
+[filter:keystone]
+paste.filter_factory = keystone.middleware.swift_auth:filter_factory
+operator_roles = admin, swiftoperator, projectmanager
/etc/init.d/swift-proxy restart
swift command line
apt-get install swift
swift -U $OS_TENANT_NAME:$OS_USERNAME list
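Once listing works, the same client can create containers and move objects around, mirroring what the swift.rb test did earlier. A sketch, assuming credentials are passed the same way as in the list example above; my_container and /tmp/foo1 are arbitrary example names:

```shell
# upload a file into a container (created on the fly if missing),
# show account statistics, then fetch the object back (note that
# swift stores it under the relative path tmp/foo1)
swift -U $OS_TENANT_NAME:$OS_USERNAME upload my_container /tmp/foo1
swift -U $OS_TENANT_NAME:$OS_USERNAME stat
swift -U $OS_TENANT_NAME:$OS_USERNAME download my_container tmp/foo1
```
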