= HOWTO: Openstack on Debian GNU/Linux unstable (sid) =
This howto aims to provide guidelines to install and set up a multi-node OpenStack Compute (aka Nova) environment.
This environment will include:
 * one “proxy” node (host name '''<proxy.host>''') with the following services:
  * nova-api
  * nova-scheduler
  * glance
  * keystone
  * mysql
  * rabbitmq
  * memcached
 * one or more pure “compute” nodes (host name '''<computeNN.host>''') with the following services:
  * nova-compute
  * nova-network
  * nova-api (with only the metadata API enabled)
== CONVENTIONS ==
In formatted blocks:
 * command lines starting with a '''#''' must be run as root.
 * values between '''<''' and '''>''' must be replaced by your values.
== PREREQUISITES ==
Things to prepare beforehand:
 * Machines:
  * They should have two network interfaces, for security: if only one interface is used, the private part is more exposed to attacks coming from the public part.
   * a ''public'' one to communicate with the outside world
   * a ''private'' one for the guest VLANs
 * Network:
  * public network
  * private network. If the machines are not on a LAN, [[L2-openvpn|create one with OpenVPN]].
  * fixed IP range for guests
  * number of networks for guests
  * network size for guests
  * public “floating” IPs (optional)
  * {{{echo 1 > /proc/sys/net/ipv4/ip_forward}}}
  * {{{echo "nbd max_part=65" >> /etc/modules}}} (to enable key-file, network and metadata injection into instance images)
 * Distribution:
  * Debian GNU/Linux squeeze
  * Add wheezy and sid to '''/etc/apt/sources.list''' (see the sketch after this list)
  * {{{apt-get update}}}
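
A minimal '''/etc/apt/sources.list''' sketch for that squeeze/wheezy/sid mix (the mirror URL is only an example; use your usual Debian mirror):

{{{
# /etc/apt/sources.list -- example mirror, adjust to taste
deb http://ftp.debian.org/debian squeeze main
deb http://ftp.debian.org/debian wheezy main
deb http://ftp.debian.org/debian sid main
}}}

Then refresh the package lists with {{{apt-get update}}} as noted in the list above.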

== IMPORTANT ==

This HOWTO is valid for the OpenStack Nova packages labelled 2012.1~e2, currently available in Debian GNU/Linux unstable (sid), and might need some adjustments for later versions.

== Technical Choices ==

We will be using:
 * [[http://docs.openstack.org/diablo/openstack-compute/admin/content/networking-options.html|Multi-host VLAN networking mode]]
 * Keystone for authentication
 * KVM as hypervisor
 * MySQL as database backend

== Installation ==

=== proxy node: ===

==== Hostname ====
In the following replace '''<proxy.host>''' with the actual hostname of the machine chosen to be the proxy node.

==== Packages installation ====

Install dependencies:

{{{# apt-get install -y mysql-server rabbitmq-server memcached}}}

Note: either do not set a MySQL root password, or add the '''-p''' option to all mysql-related commands below.
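
If you prefer to set a root password anyway, a minimal sketch (the password value is a placeholder):

{{{
# mysqladmin -u root password '<mysql_root_password>'
# mysql -p -e "select version();"
}}}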

In '''/etc/mysql/my.cnf''' modify the '''bind-address''' value to read:

{{{bind-address = 0.0.0.0}}}


(or better, instead of '''0.0.0.0''', the IP address of a private interface on which the other nodes can reach the proxy.)

And restart the MySQL server:

{{{# /etc/init.d/mysql restart}}}


Create the '''nova''' MySQL database and its associated user:

{{{
# mysqladmin create nova
# mysql -e "grant all on nova.* to '<nova_user>' identified by '<nova_secret>'"
# mysqladmin flush-privileges
}}}
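
You can check that the grant works and that MySQL accepts remote connections, for example from a future compute node (the ''show databases'' query is just a quick test):

{{{
# mysql -h <proxy.host> -u <nova_user> -p<nova_secret> -e "show databases"
}}}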


Now install some OpenStack packages:

{{{# apt-get install -y nova-api nova-scheduler keystone}}}


==== Configuration ====

===== Keystone =====

Answer the debconf questions and choose the defaults.

Add a project (tenant) and an admin user:

{{{
# keystone-manage tenant add admin
# keystone-manage user add admin <admin_password>
# keystone-manage role grant Admin admin admin
# keystone-manage role grant Admin admin
# keystone-manage role grant KeystoneAdmin admin
# keystone-manage role grant KeystoneServiceAdmin admin
}}}


Add services:

{{{
# keystone-manage service add nova compute "Nova Compute Service"
# keystone-manage service add ec2 ec2 "EC2 Compatibility Layer"
# keystone-manage service add glance image "Glance Image Service"
# keystone-manage service add keystone identity "Keystone Identity Service"
}}}


Endpoint templates for the region:

{{{
# keystone-manage endpointTemplates add RegionOne nova http://<proxy.host>:8774/v1.1/%tenant_id% http://<proxy.host>:8774/v1.1/%tenant_id% http://<proxy.host>:8774/v1.1/%tenant_id% 1 1
# keystone-manage endpointTemplates add RegionOne ec2 http://<proxy.host>:8773/services/Cloud http://<proxy.host>:8773/services/Admin http://<proxy.host>:8773/services/Cloud 1 1
# keystone-manage endpointTemplates add RegionOne glance http://<proxy.host>:9292/v1/%tenant_id% http://<proxy.host>:9292/v1/%tenant_id% http://<proxy.host>:9292/v1/%tenant_id% 1 1
# keystone-manage endpointTemplates add RegionOne keystone http://<proxy.host>:5000/v2.0 http://<proxy.host>:35357/v2.0 http://<proxy.host>:5000/v2.0 1 1
}}}


And finally, a service token with a “far far away” expiration date (used by other services to talk to Keystone) and the credentials for the admin account:

{{{
# keystone-manage token add <service_token> admin admin 2047-12-31T13:37
# keystone-manage credentials add admin EC2 'admin' '<admin_password>' admin
}}}
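
The '''<service_token>''' is an arbitrary shared secret that you pick yourself; any hard-to-guess string will do, for instance one generated with openssl (assuming openssl is installed):

{{{
# openssl rand -hex 16
}}}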


'''Note:''' the '''<service_token>''' value will be pasted into the nova and glance configs later.

===== Glance =====

{{{# apt-get install -y glance}}}

Glance-common will ask you which pipeline flavor you want; choose ''keystone''. It will then ask for the ''auth server URL''; answer ''http://<proxy.host>:5000''. Finally, paste the '''<service_token>''' you got from Keystone in the previous step when debconf asks for it.
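
To check that Glance is up and can talk to Keystone, you can list the (still empty) image index; the ''index'' subcommand below is from the legacy glance CLI shipped around Essex, so adjust if your client differs:

{{{
# glance --auth_token=<service_token> index
}}}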

===== Nova =====

In the file '''/etc/nova/api-paste.ini''':
 * In sections '''pipeline:ec2cloud''' and '''pipeline:ec2admin''':
  * Replace "'''ec2noauth'''" with "'''authtoken keystonecontext'''"
 * In section '''pipeline:openstack_api_v2''':
  * Replace "'''noauth'''" with "'''authtoken keystonecontext'''"
 * Add the following sections, replacing '''<proxy.host>''' and '''<service_token>''':
{{{
[filter:keystonecontext]
paste.filter_factory = keystone.middleware.nova_keystone_context:NovaKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = <proxy.host>
service_port = 5000
auth_host = <proxy.host>
auth_port = 35357
auth_protocol = http
auth_uri = http://<proxy.host>:5000/
admin_token = <service_token>
}}}

In the file '''/etc/nova/nova.conf''':

 * Add these configuration options:
{{{
## Network config
# A nova-network on each compute node
--multi_host
# VLAN manager
--network_manager=nova.network.manager.VlanManager
--vlan_interface=<the private interface, e.g. eth1>
# Tenant networks, e.g. prepare 100 networks, each one a /24, starting from 10.1.0.0
--num_networks=<100>
--network_size=<256>
--fixed_range=<10.1.0.0/16>
# My IP
--my-ip=<the current machine ip address>
--public_interface=<the public interface, e.g. eth0>
# DMZ & metadata things
--dmz_cidr=169.254.169.254/32
--ec2_dmz_host=169.254.169.254
--metadata_host=169.254.169.254
## More general things
# Sure, daemonize
--daemonize=1
# The database connection string
--sql_connection=mysql://<nova_user>:<nova_secret>@<proxy.host>/nova
# The RabbitMQ host
--rabbit_host=<proxy.host>
## Glance
--image_service=nova.image.glance.GlanceImageService
--glance_api_servers=<proxy.host>:9292
# if you want
--use-syslog
## API
--osapi_host=<proxy.host>
--ec2_host=<proxy.host>
# Load some extensions
--osapi_extension=nova.api.openstack.v2.contrib.standard_extensions
--osapi_extension=extensions.admin.Admin
# Allow access to some “admin-only” api features
--allow_admin_api
}}}

Restart the nova services:

{{{
# /etc/init.d/nova-api restart
# /etc/init.d/nova-scheduler restart
}}}


Now bootstrap Nova:

{{{
# nova-manage db sync
# nova-manage network create private --fixed_range_v4=<10.1.0.0/16> --network_size=<256> --num_networks=<100>
# nova-manage floating create <192.168.0.224/28>
}}}
Note: the values chosen for '''--fixed_range_v4''', '''--network_size''' and '''--num_networks''' must match the values of the corresponding options set in the nova.conf file above. As a sanity check, 100 networks of 256 addresses each use 25,600 addresses, which fits in the 65,536 addresses of the 10.1.0.0/16 fixed range.

You should be able to see that '''nova-scheduler''' is running (the OK state is ''':-)''', KO is '''XXX'''):

{{{
# nova-manage service list
Binary Host Zone Status State Updated_At
nova-scheduler openstack04 nova enabled :-) 2012-01-13 17:29:48
}}}
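
If a service shows '''XXX''' instead of ''':-)''', look at its log; with the Debian packages the logs normally live under '''/var/log/nova/''' (adjust the path if your setup differs), e.g.:

{{{
# tail -n 50 /var/log/nova/nova-scheduler.log
}}}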

=== compute nodes: ===

==== Packages installation ====

Now install the OpenStack packages:

{{{# apt-get install -y nova-compute nova-api nova-network python-keystone}}}


Apply "this patch":https://github.com/openstack/nova/commit/6ce042cafbf410a213c5d7937b93784e8f0a1655 to file '''/usr/share/pyshared/nova/api/metadata/handler.py''' if not already done.

==== Configuration ====

===== Nova =====

The file '''/etc/nova/api-paste.ini''' can be copied verbatim from the proxy host.
The file '''/etc/nova/nova.conf''' can be copied from the proxy host and modified as follows:
 * Set the IP of the machine:
{{{--my-ip=<the current machine ip address>}}}
 * Only load the metadata API on compute-only nodes:
{{{--enabled_apis=metadata}}}

Restart the services:

{{{
# /etc/init.d/nova-api restart
# /etc/init.d/nova-network restart
# /etc/init.d/nova-compute restart
}}}


On the proxy, check that everything seems to be running:

{{{
# nova-manage service list
Binary Host Zone Status State Updated_At
nova-scheduler <proxy.host> nova enabled :-) 2012-01-16 12:29:53
nova-compute compute.host nova enabled :-) 2012-01-16 12:29:52
nova-network compute.host nova enabled :-) 2012-01-16 12:29:49
}}}

It should be working \o/

== Using it ==

To use the nova CLI, you will need to export some environment variables:

{{{
# export NOVA_USERNAME=admin
# export NOVA_API_KEY=<admin_password>
# export NOVA_PROJECT_ID=admin
# export NOVA_URL=http://<proxy.host>:5000/v2.0/
# export NOVA_VERSION=1.1
}}}
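
To avoid re-exporting these variables in every new shell, you can keep them in a small file and source it; a minimal sketch (the '''~/.novarc''' name is just a convention, not something the tools require):

{{{
# cat > ~/.novarc <<'EOF'
export NOVA_USERNAME=admin
export NOVA_API_KEY=<admin_password>
export NOVA_PROJECT_ID=admin
export NOVA_URL=http://<proxy.host>:5000/v2.0/
export NOVA_VERSION=1.1
EOF
# . ~/.novarc
}}}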


You can now use the '''nova''' command-line interface:

{{{
# nova list
+----+------+--------+----------+
| ID | Name | Status | Networks |
+----+------+--------+----------+
+----+------+--------+----------+
# nova image-list
+----+------+--------+--------+
| ID | Name | Status | Server |
+----+------+--------+--------+
+----+------+--------+--------+
# nova flavor-list
+----+-----------+-----------+------+----------+-------+-------------+
| ID | Name | Memory_MB | Swap | Local_GB | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+----------+-------+-------------+
| 1 | m1.tiny | 512 | | 0 | 1 | 1.0 |
| 2 | m1.small | 2048 | | 20 | 1 | 1.0 |
| 3 | m1.medium | 4096 | | 40 | 2 | 1.0 |
| 4 | m1.large | 8192 | | 80 | 4 | 1.0 |
| 5 | m1.xlarge | 16384 | | 160 | 8 | 1.0 |
+----+-----------+-----------+------+----------+-------+-------------+
# nova keypair-list
+------+-------------+
| Name | Fingerprint |
+------+-------------+
+------+-------------+
}}}

There are no instances, no images, and a few default flavors. First we need to get an image and upload it to Glance:

{{{
# wget http://uec-images.ubuntu.com/releases/11.10/release/ubuntu-11.10-server-cloudimg-amd64-disk1.img
[...]
# glance --auth_token=<service_token> add name="Ubuntu 11.10 clouding amd64" < ubuntu-11.10-server-cloudimg-amd64-disk1.img
Added new image with ID: 78651eea-02f6-4750-945a-4524a77f7da9
# nova image-list
+--------------------------------------+-----------------------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+-----------------------------+--------+--------+
| 78651eea-02f6-4750-945a-4524a77f7da9 | Ubuntu 11.10 clouding amd64 | ACTIVE | |
+--------------------------------------+-----------------------------+--------+--------+
}}}
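
Depending on the image, Glance may also need the disk and container formats spelled out; a hedged variant of the upload command above (the ''qcow2''/''bare'' values are assumptions that happen to fit this UEC image):

{{{
# glance --auth_token=<service_token> add name="Ubuntu 11.10 clouding amd64" disk_format=qcow2 container_format=bare is_public=true < ubuntu-11.10-server-cloudimg-amd64-disk1.img
}}}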

To later connect to the instance via SSH, we will need to upload an SSH public key:
{{{
# nova keypair-add --pub_key <your_public_key_file.pub> <key_name>
# nova keypair-list
+--------+-------------------------------------------------+
| Name | Fingerprint |
+--------+-------------------------------------------------+
| my_key | 79:40:46:87:74:3a:0e:01:f4:59:00:1b:3a:94:71:72 |
+--------+-------------------------------------------------+
}}}
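
If you do not have a key pair yet, you can generate one with ssh-keygen and upload its public half (''my_key'' is just an example name):

{{{
# ssh-keygen -t rsa -b 2048 -f my_key -N ""
# nova keypair-add --pub_key my_key.pub my_key
}}}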

We can now boot an instance with this image:

{{{
# nova boot --flavor 1 --image 78651eea-02f6-4750-945a-4524a77f7da9 --key_name my_key my_first_instance
+------------------------+--------------------------------------+
| Property | Value |
+------------------------+--------------------------------------+
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| RAX-DCF:diskConfig | MANUAL |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | HMs5tLK3bPCG |
| config_drive | |
| created | 2012-01-16T14:14:20Z |
| flavor | m1.tiny |
| hostId | |
| id | 677750ea-0dd4-43c3-8ae0-ef54cb29915f |
| image | Ubuntu 11.10 clouding amd64 |
| key_name | pubkey |
| metadata | {} |
| name | my_first_instance |
| progress | None |
| status | BUILD |
| tenant_id | 1 |
| updated | 2012-01-16T14:14:20Z |
| user_id | 1 |
+------------------------+--------------------------------------+
}}}

And after a few seconds:

{{{
# nova show my_first_instance
+------------------------+----------------------------------------------------------+
| Property | Value |
+------------------------+----------------------------------------------------------+
| OS-EXT-STS:power_state | 1 |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| RAX-DCF:diskConfig | MANUAL |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2012-01-16T14:14:20Z |
| flavor | m1.tiny |
| hostId | 9750641c8c79637e01b342193cfa1efd5961c300b7865dc4a5658bdd |
| id | 677750ea-0dd4-43c3-8ae0-ef54cb29915f |
| image | Ubuntu 11.10 clouding amd64 |
| key_name | pubkey |
| metadata | {} |
| name | my_first_instance |
| private_0 network | 10.1.0.3 |
| progress | None |
| status | ACTIVE |
| tenant_id | 1 |
| updated | 2012-01-16T14:14:37Z |
| user_id | 1 |
+------------------------+----------------------------------------------------------+
}}}

To see the instance console, go to the compute node and look at the file '''/var/lib/nova/instances/instance-00000001/console.log''' (if this is the first instance you created; otherwise change '''00000001''' to the latest one available in the folder).
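
For example, to follow the boot messages as they appear (path as packaged in Debian, shown above):

{{{
# tail -f /var/lib/nova/instances/instance-00000001/console.log
}}}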

We can allow SSH access, create a floating IP, attach it to our instance and SSH into it (as user '''ubuntu''' for UEC images):

{{{
# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# nova floating-ip-create
+--------------+-------------+----------+
| Ip | Instance Id | Fixed Ip |
+--------------+-------------+----------+
| 172.24.4.224 | None | None |
+--------------+-------------+----------+
# nova add-floating-ip my_first_instance 172.24.4.224
# ssh -i my_key ubuntu@172.24.4.224
The authenticity of host '172.24.4.224 (172.24.4.224)' can't be established.
RSA key fingerprint is 55:bf:2e:7f:60:ef:ea:72:b4:af:2a:33:6b:2d:8c:62.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.24.4.224' (RSA) to the list of known hosts.
Welcome to Ubuntu 11.10 (GNU/Linux 3.0.0-14-virtual x86_64)

 * Documentation: https://help.ubuntu.com/

System information as of Mon Jan 16 14:58:15 UTC 2012

System load: 0.59 Processes: 59
Usage of /: 32.6% of 1.96GB Users logged in: 0
Memory usage: 6% IP address for eth0: 10.1.0.5
Swap usage: 0%

Graph this data and manage this system at https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest
http://www.ubuntu.com/business/services/cloud

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

/usr/bin/xauth: file /home/ubuntu/.Xauthority does not exist
To run a command as administrator (user 'root'), use 'sudo <command>'.
See "man sudo_root" for details.

ubuntu@my-first-instance:~$
}}}

Et voilà !
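
When you are done experimenting, you can detach and release the floating IP and delete the instance; a short cleanup sketch using standard nova subcommands:

{{{
# nova remove-floating-ip my_first_instance 172.24.4.224
# nova floating-ip-delete 172.24.4.224
# nova delete my_first_instance
}}}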

= Page moved =

This page used to host the howto for OpenStack 2012.1, code name Essex, which is in Debian 7.0 Wheezy (currently Debian testing, soon to be released). As there are multiple howtos, this page has been moved to: https://wiki.debian.org/OpenStackHowto/Essex

= OpenStack Howtos index =

All these howtos have been tested under Wheezy, with some backports from Experimental.

 * OpenStack 2012.1 (Essex): https://wiki.debian.org/OpenStackHowto/Essex
 * OpenStack 2012.2 (Folsom): https://wiki.debian.org/OpenStackHowto/Folsom
 * Networking with Quantum: https://wiki.debian.org/OpenStackHowto/Quantum
 * Puppet Howto: http://wiki.debian.org/OpenStackPuppetHowto
 * Razor howto: http://wiki.debian.org/OpenStackRazorHowto
 * OpenStack with Ceph: http://wiki.debian.org/OpenStackCephHowto