OpenStack All in One

With this guide, you'll be able to deploy OpenStack Juno with the latest OSS version of MidoNet and test it all in a single virtual machine.

The Puppet Forge (uploaded modules) way

This tutorial is tested against Ubuntu 14.04, but it should work for CentOS 7 too.

Prepare the environment

If you run into the annoying unable to resolve host '<your-hostname>' error, execute the following:

    sed -i "1c\127.0.0.1 $(hostname) localhost" /etc/hosts
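
To double-check the fix, confirm that the hostname now resolves through /etc/hosts:

    getent hosts $(hostname)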

Install Puppet and the MidoNet Puppet modules on your machine:

(run as root)

    apt-get -y update
    apt-get install -y puppet
    puppet module install midonet-midonet_openstack
    puppet module install --force midonet-neutron

We need to force-install the neutron module because we maintain our own fork of it for the Juno release, and a neutron module is already present after installing midonet-midonet_openstack, pulled in through the puppetlabs-openstack dependency. This step won't be needed from Kilo onwards.
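
If you want to verify which module versions actually ended up installed (and that the forked neutron module is in place), puppet module list prints the installed tree:

    puppet module list --tree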

Create a macvlan L2 device and associate an IP address with it. This trick lets you copy-paste the configuration files of this tutorial as-is, without having to adapt the IP addresses and networks of your virtual machine's network stack in the Hiera configuration files. It just works.

    ip link add osservices link eth0 type macvlan
    ip addr add 172.28.28.4/24 dev osservices
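
You can verify that the device exists and answers locally before continuing:

    ip addr show dev osservices
    ping -c 1 172.28.28.4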

Configure Puppet Files

Configure Hiera. Hiera should load the module_data backend to leverage all the configuration options already stored in the puppet-midonet/data files, so you only have to define the ones you want to override. Hiera's common.yaml will live in /var/lib/hiera.

cat <<EOF > /etc/puppet/hiera.yaml
---
:backends:
  - yaml
  - module_data
:yaml:
  :datadir: /var/lib/hiera
:hierarchy:
  - common
EOF
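
Note that the module_data backend is not part of Hiera itself; it comes from the ripienaar-module_data Forge module, which should already be present as a dependency of the modules installed above. If Hiera complains about an unknown backend, installing it explicitly should fix it:

    puppet module install ripienaar-module_data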

Make sure /var/lib/hiera exists:

    mkdir -p /var/lib/hiera

Configure hiera data:

cat <<EOF > /var/lib/hiera/common.yaml
openstack::region: 'openstack'

######## Networks
openstack::network::api: '172.28.28.4/24'
openstack::network::external: '192.168.22.0/24'
openstack::network::management: '172.28.28.4/24'
openstack::network::data: '192.168.22.0/24'

openstack::network::external::ippool::start: 192.168.22.100
openstack::network::external::ippool::end: 192.168.22.200
openstack::network::external::gateway: 192.168.22.2
openstack::network::external::dns: 192.168.22.2

######## Private Neutron Network

openstack::network::neutron::private: '10.0.0.0/24'

######## Fixed IPs (controllers)

openstack::controller::address::api: '172.28.28.4'
openstack::controller::address::management: '172.28.28.4'
openstack::storage::address::api: '172.28.28.4'
openstack::storage::address::management: '172.28.28.4'

######## Database

openstack::mysql::root_password: 'testmido'
openstack::mysql::service_password: 'testmido'
openstack::mysql::allowed_hosts: ['172.28.28.4']

openstack::mysql::keystone::user: 'keystone'
openstack::mysql::keystone::pass: 'testmido'

openstack::mysql::glance::user: 'glance'
openstack::mysql::glance::pass: 'testmido'
openstack::glance::api_servers: ['172.28.28.4:9292']

openstack::mysql::nova::user: 'nova'
openstack::mysql::nova::pass: 'testmido'

openstack::mysql::neutron::user: 'neutron'
openstack::mysql::neutron::pass: 'testmido'

######## RabbitMQ

openstack::rabbitmq::user: 'openstack'
openstack::rabbitmq::password: 'testmido'
openstack::rabbitmq::hosts: ['172.28.28.4:5672']

######## Keystone

openstack::keystone::admin_token: 'testmido'
openstack::keystone::admin_email: 'mido-dev@lists.midonet.org'
openstack::keystone::admin_password: 'testmido'

openstack::keystone::tenants:
    "midokura":
        description: "Test tenant"

openstack::keystone::users:
    "midoadmin":
        password: "midoadmin"
        tenant: "midokura"
        email: "foo@midokura.com"
        admin: true
    "midouser":
        password: "midouser"
        tenant: "midokura"
        email: "bar@midokura.com"
        admin: false
    "midonet":
        password: 'testmido'
        tenant: 'services'
        email: "midonet@midokura.com"
        admin: true

######## Glance

openstack::glance::password: 'midokura'

######## Cinder

openstack::cinder::password: 'testmido'
openstack::cinder::volume_size: '8G'

######## Swift

openstack::swift::password: 'dexc-flo'
openstack::swift::hash_suffix: 'pop-bang'

######## Nova

openstack::nova::libvirt_type: 'qemu'
openstack::nova::password: 'testmido'

######## Neutron

openstack::neutron::password: 'testmido'
openstack::neutron::shared_secret: 'testmido'
openstack::neutron::core_plugin: 'midonet.neutron.plugin.MidonetPluginV2'
openstack::neutron::service_plugins: []

######## Horizon

openstack::horizon::secret_key: 'testmido'
openstack::horizon::horizon_server_aliases: ['*']
openstack::horizon::allowed_hosts: ['*']

# Even though some of this data is not deployed, it seems to be mandatory
# for the puppetlabs-openstack project, so we keep it configured.
######## Tempest

openstack::tempest::configure_images    : true
openstack::tempest::image_name          : 'Cirros'
openstack::tempest::image_name_alt      : 'Cirros'
openstack::tempest::username            : 'demo'
openstack::tempest::username_alt        : 'demo2'
openstack::tempest::username_admin      : 'test'
openstack::tempest::configure_network   : true
openstack::tempest::public_network_name : 'public'
openstack::tempest::cinder_available    : false
openstack::tempest::glance_available    : true
openstack::tempest::horizon_available   : true
openstack::tempest::nova_available      : true
openstack::tempest::neutron_available   : true
openstack::tempest::heat_available      : false
openstack::tempest::swift_available     : false

######## Log levels
openstack::verbose: 'True'
openstack::debug: 'True'

######## Ceilometer
openstack::ceilometer::address::management: '0.0.0.0'
openstack::ceilometer::mongo::username: 'mongo'
openstack::ceilometer::mongo::password: 'mongosecretkey123'
openstack::ceilometer::password: 'whi-truz'
openstack::ceilometer::meteringsecret: 'ceilometersecretkey'

######## Heat
openstack::heat::password: 'zap-bang'
openstack::heat::encryption_key: 'heatsecretkey123'
EOF

The network part deserves an explanation:

   openstack::network::api: '172.28.28.4/24'
   openstack::network::external: '192.168.22.0/24'
   openstack::network::management: '172.28.28.4/24'
   openstack::network::data: '192.168.22.0/24'
   openstack::network::external::ippool::start: 192.168.22.100
   openstack::network::external::ippool::end: 192.168.22.200
   openstack::network::external::gateway: 192.168.22.2
   openstack::network::external::dns: 192.168.22.2
   openstack::network::neutron::private: '10.0.0.0/24'
  • openstack::network::api: the network from which an OpenStack client accesses the OpenStack APIs.
  • openstack::network::external: the range of the public external OpenStack network. With MidoNet it lives purely in the overlay, so you can put any value here as long as you stay consistent with the openstack::network::external::* values (see the example after this list).
  • openstack::network::management: the network over which OpenStack services communicate with each other. In this all-in-one setup, as in most trivial deployments, it is the same network as openstack::network::api.
  • openstack::network::data: the underlay network that gives OpenStack compute nodes access to the Internet. OpenStack private networks (and MidoNet's external one) are tunneled over this network. In this example we use the same value as openstack::network::external, because we don't actually use it (single machine, no need for tunnel zones).
  • openstack::network::neutron::private: the tenant network created by default. It can be any value.
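
As noted above, the external range is arbitrary as long as the openstack::network::external::* values stay consistent with it. For example, if 192.168.22.0/24 collides with your LAN, you could switch the whole block to a hypothetical 192.168.55.0/24 with a single substitution over the Hiera data:

    sed -i 's/192\.168\.22\./192.168.55./g' /var/lib/hiera/common.yaml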

Create the site.pp file:

cat <<EOF > /etc/puppet/manifests/site.pp

# Most images do not have the fqdn reported via 'facter'. This block
# works around that by using the hostname as the fqdn value.
if empty(\$::fqdn) {
    \$fqdn = \$::hostname
}
include midonet_openstack::role::allinone

EOF

Run the site.pp file:

  $ puppet apply --verbose /etc/puppet/manifests/site.pp
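
The run takes a while on a small virtual machine. Once it finishes, a quick smoke test is to ask Keystone for its version document (assuming the standard Keystone port 5000 on the API address used throughout this guide):

    $ curl http://172.28.28.4:5000/v2.0/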

Allow Horizon to be accessed from anywhere

By default, Horizon will only allow access from '172.28.28.4'. We have to patch the Apache configuration files to make it accessible from anywhere:

    $ sed -i "/^ALLOWED_HOSTS/c\ALLOWED_HOSTS = ['*']" /etc/openstack-dashboard/local_settings.py
    $ sed -i "/ServerAlias 172.28.28.4/a\ \ ServerAlias *" /etc/apache2/sites-available/15-horizon_vhost.conf
    $ hostnamectl set-hostname $(facter hostname)

And restart Apache:

    $ service apache2 restart
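
Assuming the default Ubuntu dashboard URL path, you can confirm that Horizon answers:

    $ curl -sI http://172.28.28.4/horizon | head -n 1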

Allow VNC

Nova VNC listens on port 6080 of 172.28.28.4. To make VNC reachable from outside, we need to add a PREROUTING rule (replace {host_IP} with the address of your host):

     $ iptables -t nat -I PREROUTING -p tcp --dport 6080 -d {host_IP} -j DNAT --to 172.28.28.4
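
To check that the rule is in place, list the PREROUTING chain; to obtain the actual console URL for a running instance, ask Nova (the instance name below is a placeholder):

    $ iptables -t nat -L PREROUTING -n
    $ nova get-vnc-console <instance-name> novnc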

Optional. Fix Horizon bug

There is a bug in Horizon that doesn't let tenants see the ports attached to a virtual machine when they try to associate a floating IP with it. Create the following patch file to fix this bug.

cat <<EOF > /tmp/horizon_floating_ip.patch
diff --git a/openstack_dashboard/api/neutron.py b/openstack_dashboard/api/neutron.py
index fff61ad..ebe37d1 100644
--- a/openstack_dashboard/api/neutron.py
+++ b/openstack_dashboard/api/neutron.py
@@ -411,10 +411,14 @@ class FloatingIpManager(network_base.FloatingIpManager):
                           r.external_gateway_info.get('network_id')
                           in ext_net_ids)]
         reachable_subnets = set([p.fixed_ips[0]['subnet_id'] for p in ports
-                                 if ((p.device_owner ==
-                                      'network:router_interface')
-                                     and (p.device_id in gw_routers))])
-        return reachable_subnets
+                                if ((p.device_owner in
+                                     ROUTER_INTERFACE_OWNERS)
+                                    and (p.device_id in gw_routers))])
+        # we have to include any shared subnets as well because we may not
+        # have permission to see the router interface to infer connectivity
+        shared = set([s.id for n in network_list(self.request, shared=True)
+                      for s in n.subnets])
+        return reachable_subnets | shared

      def list_targets(self):
          tenant_id = self.request.user.tenant_id
--
2.3.7
diff --git a/openstack_dashboard/api/neutron.py b/openstack_dashboard/api/neutron.py
index ebe37d1..3a4edd8 100644
--- a/openstack_dashboard/api/neutron.py
+++ b/openstack_dashboard/api/neutron.py
@@ -46,6 +46,11 @@ IP_VERSION_DICT = {4: 'IPv4', 6: 'IPv6'}
 OFF_STATE = 'OFF'
 ON_STATE = 'ON'

+ROUTER_INTERFACE_OWNERS = (
+    'network:router_interface',
+    'network:router_interface_distributed'
+)
+
 
 class NeutronAPIDictWrapper(base.APIDictWrapper):
 
--
2.3.7
EOF

Apply the patch and restart Horizon:

    $ cd /usr/share/openstack-dashboard
    $ patch -p1 < /tmp/horizon_floating_ip.patch
    $ cd -

    $ service apache2 restart

Optional. Define the upstart script

If you restart the virtual machine, the macvlan device osservices, to which all the OpenStack services are attached, will no longer exist (we created it manually). Define this upstart script so that the device is recreated every time the machine boots:

cat <<EOF > /etc/init/adapt.conf

# description "A script to make the system have a device with the fixed snapshot ip and accessible horizon"
# Author "Antoni Segura Puimedon - toni@midokura.com"

start on (started networking)
stop on shutdown

script
    ip link add osservices link eth0 type macvlan
    ip addr add 172.28.28.4/24 dev osservices

    hostnamectl set-hostname $(facter hostname)
end script
EOF
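
You can exercise the job once without rebooting; on Ubuntu 14.04 upstart jobs are driven with initctl (delete the device first if it still exists from the manual setup):

    $ ip link del osservices
    $ initctl start adapt
    $ ip addr show dev osservices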

Access through Horizon

These are the users/passwords created by default (see the command-line check below):

  • midoadmin/midoadmin: admin user of the midokura tenant
  • midouser/midouser: regular (non-admin) user of the midokura tenant
  • admin/testmido: admin user of the admin tenant
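
To verify any of these credentials from the command line, you can request a token with the Juno-era keystone client (endpoint as configured throughout this guide):

    $ keystone --os-auth-url http://172.28.28.4:5000/v2.0 \
               --os-tenant-name midokura --os-username midoadmin \
               --os-password midoadmin token-get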

The source way

NOT READY!!!

This way you'll be able to run the latest version of the Puppet modules from GitHub instead of Puppet Forge.

Clone the environment

First clone midonet_openstack and move to the examples/allinone directory:

    $ git clone git://github.com/midonet/puppet-midonet_openstack
    $ cd puppet-midonet_openstack/examples/allinone

There is a Vagrantfile which defines the deployment.

Just run:

    $ vagrant up

Choose the bridged interface and wait until the deployment finishes. The default user/password is midogod/midogod.

Point your browser at the IP address of the bridged interface to access Horizon.
