
With this guide, you'll be able to deploy OpenStack Juno with the latest OSS version of MidoNet. The tutorial is designed to run on virtual machines to test the installation, but it should be easy to adapt to other environments. Please note that it assumes some knowledge of Puppet Master/Agent configuration.

Clone the environment

First clone arrakis and move to the examples/multinode directory:

    $ git clone git://github.com/midonet/arrakis
    $ cd arrakis/examples/multinode

There is a Vagrantfile which defines all the virtual machines to run. To illustrate what it defines, I've made this schema:
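
If the schema image does not render for you, you can at least list the machines the Vagrantfile defines with vagrant status. This is only a sketch: the names below are the ones used throughout this guide, and the exact output format depends on your Vagrant version.

    $ vagrant status
    puppetmaster              not created (virtualbox)
    nsdb1                     not created (virtualbox)
    nsdb2                     not created (virtualbox)
    nsdb3                     not created (virtualbox)
    controller                not created (virtualbox)
    compute1                  not created (virtualbox)
    compute2                  not created (virtualbox)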

Let's follow the steps to deploy this environment.

Puppet Master

First, we need to create and run the Puppet Master machine.

    $ vagrant up puppetmaster

Log in and run the puppet master in the foreground (not strictly needed, but useful to keep track of what's going on):

    $ vagrant ssh puppetmaster
    $ sudo puppet master --no-daemonize --verbose

I recommend opening another terminal now and leaving the puppet master printing its logs there.
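
The master autosigns agent certificate requests (you will see Autosigning lines in its log when the first agent connects). If that does not happen in your environment, you can check the setting and sign pending requests by hand; a sketch using standard Puppet 3 commands on the puppetmaster:

    $ sudo puppet config print autosign
    $ sudo puppet cert list
    $ sudo puppet cert sign nsdb1.midokura.com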

Zookeeper/Cassandra nodes

The nsdb* nodes in the schema are the nodes that run ZooKeeper and Cassandra. To configure the first one, run:

    $ vagrant up nsdb1
    $ vagrant ssh nsdb1
    $ sudo puppet agent --onetime

(If you don't set --onetime, the puppet agent will keep pulling new configurations from the master every 3 minutes. Up to you.)

and log out. You should be able to see something like this in the puppet master logs:

  Notice: nsdb1.midokura.com has a waiting certificate request
  Info: Autosigning nsdb1.midokura.com
  Notice: Signed certificate request for nsdb1.midokura.com
  Notice: Removing file Puppet::SSL::CertificateRequest nsdb1.midokura.com at '/var/lib/puppet/ssl/ca/requests/nsdb1.midokura.com.pem'
  Error: Could not resolve 172.16.33.4: no name for 172.16.33.4
  Info: Caching node for nsdb1.midokura.com
  Notice: Scope(Class[Midonet::Repository::Ubuntu]): Adding midonet sources for Debian-like distribution
  Notice: Scope(Class[Midonet::Repository::Ubuntu]): Adding midonet sources for Debian-like distribution
  Notice: Compiled catalog for nsdb1.midokura.com in environment production in 0.56 seconds

Ignore the error.

Do the same for nsdb2 and nsdb3.
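
At this point you can optionally verify that ZooKeeper is answering on each nsdb node. This assumes the default client port 2181; a healthy instance replies imok:

    $ vagrant ssh nsdb1
    $ echo ruok | nc 127.0.0.1 2181
    imok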

Cassandra won't start: not because it is badly configured, but because it needs more RAM. You can fix that by increasing the vb.memory of the nsdb* VMs defined in the Vagrantfile. My machine is short on RAM, and we can ignore this error for now because Cassandra is not needed for all use cases.
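
If you do want Cassandra running, a possible workaround is to raise the memory for the nsdb* machines in the Vagrantfile and reload them so the new setting is applied; a sketch (pick a memory value your host can afford):

    $ grep -n "vb.memory" Vagrantfile
    $ vagrant reload nsdb1
    $ vagrant ssh nsdb1
    $ sudo puppet agent --onetime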

Controller

This node will run all the OpenStack services and APIs except the ones that belong to the compute nodes.

As before, create the VM:

    $ vagrant up controller

It will ask which host interface you want to bridge the VM to. This bridged interface will give you access to all the APIs from your host machine. You need to find out which guest interface is the bridged one and what IP it got, and then put that information into the Puppet Master configuration.

Log into the VM:

    $ vagrant ssh controller

The bridged interface is eth2:


    vagrant@controller:~$ ip -4 a | grep eth2
    4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        inet 192.168.2.39/24 brd 192.168.2.255 scope global eth2
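
To double-check the value you will use for <replace-host-network> below, you can also look at the route attached to that interface; this is just a sketch, your addresses will differ:

    vagrant@controller:~$ ip route show dev eth2
    192.168.2.0/24  proto kernel  scope link  src 192.168.2.39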

Knowing this, you have to change the YAML file on the puppetmaster before running the puppet agent. So:

    $ exit
    $ vagrant ssh puppetmaster

There are three configuration variables you have to change in /var/lib/hiera/common.yaml:

    vagrant@puppetmaster:~$ cat /var/lib/hiera/common.yaml | grep replace
    openstack::network::api: '<replace-host-network>'
    openstack::controller::address::api: "<replace-controller-bridged-ip>"
    openstack::storage::address::api: "<replace-controller-bridged-ip>"

You have to edit /var/lib/hiera/common.yaml with the values you got from the bridged IP of the controller (a command-line alternative is sketched right after this list). In my case:

  • <replace-controller-bridged-ip> becomes 192.168.2.39
  • <replace-host-network> becomes 192.168.2.0/24
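
If you prefer to do the substitution from the command line instead of an editor, something along these lines should work (a sketch; replace the values with your own):

    vagrant@puppetmaster:~$ sudo sed -i 's|<replace-controller-bridged-ip>|192.168.2.39|g; s|<replace-host-network>|192.168.2.0/24|g' /var/lib/hiera/common.yaml
    vagrant@puppetmaster:~$ grep 192.168.2 /var/lib/hiera/common.yaml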

Once done, go back to the controller VM and run the puppet agent:

    $ exit
    $ vagrant ssh controller
    $ sudo puppet agent --onetime

This time, --onetime is mandatory: for some reason, the OpenStack manifests are not idempotent, so you can only configure the controller once.

It takes more than half an hour, so in the meantime you can deploy the compute nodes.
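
Once the puppet run on the controller finishes, you can do a rough sanity check of a few of the services it should have set up. The service names below are the ones the Ubuntu 14.04 / Juno packages use, so adjust them if your packaging differs:

    $ sudo service keystone status
    $ sudo service nova-api status
    $ sudo service neutron-server status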

Compute nodes

Don't start the compute nodes until the controller is done. They authenticate through Keystone, so you must be sure that Keystone is up.
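
A quick way to check that is to hit the Keystone public endpoint from your host; this is a hedged sketch that assumes Keystone listens on the standard port 5000 of the bridged IP:

    $ curl http://192.168.2.39:5000/v2.0/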

The procedure is the same as for the nsdb nodes:

    $ vagrant up compute1
    $ vagrant ssh compute1
    $ sudo puppet agent --onetime

Please note that the current MidoNet puppet module does not create the tunnel zones. We are aware this has to be improved. Meanwhile, follow the official docs to do it manually (a rough sketch of the commands follows the link below):

https://docs.midonet.org/docs/latest/quick-start-guide/ubuntu-1404_juno/content/_midonet_host_registration.html
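
For reference, the manual host registration with midonet-cli looks roughly like the sketch below. This assumes midonet-cli is installed and configured to reach the MidoNet API; the tunnel zone name is arbitrary and the host alias and address are placeholders you have to take from the list host output, so treat the linked guide as the authoritative source:

    $ midonet-cli
    midonet> tunnel-zone create name tz type gre
    midonet> list tunnel-zone
    midonet> list host
    midonet> tunnel-zone tzone0 add member host host0 address <host-tunnel-ip>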

And the same for compute2 (although it is quite possible you'll run out of resources here! Do it only if you need it).

That should be enough to see Horizon at your bridged IP (192.168.2.39 in my case):

Log in with one of these users/passwords (a CLI alternative is sketched after the list):

  • midogod:midogod (admin user of tenant midokura)
  • midoguy:midoguy (raw user of tenant midokura)
  • admin:testmido (admin user of tenant admin)
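
If you prefer the command line to Horizon, you can export these credentials and talk to the APIs directly. This is a sketch: it assumes Keystone answers on the standard port 5000 of the bridged IP and that the python clients are installed on the machine you run it from.

    $ export OS_USERNAME=midogod
    $ export OS_PASSWORD=midogod
    $ export OS_TENANT_NAME=midokura
    $ export OS_AUTH_URL=http://192.168.2.39:5000/v2.0
    $ nova list
    $ neutron net-list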