vagrant-puppet-openstack

This Vagrantfile will bring up a 5-node Openstack Icehouse environment (Puppet Master, Controller, Compute, Network and Storage nodes) on VirtualBox. It uses Puppet Enterprise and the puppetlabs-openstack modules on CentOS 6.5.

I explicitly install all of the versions of the Openstack dependencies that I have verified to work. Verification has really just been limited to basic smoke testing: everything that is necessary to create an instance, ssh to it and verify it can reach the outside world.

The complete provisioning after 'vagrant up' takes a long time (1hr+). At the end of it, all the required puppet modules should be installed, IP addresses and networks configured, nodes and classes added, agents run and converged. That all sounds great, but you still won't have a working system yet. For that you'll need to read the notes below.

Host machine configuration

This project was built with the following host machine configuration:

  • OSX Mavericks (10.9.4)
  • Vagrant (1.6.2)
  • Vagrant oscar plugin (0.4.1) (contains vagrant-pe_build and vagrant-hosts)
  • Virtualbox (4.3.12)
  • Enable IP forwarding and set up NAT so instances get internet access. See below for how to do this.
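If you want a quick sanity check of your host tooling, the sketch below compares installed versions against the ones listed above. The `version_ge` helper is plain dotted-version comparison; the `vagrant` and `VBoxManage` calls are guarded so the script just skips a tool that isn't installed.

```shell
#!/bin/sh
# Preflight sketch: warn if host tools are older than the versions
# this project was built with. Versions come from the list above.
version_ge() {
    # true if $1 >= $2, comparing up to three numeric components
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -t. -k1,1n -k2,2n -k3,3n | tail -n1)" = "$1" ]
}

if command -v vagrant >/dev/null 2>&1; then
    v=$(vagrant --version | awk '{print $2}')
    version_ge "$v" 1.6.2 && echo "Vagrant $v OK" || echo "Vagrant $v is older than 1.6.2"
fi

if command -v VBoxManage >/dev/null 2>&1; then
    v=$(VBoxManage --version | cut -d r -f1)   # e.g. "4.3.12r93733" -> "4.3.12"
    version_ge "$v" 4.3.12 && echo "Virtualbox $v OK" || echo "Virtualbox $v is older than 4.3.12"
fi
```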

Login information

The puppet master is available at https://192.168.11.3. These login details are created by vagrant-pe_build.

user: admin@puppetlabs.com

pass: puppetlabs

The Openstack Horizon dashboard is available at http://192.168.11.4. These login details are configured in the openstack.yaml hiera file.

user: test

pass: abc123

Additional host setup for NAT and IP forwarding

Because the Openstack external network is actually set up as a host-only network in VirtualBox, the only way to give created instances internet access is to set up NAT for them. At the moment, I don't have a better way to do this than setting it up manually on the host machine :( It might be possible to configure the external network as a bridged network, but then you're probably going to run into issues trying to use static IPs on a network with DHCP enabled (unless you're on a network that doesn't have DHCP, but when is that ever the case?).

If you have a better way to do this please let me know, or change it and send me a PR!

So here is how to set up IP forwarding and NAT on your (OSX) host.

Enable IP forwarding:

```
sudo sysctl -w net.inet.ip.forwarding=1
```

In the file /etc/pf.conf on your host, add this line after the line `nat-anchor "com.apple/*"`:

```
nat on { en0 } from 192.168.22.101/32 to any -> { (en0) }
```

Flush and reload the new rules:

```
sudo pfctl -F all -f /etc/pf.conf
```

Enable the packet filter:

```
sudo pfctl -e
```

There are a couple of caveats with this method:

  • If you are connecting to the net on an interface other than en0, then make sure you change the nat rule to the correct interface
  • If you have more than one instance you will need to add another rule for it too, or work out a CIDR that covers only your instances. You can't just forward everything on 192.168.22.0/24 because some traffic on that network is destined for node-to-node communication. I never bothered to do this because I recognise there are limits to how much I can torture my poor little laptop
  • If you reboot your host, then you'll need to run the commands above again
  • If you're on linux, you can do the same with iptables. If you're on Windows... I'm sorry
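To make the first two caveats easier to handle, here is a small sketch that emits one pf NAT rule per instance IP for whichever interface you are actually online with. The interface name and IPs are examples only; substitute your own, and append the output to /etc/pf.conf before reloading as shown above.

```shell
#!/bin/sh
# Sketch: generate one pf NAT rule per instance IP. Interface name
# and instance IPs below are examples -- substitute your own.
nat_rule() {
    iface=$1; ip=$2
    printf 'nat on { %s } from %s/32 to any -> { (%s) }\n' "$iface" "$ip" "$iface"
}

# One rule per instance; append these to /etc/pf.conf after the
# nat-anchor "com.apple/*" line, then flush/reload and enable pf.
nat_rule en0 192.168.22.101
nat_rule en0 192.168.22.102
```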

Idiosyncrasies of the environment

Either due to the behaviour of Openstack or how the puppet modules configure the Openstack services, there are a bunch of issues that you will run into with this system that you need to know how to detect and fix to make it work properly. I will try to document here all the main things that I've found (at least with this version and method of building the system). If you find more then let me know, or update this readme and send me a PR.

A bit of a disclaimer: Some of the things I mention below may not be totally correct. I'm writing this with my current level of understanding of Openstack (and puppet) and I still have a lot to learn.

  1. Always check that Openstack thinks all your services, compute agents and network agents are up and running on http://192.168.11.4/dashboard/admin/info. If any of them are in State down, log into the host and restart the service.
  2. If you log into the Network node and run ip netns, it will likely return nothing on a freshly built system. If it is working you should see two items, qdhcp-UUID and qrouter-UUID. From what I understand, namespaces are initialised by the kernel, so the only fix I've found for this issue is a reboot of the network node
  3. If you reboot the network node, you may also end up with some network abnormalities on the compute node. I have typically observed this as DHCP discovery messages sent from a created instance being dropped on the compute node. Continue reading for debugging and fixing.
  4. If you reboot the compute or network nodes, or start up the system from being shutdown, you will likely end up with some missing network configuration in iptables that should be setup by the agents such as openvswitch, l3 and dhcp. You can tell this by running iptables -S (and knowing what to look for).
  5. On the compute node, if you are missing the chain neutron-openvswi-sg-chain, rules for the tap interface (and a load of others), then you need to service neutron-openvswitch-agent restart. More than likely you will also need to service openstack-nova-compute restart as well.
  6. On the network node it is a similar story: if you are missing the neutron-openvswi-sg-chain (plus a bunch of others), you need to service neutron-openvswitch-agent restart.
  7. Check that iptables is also setup correctly for the router on the network node. Run ip netns to list the namespaces and then check ip netns exec qrouter-UUID iptables -S. If you are missing rules for "neutron-l3-agent-BLAH" then you need to service neutron-l3-agent restart. If you are missing rules for "neutron-vpn-agen-BLAH" then you need to service neutron-vpn-agent restart
  8. If you create an instance and it does not appear to get an IP address, there are two most likely causes: 1. DHCP discovery messages are not making their way to the DHCP server, or 2. DHCP offers are not getting back from the DHCP server to the instance. To determine which side the problem is on, run tcpdump -i br-int -vvv -s 1500 '((port 67 or port 68) and (udp[8:1] = 0x1))', which captures all DHCP packets on the br-int integration bridge interface. Run this on both the compute node and the network node while the instance is trying to send the discovery message (to see when the instance is doing this, look for "Sending discover..." in the instance console log).
  9. If you are not seeing DHCP discovery messages on the br-int interface of the compute node then you should restart the openstack-nova-compute and neutron-openvswitch-agent
  10. If you are not seeing DHCP discovery messages on the br-int interface of the network node then you should restart the neutron-openvswitch-agent and neutron-l3-agent
  11. If you are not seeing DHCP offer responses on the br-int interface of the network node then you should restart the neutron-dhcp-agent
  12. If you are still having problems getting an IP address to the instance, but you've checked all the above, then you've likely hit something that I've not seen. Open an issue and I'll see if I can help!
  13. If the Horizon user interface shows an error page (might be a 50x or 40x) saying "Oops something went wrong", check whether rabbitmq-server is running on the control node
  14. After allocating an external IP to the instance you should be able to log into the cirros test instance with ssh cirros@192.168.22.101, however you probably won't have internet access from the instance unless you followed my directions above to setup packet forwarding and NAT to the instance.
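The restart recipes in items 5 and 9-11 can be collected into one helper, run on the relevant node. The service names are the ones used in the list above; the helper itself (and its name) is just a sketch, and you may still need neutron-vpn-agent from item 7 depending on the symptom.

```shell
#!/bin/sh
# Sketch: restart the agent set for a given node role, per the
# troubleshooting items above. Run on the node in question.
restart_agents() {
    case "$1" in
        compute) set -- neutron-openvswitch-agent openstack-nova-compute ;;
        network) set -- neutron-openvswitch-agent neutron-l3-agent neutron-dhcp-agent ;;
        *) echo "usage: restart_agents compute|network" >&2; return 1 ;;
    esac
    for svc in "$@"; do
        service "$svc" restart
    done
}

# e.g. on the compute node:
# restart_agents compute
```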

Additional Resources

HAPPY OPENSTACKING!

License

The MIT License (MIT)

Copyright (c) 2015 Philip Cheong

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
