Need the ability to set up and test complex networks #72

Open
trevor-vaughan opened this issue Mar 13, 2019 · 14 comments

Comments

@trevor-vaughan

Currently, we do this using Beaker, with one of our most complex multi-node test scenarios represented at https://github.com/simp/pupmod-simp-rsyslog/tree/master/spec/acceptance/suites.

The main driver of the SIMP project's choice of Beaker over Test Kitchen was the ability to easily do multi-node testing.

If Litmus cannot do this and requires the use of the PDK (which is a lot of work to adopt for existing large frameworks), then we may be unable to migrate.

@tphoney
Contributor

tphoney commented Mar 20, 2019

Thanks @trevor-vaughan for this input; it's something we are actively thinking about.
In the short term we are looking at testing simple deployments, to prove out the concept.
To your point, a complex network deployment needs to be able to handle:

  • many different kinds of provisioning setup, i.e. which machines / provisioner to use for PR testing vs. release testing, and how many / which machines to spin up for local developer testing.
  • which Puppet agent versions go on which machines; this should be able to reuse the list from the provisioning setup above.
  • how roles / classes / tests get attached to particular machines.
  • how existing machines get used in the deployment.

Previously Beaker flattened all of this into a single host.yaml file, which allowed some flexibility but was not perfect.

@trevor-vaughan
Author

@tphoney Back in Beaker-land, we worked around this by adding ERB support to the host.yaml files.

At that point we had effectively infinite composability for the users who needed it, and everyone else had a pretty good, easy-to-read-and-understand format.
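
As a rough illustration only (the file below is a made-up sketch, not one of SIMP's actual nodesets), an ERB-templated host.yaml lets an environment variable decide how many clients get generated at parse time:

HOSTS:
  server:
    roles:
      - server
    platform: el-7-x86_64
    hypervisor: vagrant
    box: centos/7
<% (ENV['NUM_CLIENTS'] || 2).to_i.times do |i| -%>
  client<%= i %>:
    roles:
      - client
    platform: el-7-x86_64
    hypervisor: vagrant
    box: centos/7
<% end -%>
CONFIG:
  log_level: verbose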

@tphoney
Contributor

tphoney commented Mar 26, 2019

Hi @trevor-vaughan, I have taken a first run at provisioning in #78. Please note it does not install Puppet or do anything else complex. This is the first step towards different deployment scenarios.

@trevor-vaughan
Author

@tphoney Left some comments. Provisioning Puppet should definitely be a separate step, since you often want to do different things before adding Puppet to the system and/or start from an image that already has Puppet installed.

@tphoney
Contributor

tphoney commented Apr 2, 2019

Additional features:

  • From the Foreman perspective we have various tools where it's important to have an FQDN, so we always set a hostname such as host.example.com. It'd be nice to have the option to say server.example.com is centos-7-x86_64, client-c7.example.com is centos-7-x86_64, and client-d9.example.com is debian-9-x86_64 (a rough sketch of one possible layout follows this list).

  • I would like the ability to include files instead of having to put all of the complexity into a single file. Having an includedir directive would be great, covering for example:
    A single host
    A group of hosts that can talk to one another
    Several groups of hosts that get destroyed between tests
    Basically, when you need to test combinatorics, you simply can't spin up that many hosts at the same time on a given system, so you need to be able to loop through groups of hosts in different situations. The SIMP project enhanced Beaker to support this as 'scenarios' (not to be confused with Beaker scenarios, which were added later).

  • What I expect from a tool is:
    A clean machine with a certain base image
    I can configure a hostname
    I can configure the "hardware" (more important with VMs) - memory, CPUs
    I can get some console to it (SSH or direct with Docker)
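
To make the first request above concrete, here is a purely hypothetical sketch of what such a layout could look like (none of these keys are an existing Litmus or provision format):

HOSTS:
  server.example.com:
    platform: centos-7-x86_64
    memory: 4096
    cpus: 2
    roles:
      - server
  client-c7.example.com:
    platform: centos-7-x86_64
    memory: 1024
    cpus: 1
    roles:
      - client
  client-d9.example.com:
    platform: debian-9-x86_64
    memory: 1024
    cpus: 1
    roles:
      - client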

@Dan33l

Dan33l commented Apr 2, 2019

  I can configure the "hardware" (more important with VMs) - memory, CPUs

If we understand configuring the hardware as 'provisioning several network devices', I am fine with closing #60.

@tphoney
Contributor

tphoney commented Jul 24, 2019

Hi everyone, I have a PoC you may be interested in. It lives in puppetlabs/provision#58 and https://github.com/puppetlabs/provision/wiki#experimental-features

It is an alternative to using Litmus directly (which is focused around a specific test scenario). Using Bolt plans and tasks, it can spin up an arbitrary number of systems, assign them roles / names, and run tasks against those machines or run code on a Puppet server. It uses plans for running checks and does not use Serverspec at all, so you can write tests in whatever language you want. It is much more flexible.

Any feedback is welcome. Adding in @florindragos and @michaeltlombardi and @ThoughtCrhyme for visibility.

@trevor-vaughan
Author

Took a look at the PoC, and it makes sense from the point of view of flexibility. How would you get the usual rspec-style feedback in this sort of scenario? (I'm not a fan of how Serverspec works, but I do like having my tests cleanly defined in a test language.)

Honestly, this is sort of what I figured would be auto-generated as a replacement for the Beaker guts, based on the nodesets that are provided. I'm not sure it will be faster, though, and it will be difficult to extend via plugins if I'm reading it correctly.

@Dan33l

Dan33l commented Jul 24, 2019

Hi.

@tphoney I had a look at the PoC, but I didn't find any way to provision a SUT with several network interfaces, as we discussed in #60. Am I missing something?

Perhaps we should change:

I can configure the "hardware" (more important with VMs) - memory, CPUs

to

I can configure the "hardware" (more important with VMs) - memory, CPUs and network interfaces

@tphoney
Contributor

tphoney commented Jul 26, 2019

@trevor-vaughan Litmus is designed to be wholly parallel and is suited to testing a module on individual machines that do not affect each other.
This new approach allows nodes to interact with each other, and the language of rspec is not well suited to that, e.g. "do a thing" vs. "do a thing on this node". However, if you look at spec_helper_acceptance_local.rb, which is used to set up rspec, the only thing required to use rspec is setting the TARGET_HOST environment variable: https://github.com/puppetlabs/puppet_litmus/wiki/Converting-a-module-to-use-Litmus#specspec_helper_acceptancerb
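
As a sketch (this is not the actual Litmus helper code, and the hostname patterns are hypothetical), spec_helper_acceptance_local.rb can branch per-node setup on that same variable, and the suite is then run once per provisioned node:

# spec/spec_helper_acceptance_local.rb - illustrative sketch only
RSpec.configure do |c|
  c.before(:suite) do
    case ENV['TARGET_HOST']
    when /server/
      # server-only setup, e.g. seed fixtures or apply a server profile
    when /client/
      # client-only setup
    end
  end
end

# then, per node (hostname is illustrative):
#   TARGET_HOST=server.example.com bundle exec rspec spec/acceptance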

In terms of auto-generating, I would say that we need templates, and we can feed variables into plans. From speaking to different users, their needs are drastically different, which means breaking up one magic, inflexible process into smaller, more composable parts. E.g. rather than one F1 car with everything fixed (Beaker), this new PoC allows for lots of different cars, but we need more information on what you want the car to do.

@tphoney
Contributor

tphoney commented Jul 26, 2019

@Dan33l By using plans and tasks, this should allow you to script the network changes you desire and target whatever SUT you want. Whatever networking magic you already script, Bolt will allow you to run it against whatever targets you want.

@trevor-vaughan
Author

@tphoney I agree with the idea of the new approach, but it seems to be veering toward just writing complex plans for disparate environments. That absolutely works, but I'm not sure why it is better than the approach that Beaker takes.

I do believe that you are incorrect about the language of rspec not being suited to interaction between nodes. We've been doing it for a few years now with Beaker, with great success and ease of understanding.

For example, the following produces very understandable output and, when something is incorrect, stops with sufficient output.

it 'should configure the server' do
  apply_manifest_on(server, server_manifest, :catch_failures => true)
end

it 'should configure the client' do
  apply_manifest_on(client, client_manifest, :catch_failures => true)
end

it 'should be able to successfully communicate between the client and server' do
  on(client, "curl http://#{server}/whatever")
end

I'm still falling back on what's preventing me from looking at Litmus more deeply, which is primarily that it's much more difficult to get the nodes set up than with what is currently in use.

@tphoney
Contributor

tphoney commented Oct 3, 2019

Can I suggest you have a look at https://github.com/puppetlabs/provision/wiki/An-example-run-through-of-complex-provisioning

This will allow you to provision machines and assign them roles. The roles are stored in the inventory.yaml file, so they can be used to differentiate between SUTs. Does that help?
See https://github.com/puppetlabs/provision/blob/master/plans/tests_against_agents.pp for a plan that runs rspec against certain roles.
Or
you can read the inventory hash in your rspec and filter on whatever you need; https://github.com/puppetlabs/puppet_litmus/blob/master/lib/puppet_litmus/inventory_manipulation.rb#L119 allows you to retrieve the vars/role associated with a SUT.
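
For illustration only (this is a hand-rolled sketch, not the puppet_litmus API, and the key names assume one common Bolt inventory layout that your provisioner's output may not match exactly), reading the inventory and filtering by role can be as simple as:

# illustrative sketch: collect the names of inventory targets carrying a role
require 'yaml'

def hosts_with_role(role, inventory_path = 'inventory.yaml')
  inventory = YAML.safe_load(File.read(inventory_path))
  inventory.fetch('groups', [])
           .flat_map { |group| group['targets'] || group['nodes'] || [] }
           .select { |target| target.is_a?(Hash) && target.dig('vars', 'role').to_s == role.to_s }
           .map { |target| target['uri'] || target['name'] }
end

# e.g. run the suite against every node carrying the 'server' role:
#   hosts_with_role('server').each { |host| ENV['TARGET_HOST'] = host; ... }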

@DavidS
Contributor

DavidS commented May 22, 2020

As one step towards this goal, watch this demo of the Bolt/Terraform integration as a driver for provisioning infrastructure for testing: https://www.youtube.com/watch?v=8BMo9DcZ4-Q
