
Full k6 control from API #11

Closed
ragnarlonn opened this issue Dec 22, 2016 · 4 comments


ragnarlonn commented Dec 22, 2016

After playing around with remote-controlling k6 from another process, I've discovered some things that are missing or that would ease the integration considerably.

  • I'd like to be able to start k6 in some kind of "slave mode" where it only spins up the API. From there on, I want to interact with k6 via the API only. I plan to start k6 with stdin, stdout and stderr bound to /dev/null, detached from any shell.
  • I'd like a "self destruct" API endpoint (v1/quit?) to kill speedboat when it's time to stop the test and tear down output plugins and the like.
  • To even get the test going, I need a v1/run endpoint that would work as we discussed earlier. This would be the way to send the JavaScript scenario + config in a POST request to a process started as described above (a rough sketch of such a request follows this list).
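
For illustration only, here is a minimal Go sketch of what such a v1/run request could look like from an orchestrating process. The endpoint and payload shape are this thread's proposal, not an existing k6 API, and the port here assumes the API server's usual 6565.

```go
// Hypothetical client for the proposed v1/run endpoint; nothing here is
// part of k6's actual API. The payload shape is an assumption.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// A scenario script plus config, bundled into one POST body.
	payload, err := json.Marshal(map[string]interface{}{
		"script": `export default function() { /* scenario code */ }`,
		"config": map[string]interface{}{"vus": 10, "duration": "30s"},
	})
	if err != nil {
		panic(err)
	}
	// Assumed: a k6 started in "slave mode", API listening on localhost:6565.
	resp, err := http.Post("http://127.0.0.1:6565/v1/run", "application/json", bytes.NewReader(payload))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```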

These are thoughts based on last week's testing/discovery coding and are not finalized suggestions. Hoping for a good discussion.

Imported from https://trello.com/c/G5sdTlC4/86-full-k6-control-from-api


ragnarlonn commented Dec 22, 2016

Martin Fijal
Pausing this for now since it's out of scope.
Dec 13 at 10:49 AM

Daniel Brandt
@uppfinnarn Oh... I like that idea. It really is a separate concern from the engine and would make for cleaner code too, I think.
Dec 8 at 8:51 PM

Emily Ekberg
Hm, possible idea: a k6 puppet command that listens on a different port and exposes an API from which you can spawn k6 run subprocesses; full script payloads could be accepted there by saving them to disk in a temporary directory.

The big thing is that this would avoid introducing more complexity in the already quite complex engine/runner system, and allow instances to be reused in a clean fashion.
Dec 8 at 8:12 PM
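
As a rough sketch of the mechanics described above, assuming a hypothetical spawnInstance helper inside such a puppet process (none of these names exist in the k6 codebase): the received script payload is written to a temporary directory and a k6 run subprocess is started against it.

```go
// Illustrative only: persist a received script payload to a temp dir and
// fire off a `k6 run` subprocess, as the puppet idea above suggests.
// Assumes a k6 binary on PATH.
package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func spawnInstance(script []byte) (*exec.Cmd, error) {
	dir, err := os.MkdirTemp("", "k6-puppet-")
	if err != nil {
		return nil, err
	}
	path := filepath.Join(dir, "script.js")
	if err := os.WriteFile(path, script, 0o600); err != nil {
		return nil, err
	}
	cmd := exec.Command("k6", "run", path)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	// Start (not Run) so the caller can track the instance asynchronously.
	return cmd, cmd.Start()
}

func main() {
	if _, err := spawnInstance([]byte(`export default function() {}`)); err != nil {
		panic(err)
	}
}
```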

Daniel Brandt
@uppfinnarn A nil-runner seems like a good solution; it was something like that I was looking for. What I'm trying to solve here is a fire-and-forget way of starting an "empty" k6 instance, and I suppose it overlaps somewhat with the wish for a --daemonize mode I've seen asked about in Slack. In particular, what I want to avoid is having to copy files to an instance just to start k6. This way, I can orchestrate the exact starting time of the test and possibly reuse the instance more easily. I'd love to be able to spin up an instance from an AMI that starts k6 as it boots up.

Regarding the self-destruct endpoint, this might not be strictly necessary. I could just terminate the instance when the test is scaled down...
Dec 8 at 11:14 AM

Emily Ekberg
I'm trying to think of a good way to do this… you need a Runner before you can do anything involving the State, but you can't safely unload a Runner (guarding against that, given the possibility of malicious scripts, would mean a slight performance decrease). I guess the best way would be to do something like:

```
k6 run NULL
```
which would construct an Engine with a nil Runner; then you could call /v1/init with a { type: "js", filename: "script.js" } payload to initialize it…

Somehow, I figured we'd just fire up k6 instances on nodes over SSH or through another program that spawns k6 in a subprocess, rather than have it worry about its own life cycle.
Dec 7 at 6:42 PM
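
For concreteness, a minimal Go client for the /v1/init call sketched above could look like the following; note that the endpoint is only a proposal in this thread, not an existing part of k6's REST API, and the port is an assumption.

```go
// Hypothetical: initialize a nil-Runner k6 instance via the proposed /v1/init.
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Payload shape taken verbatim from the comment above.
	body := []byte(`{"type": "js", "filename": "script.js"}`)
	resp, err := http.Post("http://127.0.0.1:6565/v1/init", "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("init failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```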

Daniel Brandt
@robinegustafsson @uppfinnarn I use the source code as a reference for now, as this is mainly for internal use; documentation (beyond code comments, at least) is not something I feel we have time for at the moment. Others may have other ideas, of course. :)
Dec 6 at 10:31 AM

Robin Gustafsson
@uppfinnarn You also had some ideas for trying to unify this "control API" with the "cloud API" (https://loadimpact.quip.com/08WVAPpFVbfG) to some extent, right?

Is there a description of the current control API anywhere, or does one have to dig into the code (https://github.com/loadimpact/k6/blob/master/api/api.go)?
Dec 6 at 10:27 AM

David Rosen
OK, good plan. Let's aim to clarify and align on the missing pieces of functionality ASAP, then move to fill the gaps.
Dec 6 at 10:21 AM

Daniel Brandt
@davidrosen23 It's implemented to some extent. I'll be creating a code repository shortly to demonstrate what's there; what I'm suggesting here are expansions to that. The v1/run endpoint was previously discussed informally between at least me and Emily.
Dec 6 at 10:12 AM

David Rosen
@uppfinnarn @robinegustafsson - If I recall correctly, we had the remote-control API functionality implemented in k6 a while back. Is it still working?

If not, what's the level of effort to get this implemented? What upcoming planned work would have to be pushed out if we wanted to make this the next task in line for k6?
Dec 6 at 10:09 AM


liclac commented Jan 13, 2017

Okay, so, picking this back up: I don't know where we discussed it, but the most recent proposal is to use a k6 puppet command. It'll listen on a different port and expose a CRUD API for instances (a rough sketch follows the list):

  • GET /v1/instances
  • POST /v1/instances
  • GET /v1/instances/:id
  • DELETE /v1/instances/:id
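
A minimal Go sketch of what that CRUD surface could look like, assuming an in-memory instance table and an arbitrary port; neither the puppet command nor these handlers exist in k6, and real instance-id assignment is elided.

```go
// Illustrative-only handlers mirroring the proposed k6 puppet CRUD routes.
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"strings"
	"sync"
)

type puppet struct {
	mu        sync.Mutex
	instances map[string]string // id -> status; a stand-in for real state
}

func (p *puppet) handle(w http.ResponseWriter, r *http.Request) {
	p.mu.Lock()
	defer p.mu.Unlock()
	id := strings.TrimPrefix(strings.TrimPrefix(r.URL.Path, "/v1/instances"), "/")
	switch {
	case r.Method == http.MethodGet && id == "": // GET /v1/instances
		json.NewEncoder(w).Encode(p.instances)
	case r.Method == http.MethodPost && id == "": // POST /v1/instances
		p.instances["i-1"] = "running" // real id assignment elided
		w.WriteHeader(http.StatusCreated)
	case r.Method == http.MethodGet: // GET /v1/instances/:id
		json.NewEncoder(w).Encode(p.instances[id])
	case r.Method == http.MethodDelete: // DELETE /v1/instances/:id
		delete(p.instances, id)
		w.WriteHeader(http.StatusNoContent)
	default:
		http.NotFound(w, r)
	}
}

func main() {
	p := &puppet{instances: map[string]string{}}
	http.HandleFunc("/v1/instances", p.handle)
	http.HandleFunc("/v1/instances/", p.handle)
	log.Fatal(http.ListenAndServe("127.0.0.1:6566", nil))
}
```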


ppcano commented Nov 28, 2017

@ragnarlonn Could we close this issue?


liclac commented Nov 29, 2017

This should just be a part of #140

liclac closed this as completed Nov 29, 2017