
Event bus for configuration changes #45

Open
patch0 opened this issue Jun 13, 2017 · 5 comments

patch0 commented Jun 13, 2017

A common problem in developing Symbiosis functionality is the difficulty of detecting configuration changes. Symbiosis is currently configured using SFTP, but we have no hooks into that system, so change detection is always going to be somewhat hobbled.

It would be good to have some sort of event bus to detect configuration changes and act on them as they happen. This removes the need for polling, which is wasteful and, in some cases, not very timely. A more event-based approach would make configuration changes more or less instant, and maybe more reliable too.

One way of achieving this would be to have hooks into the SFTP subsystem. OpenSSH doesn't seem to have them, but other implementations do, e.g. https://www.npmjs.com/package/ssh

While I'm not advocating adding Node to the stack, it would be good to have this functionality somehow, so it's open for debate.

Originally reported on Bytemark's Gitlab by @dedwards on 2016-08-10T11:39:11.969Z

patch0 commented Jun 13, 2017

modd (the golang daemon @skemp mentioned) uses inotify internally (well, an abstraction over inotify and its equivalents on other OSes), so if we're discounting e.g. incron because inotify isn't good enough, then we should discount modd too.
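For reference, modd is driven by a config file of watch patterns plus commands to run on change; a minimal sketch of what ours might look like (the paths and the enqueue command are hypothetical, assuming modd's documented block syntax):

  # @mods expands to the list of changed files
  /srv/**/config/** {
      prep: symbiosis-enqueue @mods
  }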

Originally posted by @telyn on 2017-03-03T10:00:14.862Z

patch0 commented Jun 13, 2017

The only (obvious!) comment I have here is that this might lead to a proliferation of workers which are idle most of the time.

In terms of lightweight but reliable queues, I've used both beanstalkd and redis for custodian. Both have decent Ruby clients, and Ruby is what I assume we'll be using.

If we want something more queue-centric, mosquitto is MQTT-compatible.

Architecturally the big issue is deciding how/when to inject the messages. If you have a queue running, receiving and transmitting messages, that's fine: we can easily write a bunch of agents that respond to a JSON message. For example, we might assume each message has a "type" and a "path", so we could transmit:

  { "type": "ssl-config",
    "path": "/srv/example.com/config/ssl.key" }
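
A minimal sketch of such an agent, assuming the beanstalkd option with the beaneater Ruby gem (the tube name and the handler command are illustrative, not decided):

  require 'json'
  require 'beaneater'    # Ruby client for beanstalkd

  beanstalk = Beaneater.new('localhost:11300')
  beanstalk.tubes.watch!('symbiosis-config')    # illustrative tube name

  loop do
    job = beanstalk.tubes.reserve       # blocks until a message arrives
    msg = JSON.parse(job.body)
    case msg['type']
    when 'ssl-config'
      system('symbiosis-ssl-reload', msg['path'])   # hypothetical handler
    end
    job.delete                          # acknowledge and remove the job
  end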

That's pretty simple, obviously. What is harder is working out what injects the message, because the obvious approach is a daemon which issues watches on directories (sketched after the list below):

  • Watch /srv/*/config/ssl.*
    • Transmit ssl-config message.
  • Watch /etc/symbiosis/firewall.d/
    • Transmit firewall-changed message (ip-blacklisted, ip-whitelisted, ip-expired, etc.)
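
For illustration, a daemon of that shape using the rb-inotify gem (the paths and the transmit step are placeholders, not settled choices):

  require 'json'
  require 'rb-inotify'

  notifier = INotify::Notifier.new

  # Watch each domain's config directory for changes to ssl.* files.
  Dir.glob('/srv/*/config').each do |dir|
    notifier.watch(dir, :create, :modify, :delete) do |event|
      next unless event.name.start_with?('ssl.')
      # Stand-in for pushing onto whichever queue we settle on.
      puts JSON.generate(type: 'ssl-config', path: event.absolute_name)
    end
  end

  notifier.run    # blocks, dispatching events as they arrive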

But as Patrick just said, using watches is expensive. So we end up back where we started! Having a queue is neat, and could allow interesting things to happen, but it doesn't solve the problem of responding to changes in a timely fashion, because we still have to get those updates. (I.e. if we could efficiently react to filesystem events, we could exec scripts directly, without the indirection the queue would provide.)

There are some solutions which use polling, or this golang daemon, which allow arbitrary commands to be run on various path-change events. They might be worth looking at, but at the least you'd want to think about which paths we're monitoring:

  • /srv/*/config/*
  • /srv/*/public/mailboxes/*/password

etc.

Originally posted by @skx on 2017-03-03T08:02:14.900Z

patch0 commented Jun 13, 2017

(Ah, I thought that was what already happened!)

Right, yeah, inotify sounds like it's not really a future-proof solution.

An architecture plan would be extremely sensible, I think: Symbiosis is currently a complex bundle of moving parts tied together using the filesystem as a database. We probably want to figure out a few architecture plans ('current', 'future utopia', and some steps in between).

I'm having some design thoughts that are a bit beyond the scope of this issue now...

Originally posted by @telyn on 2017-03-02T17:01:51.697Z

patch0 commented Jun 13, 2017

The goal here is to have a more responsive system. Yes, we could trigger the httpd daemon to reconfigure itself every minute, but that sounds :sick:.

Using inotify will not scale on boxes with many domains; I think the default maximum is 8192 watches, and each one takes up more than zero bytes of memory. So maybe hooking into SFTP would be awesome, but I think a first goal here would be to have commands poke messages into a queue when updates happen, the prime example being SSL certificates. But we could use this technique for other things too, e.g. firewall rules, erm.. other bits.
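
As a sketch of that first goal, the command that writes out a new certificate could announce the change itself (again assuming beanstalkd with the beaneater gem; all names are illustrative):

  require 'json'
  require 'beaneater'

  # Call once the new certificate is in place.
  def announce_ssl_change(path)
    beanstalk = Beaneater.new('localhost:11300')
    beanstalk.tubes['symbiosis-config'].put(
      JSON.generate(type: 'ssl-config', path: path)
    )
    beanstalk.close
  end

  announce_ssl_change('/srv/example.com/config/ssl.key')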

Maybe we need to come up with an architecture plan, which looks at the whole system and how it glues together.

Originally posted by @patch0 on 2017-03-02T16:40:42.952Z

patch0 commented Jun 13, 2017

I've been talking with @patch0 about getting an event bus/message queue into Symbiosis.

The options basically seem to boil down to dbus vs POSIX message queues. Both have Ruby implementations/wrappers: dbus, POSIX message queue.

What you're suggesting with the npmjs package (which is based on the C library libssh; we could no doubt find bindings or equivalent implementations of the SSH protocol in other languages) is effectively writing our own SSH server just for SFTP, purely so we know when files get modified over SFTP. I don't think this is a particularly sensible route to go down, but I could just be being fearful for no great reason. The other thing to note is that it would still be possible for users to log in over SSH and modify the config files themselves; if we transitioned wholesale to this SFTP model, those changes wouldn't get picked up until you manually ran a script to check, or until you tweaked something over SFTP.

We could run something like incrond that supports recursing into folders (or incron with an extra job to update incron's tables when a domain is added/removed), have it detect when files/folders are modified under /srv, then pass the details to a script we'd write which would determine whether it's something that needs to go on the message queue (basically: does the filename match /srv/*/config/*?) and chuck it on the queue. That might start performing poorly when many files in /srv are being modified at once (Magento with its PHP sessions?), and Patrick also said that incron is racy? I can see an issue where a user deletes a whole domain, but during that incron fires to say 'this piece of config has been removed, this other piece of config has been removed', and Symbiosis spends a load of extra time re-doing the config for a domain it's ultimately removing.
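
A sketch of the incron side (standard incrontab syntax; the enqueue script is the filter we'd write, and its name is made up):

  # /etc/incron.d/symbiosis -- $@ is the watched path, $# the file name.
  # Note: this watch is not recursive, hence the table-updating job above.
  /srv IN_CREATE,IN_MODIFY,IN_DELETE /usr/local/bin/symbiosis-enqueue $@/$#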

Assuming we have a POSIX message queue, we could use systemd to listen to the queue and dispatch jobs to actually do stuff. There's a ListenMessageQueue directive for systemd sockets, which allows us to start a unit given some input on a message queue.
Alternatively, our incron script could talk dbus to systemd itself and start a unit using the StartUnit call.
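
For instance, a socket/service pair along these lines (unit, queue, and handler names invented for illustration):

  # symbiosis-events.socket
  [Socket]
  ListenMessageQueue=/symbiosis-events

  [Install]
  WantedBy=sockets.target

  # symbiosis-events.service -- started when a message arrives; the queue
  # is handed to the process as a file descriptor (sd_listen_fds).
  [Service]
  ExecStart=/usr/sbin/symbiosis-event-handler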

So these are sort of some options in that regard, but I'm not entirely sure what the need is: what's the actual problem with the way Symbiosis currently works? I know there's a bit of a delay, as the cron jobs that recheck the config only come round every minute or so. Are there other problems?

Originally posted by @telyn on 2017-03-02T16:29:17.292Z

patch0 modified the milestone: buster+ Jul 20, 2017