pdf-bot limited to one machine rendering PDFs #18

Open
danielwestendorf opened this issue Jan 15, 2018 · 0 comments

I'm looking for feedback from @esbenp before I dig into a PR for this.

Goal:

I'd like to adapt pdf-bot into a scalable PDF rendering microservice that can have resources added/removed on demand to handle workload fluctuations.

Problem:

Because of pdf-bot's database-wide queue locking in PostgreSQL, only one machine can render PDFs for a given API endpoint at a time.

Because PostgreSQL is a shared database, it should be possible to scale the workload horizontally across many machines in parallel. To accomplish this, we would need to change the queue locking mechanism to work on a per-job basis and adapt the generation commands (shift:all comes to mind) to support this.
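
As a rough sketch of what per-job locking could look like in PostgreSQL (the table and column names follow the proposal further down and are assumptions, not pdf-bot's current schema), each worker could atomically claim one job using FOR UPDATE SKIP LOCKED, which is available since PostgreSQL 9.5:

```sql
-- Hypothetical sketch: a worker atomically claims a single eligible job,
-- so parallel workers never grab the same row. Names are assumptions.
UPDATE jobs
SET processing_started_at = NOW()
WHERE id = (
  SELECT id
  FROM jobs
  WHERE completed_at IS NULL
    AND (processing_started_at IS NULL
         OR processing_started_at < NOW() - INTERVAL '30 seconds')
  ORDER BY id
  LIMIT 1
  FOR UPDATE SKIP LOCKED
)
RETURNING *;
```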

There are a few concerns here:

  1. This would require a database migration of some sort to support the new column
  2. Process crashes, unhandled errors, etc. could result in jobs never being processed if this is implemented poorly
  3. ?

Proposed implementation:

  • Add a processing_started_at timestamp column to the jobs table
  • Adapt getAllUnfinished to select jobs that aren't completed and whose processing_started_at is either null or older than a configurable timeout (maybe 30 sec by default); see the query sketch after the sample data below
  • Make isBusy calls always return false (maybe?)
  • Adapt the CLI scripts (shift, shift:all, etc.) to handle the possibility of getting an empty array of jobs instead of relying on an isBusy call (maybe?)
  • Remove setIsBusy calls (maybe?)
  • Add the corresponding changes to LowDb as well (maybe?)
  • Remove the worker table
id  processing_started_at       completed_at
1   2018-01-08 17:31:17.825153  2018-01-08 17:31:48.925153
2   2018-01-08 17:31:17.825153  null
3   2018-01-08 17:31:47.925153  null
4   2018-01-08 17:31:48.925153  null
5   null                        null
6   null                        null

Given this sample data, jobs 2, 5, and 6 would be eligible for the next generation worker to start processing, while jobs 3 and 4 are assumed to be currently processing.
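
A minimal sketch of the adapted getAllUnfinished query under these assumptions (column names as proposed above, 30 second timeout):

```sql
-- A job is eligible if it isn't completed and either was never started or
-- was started longer ago than the configurable timeout (30 seconds here).
SELECT *
FROM jobs
WHERE completed_at IS NULL
  AND (processing_started_at IS NULL
       OR processing_started_at < NOW() - INTERVAL '30 seconds')
ORDER BY id;
```

Run against the sample data shortly after job 4 started, this returns jobs 2, 5, and 6, matching the eligibility described above.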

If this all sounds like too big of an overhaul, I'm open to other suggestions. I'd also be willing to add this support in a new Redis database adapter instead.
