WIP: job-manager: add support for housekeeping scripts with partial release of resources #5818

Open
garlick wants to merge 3 commits into master

Conversation

@garlick (Member) commented Mar 21, 2024

This is a prototype of an idea @grondo and I were discussing last week, after someone complained about a large job stuck in CLEANUP state while the epilog ran (reminding us, with a pang of guilt, that the lack of "partial release" support is going to be an ongoing source of sadness). It introduces a new type of epilog script, called "housekeeping", that runs after the one we have now. There are two main features here:

  1. the job can reach INACTIVE state before housekeeping begins. This implies that the job no longer shows up in the active job list, and that flux-accounting stops accounting for the resources against the user.
  2. if the scheduler supports it, resources can be freed to the scheduler in parts as housekeeping completes on each execution target

sched-simple supports (2) above. Fluxion does not, yet. See flux-framework/flux-sched#1151.

The idea is that we could move some things that take a long time (like ansible updates) and that are not job-specific from the epilog to housekeeping. In addition, working on this will remove some barriers to implementing partial release of resources from jobs, which is more involved but is the ultimate goal.

One loose end is the RFC 27 hello request. Currently an R fragment cannot be passed back that way. For now, if a job's allocation is partially freed when the scheduler is reloaded, it is logged at LOG_ERR and not included in the list of pre-allocated jobs. We accept that, in the hopefully rare case of a scheduler reload during housekeeping, we might schedule a job on a node before housekeeping has finished. More design work is required for RFC 27.

Another loose end is tooling. flux resource list shows resources in housekeeping as allocated. It would be nice to know at a glance which nodes are tied up in housekeeping, and perhaps to provide other information that adds transparency and improves how admins interact with the machine.

This is currently based on #5796 because there was a bit of overlap in the code, but conceptually they are separate features. At the time this PR was posted, the new work is the top 4 commits.

I probably jumped the gun going this far without getting some feedback on the approach. I'll just say it was only a couple of days' work and I learned a fair bit about the problem space, so we can throw this out and start over if it ends up being off base.


    if (!(hk = calloc (1, sizeof (*hk))))
        return NULL;
    hk->ctx = ctx;

Check warning (Code scanning / CodeQL)

Local variable address stored in non-local memory: A stack address which arrived via a parameter may be assigned to a non-local variable.
@garlick (Member, Author) commented Apr 3, 2024

rebased on current master

@garlick (Member, Author) commented Apr 3, 2024

Possibly this is going too far, but to reuse as much tooling as possible, it would be nice if housekeeping "jobs" appeared as real jobs: ones that could be viewed with flux jobs, that produce an eventlog for debugging, that could be recovered on restart, etc.

I wonder if we could revive some of the "fastrun" experiment (e.g. 9f2a338) to allow the job manager to start instance-owner jobs on its own, then start one of these jobs for housekeeping each time a regular job completes. We would bypass the exec system in favor of the direct, stripped-down approach proposed here (no shell), and implement partial release using resource-update, so flux jobs would list housekeeping jobs:

$ flux jobs
       JOBID USER     NAME       ST NTASKS NNODES     TIME INFO
   ƒ26aXQeNs flux     housekeep+ R       2      2     3.5h elcap[3423,3499]

and it would be obvious which nodes are stuck, when that happens. The eventlog could provide a nice record of how nodes behaved.

@grondo (Contributor) commented Apr 3, 2024

That's a cool idea! While there would be some interesting benefits, I can think of a few issues:

Would we end up in the same place because jobs don't currently support partial release? That is, we'd have to solve partial release anyway, and if we did that we could just use traditional epilogs.

Also, I wonder what would actually be in the eventlog for these jobs, since there is no scheduling and no job shell. Would it just be start, finish, and clean? Without a job shell we wouldn't capture the housekeeping job's output in the traditional way.

The housekeeping jobs would have to be special cased to avoid running the job prolog and also spawning housekeeping work themselves.

Not to say these issues can't be overcome, but I thought I'd bring them up.

@garlick (Member, Author) commented Apr 3, 2024

Hmm, great points to ponder.

I might be trivializing, but I thought partial release from the housekeeping exec to the scheduler could be implemented as it is here, except that we'd also support resource-update so tools can see what has been released.

I guess that implies some sort of "exec bypass" in the job manager.

Perhaps in the spirit of prototyping I could try to tack something on here as a proof of concept if the idea isn't too outlandish.

@grondo (Contributor) commented Apr 3, 2024

As a simpler, though less interesting, alternative, we could add a new resource "state" like allocated or free. I'm not sure I like "housekeeping" as the name; maybe call it maint or service for now, since housekeeping is kind of like scheduled maintenance.

The job manager could just keep the set of resources in this state and send that set along in the job-manager.resource-status response with the allocated resources, and flux resource list could report nodes in this state.

The one thing that's really nice about using the job abstraction (or even the traditional approach of keeping jobs in CLEANUP state), and missing here, is tracking of how long resources have been in housekeeping/maintenance. We'd probably need some way to provide that. 🤔

@grondo (Contributor) commented Apr 3, 2024

Perhaps in the spirit of prototyping I could try to tack something on here as a proof of concept if the idea isn't too outlandish.

It does not seem too outlandish for a prototype. My main worry is having two classes of jobs, and the fallout this could cause among all the tools. However, I do see why tracking these things as jobs is compelling.

@grondo (Contributor) commented Apr 3, 2024

I might be trivializing, but I thought partial release from the housekeeping exec to the scheduler could be implemented as it is here, except we'd support resource-update to show what's been released to the tools.

This might be good to do anyway since there's interest in allowing jobs to release resources while still running. We'll have to think through how tools and the job record reflect a changing resource set.


codecov bot commented Apr 18, 2024

Codecov Report

Merging #5818 (10330a9) into master (21ce5c0) will decrease coverage by 0.05%.
The diff coverage is 76.13%.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #5818      +/-   ##
==========================================
- Coverage   83.31%   83.27%   -0.05%     
==========================================
  Files         514      515       +1     
  Lines       82809    83047     +238     
==========================================
+ Hits        68990    69155     +165     
- Misses      13819    13892      +73     
Files                                     Coverage Δ
src/modules/job-manager/event.c           78.52% <100.00%> (ø)
src/modules/job-manager/alloc.c           76.69% <75.00%> (-0.49%) ⬇️
src/modules/job-manager/job-manager.c     63.33% <50.00%> (-0.46%) ⬇️
src/modules/job-manager/housekeeping.c    76.49% <76.49%> (ø)

... and 16 files with indirect coverage changes

@garlick (Member, Author) commented Apr 25, 2024

Mentioning @ryanday36.

Ryan, this is a new epilog-like capability that runs after each job but does not prevent the job from completing. It also supports "partial release", whereby any stragglers are returned to the scheduler independently. The main topic of discussion is tooling: how do you tell that housekeeping is still running on a node, and for how long?

Some other aspects are:

  • stdio is not captured and nodes are not drained when the script fails. I figured the script itself could arrange for either of those things (for example, by calling flux logger or flux resource drain), but that could be changed.
  • right now there is a corner case where a restart of Flux could lose track of running housekeeping work, and new jobs could start on a node before housekeeping is finished. Presumably we'll need to address that in some way.

@ryanday36

This seems like an interesting approach. As far as tooling goes, if you do list the housekeeping jobs as "real" jobs, I like that they are listed as belonging to the flux user rather than the original job's user. Slurm jobs in "completing" states that list the submitting user seem to be a perpetual source of confusion for our users. If you do list them as "real" jobs, though, it would be good if they had some state other than "R" to avoid confusion.

I think that using flux logger and flux resource drain in the housekeeping jobs would be acceptable, although it would still be good if the node got drained if a housekeeping job exited with anything other than success.

I agree that yes, it would be good to track nodes that have unfinished housekeeping.

My last thought is a sort of meta-concern. It seems like whenever we talk about the current prolog / epilog it's described as something that you're not quite happy with. Would this be further cementing that implementation, or would it be separating concerns in a way that would allow you to make changes to the current prolog / epilog more easily?

@grondo (Contributor) commented Apr 25, 2024

I think that using flux logger and flux resource drain in the housekeeping jobs would be acceptable

My worry is that unanticipated errors from a script or set of scripts run in this way will be impossible to diagnose after the fact. You can easily code your housekeeping script to do

  try_something || flux logger "something failed"

Are we going to suggest authors do that on every line?

What if your python script throws an unexpected exception? Are we going to suggest cleanup scripts be wrapped in a large try/except block so that the exception can be logged with flux logger? What if the exception is that the script tried to connect to flux and somehow failed?

If we represent these things as jobs, we have a nice way to capture output from jobs, so maybe we could just do that? Is the worry that it is just too much work, or that it will double the KVS usage since there will be one housekeeping job per actual job?

@garlick (Member, Author) commented Apr 25, 2024

Good points. I guess I was thinking of the runner script that they have now, which could log errors from the scripts it runs, but it's actually easy enough to capture output if we need it. The downside is that if we capture it in the flux log like we do now, it can be a source of log noise that pushes out the interesting stuff. I think it's very easy to add.
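
For illustration, a runner along those lines might look like the sketch below. This is only a hedged sketch, not part of this PR: the drop-in directory path is hypothetical, and it assumes the existing flux logger and flux resource drain commands as the error-reporting mechanism.

    #!/bin/sh
    # Sketch of a housekeeping runner (illustrative only).
    # Runs each executable in a drop-in directory, captures combined output,
    # logs failures to the flux log, and drains the local node on any failure.
    rc=0
    for script in /etc/flux/housekeeping.d/*; do
        [ -x "$script" ] || continue
        if ! output=$("$script" 2>&1); then
            rc=1
            flux logger --severity err "housekeeping: $script failed: $output"
        fi
    done
    if [ "$rc" -ne 0 ]; then
        flux resource drain "$(hostname)" "housekeeping failure"
    fi
    exit $rc

That keeps routine output out of the flux log while still recording enough to debug a failure, and it covers @ryanday36's point about draining the node when housekeeping does not exit cleanly.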

@grondo (Contributor) commented Apr 25, 2024

If these housekeeping scripts are emulated like jobs, could we just log the errors in an output eventlog?
Or are they only jobs as far as job-list is concerned, so all other job-related commands are just going to say "no such jobid"?

@grondo (Contributor) commented Apr 25, 2024

It seems like whenever we talk about the current prolog / epilog it's described as something that you're not quite happy with. Would this be further cementing that implementation, or would it be separating concerns in a way that would allow you to make changes to the current prolog / epilog more easily?

The problem with the way the per-node epilog is currently managed is that it remotely executes the epilog through a specialized script, using a facility that was designed for things that need to run once, in a central location, after each job. This was meant to be a stopgap until we could distribute the job execution system, which would then be able to manage the epilog scripts directly from each broker.

However, in my opinion, the solution proposed here is superior to even what we had planned. The benefits I see are:

  • The administrative epilog is separated from the user's job, both for runtime accounting and for reporting, as you note
  • The current-style epilog could still be used during the transition, or for anything that should keep resources associated with the user's job until completion (i.e. both epilog styles can coexist)
  • It doesn't require the job execution system to be distributed
  • Implementation of how the epilog or housekeeping scripts are run can be improved over time (not dependent on job execution system)
  • More flexible: you could presumably schedule maintenance or other work not related to jobs as 'housekeeping'

The only drawback I can see is perhaps more muddled reporting: it isn't clear which user job spawned a housekeeping job (if that is the way it is going to be reported). But that could easily be queried, or, come to think of it, could even be an annotation. Maybe that is actually a benefit.

Also, reporting housekeeping work as jobs will (I assume?) show the nodes as "allocated" rather than some other state, and job queries will always return some housekeeping jobs that would likely have to be filtered out.

@ryanday36

Cool. Thanks for the discussion on the benefits @grondo. That makes me feel better about it.

I do think that it would be good to be able to correlate housekeeping jobs with the user job that launched them. It's good to be able to find out if one user is consistently managing to leave nodes in bad states.

I'd actually be interested in being able to track the time spent in housekeeping separately from the time running the user jobs. It could be a good tool for keeping run_ansible and other epilog things from getting out of hand. That said, for most reporting, we'd probably lump them in as 'allocated'. I believe that's how time spent in 'completing' currently gets counted in Slurm.

Lastly, good points on the logging discussion. Things like run_ansible do generate a ton of output that would probably be good to keep out of the flux log. It would be good to be able to get at it when a housekeeping job fails, though, so a job-specific output eventlog that could be referenced on failure seems like it would be useful.

Problem: free requests may not take place in the context of the
job once housekeeping is in place, but t2212-job-manager-plugins.t
uses the debug.free-request event as an indication that the job
debug flag could be set.

Use debug.alloc-request in the test instead.
Problem: jobs get stuck in CLEANUP state while long epilog
scripts run, causing sadness and idling resources.

Introduce a new type of epilog script called "housekeeping" that is
ostensibly job independent.  Instead of freeing resources directly
to the scheduler, jobs free resources to housekeeping, post their free
event, and may reach INACTIVE state.  Meanwhile, housekeeping can run
a script on the allocated resources and return the resources to the
scheduler when complete. The resources are still allocated to the job
as far as the scheduler is concerned while housekeeping runs.  However,
since the job has transitioned to INACTIVE, the flux-accounting plugin
will decrement the running job count for the user and stop billing
the user for the resources.  The 'flux resource list' utility shows the
resources as allocated.

By default, resources are released all at once to the scheduler, as before.
However, if configured, resources can be freed to the scheduler immediately
as they complete housekeeping on each execution target, or a timer can be
started on completion of the first target, and when the timer expires, all
the targets that have completed thus far are freed in one go. Following that,
resources are freed to the scheduler immediately as they complete.

This works with sched-simple without changes, with the exception that the
hello protocol does not currently support partial release, so, as noted in
the code, housekeeping and a new job could overlap when the scheduler is
reloaded on a live system.  Some RFC 27 work is needed to resolve this.

The Fluxion scheduler does not currently support partial release
(flux-framework/flux-sched#1151).  But as discussed over there, the
combination of receiving an R fragment and a jobid in the free request
should be sufficient to get that working.
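
To make the release behavior described in the commit message above concrete, configuration might look something like the sketch below. This is hypothetical: the conf.d drop-in location is the usual one for a system instance, but the [job-manager.housekeeping] table and key names are illustrative placeholders based on the commit message, not a settled interface.

    # Hypothetical sketch only: the TOML table and key names are illustrative,
    # not settled by this WIP.
    cat >/etc/flux/system/conf.d/housekeeping.toml <<'EOF'
    [job-manager.housekeeping]
    # script run on each execution target after every job
    command = [ "/etc/flux/housekeeping.d/run" ]
    # start a timer when the first target finishes housekeeping; when it
    # expires, free completed targets in one go, then free the rest as they
    # finish (leave unset to free all targets at once, as before)
    release-after = "30s"
    EOF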
Problem: there is no test coverage for job-manager housekeeping.

Add a sharness script.