LLNL GitLab-CI #747
@jedbrown For your information: https://radiuss-ci.readthedocs.io/en/latest/
The libceed repo already has
Yes, there is. We can create "rules" that distinguish based on the server name, for example:
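A minimal sketch of that approach, using GitLab's predefined `CI_SERVER_HOST` variable (the hostnames, job names, and script paths below are placeholders, not taken from the actual libCEED config):

```yaml
# Hypothetical .gitlab-ci.yml fragment: `rules` select which jobs run
# on which GitLab instance, keyed on the predefined CI_SERVER_HOST.
gitlab-com-test:
  script: ./ci/gitlab-com-test.sh
  rules:
    - if: '$CI_SERVER_HOST == "gitlab.com"'

llnl-test:
  script: ./ci/llnl-test.sh
  rules:
    - if: '$CI_SERVER_HOST == "lc.llnl.gov"'
```

With this layout a single `.gitlab-ci.yml` can be mirrored to both instances, and each instance simply skips the jobs whose `rules` don't match.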
I think I would prefer the first of these two options because all content stays in the repository.
This ECP documentation is really good, and will save me a considerable amount of time answering the same questions here on the lab-internal docs!
I like the first of those two options as well. I could see it being useful to split the scripts for each CI instance into separate files that get referenced in
@jeremylt, I am failing to see the nuance with the example given, where each job is already calling a different script.
I'm just commenting that I would find it easier to see what's going on if all of this from our current yaml
would be put in a separate bash file (and the script for the new LLNL job put into its own bash file) if we run both CI jobs off of the same yaml. Then we'd have
or something along those lines
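For example, the split could look like this (file names here are made up to illustrate the layout, not the actual repository paths):

```yaml
# Hypothetical layout: the shared .gitlab-ci.yml only dispatches to
# per-instance shell scripts kept under version control.
noether-cpu:
  script: .gitlab/ci/noether.sh

llnl-gpu:
  script: .gitlab/ci/llnl.sh
```

The yaml then stays short enough to read at a glance, and the per-instance details live in scripts that can also be run locally.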
FYI, the first method has the drawback that rules can be hard to stack.
There are two inaccuracies here:
That is because, per the documentation:
I don't see a link to the ECP CI documentation repo, but I would like to share those thoughts with the authors.
@jeremylt I agree, putting scripts in a separate file is a good practice.
I think they'd love a merge request. https://gitlab.com/ecp-ci/ecp-ci.gitlab.io/-/blob/main/docs/guides/multi-gitlab-project.rst |
For those with LLNL CZ access, I created a mirror repository that will be able to run CI jobs. You can log in and request access if you don't already have it.
Relevant documentation:
The LLNL system is set up to allow "LGTM" comments by a trusted member to launch a pipeline with jobs on LLNL machines (including quart, lassen, and corona). Some MFEM team members have experience with this setup. I don't think we need to use it for all PRs, but it'd be wonderful to set this up so it's easy to use for PRs that look like they would benefit. I'd envision using the batch executor on a single node, ideally with either MFEM or PETSc to test multi-GPU solvers. This could include short-running performance tests for longitudinal tracking.
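As a rough sketch of what such a job could look like (the tag names, variable name, and scheduler flags below are guesses for illustration, not the actual LLNL runner configuration):

```yaml
# Hypothetical job for an LLNL batch-executor runner on a single node,
# triggered only after a trusted "LGTM" review comment.
lassen-multigpu:
  tags: [lassen, batch]
  variables:
    # Passed to the scheduler by the batch executor; the exact variable
    # name and flags depend on the ECP CI runner setup.
    SCHEDULER_PARAMETERS: "-nnodes 1 -W 30"
  script:
    - make test-multigpu
```

Keeping this to a single node keeps queue times short while still exercising the multi-GPU solver paths.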