Incorporating reproducibility #12
Comments
I'd suggest we try the awards route. We could add an extra checkbox during abstract submission asking authors who would like to participate in the prize to share their code/data. I'm happy to champion reproducibility at the 2024 CI conference with @dorchard. We should build on existing resources, e.g. the CI2023 Reproducibility Challenge, the AGILE Reproducibility Checklist, and others.
Happy to get involved in this discussion :)
Sorry for being slow on the uptake (a very busy month!). Perhaps it's best to have a team of reproducibility champions! I would be interested in leading efforts around reproducibility in the peer-review process, e.g. having an additional artefact evaluation committee that reviews artefacts submitted alongside accepted papers. I couldn't organise it on my own, though!
My postdoc Roly Perera would be able to help too, I think, and would be a great addition. He and I have both been involved in artefact evaluation in the past (both as committee members and as punters being evaluated!)
@rolyp would be a great addition. Roly, please feel free to confirm your interest here :)
Count me in – sounds great. |
@dorchard the reproducibility review is interesting, but thinking about the composition of submissions, we often have many more abstracts than papers. The reproducibility review should probably only apply to the full paper submissions.
Yes, I was thinking just for full paper submissions, and only those that are accepted; i.e. this wouldn't be about accepting/rejecting based on the artefact, but about assessing its reproducibility and trying to encourage and enable authors to have available and reusable artefacts.
Hi @dorchard @rolyp @acocac @MarionBWeinzierl @geo-rao - love where this discussion is going! If the full paper submissions are going to be assessed on this, even informally, we'll need to include some note about it in the submission guidelines (which are going live ASAP!). Could you have a look at the submission info PR and make any suggestions on the file? I'll add you all to the repo now so you have write access (let me know if you can't access it!). I suggest you work on something quite light-touch (or non-specific!) given the timelines :)
Here is the description of the AE/AD evaluation at SC23, including what they require for the badges: https://sc23.supercomputing.org/program/papers/reproducibility-appendices-badges/
Hi @cassgvp, I’m off this afternoon and most of tomorrow but will earmark some time on Thurs to take a look. Thanks. |
A few random notes/questions in advance of today’s meeting:
|
@acocac @cassgvp Quick note in advance of today's meeting. I noticed issue #3. Is it worth now creating a separate Artifact Evaluation Committee issue to track the following:
|
Sorry, I have not yet done my homework of writing an overview of the approach we will use.
Sharing some areas of work and questions discussed in today's meeting. Additional inputs or resources would be welcome 🙏

- Reviewers: discussed strategies to recruit volunteers. Should we start with the existing list of reviewers for abstract/full paper submissions? Shall we have a separate call for reviewers of computational artefacts?
- Checklist: which elements are essential? Will we evaluate reproducibility according to a tier-based system such as ACM artefact review badging? Which other components should we adopt/adapt from existing checklists: SC, AGILE, Machine Learning, ReproHack?
- Communication: how do we ensure a safe and healthy communication space between authors and reviewers (and moderators)? Are we adhering to an existing code of conduct, e.g. EDS or Climate Informatics, or do we need a custom one?
- Badges: if using an existing badging system, which permissions do we require, e.g. from ACM? Do we prefer badges successfully implemented in the wider informatics/computer science community, e.g. ACM in SPLASH2024, or emerging ones in other domains, e.g. CodeCheck?
- Platforms: do we prefer simple platforms (Slack, email, and Google Forms) or paid ones?
- Computational artefacts: can we anticipate or investigate which types of artefacts are commonly used in computational climate science? Will the reproducibility assessment change according to the typology of computational artefacts?
- Activities:
|
@acocac @cassgvp Closing this as "completed", in the sense that the Artifact Evaluation process is well under way at https://github.com/orgs/alan-turing-institute/projects/246.
Summary Sentence
The organising committee are keen to emphasise the importance of reproducibility in our community. The following activities would provide opportunities to celebrate and build capacity for reproducibility at this meeting.
See also:
What needs to be done?
Org committee to decide which activities to include and nominate leads for each.
—
Update after the issue was opened
[Add details]