Fix postprocessing failure on local docker #262
Conversation
Looks pretty good. One comment below.
buildstockbatch/postprocessing.py
```diff
-def read_results_json(fs, filename):
+def read_results_json(fs, filename, job_id):
     with fs.open(filename, 'rb') as f1:
         with gzip.open(f1, 'rt', encoding='utf-8') as f2:
             dpouts = json.load(f2)
     df = pd.DataFrame(dpouts)
     # Sorting is needed to ensure all dfs have same column order. Dask will fail otherwise.
     df['job_id'] = job_id
```
This works, but wouldn't it be simpler and less error prone to parse out the job id from the filename inside this function rather than to pass it in?
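A minimal sketch of what the reviewer is suggesting: instead of threading `job_id` through as an extra parameter, the function can recover it from the `results_job<N>.json.gz` filename with the same regex used later in this PR. This is an illustrative refactor, not the code as merged.

```python
import gzip
import json
import re

import pandas as pd


def read_results_json(fs, filename):
    # Parse the job id out of the filename (e.g. results_job12.json.gz -> 12)
    # rather than passing it in as an argument.
    job_id = int(re.search(r'results_job(\d+)\.json\.gz', filename).group(1))
    with fs.open(filename, 'rb') as f1:
        with gzip.open(f1, 'rt', encoding='utf-8') as f2:
            dpouts = json.load(f2)
    df = pd.DataFrame(dpouts)
    df['job_id'] = job_id
    return df
```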
Yep, it totally would be. I guess your previous implementation polluted my mind into thinking I should calculate them all at once! I will fix this.
Haha. I guess so. Do as I say, not as I do. 😉
buildstockbatch/postprocessing.py
```python
results_json_files = fs.glob(f'{sim_output_dir}/results_job*.json.gz')
job_ids = [int(re.search(r'results_job(\d+)\.json\.gz', x).group(1)) for x in results_json_files]
```
See the comment on the `read_results_json` function above.
Ignorant question: Is it possible to create "local docker" CI tests that would catch this issue in the future? It's great that it got caught by the ResStock CI, but ideally it would be caught sooner (i.e., here).
@shorowit I talked about this with @joseph-robertson as well, and it seems like a good idea to have those integration tests running for buildstockbatch PRs in addition to ResStock PRs. I think this can be a part of a larger discussion on overhauling the CI for BSB that @nmerket is considering.
@shorowit Yes, we should catch this. The way we're doing CI with Circle CI makes it hard to do, but switching to GitHub Actions will make this possible. #223 should be the next thing I get to on bsb. Will chat with @joseph-robertson about best practices he's learned there.
Pull Request Description
The postprocessing was failing on docker because the last PR (#258) relied on jobx.json files to determine which simulation belonged to which job and to find the total number of upgrades. However, jobx.json files are not available when running locally using docker. This PR addresses that issue by no longer relying on jobx.json.
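The approach described above can be sketched as a small helper: the job id for each results file is recovered from the `results_job<N>.json.gz` filename itself, so no jobx.json lookup is needed. The helper name is hypothetical and for illustration only.

```python
import re


def job_ids_from_filenames(filenames):
    """Map each results_job<N>.json.gz path to its integer job id.

    This mirrors the PR's strategy of deriving the job mapping from the
    output filenames, which exist in local Docker runs, instead of from
    jobx.json files, which do not.
    """
    return [
        int(re.search(r'results_job(\d+)\.json\.gz', f).group(1))
        for f in filenames
    ]
```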