ResStock-HPXML #208

Merged: 97 commits, Mar 2, 2022
Changes from 27 commits

Commits
b101461
stubbing out ResidentialHpxmlWorkflowGenerator
nmerket Feb 17, 2021
babd75a
Update residential hpxml workflow generator and test.
joseph-robertson Feb 17, 2021
38d30e0
Update docs with hpxml workflow generator page.
joseph-robertson Feb 17, 2021
3ad5c48
Include measures at resources/hpxml-measures.
joseph-robertson Feb 17, 2021
30e7c74
Fix some formatting.
joseph-robertson Feb 17, 2021
c4c22b6
Few more minor docs changes.
joseph-robertson Feb 17, 2021
5f574c8
Get os 3.1 image and correct timeseries file.
joseph-robertson Feb 17, 2021
2f0228c
Some build existing model args have changed.
joseph-robertson Feb 17, 2021
60c3fe8
Remove more unit_ from build existing model.
joseph-robertson Feb 17, 2021
5b3d901
Clean up a bunch of stuff.
joseph-robertson Feb 18, 2021
9f6d31d
Mount hpxml measures folder.
joseph-robertson Feb 18, 2021
458929e
Only mount hpxml folder if it exists.
joseph-robertson Feb 18, 2021
baea180
Typo.
joseph-robertson Feb 18, 2021
d462e79
Change mount dir in docker and aws.
joseph-robertson Feb 18, 2021
89cea4f
Revert hpxml folder changes and try mount separately instead.
joseph-robertson Feb 18, 2021
36e6931
Try fixing resources bind mount.
joseph-robertson Feb 18, 2021
a9dd5ea
Fix how do_timeseries is determined.
joseph-robertson Feb 19, 2021
cd52db7
Get upgrade costs from datapoint out json.
joseph-robertson Feb 19, 2021
1054eb2
Add run options to hpxml workflow.
joseph-robertson Feb 19, 2021
c18dc16
Report upgrade costs in results csv.
joseph-robertson Feb 19, 2021
83bb566
pinning to numpy<1.20
nmerket Feb 19, 2021
9295171
updating numpy>=1.20 and pyarrow>=3.0 as those seem to work with each…
nmerket Feb 20, 2021
8561012
Merge branch 'develop' into restructure-v3
joseph-robertson Apr 14, 2021
45687c2
Update for new schedules csv filename.
joseph-robertson Apr 14, 2021
78bd3b4
Run option fast, and call server directory cleanup.
joseph-robertson Apr 14, 2021
a22012a
Switch from data_point_out json to results json.
joseph-robertson Apr 14, 2021
febc51b
Get started_at and completed_at from job files.
joseph-robertson Apr 14, 2021
e8fe426
Clean up default args and add optional debug arg.
joseph-robertson Apr 14, 2021
6c14585
Merge pull request #217 from NREL/restructure-v3-runoptions
joseph-robertson Apr 14, 2021
e6a8bf8
Get residential hpxml test passing.
joseph-robertson Apr 15, 2021
0b3e65f
Merge branch 'develop' into restructure-v3
joseph-robertson Apr 15, 2021
729be65
Clean up started_at and completed_at stamps.
joseph-robertson Apr 15, 2021
c239132
Add measures and reporting_measures into hpxml generator.
joseph-robertson Apr 15, 2021
201fff3
Update the hpxml generator docs.
joseph-robertson Apr 15, 2021
dffd251
Merge remote-tracking branch 'origin/develop' into restructure-v3
nmerket Apr 27, 2021
74e4199
Merge remote-tracking branch 'origin/develop' into restructure-v3
nmerket Apr 27, 2021
477213d
Determine invalid dps using results json.
joseph-robertson Apr 27, 2021
04eaf08
Remove debug line.
joseph-robertson Apr 27, 2021
fd84328
Minor fixes in postprocessing.
joseph-robertson Apr 29, 2021
660b1ab
Update changelog_dev.
joseph-robertson Apr 29, 2021
73e465c
Merge branch 'develop' into restructure-v3
joseph-robertson May 11, 2021
8696089
Update simg to os320.
joseph-robertson May 11, 2021
e3e25dc
Merge branch 'develop' into restructure-v3
joseph-robertson Jul 19, 2021
f0168ed
Update version and sha to os 3.2.1.
joseph-robertson Jul 19, 2021
eb906bc
Merge branch 'develop' into restructure-v3
joseph-robertson Aug 17, 2021
c324327
Merge branch 'develop' into restructure-v3
joseph-robertson Sep 14, 2021
a8d4f71
Update hpxml workflow generator with server dir cleanup changes.
joseph-robertson Sep 15, 2021
e266a89
Add to schema yml.
joseph-robertson Sep 15, 2021
b05a159
Reporting measure name changes.
joseph-robertson Sep 16, 2021
b1acb96
Update hpxml workflow test.
joseph-robertson Sep 16, 2021
5481750
Format.
joseph-robertson Sep 16, 2021
b288f84
Change more sim output report names.
joseph-robertson Sep 21, 2021
739af63
Even more measure name changes.
joseph-robertson Sep 21, 2021
74f1302
Produce hourly end use and total loads timeseries by default.
joseph-robertson Oct 19, 2021
91a7427
Update residential hpxml workflow generator docs.
joseph-robertson Oct 19, 2021
89678ab
Fix default os version.
joseph-robertson Oct 26, 2021
7115c51
Change timeseries default behavior back to none.
joseph-robertson Oct 29, 2021
0da0cd4
Bump to official os v330.
joseph-robertson Nov 5, 2021
75ff695
Merge branch 'develop' into restructure-v3
joseph-robertson Nov 15, 2021
cf83525
update check for timeseries csv request
aspeake Nov 19, 2021
f526107
Merge branch 'restructure-v3' of github.com:NREL/buildstockbatch into…
aspeake Nov 19, 2021
e6a1aa3
correct yml argument name for simulation output report
aspeake Nov 19, 2021
276265f
Bump to official os v330.
joseph-robertson Nov 5, 2021
b049197
Update workflow generator for co2 emissions timeseries.
joseph-robertson Dec 20, 2021
dd3379f
Another generator update.
joseph-robertson Dec 20, 2021
8ce7bf3
Allow argument list in yml for co2_emissions.
joseph-robertson Jan 7, 2022
3fb4086
Update arg names.
joseph-robertson Jan 10, 2022
465ca13
Continue to update workflow generator for emissions arguments.
joseph-robertson Jan 11, 2022
dac9681
Move yml emissions arg up a level.
joseph-robertson Jan 24, 2022
a8f4296
Updates to workflow generator docs.
joseph-robertson Jan 24, 2022
26da8e8
Update workflow generator docs.
joseph-robertson Jan 24, 2022
340b3d0
Merge branch 'restructure-v3' into restructure-v3-cambium
joseph-robertson Jan 24, 2022
fc5689d
Clean up.
joseph-robertson Jan 24, 2022
3a47168
Updates to schema.
joseph-robertson Jan 24, 2022
c10dff2
Include emissions fuel args.
joseph-robertson Jan 27, 2022
d181d08
Update validate method.
joseph-robertson Jan 27, 2022
8b7233e
Relax requires on fuel args.
joseph-robertson Jan 27, 2022
0d5ef86
Revert and try float.
joseph-robertson Jan 27, 2022
782f6e2
The validator is num not float.
joseph-robertson Jan 27, 2022
e377939
Convert to str.
joseph-robertson Jan 28, 2022
6764522
Add dst and utc time columns by default.
joseph-robertson Jan 31, 2022
84e3a09
Merge branch 'restructure-v3' into restructure-v3-cambium
joseph-robertson Feb 1, 2022
f94bd6f
Merge branch 'develop' into restructure-v3
joseph-robertson Feb 1, 2022
7e9f7e4
Remove optionals from workflow generator.
joseph-robertson Feb 1, 2022
27f9ceb
Merge branch 'restructure-v3' into restructure-v3-cambium
joseph-robertson Feb 1, 2022
6b2b116
Update workflow docs.
joseph-robertson Feb 1, 2022
8d90bcf
More docs updates.
joseph-robertson Feb 1, 2022
fec2f3d
Merge pull request #259 from NREL/restructure-v3-cambium
joseph-robertson Feb 1, 2022
4a0eeb4
remove overwrite of TimeDST column
aspeake Feb 4, 2022
346eb32
Merge pull request #264 from NREL/restructure-v3-dst-fix
aspeake Feb 8, 2022
4cb4380
Merge branch 'develop' into restructure-v3
joseph-robertson Feb 9, 2022
174279c
Fixes for backward compatibility.
joseph-robertson Feb 16, 2022
f5f5c11
Fix str.
joseph-robertson Feb 16, 2022
24f53c7
Merge branch 'github_actions' into restructure-v3
joseph-robertson Feb 16, 2022
28051fc
Fix.
joseph-robertson Feb 17, 2022
5d7670f
Update for removing fast run option.
joseph-robertson Feb 17, 2022
5b6296a
Merge branch 'develop' into restructure-v3
joseph-robertson Feb 22, 2022
2 changes: 2 additions & 0 deletions buildstockbatch/aws/aws.py
Original file line number Diff line number Diff line change
@@ -1800,6 +1800,8 @@ def run_batch(self):
project_path = pathlib.Path(self.project_dir)
buildstock_path = pathlib.Path(self.buildstock_dir)
tar_f.add(buildstock_path / 'measures', 'measures')
if os.path.exists(buildstock_path / 'resources/hpxml-measures'):
tar_f.add(buildstock_path / 'resources/hpxml-measures', 'resources/hpxml-measures')
tar_f.add(buildstock_path / 'resources', 'lib/resources')
tar_f.add(project_path / 'housing_characteristics', 'lib/housing_characteristics')
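The hunk above packages the HPXML measures into the project tarball only when the directory exists, so non-HPXML projects are unaffected. A minimal runnable sketch of that conditional packaging, with temporary stand-in paths:

```python
import os
import pathlib
import tarfile
import tempfile

# Stand-in for self.buildstock_dir; all paths here are illustrative.
buildstock_path = pathlib.Path(tempfile.mkdtemp())
(buildstock_path / 'resources' / 'hpxml-measures').mkdir(parents=True)
(buildstock_path / 'resources' / 'hpxml-measures' / 'measure.rb').write_text('# stub')

tar_path = os.path.join(tempfile.mkdtemp(), 'project.tar.gz')
with tarfile.open(tar_path, 'w:gz') as tar_f:
    # Same guard as the diff: only add the directory when it is present.
    if os.path.exists(buildstock_path / 'resources/hpxml-measures'):
        tar_f.add(buildstock_path / 'resources/hpxml-measures', 'resources/hpxml-measures')
```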

36 changes: 29 additions & 7 deletions buildstockbatch/base.py
@@ -38,8 +38,9 @@

class BuildStockBatchBase(object):

DEFAULT_OS_VERSION = '2.9.1'
DEFAULT_OS_SHA = '3472e8b799'
# http://openstudio-builds.s3-website-us-east-1.amazonaws.com
DEFAULT_OS_VERSION = '3.1.0'
DEFAULT_OS_SHA = 'e165090621'
nmerket (Member) commented:
As we talked about we need to find a way (eventually) to make this dependent on which workflow you're using. Need to coordinate with ComStock.

joseph-robertson (Contributor, Author) replied:
@nmerket Idea: Pull openstudio version from resstock's __version__.py file? https://github.com/NREL/resstock/blob/develop/resources/__version__.py#L5
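The reply above suggests deriving the OpenStudio version from resstock's `__version__.py` instead of hard-coding a default. A hypothetical sketch of that idea — the variable name `__os_version__` and the file format are assumptions for illustration, not resstock's confirmed API:

```python
import re

def openstudio_version_from_text(version_py_text):
    """Scan a resstock-style __version__.py for an OpenStudio version
    assignment. NOTE: the variable name below is an assumption, not
    resstock's actual API."""
    m = re.search(r"^\s*__os_version__\s*=\s*['\"]([^'\"]+)['\"]",
                  version_py_text, re.MULTILINE)
    return m.group(1) if m else None
```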

CONTAINER_RUNTIME = None
LOGO = '''
_ __ _ __, _ __
@@ -172,17 +173,32 @@ def cleanup_sim_dir(sim_dir, dest_fs, simout_ts_dir, upgrade_id, building_id):

# Convert the timeseries data to parquet
# and copy it to the results directory
timeseries_filepath = os.path.join(sim_dir, 'run', 'enduse_timeseries.csv')
schedules_filepath = os.path.join(sim_dir, 'generated_files', 'schedules.csv')
results_timeseries_filepath = os.path.join(sim_dir, 'run', 'results_timeseries.csv')
timeseries_filepath = results_timeseries_filepath
skiprows = [1]
# FIXME: Allowing both names here for compatibility. Should consolidate on one timeseries filename.
if not os.path.isfile(results_timeseries_filepath):
enduse_timeseries_filepath = os.path.join(sim_dir, 'run', 'enduse_timeseries.csv')
timeseries_filepath = enduse_timeseries_filepath
skiprows = False
schedules_filepath = ''
if os.path.isdir(os.path.join(sim_dir, 'generated_files')):
for file in os.listdir(os.path.join(sim_dir, 'generated_files')):
if file.endswith('schedules.csv'):
schedules_filepath = os.path.join(sim_dir, 'generated_files', file)
if os.path.isfile(timeseries_filepath):
# Find the time columns present in the enduse_timeseries file
possible_time_cols = ['time', 'Time', 'TimeDST', 'TimeUTC']
cols = pd.read_csv(timeseries_filepath, index_col=False, nrows=0).columns.tolist()
actual_time_cols = [c for c in cols if c in possible_time_cols]
if not actual_time_cols:
logger.error(f'Did not find any time column ({possible_time_cols}) in enduse_timeseries.csv.')
raise RuntimeError(f'Did not find any time column ({possible_time_cols}) in enduse_timeseries.csv.')
tsdf = pd.read_csv(timeseries_filepath, parse_dates=actual_time_cols)
logger.error(f'Did not find any time column ({possible_time_cols}) in {timeseries_filepath}.')
raise RuntimeError(f'Did not find any time column ({possible_time_cols}) in {timeseries_filepath}.')
if skiprows:
tsdf = pd.read_csv(timeseries_filepath, parse_dates=actual_time_cols, skiprows=skiprows)
tsdf['TimeDST'] = tsdf['Time'] # FIXME: Actually write TimeDST to results_timeseries.csv?
else:
tsdf = pd.read_csv(timeseries_filepath, parse_dates=actual_time_cols)
if os.path.isfile(schedules_filepath):
schedules = pd.read_csv(schedules_filepath)
schedules.rename(columns=lambda x: f'schedules_{x}', inplace=True)
@@ -540,6 +556,12 @@ def process_results(self, skip_combine=False, force_upload=False):
self.get_dask_client() # noqa: F841

do_timeseries = 'timeseries_csv_export' in self.cfg['workflow_generator']['args'].keys()
if not do_timeseries:
if 'simulation_output_report' in self.cfg['workflow_generator']['args'].keys():
if 'timeseries_frequency' in self.cfg['workflow_generator']['args']['simulation_output_report'].keys():
do_timeseries = \
(self.cfg['workflow_generator']['args']['simulation_output_report']['timeseries_frequency'] !=
'none')

fs = LocalFileSystem()
if not skip_combine:
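The nested key checks in the `do_timeseries` hunk above condense into one function: timeseries postprocessing is requested either via the legacy `timeseries_csv_export` block or via a `simulation_output_report.timeseries_frequency` other than `'none'`. A sketch of that check:

```python
def determine_do_timeseries(cfg):
    """Condensed form of the diff's nested .keys() checks; equivalent when
    the keys are absent because the default frequency is treated as 'none'."""
    args = cfg['workflow_generator']['args']
    if 'timeseries_csv_export' in args:
        return True
    sim_out_rep = args.get('simulation_output_report', {})
    return sim_out_rep.get('timeseries_frequency', 'none') != 'none'
```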
11 changes: 11 additions & 0 deletions buildstockbatch/eagle.py
@@ -229,6 +229,11 @@ def run_job_batch(self, job_array_number):
pathlib.Path(self.buildstock_dir) / 'measures',
self.local_buildstock_dir / 'measures'
)
if os.path.exists(pathlib.Path(self.buildstock_dir) / 'resources/hpxml-measures'):
self.clear_and_copy_dir(
pathlib.Path(self.buildstock_dir) / 'resources/hpxml-measures',
self.local_buildstock_dir / 'resources/hpxml-measures'
)
self.clear_and_copy_dir(
self.weather_dir,
self.local_weather_dir
@@ -336,6 +341,12 @@ def run_building(cls, output_dir, cfg, n_datapoints, i, upgrade_idx=None):
container_symlink = os.path.join('/var/simdata/openstudio', os.path.basename(src))
runscript.append('ln -s {} {}'.format(*map(shlex.quote, (container_mount, container_symlink))))

if os.path.exists(os.path.join(cls.local_buildstock_dir, 'resources/hpxml-measures')):
runscript.append('ln -s /resources /var/simdata/openstudio/resources')
src = os.path.join(cls.local_buildstock_dir, 'resources/hpxml-measures')
container_mount = '/resources/hpxml-measures'
args.extend(['-B', '{}:{}:ro'.format(src, container_mount)])

# Build the openstudio command that will be issued within the singularity container
# If custom gems are to be used in the singularity container add extra bundle arguments to the cli command
cli_cmd = 'openstudio run -w in.osw'
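The Singularity hunk above does two things: bind-mount the host `hpxml-measures` directory read-only at `/resources/hpxml-measures` inside the container, and symlink `/resources` into the simulation directory so measures resolve. A sketch of that plumbing with an illustrative host path:

```python
import os

# Stand-in for cls.local_buildstock_dir; path is illustrative.
local_buildstock_dir = '/tmp/scratch/buildstock'
args = []        # extra arguments passed to the singularity CLI
runscript = []   # shell lines executed inside the container

src = os.path.join(local_buildstock_dir, 'resources/hpxml-measures')
container_mount = '/resources/hpxml-measures'
# Symlink first so the bind-mounted measures appear under the sim directory.
runscript.append('ln -s /resources /var/simdata/openstudio/resources')
# ':ro' mounts the host directory read-only, as in the diff.
args.extend(['-B', '{}:{}:ro'.format(src, container_mount)])
```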
3 changes: 3 additions & 0 deletions buildstockbatch/localdocker.py
@@ -103,6 +103,9 @@ def run_building(cls, project_dir, buildstock_dir, weather_dir, docker_image, re
(os.path.join(project_dir, 'housing_characteristics'), 'lib/housing_characteristics', 'ro'),
(weather_dir, 'weather', 'ro')
]
if os.path.exists(os.path.join(buildstock_dir, 'resources', 'hpxml-measures')):
bind_mounts += [(os.path.join(buildstock_dir, 'resources', 'hpxml-measures'),
'resources/hpxml-measures', 'ro')]
docker_volume_mounts = dict([(key, {'bind': f'/var/simdata/openstudio/{bind}', 'mode': mode}) for key, bind, mode in bind_mounts]) # noqa E501
for bind in bind_mounts:
dir_to_make = os.path.join(sim_dir, *bind[1].split('/'))
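For the local-Docker path, the diff appends an extra `(host_dir, bind, mode)` tuple and the existing comprehension turns it into the mapping docker-py expects for its `volumes` argument. A sketch with illustrative host paths:

```python
# Host paths are stand-ins; the bind targets mirror the diff.
bind_mounts = [
    ('/host/weather', 'weather', 'ro'),
    ('/host/buildstock/resources/hpxml-measures', 'resources/hpxml-measures', 'ro'),
]
# Same shape the diff builds: host path -> {'bind': container path, 'mode': ...}
docker_volume_mounts = {
    key: {'bind': f'/var/simdata/openstudio/{bind}', 'mode': mode}
    for key, bind, mode in bind_mounts
}
```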
53 changes: 41 additions & 12 deletions buildstockbatch/postprocessing.py
@@ -41,14 +41,18 @@ def read_data_point_out_json(fs, reporting_measures, filename):
with fs.open(filename, 'r') as f:
d = json.load(f)
except (FileNotFoundError, json.JSONDecodeError):
return None
else:
if 'SimulationOutputReport' not in d:
d['SimulationOutputReport'] = {'applicable': False}
for reporting_measure in reporting_measures:
if reporting_measure not in d:
d[reporting_measure] = {'applicable': False}
return d
try:
with fs.open(filename.replace('data_point_out', 'results'), 'r') as f:
d = json.load(f)
except (FileNotFoundError, json.JSONDecodeError):
return None

if 'SimulationOutputReport' not in d:
d['SimulationOutputReport'] = {'applicable': False}
for reporting_measure in reporting_measures + ['UpgradeCosts']:
if reporting_measure not in d:
d[reporting_measure] = {'applicable': False}
return d
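The hunk above adds a second attempt: if `data_point_out.json` is missing or malformed, try `results.json`, then tag any absent measure sections — now including `UpgradeCosts` — as not applicable. A sketch of that fallback, with a plain dict standing in for the filesystem abstraction:

```python
def load_sim_json(files, filename, reporting_measures):
    """Sketch of the diff's fallback. `files` is a dict of filename ->
    parsed JSON, standing in for fs.open + json.load."""
    d = files.get(filename)
    if d is None:
        # Fall back to the new results.json filename.
        d = files.get(filename.replace('data_point_out', 'results'))
    if d is None:
        return None
    if 'SimulationOutputReport' not in d:
        d['SimulationOutputReport'] = {'applicable': False}
    for measure in reporting_measures + ['UpgradeCosts']:
        if measure not in d:
            d[measure] = {'applicable': False}
    return d
```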


def to_camelcase(x):
@@ -82,7 +86,7 @@ def flatten_datapoint_json(reporting_measures, d):
new_d[f'{col2}.{k}'] = v

# additional reporting measures
for col in reporting_measures:
for col in reporting_measures + ['UpgradeCosts']:
for k, v in d.get(col, {}).items():
new_d[f'{col}.{k}'] = v

@@ -107,12 +111,30 @@ def read_out_osw(fs, filename):
]
for key in keys_to_copy:
out_d[key] = d.get(key, None)
for step in d.get('steps', []):
if step['measure_dir_name'] == 'BuildExistingModel':
out_d['building_id'] = step['arguments']['building_id']
return out_d
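The small loop added to `read_out_osw` recovers the building id from the `BuildExistingModel` step's arguments. Isolated as a sketch:

```python
def building_id_from_out_osw(d):
    """Mirror of the diff's out.osw scan: return building_id from the
    BuildExistingModel step, or None if that step is absent."""
    for step in d.get('steps', []):
        if step['measure_dir_name'] == 'BuildExistingModel':
            return step['arguments']['building_id']
    return None
```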


def read_job_files(fs, started, finished):
jobs = {'completed_status': 'Fail'}
try:
with fs.open(started, 'r') as f:
started_at = re.search(r'Started Workflow (.*\s.*?)\s', f.readline()).group(1)
started_at = started_at.replace('-', '').replace(':', '').replace(' ', 'T') + 'Z'
jobs['started_at'] = started_at
except:
return None
try:
with fs.open(finished, 'r') as f:
completed_at = re.search(r'Finished Workflow (.*\s.*?)\s', f.readline()).group(1)
completed_at = completed_at.replace('-', '').replace(':', '').replace(' ', 'T') + 'Z'
jobs['completed_at'] = completed_at
except:
return None
else:
jobs['completed_status'] = 'Success'
return jobs
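`read_job_files` recovers `started_at`/`completed_at` from the job files when `out.osw` is unavailable (the fast-run case). The timestamp normalization can be tested in isolation; the input line format below is an assumption inferred from the regex in the diff, not confirmed against actual job files:

```python
import re

def parse_job_timestamp(line, verb):
    """Extract a timestamp from a '{verb} Workflow YYYY-MM-DD HH:MM:SS ...'
    line (assumed format) and normalize it to compact ISO-8601 with a Z
    suffix, as the diff does."""
    m = re.search(rf'{verb} Workflow (.*\s.*?)\s', line)
    ts = m.group(1)
    return ts.replace('-', '').replace(':', '').replace(' ', 'T') + 'Z'
```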


def read_simulation_outputs(fs, reporting_measures, sim_dir, upgrade_id, building_id):
"""Read the simulation outputs and return as a dict

@@ -139,6 +161,9 @@ def read_simulation_outputs(fs, reporting_measures, sim_dir, upgrade_id, buildin
out_osw = read_out_osw(fs, f'{sim_dir}/out.osw')
if out_osw:
dpout.update(out_osw)
else: # for when run_options: fast=true
jobs = read_job_files(fs, f'{sim_dir}/run/started.job', f'{sim_dir}/run/finished.job')
dpout.update(jobs)
dpout['upgrade'] = upgrade_id
dpout['building_id'] = building_id
return dpout
@@ -194,6 +219,10 @@ def clean_up_results_df(df, cfg, keep_upgrade_id=False):
col.startswith(to_camelcase(reporting_measure))])
sorted_cols += reporting_measure_cols

upgrade_costs_cols = sorted([col for col in results_df.columns if
col.startswith(to_camelcase('UpgradeCosts'))])
sorted_cols += upgrade_costs_cols

results_df = results_df.reindex(columns=sorted_cols, copy=False)

# for col in results_df.columns:
2 changes: 1 addition & 1 deletion buildstockbatch/test/test_postprocessing.py
@@ -33,7 +33,7 @@ def test_report_additional_results_csv_columns(basic_residential_project_file):
tarf.extractall(sim_out_dir)

dpouts2 = []
for filename in sim_out_dir.rglob('data_point_out.json'):
for filename in sim_out_dir.rglob('results.json'):
with filename.open('rt', encoding='utf-8') as f:
dpout = json.load(f)
dpout['ReportingMeasure1'] = {'column_1': 1, 'column_2': 2}
1 change: 1 addition & 0 deletions buildstockbatch/workflow_generator/__init__.py
@@ -2,3 +2,4 @@

from .residential import ResidentialDefaultWorkflowGenerator # noqa F041
from .commercial import CommercialDefaultWorkflowGenerator # noqa F041
from .residential_hpxml import ResidentialHpxmlWorkflowGenerator # noqa F041
161 changes: 161 additions & 0 deletions buildstockbatch/workflow_generator/residential_hpxml.py
@@ -0,0 +1,161 @@
# -*- coding: utf-8 -*-

"""
buildstockbatch.workflow_generator.residential_hpxml
~~~~~~~~~~~~~~~
This object contains the residential classes for generating OSW files from individual samples

:author: Joe Robertson
:copyright: (c) 2021 by The Alliance for Sustainable Energy
:license: BSD-3
"""

import datetime as dt
import json
import logging
import re
import yamale

from .base import WorkflowGeneratorBase

logger = logging.getLogger(__name__)


class ResidentialHpxmlWorkflowGenerator(WorkflowGeneratorBase):

@classmethod
def validate(cls, cfg):
"""Validate arguments

:param cfg: project configuration
:type cfg: dict
"""
schema_yml = """
build_existing_model: map(required=False)
simulation_output_report: map(required=False)
---
measure-spec:
measure_dir_name: str(required=True)
arguments: map(required=False)
"""
workflow_generator_args = cfg['workflow_generator']['args']
schema_yml = re.sub(r'^ {8}', '', schema_yml, flags=re.MULTILINE)
schema = yamale.make_schema(content=schema_yml, parser='ruamel')
data = yamale.make_data(content=json.dumps(workflow_generator_args), parser='ruamel')
yamale.validate(schema, data, strict=True)
return True
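One detail in `validate` worth noting: the schema is embedded as an indented triple-quoted string, and the `re.sub(r'^ {8}', ...)` call strips exactly eight leading spaces from each line so yamale receives flush-left YAML. A sketch of just that dedent step (stdlib `textwrap.dedent` is the usual alternative for uniform indentation):

```python
import re

# Eight-space indentation mimics the class-body nesting in the diff.
schema_yml = """
        build_existing_model: map(required=False)
        simulation_output_report: map(required=False)
        """
# Strip exactly eight leading spaces per line, as validate() does.
dedented = re.sub(r'^ {8}', '', schema_yml, flags=re.MULTILINE)
```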

def create_osw(self, sim_id, building_id, upgrade_idx):
"""
Generate and return the osw as a python dict

:param sim_id: simulation id, looks like 'bldg0000001up01'
:param building_id: integer building id to use from the sampled buildstock.csv
:param upgrade_idx: integer index of the upgrade scenario to apply, None if baseline
"""
# Default argument values
workflow_args = {
'build_existing_model': {},
'simulation_output_report': {},
}
workflow_args.update(self.cfg['workflow_generator'].get('args', {}))

logger.debug('Generating OSW, sim_id={}'.format(sim_id))

sim_ctl_run_prd_args = {
'simulation_control_run_period_begin_month': 1,
'simulation_control_run_period_begin_day_of_month': 1,
'simulation_control_run_period_end_month': 12,
'simulation_control_run_period_end_day_of_month': 31,
'simulation_control_run_period_calendar_year': 2007,
}

bld_exist_model_args = {
'building_id': building_id,
'workflow_json': 'measure-info.json',
'sample_weight': self.n_datapoints / self.cfg['baseline']['n_buildings_represented'],
}
bld_exist_model_args.update(sim_ctl_run_prd_args)
bld_exist_model_args.update(workflow_args['build_existing_model'])

sim_out_rep_args = {
'timeseries_frequency': True,
'include_timeseries_fuel_consumptions': False,
'include_timeseries_end_use_consumptions': False,
'include_timeseries_hot_water_uses': False,
'include_timeseries_total_loads': False,
'include_timeseries_component_loads': False,
'include_timeseries_unmet_loads': False,
'include_timeseries_zone_temperatures': False,
'include_timeseries_airflows': False,
'include_timeseries_weather': False,
}
sim_out_rep_args.update(workflow_args['simulation_output_report'])

osw = {
'id': sim_id,
'steps': [
{
'measure_dir_name': 'BuildExistingModel',
'arguments': bld_exist_model_args
}
],
'created_at': dt.datetime.now().isoformat(),
'measure_paths': [
'measures',
'resources/hpxml-measures'
],
'run_options': {
'fast': True,
'skip_expand_objects': True,
'skip_energyplus_preprocess': True
}
}

osw['steps'].extend([
{
'measure_dir_name': 'SimulationOutputReport',
'arguments': workflow_args['simulation_output_report']
},
{
'measure_dir_name': 'UpgradeCosts',
'arguments': {}
},
{
'measure_dir_name': 'ServerDirectoryCleanup',
'arguments': {}
}
])

if upgrade_idx is not None:
measure_d = self.cfg['upgrades'][upgrade_idx]
apply_upgrade_measure = {
'measure_dir_name': 'ApplyUpgrade',
'arguments': {
'run_measure': 1
}
}
if 'upgrade_name' in measure_d:
apply_upgrade_measure['arguments']['upgrade_name'] = measure_d['upgrade_name']
for opt_num, option in enumerate(measure_d['options'], 1):
apply_upgrade_measure['arguments']['option_{}'.format(opt_num)] = option['option']
if 'lifetime' in option:
apply_upgrade_measure['arguments']['option_{}_lifetime'.format(opt_num)] = option['lifetime']
if 'apply_logic' in option:
apply_upgrade_measure['arguments']['option_{}_apply_logic'.format(opt_num)] = \
self.make_apply_logic_arg(option['apply_logic'])
for cost_num, cost in enumerate(option.get('costs', []), 1):
for arg in ('value', 'multiplier'):
if arg not in cost:
continue
apply_upgrade_measure['arguments']['option_{}_cost_{}_{}'.format(opt_num, cost_num, arg)] = \
cost[arg]
if 'package_apply_logic' in measure_d:
apply_upgrade_measure['arguments']['package_apply_logic'] = \
self.make_apply_logic_arg(measure_d['package_apply_logic'])

build_existing_model_idx = \
[x['measure_dir_name'] == 'BuildExistingModel' for x in osw['steps']].index(True)
osw['steps'].insert(build_existing_model_idx + 1, apply_upgrade_measure)

return osw
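The `ApplyUpgrade` section of `create_osw` flattens a nested upgrade spec into numbered measure arguments: one `option_N` per option, plus `option_N_lifetime`, `option_N_apply_logic`, and `option_N_cost_M_{value,multiplier}` entries. A sketch of that flattening (omitting the apply-logic serialization, which lives in `make_apply_logic_arg`):

```python
def flatten_upgrade(measure_d):
    """Sketch of create_osw's ApplyUpgrade argument flattening for a single
    upgrade dict from the project yml."""
    args = {'run_measure': 1}
    if 'upgrade_name' in measure_d:
        args['upgrade_name'] = measure_d['upgrade_name']
    for opt_num, option in enumerate(measure_d['options'], 1):
        args[f'option_{opt_num}'] = option['option']
        if 'lifetime' in option:
            args[f'option_{opt_num}_lifetime'] = option['lifetime']
        for cost_num, cost in enumerate(option.get('costs', []), 1):
            for arg in ('value', 'multiplier'):
                if arg in cost:
                    args[f'option_{opt_num}_cost_{cost_num}_{arg}'] = cost[arg]
    return args
```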