This repository has been archived by the owner on Mar 21, 2024. It is now read-only.

Fallback run for tests failed and it makes tests fail locally #665

Open
fepegar opened this issue Feb 18, 2022 · 0 comments
Labels
bug Something isn't working


fepegar commented Feb 18, 2022

The run ID refs_pull_633_merge_1642019743_f212b068 failed. This causes multiple tests to fail, e.g.,

```python
def test_registered_model_file_structure_and_instantiate(test_output_dirs: OutputFolderForTests) -> None:
    """
    Downloads the model that was built in the most recent run, and checks if its file structure is as expected.
    """
```

This test is skipped in CI, so the failure goes unnoticed there:

```python
@pytest.mark.after_training_single_run
@pytest.mark.after_training_ensemble_run
@pytest.mark.after_training_glaucoma_cv_run
@pytest.mark.after_training_hello_container
def test_registered_model_file_structure_and_instantiate(test_output_dirs: OutputFolderForTests) -> None:
```
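For context, one plausible way such `after_training_*` tests end up skipped in CI is a collection hook that deselects them when no completed training run is available. This is a hypothetical sketch, not this repository's actual mechanism; the `CI` environment variable and the hook body are assumptions:

```python
# conftest.py (hypothetical sketch): skip after-training tests in CI,
# which is why a broken fallback run ID goes unnoticed there.
import os

import pytest


def pytest_collection_modifyitems(config, items):
    # Assumption: the CI environment sets the `CI` variable.
    if os.environ.get("CI"):
        skip = pytest.mark.skip(reason="needs a completed training run")
        for item in items:
            # Deselect any test carrying the after-training marker.
            if "after_training_single_run" in item.keywords:
                item.add_marker(skip)
```

With a hook like this, the tests only ever execute locally, where the hard-coded fallback run ID is used.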

If the run ID is replaced with that of, e.g., a run I just executed that completed successfully, the test below fails instead, because it expects a different directory tree for the logs:

```python
@pytest.mark.after_training_single_run
def test_check_dataset_mountpoint(test_output_dirs: OutputFolderForTests) -> None:
    """
    Check that the dataset mountpoint has been used correctly. The PR build submits the BasicModel2Epochs with
    dataset mounting, using a fixed mount path that is given in the model.
    """
    run = get_most_recent_run(fallback_run_id_for_local_execution=FALLBACK_SINGLE_RUN)
    files = run.get_file_names()
    # Account for old and new job runtime: log files live in different places
    driver_log_files = ["azureml-logs/70_driver_log.txt", "user_logs/std_log.txt"]
    downloaded = test_output_dirs.root_dir / "driver_log.txt"
    for f in driver_log_files:
        if f in files:
            run.download_file(f, output_file_path=str(downloaded))
            break
    else:
        raise ValueError("The run does not contain any of the driver log files")
    logs = downloaded.read_text()
    expected_mountpoint = BasicModel2Epochs().dataset_mountpoint
    assert f"local_dataset : {expected_mountpoint}" in logs
```
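The `for`/`else` in that test raises only when none of the candidate log files exists in the run, since `break` skips the `else` branch. A minimal, self-contained sketch of the same lookup, with a plain set standing in for `run.get_file_names()` (the file names are the ones from the test):

```python
def find_driver_log(files):
    """Return the first known driver log path present in `files`, or raise
    if neither the old nor the new runtime location is available."""
    driver_log_files = ["azureml-logs/70_driver_log.txt", "user_logs/std_log.txt"]
    for f in driver_log_files:
        if f in files:
            return f
    # This corresponds to the loop's `else` branch in the original test:
    # reached only when the loop completes without a `break`.
    raise ValueError("The run does not contain any of the driver log files")


print(find_driver_log({"user_logs/std_log.txt", "outputs/model.pt"}))
# → user_logs/std_log.txt
```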

The failed run ID makes other tests fail as well, e.g.,

```python
@pytest.mark.after_training_single_run
def test_is_completed_single_run() -> None:
    """
    Test if we can correctly check run status for a single run.
    :return:
    """
    logging_to_stdout()
    workspace = get_default_workspace()
    get_run_and_check(get_most_recent_run_id(), True, workspace)
```
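The common factor is the fallback mechanism: when no fresh run is available locally, the tests fall back to a hard-coded run ID, so a failed fallback run poisons every dependent test. A hypothetical sketch of that resolution logic, assuming the recent run ID is persisted to a file by the PR build (the function and file names here are illustrative, not the repository's actual API):

```python
from pathlib import Path


def resolve_run_id(run_recovery_file: Path, fallback_run_id_for_local_execution: str) -> str:
    """Prefer the run ID written by the most recent PR build; otherwise fall
    back to a hard-coded run ID for local execution. If that fallback run has
    failed, every test that consumes it fails too (the bug reported here)."""
    if run_recovery_file.exists():
        return run_recovery_file.read_text().strip()
    return fallback_run_id_for_local_execution
```

Validating the status of the fallback run once, at resolution time, would surface the problem with a single clear error instead of many unrelated test failures.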

For reference, that run ID was introduced by @ant0nsc in:

AB#5029

fepegar added a commit that referenced this issue Mar 1, 2022
@peterhessey peterhessey self-assigned this May 19, 2022
@peterhessey peterhessey added the bug Something isn't working label May 19, 2022