
Fail gracefully when Socrata is down #213

Open
geneorama opened this issue Aug 9, 2023 · 4 comments

Comments

@geneorama
Member

We received notice from CRAN that RSocrata is failing with some regularity, which causes headaches for the CRAN administrators. We believe this happens because Socrata is occasionally briefly down for maintenance.

In order to "fail gracefully", @nicklucius and I discussed checking the site to see if it's up before running the tests.

For now, I plan to use something like attr(curlGetHeaders("https://data.cityofchicago.org/"), "status") to check for a 200 at the start of the tests, and abort testing if that check fails. Error handling will be necessary because curlGetHeaders() itself throws an error when the site is unreachable (see the sketch after the examples below).

Example error for a nonexistent website:

# > attr(curlGetHeaders("https://badurlx1x2x3x4.cityofchicago.org/"), "status")
# Error in curlGetHeaders("https://badurlx1x2x3x4.cityofchicago.org/") : 
#   libcurl error code 6:
#   Could not resolve host: badurlx1x2x3x4.cityofchicago.org

Example error for a timeout:

# > attr(curlGetHeaders("https://dev.socrata.com/docs/endpoints.html"), "status")
# Error in curlGetHeaders("https://dev.socrata.com/docs/endpoints.html") : 
#   libcurl error code 28:
# 	Failed to connect to dev.socrata.com port 443 after 21045 ms: Timed out
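
A minimal sketch of that pre-test check, wrapping curlGetHeaders() in tryCatch() so DNS failures and timeouts are treated the same as a non-200 status (the helper name socrata_is_up is hypothetical):

## Returns TRUE only when the site answers with HTTP 200; any curl error
## (unresolvable host, timeout) is caught and treated as "down".
socrata_is_up <- function(url = "https://data.cityofchicago.org/") {
  status <- tryCatch(
    attr(curlGetHeaders(url), "status"),
    error = function(e) NA_integer_
  )
  isTRUE(status == 200L)
}

The test suite could then abort (or skip everything) when socrata_is_up() returns FALSE.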

I would also like to take this opportunity to at least try splitting the tests (#103), and this should also take care of #131, which is about failing gracefully on write.

I think it would be a mistake to resolve this issue by introducing new error handling into read.socrata and write.socrata. The existing errors are descriptive; for example, if you plug in a bad URL you get a 404. Also, modifying the tests is the least invasive way to make changes. Adding unexpected error handling would be a significant change and could have unintended consequences. For example, I might have a process that updates a dataset nightly, and I would hate for that process to fail without the expected error.

@geneorama
Member Author

Correction: this wouldn't take care of #131.

This would resolve the CRAN tests related to #131, but it wouldn't resolve the failure when writing to Socrata.

geneorama changed the title from "Fail gracefully when Socrata is down - address other unit test issues" to "Fail gracefully when Socrata is down" on Aug 10, 2023
@geneorama
Member Author

RSocrata was removed from CRAN because it has been failing tests randomly and creating overhead for the CRAN team. The CRAN maintainers no longer allow any tolerance for occasional transgressions, and RSocrata will be removed after a single false-positive test failure.

The problem is that over the years we’ve written between 50 and 100 tests to address specific issues with specific datasets. The tests work when everything is working normally, but there are ways that the tests can fail under abnormal circumstances. For example, there could be a brief but unplanned outage, planned API maintenance, a dataset might get removed, or there could be changes in datasets that invalidate our tests.

Those external factors are unexpected, unusual, and generally impossible to predict. It's also hard to make exceptions for the tests: if we said "when this test fails, ignore it", then what's the purpose of the test? And I'm not sure we can reliably catch outages and timeouts, because they aren't reproducible conditions.

Some ideas:

  • Remove all external dependencies from the automated tests. One could argue that the point of CRAN tests is to test package dependencies, but we could still have tests that are required locally to protect against regressions.
  • We could only run tests against the Socrata demo site and work with the vendor to get more examples online.
  • We could test for general API availability and hope for the best (this was my initial approach on this branch).

That's not a comprehensive list of possible approaches, and each has its own considerations.

In my opinion, if we're ever going to have more reliable interactions with the API, we need a test API where we can reproduce the specific conditions we're testing.

The tests use several API endpoints; all of them can be seen here:

https://github.com/Chicago/RSocrata/blob/master/tests/testthat/test-all.R

It’s unsustainable to rely on this disparate collection of random datasets, because you never know when the conditions will change and invalidate the tests.

@geneorama
Member Author

After going back and forth with ChatGPT, I had another idea: take a two-step approach.

First, within read.socrata, add a handler for timeout errors that looks something like this:

read.socrata <- function(url, ...) {
  result <- tryCatch({
    response <- httr::GET(url)
    ## ... plus the rest of the read.socrata code, with the
    ## parsed data frame as the final value of this block
  }, error = function(e) {
    warning(paste("Website non-responsive for URL:", url, "Skipping request."))
    NULL
  })
  return(result)
}

Second, write a wrapper for the tests, perhaps called test.socrata, that skips a test when the result is NULL.
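
A minimal sketch of that wrapper, assuming testthat is the test framework (skip() only takes effect inside a test_that() block):

## Hypothetical test helper: fetches the data and skips the enclosing
## test when read.socrata() returned NULL because the site was unreachable.
test.socrata <- function(url, ...) {
  result <- read.socrata(url, ...)
  if (is.null(result)) {
    testthat::skip(paste("Socrata non-responsive for URL:", url))
  }
  result
}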

This is a bit of a work in progress and wouldn't handle cases where datasets change or other unexpected errors occur, but it would (hopefully) be foolproof for outages.

Also, I noticed that httr2 is out, and it could provide some mechanisms for more robust handling.
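
For instance, a rough sketch of httr2's built-in retry and timeout support (the URL and parameter values are illustrative, not a tested configuration):

library(httr2)

## Retry transient failures up to 3 times and give up after 30 seconds,
## rather than letting a single hiccup fail the whole request.
resp <- request("https://data.cityofchicago.org/") |>
  req_timeout(30) |>
  req_retry(max_tries = 3) |>
  req_perform()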

@geneorama
Copy link
Member Author

For future reference, there may be something useful in the mocking functions documented here: https://r-pkgs.org/testing-advanced.html#mocking (I'm not sure at a glance).
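
If we went that route, a rough sketch with testthat::local_mocked_bindings() (testthat >= 3.2) could look like the following; the internal helper name and the fixture file are assumptions for illustration, not existing RSocrata code:

## Sketch only: assumes the package's HTTP call is isolated in an internal
## helper (here called getResponse) that the test overrides with a canned
## response saved earlier to a fixture file. The URL is a placeholder,
## since the network call never happens.
test_that("read.socrata parses a canned response", {
  local_mocked_bindings(
    getResponse = function(url, ...) readRDS(test_path("fixtures", "response.rds"))
  )
  df <- read.socrata("https://data.cityofchicago.org/")
  expect_s3_class(df, "data.frame")
})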
