
Improve robustness of unit and integration tests. #330

Open
1 of 9 tasks
willgraf opened this issue Apr 13, 2020 · 4 comments
Labels: enhancement, help wanted, tests

Comments


willgraf commented Apr 13, 2020

Is your feature request related to a problem? Please describe.
The project now has good testing pipelines thanks to #281, #320, and #326. However, these unit and integration tests can still be improved.

Describe the solution you'd like

Unit tests (moved from #272):

  • Test Makefiles in /conf/tasks.
  • Validate the output of helmfile build with kubectl.
  • Validate the /conf/addons with kubectl.
  • Test the shell scripts in /scripts.
  • Move the Prometheus rules into their own file and test them.
  • Test that the Sphinx documentation builds.
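A rough sketch of how the helmfile, addons, and Prometheus-rule checks above might run in CI. This is only an illustration: the rules-file path is a placeholder, and while the checklist says `helmfile build`, it is `helmfile template` that renders the Kubernetes manifests `kubectl` can validate. Each tool is guarded so the script degrades to a no-op where the CLIs are absent.

```shell
#!/usr/bin/env bash
# Sketch of the unit-test checks above; skips if the CLIs are missing.
set -u

result=passed
for tool in helmfile kubectl promtool; do
  if ! command -v "$tool" >/dev/null 2>&1; then
    echo "skipping: $tool not installed"
    result=skipped
  fi
done

if [ "$result" = "passed" ]; then
  # Render the helmfile output and validate the manifests client-side.
  helmfile template | kubectl apply --dry-run=client -f -

  # Validate the standalone addon manifests the same way.
  kubectl apply --dry-run=client -f conf/addons/

  # Lint the Prometheus rules (file path is a placeholder).
  promtool check rules conf/prometheus-rules.yml
fi
echo "$result"
```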

Integration Tests (moved from #281):

  • Test the state of the deployed cluster (what charts are deployed, what HPAs are up, etc.)
  • Find the public IP address and make sure it's reachable.
  • Potentially test a tf-serving deployment by sending a pre-defined image through a model.
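The "find the public IP and make sure it's reachable" step might look like the following sketch. The service name and namespace are assumptions (substitute whatever ingress service the cluster actually exposes), and the check is skipped when there is no reachable cluster.

```shell
#!/usr/bin/env bash
# Sketch of the public-IP reachability check; the service name
# "istio-ingressgateway" and its namespace are placeholders.
set -u

reachable=skipped
if command -v kubectl >/dev/null 2>&1 && kubectl cluster-info >/dev/null 2>&1; then
  # Read the LoadBalancer IP from the ingress service.
  ip=$(kubectl get svc istio-ingressgateway -n istio-system \
        -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  # Probe it with a short timeout; any HTTP success counts as reachable.
  if curl --fail --max-time 10 "http://${ip}/" >/dev/null 2>&1; then
    reachable=yes
  else
    reachable=no
  fi
fi
echo "reachable: ${reachable}"
```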
willgraf added the enhancement label on Apr 13, 2020
This was referenced on Apr 13, 2020

willgraf commented May 22, 2020

Related to this: sometimes the prometheus-operator takes a long time to reach a Running status and fails the integration tests. We could reduce the default timeout for helm deployments (the 10-minute default is quite long), use travis_wait in the test, or a combination of both.
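Combining the two mitigations might look roughly like this .travis.yml fragment. The release and chart names are illustrative, and `--timeout 5m0s` assumes Helm 3's duration syntax:

```yaml
# .travis.yml fragment (a sketch; release/chart names are assumptions).
# travis_wait keeps Travis from killing a quiet job, while a shorter
# --timeout makes helm itself give up sooner than its 10-minute default.
script:
  - travis_wait 15 helm upgrade --install prometheus-operator
      stable/prometheus-operator --wait --timeout 5m0s
```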


UPDATE: This issue has pretty much gone away since migrating to GitHub Actions.

willgraf added the tests label on Oct 21, 2020

willgraf commented Dec 10, 2020

Test the state of the deployed cluster (what charts are deployed, what HPAs are up, etc.)

Several PRs that will be in v1.4.0 (#395, #396, #397, #400, #405) migrate YAML configuration files into the helm chart/helmfile (e.g. HPAs, cert-manager configurations, configmaps). With these PRs, there are no actively used /conf/addons or /conf/patches files.

This should shift some of the testing burden from testing a cluster "state" to testing the charts themselves.

willgraf commented

#402 introduces docs testing.

willgraf commented

Now that we are using GitHub Actions, it seems like there are a few tools available that may assist in testing helm charts (and maybe helmfiles).

  • chart-testing: A tool for linting/testing charts against a remote repository.
  • kind-action: Set up a Kubernetes cluster (via kind) as an Action.
  • chart-testing-action: Example running chart-testing on a kind cluster in a GitHub Action.
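Wiring those tools together might look like the workflow sketch below, modeled on the chart-testing-action example. The workflow file name and the assumption that charts live where `ct` discovers them by default are both hypothetical:

```yaml
# .github/workflows/lint-test.yaml (a sketch; chart layout is assumed).
name: Lint and Test Charts
on: pull_request
jobs:
  lint-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0   # ct needs full history to detect changed charts
      - uses: helm/chart-testing-action@v2
      - name: Lint charts
        run: ct lint --all
      - uses: helm/kind-action@v1
      - name: Install charts on a kind cluster
        run: ct install --all
```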

willgraf added the help wanted label on Jun 7, 2021