If CI checks fail on your PR
If your PR build fails, do not despair! Scroll down to the bottom of the PR thread until you see the results of the continuous integration (CI) checks that ran. The failure may be due to one of the following:
- If you see a warning that your PR has a merge conflict, see the merge conflicts section below.
- Otherwise, click on the "Details" link next to any failing build and inspect the logs.
  - If you see an error when installing third-party libraries, then there was probably a network issue. See the network problems section below.
  - If you see from the logs that a test or lint check is failing, see the failing tests and lint checks section below.
If many of the CI checks are failing at once, your PR has probably broken something. To debug and fix this, try running the local development server and checking that the website behaves normally, with no errors in the developer console. The backend server logs and the developer console will often contain errors that help you figure out what is going wrong, and the CI logs should contain useful debugging information as well.
Merge conflicts

You can fix merge conflicts by following the "merge from develop" instructions in step 5 of our instructions for making a pull request. Once you push the merge commit to your feature branch on GitHub, the merge conflict message will disappear.
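If it helps to see the flow concretely, here is a self-contained sketch that replays the merge-from-develop steps in a throwaway repository. The remote name `upstream`, and all branch and commit names below, are illustrative assumptions, not Oppia's actual history:

```shell
# Sketch of the "merge from develop" flow in a throwaway repository.
set -eu
tmp=$(mktemp -d)

# Stand-in for the upstream oppia/oppia repository, with a develop branch.
git init -q -b develop "$tmp/upstream"   # -b requires git >= 2.28
git -C "$tmp/upstream" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "base commit on develop"

# Your local clone, with the remote named "upstream" (an assumed setup).
git clone -q "$tmp/upstream" "$tmp/work"
cd "$tmp/work"
git remote rename origin upstream
git checkout -q -b my-feature-branch
git -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m "feature work"

# Meanwhile, develop moves ahead upstream.
git -C "$tmp/upstream" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "newer develop commit"

# The actual merge-from-develop steps:
git fetch -q upstream
git -c user.email=me@example.com -c user.name=me \
    merge --no-edit -q upstream/develop

# On a real PR you would now push the merge commit to your fork:
#   git push origin my-feature-branch
git log --oneline
```

After the merge commit lands on your feature branch, GitHub re-evaluates the PR and the conflict warning goes away.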
Network problems

If the error has a message like the one below, then it is due to a network error and is unrelated to your PR. Please reach out to a code owner or Core Maintainer to restart your test.
```
Installing Node.js
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/local/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File ".../scripts/install_third_party_libs.py", line 311, in <module>
    main()
  File ".../scripts/install_third_party_libs.py", line 236, in main
    setup.main(args=[])
  File "scripts/setup.py", line 158, in main
    download_and_install_node()
  File "scripts/setup.py", line 120, in download_and_install_node
    outfile_name)
  File "scripts/setup.py", line 81, in download_and_install_package
    python_utils.url_retrieve(url_to_retrieve, filename=filename)
  File "python_utils.py", line 289, in url_retrieve
    return urllib.urlretrieve(source_url, filename=filename)
  File "/usr/local/lib/python2.7/urllib.py", line 98, in urlretrieve
    return opener.retrieve(url, filename, reporthook, data)
  File "/usr/local/lib/python2.7/urllib.py", line 289, in retrieve
    "of %i bytes" % (read, size), result)
urllib.ContentTooShortError: retrieval incomplete: got only 7372553 out of 18638507 bytes

Exited with code exit status 1
```
If you see a different error while installing third-party libraries, please file an issue and mention @automated-qa-reviewers to notify the Automated QA team.
Failing tests and lint checks

If you see that a test or lint check is failing, there are two possibilities: your code could be wrong, or the test/check could be flaky. There is no easy way to tell whether a particular failure is a flake, but here are some guidelines:
- Flakes mostly occur in the end-to-end (E2E) tests. Sometimes we see flakes in the Lighthouse or frontend tests, but the backend tests and linters almost never flake.
- Consider whether your changes could have plausibly caused the failure. For example, if you just updated the README, then there's no way that you could have broken an E2E test. However, be careful to consider that changes in one part of the code can have unintended effects in apparently unrelated code. For example, if you add an E2E test that creates an exploration with the same name as an exploration created by another E2E test, you could break that other E2E test, even if it's testing completely unrelated code.
- Check whether the error message you see matches a known flake. For E2E tests, look for a green message in the log like this:

  ```
  E2E logging server says test flaky: {{true/false}}
  ```

  This message is based on a list of known flakes maintained by the Automated QA team. Note, however, that this list may be incomplete, so a test could be flaking even if this message says `false`.
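To illustrate the kind of matching involved, here is a minimal sketch of checking CI log text against a list of known-flake patterns. The patterns and the helper name are hypothetical examples; the real list is maintained by the Automated QA team, not by this snippet:

```python
import re

# Hypothetical patterns for illustration only -- the real known-flake
# list is maintained by Oppia's Automated QA team.
KNOWN_FLAKE_PATTERNS = [
    r"E2E logging server says test flaky: true",
    r"ContentTooShortError: retrieval incomplete",
]


def looks_like_known_flake(log_text: str) -> bool:
    """Return True if any known-flake pattern appears in the CI log text."""
    return any(
        re.search(pattern, log_text) for pattern in KNOWN_FLAKE_PATTERNS)


print(looks_like_known_flake(
    "E2E logging server says test flaky: true"))        # True
print(looks_like_known_flake(
    "AssertionError: expected 3 explorations, got 2"))  # False
```

Even with this kind of matching, an unmatched failure is not proof that your code is wrong; it just means the failure is not yet on the known-flake list.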
Note that all our CI checks except for the backend tests merge from the upstream `develop` branch before running, so you may need to merge from `develop` locally to reproduce the failure.
If your code is wrong, then you'll need to fix it just as you would respond to reviewer comments. You may also want to review the following documentation to help you debug:
- If a lint check is failing, see Lint Checks.
- If a test is failing, see Tests.
- For general debugging tips, see our debugging guides.
If the test/check is flaky, and you have confirmed that it's not due to your changes, please file a CI Flake report. After doing that, you can restart the test as follows:
1. Click the "Details" link next to the test you want to restart.
2. Click on the "Re-run all jobs" button in the upper-right.
If you don't see the button or cannot click it, you might not have the necessary permissions. In that case, ask one of your reviewers to restart the test for you. When doing this, please provide a link to the CI Flake report you filed.
Following these instructions should result in PRs that are green and ready to merge by the time a reviewer looks at them, thus shortening the review cycle! If you are still unable to resolve the issues yourself, please follow our instructions to get help from other developers.
Have an idea for how to improve the wiki? Please help make our documentation better by following our instructions for contributing to the wiki.