
[IMP] tests: make post_install the default #143757

Open

seb-odoo wants to merge 2 commits into master from master-post-install--seb
Conversation

seb-odoo
Contributor

@seb-odoo seb-odoo commented Nov 27, 2023

Performance

post_install tests are parallelized on runbot, which reduces build times. With this PR, the incompressible at_install build time drops to 13 min (down from 36 min) for community, and to 27 min (down from 61 min) for enterprise.

Coverage

Post-install tests cover the full application (with all overrides installed) rather than a particular subset of the code.

Tests of subsets of the code are desirable too, but those are already covered by the single-app builds on runbot.

Maintenance

Post-install tests are easier to maintain and to run, since running them does not require dropping the database and reinstalling it.
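For context, this is roughly what the explicit opt-in looks like today, and what becomes the default with this PR. A minimal sketch; the class name, module and test body are illustrative, not taken from this changeset:

```python
from odoo.tests.common import TransactionCase, tagged

# Before this PR, a test class had to opt into post-install explicitly;
# with this change, untagged test classes run post-install by default.
@tagged('post_install', '-at_install')
class TestInvoiceFlow(TransactionCase):
    def test_post_and_pay(self):
        # Runs once all modules of the build are installed,
        # with every override in place.
        ...
```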

https://github.com/odoo/enterprise/pull/51557
odoo/design-themes#764
odoo/documentation#6670

@seb-odoo seb-odoo self-assigned this Nov 27, 2023
@robodoo
Contributor

robodoo commented Nov 27, 2023

@C3POdoo C3POdoo added the RD research & development, internal work label Nov 27, 2023
@seb-odoo seb-odoo marked this pull request as ready for review November 27, 2023 13:05
@d-fence
Contributor

d-fence commented Nov 27, 2023

check-style is red because pylint was killed by the OOM killer: checking 700 files needs too much memory.
I propose to simply override check-style, since the ruff step worked and is green.

@seb-odoo seb-odoo force-pushed the master-post-install--seb branch 2 times, most recently from 7651341 to 408edd7 Compare November 27, 2023 13:11
@C3POdoo C3POdoo requested review from a team, dbkosky, rhe-odoo and xmo-odoo and removed request for a team November 27, 2023 13:11
@Xavier-Do
Contributor

Xavier-Do commented Nov 28, 2023

It seems plausible but particularly high; I guess we should try to track which tests in particular the time is lost on (if any), and where in those tests.

Yes, we may also enable detailed stats to spot the classes whose duration increased the most, find the elephant in the room, and maybe profile that class. We may find interesting stuff and maybe improve overall post-install performance, which would be great :)

Anyway, this task is a must-have; we have wanted to do it for a while in any case, so thanks for your work!

Member

@rco-odoo rco-odoo left a comment


I disagree with this proposal.

  • Post-install tests are not unit tests: they test the features of a module with plenty of other modules installed, where some of those modules may modify said features. They are integration tests, not unit tests. The default should be unit testing, not integration testing.
  • Claiming that they are easier to maintain is not correct. It's only true if you depend on nothing and nothing else depends on you; in Odoo we rarely write isolated modules.
  • The main argument is the runtime on runbot. IMHO the solution is not changing the test framework design, but improving the CI tool. We can parallelize "at install" tests by installing databases where we only test a subset of the modules. What's hard is dividing the modules into subsets such that each can be run in a small amount of time.

@seb-odoo
Contributor Author

seb-odoo commented Nov 28, 2023

For the record, below is my point of view about your concerns. I feel like there is a strong difference between the needs of the framework and the needs of the business code.

I would suggest planning a meeting with the key people on the subject to decide what we want to do.

  1. Coverage is an advantage. We want to know, and to highlight, when a module modifies the framework or another module in an unforeseen way. We are writing an ERP; integration is at the core of what we do, and unit tests have little business value.
    I can see how unit testing can still be useful for some specific framework tests, and at_install is still possible for them. The huge majority of tests are business tests, and they benefit more from post_install.
    The feedback loop is longer, but single-app builds also ensure the simpler cases work as expected with post_install. The inverse, running at_install tests with a full setup, is not possible.

  2. Post-install tests are harder to write (once), since they must be compatible with different setups, but that then makes them much easier to run (for everyone, afterwards). They are easier to maintain because once they are written and runbot passes, we know they cover all cases: we can run them in almost any setup, and they should work.
    When a test needs to be debugged or modified, it takes seconds to run it, and it can be run over and over (see the command sketch after this list). at_install tests require a very specific setup, and a module installation again and again, every time we need to run them. It makes conceptually no sense to reinstall a module just to run a test when nothing but the code of that test has changed. "at_install" looks like a hack because we're not able to control what is in the registry more precisely in a specific test. If that is something we want, can't we improve that part, so that we could better define the context(s) of a test (suite), rather than requiring an installation?

  3. The CI tool and the test framework are closely related; one should help the other. I have no idea how easy it is to parallelize at_install, but it looks like what single-app builds already do, and it appears to be very slow to install those databases and run the tests individually (judging by their duration; I don't know if that can be optimized). Parallelizing post_install is not hard, as we're not trying to find the perfect solution, just a good-enough compromise. If we have enough CPU or a good enough queue it's even easier: we could just schedule the tests of each module or test class individually, so we wouldn't even need to decide on a split. Again, just an idea, as I'm not familiar enough with the CI architecture.
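As an illustration of the "easy to replay" point in item 2 (a sketch with placeholder database, module and test names, not a command from this PR), one way to replay a single post_install test is to update the module under test on an existing database rather than reinstalling from scratch:

```sh
# "mydb" already has the modules installed; only "sale" is updated before
# the matching post_install test is replayed (all names are placeholders).
./odoo-bin -d mydb -u sale --test-enable \
    --test-tags '/sale:TestSaleOrder.test_confirm' \
    --stop-after-init
```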

I hope this is good food for thought, and I'm not attached to a particular solution, so if you think we can do better I'm all for it! I just want tests that are easy to run locally, a CI that takes as little time as possible, and tests that are robust enough to detect functional issues (no matter where they are introduced). I feel we can do better than we do today, and for my needs this PR goes in that direction.

@Xavier-Do
Contributor

@rco-odoo
We can talk about that later, as proposed by seb, but I have a few quick key points.

The at_install behavior is broken by construction: it only works properly for base.

If you add a module that comes first in alphabetical order at the same dependency depth, all your at_install tests will now run with the new module present on runbot. This is why we have the single-module nightly builds: they test with only the module (and auto-installed modules) present, which is the closest to what you call a real unit test.

Post_install tests are easier to maintain because they are easy to replay; some at_install tests are very hard to debug because they fail mid-install of the at_install build, which means you need to install half the modules to be in the same scenario. For post_install, if the test does not fail with the module alone, you can just download a dump of the database to reproduce the same scenario.

After this change, we have two simple cases: only the module (nightly), or all the modules (merge). The case "all the modules at the same depth that come before me in alphabetical order" disappears.

The main argument is the runtime on runbot. IMHO the solution is not changing the test framework design, but improving the CI tool. We can parallelize "at install" tests by installing databases where we only test a subset of the modules. What's hard is dividing the modules into subsets such that each can be run in a small amount of time.

In that case you get the overhead of the install for each at_install split. The install takes ~18 minutes and the at_install build ~60 minutes (it can vary; just picking one example), so ~42 minutes of tests.
To run an at_install test as if it were run in one go, you need to install all the modules and split the tests that are run. To split the 60-minute build in two, you get 2 builds of 18+21, so 39 minutes each. And it hides possible interactions between modules (not properly cleaned-up patches, modified environments or global structures, ...). Those bugs are easier to debug with post_install tests since we can run them faster, without the install first.
The more the install time (and the at_install test time) rises, the more you need to split to keep builds quick enough, and the more overhead you get. That is not the case for post_install tests: their overhead is the restore, a few seconds.

Most of the time when adding a test, we just don't realize that it could be post_install. That is usually better since there is more coverage in the presence of overrides. And if you really need an at_install test, it can be tagged accordingly.
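For example (a minimal sketch; the class name and test are hypothetical), a test that really does need to run at module installation time would keep the old behaviour by tagging it explicitly:

```python
from odoo.tests.common import TransactionCase, tagged

# Hypothetical framework-level check that must run while only this module
# and its dependencies are installed, before later overrides are loaded.
@tagged('at_install', '-post_install')
class TestRegistrySetup(TransactionCase):
    def test_model_setup(self):
        ...
```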

@Polymorphe57 Polymorphe57 removed their request for review November 29, 2023 16:43
@randall-vx
Contributor

@moylop260 FYI

@seb-odoo seb-odoo force-pushed the master-post-install--seb branch 2 times, most recently from 320d96a to bde7056 Compare December 6, 2023 10:35
@seb-odoo seb-odoo force-pushed the master-post-install--seb branch 3 times, most recently from f7d79d5 to 37fd66a Compare December 28, 2023 12:46
@seb-odoo seb-odoo force-pushed the master-post-install--seb branch 4 times, most recently from 36198c5 to 56994cc Compare January 29, 2024 17:50
Performance
-----------

post_install tests are parallelized on runbot, which reduces build
times. With this PR, the incompressible at_install build time drops
to 13 min (down from 36 min) for community, and to 27 min (down from
61 min) for enterprise.

Coverage
--------
Post-install tests cover the full application (with all overrides
installed) rather than a particular subset of the code.

Tests of subsets of the code are desirable too, but those are already
covered by the single-app builds on runbot.

Maintenance
-----------
Post-install tests are easier to maintain and to run, since running
them does not require dropping the database and reinstalling it.
post_install is now the default; the tags can be removed from tests.
@dbkosky dbkosky removed their request for review February 27, 2024 12:55