Random tests #256

Open
davidbrochart opened this issue Mar 20, 2024 · 1 comment
Labels
enhancement New feature or request

Comments

@davidbrochart
Collaborator

Problem

We currently have very few tests, and these tests are very low-level. They are definitely not enough to catch corner cases that happen infrequently. Bugs are being reported, but they happen under heavy load and are difficult to reproduce. Testing RTC manually, e.g. in JupyterLab meetings on a Binder instance, is not ideal, or at least not systematic.

Proposed Solution

One way to stress-test RTC at a high level, automatically, could be through random tests. They are a good solution when it comes to exploring complex states that would otherwise be very difficult to imagine or reproduce. One might say that they are not very reproducible by design, but another way to look at them is that they explore more of the state space at each run, which I see as a way to increase our confidence in the robustness of RTC over time. When a test fails, it could write the YStore as a CI artifact, which could then be used to reproduce the bug locally.
Tests could use Python web clients. This has the drawback of not testing JupyterLab's frontend (JavaScript) code, but most of the complexity lies in the backend anyway (in out-of-band change detection, for instance). Writing tests in pure Python would be easier and would probably allow us to test more scenarios. What I have in mind is the following:

  • Launch a Jupyter server.
  • Launch many Python clients on the same (notebook) document.
  • Create a "reference" shared model for the document.
  • Make clients generate random changes, and apply them to the reference.
  • Have checkpoints where shared models in the server and in clients are compared to the reference.
  • Have checkpoints where the document on disk is compared to the reference.
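The loop above could be sketched roughly as follows. This is only a toy: a grow-only set stands in for the shared model (its merge is commutative and idempotent, like CRDT updates), and all names are illustrative. The real harness would drive actual Jupyter clients against a Y document (e.g. via pycrdt) instead.

```python
import random


class GSet:
    """Toy stand-in for the shared model: a grow-only set CRDT."""

    def __init__(self):
        self.items = set()

    def apply(self, change):
        # Merge is a set union: order-independent and idempotent.
        self.items |= change


def random_change(rng, client_id, step):
    # Each client produces an element uniquely tagged by (client, step).
    return {(client_id, step, rng.randrange(100))}


def run_session(num_clients=5, num_steps=20, seed=None):
    rng = random.Random(seed)
    reference = GSet()
    clients = [GSet() for _ in range(num_clients)]
    pending = []  # changes not yet delivered to every client

    for step in range(num_steps):
        for cid, client in enumerate(clients):
            change = random_change(rng, cid, step)
            client.apply(change)     # local edit
            reference.apply(change)  # same edit applied to the reference
            pending.append(change)

        # Deliver pending changes to all clients in a random order,
        # simulating out-of-order propagation over the network.
        rng.shuffle(pending)
        for change in pending:
            for client in clients:
                client.apply(change)
        pending.clear()

        # Checkpoint: every client must now match the reference.
        for client in clients:
            assert client.items == reference.items

    return reference


state = run_session(seed=42)
print(len(state.items))  # 100 unique elements: 5 clients x 20 steps
```

A real run would replace `random_change` with random notebook edits (cell insertion/deletion, text edits) and add a checkpoint comparing the document on disk to the reference as well.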

Thanks to the guarantees provided by CRDTs, the fact that changes are not applied to the reference in the same order as to the shared models in the server and in the clients would not matter. The checkpoints would allow some time for conflicts to resolve before comparing states.
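The order-independence this relies on can be illustrated in a few lines, using plain set union as a stand-in for CRDT update merging (the names are purely illustrative):

```python
import random

# The same batch of updates, delivered in two different orders.
updates = [{("client-a", i)} for i in range(5)] + [{("client-b", i)} for i in range(5)]
order1 = list(updates)
order2 = list(updates)
random.Random(0).shuffle(order2)

state1, state2 = set(), set()
for u in order1:
    state1 |= u
for u in order2:
    state2 |= u

# Delivery order does not affect the final state, so the reference
# can apply edits as they are generated rather than as they arrive.
assert state1 == state2
print("converged:", state1 == state2)
```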

Additional context

See jupyterlab/jupyterlab#14532.

@davidbrochart davidbrochart added the enhancement New feature or request label Mar 20, 2024
@davidbrochart
Collaborator Author

See #257 for what it could look like.
