
[DRAFT] chore: benchmark tests #840

Draft · wants to merge 15 commits into base: experimental_v3
Conversation

daniel-sanche (Contributor)

WIP, blocked on test proxy: #836

This PR adds simple benchmark tests. The main goal for now is performance profiling: detecting bottlenecks during development. In the future, we can extend this system to gather more objective performance metrics and to run cross-language benchmarks that test functionality against existing clients.


Implementation

Benchmarks are implemented in tests/benchmarks/benchmarks.py, which defines the client and server logic exercised by each benchmark.
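
As a rough illustration, a benchmark definition might pair the responses the test server should send with the client-side operation being timed. This is a minimal sketch only; the names Benchmark, server_responses, and client_operation are assumptions for this example, not the actual API in the PR:

```python
import time


class Benchmark:
    """One benchmark case: the responses the test server should send, plus the
    client-side operation whose runtime we want to measure."""

    def __init__(self, name, server_responses, client_operation):
        self.name = name
        # payloads the mock/proxy server replies with during the benchmark
        self.server_responses = server_responses
        # zero-argument callable that exercises the client under test
        self.client_operation = client_operation

    def run(self):
        """Time a single execution of the client operation, in seconds."""
        start = time.perf_counter()
        self.client_operation()
        return time.perf_counter() - start
```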

tests/benchmarks/test_benchmarks.py is a pytest file that drives the benchmarks against both the existing client and the new v3 client, and reports timing results.
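
Roughly, the pytest driver could look like the sketch below. The ALL_BENCHMARKS list and the 10-second target are placeholders; the real file defines its own parametrization and thresholds:

```python
import pytest

from benchmarks import ALL_BENCHMARKS  # hypothetical list of Benchmark objects


@pytest.mark.parametrize("benchmark", ALL_BENCHMARKS, ids=lambda b: b.name)
def test_benchmark(benchmark):
    """Run each benchmark, report its wall-clock time, and fail slow cases so
    that --profile attaches a trace to them."""
    elapsed = benchmark.run()
    print(f"{benchmark.name}: {elapsed:.3f}s")
    assert elapsed < 10.0, f"{benchmark.name} took {elapsed:.2f}s"
```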

Running python -m pytest test_benchmarks.py --profile generates a profile trace for each failed benchmark, which can be read through the CLI or visualized in snakeviz to find bottlenecks in the code. We can use this to guide optimization.
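
For reading a trace from the command line, the standard-library pstats module works on cProfile output. Assuming the --profile option behaves like the pytest-profiling plugin and writes .prof files under a prof/ directory (an assumption, not confirmed by this PR), a quick look at the hot spots could be:

```python
import pstats

# Path is an assumption based on pytest-profiling's default output location.
stats = pstats.Stats("prof/combined.prof")
# Show the 20 call paths with the highest cumulative time.
stats.sort_stats("cumulative").print_stats(20)
```

Pointing snakeviz at the same .prof file gives the interactive visualization mentioned above.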

product-auto-label bot added the labels size: xl (Pull request size is extra large.) and api: bigtable (Issues related to the googleapis/python-bigtable API.) on Jul 20, 2023
daniel-sanche changed the base branch from main to experimental_v3 on Aug 30, 2023