
Performance Tests #5

Closed · e-dant opened this issue Oct 14, 2022 · 5 comments
Labels: documentation (Improvements or additions to documentation), enhancement (New feature or request), good first issue (Good for newcomers), help wanted (Extra attention is needed)

Comments

@e-dant (Owner) commented Oct 14, 2022

We need performance benchmarks against the other filesystem watchers. I'm thinking:

  • chokidar
  • fswatch
  • watchman
  • notify-rs

We have performance tests, but they're not comparative.

@e-dant self-assigned this Oct 14, 2022
@e-dant added the enhancement and documentation labels Oct 14, 2022
@e-dant (Owner, Author) commented Jan 17, 2023

Depending on whether some of the other libraries are multithreaded, this might require pinning CPU cores for reliable metrics. I'm not sure how to do that on Windows, but on systems that use POSIX threads there's an affinity option that might work for this (see the sketch below). It's not clear how that will work for Chokidar, though. On Linux distros, I think the taskset program and GRUB's config file (e.g., to isolate cores with the isolcpus kernel parameter) could be used for that.
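
A minimal sketch of the POSIX-threads affinity option, assuming Linux and glibc's pthread_setaffinity_np; the core index is arbitrary:

```cpp
// Pin the calling thread to one core so scheduler migration doesn't
// add jitter to the measurements. Linux/glibc only; core 0 is an
// arbitrary choice.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <pthread.h>
#include <sched.h>
#include <cstdio>

static bool pin_to_core(int core) {
  cpu_set_t set;
  CPU_ZERO(&set);
  CPU_SET(core, &set);
  // Returns 0 on success; fails if the core doesn't exist or isn't
  // permitted by the current affinity mask.
  return pthread_setaffinity_np(pthread_self(), sizeof set, &set) == 0;
}

int main() {
  std::printf("pinned to core 0: %s\n", pin_to_core(0) ? "yes" : "no");
}
```

For the multithreaded watchers, running the whole process under taskset -c 0 <watcher> is probably simpler than pinning each thread individually, and it would work for Node-based Chokidar too.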

@e-dant (Owner, Author) commented Jun 18, 2023

These tests could measure the latency between:

  1. a filesystem event, and
  2. the event being reported by these watchers

The first could be generated by a simple bash script.

The second could store the event and its timestamp, then write them out to a file.

The events could cover a set number of event kinds and path types: create, modify, move, and remove; files and directories.

The events could be created a set number of times at a set interval. Similar things are done in this library's unit tests. A rough sketch of the whole measurement is below.
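
A minimal, Linux-only sketch of that measurement, using raw inotify as a stand-in for the watcher under test; in a real comparison, each watcher (chokidar, fswatch, watchman, notify-rs) would log its own (path, timestamp) pairs, and the latency would come from joining that log against the generator's. The directory name, event count, and interval here are hypothetical:

```cpp
// Create N files at a fixed interval and measure how long it takes
// for each creation to be reported. Linux-only; inotify stands in
// for the watcher under test, and "bench-dir" is a hypothetical path.
#include <sys/inotify.h>
#include <unistd.h>
#include <chrono>
#include <cstdio>
#include <filesystem>
#include <fstream>
#include <string>
#include <thread>

int main() {
  namespace fs = std::filesystem;
  using clk = std::chrono::steady_clock;
  fs::path dir = "bench-dir";
  fs::create_directory(dir);
  int fd = inotify_init();                          // event queue
  inotify_add_watch(fd, dir.c_str(), IN_CREATE);    // creations only
  char buf[4096];
  for (int i = 0; i < 100; ++i) {
    auto t0 = clk::now();
    std::ofstream{dir / ("f" + std::to_string(i))}; // the "create" event
    read(fd, buf, sizeof buf);                      // block until reported
    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
                clk::now() - t0).count();
    std::printf("%d %lld\n", i, (long long)ns);     // event index, latency
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
  }
  close(fd);
}
```

Timing in-process like this sidesteps clock synchronization; with separate generator and watcher processes, both sides would need a shared wall clock, and the join would happen offline.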

@e-dant added the help wanted and good first issue labels Jun 18, 2023
@fabiospampinato commented

It may also be worth measuring how long it takes to watch a big folder, and how much memory that takes.

@e-dant (Owner, Author) commented Jun 19, 2023

> It may also be worth measuring how long it takes to watch a big folder, and how much memory that takes.

Definitely.

I'm curious about different scenarios and profiling metrics.

Some parameters to tweak:

  • Directory size
  • Nesting level
  • File count
  • Event interval
  • Event kind
  • Resource contention

Some parameters to measure:

  • Memory usage
  • Normalized (over what time?) memory usage
  • CPU usage (again, time?)
  • CPU usage/thread
  • L1/L2 cache misses and hits

We'll get there :)
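
As a starting point for the memory and time measurements, a minimal sketch, assuming Linux/POSIX getrusage for peak RSS and steady_clock for setup wall time; watch_big_folder() is a hypothetical stand-in for whichever watcher is being measured:

```cpp
// Measure wall time and peak resident set size around a watcher's
// setup on a big folder. Linux/POSIX only; on Linux, ru_maxrss is
// reported in kibibytes. watch_big_folder() is a hypothetical
// stand-in for the watcher under test.
#include <sys/resource.h>
#include <chrono>
#include <cstdio>

void watch_big_folder() { /* start the watcher under test here */ }

int main() {
  auto t0 = std::chrono::steady_clock::now();
  watch_big_folder();
  auto t1 = std::chrono::steady_clock::now();
  struct rusage u {};
  getrusage(RUSAGE_SELF, &u);
  std::printf("setup: %lld ms, peak RSS: %ld KiB\n",
    (long long)std::chrono::duration_cast<std::chrono::milliseconds>(
      t1 - t0).count(),
    u.ru_maxrss);
}
```

The cache-level numbers are probably easiest to get from outside the process, e.g. with perf stat, rather than in code.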

@e-dant (Owner, Author) commented May 15, 2024

I'm comfortable with our performance tests and Valgrind.

@e-dant closed this as completed May 15, 2024