
Sandbox reuse for tests spends a while #22309

Open
DavidZbarsky-at opened this issue May 10, 2024 · 23 comments
Labels
P2 (We'll consider working on this in future. (Assignee optional)), team-Local-Exec (Issues and PRs for the Execution (Local) team), type: bug

Comments

@DavidZbarsky-at

Description of the bug:

These tests should have pretty much the same sandbox, but it looks like we are spending 1-2s per test on filtering the existing files.
[profile screenshot]

Would it be possible for the stashed sandbox to store a list of its contents, so we can do this more efficiently?

Which category does this issue belong to?

No response

What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.

No response

Which operating system are you running Bazel on?

Linux

What is the output of bazel info release?

release 7.1.0

If bazel info release returns development version or (@non-git), tell us how you built Bazel.

No response

What's the output of git remote get-url origin; git rev-parse HEAD ?

No response

Is this a regression? If yes, please try to identify the Bazel commit where the bug was introduced.

Related to e9022f6?

Have you found anything relevant by searching the web?

No response

Any other information, logs, or outputs that you want to share?

No response

@fmeum
Collaborator

fmeum commented May 10, 2024

@DavidZbarsky-at Could you also test the performance with 7.0.0, which doesn't include e9022f6?

Cc @oquenchil

@oquenchil
Contributor

I actually implemented storing the contents of a stashed sandbox in memory, but it uses noticeably more memory, so I didn't check that in. Regardless of what caused the regression here (which I should fix), I'd want to keep the --reuse_sandbox_directories flag as an alternative that consumes as little memory as possible. For people for whom memory isn't a problem, the sandbox can be placed on tmpfs with --sandbox_base=/dev/shm.

Have you tried the latter? Could you comment on whether it helps you? If you try with tmpfs, you could also try passing --noreuse_sandbox_directories and --experimental_sandbox_async_tree_delete_idle_threads=0 at the same time.
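
For concreteness, the two setups to compare would look something like this (the target pattern here is just a placeholder; the flags are the ones mentioned above):

bazel test //... --sandbox_base=/dev/shm --reuse_sandbox_directories
bazel test //... --sandbox_base=/dev/shm --noreuse_sandbox_directories --experimental_sandbox_async_tree_delete_idle_threads=0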

@oquenchil
Contributor

Do you have a repro for what you are seeing in your build that I could use to debug?

@fmeum
Collaborator

fmeum commented May 10, 2024

@bazel-io flag

bazel-io added the potential release blocker label (Flagged by community members using "@bazel-io flag". Should be added to a release blocker milestone) on May 10, 2024
@DavidZbarsky-at
Author

@DavidZbarsky-at Could you also test the performance with 7.0.0, which doesn't include e9022f6?

On 7.0.0 I was seeing around 600-900ms sandbox setup overhead. Note that we are using tmpfs sandbox base; without that it was around 20-30s per test.

[profile screenshot]

I actually implemented storing the contents of a stashed sandbox in memory, but it uses noticeably more memory, so I didn't check that in.

Perhaps we could store a manifest file on disk? That would mean a single sequential read instead of a filesystem walk, and if the manifest is sorted, the read and the comparison can be streamed.
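
Roughly what I mean, sketched in shell (this is just an illustration, not a claim about how Bazel's sandbox code works; stash.manifest and new-inputs.txt are made-up names):

# written once when the sandbox is stashed:
find <stash-dir> -type f | sort > stash.manifest
# on reuse: one streaming pass over two sorted lists instead of a tree walk
# (assuming both sides use the same relative path form):
comm -3 stash.manifest <(sort new-inputs.txt)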

Regardless of what caused the regression here (which I should fix), I'd want to keep the --reuse_sandbox_directories flag as an alternative that consumes as little memory as possible. For people for whom memory isn't a problem, the sandbox can be placed on tmpfs with --sandbox_base=/dev/shm.

Have you tried the latter? Could you comment on whether it helps you? If you try with tmpfs, you could also try passing --noreuse_sandbox_directories and --experimental_sandbox_async_tree_delete_idle_threads=0 at the same time.

I was hoping sandbox reuse would allow a further speedup on top of the tmpfs sandbox base (even with tmpfs, syscalls aren't free); it feels like the worker should be able to diff a list of 60K items fairly quickly :)

So to summarize, here is the set of timings I am seeing:

  • Bazel 7.1.0, sandbox reuse + tmpfs: 1-2s overheads
  • Bazel 7.1.0, no sandbox reuse + tmpfs: 1-1.5s overheads
  • Bazel 7.0.0, no sandbox reuse + tmpfs: 0.5-1s overheads

I can try to come up with a repro if that would be helpful?

@keertk
Member

keertk commented May 10, 2024

@bazel-io fork 7.2.0

bazel-io removed the potential release blocker label (Flagged by community members using "@bazel-io flag". Should be added to a release blocker milestone) on May 10, 2024
@keertk
Member

keertk commented May 10, 2024

@fmeum is this a hard blocker for 7.2?

@fmeum
Collaborator

fmeum commented May 10, 2024

@keertk Based on the slowdown reported in #22309 (comment) I would say yes, but I'll leave the decision to @oquenchil.

@oquenchil
Contributor

I can try to come up with a repro if that would be helpful?

Yes please. I don't understand why this would have regressed from 7.0.0 to 7.1.0 even if the culprit were e9022f6 (I still have to verify). A repro would help me understand what's going on.

Regarding blocking 7.2.0, I'd be fine with reverting the culprit once it's found rather than blocking the release on a forward fix.

@oquenchil
Contributor

@DavidZbarsky-at can you provide the repro that shows the regression from 7.0.0 to 7.1.0?

oquenchil added the team-Local-Exec (Issues and PRs for the Execution (Local) team) and P2 (We'll consider working on this in future. (Assignee optional)) labels and removed the team-Performance (Issues for Performance teams) and untriaged labels on May 14, 2024
@DavidZbarsky-at
Author

DavidZbarsky-at commented May 14, 2024

Check out https://github.com/DavidZbarsky-at/nodejs-repro and reproduce with:

CC=false bazel test -k :all --cache_test_results=no

It looks like on Linux both 7.0.0 and 7.1.0 take around 700-900ms in sandbox setup with the tmpfs sandbox base, though I did see some actions taking 2s+ on 7.1.0.
On darwin I saw around 28s in 7.0.0 and 7s in 7.1.0, so even though there's a big improvement there (thanks!), there's still a ways to go before sandbox reuse works well on our Macs.

BTW, on darwin I run with common --sandbox_base=/Volumes/RAMDisk, using a RAM disk I created with:

diskutil erasevolume APFS 'RAMDisk' `hdiutil attach -nobrowse -nomount ram://33554432`
echo 'common --sandbox_base=/Volumes/RAMDisk' > ~/.bazelrc

But given the timings I am not sure it's actually helping :)
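
A quick sanity check I could do to confirm the RAM disk is actually being used as the sandbox base (just a guess at a useful check):

df -h /Volumes/RAMDisk
ls /Volumes/RAMDisk   # Bazel should create its sandbox directories under here during a build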

Some profiles I collected if you're curious:
darwin_profile_7.0.0.gz
darwin_profile_7.1.0.gz
linux_profile_7.0.0_no_reuse.gz
linux_profile_7.0.0.gz
linux_profile_7.1.0_no_reuse.gz
linux_profile_7.1.0.gz

@oquenchil
Contributor

Hi David, am I understanding correctly that you aren't seeing a regression in 7.1.0? I will certainly use your example to improve this further, but we need to clarify whether 7.1.0 introduced a regression with respect to 7.0.0.

@dzbarsky

We chatted over Slack. We experience a 1x-1.5x regression in sandbox creation time, but it is masking the enablement of async sandbox deletion. 7.1.0 is faster for us than 7.0.0 with non-async sandbox deletion, so it's not a regression overall.

@oquenchil
Contributor

Hi @DavidZbarsky-at, here I have the code for the in-memory stashes: https://github.com/oquenchil/bazel/tree/in_memory_reuse_sandbox_directory_stashes

You can build it with:

USE_BAZEL_VERSION=7.1.1 bazelisk build //src:bazel

If you have time, please give it a try on Linux and macOS and let me know if you see an improvement.
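
For reference, one way to try it against your repro (paths here are just illustrative):

git clone --branch in_memory_reuse_sandbox_directory_stashes https://github.com/oquenchil/bazel
cd bazel && USE_BAZEL_VERSION=7.1.1 bazelisk build //src:bazel
cd ../nodejs-repro && CC=false ../bazel/bazel-bin/src/bazel test -k //:all --cache_test_results=no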

@DavidZbarsky-at
Author

Ack, I am moving this week and will be mostly offline, but I'll give it a shot next week!

@Wyverald
Member

We chatted over Slack. We experience a 1x-1.5x regression in sandbox creation time, but it is masking the enablement of async sandbox deletion. 7.1.0 is faster for us than 7.0.0 with non-async sandbox deletion, so it's not a regression overall.

Given this, I'm closing the 7.2.0 release blocker issue (#22319).

@DavidZbarsky-at
Author

@oquenchil I had a chance to give it a quick spin: on my repro, it takes the sandbox creation times from 7s to 200ms; this is amazing! Unfortunately I had issues with a genrule not being able to write files, and even when I switched it to write_file, I saw an issue with the default test.xml generation action, so it's a bit hard for me to run this in our more realistic setup. But this looks very promising, thank you!

@oquenchil
Contributor

Glad to hear there was an improvement. Let's fix the errors you are seeing then. Did you see these errors with the repro from https://github.com/DavidZbarsky-at/nodejs-repro? If you did, would you mind telling me the bazel commands you ran? I didn't see these errors myself.

If it wasn't with that repro, would it be possible to create an example with the errors you are seeing?

Thanks!

@DavidZbarsky-at
Author

Yes, it was the master branch of that repo, and it's the same build command I posted earlier. I wonder if my self-built Bazel is weird somehow; I've never tried building it myself before. I'll try a clean branch of Bazel and see if it exhibits the same brokenness.

@oquenchil
Contributor

Alright thanks. I will see if I can get it to reproduce. I think it's unlikely that your custom bazel would introduce any differences.

@oquenchil
Contributor

I forgot to ask, did you see these errors on Linux or macOS?

@DavidZbarsky-at
Author

It was on macOS.

@DavidZbarsky-at
Author

Here's the first error I saw:

touch: bazel-out/darwin_arm64-fastbuild/bin/out_37: Operation not permitted
ERROR: /Users/david.zbarsky/nodejs-repro/BUILD.bazel:61:12: Executing genrule //:gen_out_43 failed: (Exit 1): bash failed: error executing Genrule command (from target //:gen_out_43) /bin/bash -c 'source external/bazel_tools/tools/genrule/genrule-setup.sh; touch bazel-out/darwin_arm64-fastbuild/bin/out_43'

I retried using the HEAD commit you branched from, and saw the same, so I don't think your PR breaks anything for me:

(env-18.16.0) david.zbarsky@JF6FQ9PXP9 nodejs-repro % /Users/david.zbarsky/bazel/bazel-bin/src/bazel test -k //:all --cache_test_results=no --profile=/tmp/profile.gz
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
WARNING: Option 'experimental_action_cache_store_output_metadata' is deprecated: Use --action_cache_store_output_metadata instead
WARNING: Option 'experimental_action_cache_store_output_metadata' is deprecated: Use --action_cache_store_output_metadata instead
ERROR: /Users/david.zbarsky/nodejs-repro/BUILD.bazel:61:12: Executing genrule //:gen_out_49 failed: (Exit 1): bash failed: error executing Genrule command (from target //:gen_out_49) /bin/bash -c 'source external/bazel_tools/tools/genrule/genrule-setup.sh; touch bazel-out/darwin_arm64-fastbuild/bin/out_49'
