
parca.dev/debuginfod-client/0.18.0 nodes put a lot of unnecessary load on public debuginfod servers #3916

Open
fche opened this issue Oct 18, 2023 · 11 comments

Comments

@fche

fche commented Oct 18, 2023

debuginfod.elfutils.org is receiving on the order of 10 requests per second from a moderate number of distinct GCE nodes identifying themselves as parca. That would be fine if they were legitimate, useful requests, but almost all of them are 404 not-found lookups that are repeated, sometimes several times within a second. The server is having to throttle these clients.

For example, over the last 3 days we've received 1,312,974 requests for /buildid/c393c1f2a760a00a/debuginfo, all 404s. Negative results like these really should be cached aggressively by the client.
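A minimal sketch of the kind of client-side negative caching that would avoid this (this is not Parca's actual code; the names and the TTL policy below are assumptions): remember build IDs that recently returned 404 and skip re-requesting them until a TTL expires.

    package negcache

    import (
        "fmt"
        "net/http"
        "sync"
        "time"
    )

    // negativeCache remembers build IDs that recently returned 404 so we do
    // not ask the upstream server for them again until the TTL expires.
    type negativeCache struct {
        mu     sync.Mutex
        misses map[string]time.Time // build ID -> time of the last 404
        ttl    time.Duration
    }

    func newNegativeCache(ttl time.Duration) *negativeCache {
        return &negativeCache{misses: map[string]time.Time{}, ttl: ttl}
    }

    func (c *negativeCache) shouldSkip(buildID string) bool {
        c.mu.Lock()
        defer c.mu.Unlock()
        t, ok := c.misses[buildID]
        return ok && time.Since(t) < c.ttl
    }

    func (c *negativeCache) recordMiss(buildID string) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.misses[buildID] = time.Now()
    }

    // fetchDebuginfo consults the negative cache before issuing a request,
    // and records any 404 it receives so the next lookup is answered locally.
    func fetchDebuginfo(c *negativeCache, server, buildID string) error {
        if c.shouldSkip(buildID) {
            return fmt.Errorf("build ID %s: recently 404'd, not re-requesting", buildID)
        }
        resp, err := http.Get(fmt.Sprintf("%s/buildid/%s/debuginfo", server, buildID))
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode == http.StatusNotFound {
            c.recordMiss(buildID)
            return fmt.Errorf("build ID %s: not found (result cached)", buildID)
        }
        // ... stream resp.Body to disk here ...
        return nil
    }

Even a TTL of a few minutes would collapse the repeated per-second lookups for the same missing build ID into a single upstream request.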

@Foxboron

Foxboron commented Oct 18, 2023

For comparison, debuginfod.archlinux.org has gotten 55 requests for the same buildid since yesterday.

# journalctl --since yesterday -u debuginfod  | grep "c393c1f2a760a00a" | wc -l
55

55 requests for something that only ever returns 404 is fewer, but still not what I would call reasonable.

@brancz
Member

brancz commented Oct 19, 2023

We're very sorry about this. We dramatically improved the situation in 0.19.0, which is why I assume you're not seeing this very often from 0.19.0+ clients (the relevant fixes, #3413, #2924, and #2847, all only landed in 0.19).

So while not great, I think this is going to get better with time as more people upgrade from 0.18 to newer versions.

Something additional we could do (though it would take a little time): we (as in Polar Signals) could host a debuginfod server that caches upstream responses and make that endpoint the default in Parca. Then, if there is ever an issue like this, we could at least fix things and cache more aggressively on our side without having to wait for users to upgrade. The downside is that this would only become the default in 0.20+.
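As a rough sketch of what such a caching endpoint could look like (this is not Polar Signals' design; the upstream URL, port, and cache layout are made-up assumptions), a tiny proxy could store successful downloads on disk keyed by build ID and remember 404s in memory, so repeated client requests never reach the upstream server:

    package main

    import (
        "io"
        "net/http"
        "os"
        "path/filepath"
        "strings"
        "sync"
    )

    const upstream = "https://debuginfod.elfutils.org" // assumed upstream
    const cacheDir = "/var/cache/debuginfod-proxy"     // assumed cache location

    var (
        mu       sync.Mutex
        notFound = map[string]bool{} // negative cache of build IDs
    )

    func handler(w http.ResponseWriter, r *http.Request) {
        // Expect paths of the form /buildid/<id>/debuginfo.
        parts := strings.Split(strings.Trim(r.URL.Path, "/"), "/")
        if len(parts) != 3 || parts[0] != "buildid" {
            http.NotFound(w, r)
            return
        }
        id := parts[1]

        mu.Lock()
        miss := notFound[id]
        mu.Unlock()
        if miss {
            http.NotFound(w, r) // answered from the negative cache, upstream untouched
            return
        }

        local := filepath.Join(cacheDir, id)
        if f, err := os.Open(local); err == nil {
            defer f.Close()
            io.Copy(w, f) // answered from the on-disk cache, upstream untouched
            return
        }

        resp, err := http.Get(upstream + r.URL.Path)
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadGateway)
            return
        }
        defer resp.Body.Close()
        switch resp.StatusCode {
        case http.StatusOK:
            f, err := os.Create(local)
            if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            defer f.Close()
            // Stream to the client and the cache file at the same time.
            io.Copy(io.MultiWriter(w, f), resp.Body)
        case http.StatusNotFound:
            mu.Lock()
            notFound[id] = true
            mu.Unlock()
            http.NotFound(w, r)
        default:
            w.WriteHeader(resp.StatusCode) // pass other statuses through uncached
            io.Copy(w, resp.Body)
        }
    }

    func main() {
        os.MkdirAll(cacheDir, 0o755)
        http.HandleFunc("/", handler)
        http.ListenAndServe(":8002", nil)
    }

In practice the negative cache would also want a TTL, so that debuginfo published after a miss eventually becomes reachable again.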

Let me know what you would like to see or if you have other suggestions, we of course want to play well with the ecosystem, and I apologize again for having created this problem in the first place.

@Foxboron

We've realized that the 55 requests Arch is getting are spill-over from the main debuginfod.elfutils.org proxy: any lookups that miss there are just forwarded to us (and probably to other mirrors).

@Foxboron

Something additional we could do (though it would take a little time): we (as in Polar Signals) could host a debuginfod server that caches upstream responses and make that endpoint the default in Parca. Then, if there is ever an issue like this, we could at least fix things and cache more aggressively on our side without having to wait for users to upgrade. The downside is that this would only become the default in 0.20+.

I think this sounds like a good idea. It would give you more leeway to manage negative hits and help ensure you play well with the upstream mirrors.

@fche
Author

fche commented Oct 19, 2023

Understood, so if all this traffic comes from clients running old code that cannot be retroactively reconfigured to use a server of your own, then there's not much either of us can do except wait for the user population to upgrade. ;-) OK, no problem, the server will protect itself as best it can, and we'll carry on.

@brancz
Member

brancz commented Oct 19, 2023

Thanks for understanding! :)

@brancz
Member

brancz commented Oct 19, 2023

I'll leave this open until we move to a Polar Signals managed debuginfod endpoint.

@fche
Author

fche commented Oct 20, 2023

By the way, we can install httpd-level redirects for these old 0.18 clients from debuginfod.elfutils.org to another server pretty easily.
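For illustration only: fche is describing an httpd-level rewrite rule on debuginfod.elfutils.org, not application code, but the matching logic would key off the User-Agent string these old clients send. A Go stand-in, with a placeholder target URL, might look like this:

    package main

    import (
        "net/http"
        "strings"
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // Old 0.18 clients identify themselves via the User-Agent header.
            if strings.HasPrefix(r.UserAgent(), "parca.dev/debuginfod-client/0.18") {
                // Placeholder target; a real deployment would point at whatever
                // alternate endpoint the Parca maintainers stand up.
                http.Redirect(w, r, "https://parca-cache.example.invalid"+r.URL.Path,
                    http.StatusTemporaryRedirect)
                return
            }
            http.NotFound(w, r) // non-matching requests would be served normally
        })
        http.ListenAndServe(":8002", nil)
    }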

@fche
Author

fche commented Jan 19, 2024

A fresher observation: there is now a steady stream of parca traffic onto debuginfod.elfutils.org, which is great. One problem is that even successful fetches don't appear to be cached reliably on your side. For example, for just one build-id, over the course of one hour, we're seeing:

Jan 19 12:04:04 elastic.org debuginfod[3834]: [Fri 19 Jan 2024 12:04:04 PM GMT] (3834/3835): 127.0.0.1:37828 UA:parca.dev/debuginfodclient/0.20.0 XFF:35.230.188.104 GET /buildid/ff69401f5b9593dd4b1d31d7f2ff1688f265e371/debuginfo 200 3367072 0+0ms
Jan 19 12:10:32 elastic.org debuginfod[3834]: [Fri 19 Jan 2024 12:10:32 PM GMT] (3834/3835): 127.0.0.1:50404 UA:parca.dev/debuginfodclient/0.20.0 XFF:35.188.226.152 GET /buildid/ff69401f5b9593dd4b1d31d7f2ff1688f265e371/debuginfo 200 3367072 0+0ms
Jan 19 12:19:49 elastic.org debuginfod[3834]: [Fri 19 Jan 2024 12:19:49 PM GMT] (3834/3835): 127.0.0.1:58460 UA:parca.dev/debuginfodclient/0.20.0 XFF:35.230.180.226 GET /buildid/ff69401f5b9593dd4b1d31d7f2ff1688f265e371/debuginfo 200 3367072 0+0ms
Jan 19 12:19:51 elastic.org debuginfod[3834]: [Fri 19 Jan 2024 12:19:51 PM GMT] (3834/3836): 127.0.0.1:58512 UA:parca.dev/debuginfodclient/0.20.0 XFF:35.188.226.152 GET /buildid/ff69401f5b9593dd4b1d31d7f2ff1688f265e371/debuginfo 200 3367072 0+0ms
Jan 19 12:29:01 elastic.org debuginfod[3834]: [Fri 19 Jan 2024 12:29:01 PM GMT] (3834/3835): 127.0.0.1:57992 UA:parca.dev/debuginfodclient/0.20.0 XFF:35.188.226.152 GET /buildid/ff69401f5b9593dd4b1d31d7f2ff1688f265e371/debuginfo 200 3367072 0+0ms
Jan 19 12:37:59 elastic.org debuginfod[3834]: [Fri 19 Jan 2024 12:37:59 PM GMT] (3834/3835): 127.0.0.1:56594 UA:parca.dev/debuginfodclient/0.20.0 XFF:35.230.180.226 GET /buildid/ff69401f5b9593dd4b1d31d7f2ff1688f265e371/debuginfo 200 3367072 0+0ms
Jan 19 12:45:52 elastic.org debuginfod[3834]: [Fri 19 Jan 2024 12:45:52 PM GMT] (3834/3836): 127.0.0.1:45524 UA:parca.dev/debuginfodclient/0.20.0 XFF:35.230.180.226 GET /buildid/ff69401f5b9593dd4b1d31d7f2ff1688f265e371/debuginfo 200 3367072 0+0ms
Jan 19 12:55:58 elastic.org debuginfod[3834]: [Fri 19 Jan 2024 12:55:58 PM GMT] (3834/3836): 127.0.0.1:40784 UA:parca.dev/debuginfodclient/0.20.0 XFF:35.188.226.152 GET /buildid/ff69401f5b9593dd4b1d31d7f2ff1688f265e371/debuginfo 200 3367072 0+0ms
Jan 19 12:55:59 elastic.org debuginfod[3834]: [Fri 19 Jan 2024 12:55:59 PM GMT] (3834/3835): 127.0.0.1:38540 UA:parca.dev/debuginfodclient/0.20.0 XFF:35.188.226.152 GET /buildid/ff69401f5b9593dd4b1d31d7f2ff1688f265e371/debuginfo 200 3367072 0+0ms

Note how both the .152 and the .226 machines fetch the same file multiple times within the same hour. The total data flow to parca appears to be on the order of 1 TB/week, which is right around our upper limit of tolerability. Can you check whether there's anything simple that can be done on your side for this case?
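A minimal sketch of the client-side fix being asked for here, with the caveat that the cache layout and function names are made up rather than Parca's: reuse an already-downloaded debuginfo file keyed by build ID, so the same node does not refetch the same ~3 MB object several times per hour.

    package debuginfocache

    import (
        "fmt"
        "io"
        "net/http"
        "os"
        "path/filepath"
    )

    // fetchOnce returns the path of a locally cached debuginfo file,
    // downloading it from the upstream server only if it is not already
    // present on disk.
    func fetchOnce(cacheDir, server, buildID string) (string, error) {
        path := filepath.Join(cacheDir, buildID, "debuginfo")
        if _, err := os.Stat(path); err == nil {
            return path, nil // already downloaded earlier; no network request
        }
        if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
            return "", err
        }
        resp, err := http.Get(fmt.Sprintf("%s/buildid/%s/debuginfo", server, buildID))
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return "", fmt.Errorf("fetch %s: %s", buildID, resp.Status)
        }
        f, err := os.Create(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        if _, err := io.Copy(f, resp.Body); err != nil {
            os.Remove(path) // avoid leaving a truncated file behind
            return "", err
        }
        return path, nil
    }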

@brancz
Member

brancz commented Jan 20, 2024

We perform two requests: one to find out whether the object exists and one to actually download it. Could we do a HEAD request instead, or is there another way to determine whether a "future" request would succeed?

@fche
Author

fche commented Jan 20, 2024

I see more than two requests per IP address per hour for the same randomly chosen build-id, so something's not working quite that way.

By the way, what's the downside of directly asking for the item? If it doesn't exist, you'll be told pretty quickly. If it does, you'll get the file. The client can abort the download if it wishes.

By the way by the way, with the environment variable DEBUGINFOD_MAXSIZE=1, the client can get a different return code for present vs. absent, though the check may still take some processing time. The limit can be communicated to the server via the "X-DEBUGINFOD-MAXSIZE: 1" request header.
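A small sketch of that probe in plain Go, under the assumption stated above that with the maxsize header a 404 means "absent" and any other answer means "present" (the package and function names are illustrative, not Parca's API):

    package debuginfodprobe

    import (
        "fmt"
        "net/http"
    )

    // exists probes a debuginfod server for a build ID without downloading the
    // file, by asking the server not to send anything larger than 1 byte and
    // treating any non-404 status as "present".
    func exists(server, buildID string) (bool, error) {
        url := fmt.Sprintf("%s/buildid/%s/debuginfo", server, buildID)
        req, err := http.NewRequest(http.MethodGet, url, nil)
        if err != nil {
            return false, err
        }
        req.Header.Set("X-DEBUGINFOD-MAXSIZE", "1")
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        return resp.StatusCode != http.StatusNotFound, nil
    }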

By the way by the way by the way, the forthcoming "metadata" debuginfod API extensions will be another way to query the contents of the server.
