Re-Use Connections When Publishing to the Aggregator #51

Open
copperlight opened this issue Oct 5, 2021 · 2 comments · May be fixed by #80

Comments

@copperlight
Collaborator

This should reduce the overhead per request.

copperlight added a commit to copperlight/spectatord that referenced this issue Oct 19, 2023
The Connection Reuse section of the Everything curl book explains how this
works:

https://everything.curl.dev/libcurl/connectionreuse

> When you are using the easy API, or, more specifically, curl_easy_perform(),
> libcurl will keep the pool associated with the specific easy handle. Then
> reusing the same easy handle will ensure it can reuse its connection.

Before, in the `HttpClient::perform` method, a new curl handle was created
upon every invocation, which meant that there was no opportunity to reuse
connections from the underlying connection pool.

Making the curl handle static ensures that the same one remains in use, thus
providing the means to reuse the established connection pool.

Fixes Netflix-Skunkworks#51.
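
A minimal sketch of the static-handle approach this commit describes; the `CurlHandle` wrapper and the `perform()` signature are assumptions based on the issue text, not the actual spectatord code:

```cpp
#include <curl/curl.h>
#include <string>

// RAII wrapper matching the CurlHandle behavior described in this thread:
// curl_easy_init() in the constructor, curl_easy_cleanup() in the destructor.
class CurlHandle {
 public:
  CurlHandle() : handle_{curl_easy_init()} {}
  ~CurlHandle() { curl_easy_cleanup(handle_); }
  CurlHandle(const CurlHandle&) = delete;
  CurlHandle& operator=(const CurlHandle&) = delete;
  CURL* get() const { return handle_; }

 private:
  CURL* handle_;
};

class HttpClient {
 public:
  bool perform(const std::string& url) {
    // static: the same easy handle, and the connection pool libcurl keeps
    // on it, survives across invocations instead of being created and
    // cleaned up on every call. Caveat raised later in this thread: a
    // single handle must never be used by more than one thread at a time.
    static CurlHandle curl;
    curl_easy_setopt(curl.get(), CURLOPT_URL, url.c_str());
    return curl_easy_perform(curl.get()) == CURLE_OK;
  }
};
```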
@copperlight copperlight linked a pull request Oct 19, 2023 that will close this issue
@copperlight
Collaborator Author

copperlight commented Oct 19, 2023

The Registry:

  • Initializes a publisher_ in the constructor.
  • Calls publisher_.Start() in the Start() method.

The Publisher:

  • Initializes num_sender_threads_, which is used to create an asio::thread_pool of that size, and resizes the buffers_ vector for SmilePayload to match the thread count.
  • Has a Start() method which calls HttpClient::GlobalInit() (which calls curl_global_init()) and starts a sender_thread_.
  • The sender_thread_ runs the sender() function in a loop with a delay, calling out to send_metrics() to do the publishing work.
  • The send_metrics() method creates a new HttpClient upon every invocation. That client is captured as a part of the function object sent to the thread pool for execution.
  • Every invocation of HttpClient::perform() instantiates a new CurlHandle as a local variable, which prevents connection reuse.
  • Every instantiation of CurlHandle calls curl_easy_init() in the constructor and curl_easy_cleanup() in the destructor, thus eliminating the possibility of reusing connections from the curl connection cache (see the sketch after this list).
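
A sketch of the problem pattern described above, reusing the assumed `CurlHandle` wrapper from the earlier sketch:

```cpp
bool HttpClient::perform(const std::string& url) {
  // A fresh easy handle on every invocation: curl_easy_init() runs in the
  // constructor here, and curl_easy_cleanup() runs in the destructor when
  // this local goes out of scope, discarding the connection cache each time.
  CurlHandle curl;
  curl_easy_setopt(curl.get(), CURLOPT_URL, url.c_str());
  return curl_easy_perform(curl.get()) == CURLE_OK;
}
```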

https://curl.se/libcurl/c/libcurl-tutorial.html

> After each single curl_easy_perform operation, libcurl will keep the connection alive and open. A subsequent request using the same easy handle to the same host might just be able to use the already open connection! This reduces network impact a lot.

https://everything.curl.dev/libcurl/connectionreuse

> Easy API pool
>
> When you are using the easy API, or, more specifically, curl_easy_perform(),
> libcurl will keep the pool associated with the specific easy handle. Then
> reusing the same easy handle will ensure it can reuse its connection.

> Multi API pool
>
> When you are using the multi API, the connection pool is instead kept associated
> with the multi handle. This allows you to cleanup and re-create easy handles freely
> without risking losing the connection pool, and it allows the connection used by
> one easy handle to get reused by a separate one in a later transfer.
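
For reference, a minimal standalone sketch of the multi-API pattern the book describes (illustrative only, not spectatord code):

```cpp
#include <curl/curl.h>

int main() {
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURLM* multi = curl_multi_init();

  // Easy handles can be created and destroyed freely; the connection pool
  // lives on the multi handle, so a later transfer to the same host can
  // reuse a connection opened by an earlier easy handle.
  CURL* easy = curl_easy_init();
  curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/");
  curl_multi_add_handle(multi, easy);

  int running = 1;
  while (running > 0) {
    curl_multi_perform(multi, &running);
    if (running > 0) {
      int numfds = 0;
      curl_multi_wait(multi, nullptr, 0, 1000, &numfds);
    }
  }

  curl_multi_remove_handle(multi, easy);
  curl_easy_cleanup(easy);   // the connection pool stays with the multi handle
  curl_multi_cleanup(multi);
  curl_global_cleanup();
}
```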

https://curl.se/libcurl/c/threadsafe.html

> You must never share the same handle in multiple threads. You can pass the handles around among threads, but you must never use a single handle from more than one thread at any given time.

https://stackoverflow.com/a/44740820

> The libcurl multi interface is a single-core single-thread way to do a large amount of parallel transfers in the same thread. It allows for easy reuse of caches, connections and more. That has its clear advantages but will make it CPU-bound in that single CPU.
>
> Doing multi-threaded transfers will make each thread/handle have its own cache and connection pool, etc., which changes when those will be useful, but it will make the transfers less likely to be CPU-bound when you can spread them out over a larger set of cores/CPUs.

https://curl.se/libcurl/c/libcurl-multi.html

multi interface overview

https://www.onlineaspect.com/2009/01/26/how-to-use-curl_multi-without-blocking/

Rolling curl?

Given this behavior, it is not clear how CLOSE_WAIT connections could stack up as described in #68, unless the constant curl_easy_cleanup calls shut down janitorial work that would otherwise occur when sockets get into a bad state.

Possible solutions:

  • Mark the CurlHandle in the perform() method as thread_local storage. Not viable - this leads to ASAN stack-use-after-scope issues.
  • Create a wrapper class for CurlHandle which uses an atomic integer to keep track of the handles allocated to the thread pool. Implement a vector of HttpClients, bump the CurlHandle to a private variable in that class, and pre-create the clients for all of the threads. Destroy them all on Stop(). (Sketched after this list.)
  • Switch from an ASIO thread_pool to direct use of the curl multi interface to support many transfers while maintaining the connection cache. Since we already have a separate Publisher thread, we should be able to implement this without disturbing the base performance of the server. We would need to verify that the publishing performance remains about the same under high load.
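
A hypothetical sketch of the second option; `HttpClientPool` and its members are illustrative names, not existing spectatord code:

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

struct HttpClient { /* owns a private CurlHandle, per the option above */ };

class HttpClientPool {
 public:
  explicit HttpClientPool(size_t num_threads) : clients_(num_threads) {}

  // Hand out a distinct client per task with an atomic counter. Note the
  // assumption: this only avoids sharing a handle if no more than
  // num_threads tasks run concurrently, which is the gap the object-pool
  // idea later in this thread addresses.
  HttpClient& next() {
    auto idx = next_.fetch_add(1, std::memory_order_relaxed);
    return clients_[idx % clients_.size()];
  }

 private:
  std::vector<HttpClient> clients_;
  std::atomic<size_t> next_{0};
};
```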

copperlight added a commit to copperlight/spectatord that referenced this issue Oct 19, 2023
Originally, in the `HttpClient::perform()` method, a new `CurlHandle` was created
upon every invocation, leading to subsequent calls to `curl_easy_init()` and
`curl_easy_cleanup()`, as the variable went in and out of scope while it executed
on the `asio::thread_pool`. This meant that there was no opportunity to reuse
connections from the underlying connection pool, because it was always cleaned up.

Marking the `CurlHandle` with the `thread_local` storage duration ensures that
the same one remains in use for each thread, thus providing the means to reuse
the established connection pool.

Fixes Netflix-Skunkworks#51.
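
A sketch of the `thread_local` variant from this commit, again using the assumed `CurlHandle` wrapper:

```cpp
bool HttpClient::perform(const std::string& url) {
  // thread_local: each thread in the asio::thread_pool keeps its own easy
  // handle (and its connection pool) alive across invocations, and no
  // handle is ever shared between threads.
  thread_local CurlHandle curl;
  curl_easy_setopt(curl.get(), CURLOPT_URL, url.c_str());
  return curl_easy_perform(curl.get()) == CURLE_OK;
}
```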
@copperlight
Collaborator Author

copperlight commented Oct 21, 2023

Using the thread_local approach for the CurlHandle in the HttpClient::perform() method, we get the following results.

On the macOS platform (apple-clang-15), we see two different ASAN errors:

  • container-overflow (related to Poco admin server, maybe false positive).
  • stack-use-after-scope (related to thread_local in CurlHandle).

On the Ubuntu Jammy platform:

  • g++-11 compiler does not report any ASAN errors.
  • clang-14 compiler reports stack-use-after-scope (related to thread_local in CurlHandle).
  • clang-14 compiler DOES NOT report a container-overflow error in the AdminServer tests.

Marking some of the HttpClients in the tests as thread_local clears some of the ASAN errors. This points to the need to keep a set of independent HttpClients for the thread pool.

Since the Publisher is responsible for calling the HttpClient::GlobalInit and HttpClient::GlobalShutdown methods on Start() and Shutdown(), we should not need any additional handling for curl setup.

To control publisher thread access to the HttpClients, we may need to implement something like an object pool, because the number of batches may exceed the number of threads, and we cannot know how fast each thread will progress:

https://codereview.stackexchange.com/questions/273467/c-thread-safe-object-pool
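
A hypothetical sketch of such an object pool, along the lines of the linked post; all names here are illustrative:

```cpp
#include <condition_variable>
#include <cstddef>
#include <memory>
#include <mutex>
#include <vector>

struct HttpClient {};  // stand-in for the real client with its own CurlHandle

class HttpClientPool {
 public:
  explicit HttpClientPool(size_t size) {
    for (size_t i = 0; i < size; ++i) {
      free_.push_back(std::make_unique<HttpClient>());
    }
  }

  // Blocks until a client is available, so a CurlHandle is never used by
  // two threads at once, even when more batches are in flight than clients.
  std::unique_ptr<HttpClient> acquire() {
    std::unique_lock<std::mutex> lock{mutex_};
    cv_.wait(lock, [this] { return !free_.empty(); });
    auto client = std::move(free_.back());
    free_.pop_back();
    return client;
  }

  // Return a client to the pool and wake one waiting thread.
  void release(std::unique_ptr<HttpClient> client) {
    {
      std::lock_guard<std::mutex> lock{mutex_};
      free_.push_back(std::move(client));
    }
    cv_.notify_one();
  }

 private:
  std::mutex mutex_;
  std::condition_variable cv_;
  std::vector<std::unique_ptr<HttpClient>> free_;
};
```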
