
Callback factories #23

Open
michihenning opened this issue Apr 11, 2023 · 3 comments

@michihenning

michihenning commented Apr 11, 2023

Why would I use a callback factory when setting up an async call? I can do this:

auto callback = socket_->createReceiveCallback([this](auto receiver, auto blob, auto event) {
    received_identity(receiver, blob, event);
});
auto error = socket_->receive(r_opts, callback);

This works fine. But I can also do this:

auto error = socket_->receive(r_opts, [this](auto receiver, auto blob, auto event) {
    received_identity(receiver, blob, event);
});

Whether I use the callback from the factory or pass a lambda directly makes no apparent difference. Why would I use one over the other?

@mattrm456
Contributor

The difference is described in the section titled "Asynchronous Operation" in the ntci::StreamSocket class documentation. The result of createXyzCallback associates a function with an arbitrary strand and an optional authorization mechanism that can prevent the function from being invoked when the operation completes. Some users may never care about such strands and authorization; the overloads that accept a lambda are provided for convenience.
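
For example, binding a completion callback to an explicit strand looks roughly like this (a sketch only; see the ntci::StreamSocket and strand factory documentation for the exact overloads and names):

// Sketch: assumes the interface can create strands via createStrand() and that
// createReceiveCallback accepts, in addition to the function, the strand on
// which the completion function should be invoked.
auto strand = interface_->createStrand();

auto callback = socket_->createReceiveCallback(
    [this](auto receiver, auto blob, auto event) {
        received_identity(receiver, blob, event);
    },
    strand);  // completion runs on this strand, not on an arbitrary I/O thread

auto error = socket_->receive(r_opts, callback);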

@michihenning
Author

Thank you for that; I had missed it.

@michihenning
Author

michihenning commented May 19, 2023

What the documentation does not make clear is that, if there is only one thread in an interface, the strand is implicit and all callbacks will be invoked on that single thread. (At least, that's what my experimentation suggests.) Any hoops I might jump through with callback factories and strands are wasted effort (and possibly add overhead) if there is only one thread in the interface.
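
To be concrete, this is the kind of single-threaded setup I mean (a sketch following the published ntcf/ntca examples; the exact names should be checked against the version in use):

// Sketch: an interface configured with exactly one I/O thread, following the
// ntcf::System / ntca::InterfaceConfig usage shown in the library examples.
ntca::InterfaceConfig interfaceConfig;
interfaceConfig.setThreadName("example");
interfaceConfig.setMinThreads(1);
interfaceConfig.setMaxThreads(1);

bsl::shared_ptr<ntci::Interface> interface =
    ntcf::System::createInterface(interfaceConfig);

With a configuration like this, every completion callback appears to run on that one thread, regardless of whether I bind it to a strand.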

It might be nice to say a little bit more about what strands are, how they work, and how they relate to the concurrency (or otherwise) of callbacks sharing the same strand or using different strands. Does a strand guarantee that all callbacks with that strand will be invoked sequentially? Does it guarantee that all callbacks with that strand will be invoked by the same thread? Does a strand guarantee that, if I invoke async operation A followed by operation B, the completion handler for A will be invoked before the one for B, even if B finishes before A? These are all questions in the reader's mind.

As it stands, the doc pretty much assumes that the reader knows what a strand is. I suspect that quite a few readers will know only vaguely.

michihenning reopened this May 19, 2023