
Feature Request: batch message sending #772

Open
dec0dOS opened this issue Nov 17, 2022 · 8 comments
Assignees
Labels
A-requester Area: requester trait, bot adaptors feature-accepted Feature Request or Proposal was accepted by maintainers and will be implemented K-feature-request Kind: request for implementing a feature

Comments

@dec0dOS

dec0dOS commented Nov 17, 2022

Within a given delay (batching_delay), aggregate messages before sending. After that delay, join the messages with batching_separator and send them as one message.

An implementation example can be found here:
https://github.com/ivanmarban/winston-telegram
https://github.com/ivanmarban/winston-telegram/blob/6d903eb0db39c2205f548e20ee14fca1e3667022/lib/winston-telegram.js#L84

Pros

Instead of running into the Telegram rate limit with many small messages, one big message can be sent.
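The proposed behaviour can be sketched in plain Rust. The names batching_delay and batching_separator come from the proposal above; the Batcher type and its methods are hypothetical, not an existing teloxide API:

```rust
use std::time::{Duration, Instant};

/// Minimal sketch of the proposed batcher. Messages are queued instead of
/// being sent immediately; once `batching_delay` has elapsed since the first
/// queued message, everything is joined with `batching_separator`.
struct Batcher {
    batching_delay: Duration,
    batching_separator: String,
    buffer: Vec<String>,
    first_queued_at: Option<Instant>,
}

impl Batcher {
    fn new(batching_delay: Duration, batching_separator: &str) -> Self {
        Self {
            batching_delay,
            batching_separator: batching_separator.to_owned(),
            buffer: Vec::new(),
            first_queued_at: None,
        }
    }

    /// Queue a message instead of sending it right away.
    fn queue(&mut self, text: &str) {
        if self.buffer.is_empty() {
            self.first_queued_at = Some(Instant::now());
        }
        self.buffer.push(text.to_owned());
    }

    /// If the delay has elapsed, drain the buffer and return one joined
    /// message that the caller would actually send to Telegram.
    fn flush_if_due(&mut self) -> Option<String> {
        let due = self
            .first_queued_at
            .map_or(false, |t| t.elapsed() >= self.batching_delay);
        if due && !self.buffer.is_empty() {
            self.first_queued_at = None;
            let joined = self
                .buffer
                .drain(..)
                .collect::<Vec<_>>()
                .join(&self.batching_separator);
            Some(joined)
        } else {
            None
        }
    }
}
```

A real adaptor would drive flush_if_due from a timer task rather than polling, but the queue-then-join shape is the core of the idea.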

@dec0dOS dec0dOS added the K-feature-request Kind: request for implementing a feature label Nov 17, 2022
@WaffleLapkin
Member

I'm not really sure that's useful. Most bots I've seen usually send a single message to a chat at a time, so there is simply nothing to batch. Moreover, when you merge messages together you risk exceeding the character limit, so you'll also need to handle that. And if the messages have other differing parameters (replies, parse mode, etc.), they can't be merged either.

So to me it seems this would be useful in a very small number of cases, and even then would probably cause more trouble than good. Do you have a use case for this feature?
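The character-limit concern above is real: Telegram caps a text message at 4096 characters, so a batcher that joins messages would sometimes have to re-split the result. A rough sketch of that handling (split_batch is a hypothetical helper, not part of teloxide):

```rust
/// Split a batch of messages into chunks that respect a character limit
/// (Telegram caps text messages at 4096 characters). Chunks are split on
/// message boundaries so no individual message is cut in half; a single
/// message longer than the limit still becomes its own oversized chunk,
/// which real code would have to hard-split as well.
fn split_batch(parts: &[String], separator: &str, limit: usize) -> Vec<String> {
    let mut chunks = Vec::new();
    let mut current = String::new();
    for part in parts {
        let needed = if current.is_empty() {
            part.len()
        } else {
            current.len() + separator.len() + part.len()
        };
        // Adding this message would overflow the limit: emit the current
        // chunk and start a new one.
        if !current.is_empty() && needed > limit {
            chunks.push(std::mem::take(&mut current));
        }
        if !current.is_empty() {
            current.push_str(separator);
        }
        current.push_str(part);
    }
    if !current.is_empty() {
        chunks.push(current);
    }
    chunks
}
```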

@dec0dOS
Author

dec0dOS commented Nov 17, 2022

We use teloxide mostly for our infrastructure monitoring (so we only send messages to users, we don't process messages sent by users). We have peak times when we need attention from our team. Message sending limits in groups are really strict, so we end up with a big message queue, and the throttle adaptor doesn't solve the problem.

Batching the messages also solves the notification problem: only one notification is sent, instead of spamming with many messages.

By the way, thanks for the awesome project!

@WaffleLapkin
Member

Oh, I see! Yeah, for single-group notifications, batching like this makes a lot of sense (I also have a bot like this lmao).

I think this can be implemented as a bot adaptor that specifically handles SendMessage requests, batching them. Something akin to throttle, but a lot simpler.

Would you be up for implementing this? I can provide mentorship :)
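The adaptor shape being suggested can be sketched with standard-library channels: a front half queues SendMessage-like requests, and a background worker drains whatever arrives within the batching window and emits one joined message. This is only the shape of such an adaptor, not the teloxide Requester API (spawn_batcher and both channels are hypothetical):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};

/// Spawn a background worker that batches incoming message texts.
/// Returns a sender to queue messages and a receiver yielding the
/// joined batches that would actually be sent to Telegram.
fn spawn_batcher(
    batching_delay: Duration,
    separator: &'static str,
) -> (mpsc::Sender<String>, mpsc::Receiver<String>) {
    let (in_tx, in_rx) = mpsc::channel::<String>();
    let (out_tx, out_rx) = mpsc::channel::<String>();
    thread::spawn(move || {
        // Block until the first message of a batch arrives...
        while let Ok(first) = in_rx.recv() {
            let mut batch = vec![first];
            // ...then collect everything else that lands within the window.
            let deadline = Instant::now() + batching_delay;
            loop {
                let remaining = deadline.saturating_duration_since(Instant::now());
                match in_rx.recv_timeout(remaining) {
                    Ok(msg) => batch.push(msg),
                    Err(_) => break, // window elapsed (or sender dropped)
                }
            }
            let _ = out_tx.send(batch.join(separator));
        }
    });
    (in_tx, out_rx)
}
```

A real teloxide adaptor would be async (tokio channels/tasks, like the throttle adaptor) and would also keep per-request parameters such as chat id, but the queue-window-join loop would look similar.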

@WaffleLapkin WaffleLapkin added the feature-accepted Feature Request or Proposal was accepted by maintainers and will be implemented label Nov 17, 2022
@dec0dOS
Author

dec0dOS commented Nov 17, 2022

I'm pretty much a novice in Rust, so I would be grateful for the mentorship :)

@WaffleLapkin WaffleLapkin assigned dec0dOS and unassigned WaffleLapkin and Hirrolot Nov 17, 2022
@dec0dOS
Author

dec0dOS commented Nov 17, 2022

It seems the code of the throttle module is still in the teloxide-core repo, but the README says that teloxide-core isn't used anymore.

Should I fork both teloxide-core and teloxide?

@WaffleLapkin
Member

You should switch to the dev branch of this repo; it contains the teloxide-core source code.

@dec0dOS
Author

dec0dOS commented Nov 17, 2022

If I understand correctly, much of the throttle functionality would be reused in the batcher adaptor. Should throttle be decomposed so that the batcher adaptor can share the same functionality?

@WaffleLapkin
Member

I don't think that we can make a single abstraction that can be used by both throttle and batcher. While they share some things, they differ a lot in details.

@WaffleLapkin WaffleLapkin added the A-requester Area: requester trait, bot adaptors label Jan 21, 2024