Any way to check if all messages sent before calling client.Close()? #157
I would use the callback API to determine this. There's an example CLI that sends a single event and waits until it succeeds or fails: https://github.com/segmentio/analytics-go/blob/v3.0/cmd/cli/main.go#L85-L96
Thanks for the example @f2prateek. My concern with that approach is that any success or failure would write to the channel, which would cause my function to return. Again, since this runs in AWS Lambda (as opposed to a long-running service), it exits main() immediately after that, so anything still pending would be lost. Please correct me if I'm misunderstanding, but it seems like this approach doesn't let me check whether the msgs channel has fully drained.
I'm not sure what you mean; you can wait for the success or error response like this https://github.com/segmentio/analytics-go/blob/v3.0/cmd/cli/main.go#L68-L70 in your Lambda before finishing function execution.
Sorry, I should have said that I'm starting a client, sending multiple messages, then closing the client. That example works for one message, or when BatchSize/Interval are set so that my code is waiting at the callback channel before anything is sent and all my messages go out in one batch. Otherwise the callback fires when some of the messages have been sent; that might be all of them, or it might not.
Here's an example: https://play.golang.org/p/Uk1zB_bnyLx
It prints a different number of results each time. You can also hack some debugging messages directly into the library code to see this.
The callback will be invoked once per message. In your example, you are sending 100 messages, so with this implementation you should wait on the callback channel 100 times if you want to ensure all 100 messages have been sent or failed.
You don't HAVE to use a channel either (that's just what this example uses); e.g. you could also use a WaitGroup that is incremented before calling Enqueue and decremented in the callback implementation, and then call Wait before closing the client.
Counting messages only works if I set BatchSize to one; otherwise the number of times the callback is triggered depends on the BatchSize and Interval settings. The same issue applies to using a WaitGroup in the callback. The Close() function nearly does what I'm asking: it waits for the queued messages to be sent (lines 272 to 275 in c98da3f).
I'm sure there's a reason that it dumps the messages and gives up on retrying, but perhaps there could be an alternative that allows message retries to still happen?
I find it pretty weird that in the code it says that it is draining messages on Close(), yet messages can still be dropped.
So am I missing something, or can you just end up with no messages actually sent when you call Close()?
Oh, I see.
@defrankland's issue is still a thing. Is a bugfix on its way?
Getting the same issue while tracking a lot of events and then calling Close().
Please implement a blocking Flush(). This should help with ensuring that all messages have been sent before the client is closed.
Using analytics-go.v3. The `Client` interface has a comment that outlines the client setup. I'm doing that, but I get a very occasional error like `segment 2019/09/16 13:43:38 ERROR: 15 messages dropped because they failed to be sent and the client was closed`. From the code it looks like I hit a failure to send, but the `client.Close()` call happened before the retry was attempted. Calling `client.Close()` closes the `quit` channel, printing that error message and dropping the unsent messages.

Is there currently a way to check if all messages sent before calling `client.Close()`? I did see the config.Callback, but that doesn't seem like what I want, as I want to make sure ALL messages have sent or failed all retries before calling `client.Close()`.

I should also add that I'm running this in AWS Lambda, so I'd rather not just add a `sleep` before calling `Close()`.

Thanks!