go-stream-locator #210
Comments
If you could provide a repository or source code that reproduces the issue, that would help us understand the problem.
Or a Wireshark traffic capture (e.g. of TCP port 5552, the default stream port) and the exact steps used to "throttle RabbitMQ". Guessing is a very expensive way of troubleshooting distributed systems.
Also, you should have some logs on the client side. For reference, a loop that repeatedly declares, uses, and deletes streams:

```go
package main

import (
	"bufio"
	"fmt"
	"os"

	"github.com/google/uuid"
	"github.com/rabbitmq/rabbitmq-stream-go-client/pkg/amqp"
	"github.com/rabbitmq/rabbitmq-stream-go-client/pkg/stream"
)

// CheckErr aborts on the first error, mirroring the helper used in the
// client's example code.
func CheckErr(err error) {
	if err != nil {
		fmt.Printf("error: %s\n", err)
		os.Exit(1)
	}
}

func main() {
	reader := bufio.NewReader(os.Stdin)
	env, err := stream.NewEnvironment(
		stream.NewEnvironmentOptions().
			SetVHost("/"))
	CheckErr(err)
	fmt.Print("Starting test")
	waitChan := make(chan struct{}, 1)
	for i := 0; i < 5000; i++ {
		waitChan <- struct{}{}
		if i%500 == 0 {
			fmt.Printf(" progress:%d \n", i)
		}
		go func() {
			streamName := uuid.New().String()
			// Use := so the goroutine does not race on the outer err.
			err := env.DeclareStream(streamName, nil)
			CheckErr(err)
			producer, err := env.NewProducer(streamName, nil)
			CheckErr(err)
			producer.Close()
			consumer, err := env.NewConsumer(streamName, func(consumerContext stream.ConsumerContext, message *amqp.Message) {
				fmt.Printf("message received %s \n", message.GetData())
			}, nil)
			CheckErr(err)
			consumer.Close()
			_ = env.DeleteStream(streamName)
			<-waitChan
		}()
	}
	fmt.Print("Test completed")
	// Block until Enter so in-flight goroutines can finish.
	_, _ = reader.ReadString('\n')
}
```
Thank you for your fast replies. Will try to create an example project today that reproduces the issue. During our testing we throttled RabbitMQ in Kubernetes by setting resource limits, giving the pod a maximum of … Will reply later once we have a repo with an example reproducing the issue. Thanks!
Describe the bug
Our service, which uses rabbitmq-stream-go-client, gets stuck at random times, resulting in downtime.
The issue starts when the RabbitMQ server logs "unknown command":
```
unknown command {request,11,{close,1,<<79,75>>}}, closing connection.
unknown command {response,18,{close,1}}, closing connection.
```
After seeing the unknown command, our service fails to create new consumers. Instead, we see an increasing number of go-stream-locator connections in the RabbitMQ UI. This goes on until our service runs out of memory and restarts, so the issue is that the go-stream-locator connections get stuck forever. (Usually around 30-40k connections, all of them go-stream-locators, before our service OOMs.)
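One defensive pattern for this failure mode is to bound the blocking NewConsumer call with a deadline, so a stuck locator cannot wedge the whole service. The following is a minimal sketch, not part of the library's API; the helper newConsumerWithTimeout, the package name, and the cleanup strategy are all illustrative assumptions:

```go
package workaround

import (
	"errors"
	"time"

	"github.com/rabbitmq/rabbitmq-stream-go-client/pkg/stream"
)

// newConsumerWithTimeout is a hypothetical guard: it runs the blocking
// NewConsumer call in a goroutine and gives up after the deadline.
func newConsumerWithTimeout(env *stream.Environment, streamName string,
	handler stream.MessagesHandler, timeout time.Duration) (*stream.Consumer, error) {

	type result struct {
		consumer *stream.Consumer
		err      error
	}
	done := make(chan result, 1)
	go func() {
		c, err := env.NewConsumer(streamName, handler, nil)
		done <- result{c, err}
	}()
	select {
	case r := <-done:
		return r.consumer, r.err
	case <-time.After(timeout):
		// If the call completes later anyway, close the consumer so the
		// late connection does not leak.
		go func() {
			if r := <-done; r.err == nil {
				_ = r.consumer.Close()
			}
		}()
		return nil, errors.New("timed out waiting for NewConsumer")
	}
}
```

This does not fix the stuck locator itself, but it lets the caller fail fast and retry instead of accumulating blocked goroutines.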
Reproduction steps
...
Expected behavior
For the go-stream-locator connections to go away and messages to be delivered to the consumers, instead of hanging and taking up resources until the service crashes.
Additional context
We have had this issue for 6 months now. We have reduced its frequency by opening and closing connections less often, but it still happens at random even with plenty of resources available. We are using RabbitMQ 3.11.14 and rabbitmq-stream-go-client 1.1.2.
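On reducing connection open/close churn: the client can multiplex many producers and consumers over a few long-lived TCP connections. A minimal sketch, assuming the SetMaxProducersPerClient / SetMaxConsumersPerClient options documented in the client README; the limit of 10 is an illustrative value:

```go
package main

import "github.com/rabbitmq/rabbitmq-stream-go-client/pkg/stream"

func main() {
	// One long-lived Environment shared by the whole service; producers
	// and consumers are multiplexed over a small pool of connections.
	env, err := stream.NewEnvironment(
		stream.NewEnvironmentOptions().
			SetVHost("/").
			SetMaxProducersPerClient(10).
			SetMaxConsumersPerClient(10))
	if err != nil {
		panic(err)
	}
	defer env.Close()
	// ... create all producers and consumers from this single env ...
}
```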