We are using FROST-Server (v1.14.0) with Kubernetes.
After investigating our last issues, it looks like the pods have to start in the correct order:
1. Database
2. HTTP
3. MessageBus
4. MQTT
If MQTT starts before the Database module or the MessageBus, it fails and never tries to reconnect.
The only way to solve this is to restart the pod manually.
Is this a known problem, or is it a misconfiguration on our side?
Many thanks in advance.
Best regards,
Marc Beckord
Yes, it's a known issue. I've already improved it in the 2.0 branch. Maybe I can backport the improvements.
Note that the MessageBus should be at position 2 in that list (or start together with the DB at position 1).
To start the pods in the correct order, we would need to write a Kubernetes operator of our own.
Unfortunately, Kubernetes cannot express service dependencies the way Docker Swarm does.
It would be great if you could keep me up to date on backporting the improvements.
As mentioned by @hylkevds, this is a known issue. It is something that should be addressed in the Kubernetes manifest files and the Helm chart. An operator is not needed in this case: it can be solved with init containers that wait until the dependent services are available.
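As a sketch of the init-container approach: a small container blocks pod startup until the dependency accepts connections. The Service name `database`, the port `5432`, and the image are assumptions; adjust them to whatever names your manifests or Helm values actually use.

```yaml
# Hypothetical fragment for the MQTT Deployment's pod spec.
# "database" and 5432 are assumed names, not FROST-Server defaults.
spec:
  initContainers:
    - name: wait-for-database
      image: busybox:1.36
      command:
        - sh
        - -c
        # Loop until the database Service accepts TCP connections,
        # then let the main MQTT container start.
        - until nc -z database 5432; do echo "waiting for database"; sleep 2; done
```

The same pattern can be repeated with a second init container waiting for the MessageBus, so the MQTT container only ever starts after both dependencies are reachable.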
Unfortunately, init containers only cover the initial startup. We have also observed issues where already running components (HTTP/MQTT) stopped working after others (DB/MessageBus) were restarted, for example after being moved to another node. This could be a different issue, though.
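One way to cover that running-components case without an operator is a liveness probe, so Kubernetes restarts a pod that has lost its connections instead of requiring a manual restart. A sketch, assuming the MQTT container listens on port 1883 (the port and the timing values are assumptions):

```yaml
# Hypothetical livenessProbe for the MQTT container. If the broker
# stops accepting TCP connections after a dependency restart,
# Kubernetes kills and recreates the container automatically.
livenessProbe:
  tcpSocket:
    port: 1883
  initialDelaySeconds: 30   # give dependencies time on first start
  periodSeconds: 10
  failureThreshold: 3       # restart after roughly 30s of failed checks
```

Note that a plain TCP check only detects a dead listener; if the process keeps the port open while its bus connection is gone, an application-level health endpoint would be needed instead.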