Hot Reloading lvmd.conf #785
Thank you for your post. I suggest you use Reloader. It is often referred to in similar issues, such as the following.
This tool is generic and will also help if you have similar problems with other products.
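For reference, Reloader watches ConfigMaps and Secrets and restarts the workloads that reference them. A minimal sketch of how it could be wired up here, assuming the lvmd config is mounted from a hypothetical ConfigMap named `lvmd` (this is not TopoLVM's actual chart, just an illustration of Reloader's documented annotation):

```yaml
# Hypothetical DaemonSet fragment. The reloader.stakater.com/auto
# annotation tells Reloader to roll the pods whenever a ConfigMap or
# Secret referenced by this workload changes. Selector/labels omitted.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: topolvm-node
  annotations:
    reloader.stakater.com/auto: "true"
spec:
  template:
    spec:
      containers:
        - name: topolvm-node
          volumeMounts:
            - name: lvmd-config
              mountPath: /etc/topolvm   # hypothetical mount path
      volumes:
        - name: lvmd-config
          configMap:
            name: lvmd                  # hypothetical ConfigMap name
```

Note that this automates a restart; it does not reload the configuration in-process.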
To be honest, I am not sure why we should restart topolvm in this case. It is a global variable that we can simply update on demand without issues, avoiding a restart completely.
lvmd.yaml is not global. It is intended to configure settings on a per-node basis. For example, when using groups of nodes with different specs, we intended each group to have a different lvmd.yaml. It was difficult to express such settings in Helm, which may have misled you.
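To illustrate the per-node intent, two node groups could carry different lvmd configs along these lines (a sketch based on the documented `device-classes` format; the class and volume-group names here are hypothetical):

```yaml
# lvmd.yaml for an SSD-backed node group
device-classes:
  - name: ssd
    volume-group: vg-ssd   # hypothetical VG name
    default: true
    spare-gb: 10
---
# lvmd.yaml for an HDD-backed node group
device-classes:
  - name: hdd
    volume-group: vg-hdd   # hypothetical VG name
    default: true
    spare-gb: 50
```

A single global value cannot express this, which is why the file is read per node.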
I think this is not a common use case, because the bug where a missing default device class caused configuration failures on different nodes was only discovered a few days ago.
I understand. However, even if a volume group changes, we would need to restart topolvm-node and thus have downtime, which we would like to avoid. The tool you suggested just automates the restart, which is not something we want or need. If you want, we can hide this functionality behind a feature flag.
I don't understand why you're worried about downtime. When we update the image, the pods are restarted, so downtime is unavoidable.
Downtime during an upgrade is unavoidable, yes, but that is part of the regular SRE maintenance windows of any operational cluster, right? I am talking about configuration changes. Either way, this is not a major issue for us: we will integrate it into lvm-operator and trigger a restart from there based on a file watch. It does not complicate our logic, so it is okay for us if this is not in upstream, and we are not pushing for it. I still think there is room for improvement here, though. How do you protect users from expecting that the correct configuration is in use when topolvm-node has not been restarted?
Most users don't change their configuration very often, so I guess it's acceptable to accept the downtime of a pod restart to re-read lvmd.conf.
Thanks, I'd be glad if you do so.
The same can be said for any application that doesn't support hot-reloading of configuration files. It might be nice to document this behavior. For instance, Rook does so.
Closing as per the discussion above.
What should the feature do:
Currently, topolvm-node reads the lvmd.conf file into a global variable on startup, based on a command-line flag. In our tests we already mount the file from a ConfigMap. However, any changes to the ConfigMap or the file after topolvm-node has started are not respected until a restart. It would be great to add a "hot-reload" capability for lvmd.conf that takes care of reloading topolvm-node when the content changes.
Ways to solve this:
What is the use case behind this feature:
Our lvmd config entries change after the initial configuration and startup. We can call it part of normal "day-2 operations" in topolvm.
We are happy to provide a solution if the use case is accepted.