Hello! Thank you for your excellent work in helping me solve many problems! However, I have some questions and I sincerely ask for your assistance:
Question 1:
In a federated learning setup built on Flower and PyTorch, I found that the initial model a client loads before training is local. However, in some cases I need the client to train from an initial model provided by the server (for example, to continue training from the global model obtained in a previous run).
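One way to do this, assuming you use a standard Flower strategy such as FedAvg, is to pass the server's weights via the strategy's `initial_parameters` argument; the client then applies the weights it receives in `fit()` before training. A minimal sketch (the helper names `get_parameters`/`set_parameters` are illustrative, not part of Flower):

```python
# Sketch of supplying a server-side initial model to clients, assuming
# Flower's FedAvg strategy and a PyTorch model. Requires `torch`; the
# Flower-specific lines are shown as comments.
from collections import OrderedDict

import torch
import torch.nn as nn


def get_parameters(net: nn.Module):
    """Extract model weights as a list of NumPy ndarrays (Flower's wire format)."""
    return [val.cpu().numpy() for val in net.state_dict().values()]


def set_parameters(net: nn.Module, parameters):
    """Load a list of ndarrays (e.g. the server's global model) into the model."""
    params_dict = zip(net.state_dict().keys(), parameters)
    state_dict = OrderedDict({k: torch.tensor(v) for k, v in params_dict})
    net.load_state_dict(state_dict, strict=True)


# Server side (hypothetical model `net`; requires `flwr`):
# import flwr as fl
# strategy = fl.server.strategy.FedAvg(
#     initial_parameters=fl.common.ndarrays_to_parameters(get_parameters(net)),
# )
# fl.server.start_server(strategy=strategy, ...)
#
# Client side: in your NumPyClient's `fit(self, parameters, config)`, call
# `set_parameters(net, parameters)` before training so the round starts from
# the server-provided weights rather than the local checkpoint.
```

With `initial_parameters` set, the strategy skips asking a random client for its initial weights, so round 1 already starts from your saved global model.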
Question 2:
During training, under certain conditions (e.g., a specific port is accessed and an immediate halt is requested), how can I make the server or a client proactively stop the training process?
I noticed that the disconnect_all_clients function stops the federated learning process once the server exceeds the specified number of rounds. Can I modify it to achieve my goal?
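Rather than modifying server internals like disconnect_all_clients, one option is to set a stop flag from outside (here, a small TCP listener on a port of your choosing) and have a custom strategy check the flag each round. This is a sketch under assumptions: the port number is arbitrary, and the `StoppableFedAvg` subclass is hypothetical, not a Flower API:

```python
# Sketch: an external "stop" signal for a training loop. Any TCP connection
# to the listener port sets a threading.Event; a custom strategy (commented
# below, hypothetical) can then stop dispatching work to clients.
import socket
import threading

stop_event = threading.Event()


def stop_listener(host: str = "127.0.0.1", port: int = 50515):
    """Wait for a single connection; any connection requests a stop."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        conn.close()
        stop_event.set()


# Run the listener in the background so it does not block training.
threading.Thread(target=stop_listener, daemon=True).start()

# Hypothetical integration with Flower (requires `flwr`): subclass the
# strategy and return no fit instructions once the flag is set, so the
# remaining rounds become no-ops and the server winds down at num_rounds:
#
# import flwr as fl
#
# class StoppableFedAvg(fl.server.strategy.FedAvg):
#     def configure_fit(self, server_round, parameters, client_manager):
#         if stop_event.is_set():
#             return []  # no clients are asked to train this round
#         return super().configure_fit(server_round, parameters, client_manager)
```

On the client side, the same event can be checked inside the local training loop to abort an in-progress epoch early; exactly how abruptly you can stop mid-round depends on your Flower version, so treat this as a pattern rather than a guaranteed immediate shutdown.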
Thank you for your assistance!