Access the learned controller? #37
Replies: 5 comments 11 replies
-
Hi @wissamkafa, sorry for the late reply, I missed your message again. I would suggest giving it more data points that are information-rich, i.e. taken from different experiments. Regarding how to inspect the MPC: the trained MPC behaves exactly like a normal neural network class, so you can call …
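Since the end of the reply is cut off, here is a minimal, self-contained sketch of the idea of calling the trained network directly on features and comparing its output with the MPC labels. `LearnedController`, its `predict` method, and all the numbers are stand-ins invented for illustration, not HILO-MPC API:

```python
import numpy as np

# Stand-in for the trained network: in HILO-MPC the trained ANN object
# plays this role (the exact call is cut off in the reply above).
class LearnedController:
    def __init__(self, W, b):
        self.W, self.b = W, b

    def predict(self, features):
        # One linear layer as a minimal stand-in for the trained MLP.
        return self.W @ np.atleast_1d(features) + self.b

ann = LearnedController(W=np.array([[0.5, -0.2]]), b=np.array([0.1]))

# Feed the same features the MPC saw and compare outputs pointwise.
features = np.array([1.0, 2.0])     # e.g. the current state
u_learned = ann.predict(features)   # network's control action
u_mpc = np.array([0.2])             # hypothetical MPC label for this state
print(np.abs(u_learned - u_mpc))    # pointwise deviation from the MPC
```

The point is only the pattern: the learned controller is an ordinary callable object, so you can probe it feature vector by feature vector, independently of any closed-loop simulation.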
-
Hi Wissam, let's go through your issues one at a time:

**Problem with the controller not behaving well**

This is probably due to the fact that the neural network is seeing very little data. When you run a single closed-loop simulation, you are collecting only the data of that closed loop, in other words, the orange lines that you see in these plots. This is very little information for the neural network: it might do well as long as the system stays very close to the orange line, but as soon as there is even a very small deviation, the controller destabilizes the system. One idea to fix this is the following: run several experiments, collect their data into one dataset (data_set_tot), and then train with data_set_tot. One suggestion when testing the learned MPC: start easy, e.g. by setting the initial conditions and the reference to an equilibrium point; in this case the system should stay at the equilibrium. Make sure that the areas close to the equilibrium point are in the training dataset.

**How to add external data to the dataset**

The previous example should clarify this. Just create a dataset (empty, if you want) and then add features and labels using 'add_data'.

**How to get the trained weights from the trained NN**

I tried but I could not find a way; surely @jpohlodek can help with that?
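The "create a dataset and add_data" recipe above can be sketched as follows. `DataSet` and `run_experiment` here are hypothetical minimal stand-ins written for this sketch (HILO-MPC's own dataset class may differ); the point is the pattern of merging several experiments before training:

```python
import numpy as np

# Hypothetical minimal dataset with an add_data(features, labels) method,
# mirroring the recipe above (HILO-MPC's actual class may differ).
class DataSet:
    def __init__(self):
        self.features, self.labels = [], []

    def add_data(self, features, labels):
        self.features.append(np.asarray(features))
        self.labels.append(np.asarray(labels))

    def stacked(self):
        # One big feature/label matrix for training.
        return np.vstack(self.features), np.vstack(self.labels)

def run_experiment(x0, n_steps=50):
    # Placeholder closed-loop run: returns (states, control actions).
    # In practice these come from simulating the MPC-controlled system.
    rng = np.random.default_rng(int(abs(x0[0]) * 100))
    states = x0 + 0.1 * rng.standard_normal((n_steps, len(x0)))
    controls = -0.5 * states[:, :1]   # toy feedback law standing in for MPC labels
    return states, controls

data_set_tot = DataSet()
for x0 in ([1.0, 0.0], [-1.0, 0.5], [0.0, -0.5]):   # varied initial conditions
    X, U = run_experiment(np.array(x0))
    data_set_tot.add_data(X, U)

X_tot, U_tot = data_set_tot.stacked()
print(X_tot.shape, U_tot.shape)   # 3 experiments x 50 steps each
```

Varying the initial conditions (and, if the controller tracks a setpoint, the references) across experiments is what gives the network coverage beyond the single orange trajectory.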
-
@BrunoMorabito Continuing our last discussion about Learning_MPC: actually, the total dataset could not be created in this way (or I have missed something), because it needs to have a special structure:
I think the controller (ANN) needs a setpoint reference, but right now we are only giving the initial condition and it runs with no reference. Is that the case, or have I missed something?
-
Did this plot both the original NMPC and the learned ANN?
-
Dear Bruno,

After our last discussion, I want to report the improvement in working with HILO. You were right that the training data created by running one simulation is not enough. After running hundreds of experiments to choose the best network structure and parameters, I got good performance, as shown in the photos below. But I have noticed something unusual (or I think so) that I want to share:
For now, I'm trying to simulate a more complex system (the first one was a simplified version); the system I have is of the form: The problem here is that when trying to create the NMPC with HILO, I always get "infeasible_problem_detected".
I keep getting the same error: "infeasible_problem_detected".
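One common cause of "infeasible_problem_detected" (independent of HILO-MPC) is an initial condition or reference that already violates the box constraints, which makes the very first optimization problem infeasible before the solver even starts. A quick sanity check can rule that out; this is a generic sketch with made-up bounds, not HILO-MPC API:

```python
import numpy as np

def check_box_feasibility(name, x, lb, ub, tol=1e-9):
    """Report which components of x violate the box [lb, ub]."""
    x, lb, ub = map(np.asarray, (x, lb, ub))
    bad = (x < lb - tol) | (x > ub + tol)
    if bad.any():
        print(f"{name}: components {np.flatnonzero(bad).tolist()} violate bounds")
    return not bad.any()

# Hypothetical numbers: x0[1] lies below its lower bound, so any NMPC
# problem built with these bounds would be infeasible from the start.
lb = np.array([-1.0, 0.0, -5.0])
ub = np.array([1.0, 2.0, 5.0])
ok_x0 = check_box_feasibility("x0", [0.5, -0.1, 0.0], lb, ub)
ok_ref = check_box_feasibility("ref", [0.0, 1.0, 0.0], lb, ub)
```

Checking the initial state, the reference, and any terminal constraint against the bounds (and temporarily widening or removing constraints to see whether the error disappears) usually narrows down which constraint makes the problem infeasible.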
-
Good evening.
@BrunoMorabito
I am trying to learn an MPC controller for a specific system (described by third-order differential equations).
While training, the loss decreases to almost zero, but when I plot the results after running the learned controller with

```python
scl = SimpleControlLoop(system, ann)
scl.run(n_steps)
```

the results look very different and do not follow the labels.
1 - Shouldn't the loss values indicate the effectiveness of the learned controller?
2 - How can I access the learned controller (the network) to directly pass values (features) and inspect the corresponding output, so I can compare it with the original MPC controller?
3 - Has HILO-MPC been tested with nonlinear models of higher order?
Original MPC Controller:
Learned Controller:
Loss: