Fitting time domain data of electrical circuit - Best Algorithm? #2
Hello DerAlbiG, thanks for using least-squares-cpp. I hope I can help you with your issue. Generally, the performance of your optimization highly depends on your algorithm configuration. Now, a question that bothers me is: which algorithm and step size functor do you use in your python version? Is it equivalent to your C++ implementation (GaussNewton and ArmijoBacktracking)? As a first measure, maybe you can try the following configuration and see how the performance and results are:

```cpp
lsq::LevenbergMarquardt<double, MyErrorFunctor> optimizer;
optimizer.setMaxIterations(100);
optimizer.setMaxIterationsLM(100);
optimizer.setLambdaIncrease(2);
optimizer.setLambdaDecrease(0.5);
optimizer.setVerbosity(2);
optimizer.setThreads(0);
```

Generally, if you are not sure which algorithm fits your optimization problem, LevenbergMarquardt is a good and robust choice, because it is a combination of GaussNewton and GradientDescent. Also, you don't need a line search algorithm here, which saves computation time.
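If it helps, here is a minimal, self-contained sketch of how such a configured optimizer is typically driven. The header name and the minimize()/result interface are assumed from the library's README and may differ slightly between versions, and the error functor is only a toy placeholder:

```cpp
#include <iostream>
#include <lsqcpp.h>  // single-header least-squares-cpp (header name assumed)

// Toy placeholder functor; leaving the jacobian untouched makes the library
// fall back to estimating it with finite differences.
struct MyErrorFunctor
{
    void operator()(const Eigen::VectorXd &xval,
                    Eigen::VectorXd &fval,
                    Eigen::MatrixXd &) const
    {
        fval.resize(2);
        fval(0) = xval(0) * xval(0) - 2.0;  // toy residuals, just to make it runnable
        fval(1) = xval(1) - 1.0;
    }
};

int main()
{
    lsq::LevenbergMarquardt<double, MyErrorFunctor> optimizer;
    optimizer.setMaxIterations(100);
    optimizer.setVerbosity(2);

    // Initial guess for the unknowns.
    Eigen::VectorXd initialGuess(2);
    initialGuess << 1.0, 0.0;

    auto result = optimizer.minimize(initialGuess);
    std::cout << "converged: " << result.converged
              << ", iterations: " << result.iterations << "\n"
              << "xval: " << result.xval.transpose() << std::endl;
    return 0;
}
```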
If this does not work, feel free to send me your error functor and some sample data and I would love to play around with it ;)
So, just so I get a clearer picture: result quality is OK for you for now? So the first priority is to improve performance, correct?

In terms of performance there are a few more things you can try. First off, compile your C++ project in "Release" mode (if you use CMake, set CMAKE_BUILD_TYPE to Release). Make sure you enable the OpenMP option for your compiler, so that multithreading is enabled for the optimization process (see here).

Now, you say you limited the number of evaluations of your error function to 300. How did you get / count that number? Keep in mind that any optimization algorithm of this library evaluates your error function several times per iteration: once for the current error value, at least once per unknown if the jacobian has to be estimated by finite differences, and a few more times during line search. The safest way to count would be to add a counter to your error function, but be aware that it has to be thread-safe; a sketch follows below.
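For the thread-safety part, a plain std::atomic counter is the simplest option. The functor shape below is only illustrative and has to match whatever operator() signature your version of the library expects:

```cpp
#include <atomic>

// Illustrative only: the operator() signature has to match the error functor
// interface of the least-squares-cpp version you are using.
struct CountingErrorFunctor
{
    // atomic, so concurrent evaluations from OpenMP threads don't lose increments
    static inline std::atomic<long> evaluations{0};

    template<typename Vector, typename Matrix>
    void operator()(const Vector &xval, Vector &fval, Matrix &) const
    {
        ++evaluations;   // count one call of the error function
        fval = xval;     // placeholder: the real residual computation goes here
    }
};
```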
Again, keep in mind that all this accounts for a single iteration of the optimization algorithm. I had a look at the scipy function you mentioned. So as I understand you, you actually have a constrained optimization problem, and thus you used the constrained variant of the scipy solver. Can you tell me how big your problem space is (so the dimension of your xval / unknowns)? And how big is your error space (dimension of fval / number of error values)? Do I also understand correctly that the unknowns in your system are the component values of your circuit?
To contact me, simply use the email address on my profile page, so you can send me sample data and code snippets, but I'd like to keep the general discussion public here.
Complicated. I need both :-) I currently have the python reference implementation of the error function running, and that is slow but yields the best and most complete results. I have also tried to simplify the error function, and that runs very well with nice convergence and a low number of iterations / error function calls, but it does not intrinsically give me everything I need.

In my system I basically know which values to expect. So learning can initially take a long time (one time), but then, with a very good initial guess, I need fast convergence to observe component drift due to temperature change or mechanical stress/damage. One fitting procedure should not take longer than 5..10 ms on an STM32 (ARM Cortex-M4 with hardware floating point support, including multiply-accumulate) at 168 MHz. Roughly speaking, I have 1 million cycles to spare. This is why I am currently not really looking at time as a measure of performance, but at the number of iterations and calls to the error function. And yes, I use a global counter in the error function. I haven't figured out / tried whether I can change the default double to float, or even to a CustomFloat that counts every operation on it through operator overloading - that would be the best metric, since it would capture the background operations too (see the sketch below).

The test sequence consists of roughly 2000 samples. This can be optimized with possible sub-sampling or other methods. The unknowns in my equation are C0, C1, C2 (that is the formula for the simplified model). I_n is "current" (sorry, electrical stuff) and I_(n+1) is "the next current value". It is basically iteratively computing an integral: I_(n+1) = I_n + Delta_I * dt. This formula is repeated for every sample. Then it is checked how "I" develops compared to the measurement on a sample-by-sample basis. That difference ("Measurement - ModelPrediction"), or the square of it, is the error vector, and currently it is ~1000 entries long.

I unfortunately have no deeper understanding of the math beneath the surface. Although "Jacobian" is not an unheard-of word to me, I don't really know what it is or how to compute it. If providing the matrix would speed up the process, I am willing to learn. But currently I am struggling with quite some doubts. My measurement data was captured with very inferior hardware, and over the last year I made some good progress in that regard. As I understand it, fitting to better data with less noise and greater adherence to the actual model (let's call it "fewer second order effects") should also increase convergence speed. Measuring new data is unfortunately somewhat dangerous and requires a lot of work. It will take me 3..4 weeks to literally get a better picture.

I have made the most progress by tuning the ArmijoBacktracking parameters. Most important was seemingly the reduction of "c1" for faster convergence (compared to the sample code). But I am really a stupid monkey here - just trial and error. I don't know what I am doing.

I will work on my code to clean it up and send it over later. Just a warning: I am not good with code distribution, as I usually don't do it. And it is C++20.
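A minimal sketch of what such an operation-counting type could look like (standalone, not tied to the library; using it as the optimizer's scalar type would additionally need Eigen NumTraits support and more operators):

```cpp
#include <cstdint>
#include <iostream>

// Minimal counting float: every arithmetic operation increments a global counter.
struct CountingFloat
{
    static inline std::uint64_t ops = 0;
    float value = 0.0f;

    CountingFloat() = default;
    CountingFloat(float v) : value(v) {}

    friend CountingFloat operator+(CountingFloat a, CountingFloat b) { ++ops; return {a.value + b.value}; }
    friend CountingFloat operator-(CountingFloat a, CountingFloat b) { ++ops; return {a.value - b.value}; }
    friend CountingFloat operator*(CountingFloat a, CountingFloat b) { ++ops; return {a.value * b.value}; }
    friend CountingFloat operator/(CountingFloat a, CountingFloat b) { ++ops; return {a.value / b.value}; }
};

int main()
{
    CountingFloat i = 0.0f, di = 0.5f, dt = 0.001f;
    for (int n = 0; n < 2000; ++n)
        i = i + di * dt;  // I_(n+1) = I_n + Delta_I * dt, one multiply and one add per sample
    std::cout << CountingFloat::ops << " float operations\n";  // prints 4000
}
```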
In your case you can try another option. One note: for your error function you should use the plain difference (measurement - model_prediction), not the square of it. Least-squares optimization algorithms take care of calculating the correct squared error and assume a non-squared error vector as input.

The jacobian - simplified - is the gradient for multi-dimensional functions. When you have a problem space of 8 unknowns and an error vector of 1000 results, then the jacobian is a 1000x8 matrix, where each row is the gradient of the corresponding component of the error vector.
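To make the "plain difference" point concrete, here is a rough sketch of how such an error vector could be assembled for the iterative model you described; deltaI() is purely a hypothetical stand-in for whatever your circuit model actually computes from C0, C1, C2 and the excitation:

```cpp
#include <Eigen/Core>

// Hypothetical model step: replace with the actual Delta_I expression of the circuit.
static double deltaI(double c0, double c1, double c2, double excitation, double current)
{
    return c0 * excitation - c1 * current - c2;  // placeholder, not the real model
}

// Builds the residual vector: plain (measurement - prediction) per sample,
// NOT squared -- the least-squares algorithm squares and sums it internally.
Eigen::VectorXd residuals(const Eigen::Vector3d &c,
                          const Eigen::VectorXd &excitation,
                          const Eigen::VectorXd &measuredCurrent,
                          double dt)
{
    Eigen::VectorXd fval(measuredCurrent.size());
    double current = measuredCurrent(0);  // start the integration at the first sample
    for (Eigen::Index n = 0; n < measuredCurrent.size(); ++n)
    {
        fval(n) = measuredCurrent(n) - current;  // plain difference per sample
        current += deltaI(c(0), c(1), c(2), excitation(n), current) * dt;  // I_(n+1) = I_n + Delta_I * dt
    }
    return fval;
}
```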
If you have access to a good scientific library, you could also check if the following book is available: https://www.springer.com/de/book/9780387303031

For your armijo backtracking configuration, try the following:
Basically:

- The decrease should not be chosen too low, to avoid divergence of the step size calculation.
- c1 should be chosen quite small, to relax the armijo condition and improve convergence (as you also figured out).
- The minimum step size should not be chosen too big, because otherwise gauss newton might not converge (the minimum step size might be too big to achieve the convergence guarantee), but it should also not be too small, because then we might spend too much time searching for a suitable step size.
- The maximum step size is usually chosen as 1.0, because we want to accept a full step if it satisfies the armijo condition.
- Use a maximum number of iterations to avoid too many or near-infinite computations during line search: just to be safe, you never know what numeric computations might do in real-life applications if you have not fully analyzed your optimization problem (convexity of the objective, definiteness of the jacobian, and so on) ;)
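Purely as an illustration of where these knobs live, a sketch follows; the concrete numbers are placeholders rather than a recommendation, and the ArmijoBacktracking constructor argument order is assumed from the library's README, so please verify it against the version you use:

```cpp
// Placeholder values only; argument order assumed to be
// (decrease, c1, minimum step, maximum step, max line-search iterations).
lsq::ArmijoBacktracking<double> stepSize(
    0.8,    // decrease: not too low, to keep the step size calculation from diverging
    1e-4,   // c1: quite small, to relax the armijo condition
    1e-8,   // minimum step size: neither too big nor vanishingly small
    1.0,    // maximum step size: accept a full step if it satisfies the armijo condition
    50);    // bound the number of line-search iterations, just to be safe
```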
Thank you again for your input.
I agree, however I need to fit 2 waveforms at once - they are measured at different excitation voltages. Only when fitting 2 waveforms does the problem have a unique solution. ...I haven't done that, because I simply don't know if the error vector needs some sort of systematic behaviour to it (see the stacking sketch below)?

What bothers me about the algorithm choice is that I had basically no good result with anything other than GaussNewton+Armijo. This could either be the ground truth, or I simply did not configure the other algorithms correctly. But without having had a real competition amongst algorithms, how can I be sure I used the right one? Strange. Your idea to apply different algorithms is viable for sure. Sounds great, to be honest.
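For reference, stacking both measurements requires no special structure in the error vector at all - its entries just need to cover all samples of both waveforms. A sketch reusing the hypothetical residuals() helper from the earlier snippet:

```cpp
#include <Eigen/Core>

// Hypothetical helper from the earlier sketch: plain residuals of one waveform.
Eigen::VectorXd residuals(const Eigen::Vector3d &c,
                          const Eigen::VectorXd &excitation,
                          const Eigen::VectorXd &measuredCurrent,
                          double dt);

// Stack the residuals of both measurements into one error vector; the optimizer
// minimizes the sum of squares over all entries, so no particular ordering or
// "systematic behaviour" is required.
Eigen::VectorXd stackedResiduals(const Eigen::Vector3d &c,
                                 const Eigen::VectorXd &excitationA,
                                 const Eigen::VectorXd &measuredA,
                                 const Eigen::VectorXd &excitationB,
                                 const Eigen::VectorXd &measuredB,
                                 double dt)
{
    Eigen::VectorXd fvalA = residuals(c, excitationA, measuredA, dt);
    Eigen::VectorXd fvalB = residuals(c, excitationB, measuredB, dt);

    Eigen::VectorXd fval(fvalA.size() + fvalB.size());
    fval << fvalA, fvalB;  // simple concatenation
    return fval;
}
```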
Hi Rookfighter,
I am trying to use your least-squares lib to fit measured time-domain signals of an electrical circuit, in order to extract the circuit's component values.
I have a reference implementation in python running, and I have some sort of success using your lib with the GaussNewton method and the ArmijoBacktracking step size algorithm (basically your provided sample, but I use my own error function object).
However, I fail to get the quality of fitting that python provided, and the same error function is much slower in C++ compared to python.
Python obviously may use different algorithms, and I wonder if you would know what algorithm would fit my needs.
And I fully realize that it is a damn specific question, and I don't know how to give more detail about it given this communication method.
The funny thing is that this is supposed to work on a microcontroller and has to run a factor of 10,000x faster in the end. I am a bit lost.