
Performance and possible speedup #163

Open
jonas-eschle opened this issue Jun 2, 2021 · 10 comments

@jonas-eschle

Hi all, we are using flavio to generate and fit heavily. Since we perform many toys, speed becomes a critical issue for us and we started looking into possible ways of speeding flavio up.

Are there any plans or ideas to increase the speed of flavio or anything performance related in the code of flavio? Either on a software level or on a mathematical level such as more caching?

When we looked at common speedups, such as JIT compilation with numba or tracing with JAX or TensorFlow, it seemed that the code is not uniformly written: some parts use math, others numpy. There is also a lot of Python boilerplate (such as a dict for a complex number) and similar constructs, which makes a simple speedup difficult to obtain.

Are there any plans/ideas in this direction?

@jonas-eschle
Author

jonas-eschle commented Jun 2, 2021

> Where?

Admittedly, I don't remember exactly where; it was a bottleneck for JAX and I will find it again, but the fact that I can't point to it right now speaks for itself. It was maybe an unlucky encounter rather than the norm.

Yes indeed, there is already quite a bit of optimization in place, which is what makes further gains non-trivial. Good to hear that this has already been looked into deliberately and that the bottleneck is the same one I found, the integration. What JAX may help with is JIT-compiling the function, removing Python overhead, and possibly providing automatic gradients, in principle. There may be a speedup, but how much is indeed unclear. Since other optimizations are already implemented, this is the only real speedup we could think of in terms of pure coding improvements. But as mentioned, it would require a few changes.

Another option would be a mathematical one: rewriting certain amplitudes, factoring out the q2-dependent part and then keeping a local cache, @Abhijit_Mathad (he's notified by chat).

In other words, these are the ideas we had; the question is also how much we could change flavio for speed, and whether there are other ideas in the back of your head.

@abhijitm08

@DavidMStraub We are talking particularly about FCNC b hadron decays. Since the slowdown comes from integration, as Jonas mentioned, in cases where the amplitudes are linear functions of the parameters of interest (Wilson coefficients and form factors), one could factor out the phase-space-dependent part and cache the integrals. AFAIK, this type of caching is not currently in flavio (please correct me if I am wrong). With this, the first evaluation of an observable would be very slow, but it would really speed up subsequent evaluations at the fitting stage.

@dlanci

dlanci commented Jun 3, 2021

> such as a dict for a complex number

> 😱 Where?

@DavidMStraub, I might be wrong, but isn't this what happens every time a Wilson coefficient dictionary is instantiated during the evaluation of the log-likelihood in a WC fit?
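To illustrate the kind of boilerplate being discussed (the names and functions below are purely illustrative, not flavio's actual API): a dict of Python complex numbers is re-allocated as boxed objects on every likelihood call, whereas a flat numpy array of the same values is amenable to vectorization and to JIT tracing.

```python
import numpy as np

def wc_dict(re_c9, im_c9, re_c10, im_c10):
    # dict-of-complex style: flexible and readable, but pure-Python
    # objects that a tracer/JIT cannot easily see through
    return {"C9": complex(re_c9, im_c9), "C10": complex(re_c10, im_c10)}

def wc_array(params):
    # flat-array style: interleaved [re, im, re, im, ...] packed into
    # one complex array, friendly to vectorization and JIT compilers
    return np.asarray(params[0::2]) + 1j * np.asarray(params[1::2])

d = wc_dict(1.0, 0.5, -4.1, 0.0)
a = wc_array([1.0, 0.5, -4.1, 0.0])
assert d["C9"] == a[0] and d["C10"] == a[1]
```

Both represent the same coefficients; the point is only that the second form keeps the data in one contiguous buffer.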

@peterstangl
Collaborator

> @DavidMStraub We are talking particularly about FCNC b hadron decays. Since the slow down comes from integration, indeed as Jonas mentioned, in cases where the amplitudes are linear functions of parameters of interests (Wilson coefficients and form factors), one could factor out the phase space dependent part and cache the integrals. AFAIK, such type of caching is not currently there in Flavio (please correct me, if I am wrong). With this, the first evaluation of the observable would be very slow but this would really speed up the subsequent evaluations at the fitting stage.

I actually have a preliminary implementation of basically what you describe. I express observables as functions of polynomials that are quadratic in the Wilson coefficients or other parameters that enter the amplitudes linearly. The coefficients of these polynomials have to be computed only once, and this is done using flavio. Such a polynomial can be expressed as a scalar product of a vector containing the polynomial coefficients and another vector containing Wilson coefficient and parameter bilinears. Computing theory predictions therefore reduces to a simple linear algebra problem that can be solved very efficiently. This approach even makes it possible to compute a covariance matrix for the polynomial coefficients of all observables in a fit, and thus to extend flavio's FastLikelihood by including theory uncertainties that actually depend on the new physics Wilson coefficients (this was used in https://arxiv.org/abs/2103.13370).
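A minimal sketch of this polynomial-caching idea (the function names are illustrative, not the actual implementation): for an observable that is exactly quadratic in n real parameters, the coefficient vector can be fitted once from a modest number of evaluations of the expensive observable, after which every prediction is a single dot product.

```python
import numpy as np

def bilinear_vector(c):
    """Monomials up to degree 2: [1, c_i, c_i*c_j (i <= j)]."""
    c = np.asarray(c, dtype=float)
    quad = [c[i] * c[j] for i in range(len(c)) for j in range(i, len(c))]
    return np.concatenate(([1.0], c, quad))

def fit_poly_coeffs(observable, n, rng=np.random.default_rng(0)):
    """Fit v such that observable(c) ~ v @ bilinear_vector(c), by
    evaluating the (expensive) observable at sample points once."""
    m = 1 + n + n * (n + 1) // 2      # number of monomials
    C = rng.normal(size=(2 * m, n))   # oversampled evaluation points
    B = np.array([bilinear_vector(c) for c in C])
    y = np.array([observable(c) for c in C])
    v, *_ = np.linalg.lstsq(B, y, rcond=None)
    return v

# Toy stand-in for an "expensive" observable, exactly quadratic
# in two parameters (as amplitudes linear in WCs would give):
def obs(c):
    return 3.0 + 2.0 * c[0] - c[1] + 0.5 * c[0] * c[1] + c[1] ** 2

v = fit_poly_coeffs(obs, n=2)
fast = lambda c: v @ bilinear_vector(c)
assert abs(fast([0.7, -1.3]) - obs([0.7, -1.3])) < 1e-8
```

All the cost sits in `fit_poly_coeffs`; the fit loop then only ever calls `fast`, which is the behaviour described above (slow first evaluation, cheap subsequent ones).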

I am planning to add this functionality to flavio at some point. Unfortunately I do not have enough time at the moment to implement it and there are also some other issues in flavio, like bug fixes, that have a higher priority.

@abhijitm08

@DavidMStraub and @peterstangl : Thank you very much for the replies. We love flavio and fully understand the motivation for the design choices made during development. Agreed that the generalisation is not a straightforward task. For us, the observables we are interested in to begin with are R(K), R(K*), BF(Bs->mumu) and the angular observables of B->K*mumu, with WCs as the parameters of interest. For our study, we are running fits to quite a lot of toy measurements, O(20k). Each fit takes between 20 and 45 minutes, and adding more observables and parameters will only increase this. So we were brainstorming the possible speed-ups one could gain in flavio (JIT compilation, auto-differentiation, caching of integrals, etc.) and thought it was best to discuss with the maintainers the ideas you had moving forward.

@peterstangl : The implementation you describe would indeed be pretty amazing! Which observables do you have this implementation for? Any of the above ones, perhaps? We completely understand your other priorities; however, if you have an example implementation for a certain observable and it is somehow a matter of person-power to extend it to the other ones, we could perhaps be of some help here (?). At the moment, I do not know how much of a speed gain we would get from JIT compilation (@mayou36 and @dlanci ), but we can certainly investigate this with flavio.

@peterstangl
Collaborator

> @peterstangl : The implementation you talk of would be pretty amazing indeed! What observables do you have this implementation for? Any of the above ones perhaps? We completely understand your other priorities, however if you have an example implementation for a certain observable and if it is somehow a matter of person power to extend this to the other ones, we could perhaps be of some help to you here (?). At the moment, I do not know of how much of a speed gain we would get from JIT compilation (@mayou36 and @dlanci ), but we can certainly investigate this with Flavio.

@abhijitm08 I have done the implementation as a separate Python package that provides everything needed to construct a likelihood in terms of second-order polynomials in the Wilson coefficients. Using this package, I have constructed a likelihood containing all of the observables you mention above. One idea would be to make this package a submodule of flavio at some point. But currently the package is still under development in the context of unpublished work, which is also why I do not want to make it public yet. Anyway, I will think about how I could still help you with your issue and maybe provide you with parts of my implementation.

@jonas-eschle
Author

jonas-eschle commented Jun 22, 2021

I've done some more benchmarking, and it seems that with a few minor changes to the code we can gain some speedup with numba (an estimated 10-20% for the tested cases). Here I am looking only at the technical speedups in flavio; @peterstangl's improvement using the polynomials is independent and of course something else to examine.

> and then having a likelihood with gradient for gradient-based optimization or Hamiltonian Monte Carlo

That would be nice and can work well in general (it is e.g. used in zfit and pyhf to speed up the minimization), but JAX is difficult to use with the JIT, as we would also need to adjust the Python logic and thereby make the code quite dependent on one package. For example, any if-else logic needs to be written in JAX, while numba can deal with it (but has no analytic gradients). That would ideally be taken into account from the beginning (though maybe it is easier to adjust, and the non-jitted code should work without a lot of modifications). Also, the benefit of autograd only materializes if everything is written in JAX; we don't really gain anything from a halfway rewrite (other than the JIT speedup).
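To make the control-flow point concrete: `jax.jit` traces a function with abstract values, so a Python `if` on a traced value raises an error, and the branch has to be expressed as data flow (`jnp.where` or `lax.cond`). The rewrite looks like this (shown with plain numpy for portability; the function is a made-up stand-in, not flavio code):

```python
import numpy as np

def q2_cut_python(q2, low=1.1, high=6.0):
    # Python branching: fine eagerly and under numba,
    # but breaks under jax.jit tracing
    if low <= q2 <= high:
        return q2 * (high - q2)
    return 0.0

def q2_cut_traceable(q2, low=1.1, high=6.0):
    # branchless form that a tracer like jax.jit can handle
    # (with numpy swapped for jax.numpy)
    inside = (q2 >= low) & (q2 <= high)
    return np.where(inside, q2 * (high - q2), 0.0)

assert q2_cut_python(3.0) == q2_cut_traceable(3.0)
assert q2_cut_python(8.0) == q2_cut_traceable(8.0) == 0.0
```

Every such branch in the codebase would need this treatment for JAX, which is the "permanent dependence" concern below.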

So my conclusions on this:

  • JAX and friends are too difficult to apply now (assuming JIT and autograd) and would need a redesign, including a permanent dependence on these packages, making it harder to contribute (which seems like an anti-goal). This holds especially for the autograd part, which needs everything to be written in JAX.
  • numba could deliver some free speedup through jitting (similar to JAX, but it is -- AFAIK -- better optimized for Python logic and objects). It can be applied to specific functions but does not have to be, which would allow us to gradually gain speed without making it a hard requirement.
  • for anything else -- full JAX, full numba, ... -- we would need a quite different approach and would have to rewrite quite a few parts; if at all, this should rather be done in a completely new library (or a completely new part of the library). Given that the requirement of having everything written in JAX may be too high, this endeavour, if it ever happens, will most likely not take place in the near future.

Do you think a few modifications to the code for improved speed using numba (meaning rewriting math-heavy functions such as https://github.com/flav-io/flavio/blob/master/flavio/physics/bdecays/angular.py#L47) would be welcome as PRs? This would not change anything in the build process or similar (unlike what Cython would require).
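The kind of modification meant here can be sketched as follows (the function is a hypothetical stand-in for a math-heavy helper like the angular coefficients linked above, not flavio's real code; the fallback decorator keeps numba optional, matching the "no hard requirement" point):

```python
try:
    from numba import njit  # optional dependency, as proposed above
except ImportError:
    # fall back to a no-op decorator so the code runs without numba
    def njit(f=None, **kwargs):
        return f if f is not None else (lambda g: g)

@njit
def transversity_norm(a_par, a_perp, a_zero):
    # |A_par|^2 + |A_perp|^2 + |A_0|^2 for complex amplitudes:
    # plain math on floats/complex, so numba can compile it as-is
    return abs(a_par) ** 2 + abs(a_perp) ** 2 + abs(a_zero) ** 2

assert abs(transversity_norm(1 + 1j, 0.5j, 2.0) - 6.25) < 1e-12
```

With numba installed the decorated function is compiled on first call; without it, the exact same source runs as plain Python, so nothing changes for users who do not opt in.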

P.S.: @DavidMStraub, it is very understandable that these considerations were not taken into account in the beginning; that is maybe what made flavio what it is now, and that's good -- thanks a lot for all this work! Also, the more I inspected, the more caching and optimization I found, so the low-hanging fruit is truly gone. I had a somewhat different impression in the beginning; an incorrect judgement.
