
Memory allocation #9

Open
nrontsis opened this issue Dec 13, 2018 · 4 comments

nrontsis commented Dec 13, 2018

Thanks a lot for the very nice package.

Although your package seems faster and more robust than eigs, it allocates significantly more memory. For example, running Arnoldi with eigsolve on a 1000 x 1000 matrix allocates about 20x more memory than eigs:

using Arpack, KrylovKit

X = randn(1000, 1000);
@allocated eigs(X, nev=1)
@allocated eigsolve(X, 1)

while the number of iterations/matrix multiplications is comparable for the two functions.
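For reference, the counters themselves can be read out as follows; I am assuming here the niter/nmult entries of the tuple returned by eigs and the numiter/numops fields of the info object returned by eigsolve (field names may differ between versions):

vals, vecs, nconv, niter, nmult, resid = eigs(X, nev=1)
vals2, vecs2, info = eigsolve(X, 1)

(niter, nmult)               # iterations and matrix-vector products reported by eigs
(info.numiter, info.numops)  # the corresponding counters reported by eigsolve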

On a related note, it would be nice to allow for functions f(y, x) that mutate y.

I am happy to contribute if you provide guidance.


nrontsis commented Dec 13, 2018

It seems that, for the example above, most of the memory is allocated in the function basistransform! in orthonormal.jl.

Can I ask why you don't use a package like ElasticArrays.jl for OrthonormalBasis? I think it would simplify the code in orthonormal.jl and allow the use of BLAS level-3 routines.
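For concreteness, here is a rough sketch of what I have in mind (hypothetical names, not the current KrylovKit code): store the basis vectors as columns of an ElasticArray, so that appending a vector just grows the last dimension and a basis transform becomes a single BLAS level-3 multiplication.

using ElasticArrays, LinearAlgebra

n = 1000
B = ElasticArray{Float64}(undef, n, 0)   # basis vectors stored as columns

append!(B, randn(n))                     # adding a Krylov vector grows the last dimension
append!(B, randn(n))

# basis transform B ← B * U with a square m x m matrix U: one gemm call,
# at the cost of one temporary copy of the basis
function transform!(B::ElasticArray, U::AbstractMatrix)
    copyto!(B, Matrix(B) * U)
    return B
end

transform!(B, [0.0 1.0; 1.0 0.0])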


Jutho commented Dec 14, 2018

Can I ask why you don't use a package like ElasticArrays.jl for OrthonormalBasis?

Because one of the selling points of this package is that vectors don't need to be plain Vectors, or even AbstractVectors or AbstractArrays. I have many use cases in private repositories (some of which may become public at some point) where, e.g., I just use some object that represents a periodic matrix-valued function in terms of its matrix-valued Fourier coefficients, or some very generic tensor type which does not simply map to a plain vector. The only requirement is that they support a number of methods listed in the manual.

That being said, there is already some specialization for the case where the vectors are an AbstractArray type with IndexLinear behaviour, and maybe it would be good to use a more efficient representation in that case, one that can directly exploit BLAS functionality.

I think, however, that this only becomes useful once your first point is addressed: supporting in-place linear operators. I myself would be fine with only supporting in-place/mutating linear maps f!(y,x), but for legacy reasons (even though this package is not so old, it is used by some of my collaborators) it would be best to support both f!(y,x) and y = f(x). Preferably, I would just be able to detect which form the user provides. If I remember correctly, the whole DifferentialEquations suite accepts both in-place and out-of-place specifications of the problem, so maybe @ChrisRackauckas could explain how he detects whether the problem is specified with an in-place versus an out-of-place function in a clean, Julian way?
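To make the two conventions concrete, something along these lines (hypothetical names, only a sketch) is what would need to be supported: wrap the user-supplied function so that the rest of the Krylov code only ever calls a single mutating entry point.

struct InPlaceMap{F};    f::F; end    # user supplies f!(y, x) that overwrites y
struct OutOfPlaceMap{F}; f::F; end    # user supplies y = f(x) that allocates

apply!(y, A::InPlaceMap, x)    = (A.f(y, x); y)
apply!(y, A::OutOfPlaceMap, x) = copyto!(y, A.f(x))

# usage: the Arnoldi/Lanczos loop would always call apply!(y, A, x)
M = randn(4, 4)
y = zeros(4); x = randn(4)
apply!(y, InPlaceMap((y, x) -> (y .= M * x)), x)
apply!(y, OutOfPlaceMap(x -> M * x), x)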

ChrisRackauckas commented

The code is here:

https://github.com/JuliaDiffEq/DiffEqBase.jl/blob/master/src/utils.jl#L1-L27

Ignoring https://github.com/JuliaDiffEq/DiffEqBase.jl/blob/master/src/utils.jl#L7-L12, which is an old way of passing extra functions via dispatch, what this code does is read the current methods table and return a boolean indicating whether the number of arguments (taking the maximum over all methods) is sufficiently large. That check is then used to set a parameter of an inner constructor

https://github.com/JuliaDiffEq/DiffEqBase.jl/blob/master/src/problems/ode_problems.jl#L29-L31

or the user can set the parameter themselves to override the in-place check, and it then ends up as a type parameter. Everything works off of the problem type as input, so from that point on everything is inferred from it. The inference isn't totally automatic, because checking the methods table for in-placeness is not done at compile time, but user-supplied literals make it all inferred, and it doesn't actually impact normal usage because solve acts as a function barrier.

Using the maximum of the number of arguments over all methods has seemed to work. Surprisingly, this is something that we haven't had any users open issues for, so while it's maybe not "Julian" to be taking a peek at the methods table, it's been very robust.
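To illustrate the idea, a stripped-down version of that check could look as follows (simplified; the linked DiffEqBase code handles more cases): read the methods table of f, take the maximum number of arguments over all methods, and treat the function as in-place when that count is large enough.

numargs(f) = maximum(length(m.sig.parameters) - 1 for m in methods(f))  # drop the function type itself

isinplace(f, n) = numargs(f) >= n   # in-place if f takes at least n arguments

g(x) = 2x                           # out-of-place: y = g(x)
g!(y, x) = (y .= 2 .* x; y)         # in-place: g!(y, x)

isinplace(g, 2)    # false
isinplace(g!, 2)   # true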


Jutho commented Dec 26, 2018

@ChrisRackauckas, thanks for the detailed information. Checking the methods table was also what I had in mind, but I was not certain whether this was a sane approach.

@nrontsis, this is another issue that I will certainly tackle at some point, though I will probably have different priorities for the first month or so. Any attempts are welcome, though.
