
support conj in expressions #89

Closed
ssfrr opened this issue Feb 25, 2020 · 6 comments
Labels
enhancement New feature or request

Comments

@ssfrr

ssfrr commented Feb 25, 2020

For example, for a Complex inner product the expression should look something like:

x1 = randn(ComplexF64, 10)
x2 = randn(ComplexF64, 10)
@ein y[] := x1[i]' * x2[i]

(I'm actually not sure if this is the right way to represent a scalar output, but you get the idea).

But this throws an error.

LoadError: type Nothing has no field inds
in expression starting at /home/sfr/research_journal/2020/02/16/multichannel_wiener_filter.jmd:353
getproperty(::Nothing, ::Symbol) at Base.jl:20
(::OMEinsum.var"#78#80")(::Nothing) at einsequence.jl:147
_mapreduce(::OMEinsum.var"#78#80", ::typeof(vcat), ::IndexLinear, ::Array{Union{Nothing, OMEinsum.IndexGroup{Int64}},1}) at reduce.jl:309
...

I get the same error with an explicit conj(x1[i]).

@mcabbott
Collaborator

You can write @ein y[] := conj(x1)[i] * x2[i], it turns out, or ein"i,i->"(conj(x1), x2). But perhaps not ideal?
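A quick check of the suggested workarounds against LinearAlgebra.dot (a sketch, assuming the setup from the original post):

```julia
using OMEinsum, LinearAlgebra

x1 = randn(ComplexF64, 10)
x2 = randn(ComplexF64, 10)

@ein y1[] := conj(x1)[i] * x2[i]   # macro form: conjugate inside the expression
y2 = ein"i,i->"(conj(x1), x2)      # string-macro form: conjugate before the call

# both return 0-dimensional arrays; dot conjugates its first argument
y1[] ≈ y2[] ≈ dot(x1, x2)
```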

@ssfrr
Author

ssfrr commented Feb 26, 2020

When applying the function to the array name I'd want to write it as conj.(x1)[i]; in my mental model it's just applying a function to the whole array before the einsum magic happens. From a performance standpoint, though, it seems like you'd want the conjugation to happen inside the loop rather than acting on a copy.

Thanks for the tip! It's great there's a way to do this, and it doesn't need to be ideal. It seems like it would be a bit of work on the @ein parser to identify function calls (and/or the ' syntax for adjoint) and move them inside the loop.

@under-Peter
Owner

It would indeed be nice to apply elementwise functions in the loops and/or in special dispatches. While that would be easy-ish for the macro syntax and for dispatches to our custom loops, it wouldn't work for the string-macro form (ein"i,i->"(conj(x1), x2)), since conj(x1) would be evaluated before the einsum functionality is called. That would go against my intent to have the string-macro, macro, and regular function call work basically the same, and it would also add complexity to the custom dispatches.

If you work with large tensors, I'd think the O(N) elementwise operations should generally be negligible compared to the, e.g., O(N^3) tensor-contraction operations. If memory is a problem, you might want to use the in-place version, conj!, although that would conflict with AD functionality if you need it.
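A minimal sketch of the in-place approach described above (note that conj! mutates its argument, which is why it conflicts with AD):

```julia
using OMEinsum, LinearAlgebra

x1 = randn(ComplexF64, 10)
x2 = randn(ComplexF64, 10)
reference = dot(x1, x2)

conj!(x1)                # elementwise conjugation in place, no extra allocation
y = ein"i,i->"(x1, x2)   # 0-dimensional array holding the scalar result

y[] ≈ reference          # same value as the conjugating inner product
```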

@ssfrr
Author

ssfrr commented Feb 26, 2020

oh, I just realized that the reason conj works is that it does the elementwise conj before applying the einsum magic: conj has a method defined for arrays.

I just ran across a similar issue trying to use abs2, which does not have an array method defined; broadcasting with dot-notation worked well.
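A sketch of the broadcasting workaround mentioned above (assuming a setup like the original post):

```julia
using OMEinsum

x = randn(ComplexF64, 10)

@ein y[] := abs2.(x)[i]        # broadcast abs2 over x, then sum over i
y2 = ein"i->"(abs2.(x))        # equivalent string-macro form

y[] ≈ y2[] ≈ sum(abs2, x)
```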

As far as I know the elementwise operations aren't a bottleneck for me right now, so this is definitely not high-priority. Just to throw out my perspective though - I don't always work with large tensors, I'm starting to want to use @ein for all kinds of things, most of which probably aren't tensor contractions. It's just really nice notation for munging multidimensional data. :)

@mcabbott
Collaborator

I guess you could have notation like ein"i*,i->"(x,y) == dot(x,y) == @ein x[i]' * y[i], or something? BLAS routines all have methods for conjugated / transposed / normal arguments, but adjusting this package to propagate that extra info sounds like a fairly big job. (It seems that none of the Python einsum flavors care much about complex numbers.)

For things that aren't matrix multiplication, BTW, you may want my TensorCast which allows @reduce z := sum(i) x[i]' * y[i].
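The TensorCast form quoted above, written out as a self-contained sketch:

```julia
using TensorCast, LinearAlgebra

x = randn(ComplexF64, 10)
y = randn(ComplexF64, 10)

@reduce z := sum(i) x[i]' * y[i]   # the ' conjugates each element before summing

z ≈ dot(x, y)
```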

@GiggleLiu GiggleLiu added the enhancement New feature or request label Sep 23, 2021
@GiggleLiu
Collaborator

conj is not part of the einsum notation; TensorCast should be the correct place to go for this.
