
[BlockSparseArrays] BlockSparseArray functionality #1336

Open
8 of 19 tasks
ogauthe opened this issue Feb 16, 2024 · 15 comments
Labels
BlockSparseArrays enhancement New feature or request NDTensors Requires changes to the NDTensors.jl library.


@ogauthe
Contributor

ogauthe commented Feb 16, 2024

This issue lists functionalities and feature requests for BlockSparseArray.

using LinearAlgebra
using NDTensors.BlockSparseArrays: BlockSparseArray

a = BlockSparseArray{Float64}([2, 3], [2, 3]);

Feature requests/issues

  • Generalize the axes of BlockSparseArray to any AbstractUnitRange (e.g. fix r = gradedrange([U1(0) => 1]); a = BlockSparseArray{Float64}(dual(r), r)).
  • Proper handling of the (Hermitian) transpose of a BlockSparseMatrix, i.e. a' (in progress in [1]).
  • Take the dual of the axes in Base.adjoint(::BlockSparseMatrix).
  • display(::Adjoint{T, BlockSparseArray}) and display(::Transpose{T, BlockSparseArray}) are not implemented.
  • Display non-initialized blocks differently from zero blocks; currently they just print as zeros.
  • Fix a[Block(2), Block(2)] = randn(3, 3) for initializing a block (currently only a[Block(2, 2)] = randn(3, 3) works).
  • Fix initializing a block with broadcasting syntax, i.e. a[Block(2, 2)] .= 1.0, as well as from a slice, a[Block(1, 1)][1:1, 1:1] = ones((1, 1)).
  • Initializing a block with an invalid shape should throw an error.
  • LinearAlgebra.norm(a) crashes when a contains NaN.
  • Block-wise linear algebra operations like svd, qr, etc. These are well defined if the block sparse matrix has a block structure (i.e. the sparsity pattern of the sparse array of arrays blocks(a)) corresponding to a generalized permutation matrix. They should probably be called block_svd, block_qr, etc. to indicate that they are meant to be used on block sparse matrices with that structure (and error if the structure is absent). See [2] for a prototype of a block-wise QR.
  • Rename block_stored_indices to stored_blocks, outputting a Vector{<:Block} of the blocks that are nonzero (or zero but stored). We can keep block_stored_indices around but have it output a Vector{<:CartesianIndex}, which is the same list of blocks but in the format CartesianIndex instead of Block. That can be helpful when interpreting the BlockSparseArray as a SparseArrayDOK of blocks, i.e. the data structure blocks(a::BlockSparseArray). However, that can easily be accessed with stored_indices(blocks(a::BlockSparseArray)), so it may not be needed (it follows the general pattern in BlockArrays of defining block*(a) functions that act as *(blocks(a)), i.e. blocksize(a) == size(blocks(a)), etc.).
  • Base.similar(a::BlockSparseArray, eltype::Type) and Base.similar(a::BlockSparseArray, eltype::Type, size::NTuple{N,AbstractUnitRange}) do not set the eltype.
  • Make copy(::BlockSparseArray) copy the blocks.
  • Slicing with a[1:2, 1:2] is not implemented yet (in progress in [1]).
  • Function for accessing a list of initialized blocks. Currently you can use stored_indices(blocks(a)) to get a list of Block corresponding to initialized/stored blocks. Ideally there would be a shorthand for this like block_stored_indices(a) (in progress in [1]).
  • Function for accessing the number of initialized blocks. Currently you can use nstored(blocks(a)) to get the number of initialized/stored blocks. Ideally there would be a shorthand for this like block_nstored(a) (in progress in [1]).
  • In-place operations .*= and ./=, such as a .*= 2, are broken (in progress in [1]).
  • Base.:*(::BlockSparseArray, x::Number) and Base.:/(::BlockSparseArray, x::Number) are not defined.
  • Base.:*(::ComplexF64, ::BlockSparseArray{Float64}) does not change the data type for an empty array and crashes if a contains data.
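On the last two points, a rough sketch of the intended scalar multiplication behavior, using a plain Dict of blocks as a stand-in for the actual DOK storage (hypothetical, not the NDTensors implementation): the result eltype should promote with the scalar type.

```julia
# Stand-in for blocks(a): a Dict-of-keys store of dense blocks.
# This is a hypothetical sketch, not the NDTensors data structure.
blocks = Dict((1, 1) => ones(2, 2))
x = 1.0im

# Scalar multiplication maps over the stored blocks and promotes
# the element type, here Float64 -> ComplexF64.
T = promote_type(typeof(x), eltype(valtype(blocks)))
scaled = Dict(k => T.(v) .* x for (k, v) in blocks)
```

With this, the empty-array case also gets the right eltype for free, since the promotion happens at the type level rather than on the stored data.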

Footnotes

  1. [BlockSparseArrays] More general broadcasting and slicing #1332

  2. https://github.com/ITensor/ITensors.jl/blob/v0.3.57/NDTensors/src/lib/BlockSparseArrays/src/backup/LinearAlgebraExt/qr.jl

@ogauthe ogauthe added enhancement New feature or request NDTensors Requires changes to the NDTensors.jl library. labels Feb 16, 2024
@mtfishman
Member

mtfishman commented Feb 16, 2024

Thanks. This currently works:

julia> using NDTensors.BlockSparseArrays: Block, BlockSparseArray, blocks

julia> using LinearAlgebra: I

julia> a = BlockSparseArray{Float64}([2, 2], [2, 2])
2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}:
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0
 ──────────┼──────────
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0

julia> a[Block(2, 2)] = I(3)
3×3 Diagonal{Bool, Vector{Bool}}:
 1  ⋅  ⋅
 ⋅  1  ⋅
 ⋅  ⋅  1

julia> a
2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}:
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0
 ──────────┼──────────
 0.0  0.0  │  1.0  0.0
 0.0  0.0  │  0.0  1.0

julia> using NDTensors.SparseArrayInterface: stored_indices

julia> stored_indices(blocks(a))
1-element Dictionaries.MappedDictionary{CartesianIndex{2}, CartesianIndex{2}, NDTensors.SparseArrayInterface.var"#1#2"{NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}}, Tuple{Dictionaries.Indices{CartesianIndex{2}}}}
 CartesianIndex(2, 2) │ CartesianIndex(2, 2)

though using this alternative syntax is currently broken:

julia> a = BlockSparseArray{Float64}([2, 2], [2, 2])
2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}:
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0
 ──────────┼──────────
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0

julia> a[Block(2), Block(2)] = I(3)
ERROR: DimensionMismatch: tried to assign (3, 3) array to (2, 2) block
Stacktrace:
 [1] setindex!(::BlockSparseArray{…}, ::Diagonal{…}, ::Block{…}, ::Block{…})
   @ BlockArrays ~/.julia/packages/BlockArrays/L5yjb/src/abstractblockarray.jl:165
 [2] top-level scope
   @ REPL[30]:1
Some type information was truncated. Use `show(err)` to see complete types.

I would have to think about whether it makes sense to support a[Block(2, 2)] .= 1.0 or a[Block(2), Block(2)] .= 1.0 if Block(2, 2) isn't initialized yet (probably we should support that).
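One way to support that would be to materialize a zero block on first access and then broadcast into it. A minimal sketch with a Dict of blocks standing in for the DOK storage (hypothetical names and layout, not the NDTensors code):

```julia
# Hypothetical DOK-style storage: block index -> dense block.
blocks = Dict{Tuple{Int,Int},Matrix{Float64}}()
blocksizes = Dict((2, 2) => (2, 2))  # assumed shapes of each block

# get! materializes a zero block on demand; .= then broadcasts into it.
b = get!(() -> zeros(blocksizes[(2, 2)]), blocks, (2, 2))
b .= 1.0
```

The key point is that the broadcast target must be the stored block itself (not a copy), so the write is visible in the parent array.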

@mtfishman
Member

mtfishman commented Feb 16, 2024

In terms of svd and qr of a BlockSparseMatrix, it is well defined to perform them block-wise (i.e. map the problem to factorizing the individual non-zero blocks) as long as there is only one block per row or column, i.e. the matrix is diagonal up to block permutations of the rows and columns. So I was thinking we would have specialized block_svd and block_qr that check whether the matrix has that structure and perform svd or qr block-wise, and otherwise error (they can check whether blocks(a) is a generalized permutation matrix).
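That structural check can be sketched in a few lines: a Bool matrix representing the sparsity pattern of blocks(a) is a generalized permutation matrix when every block row and every block column holds at most one stored block (the function name here is hypothetical):

```julia
# Sketch: true when the pattern has at most one stored block per
# block row and per block column, i.e. the matrix is diagonal up to
# permutations of the block rows and columns.
function is_generalized_permutation(pattern::AbstractMatrix{Bool})
    all(count(pattern[i, :]) <= 1 for i in axes(pattern, 1)) &&
        all(count(pattern[:, j]) <= 1 for j in axes(pattern, 2))
end
```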

I have a prototype of a QR decomposition of a BlockSparseMatrix here: https://github.com/ITensor/ITensors.jl/blob/v0.3.57/NDTensors/src/lib/BlockSparseArrays/src/backup/LinearAlgebraExt/qr.jl but it may be outdated since it was written based on an earlier version of BlockSparseArrays.
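Independent of that prototype, the core of a block-wise QR reduces to factorizing each stored block with LinearAlgebra.qr; a self-contained sketch on the list of diagonal blocks (hypothetical helper, not the linked code):

```julia
using LinearAlgebra

# Sketch: the QR of a block-diagonal matrix is the QR of each
# diagonal block; each Qs[i] * Rs[i] reproduces the corresponding block.
function blockwise_qr(diag_blocks::Vector{<:AbstractMatrix})
    factors = [qr(b) for b in diag_blocks]
    Qs = [Matrix(F.Q) for F in factors]
    Rs = [F.R for F in factors]
    return Qs, Rs
end
```

For a block permutation matrix (rather than strictly block-diagonal), the same loop applies after tracking which block column each block row maps to.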

@mtfishman mtfishman changed the title [NDTensors] [ENHANCEMENT] BlockSparseArray functionalities [BlockSparseArrays][ENHANCEMENT] BlockSparseArray functionalities Feb 16, 2024
@mtfishman
Member

mtfishman commented Feb 18, 2024

Also note that slicing like this should work right now:

a[Block(1, 1)[1:2, 1:2]]

i.e. you can take slices within a specified block. See BlockArrays.jl for a reference on that slicing notation.

@ogauthe
Contributor Author

ogauthe commented Feb 26, 2024

new feature request: Base.:*(::BlockSparseArray, x::Number) and Base.:/(::BlockSparseArray, x::Number)

I updated the first comment.

@ogauthe
Contributor Author

ogauthe commented Feb 26, 2024

new issue: 1im * a does not change the data type of a if a is empty. It crashes if a contains data.

@ogauthe
Contributor Author

ogauthe commented Mar 1, 2024

new issue: similar(a::BlockSparseArray, eltype::Type) does not set the new eltype.
The behavior of similar(a::BlockSparseArray, eltype::Type, size) depends on the size type:

  • For size::NTuple{N,Int}, it returns a dense Julia Array with the correct eltype.
  • For size::NTuple{N,AbstractUnitRange}, it returns a BlockSparseArray{Float64} and the eltype is not correct.

@mtfishman
Copy link
Member

@ogauthe a number of these issues were fixed by #1332; I've updated the list in the first post accordingly. I added regression tests in #1360 for ones that still need to be fixed, and additionally added placeholder tests marked as broken in the BlockSparseArrays tests. Please continue to update this post with new issues you find, and/or make PRs with broken behavior marked with @test_broken or bug fixes and regression tests. I think with the reorganization I did in #1332 these kinds of issues will be easier to fix going forward, though I have to admit it may be a bit tough to jump into that code right now since there are many abstraction layers involved, as the goal is for that code to be as minimal and general as possible.

@ogauthe
Contributor Author

ogauthe commented Apr 24, 2024

Feature request: display(::Adjoint). The function Base.adjoint is well defined and returns an Adjoint type, but it throws an error on display.
Closely related, it is not clear to me how GradedAxes.dual and BlockSparseArray should work together. The current behavior is to accept only a GradedUnitRange as an axis and to throw an error for a UnitRangeDual input. Base.adjoint does not dual the axes. I do not have an opinion yet on whether this should be modified.

@mtfishman
Member

mtfishman commented Apr 24, 2024

I think ideally BlockSparseArray would accept any AbstractUnitRange.

Alternatively, BlockArrays.jl introduced an AbstractBlockedUnitRange in JuliaArrays/BlockArrays.jl#348, we could define a BlockedUnitRangeDual <: AbstractBlockedUnitRange type which wraps another AbstractBlockedUnitRange. I think it will require some experimentation to see which one leads to simpler code.

Good question about whether or not the axes should get dualed if Base.adjoint(::BlockSparseMatrix) is called. Probably we should dual the axes. For example, if we want operations like a' * a to work for a::BlockSparseMatrix and GradedUnitRange axes, I think we need to dual the axes.

@ogauthe
Contributor Author

ogauthe commented May 2, 2024

The solution to accept any AbstractUnitRange is to replace the definition of Axes with

  Axes<:Tuple{Vararg{<:AbstractUnitRange,N}},

Then one can construct a BlockSparseArray with a dual axis. For a reason I do not really understand, the resulting array can be printed but not displayed. The error is not the same as the one raised when trying to display a transpose or adjoint.

g1 = gradedrange([U1(0) => 1])
m1 = BlockSparseArray{Float64}(dual(g1), g1,)

outputs

1×1-blocked 1×1 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}, Tuple{UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}:
Error showing value of type BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}, Tuple{UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}:
ERROR: MethodError: no method matching NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(::Int64)

Closest candidates are:
  (::Type{NDTensors.LabelledNumbers.LabelledInteger{Value, Label}} where {Value<:Integer, Label})(::Any, ::Any)
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/LabelledNumbers/src/labelledinteger.jl:2
  (::Type{T})(::T) where T<:Number
   @ Core boot.jl:792
  (::Type{T})(::BigFloat) where T<:Integer
   @ Base mpfr.jl:378
  ...

Stacktrace:
  [1] convert(::Type{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}, x::Int64)
    @ Base ./number.jl:7
  [2] cvt1
    @ ./essentials.jl:468 [inlined]
  [3] ntuple
    @ ./ntuple.jl:49 [inlined]
  [4] convert(::Type{Tuple{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, Int64}}, x::Tuple{Int64, Int64})
    @ Base ./essentials.jl:470
  [5] push!(a::Vector{Tuple{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, Int64}}, item::Tuple{Int64, Int64})
    @ Base ./array.jl:1118
  [6] alignment(io::IOContext{Base.TTY}, X::AbstractVecOrMat, rows::Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}, cols::Vector{Int64}, cols_if_complete::Int64, cols_otherwise::Int64, sep::Int64, ncols::Int64)
    @ Base ./arrayshow.jl:76
  [7] _print_matrix(io::IOContext{…}, X::AbstractVecOrMat, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64, rowsA::UnitRange{…}, colsA::UnitRange{…})
    @ Base ./arrayshow.jl:207
  [8] print_matrix(io::IOContext{…}, X::BlockSparseArray{…}, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64)
    @ Base ./arrayshow.jl:171
  [9] print_matrix
    @ ./arrayshow.jl:171 [inlined]
 [10] print_array
    @ ./arrayshow.jl:358 [inlined]
 [11] show(io::IOContext{Base.TTY}, ::MIME{Symbol("text/plain")}, X::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
    @ Base ./arrayshow.jl:399
 [12] (::REPL.var"#55#56"{REPL.REPLDisplay{REPL.LineEditREPL}, MIME{Symbol("text/plain")}, Base.RefValue{Any}})(io::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:273
 [13] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
 [14] display(d::REPL.REPLDisplay, mime::MIME{Symbol("text/plain")}, x::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:259
 [15] display
    @ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:278 [inlined]
 [16] display(x::Any)
    @ Base.Multimedia ./multimedia.jl:340
 [17] #invokelatest#2
    @ ./essentials.jl:892 [inlined]
 [18] invokelatest
    @ ./essentials.jl:889 [inlined]
 [19] print_response(errio::IO, response::Any, show_value::Bool, have_color::Bool, specialdisplay::Union{Nothing, AbstractDisplay})
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:315
 [20] (::REPL.var"#57#58"{REPL.LineEditREPL, Pair{Any, Bool}, Bool, Bool})(io::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:284
 [21] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
 [22] print_response(repl::REPL.AbstractREPL, response::Any, show_value::Bool, have_color::Bool)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:282
 [23] (::REPL.var"#do_respond#80"{Bool, Bool, REPL.var"#93#103"{REPL.LineEditREPL, REPL.REPLHistoryProvider}, REPL.LineEditREPL, REPL.LineEdit.Prompt})(s::REPL.LineEdit.MIState, buf::Any, ok::Bool)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:911
 [24] #invokelatest#2
    @ ./essentials.jl:892 [inlined]
 [25] invokelatest
    @ ./essentials.jl:889 [inlined]
 [26] run_interface(terminal::REPL.Terminals.TextTerminal, m::REPL.LineEdit.ModalInterface, s::REPL.LineEdit.MIState)
    @ REPL.LineEdit ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/LineEdit.jl:2656
 [27] run_frontend(repl::REPL.LineEditREPL, backend::REPL.REPLBackendRef)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:1312
 [28] (::REPL.var"#62#68"{REPL.LineEditREPL, REPL.REPLBackendRef})()
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:386
Some type information was truncated. Use `show(err)` to see complete types.

@mtfishman
Member

Thanks for investigating. That seems like the right move to generalize the axes in that way. Hopefully that error is easy enough to circumvent.

@ogauthe
Contributor Author

ogauthe commented May 2, 2024

Continuing to explore the effects of dual: eachindex is broken on such an array. This affects ==.

julia> eachindex(m1)
ERROR: MethodError: no method matching AbstractUnitRange{…}(::NDTensors.GradedAxes.UnitRangeDual{…})

Closest candidates are:
  AbstractUnitRange{T}(::AbstractUnitRange{T}) where T
   @ Base range.jl:1308
  AbstractUnitRange{T}(::NDTensors.LabelledNumbers.LabelledUnitRange) where T
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/LabelledNumbers/src/labelledunitrange.jl:17
  AbstractUnitRange{Int64}(::Static.OptionallyStaticUnitRange)
   @ Static ~/.julia/packages/Static/6JeQI/src/ranges.jl:258
  ...

Stacktrace:
 [1] OrdinalRange{…}(r::NDTensors.GradedAxes.UnitRangeDual{…})
   @ Base ./range.jl:1313
 [2] convert(::Type{…}, r::NDTensors.GradedAxes.UnitRangeDual{…})
   @ Base ./range.jl:265
 [3] (::Base.IteratorsMD.var"#3#4")(r::NDTensors.GradedAxes.UnitRangeDual{…})
   @ Base.IteratorsMD ./multidimensional.jl:259
 [4] map
   @ ./tuple.jl:292 [inlined]
 [5] CartesianIndices(inds::Tuple{NDTensors.GradedAxes.UnitRangeDual{…}, BlockArrays.BlockedUnitRange{…}})
   @ Base.IteratorsMD ./multidimensional.jl:259
 [6] eachindex(::IndexCartesian, A::BlockSparseArray{…})
   @ Base.IteratorsMD ./multidimensional.jl:392
 [7] eachindex(A::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
   @ Base ./abstractarray.jl:378
 [8] top-level scope
   @ REPL[37]:1
Some type information was truncated. Use `show(err)` to see complete types.

@mtfishman mtfishman changed the title [BlockSparseArrays][ENHANCEMENT] BlockSparseArray functionalities [BlockSparseArrays] [ENHANCEMENT] BlockSparseArray functionalities May 10, 2024
@ogauthe
Contributor Author

ogauthe commented May 10, 2024

issue: I cannot write to a slice of a block

a[BlockArrays.Block(1,1)][1:2,1:2] = ones((2,2))

does not write to a. This is probably related to "Fix initializing a block with broadcasting syntax".
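This looks like the usual copy-vs-view distinction in Julia: a[Block(1, 1)] presumably returns a copy here, so mutating the slice leaves a untouched. Plain arrays behave the same way, and view is the standard workaround:

```julia
# Non-scalar indexing returns a copy, so writing into the result
# does not write into the parent array.
a = zeros(4, 4)
a[1:2, 1:2][1, 1] = 1.0   # mutates a temporary copy; a is unchanged

# A view aliases the parent, so writes propagate.
v = view(a, 1:2, 1:2)
v[1, 1] = 1.0              # now a[1, 1] is set
```

For BlockSparseArray, the analogous fix would be for @view a[Block(1, 1)] (or a block-view accessor) to return an aliasing view of the stored block.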

@mtfishman mtfishman changed the title [BlockSparseArrays] [ENHANCEMENT] BlockSparseArray functionalities [BlockSparseArrays] BlockSparseArray functionality May 10, 2024
@ogauthe
Contributor Author

ogauthe commented May 11, 2024

issue: LinearAlgebra.norm(a) triggers an AssertionError when a contains NaN

a[BlockArrays.Block(1,1)] = ones((2,2))
println(LinearAlgebra.norm(a))  # 2.0
a[BlockArrays.Block(1,1)][1, 1] = NaN
println(LinearAlgebra.norm(a[BlockArrays.Block(1,1)]))  # NaN
println(LinearAlgebra.norm(a))  # AssertionError

outputs

ERROR: AssertionError: op(output, (eltype(output))(f_notstored)) == output
Stacktrace:
  [1] #sparse_mapreduce#16
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:118 [inlined]
  [2] sparse_mapreduce
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:115 [inlined]
  [3] #mapreduce#29
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:82 [inlined]
  [4] mapreduce
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:81 [inlined]
  [5] generic_normInf
    @ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:453 [inlined]
  [6] normInf
    @ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:527 [inlined]
  [7] generic_norm2(x::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
    @ LinearAlgebra ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:463
  [8] norm2
    @ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:529 [inlined]
  [9] norm(itr::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}}, p::Int64)
    @ LinearAlgebra ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:598
 [10] _norm
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/ArrayLayouts.jl:297 [inlined]
 [11] norm
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/ArrayLayouts.jl:298 [inlined]
 [12] norm(A::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/ArrayLayouts.jl:298
 [13] top-level scope
    @ REPL[1220]:1
Some type information was truncated. Use `show(err)` to see complete types.

I checked that the same code with Inf in place of NaN gives the correct result.
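The assertion failure looks like a NaN artifact: the sparse mapreduce shortcut asserts op(output, f(zero)) == output, and == is false for any comparison involving NaN, including NaN == NaN. A minimal reproduction in plain Julia:

```julia
# max propagates NaN, but an == comparison against NaN is always
# false, so an assertion like `max(output, 0.0) == output` fails
# exactly when output is NaN.
output = NaN
@assert max(output, 0.0) != output         # == would be false here
@assert isequal(max(output, 0.0), output)  # isequal treats NaN as equal
```

This also explains why Inf behaves correctly: max(Inf, 0.0) == Inf holds, so the assertion passes. Switching the assertion to isequal would presumably fix the NaN case.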

@ogauthe
Contributor Author

ogauthe commented May 17, 2024

issue: a block can be written with an invalid shape. An error should be raised.

a = BlockSparseArray{Float64}([2, 3], [2, 3])
println(size(a))  # (5,5)
b = BlockArrays.Block(1,1)
println(size(a[b]))  # (2,2)
a[b] = ones((3,3))
println(size(a))  # (5,5)
println(size(a[b]))  # (3,3)
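A shape check at assignment time could look like the following sketch (checked_setblock! and the plain-matrix stand-in are hypothetical, not the BlockSparseArrays API):

```julia
# Sketch: validate the incoming array against the target block's
# shape before writing, mirroring the DimensionMismatch already
# thrown by the a[Block(2), Block(2)] = ... path.
function checked_setblock!(a::AbstractMatrix, rows, cols, b::AbstractMatrix)
    expected = (length(rows), length(cols))
    size(b) == expected ||
        throw(DimensionMismatch("tried to assign $(size(b)) array to $expected block"))
    a[rows, cols] = b
    return a
end
```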
