[NDTensors] Roadmap to removing TensorStorage types #1250

Open · 6 of 47 tasks · mtfishman opened this issue Nov 16, 2023 · 0 comments
Assignee: kmp5VT
Labels: enhancement (New feature or request), NDTensors (Requires changes to the NDTensors.jl library)
Milestone: v0.4
mtfishman commented Nov 16, 2023

Here is a roadmap to removing TensorStorage types (EmptyStorage, Dense, Diag, BlockSparse, DiagBlockSparse, Combiner) in favor of more traditional AbstractArray types (UnallocatedZeros, Array, DiagonalArray, BlockSparseArray, CombinerArray), as well as removing Tensor in favor of NamedDimsArray.

NDTensors reorganization

Followup to BlockSparseArrays rewrite in #1272:

  • Move some functionality to SparseArrayInterface, such as TensorAlgebra.contract.
  • Clean up the tensor algebra code in BlockSparseArray, making use of the broadcasting and mapping functionality defined in SparseArrayInterface (see the sketch below).
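
As a concrete reference for the kind of stored-values mapping this would build on, here is a minimal sketch using the standard-library SparseArrays for concreteness; SparseArrayInterface would generalize the same idea to other sparse storage formats:

```julia
using SparseArrays

# Map a function over only the stored entries of a sparse matrix,
# preserving the sparsity pattern (assumes `f` preserves zero).
function map_stored(f, a::SparseMatrixCSC)
    b = copy(a)
    map!(f, nonzeros(b), nonzeros(a))  # touch only the stored values
    return b
end

map_stored(x -> 2x, sprand(4, 4, 0.3))
```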

Followup to SparseArrayInterface/SparseArrayDOKs defined in #1270:

  • TensorAlgebra overloads for SparseArrayInterface/SparseArrayDOK, such as contract.
  • Use SparseArrayDOK as a backend for BlockSparseArray (maybe call it BlockSparseArrayDOK?). A sketch of the DOK format follows this list.
  • Consider making a BlockSparseArrayInterface package to define an interface and generic functionality for block sparse arrays, analogous to SparseArrayInterface. (EDIT: This currently lives inside the BlockSparseArray library.)
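
For reference, a minimal sketch of the dictionary-of-keys format (the type and fields here are illustrative stand-ins, not the actual SparseArrayDOK design):

```julia
# Hypothetical sketch of a dictionary-of-keys (DOK) sparse array:
# stored entries live in a Dict keyed by their Cartesian index,
# and unstored entries read as zero.
struct SparseArrayDOKSketch{T,N} <: AbstractArray{T,N}
    storage::Dict{CartesianIndex{N},T}
    dims::NTuple{N,Int}
end
SparseArrayDOKSketch{T}(dims::Int...) where {T} =
    SparseArrayDOKSketch{T,length(dims)}(Dict{CartesianIndex{length(dims)},T}(), dims)
Base.size(a::SparseArrayDOKSketch) = a.dims
Base.getindex(a::SparseArrayDOKSketch{T,N}, I::Vararg{Int,N}) where {T,N} =
    get(a.storage, CartesianIndex(I), zero(T))
Base.setindex!(a::SparseArrayDOKSketch{T,N}, v, I::Vararg{Int,N}) where {T,N} =
    (a.storage[CartesianIndex(I)] = v)

a = SparseArrayDOKSketch{Float64}(2, 2)
a[1, 2] = 3.0  # only this entry is stored
```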

Followup to the reorganization started in #1268:

  • Move the low rank qr, eigen, and svd definitions to the NDTensors.RankFactorization module. Currently they are defined in NamedDimsArrays.NamedDimsArraysTensorAlgebraExt; those should become wrappers around the ones in NDTensors.RankFactorization.
  • Split off the SparseArray type into an NDTensors.SparseArrays module (maybe come up with a different name, like NDSparseArrays, GenericSparseArrays, AbstractSparseArrays, etc.). Currently it is in NDTensors.BlockSparseArrays. Also rename it to SparseArrayDOK (for dictionary-of-keys) to distinguish it from other formats.
  • Clean up NDTensors/src/TensorAlgebra/src/fusedims.jl.
  • Remove NDTensors.TensorAlgebra.BipartitionedPermutation and figure out how to disambiguate between the partitioned permutation and named dimension interfaces. How much dimension name logic should go in NDTensors.TensorAlgebra vs. NDTensors.NamedDimsArrays?
  • Create NDTensors.CombinerArrays module. Move Combiner and CombinerArray type definitions there.
  • Create NDTensors.CombinerArrays.CombinerArraysTensorAlgebraExt extension. Move Combiner contract definition from ITensorsNamedDimsArraysExt/src/combiner.jl to CombinerArraysTensorAlgebraExt (which is just a simple wrapper around TensorAlgebra.fusedims and TensorAlgebra.splitdims).
  • Dispatch the ITensors.jl definitions of qr, eigen, svd, factorize, nullspace, etc. on typeof(tensor(::ITensor)), so that for an ITensor wrapping a NamedDimsArray we can fully rewrite those functions using NamedDimsArrays and TensorAlgebra, where the matricization logic can be handled more elegantly with fusedims.
  • Get all the same functionality working for ITensor wrapping a NamedDimsArray wrapping a BlockSparseArray.
  • Make sure all NamedDimsArrays-based code works on GPU.
  • Make Index a subtype of AbstractNamedInt (or maybe AbstractNamedUnitRange?).
  • Make ITensor a subtype of AbstractNamedDimsArray (a minimal sketch of name-based alignment follows this list).
  • Deprecate from NDTensors.RankFactorization: Spectrum, eigs, entropy, truncerror.
  • Decide if size and axes of AbstractNamedDimsArray (including the ITensor type) should output named sizes and ranges.
  • Define an ImmutableArrays submodule and have the ITensor type default to wrapping ImmutableArray data, with copy-on-write semantics. Also come up with an abstraction for arrays that manage their own memory, such as AbstractCOWArray (for copy-on-write) or AbstractMemoryManagedArray, along with NamedDimsArray versions, and make ITensor a subtype of AbstractMemoryManagedNamedDimsArray or something like that (perhaps a good use case for an isnamed trait to opt in to automatic permutation semantics for indexing, contraction, etc.). A copy-on-write sketch also follows this list.
  • Use StaticPermutations.jl for dimension permutation logic in TensorAlgebra and NamedDimsArrays.
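
To make the name-based semantics above concrete, here is a minimal sketch of dimension alignment by name (the type and functions are illustrative stand-ins, not the NamedDimsArrays.jl implementation):

```julia
# Hypothetical sketch: a named-dims wrapper where binary operations
# align dimensions by name rather than by position.
struct NamedDimsSketch{T,N,A<:AbstractArray{T,N}}
    parent::A
    names::NTuple{N,Symbol}
end

# Permute `b` so its dimension names line up with `a`'s, then add.
function aligned_add(a::NamedDimsSketch, b::NamedDimsSketch)
    perm = map(n -> findfirst(==(n), b.names), a.names)
    return NamedDimsSketch(a.parent + permutedims(b.parent, perm), a.names)
end

x = NamedDimsSketch(ones(2, 3), (:i, :j))
y = NamedDimsSketch(ones(3, 2), (:j, :i))
aligned_add(x, y).parent == fill(2.0, 2, 3)  # true: aligned by name
```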
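
And a minimal sketch of the copy-on-write mechanism (AbstractCOWArray and the other names above are only candidates; this just shows the mechanism):

```julia
# Hypothetical sketch of copy-on-write array data: reads go straight to
# the parent array; the first write while shared makes a private copy.
mutable struct COWArraySketch{T,N} <: AbstractArray{T,N}
    parent::Array{T,N}
    shared::Bool
end
Base.size(a::COWArraySketch) = size(a.parent)
Base.getindex(a::COWArraySketch{T,N}, I::Vararg{Int,N}) where {T,N} = a.parent[I...]
function Base.setindex!(a::COWArraySketch{T,N}, v, I::Vararg{Int,N}) where {T,N}
    if a.shared
        a.parent = copy(a.parent)  # unshare before the first mutation
        a.shared = false
    end
    return a.parent[I...] = v
end

v = COWArraySketch(ones(2, 2), true)
v[1, 1] = 5.0  # triggers the copy; the originally shared data is untouched
```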

Testing

  • Unit tests for ITensors.ITensorsNamedDimsArraysExt.
  • Run ITensorsNamedDimsArraysExt examples in tests.
  • Unit tests for NDTensors.RankFactorization module.
  • Unit tests for NamedDimsArrays.NamedDimsArraysTensorAlgebraExt: fusedims, qr, eigen, svd.
  • Unit tests for NDTensors.CombinerArrays and NDTensors.CombinerArrays.CombinerArraysTensorAlgebraExt.

EmptyStorage

Diag

  • Define DiagonalArray (a minimal sketch follows this list).
  • Tensor contraction, addition, QR, eigendecomposition, SVD.
  • Use DiagonalArray as default data type instead of Diag in ITensor constructors.
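
A minimal sketch of the DiagonalArray idea, to fix conventions (illustrative only):

```julia
# Hypothetical sketch: an N-dimensional array that stores only its
# diagonal; off-diagonal entries read as zero.
struct DiagonalArraySketch{T,N,V<:AbstractVector{T}} <: AbstractArray{T,N}
    diag::V
    dims::NTuple{N,Int}
end
DiagonalArraySketch(diag::AbstractVector, dims::NTuple{N,Int}) where {N} =
    DiagonalArraySketch{eltype(diag),N,typeof(diag)}(diag, dims)
Base.size(a::DiagonalArraySketch) = a.dims
Base.getindex(a::DiagonalArraySketch{T,N}, I::Vararg{Int,N}) where {T,N} =
    allequal(I) ? a.diag[I[1]] : zero(T)

d = DiagonalArraySketch([1.0, 2.0], (2, 2, 2))
d[1, 1, 1], d[1, 2, 1]  # (1.0, 0.0)
```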

UniformDiag

  • Replace with a DiagonalArray wrapping an UnallocatedZeros type (see the sketch below).
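
Reusing the sketch above: the diagonal data can itself be lazy, so a uniform diagonal allocates nothing. Here FillArrays.Fill stands in for UnallocatedZeros-style lazy storage:

```julia
using FillArrays

# Hypothetical: a uniform diagonal whose diagonal is a lazy constant
# vector, so no dense data is allocated until needed.
u = DiagonalArraySketch(Fill(2.0, 3), (3, 3))
u[2, 2], u[1, 3]  # (2.0, 0.0)
```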

BlockSparse

  • Define BlockSparseArray (a minimal sketch follows this list).
  • Tensor contraction, addition, QR, eigendecomposition, SVD.
  • Use BlockSparseArray as default data type instead of BlockSparse in ITensor QN constructors.
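
A minimal sketch of the block sparse idea (illustrative, not the actual BlockSparseArray design): dense blocks stored in a dictionary keyed by block position, with missing blocks implicitly zero.

```julia
# Hypothetical sketch of block sparse storage for the matrix case.
struct BlockSparseSketch{T}
    blocks::Dict{Tuple{Int,Int},Matrix{T}}
    rowlengths::Vector{Int}  # block sizes along dimension 1
    collengths::Vector{Int}  # block sizes along dimension 2
end

# Blockwise addition over the union of the two block patterns.
function blockadd(a::BlockSparseSketch{T}, b::BlockSparseSketch{T}) where {T}
    blocks = Dict{Tuple{Int,Int},Matrix{T}}()
    for k in union(keys(a.blocks), keys(b.blocks))
        z = zeros(T, a.rowlengths[k[1]], a.collengths[k[2]])
        blocks[k] = get(a.blocks, k, z) + get(b.blocks, k, z)
    end
    return BlockSparseSketch(blocks, a.rowlengths, a.collengths)
end

a = BlockSparseSketch(Dict((1, 1) => ones(2, 2)), [2, 3], [2, 3])
b = BlockSparseSketch(Dict((2, 2) => ones(3, 3)), [2, 3], [2, 3])
keys(blockadd(a, b).blocks)  # blocks (1, 1) and (2, 2) are stored
```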

DiagBlockSparse

  • Use BlockSparseArray with blocks storing DiagonalArray and make sure all tensor operations work (see the sketch below).
  • Replace DiagBlockSparse in ITensor QN constructors.
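
The DiagBlockSparse replacement then falls out of composition: the same block dictionary, but with diagonal blocks (LinearAlgebra.Diagonal is used here for illustration in place of DiagonalArray):

```julia
using LinearAlgebra

# Hypothetical: block sparse storage whose stored blocks are themselves
# diagonal, so only the block diagonals are allocated.
diagblocks = Dict((1, 1) => Diagonal([1.0, 2.0]), (2, 2) => Diagonal([3.0]))
diagblocks[(1, 1)][1, 2]  # 0.0: within-block off-diagonals are implicit
```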

Combiner

  • Not sure what to do with this, but a lot of its functionality will be replaced by the new fusedims/matricize functionality in TensorAlgebra/BlockSparseArrays, and also by the new FusionTensor type. It will likely be superseded by CombinerArray, FusionTree, or something like that (a fusedims sketch follows).
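
For reference, the dense-array core of fusedims is just a permutation plus a reshape (a minimal sketch; the actual TensorAlgebra version also needs to handle block sparsity, gradings, and splitdims as the inverse):

```julia
# Hypothetical sketch: fuse groups of dimensions by permuting them to be
# adjacent and then reshaping.
function fusedims_sketch(a::AbstractArray, groups::Tuple...)
    perm = Tuple(Iterators.flatten(groups))
    fused = map(g -> prod(size(a, d) for d in g), groups)
    return reshape(permutedims(a, perm), fused...)
end

a = randn(2, 3, 4)
m = fusedims_sketch(a, (2,), (1, 3))  # 3 × 8 matricization of `a`
size(m) == (3, 8)  # true
```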

Simplify ITensor and Tensor constructors

  • Make ITensor constructors more uniform by using a style like tensor(storage::AbstractArray, inds::Tuple), avoiding constructors like DenseTensor, DiagTensor, BlockSparseTensor, etc.
  • Use rand(i, j, k), randn(i, j, k), zeros(i, j, k), fill(1.2, i, j, k), diagonal(i, j, k), etc. instead of randomITensor(i, j, k), ITensor(i, j, k), ITensor(1.2, i, j, k), and diagITensor(i, j, k). Maybe make these lazy/unallocated by default where appropriate, i.e. use UnallocatedZeros for zeros and UnallocatedFill for fill.
  • Consider randn(2, 2)(i, j) as a shorthand for creating an ITensor with indices (i, j) wrapping an array (a sketch of this shorthand follows this list). Alternatively, setinds(randn(2, 2), i, j) could be used.
  • Remove automatic conversion to floating point in ITensor constructor.
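
A sketch of how the randn(2, 2)(i, j) shorthand could work (the index and wrapper types here are hypothetical stand-ins, not the ITensors.jl types):

```julia
# Hypothetical sketch: calling an array on named indices wraps it in an
# ITensor-like type carrying those indices.
struct IndexSketch
    name::Symbol
    length::Int
end
struct ITensorSketch{A<:AbstractArray}
    data::A
    inds::Tuple{Vararg{IndexSketch}}
end
(a::AbstractArray)(inds::IndexSketch...) = ITensorSketch(a, inds)

i, j = IndexSketch(:i, 2), IndexSketch(:j, 2)
t = randn(2, 2)(i, j)  # an ITensorSketch with indices (i, j)
```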

Define TensorAlgebra submodule

  • TensorAlgebra submodule which defines contract[!][!], mul[!][!], add[!][!], permutedims[!][!], fusedims/matricize, contract(::Algorithm"matricize", ...), truncated QR, eigendecomposition, SVD, etc., with generic fallback implementations for AbstractArray and maybe some specialized implementations for Array. (Started in [NDTensors] Start TensorAlgebra module #1265 and [TensorAlgebra] Matricized QR tensor decomposition #1266.) A matricized contraction sketch follows this list.
  • Use ErrorTypes.jl for catching errors and calling fallbacks when matrix decompositions fail.
  • Move most matrix factorization logic from ITensors.jl into TensorAlgebra.
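
A minimal sketch of the matricized contraction algorithm referenced above (dense arrays only, contracting a trailing group of dimensions against a leading group; the real contract(::Algorithm"matricize", ...) would compose this with general permutations via fusedims):

```julia
# Hypothetical sketch: contract the last `ncon` dims of `a` with the
# first `ncon` dims of `b` by fusing to matrices, multiplying, and
# splitting the result back into the free dims.
function contract_matricize(a::AbstractArray, b::AbstractArray, ncon::Int)
    afree = size(a)[1:end-ncon]
    bfree = size(b)[ncon+1:end]
    amat = reshape(a, prod(afree), :)
    bmat = reshape(b, :, prod(bfree))
    return reshape(amat * bmat, afree..., bfree...)
end

a = randn(2, 3, 4)
b = randn(4, 5)
size(contract_matricize(a, b, 1)) == (2, 3, 5)  # true
```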

New Tensor semantics
