
Feature Request: Support for ReadOnlyMemory<T> and ReadOnlySpan<T> in Vector Creation #1063

Open
r-Larch opened this issue Feb 22, 2024 · 2 comments

Comments


r-Larch commented Feb 22, 2024

I am a user of the MathNet.Numerics library, extensively utilizing its linear algebra capabilities for performance-critical applications. I've noticed that the current API provides several methods for creating vectors from different data structures, such as arrays and enumerables. However, there seems to be a gap when it comes to modern memory types introduced in recent versions of .NET, namely ReadOnlyMemory<T> and ReadOnlySpan<T>.

Current Methods:

As of now, the library supports vector creation through the following methods:

  • Vector<T>.Build.DenseOfArray(T[])
  • Vector<T>.Build.DenseOfEnumerable(IEnumerable<T>)
  • Vector<T>.Build.DenseOfIndexed(int length, (int, T)[])
  • Vector<T>.Build.DenseOfVector(Vector<T>)

These methods are very useful, but they do not cover scenarios where the data resides in ReadOnlyMemory<T> or ReadOnlySpan<T>, which are increasingly common in high-performance .NET applications.

Proposed Enhancements:

To bridge this gap, I propose the introduction of the following methods:

  • Vector<T>.Build.DenseOfMemory(ReadOnlyMemory<T>)
  • Vector<T>.Build.DenseOfSpan(ReadOnlySpan<T>)

These additions would not only improve the flexibility and performance of vector creation in memory-constrained or latency-sensitive environments, but also align MathNet.Numerics with modern .NET development practices. Specifically, because Memory<T> and Span<T> convert implicitly to their read-only counterparts, these two methods would allow efficient vector creation from ReadOnlyMemory<T> and Memory<T>, as well as from ReadOnlySpan<T> and Span<T>, without copying or converting the underlying data. An illustrative sketch of how such methods could look is shown below.
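
For illustration only, here is a minimal sketch (not part of the current MathNet.Numerics API) of how the proposed methods could be prototyped today as extension methods on the vector builder. It assumes the existing Build.Dense(T[]) overload that binds directly to a raw array, and it still performs one copy from the span into that array; a truly zero-copy variant would need the storage-level changes discussed further down in this thread.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

public static class VectorBuilderSpanExtensions
{
    // Sketch only: copies the span into a new array once, then binds that
    // array to a dense vector via the existing Dense(T[]) overload
    // (which, unlike DenseOfArray, does not copy a second time).
    public static Vector<double> DenseOfSpan(
        this VectorBuilder<double> build, ReadOnlySpan<double> data)
    {
        return build.Dense(data.ToArray());
    }

    // The ReadOnlyMemory<T> variant simply forwards to the span-based one.
    public static Vector<double> DenseOfMemory(
        this VectorBuilder<double> build, ReadOnlyMemory<double> data)
    {
        return build.DenseOfSpan(data.Span);
    }
}

// Usage:
// ReadOnlySpan<double> slice = someLargeBuffer.AsSpan(offset, length);
// var v = Vector<double>.Build.DenseOfSpan(slice);
```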

Motivation:

The motivation for this request stems from the increasing use of Span<T> and Memory<T> types in .NET for managing memory more efficiently. By supporting these types, MathNet.Numerics can provide developers with more flexibility in how they handle numerical data, potentially reducing overhead and improving the performance of numerical computations.

I believe that these enhancements will significantly benefit users of MathNet.Numerics who are working with large datasets or in performance-critical applications, where memory efficiency and speed are paramount.

Thank you for considering this feature request. I am looking forward to any discussion on this topic and am happy to contribute to the implementation if needed.


jkalias (Member) commented Feb 24, 2024

Hi, this sounds like a valid and useful request. Would you mind sending a PR for it so I can take a detailed look at how you envision its concrete usage?


r-Larch (Author) commented Feb 28, 2024

After reviewing the source code, I've determined that significant modifications would be necessary to implement this feature effectively.

Accepting Memory or Span would only be advantageous if used for all internal mathematical operations. However, the current implementation of Vector<T> and Matrix<T> relies heavily on memory being stored in arrays. Changing this would require substantial modifications to all helper methods in ILinearAlgebraProvider, among others. Such changes would constitute a massive breaking change, as much of the public API would be altered.

To implement this feature, it would be necessary to first refactor the code to internally utilize ReadOnlySpan<T> and Span<T>, similar to what is done in the new System.Numerics.Tensors.TensorPrimitives.
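
For context, here is a small sketch of that span-based style using the float overloads of System.Numerics.Tensors.TensorPrimitives (from the System.Numerics.Tensors NuGet package). This is only an illustration of the pattern the internal kernels would follow, not MathNet.Numerics code:

```csharp
using System;
using System.Numerics.Tensors; // System.Numerics.Tensors NuGet package

class SpanKernelExample
{
    static void Main()
    {
        // Inputs and output live on the stack; no arrays are allocated.
        ReadOnlySpan<float> x = stackalloc float[] { 1f, 2f, 3f, 4f };
        ReadOnlySpan<float> y = stackalloc float[] { 10f, 20f, 30f, 40f };
        Span<float> result = stackalloc float[4];

        // Element-wise addition over spans, vectorized internally.
        TensorPrimitives.Add(x, y, result);

        Console.WriteLine(string.Join(", ", result.ToArray()));
    }
}
```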

Subsequently, we could refactor VectorStorage<T> and similar components to work with Memory<T>. Only after these steps could we introduce new Build.DenseOf... methods.

The benefits from a performance and memory efficiency standpoint would be enormous.
