
Generic Matrix/Vector/N-Dimensional Type #5

Open
mihirparadkar opened this issue Aug 29, 2016 · 14 comments

Comments

@mihirparadkar

mihirparadkar commented Aug 29, 2016

Why are the matrix and vector types separate depending on the value type they contain (i.e. Matrix32[M, N] and Matrix64[M, N] instead of Matrix[M, N, T])? Since the implementations of matrix and vector operations for 32-bit and 64-bit floats are effectively identical, wouldn't generic procs specialized on the element type allow a lot less code duplication? It would also allow easier scaling to complex matrices/vectors, and to boolean vectors for more concise and expressive indexing.

@freevryheid

freevryheid commented Sep 4, 2016

Is it even necessary to define vector/matrix types, given that these are already arrays/seqs? What about defining Matrix as a 2-D array, as in numpy, with matrix algebra applied to these, as in A = B.X, etc.?

@andreaferretti (Owner)

Sorry it took a while to reply - I was on holidays :-)

@mihirparadkar I tried to use Matrix[M, N, T] with T restricted to be float32 or float64, but I never managed to make the compiler happy about inferring all types and dimensions. If you are able to make this change and simplify the cases, I would be really happy to get a PR. Notice that I cannot just leave T unconstrained, as BLAS only implements matrix multiplication and so on for floating point types and not, say, integers.
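For illustration, a constrained generic type along these lines compiles on recent Nim compilers; this is a hedged sketch of the shape such a definition could take, not linalg's actual code, and the inference issues mentioned above may still bite on older compilers:

```nim
# Sketch: a statically sized matrix generic over the element type,
# with T restricted to the floating point types BLAS supports.
type
  Matrix[M, N: static[int]; T: float32 | float64] = array[M, array[N, T]]

proc `+`[M, N: static[int]; T: float32 | float64](
    a, b: Matrix[M, N, T]): Matrix[M, N, T] =
  # element-wise addition, identical for float32 and float64
  for i in 0 ..< M:
    for j in 0 ..< N:
      result[i][j] = a[i][j] + b[i][j]

let a: Matrix[2, 2, float64] = [[1.0, 2.0], [3.0, 4.0]]
let b: Matrix[2, 2, float64] = [[0.5, 0.5], [0.5, 0.5]]
doAssert (a + b)[1][1] == 4.5
```

Keeping the `float32 | float64` constraint is what preserves the point made above: an unconstrained T would collide with generic ring implementations such as emmy's.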

There is a tentative implementation of more generic linear algebra routines in my other library emmy. The types are compatible, so if you import both linalg and emmy you can have generic implementations for any ring T, but more specialized and fast implementations for the floating point case. The compiler will choose the more specific implementation if one is available, so you can just mix and match linalg and emmy and everything should work. Notice that this actually is a nice side benefit of the fact that the types in linalg are not generic in T.

The implementation in emmy has limitations, though: I only made it for the dynamic-size case, and more importantly the algorithms used are the naive multiplications.

@freevryheid I am not sure I understand your comment. The types defined in linalg are just type aliases, and in fact matrices are little more than a two-dimensional array (with a little extra information about the size and column-wise/row-wise layout).

@freevryheid

freevryheid commented Sep 15, 2016

@andreaferretti, I guess that's my point: very little to gain by defining new types when the existing ones should suffice, no? When interfacing with blas/lapack, arrays need to be column-aligned, so define them as such and leave it to the programmer to ensure this, or provide a transpose function to facilitate it.

@andreaferretti (Owner)

Well, I still need the matrix types since they hold some more fields other than the arrays. And I needed to define custom types in the static dimension case, to hold dimension info in the type. It just made sense to define the dynamic vector types by analogy.

In any case, the types are not distinct, just aliases. If you prefer to just spell them out, it is the same. That is, they are perfectly compatible with Nim types

@mratsim

mratsim commented Mar 30, 2017

I've started working on this and finished porting types.nim to fully use Matrix[M, N, T] syntax: https://github.com/mratsim/linear-algebra/blob/float-parametric/linalg/private/types.nim

It passes all the tests. Edit: I was actually loading the original linalg.

I didn't test with emmy and how Nim deals with specialized generics.

@freevryheid

freevryheid commented Apr 17, 2017

Here's another generic port that incorporates complex numbers as well:
https://github.com/freevryheid/lalite

This was coded to support my comments to @andreaferretti above. Sorry for the late response. Basically it comes down to defining a matrix[T] where T can be any number type, including complex. It is similar to the linalg matrix definition but uses a seq[T]-backed object (instead of an array) that stores row, column and order info via the toMatrix call. Calls to lapack and blas transform input matrices to column-major order and transform them back to row-major order on output, via a pointer to seq[0]; hence no need for arrays and static parameters, which IMHO overly complicate and restrict coding. I'm using the fortran blas library for matrix multiplication. Complex numbers are defined as array[2, T] instead of a tuple; I modified complex.nim accordingly. The example under lalite.nim calculates complex eigenvectors from a square matrix, per the Intel geev example online. This was mainly proof of concept and the code is still naive, without due consideration given to memory, speed, etc.
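The row-major to column-major round trip described above can be sketched like this (an illustrative helper, not lalite's actual code; `toColMajor` is a hypothetical name):

```nim
# Convert a row-major flat buffer to column-major order, as one would
# before passing `addr result[0]` to a Fortran-convention LAPACK/BLAS call.
proc toColMajor[T](data: seq[T], rows, cols: int): seq[T] =
  result = newSeq[T](data.len)
  for i in 0 ..< rows:
    for j in 0 ..< cols:
      # row-major index i*cols + j maps to column-major index j*rows + i
      result[j * rows + i] = data[i * cols + j]

let a = @[1.0, 2.0, 3.0,
          4.0, 5.0, 6.0]          # 2x3 matrix, row-major
doAssert toColMajor(a, 2, 3) == @[1.0, 4.0, 2.0, 5.0, 3.0, 6.0]
```

The inverse transform (column-major back to row-major) is the same loop with the two index expressions swapped, which is what makes the round trip on output straightforward.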

I offer it as a "lite" alternative to the current linalg and welcome any feedback.

@andreaferretti (Owner)

@freevryheid Thank you for your suggestions!

I would like very much to make the library more generic, but it is not so simple. Let me make a few comments on your approach:

  • the type parameter T is unconstrained - I think it would be better to restrict it to the valid types (this should be easy)
  • making the type parameter T generic prevents interoperability with any library that works with types different from real and complex numbers - say integers. Since the types in linalg are specific, one can import it together with a more generic library that works over any ring, and the compiler will choose the correct implementation
  • I would prefer not to touch complex.nim and be interoperable with the standard library (not that linalg supports complex numbers right now anyway...)
  • blas supports specifying the matrix order, so you should not need to convert back and forth between row-major and column-major order

I would like very much to join efforts and come to a common codebase if it makes sense, but I would like to preserve

  • implementation on the GPU
  • support for static checking of matrix sizes

Do you think we can come to some form of convergence?

@mratsim

mratsim commented Apr 18, 2017

Would love to discuss the Nim numerical ecosystem as well. Maybe on gitter/irc?

As @andreaferretti knows, I had tough challenges to use linalg's DVector and DMatrix types in my autograd library (a building block for neural networks). More info here on what was tried (object variant, dynamic dispatch ...).

In the end, I built a Tensor library from scratch, re-using nimblas as the backend to suit my needs.
It's not "light" though.

I need unified types for vectors, matrices and extensibility up to at least 4D tensors (height, width, color channel, and batch for basic image processing), and 6D if possible (depth for 3D images, time for video). That also means that I can't use static typing.

For now the library is still rough (tensor display might be cut for example), but here it is: https://github.com/mratsim/Arraymancer.

@andreaferretti
Owner

I would be happy to discuss on gitter - I don't think that having 3 or more different approaches benefits anyone, and I would be glad to drop linalg in favour of more convenient alternatives.

By the way, I happen to have some prototype neural network implementation that I did not have time to open source. I had to use methods for it to work nicely, and I did treat matrices and vectors separately.

I am not sure whether we will be able to meet on gitter, though, because of timezones: it is 7 PM here in Italy.

@mratsim

mratsim commented Apr 18, 2017

I'm in France as well, so same timezone. However, this is a side project for me, so I'm only free during lunch time or in the evening.

@freevryheid

I agree: to ensure longevity, the Nim community should decide on a preferred approach. My tenets ($0.02):

  1. I support the definition of a Matrix as a generic multi-dimensional array:

type
  Matrix*[M, N: static[int], T] = array[M, array[N, T]]

  2. This does not include the definition of an order, which is assumed row-major by default.

  3. To ease the use of complex numbers with lapack/blas, I recommend re-defining complex numbers in terms of array[2, T].

@andreaferretti (Owner)

The issue with not specifying an order is that transposing becomes a non-constant-time operation.
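The trade-off can be made concrete with a sketch (hypothetical `Mat` type, not linalg's actual code): with an order flag in the object, transposing just swaps the dimensions and flips the flag, touching no elements, whereas a fixed row-major layout forces an M*N copy.

```nim
type
  Order = enum rowMajor, colMajor
  Mat[T] = object
    order: Order
    m, n: int
    data: seq[T]

proc t[T](a: Mat[T]): Mat[T] =
  # constant-time transpose: swap dimensions and flip the order flag;
  # no element of `data` is rearranged
  Mat[T](order: if a.order == rowMajor: colMajor else: rowMajor,
         m: a.n, n: a.m, data: a.data)

proc `[]`[T](a: Mat[T], i, j: int): T =
  # the order flag decides how the flat buffer is indexed
  if a.order == rowMajor: a.data[i * a.n + j]
  else: a.data[j * a.m + i]

let a = Mat[int](order: rowMajor, m: 2, n: 3, data: @[1, 2, 3, 4, 5, 6])
let at = a.t
doAssert at[0, 1] == a[1, 0]   # element (0, 1) of the transpose is (1, 0) of a
```

A real implementation would share `data` through a ref to avoid the seq copy on assignment; the point here is only that no per-element work is needed.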

@mratsim

mratsim commented May 21, 2017

I've updated PR #14

The base types in linalg can now be generic over floats, with the original DVector32 ... being provided for compatibility/syntactic sugar:

type
  Vector*[N: static[int], T: SomeReal] = ref array[N, T]
  Matrix*[M, N: static[int], T: SomeReal] = object
    order: OrderType
    data: ref array[N * M, T]
  DVector*[T: SomeReal] = seq[T]
  DMatrix*[T: SomeReal] = ref object
    order: OrderType
    M, N: int
    data: seq[T]

  Vector32*[N: static[int]] = Vector[N, float32]
  Vector64*[N: static[int]] = Vector[N, float64]
  Matrix32*[M, N: static[int]] = Matrix[M, N, float32]
  Matrix64*[M, N: static[int]] = Matrix[M, N, float64]
  DVector32* = DVector[float32]
  DVector64* = DVector[float64]
  DMatrix32* = DMatrix[float32]
  DMatrix64* = DMatrix[float64]

Regarding the overall discussion, I see several needs:

On the common points (feel free to add/correct):

  • All target or will target the GPU, nimcuda by Andrea is very promising for such usage
  • All want to have the most performant type-checking and bounds-checking:
    • glm and linalg use Nim type-checking and static bounds-checking
    • Arraymancer uses Nim type-checking but no static checks; instead, all my bounds checks are deactivated for release builds.

On the differences:

  • glm targets OpenGL applications
  • linalg targets general numerical computing and interoperability with monadic/ring/group concepts introduced by Emmy
  • Arraymancer targets Data Science and in particular Deep Learning

I don't really know the OpenGL domain, but I don't see any overlap between glm and linalg/lalite/Arraymancer, so I will only talk about the last 3 and explain why I chose to do something different.

Why I didn't choose a simple 2D library:

The ideal deep learning library supports at least 4D arrays and up to 6D arrays.
4D being:

  • number of images
  • height, width
  • RGB color channel

6D is for 3D videos (time and depth are added).
For natural language processing, it is common to use 100-dimensional vectors to represent words.

Why no static parameter or array in the type:

  1. The main reason is that it prevents me from creating a sequence of closures, e.g. Matrix[2, 3] -> Matrix[3, 3] and Matrix[3, 3] -> Matrix[3, 3]

  2. It might be possible to have a static[seq[int]] (just tested, it actually compiles) but at the time I raised almost 10 bugs, most related to static.

  3. For interfacing with other languages, having the shape as a field is much easier.
    Instead, my code uses when compileOption("boundChecks"): in all procs, which is unset with -d:release. It's obviously not ideal, but it's more ergonomic.

  4. Ergonomics of static parameters and array are non-trivial for the user and the programmer
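The closure-sequence point above can be sketched as follows (hypothetical `DMat` and `makeLayer`, a stand-in for real layer types, not Arraymancer's code): with dynamic shapes, differently-sized layers share one Nim type and therefore fit in a single seq.

```nim
type DMat = seq[seq[float]]   # stand-in for a dynamic matrix type

proc makeLayer(inRows, outRows: int): proc(x: DMat): DMat =
  # hypothetical "layer": checks the input shape at run time and
  # produces an output with outRows rows
  result = proc(x: DMat): DMat =
    doAssert x.len == inRows   # run-time shape check, not compile-time
    newSeq[seq[float]](outRows)

# A (2 -> 3) layer chained with a (3 -> 3) layer: same Nim type,
# so both closures fit in one seq and can be folded over.
let layers = @[makeLayer(2, 3), makeLayer(3, 3)]
var x: DMat = newSeq[seq[float]](2)
for f in layers:
  x = f(x)
doAssert x.len == 3
# With static Matrix[2, 3] and Matrix[3, 3] types, `layers` could not
# hold both closures, since their proc types would differ.
```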

Data structure chosen

Most of why I chose to do things a specific way is explained here

  • For instant no-copy views/slicing I need to go beyond row/column-major order and use strided arrays

Unique differentiator

Thanks to Arraymancer's strided data structure, it can do arbitrary slicing/views without copying the underlying data. Arraymancer can also hold any homogeneous type, i.e. you can have integers, floats, bools, objects... You can't (and it's not a goal) have heterogeneous types like float + int in the same ndarray.
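The strided layout behind this can be sketched with hypothetical types (`View` and `slice` are illustrative names, not Arraymancer's implementation): a view keeps shape, strides and an offset into a shared buffer, so slicing only rewrites those three fields and copies no element.

```nim
type View[T] = object
  shape, strides: array[2, int]
  offset: int
  data: seq[T]

proc `[]`[T](v: View[T], i, j: int): T =
  # strided indexing: each axis contributes index * stride
  v.data[v.offset + i * v.strides[0] + j * v.strides[1]]

proc slice[T](v: View[T], r0, r1, c0, c1: int): View[T] =
  # inclusive row/column bounds; only metadata changes, data is shared
  View[T](shape: [r1 - r0 + 1, c1 - c0 + 1], strides: v.strides,
          offset: v.offset + r0 * v.strides[0] + c0 * v.strides[1],
          data: v.data)

let m = View[int](shape: [3, 3], strides: [3, 1], offset: 0,
                  data: @[1, 2, 3, 4, 5, 6, 7, 8, 9])
doAssert m.slice(1, 2, 1, 2)[0, 0] == 5   # row 1, col 1 of the original
```

Negative strides give reversed views and a stride of 2 gives stepped views by the same mechanism, which is how the `|2` and `|-1` slices below can avoid copies.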

import math, arraymancer, future

const
    x = @[1, 2, 3, 4, 5]
    y = @[1, 2, 3, 4, 5]

var
    vandermonde: seq[seq[int]]
    row: seq[int]

vandermonde = newSeq[seq[int]]()

for i, xx in x:
    row = newSeq[int]()
    vandermonde.add(row)
    for j, yy in y:
        vandermonde[i].add(xx^yy)

let foo = vandermonde.toTensor(Cpu)

echo foo

# Tensor of shape 5x5 of type "int" on backend "Cpu"
# |1      1       1       1       1|
# |2      4       8       16      32|
# |3      9       27      81      243|
# |4      16      64      256     1024|
# |5      25      125     625     3125|

echo foo.fmap(x => x.isPowerOfTwo) # map a function (=> need import future)

# Tensor of shape 5x5 of type "bool" on backend "Cpu"
# |true   true    true    true    true|
# |true   true    true    true    true|
# |false  false   false   false   false|
# |true   true    true    true    true|
# |false  false   false   false   false|

echo foo[1..2, 3..4] # slice

# Tensor of shape 2x2 of type "int" on backend "Cpu"
# |16     32|
# |81     243|

echo foo[3.._, _] # Span slice

# Tensor of shape 2x5 of type "int" on backend "Cpu"
# |4      16      64      256     1024|
# |5      25      125     625     3125|

echo foo[_..^3, _] # Slice until (inclusive, consistent with Nim)

# Tensor of shape 3x5 of type "int" on backend "Cpu"
# |1      1       1       1       1|
# |2      4       8       16      32|
# |3      9       27      81      243|

echo foo[_.._|2, _] # Step

# Tensor of shape 3x5 of type "int" on backend "Cpu"
# |1      1       1       1       1|
# |3      9       27      81      243|
# |5      25      125     625     3125|

echo foo[^1..0|-1, _] # Reverse step

# Tensor of shape 5x5 of type "int" on backend "Cpu"
# |5      25      125     625     3125|
# |4      16      64      256     1024|
# |3      9       27      81      243|
# |2      4       8       16      32|
# |1      1       1       1       1|

@andreaferretti (Owner)

I should mention that all the parts of this library related to dynamic matrices have been ported to a new library, neo, and substantially expanded.

I think tensors should be easily compatible with neo, and one should be able to transform between tensors, matrices and vectors without copying data. I still prefer having matrices and vectors as the main data types, even though tensors incorporate them.

I was thinking that maybe I could add a tensor datatype later, with slice operations and the possibility to convert 2-D tensors to matrices and 1-D tensors to vectors (and vice versa).

This would allow higher-dimensional data types while still making it easier to map BLAS operations onto the relevant types (sorry, I am always confused when trying to map BLAS to tensors).
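The no-copy conversion idea can be sketched like this (hypothetical `Tensor`, `Matrix` and `toMatrix`, not neo's or Arraymancer's actual types): if both wrap the same ref-counted buffer, converting a 2-D tensor to a matrix only rewrites metadata.

```nim
type
  Buf[T] = ref seq[T]           # shared underlying storage
  Tensor[T] = object
    shape: seq[int]
    data: Buf[T]
  Matrix[T] = object
    m, n: int
    data: Buf[T]

proc toMatrix[T](t: Tensor[T]): Matrix[T] =
  # only 2-D tensors have a matrix interpretation
  doAssert t.shape.len == 2, "only 2-D tensors convert to matrices"
  Matrix[T](m: t.shape[0], n: t.shape[1], data: t.data)  # no copy

var buf: Buf[float]
new(buf)
buf[] = @[1.0, 2.0, 3.0, 4.0]
let t = Tensor[float](shape: @[2, 2], data: buf)
let m = t.toMatrix
doAssert m.data == t.data   # same ref, so the buffer is shared
```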
