comparison with related packages (eg Arraymancer) #18

Open
timotheecour opened this issue Jul 24, 2018 · 5 comments

@timotheecour

timotheecour commented Jul 24, 2018

/cc @mratsim @andreaferretti
I'd like to know how https://github.com/unicredit/neo compares with other packages, in particular https://github.com/mratsim/Arraymancer, so that it's clear which use cases are best suited for neo vs Arraymancer.

  • As I understand it, Arraymancer supports N-D tensors, whereas neo is 1D/2D only (at least pending Implement tensors #11). What else differs?

  • What's the best way to convert (without copying the underlying data!) an Arraymancer tensor to a neo vector/matrix?

  • And likewise in the reverse direction?

EDIT links

@andreaferretti
Owner

andreaferretti commented Jul 25, 2018

Let me give my point of view, then @mratsim can leave his.

Neo was born first (in the form of a previous project, linalg), and I decided to give it an interface that would be more familiar to mathematicians, rather than using n-dimensional arrays (mathematicians do use tensors, but with a slightly different meaning). So I first focused on vectors and matrices, leaving open the possibility to add tensors on top of this. Another thing I'd like to reintroduce on top of neo is the possibility to track dimensions at compile time.
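Just to illustrate what I mean by tracking dimensions at compile time, here is a minimal sketch (the types and names are made up, this is not neo's actual API) using static[int] generic parameters, so that a dimension mismatch becomes a compile-time error rather than a runtime one:

type
  # Hypothetical matrix type whose dimensions M and N are part of the type.
  StaticMatrix[M, N: static[int]; A] = object
    data: array[M * N, A]          # row-major storage

proc `*`[M, N, P: static[int]; A](a: StaticMatrix[M, N, A],
                                  b: StaticMatrix[N, P, A]): StaticMatrix[M, P, A] =
  # Naive multiplication; the inner dimension N is checked by the compiler,
  # so multiplying a 2x3 matrix by another 2x3 matrix simply does not compile.
  for i in 0 ..< M:
    for j in 0 ..< P:
      var acc: A
      for k in 0 ..< N:
        acc = acc + a.data[i * N + k] * b.data[k * P + j]
      result.data[i * P + j] = acc

var a: StaticMatrix[2, 3, float64]
var b: StaticMatrix[3, 4, float64]
let c = a * b                      # c is StaticMatrix[2, 4, float64]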

Arraymancer focused from the beginning on n-dimensional arrays, especially with the aim of implementing autograd for machine learning, in particular neural networks. Arraymancer also aims to work with OpenCL, which is not a focus for Neo. I am not particularly an expert on the innards of Arraymancer, but if I understand correctly it is rather smart about allocations, which can become a bottleneck in well-tuned machine learning systems, while Neo is rather naive and may sometimes end up allocating more than needed.

Given that Arraymancer is out there, I decided to postpone going in the direction of neural networks, and instead tried to bind more general linear algebra operations: determinants, eigenvalues and so on. One thing I would also like to support is sparse linear algebra, although that is only at an early stage.

Nowadays I don't have much time to improve Neo, while Arraymancer is more actively developed.

About converting: the fields in Neo are all public exactly for this reason: read here

@mratsim

mratsim commented Sep 1, 2018

Oh, I didn't see this issue until today.

Andrea nailed it.

I'm not sure Arraymancer is as smart as I want regarding allocations (I'd like loop fusion, for example), but Arraymancer has certainly led to the most allocation bugs opened against the Nim repo.

I didn't have time in the past 4 months, but now I do, so here is Arraymancer's short-term game plan:

  • Finishing GRU recurrent neural nets on CPU
  • Start on a visualization library for Arraymancer

There are a lot of things I would like to work on beyond that (CUDA/OpenCL/CPU parity, object detection, style transfer, einsum, serialization, sparse ...), but I feel like RNNs and data viz are the most important.

Regarding conversion: all fields in Arraymancer are public as well, but that's mostly because working with private fields in Nim is very painful. I will probably rewrite the Arraymancer backend (the CpuStorage type) once destructors are stable, to use pointer + length instead of the shallow seq it is using now:

type
  CpuStorage* {.shallow.} [T] = object
    ## Opaque data storage for Tensors
    ## Currently implemented as a seq with reference semantics (shallow copy on assignment).
    ## It may change in the future for a custom memory managed and 64 bit aligned solution.
    ##
    ## Warning ⚠:
    ##   Do not use Fdata directly, direct access will be removed in 0.4.0.

    # `Fdata` will be transformed into an opaque type once `unsafeToTensorReshape` is removed.
    Fdata*: seq[T]

This will make zero-copy operations easy from Neo, NumPy, or any other matrix/tensor library.
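For the record, a rough sketch of what such a pointer + length storage could look like with destructors (all names here are hypothetical, this is not Arraymancer's actual code):

type
  RawStorage[T] = object
    len: int
    data: ptr UncheckedArray[T]
    isOwner: bool                  # false when wrapping memory owned by Neo, NumPy, ...

proc `=destroy`[T](s: var RawStorage[T]) =
  # Only free buffers we allocated ourselves; borrowed views are left alone.
  if s.isOwner and s.data != nil:
    dealloc(s.data)

proc borrow[T](p: ptr T, len: int): RawStorage[T] =
  # Zero-copy view over a buffer allocated by another library.
  RawStorage[T](len: len, data: cast[ptr UncheckedArray[T]](p), isOwner: false)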

Also, we already discussed it, but there is now NimTorch, cc @sinkingsugar

@sinkingsugar

@mratsim thanks for the mention.

I'm actually a big fan of Arraymancer, and I love the way it solved many issues using pure Nim.

But sadly it is missing a lot of things compared to frameworks like PyTorch, which get hundreds of commits every day, with the whole data science scene moving insanely fast.
That's why I made NimTorch: ATen (the core C++ library that PyTorch now uses) is extremely clean and easy to integrate, has every operation possible, and supports CUDA and CPU transparently.

That said... to be honest, I already had on my todo list to investigate how to make Arraymancer use NimTorch behind the scenes, although I admit we are giving more priority to being compatible with PyTorch right now.

@andreaferretti
Owner

@sinkingsugar I just discovered NimTorch, great project! There is a little confusion right now, as there are a few Nim projects about linear algebra, but I think this is a sign that people are interested in using Nim for machine learning, and that is overall healthy.

@mratsim

mratsim commented Sep 3, 2018

I didn't check the ATen implementation, as PyTorch was still using the C Torch backend when I started Arraymancer, but ATen seems to use, or at least be compatible with, DLPack, which is a common tensor format proposed by the MXNet team:

https://github.com/pytorch/pytorch/blob/24eb5ad0c5388bd98f3f0ee3296ab4ad2c13bdd4/aten/src/ATen/dlpack.h#L91-L116

/*!
 * \brief Plain C Tensor object, does not manage memory.
 */
typedef struct {
  /*!
   * \brief The opaque data pointer points to the allocated data.
   *  This will be CUDA device pointer or cl_mem handle in OpenCL.
   *  This pointer is always aligns to 256 bytes as in CUDA.
   */
  void* data;
  /*! \brief The device context of the tensor */
  DLContext ctx;
  /*! \brief Number of dimensions */
  int ndim;
  /*! \brief The data type of the pointer*/
  DLDataType dtype;
  /*! \brief The shape of the tensor */
  int64_t* shape;
  /*!
   * \brief strides of the tensor,
   *  can be NULL, indicating tensor is compact.
   */
  int64_t* strides;
  /*! \brief The offset in bytes to the beginning pointer to data */
  uint64_t byte_offset;
} DLTensor;
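For reference, a hand-written Nim mirror of that struct (written here just for illustration, not taken from any existing binding) would look roughly like this:

type
  DLContext {.bycopy.} = object
    device_type: cint              # CPU, CUDA, OpenCL, ... (an enum in the real header)
    device_id: cint
  DLDataType {.bycopy.} = object
    code: uint8                    # type code (int / uint / float)
    bits: uint8
    lanes: uint16
  DLTensor {.bycopy.} = object
    data: pointer                  # device pointer, or cl_mem handle in OpenCL
    ctx: DLContext
    ndim: cint
    dtype: DLDataType
    shape: ptr int64               # ndim dimension sizes
    strides: ptr int64             # may be nil for a compact tensor
    byte_offset: uint64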

By coincidence, Arraymancer already uses something very similar: https://github.com/mratsim/Arraymancer/blob/de678ac12c2c3d3de3ec580cb7ecadcb11ea4b4c/src/tensor/data_structure.nim#L32-L48:

type
  Tensor*[T] = object
    ## Tensor data structure stored on Cpu
    ##   - ``shape``: Dimensions of the tensor
    ##   - ``strides``: Numbers of items to skip to get the next item along a dimension.
    ##   - ``offset``: Offset to get the first item of the tensor. Note: offset can be negative, in particular for slices.
    ##   - ``storage``: An opaque data storage for the tensor
    ## Fields are public so that external libraries can easily construct a Tensor.
    ## You can use ``.data`` to access the opaque data storage.
    ##
    ## Warning ⚠:
    ##   Assignment ```var a = b``` does not copy the data. Data modification on one tensor will be reflected on the other.
    ##   However modification on metadata (shape, strides or offset) will not affect the other tensor.
    ##   Explicit copies can be made with ``clone``: ```var a = b.clone```
    shape*: MetadataArray
    strides*: MetadataArray
    offset*: int
    storage*: CpuStorage[T]
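And, coming back to the original question about zero-copy conversion: since the fields above are public, an external library could wrap an existing contiguous row-major buffer (for example the data held by a Neo matrix) roughly like the sketch below. The wrapMatrix helper is made up, toMetadataArray is the metadata helper Arraymancer exposes at the time of writing (it may change between versions), and whether the seq assignment really avoids a copy depends on the {.shallow.} pragma and the Nim version, so take it as an illustration rather than a supported API:

import arraymancer

proc wrapMatrix[T](buf: seq[T], rows, cols: int): Tensor[T] =
  # Hypothetical helper: view an existing contiguous row-major buffer
  # as a 2-D Tensor without copying the elements.
  assert buf.len == rows * cols
  result.shape   = [rows, cols].toMetadataArray
  result.strides = [cols, 1].toMetadataArray     # contiguous, row-major
  result.offset  = 0
  shallowCopy(result.storage.Fdata, buf)         # share the buffer instead of copying it

The reverse direction (Arraymancer to Neo) works the same way, since Neo's fields are public too, as Andrea mentioned above.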
