Column-major vs. row-major broadcasting #57

Open · DavisVaughan opened this issue Nov 6, 2018 · 26 comments

@DavisVaughan (Contributor) commented Nov 6, 2018

Hi xtensor team, really impressed with your product. I'm working on some examples using R, but I'm having a bit of trouble with the broadcasting piece. I think there might be a bug in the xtensor-r headers somewhere.

My guess is that the issue lies in the difference between how R and xtensor/Python denote dimensions (if I'm not mistaken).

In R: (2, 3, 4) == (2 rows, 3 cols, 4 deep)
In Python/xtensor: (2, 3, 4) == (2 deep, 3 rows, 4 cols)

With that said, take a look at my example below of a simple broadcast. It seems like the dimensions get mixed up somewhere along the way.

Cpp file:

https://gist.githubusercontent.com/DavisVaughan/1bebb3219fb08c48f91fa5ad3411643a/raw/fc75cc083672d7c26db62c47c1dea05a1eb5a0d0/xtensor-cpp.cpp

# download and source the file
url_link <- 'https://gist.githubusercontent.com/DavisVaughan/1bebb3219fb08c48f91fa5ad3411643a/raw/fc75cc083672d7c26db62c47c1dea05a1eb5a0d0/xtensor-cpp.cpp'
tf <- tempfile(fileext = ".cpp")
download.file(url_link, tf)
Rcpp::sourceCpp(tf)

mat <- matrix(c(1,2,3,4), ncol = 2)
mat
#>      [,1] [,2]
#> [1,]    1    3
#> [2,]    2    4

# I expect to be able to go from:
# [2,2] -> [2,2,1] == [2 rows, 2 cols] -> [2 rows, 2 cols, 1 deep]

mtrx_broadcast_cpp(mat, c(2, 2, 1))
#> , , 1
#> 
#>      [,1] [,2]
#> [1,]    1    2
#> [2,]    1    2
#> 
#> , , 2
#> 
#>      [,1] [,2]
#> [1,]    3    4
#> [2,]    3    4

# Okay...maybe I was supposed to do it the python way?
# [1, 2, 2] == [1 deep, 2 row, 2 col]
mtrx_broadcast_cpp(mat, c(1, 2, 2))
#> , , 1
#> 
#>      [,1] [,2]
#> [1,]    1    2
#> 
#> , , 2
#> 
#>      [,1] [,2]
#> [1,]    3    4

# Frustrating!

# this works with numpy, and I think it works with xtensor;
# the difference is just the R vs xtensor way of defining dimensions
library(reticulate)

np <- import("numpy", convert = FALSE)

two_by_two <- np$ones(c(2L,2L))
two_by_two
#> [[1. 1.]
#>  [1. 1.]]

np$broadcast_to(two_by_two, c(1L,2L,2L))
#> [[[1. 1.]
#>   [1. 1.]]]

# more informative would be to do..
np$broadcast_to(two_by_two, c(3L,2L,2L))
#> [[[1. 1.]
#>   [1. 1.]]
#> 
#>  [[1. 1.]
#>   [1. 1.]]
#> 
#>  [[1. 1.]
#>   [1. 1.]]]

# I want ^ this behavior, but if possible, with R semantics where I would
# specify c(2,2,3) as the dimension to broadcast to. But it doesn't even
# work right now, so let's focus on that.

Created on 2018-11-06 by the reprex package (v0.2.0).

@SylvainCorlay (Member)

Hi @DavisVaughan, thanks for posting here! We have not pushed the R bindings as much as the Python and Julia bindings, but hopefully there should not be too much difference, since they are built upon the same foundations.

(Note that while NumPy is row-major by default (but can have arbitrary layout, through strides), Julia and R are column-major by default and don't have internal strides; xtensor should be able to handle all of these.)

I am confused about your "deep" keyword. Regardless of the layout, for an array M of shape (d1, d2, ..., dn), the shape entries correspond to the maximal indices for accessing an element with M[i1, i2, ..., in]. Is it not the same in R?

The semantics of xt::broadcast are exactly the same as those of np.broadcast_to. (Actually, there is an issue open about renaming xt::broadcast to xt::broadcast_to.)

Python

In [1]: import numpy as np                                                                

In [2]: x = np.array([[1, 2, 3], [4, 5, 6]])                                              

In [3]: np.broadcast_to(x, (4, 2, 3))                                                     
Out[3]: 
array([[[1, 2, 3],
        [4, 5, 6]],

       [[1, 2, 3],
        [4, 5, 6]],

       [[1, 2, 3],
        [4, 5, 6]],

       [[1, 2, 3],
        [4, 5, 6]]])

C++

[cling]$ #include <xtensor/xarray.hpp>
[cling]$ #include <xtensor/xbroadcast.hpp>
[cling]$ #include <xtensor/xio.hpp>
[cling]$ #include <iostream>
[cling]$ auto x = xt::xarray<double>({{1, 2, 3}, {4, 5, 6}});
[cling]$ auto b = xt::broadcast(x, {4, 2, 3});
[cling]$ std::cout << b;
{{{ 1.,  2.,  3.},
  { 4.,  5.,  6.}},
 {{ 1.,  2.,  3.},
  { 4.,  5.,  6.}},
 {{ 1.,  2.,  3.},
  { 4.,  5.,  6.}},
 {{ 1.,  2.,  3.},
  { 4.,  5.,  6.}}}

@DavisVaughan (Contributor, Author) commented Nov 6, 2018

Thanks for the quick response.

"deep" was just my name for the third dimension in simple terms. I needed a way to identify it over just saying "the third dimension" because of how R and Python order them differently.

Regardless of the layout, for an array M of shape (d1, d2, ..., dn), the shape entries correspond to the maximal indices for accessing an element with M[i1, i2, ..., in]. Is it not the same in R?

You are correct about this, it works the same in R.

d1 <- 2
d2 <- 4
d3 <- 3

M <- array(1:24, dim = c(d1, d2, d3))

# last element (1-based indexing)
M[2, 4, 3]
#> [1] 24

This should be a clearer example that demonstrates a side effect of the issue. Is this expected?

I would have thought that you can broadcast a (3,4) matrix up to an array of (3,4,2) (or maybe in the python world, (2,3,4)) with no issues.

# download and source the file
url_link <- 'https://gist.githubusercontent.com/DavisVaughan/1bebb3219fb08c48f91fa5ad3411643a/raw/fc75cc083672d7c26db62c47c1dea05a1eb5a0d0/xtensor-cpp.cpp'
tf <- tempfile(fileext = ".cpp")
download.file(url_link, tf)
Rcpp::sourceCpp(tf)

mat <- matrix(c(1,2,3,4,5,6,7,8,9,10,11,12), ncol = 4)
mat
#>      [,1] [,2] [,3] [,4]
#> [1,]    1    4    7   10
#> [2,]    2    5    8   11
#> [3,]    3    6    9   12

arr <- array(mat, c(3,4,1))
arr
#> , , 1
#> 
#>      [,1] [,2] [,3] [,4]
#> [1,]    1    4    7   10
#> [2,]    2    5    8   11
#> [3,]    3    6    9   12

mtrx_broadcast_cpp(mat, c(3,4,2))
#> Error in mtrx_broadcast_cpp(mat, c(3, 4, 2)): Incompatible dimension of arrays, compile in DEBUG for more info

mtrx_broadcast_cpp(arr, c(3,4,2))
#> , , 1
#> 
#>      [,1] [,2] [,3] [,4]
#> [1,]    1    4    7   10
#> [2,]    2    5    8   11
#> [3,]    3    6    9   12
#> 
#> , , 2
#> 
#>      [,1] [,2] [,3] [,4]
#> [1,]    1    4    7   10
#> [2,]    2    5    8   11
#> [3,]    3    6    9   12

@DavisVaughan (Contributor, Author)

I think the issue here is that the broadcasting semantics are trying to do:

   (3, 4)
(3, 4, 2)
-------
error

when they should be doing this to match up with what we use in R:

(3, 4)
(3, 4, 2)
------
(3, 4, 2)

it works for the array because that third dimension is explicit:

(3, 4, 1)
(3, 4, 2)
------
(3, 4, 2)

@SylvainCorlay (Member)

Ok, so xtensor and numpy implicitly prepend the shape with ones before applying same-dimension broadcasting. So for an operation involving a 2-D and a 1-D array, the 1-D array is considered a row.
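
For concreteness, the prepending rule can be seen with a small numpy sketch via reticulate (assuming numpy is available; the values are just for illustration):

library(reticulate)
np <- import("numpy", convert = FALSE)

m <- np$ones(c(3L, 3L))       # shape (3, 3)
v <- np$array(c(10, 20, 30))  # shape (3,): implicitly a (1, 3) row
np$add(m, v)
#> [[11. 21. 31.]
#>  [11. 21. 31.]
#>  [11. 21. 31.]]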

From your last message, I understand that R implicitly appends ones to the shape of lower-dimensional operands. In an operation involving a 1-D and a 2-D operand, the 1-D array is a column?

@SylvainCorlay (Member)

By n-th dimension, I always mean the maximal value for the n-th index when accessing array(..., i_n, ...).

@DavisVaughan (Contributor, Author)

That is exactly right! Rather than prepend ones, I guess we implicitly append them as you said.

one_dim <- 1:5 # really a vector
two_dim <- matrix(1:10, ncol = 2)

one_dim
#> [1] 1 2 3 4 5
two_dim
#>      [,1] [,2]
#> [1,]    1    6
#> [2,]    2    7
#> [3,]    3    8
#> [4,]    4    9
#> [5,]    5   10

# 1D is treated as a column
one_dim + two_dim
#>      [,1] [,2]
#> [1,]    2    7
#> [2,]    4    9
#> [3,]    6   11
#> [4,]    8   13
#> [5,]   10   15

@DavisVaughan (Contributor, Author) commented Nov 6, 2018

As a side note, in R you can't even do this, which I find insane and is the reason I want xtensor to gain some traction in the R community.

one_dim <- matrix(1:5, ncol = 1)
two_dim <- matrix(1:10, ncol = 2)

one_dim
#>      [,1]
#> [1,]    1
#> [2,]    2
#> [3,]    3
#> [4,]    4
#> [5,]    5
two_dim
#>      [,1] [,2]
#> [1,]    1    6
#> [2,]    2    7
#> [3,]    3    8
#> [4,]    4    9
#> [5,]    5   10

one_dim + two_dim
#> Error in one_dim + two_dim: non-conformable arrays

@SylvainCorlay (Member)

Ok. So at the moment xtensor supports arbitrary memory layouts, but the "prepending" logic is deeply baked in.

For example, we also fully define the behavior of accessing elements of an array without specifying enough coordinates, or specifying too many of them. The rule that is applied is the only one guaranteeing that (a + b)[i1, ..., in] = a[i1, ..., in] + b[i1, ..., in], where a and b may have different shapes and the sum is a broadcasting operation.

The only way to guarantee this is to prepend the multi-index with zeros until the dimension is reached, or discard the left-most indices until it is reached...

Also, we would not want to mix expressions with one broadcasting semantics and expressions with another broadcasting semantics.

However, we could provide a version of broadcast_to that has the R semantics.
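
In the meantime, that appending semantics can be emulated by hand: append trailing 1s with a reshape, then broadcast as usual. A minimal numpy sketch via reticulate (assuming numpy is available):

library(reticulate)
np <- import("numpy", convert = FALSE)

a <- np$ones(c(3L, 4L))                   # shape (3, 4)
a_padded <- np$reshape(a, c(3L, 4L, 1L))  # append a trailing 1 -> (3, 4, 1)
np$broadcast_to(a_padded, c(3L, 4L, 2L))$shape
#> (3, 4, 2)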

@DavisVaughan (Contributor, Author) commented Nov 6, 2018

Is that enough? This actually affects mathematical operations that use broadcasting as well.

# download and source the file
url_link <- 'https://gist.githubusercontent.com/DavisVaughan/800f11e21ec1b4ea62332cf35f130fc1/raw/f578022c703990fdf5bad998c82aecc727f84720/xtensor-cpp-sum.cpp'
tf <- tempfile(fileext = ".cpp")
download.file(url_link, tf)
Rcpp::sourceCpp(tf)

mat <- matrix(c(1,2,3,4,5,6,7,8,9,10,11,12), ncol = 4)
mat
#>      [,1] [,2] [,3] [,4]
#> [1,]    1    4    7   10
#> [2,]    2    5    8   11
#> [3,]    3    6    9   12

arr <- array(mat, c(3,4,1))
arr
#> , , 1
#> 
#>      [,1] [,2] [,3] [,4]
#> [1,]    1    4    7   10
#> [2,]    2    5    8   11
#> [3,]    3    6    9   12

arr2 <- mtrx_broadcast_cpp(arr, c(3,4,2))
arr2
#> , , 1
#> 
#>      [,1] [,2] [,3] [,4]
#> [1,]    1    4    7   10
#> [2,]    2    5    8   11
#> [3,]    3    6    9   12
#> 
#> , , 2
#> 
#>      [,1] [,2] [,3] [,4]
#> [1,]    1    4    7   10
#> [2,]    2    5    8   11
#> [3,]    3    6    9   12

# # Crashes RStudio
# # (3, 4) + (3, 4, 2)
# mtrx_add_cpp(mat, arr2)

# From what you say, this is read as:
# (1, 3, 4) + (3, 4, 2)
# which crashes

# This works because 3rd dim is explicit
# (3, 4, 1) + (3, 4, 2)
mtrx_add_cpp(arr, arr2)
#> , , 1
#> 
#>      [,1] [,2] [,3] [,4]
#> [1,]    2    8   14   20
#> [2,]    4   10   16   22
#> [3,]    6   12   18   24
#> 
#> , , 2
#> 
#>      [,1] [,2] [,3] [,4]
#> [1,]    2    8   14   20
#> [2,]    4   10   16   22
#> [3,]    6   12   18   24

My current fix is to pre-calculate the dimensionality required to do the broadcasted operation, and alter the dims on the R side before handing it off to xtensor. So for mtrx_add_cpp(mat, arr2) I would change the dims of mat from (3, 4) to (3, 4, 1) before handing off to xtensor. (This essentially sounds like what you guys do, but I append the 1s and you prepend them)
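
A minimal sketch of that dim-padding step in plain R (pad_dims is a hypothetical helper written for illustration, not part of any package; mat and arr2 are from the snippet above):

# hypothetical helper: pad dim() with trailing 1s up to the target dimensionality
pad_dims <- function(x, ndim) {
  d <- if (is.null(dim(x))) length(x) else dim(x)
  dim(x) <- c(d, rep(1L, ndim - length(d)))
  x
}

mat_padded <- pad_dims(mat, 3L)  # (3, 4) -> (3, 4, 1)
dim(mat_padded)
#> [1] 3 4 1
# mtrx_add_cpp(mat_padded, arr2) is then the working (3, 4, 1) + (3, 4, 2) case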

@SylvainCorlay (Member)

My current fix is to pre-calculate the dimensionality required to do the broadcasted operation, and alter the dims on the R side before handing it off to xtensor. So for mtrx_add_cpp(mat, arr2) I would change the dims of mat from (3, 4) to (3, 4, 1) before handing off to xtensor. (This essentially sounds like what you guys do, but I append the 1s and you prepend them)

Exactly. What I was proposing was for our broadcast_to to have an optional boolean argument enabling exactly that.

@DavisVaughan (Contributor, Author)

I agree, that sounds good. But does that solve the problem just posed with mathematical operations involving broadcasting? Assuming broadcast_to() is called internally for those (it might not be?), I am not sure how I could pass on that option?

@SylvainCorlay (Member)

Assuming broadcast_to() is called internally for those (it might not be?), I am not sure how I could pass on that option?

No, it is not used internally. The C++ internal arithmetic will keep the current semantics. The broadcasting logic is encoded in assignment, iteration, etc. For example, you can do m.begin<column_major>({4, 5, 6}) and you get an iterator on a broadcasted shape of {4, 5, 6}...

@wolfv (Member) commented Nov 15, 2018

R really seems to be the outlier with column-wise broadcasting. Julia + Matlab/Octave (which also default to col-major memory) seem to have row-major broadcasting (Matlab only since 2016, hehe).

I think we should clearly note this difference. In terms of performance, column-major broadcasting for column-major storage should be faster. We should change the xtensor core slightly to make sure that broadcast (which should also be renamed to broadcast_to) is as fast as the implicit broadcast, by adding the appropriate strides. (I am thinking we could add a broadcast_extend method or so, that guarantees to only extend with 1s, so that we can continue to guarantee a contiguous layout.)

@wolfv changed the title from "Broadcasting issues - A difference in R vs xtensor 'dimension' definition" to "Broadcasting issues - column-major vs. row-major broadcasting" on Nov 19, 2018
@DavisVaughan (Contributor, Author)

Regarding our conversation about why I append ones rather than prepend them, I think the answer is just "that's how it works in R". As a simple example, think about how you'd convert a 2D matrix to a 3D structure.

# 5x1
x <- matrix(1:5)
x
#>      [,1]
#> [1,]    1
#> [2,]    2
#> [3,]    3
#> [4,]    4
#> [5,]    5

dim(x)
#> [1] 5 1

# I expect this
# 5x1x1 appending a third dimension
y <- x
dim(y) <- c(5, 1, 1)
y
#> , , 1
#> 
#>      [,1]
#> [1,]    1
#> [2,]    2
#> [3,]    3
#> [4,]    4
#> [5,]    5

# I don't expect this
# 1x5x1 prepending a third dimension
y <- x
dim(y) <- c(1, 5, 1)
y
#> , , 1
#> 
#>      [,1] [,2] [,3] [,4] [,5]
#> [1,]    1    2    3    4    5

Where in numpy I think you'd do this to get my "expected" result:

import numpy as np
x = np.array([[1, 2, 3, 4, 5]]).T
x.reshape([1,5,1])
array([[[1],
        [2],
        [3],
        [4],
        [5]]])

With the 1 prepended rather than appended.

Am I missing something?

@SylvainCorlay (Member)

Quick question: how did you initialize y in this code snippet? (Feel free to edit the post.)

@DavisVaughan (Contributor, Author)

do you see the y <- x?

@SylvainCorlay (Member)

But at this point, y == x. Both dim(y) <- c(1, 5, 1) and dim(y) <- c(5, 1, 1) should work.

@DavisVaughan (Contributor, Author)

Both do work; that's real, executed R code up there.

When you have a 5-row, 1-column matrix and you add a third dimension, I expect to keep the 5-row, 1-column layout, with that extra third dimension "surrounding" it (hard to explain). The only way to keep the 5-row, 1-column layout here is to append the 1, as in c(5, 1, 1). In numpy, the layout is kept by prepending the 1, from what I can tell.

Maybe I'm thinking about it too much in terms of what is being printed out, but I think that is the most intuitive result.

@SylvainCorlay (Member)

Although, in numpy, you can also do reshape(5, 1, 1) as much as reshape(1, 5, 1)...

The main difference is that for numpy & xtensor

[[1, 2, 3],        [10, 20, 30]       [[11, 22, 33]
 [1, 2, 3]    +                   =    [11, 22, 33],
 [1, 2, 3]]                            [11, 22, 33]]

i.e. 1-D arrays implicitly correspond to the last dimension of higher-dimensional arrays
(and lower-dimensional arrays correspond to the last dimensions of higher-dimensional arrays),
while with your rule, you want

[[1, 2, 3],        [10,        [[11, 12, 13]
 [1, 2, 3]    +     20,    =    [21, 22, 23],
 [1, 2, 3]]         30]         [31, 32, 33]]

i.e. 1-D arrays implicitly correspond to the first dimension of higher-dimensional arrays
(and lower-dimensional arrays correspond to the first dimensions of higher-dimensional arrays).

@DavisVaughan (Contributor, Author) commented Nov 27, 2018

Still digesting what you are saying above, and trying to come up with reasoning for why I think R should do what I'm saying. In the meantime, the reticulate group from RStudio has dealt with this somewhat and summarized their results in this post; you might find it useful in understanding how R arrays work internally!
https://rstudio.github.io/reticulate/articles/arrays.html

I actually think this section has a lot to do with my personal confusion
https://rstudio.github.io/reticulate/articles/arrays.html#displaying-arrays

@DavisVaughan (Contributor, Author)

So I gave an internal presentation of the current state of rray to the tidyverse team; they seemed fairly interested! As I was describing how broadcasting works for those who didn't know, Hadley immediately picked up on the fact that I was appending implicit dimensions on the RHS, rather than prepending them on the LHS. He agreed that the behavior I was proposing was correct, and his first thought was that this is just "the default direction of vectorization in R", but we still couldn't come up with the exact reason why, even though we are convinced it is the right thing to do.

The best simple example I can come up with to demonstrate this is the default behavior of what happens when a vector is converted to a matrix in R vs numpy:

# 1 column matrix
# i.e. dims appended to RHS (5, 1)
r_matrix <- as.matrix(1:5)

r_matrix
#>      [,1]
#> [1,]    1
#> [2,]    2
#> [3,]    3
#> [4,]    4
#> [5,]    5

dim(r_matrix)
#> [1] 5 1

library(reticulate)
np <- import("numpy", convert = FALSE)

# 1 row matrix
# i.e. dims prepended on the LHS (1, 5)
py_matrix <- np$asmatrix(1:5)

py_matrix
#> [[1 2 3 4 5]]

py_matrix$shape
#> (1, 5)

py_to_r(py_matrix)
#>      [,1] [,2] [,3] [,4] [,5]
#> [1,]    1    2    3    4    5

Created on 2018-12-19 by the reprex package (v0.2.1.9000)

@SylvainCorlay (Member)

Thanks for your post. I am glad that rray got positive feedback from the team at RStudio. On the packaging side, I have been iterating on the Xtensor.R repository, which can be installed from GitHub and should become the source for the R package.

@SylvainCorlay (Member)

For the prepending / appending behavior, the problem is symmetrical. I am mostly worried about mixing the two behaviors. I can post a detailed answer on the implications.

@hadley commented Dec 20, 2018

I think you can see the same underlying thinking when subsetting data frames:

df <- data.frame(x = 1:3, y = 4:6)
df[1] # 1 col
#>   x
#> 1 1
#> 2 2
#> 3 3
df[1, 1:2] # 1 row, 2 columns
#>   x y
#> 1 1 4

@SylvainCorlay changed the title from "Broadcasting issues - column-major vs. row-major broadcasting" to "Column-major vs. row-major broadcasting" on Dec 20, 2018
@SylvainCorlay (Member)

@hadley thanks for posting. Here is a write-up of my thoughts on this. Let me know what you think:

Broadcasting with same number of dimensions

I think that everybody agrees on the semantics of broadcasting with the same number of dimensions.

Dimensions of extent 1 are expanded to match the corresponding dimension of the other arguments in a broadcasting operation (numpy-style ufunc, arithmetic operation):

(4, 1, 3) # shape(A)
(4, 2, 3) # shape(B)
(1, 2, 1) # shape(C)
---------
(4, 2, 3) # shape(Result)
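
As a quick check, the same shapes in numpy via reticulate (assuming numpy is available):

library(reticulate)
np <- import("numpy", convert = FALSE)

A <- np$ones(c(4L, 1L, 3L))
B <- np$ones(c(4L, 2L, 3L))
C <- np$ones(c(1L, 2L, 1L))
np$add(np$add(A, B), C)$shape
#> (4, 2, 3)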

Broadcasting with different number of dimensions

Now, if the numbers of dimensions differ, we have two choices: implicitly prepending the shape with ones, or appending ones.

Prepending                         Appending
   (1, 3) # A                      (4, 1)    # A
(4, 2, 3) # B                      (4, 2, 3) # B
---------                          ---------
(4, 2, 3) # Result                 (4, 2, 3) # Result

The prepending behavior corresponds to "implicitly" seeing a 1-D array as a row in a 2-D operation,

[[1, 2, 3],        [10, 20, 30]       [[11, 22, 33]
 [1, 2, 3]    +                   =    [11, 22, 33],
 [1, 2, 3]]                            [11, 22, 33]]

and the appending behavior corresponds to implicitly seeing a 1-D array as a column in a 2-D operation.

[[1, 2, 3],        [10,        [[11, 12, 13]
 [1, 2, 3]    +     20,    =    [21, 22, 23],
 [1, 2, 3]]         30]         [31, 32, 33]]

Note that this semantic difference is independent of the underlying memory layout. xtensor supports row-major and column-major layouts, and uses the prepending behavior. It optimizes assignments and SIMD-parallelizes things when the layouts of the arguments match.

However, the layout has an implication on performance. A naive implementation of a 2-D + 1-D operation will typically be faster on row-major layouts using prepending, and faster on column-major layouts using appending.

Implications for the access operator and iteration

Another implication of this is the behavior of element accessors when passing too many or too few indices to operator(). Since the number of indices is fixed at compile time while the dimension is a runtime value, we need to handle both cases.

One desirable property is to always ensure that (a + b)(i1, ..., in) = a(i1, ..., in) + b(i1, ..., in) regardless of the number of arguments, when a and b have different numbers of dimensions and are broadcast.

The only way to ensure this is to

  • in the "prepending" case, prepend (i1, ..., in) with zeros, or drop the left-most indices until the dimensions match
  • in the "appending" case, append (i1, ..., in) with zeros, or drop the right-most indices until the dimensions match

So since we picked the prepending behavior, we had to prepend with zeros. It also has similar implications for how we do broadcasting iterators.

Conclusion

I think that it is ultimately a symmetrical choice. However, as I said earlier, it is not bound to the memory layout of the n-d array. NumPy has chosen the prepending behavior, and we stuck to it.

xtensor can definitely offer an appending behavior, especially if it is the choice required by R. However, I just think that it might be weird for users familiar with broadcasting in Python to find a subtly different broadcasting behavior between R and NumPy, one that will require some reading to understand.

I am not sure from your examples how the appending behavior is more natural to R, but I might be missing something.

@DavisVaughan (Contributor, Author)

I just realized that Julia uses column-major arrays, and they have broadcasting baked in, so I was curious about whether they append or prepend new dimensions when doing broadcasting operations.

tl;dr - They append 1s like I want to be able to do with R + xtensor.

(Hopefully this is a bit more proof that I'm not insane in wanting this, as it seems the natural thing to do with column-major arrays. I'd argue that the most intuitive behavior here depends on the memory layout.)

Check out these two examples that demonstrate the differences in Julia vs Numpy broadcasting behavior:

Julia: (2, 2) + (2, 2, 1) = (2, 2, 1)
How? The (2, 2) matrix is broadcast to (2, 2, 1) first.

[Screenshot: Julia session showing (2, 2) + (2, 2, 1) yielding a (2, 2, 1) result]

Numpy: (2, 2) + (2, 2, 1) = (2, 2, 2)
How? The (2, 2) matrix is broadcast to (1, 2, 2) first.

[Screenshot: numpy session showing (2, 2) + (2, 2, 1) yielding a (2, 2, 2) result]
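
The numpy half of that comparison can be reproduced from R with reticulate (assuming numpy is installed):

library(reticulate)
np <- import("numpy", convert = FALSE)

a <- np$ones(c(2L, 2L))      # shape (2, 2)
b <- np$ones(c(2L, 2L, 1L))  # shape (2, 2, 1)
np$add(a, b)$shape
#> (2, 2, 2)
# the (2, 2) operand was implicitly prepended to (1, 2, 2), not appended to (2, 2, 1)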
