
[WIP] Conv2D Layer #39

Open · wants to merge 7 commits into master
Conversation

@plavin (Contributor) commented Aug 2, 2017

This still isn't quite perfect. The only thing left to do is to figure out how to make grad_func work with batched output.

/*
std::vector<array> out;
for (int i = 0; i < n; i++) {
    auto a = matmulNT(grad_out_reshape(span, span, i), weights_reshape); // Problem is here - can't call () on Variable
@plavin (Contributor, Author) commented:

This line is all that's preventing me from getting batches working.
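
One possible workaround, sketched below: index the underlying af::array instead of the Variable, since operator() exists on af::array. This assumes Variable exposes .array() (it is used that way elsewhere in this PR); note that dropping to raw arrays detaches these slices from the autograd graph.

// Sketch of a workaround, not this PR's code: slice the wrapped af::array,
// since af::array supports operator() while Variable does not.
// Assumes Variable::array() returns the underlying af::array.
std::vector<af::array> out;
for (int i = 0; i < n; i++) {
    af::array slice = grad_out_reshape.array()(af::span, af::span, i);
    // matmulNT(A, B) computes A * transpose(B) on plain arrays
    out.push_back(af::matmulNT(slice, weights_reshape.array()));
}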

Member commented:

I can make the matmulXY functions in arrayfire-ml support batches for now.
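
For illustration, a batched matmulNT at the array level could look like the sketch below: one GEMM per slice along dim 2, joined back together. The helper and its name are hypothetical, not arrayfire-ml API, and newer ArrayFire releases may handle the batching inside a single af::matmulNT call when both operands are 3D with matching batch dimensions.

#include <arrayfire.h>
#include <vector>

// Hypothetical helper: multiplies each slice of lhs [M K B] with the
// transposed matching slice of rhs [N K B], producing out [M N B].
af::array batchedMatmulNT(const af::array &lhs, const af::array &rhs) {
    std::vector<af::array> slices;
    for (dim_t i = 0; i < lhs.dims(2); ++i) {
        slices.push_back(af::matmulNT(lhs(af::span, af::span, i),
                                      rhs(af::span, af::span, i)));
    }
    af::array out = slices[0];
    for (size_t i = 1; i < slices.size(); ++i) {
        out = af::join(2, out, slices[i]);  // stack along the batch dim
    }
    return out;
}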

@plavin (Contributor, Author) commented:

That would be good.

@pavanky (Member) left a comment:

Can you rename Conv2D to Convolve2 to be consistent with ArrayFire?

// Builds the inverse of the permutation stored in tmp.
for (int i = 0; i < 4; i++) {
    tmp2[tmp[i]] = i;
}
auto reverse = Variable(array(4, tmp2), false);
Member commented:

reverse is not being used anymore.

if (b.array().dims(1) != 1) {
    throw af::exception("nn::Linear: Bias must be a vector.");
}
dim4 pdims = w.array().dims();
Member commented:

Btw, I added a .dims() method to Variable. You don't need to do w.array().dims().
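
For example, the snippet above could then shrink to the following (a sketch, assuming Variable::dims mirrors af::array::dims, including the per-dimension overload):

if (b.dims(1) != 1) {
    throw af::exception("nn::Linear: Bias must be a vector.");
}
dim4 pdims = w.dims();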

{
    auto res = conv2d(input, m_parameters[0], m_wx, m_wy, m_sx, m_sy, m_px, m_py);
    if (m_bias) {
        res = res + tileAs(m_parameters[1], res);
Member commented:

I am not familiar with bias in a Convolution layer. Let me know if you find a reference for this.

@plavin (Contributor, Author) commented:

The AlexNet model I pulled from Caffe's Model Zoo has both weights and biases for every learned layer.

@plavin (Contributor, Author) commented:

You can see the biases in this implementation: http://www.cs.toronto.edu/~guerzhoy/tf_alexnet/

Member commented:

@plavin I mean the way bias is used here. I don't know if it is the same as what we are doing in the Linear layer.
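
For reference, the common Conv2D convention, which the tileAs call above appears to implement, is one scalar bias per output channel, broadcast over both spatial dimensions and the batch, whereas Linear adds one bias per output unit. A shape sketch follows; the HWCN layout and the helper name are assumptions, not this PR's code.

// res  : [H_out, W_out, C_out, N]  convolution output
// bias : [1,     1,     C_out, 1]  one scalar per output channel
// Tiling repeats bias over H_out, W_out and N, so every spatial
// position in channel c gets the same bias value.
af::array addConvBias(const af::array &res, const af::array &bias) {
    return res + af::tile(bias,
                          res.dims(0),  // repeat over height
                          res.dims(1),  // repeat over width
                          1,            // already per-channel
                          res.dims(3)); // repeat over batch
}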
