Pure Flow Documentation #69

Open
mattia-cyberpunk opened this issue Oct 2, 2017 · 6 comments

Comments

@mattia-cyberpunk

Thank you for this great library,

I'm trying to use the pure Flow approach, but I have not been able to understand how to do it by looking at ExampleCpuSingle.
Could you please provide some sample code showing how to build and train the net in the example image?

Thank you very much

@cbovar
Owner

cbovar commented Oct 2, 2017

Hi. I'm currently on holiday, with no access to a computer. I will reply when I get back.

@mattia-cyberpunk
Author

Fine, thanks! Have a nice holiday!
Looking forward to your reply.

@cbovar
Owner

cbovar commented Oct 8, 2017

By the way, the example image is a screenshot of the output of the MnistDemo.Flow.GPU demo.

In this demo, a network is built using layers from ConvNetSharp.Flow.Layers. Those layers build a computation graph. You can see the Flow approach in use in ConvNetSharp.Flow.Layers.ConvLayer, for example:

// Shared ConvNetSharp<T> instance used to create the graph ops
var cns = ConvNetSharp<T>.Instance;
using (ConvNetSharp<T>.Instance.Scope($"ConvLayer_{this.Id}"))
{
    // One bias value per filter, initialised to BiasPref
    var content = new T[this._filterCount].Populate(this.BiasPref);
    this._bias = cns.Variable(BuilderInstance<T>.Volume.SameAs(content, new Shape(1, 1, this._filterCount, 1)), "Bias");

    // Convolution applied to the parent layer's output; the layer's op is conv + bias
    this._conv = cns.Conv(parent.Op, this._width, this._height, this._filterCount, this.Stride, this.Pad);
    this.Op = this._conv + this._bias;
}
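
If you want to skip the layer wrappers and drive the Flow ops directly, here is a rough, untested sketch of building and training a trivial graph end to end. The names (x, y, W, b) are made up for the sketch, and PlaceHolder, Session, Differentiate and GradientDescentOptimizer are written from memory, so namespaces and the optimizer constructor may differ between versions:

using System.Collections.Generic;
using ConvNetSharp.Flow;
using ConvNetSharp.Flow.Training; // assumed namespace for the optimizer
using ConvNetSharp.Volume;

var cns = ConvNetSharp<float>.Instance;

// Graph inputs, fed at run time through the dictionary below
var x = cns.PlaceHolder("x");
var y = cns.PlaceHolder("y");

// Trainable parameters (same Variable pattern as in ConvLayer above)
var w = cns.Variable(BuilderInstance<float>.Volume.SameAs(new[] { 1.0f }, new Shape(1, 1, 1, 1)), "W");
var b = cns.Variable(BuilderInstance<float>.Volume.SameAs(new[] { 2.0f }, new Shape(1, 1, 1, 1)), "b");

// Model and cost are just ops composed with overloaded operators
var fun = x * w + b;
var cost = (fun - y) * (fun - y);

// Constructor signature may differ between versions
var optimizer = new GradientDescentOptimizer<float>(learningRate: 0.01f);

using (var session = new Session<float>())
{
    // Adds the gradient ops to the graph
    session.Differentiate(cost);

    for (var i = 0; i < 1000; i++)
    {
        var dico = new Dictionary<string, Volume<float>>
        {
            { "x", BuilderInstance<float>.Volume.SameAs(new[] { -2.0f }, new Shape(1, 1, 1, 1)) },
            { "y", BuilderInstance<float>.Volume.SameAs(new[] { 1.0f }, new Shape(1, 1, 1, 1)) }
        };

        session.Run(cost, dico);      // forward pass
        session.Run(optimizer, dico); // backward pass + parameter update
    }
}

The layer classes in ConvNetSharp.Flow.Layers essentially do this wiring for you: each layer appends its ops to the graph and exposes the result as Op.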

@mattia-cyberpunk
Author

Thank you for your answer.
So I misunderstood: the net is obtained by stacking layers, which in turn build a computation graph, right?
Actually, what I'd like to do is create a graph with more than one input layer, then, after some operations, merge the branches into a single final layer and train by backpropagating the error through each branch of the graph. Can it be done? Should I use Ops directly? I can't figure out how to merge volumes coming from different inputs and operations...

@cbovar
Owner

cbovar commented Oct 8, 2017

It seems you have the same need as #68
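
In case it helps, purely as an untested illustration: because ops overload the arithmetic operators (see the conv + bias line in ConvLayer above), two input branches can in principle be merged with an element-wise op, and a cost built on top of the merge will backpropagate through both branches once the session differentiates it. The names below are made up, and the Conv argument order is assumed to match the ConvLayer snippet:

var cns = ConvNetSharp<float>.Instance;

// Two independent graph inputs
var input1 = cns.PlaceHolder("input1");
var input2 = cns.PlaceHolder("input2");

// One branch of ops per input; both branches must end up with the same output shape
var branch1 = cns.Conv(input1, 3, 3, 8, 1, 1);
var branch2 = cns.Conv(input2, 3, 3, 8, 1, 1);

// Merge point: element-wise sum, exactly like conv + bias in ConvLayer
var merged = branch1 + branch2;

// Stack whatever ops you need on top of `merged` to get the single last layer,
// build a cost on it, and session.Differentiate(cost) will add gradient ops
// for both branches. At run time, feed every input in the same dictionary:
//
//     var dico = new Dictionary<string, Volume<float>>
//     {
//         { "input1", volume1 },
//         { "input2", volume2 }
//     };

If you need a different kind of merge (e.g. concatenation along the channel axis), that probably needs a dedicated op rather than the overloaded operators.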

@mattia-cyberpunk
Author

Yes, I was considering it too...
Thanks!
