
How to get inputs of a layer? #109

Open
Dzandaa opened this issue Feb 26, 2023 · 4 comments
Labels: documentation (Improvements or additions to documentation)


Dzandaa commented Feb 26, 2023

Hi,
You can get the outputs of a layer like:
NNAutoencoder.Layers[LayerCnt].Output.SizeX
NNAutoencoder.Layers[LayerCnt].Output.SizeY
NNAutoencoder.Layers[LayerCnt].Output.Depth
but is it possible to get its inputs?

Like in NNAutoencoder.DebugStructure

Thank you.

@joaopauloschuler (Owner)

Hi @Dzandaa,
For a Layers[LayerCnt] layer, except for concatenating layers and the input layer, you can get the input via:

NNAutoencoder.Layers[LayerCnt].PrevLayer.Output;

PrevLayer gives you the previous layer; its output is the input to layer LayerCnt.
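As a minimal sketch (assuming an already-built TNNet named NNAutoencoder and a valid LayerCnt index, as in the question), the input dimensions could be read like this:

    var
      InputVolume: TNNetVolume;
    begin
      // PrevLayer.Output is the volume that feeds layer LayerCnt
      // (does not apply to the input layer or to concatenating layers).
      InputVolume := NNAutoencoder.Layers[LayerCnt].PrevLayer.Output;
      WriteLn('Input size: ',
        InputVolume.SizeX, ' x ', InputVolume.SizeY, ' x ', InputVolume.Depth);
    end;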

Does this reply solve the question?

@joaopauloschuler joaopauloschuler self-assigned this Mar 4, 2023
@joaopauloschuler joaopauloschuler added the documentation Improvements or additions to documentation label Mar 4, 2023

Dzandaa commented Mar 4, 2023 via email

@joaopauloschuler (Owner)

Denoising is one of the most interesting applications for neural networks, in my opinion. I should dedicate more time to it myself. If you publish your source code in full, I'll be happy to add a link to it. I also think that the area of autoencoding is super interesting.

I've been playing with autoencoders with 64x64 images. In the case that you are interested, this is my current architecture:

    FAutoencoder.AddLayer([
      TNNetInput.Create(64, 64, 3),
      TNNetConvolution.Create({Features=}32 * NeuronMultiplier,{FeatureSize=}3,{Padding=}1,{Stride=}2,{SuppressBias=}1), //32x32
      TNNetConvolution.Create({Features=}32 * NeuronMultiplier,{FeatureSize=}3,{Padding=}1,{Stride=}1,{SuppressBias=}1),
      TNNetConvolution.Create({Features=}32 * NeuronMultiplier,{FeatureSize=}3,{Padding=}1,{Stride=}2,{SuppressBias=}1), //16x16
      TNNetConvolution.Create({Features=}32 * NeuronMultiplier,{FeatureSize=}3,{Padding=}1,{Stride=}1,{SuppressBias=}1),
      TNNetConvolution.Create({Features=}64 * NeuronMultiplier,{FeatureSize=}3,{Padding=}1,{Stride=}2,{SuppressBias=}1), //8x8
      TNNetConvolution.Create({Features=}64 * NeuronMultiplier,{FeatureSize=}3,{Padding=}1,{Stride=}1,{SuppressBias=}1),
      TNNetConvolution.Create({Features=}128 * NeuronMultiplier,{FeatureSize=}3,{Padding=}1,{Stride=}2,{SuppressBias=}1), //4x4
      TNNetConvolution.Create({Features=}128 * NeuronMultiplier,{FeatureSize=}3,{Padding=}1,{Stride=}1,{SuppressBias=}1),

      TNNetUpsample.Create(), //8x8
      TNNetConvolution.Create({Features=}128 * NeuronMultiplier,{FeatureSize=}3,{Padding=}1,{Stride=}1,{SuppressBias=}1),
      TNNetConvolution.Create({Features=}128 * NeuronMultiplier,{FeatureSize=}3,{Padding=}1,{Stride=}1,{SuppressBias=}1),
      TNNetUpsample.Create(), //16x16
      TNNetConvolution.Create({Features=}32 * NeuronMultiplier,{FeatureSize=}3,{Padding=}1,{Stride=}1,{SuppressBias=}1),
      TNNetConvolution.Create({Features=}128 * NeuronMultiplier,{FeatureSize=}3,{Padding=}1,{Stride=}1,{SuppressBias=}1),
      TNNetUpsample.Create(), //32x32
      TNNetConvolution.Create({Features=}32 * NeuronMultiplier,{FeatureSize=}3,{Padding=}1,{Stride=}1,{SuppressBias=}1),
      TNNetConvolution.Create({Features=}128 * NeuronMultiplier,{FeatureSize=}3,{Padding=}1,{Stride=}1,{SuppressBias=}1),
      TNNetUpsample.Create(), //64x64
      TNNetConvolution.Create({Features=}32 * NeuronMultiplier,{FeatureSize=}3,{Padding=}1,{Stride=}1,{SuppressBias=}1),
      TNNetConvolution.Create({Features=}32 * NeuronMultiplier,{FeatureSize=}3,{Padding=}1,{Stride=}1,{SuppressBias=}1),
      TNNetConvolutionLinear.Create({Features=}3,{FeatureSize=}1,{Padding=}0,{Stride=}1,{SuppressBias=}0),
      TNNetReLUL.Create(-40, +40, 0) // Protection against overflow
    ]);

For the encoder/decoder, I personally prefer to avoid max-pooling layers; I usually use convolutions with stride=2 instead. I also avoid ReLUs. But this is my personal preference, and you are free to use the layers you prefer. These are my other parameters:
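To illustrate that preference, a hedged sketch: both layers below halve a 32x32 activation map to 16x16, the first via max pooling and the second via a learned stride-2 convolution (the feature count of 32 is illustrative, not prescribed):

    TNNetMaxPool.Create({Size=}2), // 32x32 -> 16x16, fixed operation, no weights
    // ...versus a learned downsampling:
    TNNetConvolution.Create({Features=}32,{FeatureSize=}3,{Padding=}1,{Stride=}2,{SuppressBias=}1), // 32x32 -> 16x16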

  FFit.LearningRateDecay := 0.0;
  FFit.L2Decay := 0.0;
  FFit.AvgWeightEpochCount := 1;
  FFit.InitialLearningRate := 0.0001;
  FFit.ClipDelta := 0.01;
  FFit.FileNameBase := FBaseName+'autoencoder';
  FFit.EnableBipolar99HitComparison();

I'm curious to know what your wife finds comparing CAI against Keras. Feel free to share good and bad news.

Regarding "I train on MNIST or CIFAR data, NeuralFit.TrainingAccuracy is always zero": maybe the problem is in the preprocessing, but I'm not sure.

Keep posting. I enjoy reading.


Dzandaa commented Mar 7, 2023

Hello Joao,
If you don't mind, I prefer to exchange ideas about CAI on the Pascal/Lazarus forum or through messaging. I don't really like what Microsoft is doing with GitHub.
