
Issue with changing activation functions #31

Open
rnagurla opened this issue May 9, 2019 · 13 comments

@rnagurla

rnagurla commented May 9, 2019

I was wondering how to change the default sigmoid activation function to something else. I've tried changing it to tanh, and it's not working. I've also tried using the linear activation function on the given examples, and that fails as well.

@codeplea
Owner

codeplea commented May 9, 2019

You can set activation_hidden and activation_output. However, a linear activation function cannot solve a non-linear problem such as XOR, which is what the examples use.
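
For reference, a minimal sketch of what that looks like (assuming the genann_actfun signature `double (*)(const struct genann *, double)` and the built-in `genann_act_linear` / `genann_act_sigmoid` helpers; check genann.h for the exact declarations):

```c
#include "genann.h"

int main(void) {
    /* 2 inputs, 1 hidden layer of 2 neurons, 1 output */
    genann *ann = genann_init(2, 1, 2, 1);

    /* Both fields hold a genann_actfun; swap in whichever you want. */
    ann->activation_hidden = genann_act_sigmoid; /* hidden layer stays sigmoid */
    ann->activation_output = genann_act_linear;  /* linear output layer */

    /* ... genann_train / genann_run as usual ... */

    genann_free(ann);
    return 0;
}
```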

@rnagurla
Author

rnagurla commented May 9, 2019

Tanh and ReLU are non-linear activation functions, right? So I should be able to use those two functions for the examples. However, when I run with those functions, some of the examples don't pass. I'm assuming it's because their output ranges are different from sigmoid's. Would I have to change anything in the source code to allow it to use tanh or ReLU?
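
One way to handle the range mismatch, for anyone experimenting with this, is to rescale tanh's output into (0, 1) so it lines up with the 0/1 targets the examples use. A minimal sketch, assuming genann's genann_actfun signature (the name scaled_tanh is made up here); note that, as discussed below, genann_train still computes gradients as if the activation were the sigmoid, so training can still misbehave:

```c
#include <math.h>
#include "genann.h"

/* tanh rescaled from [-1, 1] into (0, 1) so 0/1 targets still make sense. */
static double scaled_tanh(const struct genann *ann, double a) {
    (void)ann;                      /* the network pointer is unused here */
    return 0.5 * (tanh(a) + 1.0);
}

/* usage: ann->activation_hidden = scaled_tanh; */
```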

@msrdinesh

msrdinesh commented Dec 9, 2019

Actually, in the code, the backpropagation algorithm is written only for the sigmoid activation function. The code would have to be changed to support a generic activation function. If no one is working on this, I can take it on.
A similar discussion can be found here.

@codeplea
Owner

codeplea commented Dec 9, 2019

Yes, back-propagation is only implemented for sigmoid. Other training methods can still work with other activation functions. If back-prop is needed, it'll need to be implemented.
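
To make the sigmoid dependence concrete, the part that would have to change is the delta computation. A hedged sketch, not genann's actual internals (the helper names below are hypothetical); the convenient form here is a derivative written in terms of the neuron's output, since that is what the network stores:

```c
/* Hypothetical helper type: derivative of the activation expressed as a
 * function of the neuron's output o (possible for sigmoid and tanh). */
typedef double (*act_deriv_from_output)(double o);

static double sigmoid_deriv(double o) { return o * (1.0 - o); } /* f'(x) = f(x)(1 - f(x)) */
static double tanh_deriv(double o)    { return 1.0 - o * o;   } /* f'(x) = 1 - f(x)^2     */

/* Output-layer delta for one neuron. The sigmoid form (t - o) * o * (1 - o)
 * is what is currently hard-coded; a generic version looks like this. */
static double output_delta(double target, double output, act_deriv_from_output d) {
    return (target - output) * d(output);
}
```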

@msrdinesh

Hey @codeplea, can I work on this issue? I would like to add back-prop for the tanh and ReLU activation functions. If no one else is working on it, please assign the issue to me.

@codeplea
Owner

@msrdinesh Sure. Give it a go. Just please keep it short and simple. I think you can mirror the way that output and hidden activation functions are used.

@msrdinesh

msrdinesh commented Dec 10, 2019

Ok, I will do it. Thanks.

@mu578

mu578 commented Sep 28, 2020

@msrdinesh, @codeplea, hello, is there any follow-up on this matter? Have a good day.

@ScratchyCode

I'm also waiting for an update on changing the activation function :)

@lucasart

@moe123 @ScratchyCode It's trivial to adapt backprop to any function you want. Read this, preferably with a pen and paper, redoing the calculation on your own until it becomes crystal clear.
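
For reference, the calculation in question is the standard back-propagation chain rule; nothing below is specific to this library. With activation f, pre-activation z_j, output o_j = f(z_j), target t_j, and squared error E = ½ Σ_j (o_j − t_j)²:

```latex
\delta_j^{\text{out}} = (o_j - t_j)\, f'(z_j), \qquad
\delta_j^{\text{hidden}} = f'(z_j) \sum_k w_{jk}\, \delta_k, \qquad
\frac{\partial E}{\partial w_{ij}} = \delta_j\, o_i
```

The sigmoid shortcut f'(z) = o(1 − o) is what is currently hard-coded; for tanh the analogous form is f'(z) = 1 − o².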

@mu578

mu578 commented Apr 11, 2021

@lucasart computing the derivative is not the problem; the problem is redesigning the code so that training reflects the currently selected activation function, which means some state needs to be known and passed along. We can all patch it in a dirty way, and we already all do, but we would prefer a cleanly redesigned approach that supports this option and would also let us run several instances configured differently without tweaking and stirring the code. Once you start maintaining third-party forks and patches, it's already too much. I think we all have a single-precision float version running on an approximation of the exp function somewhere.

@lucasart

lucasart commented Apr 12, 2021

I wrote my own nn library, if anyone's interested.

Same functionality as genann. It also uses a flat memory layout for weights + neurons + deltas (great for cache efficiency and for use with more advanced gradient optimisation methods, since user code can directly address the weights vector).

But also better, because:

  • more flexible: hidden layers can each have a different number of neurons, and the error function can be absolute or quadratic (absolute makes more sense than quadratic in a lot of real applications).
  • cleaner code base: it reduces indexing hell by using a layer structure (which points to the right location in the flat array).
  • trivial to add your own activation functions, without having to touch the backprop code (see the sketch after this list).
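
For anyone curious, a rough illustrative sketch of that kind of layout (this is not the library's actual code; every name below is made up):

```c
/* One contiguous allocation holds every weight, neuron output, and delta;
 * each layer only records where its slices start inside that block. */
typedef struct {
    int     n_neurons;
    double *weights;   /* points into the flat block */
    double *outputs;   /* points into the flat block */
    double *deltas;    /* points into the flat block */
} nn_layer;

typedef struct {
    int       n_layers;
    nn_layer *layers;
    double   *block;   /* single allocation for everything */
} nn_net;
```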

@mu578

mu578 commented Apr 16, 2021

@lucasart, the implementation is interesting. Meanwhile, I would go deeper: add a layer of indirection on all internal arithmetic operations and move nn_float_t to nn_numeric_t or so; that way you would give the end user the choice of interfacing with a half-float extension or a fixed-point representation. Note that most people, even academics, will not be comfortable with your licensing choice.
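
A hedged sketch of the kind of indirection being suggested (nn_numeric_t and NN_MUL are hypothetical names, not part of any existing library):

```c
#include <stdint.h>

/* Compile-time choice of numeric representation; the rest of the library
 * would go through nn_numeric_t and these wrappers instead of raw doubles. */
#if defined(NN_USE_FIXED_POINT)
typedef int32_t nn_numeric_t;                                  /* e.g. Q16.16 */
#define NN_MUL(a, b) ((nn_numeric_t)(((int64_t)(a) * (int64_t)(b)) >> 16))
#elif defined(NN_USE_FLOAT)
typedef float nn_numeric_t;                                    /* single precision */
#define NN_MUL(a, b) ((a) * (b))
#else
typedef double nn_numeric_t;                                   /* default */
#define NN_MUL(a, b) ((a) * (b))
#endif
```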
