Q: How to get a specific activation function's equation? #197

Open
GorkaAbad opened this issue May 15, 2024 · 3 comments

@GorkaAbad

Hi,

For a trained model, is there a way to get the equation that defines the activation function (the spline) on a specific connection?

Since we can already get the plots, I would like to get the equation as well, e.g., f(x) = ax + b for a linear activation, including the values of a and b.

Best,

@KindXiaoming (Owner)

This might be useful: https://kindxiaoming.github.io/pykan/API_demo/API_4_extract_activations.html

Also, function names are stored in model.symbolic_fun[l].funs_name and the affine coefficients in model.symbolic_fun[l].affine, where l is the layer index.
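
For concreteness, here is a minimal sketch of reading those attributes for one connection. It assumes the activations have already been fixed to symbolic forms (e.g. via model.auto_symbolic()), that funs_name is indexed [output][input], and that affine stores (a, b, c, d) in the sense c·f(a·x + b) + d; the indexing and layout are assumptions that may differ across pykan versions:

```python
# Sketch: read the symbolic function assigned to one connection of a KAN.
# Assumes activations were fixed to symbolic forms beforehand.
from kan import KAN

model = KAN(width=[2, 5, 1], grid=3, k=3)
# ... train the model, then fix activations, e.g. model.auto_symbolic() ...

l, i, j = 0, 0, 1  # layer index l, input neuron i, output neuron j

name = model.symbolic_fun[l].funs_name[j][i]     # assumed [output][input] order
a, b, c, d = model.symbolic_fun[l].affine[j, i]  # assumed layout: c*f(a*x+b)+d
print(f"phi(x) = {c:.4f} * {name}({a:.4f}*x + {b:.4f}) + {d:.4f}")
```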

@tk3016 commented Jun 6, 2024

Hi @KindXiaoming,
Many thanks, and congratulations on this amazing work. Following up on this, I have been trying to understand the architecture of the network, and I want to report what I believe is an inconsistency between the paper and the implementation.
Please look at the screenshot from the paper below:
[screenshot from the paper: the layer equation sums the activation outputs from the previous layer, with no bias term]
However, this is inconsistent with the implementation, where a bias term is added to the sum of all activation outputs from the previous layer to form the input to the next layer.
Here is a screenshot from the code:
[screenshot from the code: the layer forward pass adds a bias term to the summed activation outputs]

I hope this helps.

Best wishes,
Tanuj

@KindXiaoming (Owner)

Hi, thank you for reporting this. Yes, in the code the bias term is included by default, but I did not write it out in the paper since it can be absorbed into any of the activation functions. Bias terms are needed for sparsity regularization (without them, regularization seems to behave strangely, though maybe there is a better way).
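
To spell the absorption out (a sketch in the paper's notation, writing $b_{l,j}$ for the bias the code adds at node $j$ of layer $l+1$):

$$x_{l+1,j} = b_{l,j} + \sum_{i=1}^{n_l} \phi_{l,j,i}(x_{l,i}) = \sum_{i=1}^{n_l} \Big( \phi_{l,j,i}(x_{l,i}) + \tfrac{b_{l,j}}{n_l} \Big),$$

so redefining each activation as $\tilde{\phi}_{l,j,i}(x) = \phi_{l,j,i}(x) + b_{l,j}/n_l$ recovers the paper's bias-free form.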
