Global epistasis model isn't monotonic #54

Open
an1lam opened this issue Jan 17, 2022 · 1 comment

an1lam commented Jan 17, 2022

The current implementation of the global epistasis model doesn't guarantee monotonicity between the output of the initial linear layer and the final output. In my experience, the easiest way to guarantee monotonicity is to constrain the weights in the nonlinear layers to be non-negative, e.g. by passing them through a softplus or a similar transform. (I've tried this with torch but assume it will work similarly well in keras.)
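
For concreteness, here's a rough PyTorch sketch of what I mean (the class name, parameter names, and the 1 -> k -> 1 shape are made up for illustration, not taken from this repo). The raw weights stay unconstrained; softplus makes the effective weights non-negative, and a ReLU of a non-negative combination followed by another non-negative combination is non-decreasing in the latent score:

import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicNonlinearity(nn.Module):
    """Maps the scalar latent phenotype to the observed output through a
    1 -> k -> 1 network whose effective weights are kept non-negative,
    so the composed function is monotonically non-decreasing."""

    def __init__(self, k=10):
        super().__init__()
        # Raw, unconstrained parameters; softplus of these is what gets used.
        self.w0 = nn.Parameter(torch.randn(1, k))
        self.b0 = nn.Parameter(torch.zeros(k))
        self.w1 = nn.Parameter(torch.randn(k, 1))
        self.b1 = nn.Parameter(torch.zeros(1))

    def forward(self, z):
        # z: (batch, 1) latent additive score from the linear layer.
        h = F.relu(z @ F.softplus(self.w0) + self.b0)
        return h @ F.softplus(self.w1) + self.b1

Softplus is just one option; squaring or exponentiating the raw weights would constrain them to be non-negative as well.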


an1lam commented Jan 17, 2022

As an example of a non-monotonic function the current setup can learn, consider a simplified version of your nonlinear function:

import numpy as np

# Output of the linear layer for four inputs
l = np.array([[-1.], [0.], [1.], [2.]])
# Project out to k=2 dimensions + ReLU
w_0 = np.array([[2., -2.]])
h = np.maximum(l @ w_0, 0)  # [[0, 2], [0, 0], [2, 0], [4, 0]]
# Reduce back down to 1 dimension
w_1 = np.array([[.75], [2.]])
out = (h @ w_1).squeeze()   # [4., 0., 1.5, 3.]

The output goes 4 → 0 → 1.5 → 3 as the latent score increases from -1 to 2, so it's clearly not monotonic, and all we did was use a single hidden layer + ReLU for the non-linearity.
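
For comparison, forcing both weight matrices to be non-negative (which is what the softplus transform suggested above would do) makes the same setup non-decreasing. A quick check with made-up non-negative weights:

# Same inputs as above, but with both weight matrices non-negative
w_0_pos = np.array([[2., 2.]])
w_1_pos = np.array([[.75], [2.]])
out_pos = (np.maximum(l @ w_0_pos, 0) @ w_1_pos).squeeze()
# out_pos == [0., 0., 5.5, 11.] -- non-decreasing in the latent score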
