
When is the pytorch version available? #2

Open
benjiachong opened this issue Dec 14, 2020 · 3 comments

Comments

@benjiachong

Hi, this is very interesting and valuable work. I have two questions:
1. When will the PyTorch version be available?
2. I notice the paper addresses regression problems. What about classification? Is this approach suitable for classification problems?
Thanks.

@aamini
Owner

aamini commented Dec 14, 2020

Thank you for your interest in the work!

  1. It is in active development, and I imagine it should be done soon. @Dariusrussellkish has already submitted a PR drafting a PyTorch version as part of this repository, which we will be merging into master ASAP. Please see [Pytorch implementation #1](https://github.com/aamini/evidential-deep-learning/pull/1) for more details. TL;DR: a PyTorch backend to this same repo will be available very soon!

  2. Yes! Evidential learning is possible for classification as well [[ref](https://arxiv.org/pdf/1806.01768.pdf)]. We provide the layers and loss functions to handle classification in this repo (please see the code snippets below for pointers to the appropriate layers and losses). Since our work focused on regression, the examples in the repo are also in that domain. We have some code that uses the classification portion of the codebase to solve tasks like MNIST and CIFAR10, for example, but have not pushed it. If you would like to contribute classification examples using the existing layers and losses, we would be very happy to review and merge them!

Classification layers:

```python
import tensorflow as tf
from tensorflow.keras.layers import Layer, Dense

class DenseDirichlet(Layer):
    def __init__(self, units):
        super(DenseDirichlet, self).__init__()
        self.units = int(units)
        self.dense = Dense(int(units))

    def call(self, x):
        output = self.dense(x)
        evidence = tf.exp(output)  # non-negative evidence per class
        alpha = evidence + 1       # Dirichlet concentration parameters
        prob = alpha / tf.reduce_sum(alpha, 1, keepdims=True)  # expected class probabilities
        return tf.concat([alpha, prob], axis=-1)

    def compute_output_shape(self, input_shape):
        return (input_shape[0], 2 * self.units)
```

Classification loss: https://github.com/aamini/evidential-deep-learning/blob/main/evidential_deep_learning/losses/discrete.py#L5-L32
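As a minimal sketch of how the layer above might be attached to a model (the hidden size and input dimensions here are illustrative, not from the repo; the layer definition is repeated so the snippet runs standalone):

```python
import tensorflow as tf
from tensorflow.keras.layers import Layer, Dense

# DenseDirichlet layer, repeated from the snippet above.
class DenseDirichlet(Layer):
    def __init__(self, units):
        super(DenseDirichlet, self).__init__()
        self.units = int(units)
        self.dense = Dense(int(units))

    def call(self, x):
        output = self.dense(x)
        evidence = tf.exp(output)
        alpha = evidence + 1
        prob = alpha / tf.reduce_sum(alpha, 1, keepdims=True)
        return tf.concat([alpha, prob], axis=-1)

# Attach the layer as the output head of a small 10-class classifier.
model = tf.keras.Sequential([
    Dense(64, activation="relu"),
    DenseDirichlet(10),
])

x = tf.random.normal((4, 32))        # batch of 4 feature vectors
out = model(x)                       # shape (4, 20): [alpha | prob] concatenated
alpha, prob = tf.split(out, 2, axis=-1)
```

Each `alpha` entry is strictly greater than 1 (evidence is non-negative), and each row of `prob` sums to 1.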

@birolkuyumcu

birolkuyumcu commented Jul 6, 2021

Dirichlet_SOS does not have a standard loss function signature. How should it be used for classification?

@Ali-799

Ali-799 commented Aug 12, 2021


Hi @aamini,
Thanks for sharing your valuable work with the community. Referring to your comment above, I am working on a classification task and using the classification layers and loss you mentioned for evidential learning. However, I ran into errors while training with the Dirichlet_SOS loss. After troubleshooting, here is what solved my issue:

  1. Remove the parameter 't' from def Dirichlet_SOS(y, alpha, t). It is not used anywhere in the loss function, and calling the function without this parameter in model.compile raises an error.
  2. Add the following line to the loss function before 'S = tf.reduce_sum(alpha, axis=1, keepdims=True)':
    alpha, prob = tf.split(alpha, 2, axis=-1)
    Without it, the dimensions of 'y' and 'm' do not match.

Could you verify my observations? Thanks
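With both fixes applied, the loss might look roughly like the sketch below. This is a reconstruction following the standard sum-of-squares evidential classification loss of Sensoy et al. (2018), not the repository's exact Dirichlet_SOS code; variable names other than `alpha`, `S`, and `m` are illustrative.

```python
import tensorflow as tf

def Dirichlet_SOS(y, output):
    # Fix 2: DenseDirichlet returns [alpha | prob] concatenated; keep only alpha.
    # Fix 1: no unused 't' parameter, so the signature matches Keras's loss(y_true, y_pred).
    alpha, prob = tf.split(output, 2, axis=-1)
    S = tf.reduce_sum(alpha, axis=1, keepdims=True)   # Dirichlet strength
    m = alpha / S                                     # expected class probabilities
    # Squared-error term plus the variance of the Dirichlet mean.
    A = tf.reduce_sum((y - m) ** 2, axis=1, keepdims=True)
    B = tf.reduce_sum(alpha * (S - alpha) / (S * S * (S + 1)), axis=1, keepdims=True)
    return tf.reduce_mean(A + B)
```

With this signature, the loss can be passed directly to `model.compile(optimizer="adam", loss=Dirichlet_SOS)` (assuming the model's output layer is DenseDirichlet, so `y_pred` carries the concatenated `[alpha | prob]`).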
