Hi,
I am following this DeepHit tutorial for a single event - https://github.com/havakv/pycox/blob/master/examples/deephit.ipynb
In the model architecture, the "MLPVanilla" class inserts two hidden linear layers. It seems there is no "Softmax" layer at the end to obtain the survival probability distribution. Is this because the softmax is already included in the loss function, or is it not needed at all?
```python
net = tt.practical.MLPVanilla(in_features, num_nodes, out_features, batch_norm, dropout)
model = DeepHitSingle(net, tt.optim.Adam, alpha=0.2, sigma=0.1, duration_index=labtrans.cuts)
```
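To clarify what I mean by "included in the loss": here is a minimal sketch of a discrete-time negative log-likelihood where the network emits raw logits and the loss itself applies the softmax to turn them into a PMF over the time grid. This is my own illustration of the idea, not pycox's actual implementation; the function name and signature are made up.

```python
import torch
import torch.nn.functional as F

def nll_pmf_sketch(logits, idx_durations, events):
    """Hypothetical discrete-time negative log-likelihood (illustration only).

    logits:        (batch, num_intervals) raw network outputs, no softmax applied
    idx_durations: (batch,) long tensor, index of the observed/censored interval
    events:        (batch,) float tensor, 1.0 if the event occurred, 0.0 if censored
    """
    log_pmf = F.log_softmax(logits, dim=1)                  # the softmax lives here
    surv = (1.0 - log_pmf.exp().cumsum(1)).clamp(min=1e-7)  # survival past each interval
    idx = idx_durations.view(-1, 1)
    ll_event = log_pmf.gather(1, idx).squeeze(1)            # log P(T = t_i) for events
    ll_cens = surv.log().gather(1, idx).squeeze(1)          # log P(T > t_i) for censored
    return -(events * ll_event + (1.0 - events) * ll_cens).mean()
```

If something like this is what the DeepHit loss does internally, that would explain why the network itself ends in a plain linear layer.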
Also, it seems you are not implementing the "residual connections" mentioned in the DeepHit paper (a rough sketch of what I mean is below). Could you please explain the reason for this?
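For reference, by "residual connections" I mean a hidden block like the following, where the block's input is added back to its output, in contrast to MLPVanilla's plain stack of Linear layers. This is a hypothetical illustration, not pycox code:

```python
import torch
from torch import nn

class ResidualBlock(nn.Module):
    """Hypothetical hidden block with a skip connection (illustration only)."""

    def __init__(self, features, dropout=0.1):
        super().__init__()
        self.linear = nn.Linear(features, features)
        self.bn = nn.BatchNorm1d(features)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        # Skip connection: add the block input back to the transformed output.
        return x + self.dropout(torch.relu(self.bn(self.linear(x))))
```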
Thanks
Ani