
Multiple Caps Layers #17

Open
yanivgilad opened this issue Jan 11, 2022 · 2 comments
Labels
question Further information is requested

Comments

@yanivgilad

Hi,

We are very impressed by your article and the improvements you implemented.
In the article you wrote:
"However, we adopt only two layers of capsules due to the relative simplicity of the dataset investigated"
We are trying to use your caps implementation on our problem. Our input is 40×40×40 and a more complicated image than MNIST, so we want to use more caps layers.
Should we simply duplicate the PrimaryCaps layers before the FCCaps layer?

Thanks,

Yaniv

@EscVM EscVM added the question Further information is requested label Jan 11, 2022
@EscVM
Owner

EscVM commented Jan 11, 2022

Hi @yanivgilad,

Thank you. We appreciate it 😊

No. To create a second layer of capsules, you have to add another "FCCaps" layer, setting the number of capsules and their dimension.
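As a shape-level illustration of what stacking two fully connected capsule layers means, here is a minimal NumPy sketch. It is not the repo's FCCaps implementation (which learns its weights and uses self-attention routing); the `fc_caps` stand-in below uses random weights and uniform vote averaging, purely to show how the capsule count and dimension of one layer feed the next:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-7):
    # Capsule non-linearity: keeps the vector's orientation,
    # maps its norm into [0, 1)
    n = np.linalg.norm(s, axis=axis, keepdims=True)
    return (n**2 / (1 + n**2)) * s / (n + eps)

def fc_caps(u, n_out, d_out, rng):
    # u: (batch, n_in, d_in) capsule poses from the previous layer.
    # Stand-in for a trained FCCaps layer: random weights, uniform coupling
    # instead of learned routing.
    b, n_in, d_in = u.shape
    W = rng.standard_normal((n_out, n_in, d_in, d_out)) * 0.1
    # Each input capsule votes for each output capsule; votes are averaged.
    u_hat = np.einsum('bid,jide->bje', u, W)  # (batch, n_out, d_out)
    return squash(u_hat / n_in)

rng = np.random.default_rng(0)
primary = rng.standard_normal((1, 16, 8))               # 16 primary caps, dim 8
hidden = fc_caps(primary, n_out=8, d_out=16, rng=rng)   # extra capsule layer
digit = fc_caps(hidden, n_out=2, d_out=16, rng=rng)     # final 2-class layer
print(hidden.shape, digit.shape)                        # (1, 8, 16) (1, 2, 16)
```

So in the actual model, a second capsule layer would just be another `FCCaps(n_caps, dim_caps)` call placed between PrimaryCaps and the final FCCaps, with its output capsules becoming the next layer's input capsules.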

@yanivgilad
Author

Hi @EscVM,
Thanks for the response; we are still not sure how to proceed.
Our input is a series of signals, which could look like this:
[image]

By the way, do you think it's better if we apply a wavelet transform so it is converted to this?
[image]

Our network looks like this:

x = tf.keras.layers.Conv2D(32, 5, activation='relu', padding='valid', kernel_initializer='he_normal')(inputs)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Conv2D(64, 3, activation='relu', padding='valid', kernel_initializer='he_normal')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Conv2D(64, 3, activation='relu', padding='valid', kernel_initializer='he_normal')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Conv2D(128, 3, 2, activation='relu', padding='valid', kernel_initializer='he_normal')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = PrimaryCaps(128, 15, 16, 8)(x)
digit_caps = FCCaps(2, 16)(x)

We have 40 signals, so our input is 40×40×40.
How should we add another FCCaps layer?
