
Batch size built into the model [tf2 branch] #111

Open
bklooste opened this issue Jun 11, 2020 · 4 comments


bklooste commented Jun 11, 2020

The batch size is built into the model. This creates quite a few issues and can fail when dealing with different training set sizes or filtered data.
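The failure mode can be sketched in plain NumPy (function and weight names here are illustrative, not from the repo): a layer that bakes a fixed batch size into its weight or tiling shapes breaks as soon as the input's leading dimension differs, while deriving the batch size from the input at call time (in TF2/Keras, typically `tf.shape(inputs)[0]` inside `call()`) avoids the problem.

```python
import numpy as np

def route_fixed(x, batch_size=32):
    # Broken pattern: the weights carry a hard-coded batch dimension,
    # so any input whose leading dimension differs fails to broadcast.
    w = np.ones((batch_size, 10, 16))
    return x[:, None, :] * w  # raises ValueError if x.shape[0] != batch_size

def route_dynamic(x):
    # Fix: the weights carry no batch dimension; the batch size is
    # taken from the input itself and handled by broadcasting.
    w = np.ones((10, 16))
    return x[:, None, :] * w[None]  # (B, 1, 16) * (1, 10, 16) -> (B, 10, 16)
```

With this change the same layer works for any batch size, e.g. `route_dynamic(np.ones((5, 16)))` returns an array of shape `(5, 10, 16)`, whereas `route_fixed` only accepts inputs with exactly 32 rows.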

@CoffeeStraw

Have you managed to solve that?

@essamgoda

I have the same error.


bklooste commented Oct 4, 2020

It's a fair bit of work due to the way it was done; you'd almost need to redo it.


CoffeeStraw commented Oct 4, 2020

I think I've managed to make this work. Unfortunately, I've organized my codebase differently from Xifeng's implementation (even though my work is based on it), so I won't be able to prepare a PR for this project for a while. For those interested, here's my work:

https://github.com/CoffeeStraw/CapsNet-Knowledge-Extractor/blob/master/capsnet_trainer/_share/capslayers.py
