
run time and loss #76

Open
noranali opened this issue Jun 29, 2022 · 2 comments
@noranali

Thank you for your code. Why is the loss different every time I restart the runtime and run the training? Can you help me, please?

@jiangsutx
Owner

The training process includes random components, e.g. random data sequence order, random initialization, etc.
If you want a deterministic process, you need to fix the random seed for the Python random module, the NumPy random module, and TensorFlow.
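A minimal sketch of seeding all three sources of randomness mentioned above (the helper name `seed_everything` and the default seed value are my own; note that full determinism on GPU may additionally require disabling nondeterministic ops):

```python
import random

import numpy as np


def seed_everything(seed: int = 42) -> None:
    """Fix the random seeds so repeated runs start from the same state."""
    random.seed(seed)      # Python's built-in random module (e.g. shuffling)
    np.random.seed(seed)   # NumPy RNG (data order, numpy-based init)
    try:
        import tensorflow as tf
        tf.random.set_seed(seed)  # TF 2.x; TF 1.x uses tf.set_random_seed(seed)
    except ImportError:
        pass  # TensorFlow not installed; the first two seeds still apply


seed_everything(0)
```

Call this once at the top of the training script, before building the graph or touching the data pipeline, so every later random draw starts from the same state.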

@noranali
Author

noranali commented Jul 3, 2022

Thank you for your reply.
I have read your paper and found that the model converges at 4000 epochs, but Colab disconnects after about 180 epochs. I want to use the final saved weights as the initialization when I restart training. Can you help me, please?
