
Ideas to improve the training accuracy #93

Open
Leslie-Fang opened this issue Feb 12, 2020 · 4 comments

Comments

@Leslie-Fang

I am trying to fine-tune the model on the UCF-101 dataset.
With the SGD optimizer and learning-rate decay, I can get almost 91.79% accuracy after 6000 steps with batch size 32.

global_step = tf.Variable(0, trainable=False)
# Exponential decay: lr = 3 * 0.96 ** (global_step / 100)
learning_rate = tf.compat.v1.train.exponential_decay(
    learning_rate=3, global_step=global_step, decay_steps=100, decay_rate=0.96)
optimizer = tf.compat.v1.train.GradientDescentOptimizer(learning_rate).minimize(
    loss, global_step=global_step)
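(For reference, a quick sanity check of what that schedule actually does, as a pure-Python sketch rather than the repo's code: with a base rate of 3, decay_steps=100, decay_rate=0.96, and no staircase, TF's formula is lr = base_lr * decay_rate ** (step / decay_steps), so by step 6000 the learning rate has decayed to roughly 0.26.)

```python
# Sketch of TF1's non-staircase exponential-decay formula:
#   decayed_lr = base_lr * decay_rate ** (global_step / decay_steps)
def decayed_lr(base_lr, decay_rate, decay_steps, step):
    return base_lr * decay_rate ** (step / decay_steps)

print(decayed_lr(3.0, 0.96, 100, 0))     # base rate at step 0
print(decayed_lr(3.0, 0.96, 100, 6000))  # after 6000 steps: well under 0.3
```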

From the paper, I see the accuracy could be around 94%; any ideas to improve my accuracy?

@joaoluiscarreira

joaoluiscarreira commented Feb 12, 2020 via email

@Leslie-Fang
Author

@joaoluiscarreira Thanks for looking into my issue.
Training: almost 9,500 videos in one epoch.
Testing: almost 3,700 videos.
BTW, I am using depth 64.
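(As a back-of-the-envelope check on those numbers, assuming the ~9,500-video training set quoted above, 6000 steps at batch size 32 works out to about 20 passes over the training data:)

```python
# Rough epoch count implied by the figures in this thread (assumed, not from the repo).
steps, batch_size, train_videos = 6000, 32, 9500
epochs = steps * batch_size / train_videos
print(f"{epochs:.1f} epochs")  # about 20.2
```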

@joaoluiscarreira

joaoluiscarreira commented Feb 12, 2020 via email

@jzq0102

jzq0102 commented Sep 29, 2021

[Quoting the original post] I am trying to fine-tune the model on the UCF-101 dataset. With the SGD optimizer and learning-rate decay, I can get almost 91.79% accuracy after 6000 steps with batch size 32.

global_step = tf.Variable(0, trainable=False)
learning_rate = tf.compat.v1.train.exponential_decay(3, global_step, 100, 0.96)
optimizer = tf.compat.v1.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)

From the paper, I see the accuracy could be around 94%; any ideas to improve my accuracy?

Could you share the code?
