
Implement an RMSProp Optimizer with Exponential Decay #724

Open
wants to merge 2 commits into master
Conversation

liudong16

Proposal

In the existing TFLearn code, the RMSProp optimizer is typically used with a constant learning rate, for example:
rmsp = tflearn.RMSProp(learning_rate=0.01, decay=0.999)
net = tflearn.regression(net, optimizer=rmsp, loss='categorical_crossentropy')
In this proposal, I apply an exponential decay schedule to the learning rate before it is handed to the RMSProp optimizer, i.e., I append code along the following lines (the decay rate and decay step follow the same convention that TFLearn's SGD optimizer already uses):
self.learning_rate = tf.train.exponential_decay(self.learning_rate, step_tensor, self.decay_step, self.lr_decay, staircase=self.staircase)
tf.add_to_collection(tf.GraphKeys.LR_VARIABLES, self.learning_rate)
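For context, here is a minimal sketch of how the extended optimizer could look, mirroring the lr_decay / decay_step pattern of TFLearn's SGD optimizer; the exact parameter names and defaults are illustrative and may differ from the actual diff in this PR:

```python
import tensorflow as tf
from tflearn.optimizers import Optimizer


class RMSProp(Optimizer):
    """RMSProp with an optional exponential learning-rate decay schedule.

    Sketch only: lr_decay / decay_step follow TFLearn's SGD convention and
    are assumptions here, not necessarily the names used in this PR.
    """

    def __init__(self, learning_rate=0.001, decay=0.9, momentum=0.0,
                 epsilon=1e-10, lr_decay=0., decay_step=100, staircase=False,
                 use_locking=False, name="RMSProp"):
        super(RMSProp, self).__init__(learning_rate, use_locking, name)
        self.decay = decay
        self.momentum = momentum
        self.epsilon = epsilon
        self.lr_decay = lr_decay
        self.have_decay = lr_decay > 0.
        self.decay_step = decay_step
        self.staircase = staircase

    def build(self, step_tensor=None):
        self.built = True
        if self.have_decay:
            if step_tensor is None:
                raise Exception("Learning rate decay but no step_tensor provided.")
            # Replace the constant learning rate with an exponentially
            # decayed schedule driven by the global training step.
            self.learning_rate = tf.train.exponential_decay(
                self.learning_rate, step_tensor,
                self.decay_step, self.lr_decay,
                staircase=self.staircase)
            tf.add_to_collection(tf.GraphKeys.LR_VARIABLES, self.learning_rate)
        self.tensor = tf.train.RMSPropOptimizer(
            learning_rate=self.learning_rate, decay=self.decay,
            momentum=self.momentum, epsilon=self.epsilon,
            use_locking=self.use_locking, name=self.name)
```

With lr_decay left at its default of 0, the schedule is skipped and the behavior matches the current RMSProp optimizer.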

Effects

Based on the experimental results, I find that the modification improves classification accuracy to some extent. For instance, I used the new optimizer for image classification on the CIFAR-10 dataset with ResNet-20. Training was repeated for 10 runs of 50 epochs each, with identical parameter settings. On average, the previous RMSProp optimizer reached 80.05% accuracy, while the new one reached 82.46%.
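For reference, here is a usage sketch on CIFAR-10, assuming the extended RMSProp sketched above (with its hypothetical lr_decay / decay_step arguments) is available as tflearn.RMSProp; the ResNet-20 body is omitted for brevity:

```python
import tflearn
from tflearn.datasets import cifar10
from tflearn.data_utils import to_categorical

# Load CIFAR-10 and one-hot encode the labels.
(X, Y), (X_test, Y_test) = cifar10.load_data()
Y, Y_test = to_categorical(Y, 10), to_categorical(Y_test, 10)

net = tflearn.input_data(shape=[None, 32, 32, 3])
# ResNet-20 body omitted for brevity; any TFLearn network fits here.
net = tflearn.conv_2d(net, 16, 3, activation='relu')
net = tflearn.fully_connected(net, 10, activation='softmax')

# lr_decay / decay_step are the hypothetical extra arguments sketched above.
rmsp = tflearn.RMSProp(learning_rate=0.01, decay=0.999,
                       lr_decay=0.96, decay_step=1000)
net = tflearn.regression(net, optimizer=rmsp,
                         loss='categorical_crossentropy')

model = tflearn.DNN(net)
model.fit(X, Y, n_epoch=50, validation_set=(X_test, Y_test))
```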

@1292765944

I'm interested in your proposal. Could you share some insight into this design, or some example code? Thanks!
