Implement an RMSProp Optimizer with Exponential Decay #724
Proposal
In the existing TensorFlow/TFLearn code, the RMSProp optimizer is typically used with a constant learning rate, for example:
rmsp = tflearn.RMSProp(learning_rate=0.01, decay=0.999)
net = tflearn.regression(net, optimizer=rmsp, loss='categorical_crossentropy')
In my proposal, I apply exponential decay to the learning rate before it is handed to the RMSProp optimizer, i.e., I append the following code inside the optimizer's build step:
# Note: tf.train.exponential_decay takes (learning_rate, global_step,
# decay_steps, decay_rate, staircase, name). step_tensor is the global-step
# tensor passed to build(); the decay_step/lr_decay/staircase attributes are
# new RMSProp options, analogous to those of tflearn's SGD optimizer.
self.learning_rate = tf.train.exponential_decay(
    self.learning_rate, step_tensor, self.decay_step, self.lr_decay,
    staircase=self.staircase)
tf.add_to_collection(tf.GraphKeys.LR_VARIABLES, self.learning_rate)
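With this change, the effective learning rate follows the usual exponential schedule, decayed_lr = learning_rate * lr_decay ** (global_step / decay_step), rather than staying fixed. A minimal usage sketch, assuming the new options are exposed as lr_decay/decay_step constructor arguments (these names are my assumption, mirroring tflearn's existing SGD optimizer, and are not confirmed by this PR):

# Hypothetical extra arguments; drop them to recover the constant-rate behavior.
rmsp = tflearn.RMSProp(learning_rate=0.01, decay=0.999,
                       lr_decay=0.96, decay_step=1000)
net = tflearn.regression(net, optimizer=rmsp, loss='categorical_crossentropy')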
Effects
Based on the experimental results, this modification improves classification accuracy to some extent. For instance, I used the new optimizer for image classification on the CIFAR-10 dataset with ResNet-20. Training was repeated for 10 runs of 50 epochs each, with identical parameter settings. On average, the previous RMSProp optimizer reaches 80.05% accuracy, while the new one reaches 82.46%.
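For reference, a minimal sketch of the setup described above, following tflearn's standard CIFAR-10 residual-network example; the network definition, the run parameters, and the hypothetical lr_decay/decay_step arguments are illustrative assumptions, not part of this PR:

import tflearn
from tflearn.datasets import cifar10

# Load CIFAR-10 and one-hot encode the labels.
(X, Y), (testX, testY) = cifar10.load_data()
Y = tflearn.data_utils.to_categorical(Y, 10)
testY = tflearn.data_utils.to_categorical(testY, 10)

# ResNet-20: three residual stages (6n + 2 layers with n = 3 blocks per stage).
net = tflearn.input_data(shape=[None, 32, 32, 3])
net = tflearn.conv_2d(net, 16, 3, regularizer='L2', weight_decay=0.0001)
net = tflearn.residual_block(net, 3, 16)
net = tflearn.residual_block(net, 1, 32, downsample=True)
net = tflearn.residual_block(net, 2, 32)
net = tflearn.residual_block(net, 1, 64, downsample=True)
net = tflearn.residual_block(net, 2, 64)
net = tflearn.batch_normalization(net)
net = tflearn.activation(net, 'relu')
net = tflearn.global_avg_pool(net)
net = tflearn.fully_connected(net, 10, activation='softmax')

# Decayed optimizer as sketched above (lr_decay/decay_step are hypothetical);
# omit the two extra arguments to train the constant-learning-rate baseline.
rmsp = tflearn.RMSProp(learning_rate=0.01, decay=0.999,
                       lr_decay=0.96, decay_step=1000)
net = tflearn.regression(net, optimizer=rmsp, loss='categorical_crossentropy')

model = tflearn.DNN(net)
model.fit(X, Y, n_epoch=50, validation_set=(testX, testY), show_metric=True)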