Accuracy very low and not improving #16

Open

bhawmik opened this issue Sep 21, 2018 · 0 comments

Comments

bhawmik commented Sep 21, 2018

Hi, I am using your basic LSTM architecture to recreate the chatbot, but with pre-trained GloVe embeddings. During training, my training accuracy gets stuck at a very low value (0.1969) and makes no further progress. My code is attached below. Can you tell me what I can do to improve the training?
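For reference, I build embedding_matrix from the pre-trained GloVe vectors along these lines (a minimal sketch: the file path, max_words, embedding_dim, and word_index come from my preprocessing and are placeholders here):

import numpy as np

# Parse the GloVe text file into a word -> vector dict
# (the path is a placeholder for wherever the vectors live).
embeddings_index = {}
with open('glove.6B.100d.txt', encoding='utf-8') as f:
    for line in f:
        values = line.split()
        embeddings_index[values[0]] = np.asarray(values[1:], dtype='float32')

# Fill a (max_words, embedding_dim) matrix for the Embedding layer;
# word_index is the Tokenizer vocabulary from my preprocessing.
embedding_matrix = np.zeros((max_words, embedding_dim))
for word, i in word_index.items():
    if i < max_words and word in embeddings_index:
        embedding_matrix[i] = embeddings_index[word]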

from keras.models import Sequential
from keras.layers import Embedding, LSTM

#model.reset_states()
model = Sequential()

# Embedding layer sized to match the GloVe matrix loaded below.
model.add(Embedding(max_words, embedding_dim, input_length=maxlen))

# Four identical stacked LSTM layers, all returning the full sequence.
for _ in range(4):
    model.add(LSTM(units=100, return_sequences=True,
                   kernel_initializer='glorot_normal',
                   recurrent_initializer='glorot_normal',
                   activation='sigmoid'))

model.summary()

# Load the pre-trained GloVe weights into the Embedding layer and freeze it.
model.layers[0].set_weights([embedding_matrix])
model.layers[0].trainable = False

model.compile(loss='cosine_proximity', optimizer='adam', metrics=['accuracy'])
#model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])

model.fit(x_train, y_train,
          epochs=500,
          batch_size=32,
          validation_data=(x_val, y_val))

Epoch 498/500
60/60 [==============================] - 0s 3ms/step - loss: -0.1303 - acc: 0.1969 - val_loss: -0.1785 - val_acc: 0.2909
Epoch 499/500
60/60 [==============================] - 0s 3ms/step - loss: -0.1303 - acc: 0.1969 - val_loss: -0.1785 - val_acc: 0.2909
Epoch 500/500
60/60 [==============================] - 0s 3ms/step - loss: -0.1303 - acc: 0.1969 - val_loss: -0.1785 - val_acc: 0.2909

Further training (resuming fit on the same conversation data set, as in the sketch below) does not improve accuracy.
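A minimal sketch of what I mean by resuming, using the same compiled model and data as above (the epoch count here is arbitrary):

# Calling fit again continues training from the current weights;
# loss and accuracy stay flat at the values shown in the log above.
model.fit(x_train, y_train,
          epochs=100,
          batch_size=32,
          validation_data=(x_val, y_val))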
