Extracting embeddings from layers #621
From the skipgram word embeddings example:

> If instead you're looking to extract the hidden layer representation of a given input, refer to #41
Hi @Smerity, I use the graph model, as in #41, and I want to see the output of the … but it shows me that … Could you give me some advice? Thanks.
More generally, you can visualise the outputs/activations of every layer of your model. I wrote an MNIST example showing how here: https://github.com/philipperemy/keras-visualize-activations. So far it's the least painful approach I've seen.
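Conceptually, extracting a hidden layer's representation just means running the forward pass and keeping each layer's output instead of only the last one (in Keras this is typically done by building a function from the model's input to an intermediate layer's output). A framework-agnostic sketch in plain NumPy, with made-up layer sizes and random weights purely for illustration:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward_with_activations(x, weights):
    """Run a small dense network, recording every layer's output."""
    activations = []
    h = x
    for W, b in weights:
        h = relu(h @ W + b)
        activations.append(h)  # keep this layer's hidden representation
    return activations

rng = np.random.default_rng(0)
# Two dense layers, 4 -> 3 -> 2; sizes and weights are arbitrary for the sketch.
weights = [
    (rng.normal(size=(4, 3)), np.zeros(3)),
    (rng.normal(size=(3, 2)), np.zeros(2)),
]
acts = forward_with_activations(rng.normal(size=(1, 4)), weights)
print([a.shape for a in acts])  # [(1, 3), (1, 2)]
```

Each entry of `acts` is the activation of one layer for the given input, which is exactly what an activation-visualisation tool plots.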
Embeddings obtained from training a discriminative NN on a specific task can be extremely useful on related tasks (e.g. transfer learning). We can extract many potentially useful embeddings by looking at the weights of a layer of the model. Judging from the documentation (http://keras.io/models/), Keras doesn't seem to support the abstraction of extracting weight values from individual layers. It seems like it would be relatively easy to implement.
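The weight values in question are just arrays: for an embedding layer, row *i* of the weight matrix is the learned vector for token id *i* (in Keras this matrix is what `layer.get_weights()[0]` returns). A minimal NumPy sketch, using a random matrix as a stand-in for a trained embedding layer and made-up vocabulary/dimension sizes:

```python
import numpy as np

rng = np.random.default_rng(42)
vocab_size, embed_dim = 10, 4

# Stand-in for a trained embedding layer's weight matrix
# (the array a Keras layer would hand back via get_weights()[0]).
embedding_matrix = rng.normal(size=(vocab_size, embed_dim))

# Each row is the learned embedding for one token id.
token_id = 3
vector = embedding_matrix[token_id]
print(vector.shape)  # (4,)
```

Once extracted, these rows can be reused directly as features, or as the initialisation for a layer in a model trained on a related task.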