This repository has been archived by the owner on Jan 15, 2024. It is now read-only.

how to get sentence level embeddings #645

Answered by haven-jeon
KavyaGujjala asked this question in Q&A

The Bi-LSTM max-pooling network (https://arxiv.org/pdf/1705.02364.pdf) shows a simple method for getting sentence embeddings. If you have a (1, 128, 768)-shaped representation from BERT's last encoder layer, max-pooling over the token axis, e.g. `F.max(X, axis=1)`, gives you a sentence embedding of shape (1, 768).

I'm not sure this will work well on BERT, but this simple sentence-embedding approach worked well in my case.

This (https://github.com/hanxiao/bert-as-service/blob/master/README.md) is also good for reference.
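To make the pooling step concrete, here is a minimal NumPy sketch. A random array stands in for the real BERT encoder output; in practice you would substitute the actual last-layer hidden states:

```python
import numpy as np

# Stand-in for BERT's last encoder layer output:
# shape (batch=1, seq_len=128, hidden=768)
last_hidden = np.random.rand(1, 128, 768)

# Max-pool over the token axis (axis=1): for each of the 768 hidden
# dimensions, keep the maximum value across all 128 token positions.
sentence_embedding = last_hidden.max(axis=1)

print(sentence_embedding.shape)  # (1, 768)
```

Mean-pooling (`last_hidden.mean(axis=1)`) is a common alternative; if your inputs are padded, mask out the padding tokens before pooling so they don't dominate the result.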

Replies: 9 comments

Answer selected by szha

This discussion was converted from issue #645 on August 30, 2020 19:13.