Implementation of the word2vec embedding model from scratch in C++

Description

Word2Vec is a popular word embedding model used in many NLP systems. Instead of representing each word in a document as a one-hot vector, it learns a dense embedding for every word based on the context it appears in and the semantics of the surrounding words.

For more information, please see the original paper: https://arxiv.org/pdf/1301.3781.pdf
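
As an illustration of this idea (not the code in word2vec.cc itself), the sketch below performs a single skip-gram update with negative sampling on a toy vocabulary; the vocabulary size, embedding dimension, learning rate, and word indices are all invented for the example.

```cpp
#include <cmath>
#include <cstdio>
#include <random>
#include <utility>
#include <vector>

int main() {
    const int vocab = 6;      // toy vocabulary size (made up for the example)
    const int dim = 8;        // embedding dimension
    const double lr = 0.025;  // learning rate

    std::mt19937 rng(42);
    std::uniform_real_distribution<double> init(-0.5 / dim, 0.5 / dim);
    std::uniform_int_distribution<int> sample(0, vocab - 1);

    // Input (word) and output (context) embedding tables.
    std::vector<std::vector<double> > in(vocab, std::vector<double>(dim));
    std::vector<std::vector<double> > out(vocab, std::vector<double>(dim, 0.0));
    for (size_t i = 0; i < in.size(); ++i)
        for (size_t d = 0; d < in[i].size(); ++d) in[i][d] = init(rng);

    // One (center, context) pair taken from a toy corpus.
    int center = 0, context = 1;

    // The positive pair plus a few randomly drawn negative samples.
    std::vector<std::pair<int, int> > targets;   // (word id, label)
    targets.push_back(std::make_pair(context, 1));
    for (int k = 0; k < 3; ++k) targets.push_back(std::make_pair(sample(rng), 0));

    std::vector<double> grad(dim, 0.0);          // accumulated gradient for the center word
    for (size_t t = 0; t < targets.size(); ++t) {
        int word = targets[t].first;
        int label = targets[t].second;
        double dot = 0.0;
        for (int d = 0; d < dim; ++d) dot += in[center][d] * out[word][d];
        double sigma = 1.0 / (1.0 + std::exp(-dot));
        double g = lr * (label - sigma);         // gradient of the logistic loss
        for (int d = 0; d < dim; ++d) {
            grad[d] += g * out[word][d];         // contribution to the center-word vector
            out[word][d] += g * in[center][d];   // update the context-word vector
        }
    }
    for (int d = 0; d < dim; ++d) in[center][d] += grad[d];

    std::printf("updated embedding for word %d, first component: %f\n",
                center, in[center][0]);
    return 0;
}
```

Repeating this update over every (center, context) pair in a corpus is what gradually pulls words that occur in similar contexts toward similar embeddings.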

Build

g++ -std=c++11 word2vec.cc -o word2vec
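
Enabling compiler optimizations may speed up training noticeably (this is a standard g++ flag, not something specific to this project):

g++ -std=c++11 -O3 word2vec.cc -o word2vec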

Remarks

Please note that this project was built for learning purposes and is not ready for production use. Use it at your own risk.

Author

Rohith Uppala