Content-based recommendation system for cosmetics
Updated Feb 7, 2021 - Jupyter Notebook
Adaptive Skip-gram implementation in Julia
Classification of "BBC News" articles and a performance comparison of three model architectures, followed by 2D word-embedding visualization using PCA and 3D word-embedding visualization using t-SNE.
The project researches sentiment analysis on Twitter, with the goal of classifying comments as positive, negative, or neutral. Using word embeddings, an advanced method in natural language processing, our model achieved a high accuracy of 96.61%. The model was trained on Twitter data and tested on a comment dataset from Binance.
The task involves developing a system capable of translating text from Arabic to English, serving as a tool to facilitate understanding and communication between Arabic and English speakers.
In this project, the authors propose using a contextual Word2Vec model to handle out-of-vocabulary (OOV) words. OOV candidates are extracted using left-right entropy and pointwise mutual information. They use Word2Vec to construct the word vector space and its CBOW (continuous bag of words) variant to capture the contextual information of the words.
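As a rough illustration of the left-right entropy criterion mentioned above, here is a minimal sketch. The function name, the token-level treatment of the corpus, and the candidate-word interface are assumptions for illustration, not taken from the project itself:

```python
import math
from collections import Counter

def left_right_entropy(corpus_tokens, candidate):
    """Entropy of the tokens appearing immediately left/right of `candidate`.

    High entropy on both sides means the candidate occurs in many different
    contexts, which suggests it is a free-standing word -- the intuition
    behind using left-right entropy for OOV / new-word extraction.
    """
    left, right = Counter(), Counter()
    for i, tok in enumerate(corpus_tokens):
        if tok == candidate:
            if i > 0:
                left[corpus_tokens[i - 1]] += 1
            if i + 1 < len(corpus_tokens):
                right[corpus_tokens[i + 1]] += 1

    def entropy(counter):
        total = sum(counter.values())
        if total == 0:
            return 0.0
        return -sum((c / total) * math.log2(c / total) for c in counter.values())

    return entropy(left), entropy(right)
```

Candidates whose left and right entropies both exceed a threshold would then be kept as likely words before building the Word2Vec vector space.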
NLP
Natural Language Processing with Disaster Tweets using word embeddings.
Explore text classification with Logistic Regression and Naive Bayes models implemented from scratch, comparing feature-engineering techniques such as Bag-of-Words, TF-IDF, and word embeddings for accurate labeling.
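A from-scratch TF-IDF featurizer of the kind this description compares against Bag-of-Words might look like the following sketch. The function name and the assumption that documents arrive pre-tokenized are illustrative choices, not details from the repository:

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute sparse TF-IDF vectors for a list of tokenized documents.

    tf(t, d)  = count of t in d / length of d
    idf(t)    = ln(N / df(t)), where df(t) = number of docs containing t
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # document frequency: one count per doc

    vectors = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        vectors.append({
            term: (count / total) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return vectors
```

A term appearing in every document gets idf = ln(1) = 0, so it carries no weight; that is exactly the discriminative signal a Logistic Regression or Naive Bayes classifier benefits from relative to raw Bag-of-Words counts.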
We have implemented, expanded, and reviewed the paper "Sense2Vec - A Fast and Accurate Method For Word Sense Disambiguation In Neural Word Embeddings" by Andrew Trask, Phil Michalak, and John Liu.
'AI Hiphop Lyrics Generator 🎙' project, which generates hip-hop lyrics from a few user keywords like 'love' and 'money'.
Polysemy Embedding - an iterative approach to sense-based word embeddings
Labeled Word2Vec for Semi-Supervised Learning
A word embedding is a learned representation of text in which words with the same meaning have similar representations. Word embeddings are in fact a class of techniques where individual words are represented as real-valued vectors in a predefined vector space. Each word is mapped to one vector, and the vector values are learned in a way that …
Showcase of Natural Language Processing (NLP) for sentiment analysis of survey text
This repository contains deep learning projects. The code for each project is provided, and the explanations can be found in each project's ReadMe.md file.