"Word2Vec for Russian text" - my project for course "Scientific Data Computing" in University of Tartu. It was presented as 20-minutes talk on 6th Estonian Digital Humanities Conference at September 2018.
Universal-Sentence-Encoder-Multilingual-QA is a model developed by researchers at Google, mainly for question answering. You can use this template to import the model into Inferless.
MedCPT generates embeddings of biomedical texts that can be used for semantic search (dense retrieval). The MedCPT Query Encoder computes embeddings of short texts (e.g., questions, search queries, sentences). In this template, we will import the MedCPT Query Encoder on the Inferless Platform.
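The dense-retrieval step described above reduces to ranking documents by cosine similarity between embeddings. Below is a minimal NumPy sketch of that ranking step; the toy 4-dimensional vectors are placeholders standing in for real MedCPT encoder output, not actual model embeddings.

```python
import numpy as np

def cosine_rank(query_vec, doc_vecs):
    """Rank documents by cosine similarity to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    order = np.argsort(-scores)  # indices of docs, most similar first
    return order, scores[order]

# Toy embeddings standing in for encoder output.
query = np.array([1.0, 0.0, 1.0, 0.0])
docs = np.array([
    [1.0, 0.1, 0.9, 0.0],  # nearly parallel to the query
    [0.0, 1.0, 0.0, 1.0],  # orthogonal to the query
    [0.5, 0.5, 0.5, 0.5],
])
order, scores = cosine_rank(query, docs)
print(order)  # most relevant document index comes first
```

In a real pipeline the document embeddings would be precomputed with the MedCPT Article Encoder and the query embedded at search time with the Query Encoder; the ranking logic stays the same.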
This is a sentence embedding model, initialized from xlm-roberta-large and continually trained on a mixture of multilingual datasets. It supports 100 languages from xlm-roberta, but low-resource languages may see performance degradation.
The MS-marco-MiniLM-L-12-v2 model can be used for Information Retrieval: given a query, encode the query together with all candidate passages (e.g. retrieved with ElasticSearch), then sort the passages in decreasing order of score.
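The score-then-sort flow above can be sketched as follows. The `overlap_score` function here is a deliberately simple stand-in for the cross-encoder; with sentence-transformers installed, it would be replaced by something like `CrossEncoder("cross-encoder/ms-marco-MiniLM-L-12-v2").predict(pairs)` (name assumed from the model hub).

```python
def rerank(query, passages, score_pair):
    """Score each (query, passage) pair, then sort passages in decreasing order."""
    scored = [(score_pair(query, p), p) for p in passages]
    scored.sort(key=lambda sp: sp[0], reverse=True)
    return scored

def overlap_score(query, passage):
    """Toy scorer: fraction of query tokens found in the passage."""
    q = set(query.lower().split())
    p = set(passage.lower().replace(".", "").split())
    return len(q & p) / len(q)

passages = [
    "Paris is the capital of France.",
    "The Eiffel Tower is in Paris.",
    "Berlin is the capital of Germany.",
]
ranked = rerank("capital of france", passages, overlap_score)
print(ranked[0][1])  # the passage about France ranks first
```

A cross-encoder reads query and passage jointly, so it is more accurate than comparing independent embeddings, but it must be run once per candidate pair; that is why it is typically applied only to a small retrieved set rather than the whole corpus.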
"BrightPsych" is a holistic mental health platform featuring a supportive chatbot and detail CBT analysis for disorders. Daily Mood Tracking aids emotional well-being, while data analysis unveils student mental health trends. Guided mindfulness contribute to resilience in a nurturing space. Empower, Engage and Elevate through Community Forum.
This project is a Cocktail Recommendation System that uses the Retrieval-Augmented Generation (RAG) approach to provide users with personalized cocktail recommendations based on their queries.
jina-embeddings-v2-base-en is an English, monolingual embedding model supporting sequence lengths up to 8192 tokens. It is based on a BERT architecture (JinaBERT) that supports the symmetric bidirectional variant of ALiBi to allow longer sequence lengths. The backbone jina-bert-v2-base-en is pretrained on the C4 dataset.
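Since the 8192-token support hinges on ALiBi, here is a small illustrative sketch of the symmetric (bidirectional) ALiBi bias: instead of positional embeddings, a per-head linear penalty proportional to token distance is added to the attention logits. This is an assumption-laden toy, not Jina's actual implementation; the slope schedule follows the original ALiBi paper for power-of-two head counts.

```python
import numpy as np

def alibi_slopes(n_heads):
    """Geometric slope schedule 2^(-8/n), 2^(-16/n), ... from the ALiBi paper."""
    start = 2 ** (-8.0 / n_heads)
    return [start ** (i + 1) for i in range(n_heads)]

def symmetric_alibi_bias(seq_len, n_heads):
    """Bias added to attention logits: -slope * |i - j| per head, both directions."""
    pos = np.arange(seq_len)
    dist = np.abs(pos[:, None] - pos[None, :])
    slopes = np.array(alibi_slopes(n_heads))
    return -slopes[:, None, None] * dist[None, :, :]

bias = symmetric_alibi_bias(seq_len=8, n_heads=4)
print(bias.shape)  # one (seq_len, seq_len) bias matrix per head
```

Because the penalty depends only on relative distance, the same bias formula extrapolates to sequences longer than those seen in pretraining, which is what lets the model handle 8192-token inputs.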