Deep Xi: A deep learning approach to a priori SNR estimation implemented in TensorFlow 2/Keras. For speech enhancement and robust ASR.
PyTorch implementations of various attention mechanisms for deep learning researchers.
[VLDB'22] Anomaly Detection using Transformers, self-conditioning and adversarial training.
Exploring attention weights in transformer-based models with linguistic knowledge.
"Attention, Learn to Solve Routing Problems!"[Kool+, 2019], Capacitated Vehicle Routing Problem solver
This repository contains various types of attention mechanisms, such as Bahdanau attention, soft attention, additive attention, and hierarchical attention, in PyTorch, TensorFlow, and Keras; see the additive-attention sketch after this list.
A faster PyTorch implementation of multi-head self-attention; see the multi-head sketch after this list.
Visualization of simple attention and Google's multi-head attention.
Multi^2OIE: Multilingual Open Information Extraction Based on Multi-Head Attention with BERT (Findings of ACL: EMNLP 2020)
Attention-based Induction Networks for Few-Shot Text Classification
This is the official repository of the original Point Transformer architecture.
Self-Supervised Vision Transformers for multiplexed imaging datasets
Several types of attention modules written in PyTorch.
EMNLP 2018: Multi-Head Attention with Disagreement Regularization; NAACL 2019: Information Aggregation for Multi-Head Attention with Routing-by-Agreement
Sentence encoder and training code for Mean-Max AAE
Collection of different types of transformers for learning purposes
Code for the runner-up entry on the English subtask of the Shared Task on Fighting the COVID-19 Infodemic, NLP4IF workshop, NAACL'21.
HydraViT: a PyTorch implementation of an adaptive multi-branch transformer for multi-label disease classification from chest X-ray images. The repository provides code to train and evaluate the model on the NIH Chest X-ray dataset.
Image captioning with an EfficientNet encoder and a Transformer decoder combined with the attention mechanism.
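Several repositories above cover classic attention variants such as Bahdanau attention. Below is a minimal sketch of additive (Bahdanau-style) attention in PyTorch; the class name AdditiveAttention and all parameter names are illustrative and not drawn from any listed repository.

```python
# A minimal additive (Bahdanau-style) attention sketch in PyTorch.
# Scores each encoder state against a decoder query with a small
# feed-forward network: score_i = v^T tanh(W_q q + W_k k_i).
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    def __init__(self, query_dim: int, key_dim: int, hidden_dim: int):
        super().__init__()
        self.w_query = nn.Linear(query_dim, hidden_dim, bias=False)
        self.w_key = nn.Linear(key_dim, hidden_dim, bias=False)
        self.v = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, query: torch.Tensor, keys: torch.Tensor):
        # query: (batch, query_dim); keys: (batch, seq_len, key_dim)
        scores = self.v(torch.tanh(self.w_query(query).unsqueeze(1) + self.w_key(keys)))
        weights = scores.softmax(dim=1)        # (batch, seq_len, 1)
        context = (weights * keys).sum(dim=1)  # weighted sum of encoder states
        return context, weights.squeeze(-1)

# Usage: batch of 2, 7 encoder states of size 32, decoder query of size 16.
attn = AdditiveAttention(query_dim=16, key_dim=32, hidden_dim=24)
context, weights = attn(torch.randn(2, 16), torch.randn(2, 7, 32))
print(context.shape, weights.shape)  # torch.Size([2, 32]) torch.Size([2, 7])
```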
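Likewise, for the multi-head entries, here is a minimal sketch of multi-head self-attention with scaled dot-product scoring, following Vaswani et al., 2017. MultiHeadSelfAttention and its parameters are illustrative names, not any listed repository's API.

```python
# A minimal multi-head self-attention sketch in PyTorch: project to
# queries/keys/values, split into heads, apply scaled dot-product
# attention per head, then merge heads and project the result.
import math
import torch
import torch.nn as nn

class MultiHeadSelfAttention(nn.Module):
    def __init__(self, embed_dim: int, num_heads: int):
        super().__init__()
        assert embed_dim % num_heads == 0, "embed_dim must divide evenly across heads"
        self.num_heads = num_heads
        self.head_dim = embed_dim // num_heads
        self.q_proj = nn.Linear(embed_dim, embed_dim)
        self.k_proj = nn.Linear(embed_dim, embed_dim)
        self.v_proj = nn.Linear(embed_dim, embed_dim)
        self.out_proj = nn.Linear(embed_dim, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, embed_dim)
        batch, seq_len, embed_dim = x.shape

        def split_heads(t):
            # (batch, seq_len, embed_dim) -> (batch, num_heads, seq_len, head_dim)
            return t.view(batch, seq_len, self.num_heads, self.head_dim).transpose(1, 2)

        q = split_heads(self.q_proj(x))
        k = split_heads(self.k_proj(x))
        v = split_heads(self.v_proj(x))

        # Scaled dot-product attention per head.
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.head_dim)
        weights = scores.softmax(dim=-1)  # (batch, heads, seq, seq)
        context = weights @ v             # (batch, heads, seq, head_dim)

        # Merge heads back and apply the output projection.
        context = context.transpose(1, 2).contiguous().view(batch, seq_len, embed_dim)
        return self.out_proj(context)

# Usage: a batch of 2 sequences, length 5, embedding size 64, 8 heads.
attn = MultiHeadSelfAttention(embed_dim=64, num_heads=8)
out = attn(torch.randn(2, 5, 64))
print(out.shape)  # torch.Size([2, 5, 64])
```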