Nonparametric Modern Hopfield Models (Jupyter Notebook, updated Jan 8, 2024)
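Modern Hopfield models of this kind build on the dense associative-memory update rule of Ramsauer et al. (2020), which retrieves a stored pattern by repeatedly applying a softmax-weighted average of the memory. A minimal NumPy sketch of that standard update (not the nonparametric variant's specific algorithm; `hopfield_retrieve` and `beta` are illustrative names):

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()

def hopfield_retrieve(X, xi, beta=8.0, steps=3):
    """Iterate the modern Hopfield update xi <- X softmax(beta * X^T xi).

    X  : (d, N) matrix whose columns are the stored patterns
    xi : (d,)   query vector
    """
    for _ in range(steps):
        xi = X @ softmax(beta * X.T @ xi)
    return xi

# Store 4 random unit-norm patterns in d=16 dimensions, query with a noisy copy.
rng = np.random.default_rng(0)
X = rng.standard_normal((16, 4))
X /= np.linalg.norm(X, axis=0)
query = X[:, 2] + 0.1 * rng.standard_normal(16)
retrieved = hopfield_retrieve(X, query)
# For large enough beta, the iteration converges toward the stored pattern X[:, 2].
```

The connection to efficient Transformers is that a single update step is exactly softmax attention with the stored patterns as keys and values, which is why Hopfield-based models slot naturally into attention layers.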
Gated Attention Unit (TensorFlow implementation)
MetaFormer-based Global Contexts-aware Network for Efficient Semantic Segmentation (Accepted by WACV 2024)
PyTorch implementation of LISA (Linear-Time Self Attention with Codeword Histogram for Efficient Recommendation, WWW 2021)
This repository contains the official code for Energy Transformer, an efficient energy-based Transformer variant for graph classification
Official Implementation of Energy Transformer in PyTorch for Mask Image Reconstruction
[NeurIPS 2022 Spotlight] This is the official PyTorch implementation of "EcoFormer: Energy-Saving Attention with Linear Complexity"
[ICCV 2023] Efficient Video Action Detection with Token Dropout and Context Refinement
Demo code for CVPR2023 paper "Sparsifiner: Learning Sparse Instance-Dependent Attention for Efficient Vision Transformers"
A custom TensorFlow implementation of Google's ELECTRA NLP model with compositional embeddings using complementary partitions
Source code for the article on how to create a chatbot in Python: a chatbot using the Reformer, also known as the efficient Transformer, to generate dialogues between two bots.
[ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Liu, Zhangyang Wang
Master's thesis with code investigating methods for incorporating long-context reasoning into low-resource languages without pre-training from scratch. We investigated whether multilingual models could inherit these properties by converting them into an efficient Transformer (such as the Longformer architecture).
Official PyTorch implementation of our ECCV 2022 paper "Sliced Recursive Transformer"
[MICCAI 2023] DAE-Former: Dual Attention-guided Efficient Transformer for Medical Image Segmentation
[CVPR 2023] IMP: iterative matching and pose estimation with transformer-based recurrent module
[NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, Zhangyang Wang
Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time"
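The variant proposed in "Transformer Quality in Linear Time" is the Gated Attention Unit (GAU): queries and keys are cheap per-dimension transforms of one shared low-dimensional projection, attention uses a squared ReLU instead of softmax, and the attended values are gated elementwise before the output projection. A simplified single-sequence NumPy sketch under those assumptions (no chunking, normalization, or residuals; all parameter names here are illustrative):

```python
import numpy as np

def silu(x):
    return x / (1.0 + np.exp(-x))

def gated_attention_unit(x, params):
    """Minimal Gated Attention Unit for one sequence x of shape (n, d)."""
    Wu, Wv, Wz, Wo, gq, bq, gk, bk = params
    n = x.shape[0]
    u = silu(x @ Wu)                          # gate branch, (n, e)
    v = silu(x @ Wv)                          # value branch, (n, e)
    z = silu(x @ Wz)                          # shared projection, (n, s), s << e
    q = z * gq + bq                           # per-dim scale/offset -> queries
    k = z * gk + bk                           # per-dim scale/offset -> keys
    a = np.maximum(q @ k.T / n, 0.0) ** 2     # squared-ReLU attention, (n, n)
    return (u * (a @ v)) @ Wo                 # gated output, (n, d)

rng = np.random.default_rng(1)
d, e, s, n = 8, 16, 4, 5
params = (rng.standard_normal((d, e)) * 0.1,  # Wu
          rng.standard_normal((d, e)) * 0.1,  # Wv
          rng.standard_normal((d, s)) * 0.1,  # Wz
          rng.standard_normal((e, d)) * 0.1,  # Wo
          np.ones(s), np.zeros(s),            # gq, bq
          np.ones(s), np.zeros(s))            # gk, bk
y = gated_attention_unit(rng.standard_normal((n, d)), params)
```

The strong elementwise gate is what lets the paper replace full softmax attention with this weaker, cheaper attention; the linear-time variant in the paper additionally chunks the sequence, which this sketch omits.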
Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021).
Mask Transfiner for High-Quality Instance Segmentation, CVPR 2022