
Add OpenFold workflow #98

Open
athbaltzis opened this issue May 17, 2023 · 0 comments
Labels
enhancement Improvement for existing functionality

athbaltzis commented May 17, 2023

Description of feature

Integrate OpenFold into the nf-core/proteinfold pipeline. OpenFold is a faithful but trainable PyTorch reproduction of AlphaFold2.

OpenFold has the following advantages over the reference implementation:

  • Faster inference on GPU, sometimes by as much as 2x. The greatest speedups are achieved on recent (>= Ampere) GPUs.
  • Inference on extremely long chains, made possible by our implementation of low-memory attention (Rabe & Staats 2021). OpenFold can predict the structures of sequences with more than 4000 residues on a single A100, and even longer ones with CPU offloading.
  • Custom CUDA attention kernels modified from FastFold's kernels support in-place attention during inference and training. They use 4x and 5x less GPU memory than equivalent FastFold and stock PyTorch implementations, respectively.
  • Efficient alignment scripts using the original AlphaFold HHblits/JackHMMER pipeline or ColabFold's, which uses the faster MMseqs2 instead.
  • FlashAttention support greatly speeds up MSA attention.

Source code: https://github.com/aqlaboratory/openfold
Publication: https://www.biorxiv.org/content/10.1101/2022.11.20.517210v2
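The low-memory attention referenced above (Rabe & Staats 2021) works by scanning the key/value sequence in chunks and keeping running softmax statistics, so the full query-by-key score matrix is never materialised. A minimal NumPy sketch of the idea (purely illustrative; OpenFold's actual implementation is a PyTorch/CUDA kernel, and all names here are hypothetical):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Reference attention: materialises the full (Lq, Lk) score matrix.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def chunked_attention(q, k, v, chunk=128):
    # Memory-efficient attention in the style of Rabe & Staats (2021):
    # iterate over key/value chunks, maintaining a running score maximum,
    # softmax denominator, and weighted-value accumulator per query row.
    d = q.shape[-1]
    m = np.full(q.shape[0], -np.inf)               # running max of scores
    s = np.zeros(q.shape[0])                       # running softmax denominator
    acc = np.zeros((q.shape[0], v.shape[-1]))      # running weighted values
    for i in range(0, k.shape[0], chunk):
        kc, vc = k[i:i + chunk], v[i:i + chunk]
        scores = q @ kc.T / np.sqrt(d)             # only (Lq, chunk) in memory
        m_new = np.maximum(m, scores.max(axis=-1))
        scale = np.exp(m - m_new)                  # rescale old statistics
        e = np.exp(scores - m_new[:, None])
        s = s * scale + e.sum(axis=-1)
        acc = acc * scale[:, None] + e @ vc
        m = m_new
    return acc / s[:, None]
```

Peak memory for the score matrix drops from O(Lq * Lk) to O(Lq * chunk), which is what makes inference on chains of 4000+ residues feasible on a single GPU.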
