How can I test 5 models at once #122

Open
zhoujingyu13687306871 opened this issue Dec 15, 2022 · 1 comment

Comments

zhoujingyu13687306871 commented Dec 15, 2022

Hello, I am very happy to see that FastFold has added the multimer function, but I have a problem: when using the monomer function, I still cannot predict 5 models at once. Is there a solution for this?
Here is my script:

#!/bin/bash
#DSUB --job_type cosched
#DSUB -n fastfold
#DSUB -A root.bingxing2.gpuuser001
#DSUB -q root.default
#DSUB -R 'cpu=12;gpu=2;mem=90000'
#DSUB -l wuhanG5500
#DSUB -N 1
#DSUB -e %J.out
#DSUB -o %J.out
###################### check GPU utilization ################################################
STATE_FILE="state_${BATCH_JOB_ID}"
/usr/bin/touch ${STATE_FILE}
function gpus_collection(){
    # poll nvidia-smi once per second until the job writes "over" to the state file
    while [[ $(grep -c "over" "${STATE_FILE}") == "0" ]]; do
        /usr/bin/sleep 1
        /usr/bin/nvidia-smi >> "gpu_${BATCH_JOB_ID}.log"
    done
}
gpus_collection &
##################### AF2 computation section ###################################################
module load anaconda/2021.11
module load cuda/11.3.0-gcc-4.8.5-oaa
module load gcc/9.3.0-gcc-4.8.5-bxl
source activate fastfold
af2Root=/home/bingxing2/public

# add '--gpus [N]' to use N gpus for inference
# add '--enable_workflow' to use parallel workflow for data processing
# add '--use_precomputed_alignments [path_to_alignments]' to use precomputed msa
# add '--chunk_size [N]' to use chunk to reduce peak memory
# add '--inplace' to use inplace to save memory

python inference.py mono.fasta $af2Root/alphafold2.2.0/pdb_mmcif/mmcif_files \
    --output_dir ./mono_out \
    --uniref90_database_path $af2Root/uniref90/uniref90.fasta \
    --mgnify_database_path $af2Root/mgnify/mgy_clusters.fa \
    --pdb70_database_path $af2Root/pdb70/pdb70 \
    --param_path $af2Root/alphafold2.2.0/params/params_model_1.npz \
    --uniclust30_database_path $af2Root/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
    --bfd_database_path $af2Root/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
    --jackhmmer_binary_path $(which jackhmmer) \
    --hhblits_binary_path $(which hhblits) \
    --hhsearch_binary_path $(which hhsearch) \
    --kalign_binary_path $(which kalign) \
    --gpus 2 \
    --enable_workflow \
    --chunk_size 1 \
    --inplace
echo "over" >> "${STATE_FILE}"

@Shenggan
Contributor

Our inference scripts can only run inference for one model at a time. I think you can specify --model_name to use the other 4 models.
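
For example, a minimal sketch of a wrapper that runs the script once per model. This assumes inference.py accepts --model_name alongside the flags already used above, and that the parameter files follow the params_model_N.npz naming from your script; both are assumptions here, not verified behavior of the FastFold CLI:

#!/bin/bash
# Hypothetical wrapper (sketch, not tested): run inference once per AlphaFold model.
# Assumes the same databases and flags as the script above.
af2Root=/home/bingxing2/public
for i in 1 2 3 4 5; do
    python inference.py mono.fasta $af2Root/alphafold2.2.0/pdb_mmcif/mmcif_files \
        --output_dir ./mono_out_model_${i} \
        --model_name model_${i} \
        --param_path $af2Root/alphafold2.2.0/params/params_model_${i}.npz \
        --uniref90_database_path $af2Root/uniref90/uniref90.fasta \
        --mgnify_database_path $af2Root/mgnify/mgy_clusters.fa \
        --pdb70_database_path $af2Root/pdb70/pdb70 \
        --uniclust30_database_path $af2Root/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
        --bfd_database_path $af2Root/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
        --jackhmmer_binary_path $(which jackhmmer) \
        --hhblits_binary_path $(which hhblits) \
        --hhsearch_binary_path $(which hhsearch) \
        --kalign_binary_path $(which kalign) \
        --gpus 2 --enable_workflow --chunk_size 1 --inplace
done

Since the MSAs depend only on the input sequence, adding --use_precomputed_alignments [path_to_alignments] after the first run would avoid recomputing them for models 2 through 5.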
