
How to use multi GPU to finetune Llama2 #751

Open
Qwtdgh opened this issue Jan 24, 2024 · 2 comments

Qwtdgh commented Jan 24, 2024

Hi, I have a question about how to finetune Llama2 using multiple GPUs.

env: 4*A100 40G
yaml: llm/baseline/exp_yaml/dolly_lda/dolly_federate.yaml

The yaml looks as follows:

use_gpu: True
device: 0
early_stop:
  patience: 0
federate:
  mode: standalone
  client_num: 3
  total_round_num: 500

One A100 alone is not enough; how can I use the other three GPUs to finetune my model?

I tried modifying train.data_para_dids=[0, 1, 2, 3], but it does not work. I think the reason is that cfg.device only specifies a single GPU.

Looking forward to your reply!

@rayrayraykk (Collaborator)

data_para_dids is for data parallelism. You should use DeepSpeed instead.
Please use the following configs to set up DeepSpeed (for other options, please refer to https://github.com/alibaba/FederatedScope/blob/llm/federatedscope/core/configs/cfg_llm.py):

# ---------------------------------------------------------------------- #
# Deepspeed related options
# ---------------------------------------------------------------------- #
cfg.llm.deepspeed = CN()
cfg.llm.deepspeed.use = False
cfg.llm.deepspeed.ds_config = ''  # compatible with deepspeed config
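
For reference, here is a minimal sketch of how the experiment yaml from this issue might be extended to enable DeepSpeed, assuming the yaml keys map onto the cfg tree above (the ds_config file name is illustrative, not from this thread):

use_gpu: True
device: 0
llm:
  deepspeed:
    use: True
    ds_config: 'ds_config.json'  # path to a standard DeepSpeed JSON config

The file referenced by ds_config is an ordinary DeepSpeed configuration (e.g., choosing a ZeRO stage and a per-GPU micro-batch size), and DeepSpeed jobs are typically launched with the deepspeed launcher so that all local GPUs are visible to the process.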

Qwtdgh (Author) commented Jan 25, 2024

Thanks for your reply!
So I don't need to set cfg.device? Or does device have no effect when deepspeed is enabled?
