
Hello, author! While reproducing the code I ran into an out-of-CPU-memory error. Could you give me a hint about why this happens and how to fix it? Thank you very much!! #12

Open
DW934 opened this issue Apr 20, 2024 · 0 comments


DW934 commented Apr 20, 2024

```
Traceback (most recent call last):
  File "LLMs/LLaMA/src/train_bash.py", line 16, in <module>
    main()
  File "LLMs/LLaMA/src/train_bash.py", line 7, in main
    run_exp()
  File "C:\Code\ChatKBQA-main\LLMs\LLaMA\src\llmtuner\tuner\tune.py", line 26, in run_exp
    run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
  File "C:\Code\ChatKBQA-main\LLMs\LLaMA\src\llmtuner\tuner\sft\workflow.py", line 28, in run_sft
    model, tokenizer = load_model_and_tokenizer(model_args, finetuning_args, training_args.do_train, stage="sft")
  File "C:\Code\ChatKBQA-main\LLMs\LLaMA\src\llmtuner\tuner\core\loader.py", line 171, in load_model_and_tokenizer
    model = AutoModelForCausalLM.from_pretrained(
  File "C:\ProgramData\anaconda3\envs\ChatKBQA\lib\site-packages\transformers\models\auto\auto_factory.py", line 556, in from_pretrained
    return model_class.from_pretrained(
  File "C:\ProgramData\anaconda3\envs\ChatKBQA\lib\site-packages\transformers\modeling_utils.py", line 3375, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "C:\Users\DW\.cache\huggingface\modules\transformers_modules\chatglm2-6b\modeling_chatglm.py", line 856, in __init__
    self.transformer = ChatGLMModel(config, empty_init=empty_init, device=device)
  File "C:\Users\DW\.cache\huggingface\modules\transformers_modules\chatglm2-6b\modeling_chatglm.py", line 756, in __init__
    self.encoder = init_method(GLMTransformer, config, **init_kwargs)
  File "C:\ProgramData\anaconda3\envs\ChatKBQA\lib\site-packages\torch\nn\utils\init.py", line 52, in skip_init
    return module_cls(*args, **kwargs).to_empty(device=final_device)
  File "C:\ProgramData\anaconda3\envs\ChatKBQA\lib\site-packages\torch\nn\modules\module.py", line 868, in to_empty
    return self._apply(lambda t: torch.empty_like(t, device=device))
  File "C:\ProgramData\anaconda3\envs\ChatKBQA\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
    module._apply(fn)
  File "C:\ProgramData\anaconda3\envs\ChatKBQA\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
    module._apply(fn)
  File "C:\ProgramData\anaconda3\envs\ChatKBQA\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "C:\ProgramData\anaconda3\envs\ChatKBQA\lib\site-packages\torch\nn\modules\module.py", line 664, in _apply
    param_applied = fn(param)
  File "C:\ProgramData\anaconda3\envs\ChatKBQA\lib\site-packages\torch\nn\modules\module.py", line 868, in <lambda>
    return self._apply(lambda t: torch.empty_like(t, device=device))
RuntimeError: [enforce fail at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 112197632 bytes.
```
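A back-of-the-envelope sketch of why this fails: the allocation that finally errors out is only about 107 MiB, but it comes on top of everything `from_pretrained` has already materialized on the CPU. ChatGLM2-6B has roughly 6.2 billion parameters (an approximate figure, not taken from this repo), so instantiating the full model in fp32 needs on the order of tens of GiB of host RAM:

```python
# The single allocation that fails is small -- it is just the last straw.
failed_alloc_mib = 112197632 / 2**20
print(f"failed allocation: {failed_alloc_mib:.0f} MiB")  # 107 MiB

# Approximate parameter count for ChatGLM2-6B (assumption, ~6.2e9).
# from_pretrained() materializes every weight tensor on the CPU first,
# so the process needs at least this much free RAM for the weights alone:
PARAMS = 6.2e9
print(f"fp32 weights: {PARAMS * 4 / 2**30:.1f} GiB")  # ~23 GiB
print(f"fp16 weights: {PARAMS * 2 / 2**30:.1f} GiB")  # ~12 GiB
```

If the machine has less free RAM than that, common workarounds (general `transformers` options, not verified against this repo's loader) are enlarging the Windows page file, closing other memory-heavy processes, or passing `torch_dtype=torch.float16` and `low_cpu_mem_usage=True` to `from_pretrained` to roughly halve the peak footprint.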
