
baichuan2-13b: fine-tuned model's output under vLLM is inconsistent with the official web_demo #403

Open
kingduxia opened this issue Apr 19, 2024 · 0 comments

kingduxia commented Apr 19, 2024

The weights I load come from LoRA SFT training on top of baichuan2-13b, but the inference output when the model is loaded by the official web_demo differs from when it is loaded by vLLM.

Looking at the code, the web demo uses the generation_config parameters bundled with the model:
[screenshot]
With the same input, the output matches expectations:
[screenshot]
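For reference, the web_demo path builds its input through the model repo's chat assembly, which injects special role tokens rather than concatenating plain text. A minimal sketch of that assembly, assuming user/assistant token ids 195/196 (an assumption taken from Baichuan2's generation_config; verify against your checkpoint) and a stand-in tokenizer:

```python
# Sketch of Baichuan2-style chat-input assembly. The role token ids (195/196)
# are assumptions read from the model's generation_config -- verify them.
def build_chat_input(tokenize, messages, user_token_id=195, assistant_token_id=196):
    """tokenize: str -> list[int]; messages: [{"role": ..., "content": ...}]"""
    input_ids = []
    for msg in messages:
        role_token = user_token_id if msg["role"] == "user" else assistant_token_id
        input_ids.append(role_token)
        input_ids.extend(tokenize(msg["content"]))
    # generation is expected to continue after a trailing assistant token
    input_ids.append(assistant_token_id)
    return input_ids

fake_tokenize = lambda s: [ord(c) for c in s]  # stand-in tokenizer for illustration
ids = build_chat_input(fake_tokenize, [{"role": "user", "content": "hi"}])
# ids == [195, 104, 105, 196]
```

A plain-text OpenAI-format prompt, once re-tokenized by the server, never produces these single reserved-token ids, which is one way the two paths can diverge.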
For inference acceleration I use vLLM; environment: A100, tp=2.
Request parameters:
[screenshot]
On the server side the prompt is assembled in OpenAI format:
[screenshot]
But the output is:
[screenshot]
It contains an extra, incomplete repetition of the question content.
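One thing worth double-checking on the vLLM side is whether the request's sampling parameters actually mirror the checkpoint's generation_config; a missing repetition_penalty or stop token can produce exactly this kind of trailing repetition. A hedged sketch of the mapping (all numeric values are placeholders, read the real ones from your generation_config.json):

```python
# Placeholder mapping from generation_config.json to vLLM sampling kwargs.
# The numeric values are examples only -- copy them from your checkpoint.
sampling_kwargs = {
    "temperature": 0.3,          # generation_config.temperature
    "top_p": 0.85,               # generation_config.top_p
    "top_k": 5,                  # generation_config.top_k
    "repetition_penalty": 1.05,  # generation_config.repetition_penalty
    "max_tokens": 2048,
    "stop_token_ids": [2],       # eos_token_id, so decoding stops cleanly
}

# With vLLM installed this would be applied as:
#   from vllm import SamplingParams
#   params = SamplingParams(**sampling_kwargs)
```

If the server's request parameters leave any of these at vLLM defaults (e.g. repetition_penalty=1.0, no stop_token_ids), the output can run past the intended stopping point.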

I don't think this is a problem with the fine-tuning itself, since both paths use the same model weights. My guess is that some model-input parameter is not aligned; it probably isn't a bug in the vLLM framework either.

Looking at the vLLM code, it also does Baichuan model adaptation similar to generate_util:
vllm
[screenshot]
baichuan generate_util
[screenshot]
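If the mismatch turns out to be the prompt format, one hedged workaround is to bypass text-level templating entirely: build the token ids the same way generate_util does and hand them to vLLM as token ids, so the OpenAI-format string never has to reproduce the reserved role tokens. A sketch (the 195/196 ids and the prompt_token_ids call are assumptions; check your vLLM version's API):

```python
# Sketch: hand pre-built chat token ids to vLLM instead of a text prompt.
# Ids 195/196 are assumed from Baichuan2's generation_config -- verify them.
def chat_ids(tokenize, question, user_token_id=195, assistant_token_id=196):
    # single turn: <user_token> question <assistant_token>, generation follows
    return [user_token_id] + tokenize(question) + [assistant_token_id]

# With vLLM (not executed here), roughly:
#   outputs = llm.generate(prompt_token_ids=[chat_ids(tokenizer.encode, q)],
#                          sampling_params=params)
# which avoids re-tokenizing an OpenAI-format string that cannot emit the
# reserved role tokens as single ids.

ids = chat_ids(lambda s: [ord(c) for c in s], "q")
```

This keeps the tokenized input byte-for-byte identical to what the web_demo feeds the model, isolating any remaining difference to the sampling parameters.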

How should this be resolved?
