
💡 [REQUEST] - How to support Qwen/Qwen-VL-Chat #105

Open
wangschang opened this issue Sep 6, 2023 · 1 comment
Labels: question (Further information is requested)

Comments

@wangschang

Start Date

No response

Implementation PR

No response

Reference Issues

No response

Summary

How can Qwen/Qwen-VL-Chat be supported?

Basic Example

File "server.py", line 2, in <module> from api.models import EMBEDDED_MODEL, GENERATE_MDDEL, app, VLLM_ENGINE File "/root/api-for-open-llm/api/models.py", line 140, in <module> VLLM_ENGINE = create_vllm_engine() if (config.USE_VLLM and config.ACTIVATE_INFERENCE) else None File "/root/api-for-open-llm/api/models.py", line 98, in create_vllm_engine engine = AsyncLLMEngine.from_engine_args(engine_args) File "/usr/local/miniconda3/lib/python3.8/site-packages/vllm/engine/async_llm_engine.py", line 232, in from_engine_args engine = cls(engine_args.worker_use_ray, File "/usr/local/miniconda3/lib/python3.8/site-packages/vllm/engine/async_llm_engine.py", line 55, in __init__ self.engine = engine_class(*args, **kwargs) File "/usr/local/miniconda3/lib/python3.8/site-packages/vllm/engine/llm_engine.py", line 101, in __init__ self._init_workers(distributed_init_method) File "/usr/local/miniconda3/lib/python3.8/site-packages/vllm/engine/llm_engine.py", line 133, in _init_workers self._run_workers( File "/usr/local/miniconda3/lib/python3.8/site-packages/vllm/engine/llm_engine.py", line 470, in _run_workers output = executor(*args, **kwargs) File "/usr/local/miniconda3/lib/python3.8/site-packages/vllm/worker/worker.py", line 67, in init_model self.model = get_model(self.model_config) File "/usr/local/miniconda3/lib/python3.8/site-packages/vllm/model_executor/model_loader.py", line 57, in get_model model.load_weights(model_config.model, model_config.download_dir, File "/usr/local/miniconda3/lib/python3.8/site-packages/vllm/model_executor/models/qwen.py", line 308, in load_weights param = state_dict[name] KeyError: 'transformer.visual.positional_embedding'

Drawbacks

Qwen/Qwen-VL-Chat

Unresolved Questions

No response

wangschang added the question label on Sep 6, 2023
@xiaogui340

Same question. Could support for vision models or multimodal models be added?
