
CodeQwen returns extra white space for code completion #1947

Open
ycclnn opened this issue Apr 24, 2024 · 4 comments
Labels: bug (Something isn't working)

Comments

@ycclnn (Contributor) commented Apr 24, 2024

[screenshot of a code completion with extra leading white space]

Other models like DeepSeek work without this problem.

@wsxiaoys (Member) commented Apr 24, 2024

Thanks for reporting the issue; I have also observed it and am debugging. It seems that, for CodeQwen, the tokenizer treats `_` as a space regardless of context.
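
A quick way to check this hypothesis (a minimal sketch; it assumes the served checkpoint matches the tokenizer published on Hugging Face as `Qwen/CodeQwen1.5-7B` and that `transformers` is installed) is to round-trip a snippet containing an identifier that starts with `_` and look at how each token decodes:

```python
# Minimal diagnostic sketch (assumption: the served model uses the
# "Qwen/CodeQwen1.5-7B" tokenizer; adjust the id to your checkpoint).
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/CodeQwen1.5-7B")

# A prefix ending in an identifier that starts with "_".
text = "def _helper"
ids = tok(text, add_special_tokens=False)["input_ids"]

# Decode token by token; if "_" is mapped to a space-style piece, the
# decoded pieces (and the full round-trip below) will show a spurious space.
for i in ids:
    print(i, repr(tok.decode([i])))
print(repr(tok.decode(ids)))
```

If the round-trip output differs from the original text, the spurious space is introduced during detokenization rather than by the completion engine itself.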

@wsxiaoys (Member) commented May 2, 2024

Can confirm this is present in upstream llama.cpp as well.

Cross-posted at: ggerganov/llama.cpp#7050

@ycclnn (Contributor, Author) commented May 6, 2024

> Can confirm this is present in upstream llama.cpp as well.
>
> Cross-posted at: ggerganov/llama.cpp#7050

Yeah, the extra white space shows up when serving with vLLM as well, so it's a model/tokenizer issue rather than a serving-framework issue. The only workaround I can think of is to shift the prompt boundary a few characters left of the cursor and then check the completion result against the overlapping substring.
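
A rough sketch of that idea (all names here are illustrative, not Tabby's actual API or implementation): request a completion for a prompt that ends a few characters before the cursor, then strip the re-generated overlap from the result.

```python
# Hedged sketch of the proposed client-side workaround: move the prompt
# boundary a few characters left of the cursor, ask the model to complete
# from there, and drop the echoed overlap from the result.

def complete_with_overlap(prefix: str, request_completion, shift: int = 4) -> str:
    """Request a completion for prefix[:-shift] and drop the echoed overlap."""
    shift = min(shift, len(prefix))
    anchor = prefix[len(prefix) - shift:]  # characters the model should echo back
    completion = request_completion(prefix[:len(prefix) - shift])

    # Take the longest prefix of the completion that matches the anchor once
    # whitespace is ignored, and return only what comes after it.
    for end in range(len(completion), 0, -1):
        if completion[:end].replace(" ", "") == anchor.replace(" ", ""):
            return completion[end:]
    return completion  # no overlap found; fall back to the raw completion


# Toy backend that prepends a spurious space, mimicking the reported behaviour.
fake_backend = lambda p: " _helper():"
print(complete_with_overlap("def _hel", fake_backend))  # -> "per():"
```

The whitespace-insensitive overlap match is deliberately loose; a real implementation would need a more careful comparison and a sensible fallback when the model does not echo the shifted characters.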

@ycclnn (Contributor, Author) commented May 6, 2024

Observed that CodeQwen does not always return extra white space, and sometimes the leading white space is meaningful (for example, a completion after `x =` legitimately starts with a space). Therefore, simply trimming leading white space may not be a viable approach.
