Continue adding spaces to code #1241
Comments
@JamesAllerlei this was recently solved in the latest pre-release (0.9.x). It will be published to a main release (0.8.26) today
Awesome! Thanks. Continue rocks :)
@Zhangtiande Can you share what your config.json looks like? I'm thinking this might be a different problem, more related to tab autocomplete
I'm seeing the exact same extra-space issue some of the time in the few minutes I've spent testing the extension in PyCharm: the extra space before the suggestion is inserted if I tab-accept it, which breaks Python code. It also has a habit of duplicating suggestions.
[screenshot: extra space + duplicated suggestion]
[screenshot: just a duplicated suggestion]
If I tab-accept a duplicated suggestion, only one copy is inserted rather than both, so it's only a visual bug, but an unpleasant one. I'm using Ollama to provide the suggestions; the extra space occurs in both PyCharm and VS Code, while the duplicate suggestion occurs only in PyCharm. I should probably open a separate issue for the duplicate suggestion, but I figured I would mention it here since people were actively discussing the extra-space issue.
this is my config:

```json
{
  "models": [
    {
      "model": "Qwen",
      "title": "Qwen-32B-Chat",
      "apiBase": "http://ip:9997/v1",
      "contextLength": 32768,
      "completionOptions": {
        "temperature": 0.8
      },
      "provider": "openai",
      "apiKey": "1111"
    },
    {
      "model": "codeqwen",
      "title": "codeqwen:7B",
      "apiBase": "http://ip:11434",
      "contextLength": 65536,
      "provider": "ollama"
    },
    {
      "model": "llama3",
      "title": "llama3:8B",
      "apiBase": "http://ip:11434",
      "contextLength": 8096,
      "provider": "ollama"
    }
  ],
  "contextProviders": [
    { "name": "code" },
    { "name": "tree" },
    { "name": "search" },
    { "name": "outline" },
    { "name": "diff", "params": {} },
    { "name": "open", "params": {} },
    { "name": "terminal", "params": {} },
    { "name": "problems", "params": {} },
    { "name": "docs", "params": {} }
  ],
  "slashCommands": [
    {
      "name": "edit",
      "description": "Edit highlighted code"
    }
  ],
  "allowAnonymousTelemetry": false,
  "tabAutocompleteModel": {
    "title": "Tab Autocomplete Model",
    "provider": "ollama",
    "model": "codeqwen",
    "apiBase": "http://ip:11434"
  },
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "codeqwen",
    "apiBase": "http://ip:11434"
  },
  "disableIndexing": true
}
```
I'm also getting the duplicate-suggestions issue. Here is my config file, if it helps: {
Before submitting your bug report
Relevant environment info
Description
In the Continue chat window, all code and text has random extra spaces that I have to remove manually for the code to work. E.g., it will suggest code like "start row = 34 # Enter the star ting row", where an extra space has been inserted inside the variable name and inside a word in the comment. The spaces appear in variables, comments, file names, operators, anywhere and everywhere, roughly one per 10 lines, and they break the code. Continue is otherwise working great, so it is worth using despite this limitation and the slowdown.
As an aside, is it possible to add a system prompt at the start of a chat, e.g. so the model knows basic context and preferences such as the IDE, programming language, and formatting preferences for the code? It looks like I could set up a slash command, but it would be great if this were automated.
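(For reference on the system-prompt question: recent versions of Continue's config.json support, if I recall the docs correctly, a per-model `systemMessage` field that is sent at the start of every chat. A minimal sketch, reusing the Ollama setup from the config above; the message text and `apiBase` are just placeholders:)

```json
{
  "models": [
    {
      "model": "llama3",
      "title": "llama3:8B",
      "provider": "ollama",
      "apiBase": "http://ip:11434",
      "systemMessage": "You are assisting in PyCharm with Python. Follow PEP 8 and keep answers concise."
    }
  ]
}
```

Please double-check the field name against the Continue docs for your installed version.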
To reproduce
The problem does not occur with the built-in selection of free-trial models, but it is ever-present with any of the models I have added to the config file and use via API. I am mostly using Gemini 1.5 and Gemini 1.0, but have tested with Mistral and Llama too, with the same error. Here is another example (extra space before "_"):
```python
def log(text):
    with open('output _log.txt', "a") as f:
        f.write(text)
```
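For comparison, this is the intended code, i.e. what the model should have produced without the stray space in the filename:

```python
def log(text):
    # Intended filename: no space before the underscore
    with open('output_log.txt', "a") as f:
        f.write(text)
```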
Log output
No recent errors in log console.