I also added the following, which works well to avoid gibberish: data["truncation_length"] = MAX_GPT_MODEL_TOKENS * 2
The output may sometimes be less accurate than GPT-4 Turbo, but it finally stops the infinite "_A_A_A_A" loops and random-token output.
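The one-liner above can be sketched in context as follows. Note this is an illustrative reconstruction: `MAX_GPT_MODEL_TOKENS` and `build_payload` are assumed names for how the extension might assemble its request payload, not the project's actual source.

```python
# Hypothetical sketch of patching the request payload sent to the backend.
# MAX_GPT_MODEL_TOKENS and build_payload are illustrative names, not taken
# from the project's source code.
MAX_GPT_MODEL_TOKENS = 4096  # assumed per-request token budget


def build_payload(prompt: str) -> dict:
    data = {
        "prompt": prompt,
        "max_new_tokens": 512,
    }
    # Cap how much text the backend keeps before truncating the prompt.
    # Doubling the model token budget is the workaround reported above
    # to stop the runaway "_A_A_A_A" output.
    data["truncation_length"] = MAX_GPT_MODEL_TOKENS * 2
    return data


payload = build_payload("Hello, world")
```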
Version
Visual Studio Code extension
Operating System
Windows 10
What happened?
I found a potential fix for the context length bug where the LLM keeps outputting gibberish. It still produces gibberish occasionally, but it no longer gets stuck in a loop. Also increase the alpha_value to 3 and experiment with different n_batch values, e.g. 1024 instead of 512 (this increases the usable input context length).
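The loader settings mentioned above can be sketched as a small config dict. The parameter names `alpha_value` and `n_batch` follow the llama.cpp-style loader options in text-generation-webui, but this snippet is an assumption about how they would be passed, not code from this project:

```python
# Illustrative loader configuration for a llama.cpp-style backend.
# The parameter names mirror text-generation-webui's model-loading
# options; the dict itself is a sketch, not project code.
loader_settings = {
    "alpha_value": 3,  # RoPE NTK alpha; raising it extends usable context
    "n_batch": 1024,   # prompt-processing batch size, up from the 512 default
}

print(loader_settings)
```

Larger `n_batch` values let the backend process longer prompts per batch, which is why the report suggests trying 1024 instead of the default 512.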