Update inference.py #3326

Open · wants to merge 1 commit into main
Conversation

angelala00

Add eos_token_id from the generation config file so that Llama3 can perform inference correctly.

Why are these changes needed?

Adding the eos_token_id from the model's generation config file to stop_token_ids is necessary for inference to terminate correctly with models like Llama 3. The eos_token_id used during inference is typically read from the tokenizer's config file and usually matches the one in the model's generation config. For some models, however, including Llama 3, the two differ, and the mismatch prevents generation from stopping. Explicitly including the eos_token_id from the generation config lets the framework terminate inference correctly and support a wider range of models.
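For reference, here is a minimal sketch of the idea (not the exact diff in this PR), using Hugging Face transformers' GenerationConfig; the model path is a placeholder example:

```python
from transformers import AutoTokenizer, GenerationConfig

# Hypothetical model path, used only for illustration.
model_path = "meta-llama/Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_path)

# Start from the tokenizer's eos_token_id, as before this change.
stop_token_ids = []
if tokenizer.eos_token_id is not None:
    stop_token_ids.append(tokenizer.eos_token_id)

# Merge in eos_token_id(s) from generation_config.json. For Llama 3 this
# adds <|eot_id|> (128009), which differs from the tokenizer's eos_token_id,
# so generation would otherwise never stop.
try:
    generation_config = GenerationConfig.from_pretrained(model_path)
except OSError:
    generation_config = None  # some models ship no generation_config.json

if generation_config is not None and generation_config.eos_token_id is not None:
    eos = generation_config.eos_token_id
    # eos_token_id may be a single int or a list of ints.
    for token_id in (eos if isinstance(eos, list) else [eos]):
        if token_id not in stop_token_ids:
            stop_token_ids.append(token_id)
```

The fallback matters because not every model repository includes a generation_config.json, so the generation-config IDs are treated as additive rather than a replacement for the tokenizer's eos_token_id.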

Related issue number (if applicable)

N/A

Checks

  • I've run format.sh to lint the changes in this PR.
  • I've included any doc changes needed.
  • I've made sure the relevant tests are passing (if applicable).

angelala00 commented May 21, 2024

Hi @infwinston, could you please review this PR when you have a moment?


Puupuls commented May 27, 2024

This saved me :D I could not get Llama 3 to run, and patching this change into site-packages fixed it!
