Hi everyone, is it possible (or will it be in the future) to set the `-model` parameter to an arbitrary LLM outside the models list? For example, this could be useful for extracting entities from documents written in a different language, or for using a domain-specific fine-tuned LLM. Thanks for your time.
That's awesome! After some tests on an Italian crime-news dataset, I found that llama-2-7b-chat gives the best results in a foreign language, and adding some simple prompt-engineering adjustments in the class's prompt section (for example, "act as an Italian speaker") really improves the results. I hope this helps the research in the meantime, until the LLM upgrade is done.
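For anyone curious, the prompt adjustment could look roughly like the sketch below. Note that `EntityExtractor`, `prompt_template`, and `build_prompt` are hypothetical stand-ins I made up for illustration, not the library's actual class or attribute names:

```python
# Hypothetical sketch: prepending a language hint to an entity-extraction
# prompt template. All names here are illustrative, not the real API.

LANGUAGE_HINT = "Act as an Italian speaker. "


class EntityExtractor:
    """Stand-in for whichever class holds the prompt section."""

    def __init__(self, prompt_template: str):
        self.prompt_template = prompt_template

    def build_prompt(self, document: str) -> str:
        # Prepend the language hint so the LLM works in the target language.
        return LANGUAGE_HINT + self.prompt_template.format(text=document)


extractor = EntityExtractor("Extract all named entities from: {text}")
prompt = extractor.build_prompt("Il sindaco di Roma ha parlato ieri.")
print(prompt)
```

The same idea applies to any target language: just swap the hint string before the rest of the prompt.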