[BUG] Incorrect input layout for langchain ChatOllama LLM #3106
Comments
To address the issue of separating system and human messages in the LLM input, you can model them as distinct types.

Here's a simplified example to illustrate these modifications:

```python
from typing import List

from pydantic import BaseModel


class Message(BaseModel):
    role: str  # "system", "assistant", or "user"
    content: str


class SystemMessage(Message):
    pass


class HumanMessage(Message):
    pass


class MessagesPayload(BaseModel):
    system_messages: List[SystemMessage] = []
    human_messages: List[HumanMessage] = []


class MessagesResponse(BaseModel):
    system_message: SystemMessage
    human_message: HumanMessage
```

This approach ensures that system and human messages are processed and displayed separately, aligning with the structure expected by the LLM input.
Hey @feishen, thanks for the ticket! LangChain sometimes combines the system and user templates into a single prompt - we'll look to make this more user-friendly in the future. Thanks for the feedback!
Thank you @mikeldking, looking forward to getting it fixed.
Describe the bug
I use langchain ChatOllama with a system message and a human message. However, the LLM input mixes them together instead of displaying a separate section for each message.
langchain version: 0.1.17
arize-phoenix version: 3.24.0
llama-index-callbacks-arize-phoenix version: 0.1.5
code sample:
```python
from langchain_community.chat_models import ChatOllama
from langchain_core.messages import HumanMessage, SystemMessage
from phoenix.trace.langchain import LangChainInstrumentor

LangChainInstrumentor().instrument()

llm = ChatOllama(model="mistral:v0.2")
# system_prompt, routing_prompt, and prompt are defined elsewhere in my code.
messages = [
    SystemMessage(content=system_prompt),
    HumanMessage(content=routing_prompt.format(input=prompt)),
]
result = llm.invoke(messages).content
```
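To illustrate the symptom, here is a minimal sketch (not LangChain's actual code, and with made-up message contents) of how separate chat messages can end up flattened into one prompt string, which is the layout the trace shows:

```python
# Hypothetical illustration of the reported behavior: when role boundaries
# are dropped, the system and human messages appear as one merged input.
messages = [
    ("system", "You are a helpful routing assistant."),
    ("human", "Route this request: reset my password"),
]


def flatten(msgs):
    # Concatenating only the contents loses the role labels, so a trace
    # viewer shows a single blob instead of separate sections per message.
    return "\n".join(content for _, content in msgs)


prompt = flatten(messages)
print(prompt)
```

Keeping the role alongside each content string (as the Pydantic models suggested above do) is what allows a trace UI to render the sections separately.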