[BUG] Incorrect input layout for langchain ChatOllama LLM #3106

Open
feishen opened this issue May 7, 2024 · 3 comments
Labels
backlog · bug (Something isn't working) · good first issue (Good for newcomers) · help wanted (Extra attention is needed)

Comments

@feishen

feishen commented May 7, 2024

Describe the bug
I use langchain ChatOllama with a system message and a human message. However, the LLM input in the trace shows them mixed together instead of as separate sections for each message.
[screenshot attached]

langchain version: 0.1.17
arize-phoenix version: 3.24.0
llama-index-callbacks-arize-phoenix version: 0.1.5

code sample:

# Imports assumed for langchain 0.1.x and arize-phoenix 3.x
from phoenix.trace.langchain import LangChainInstrumentor
from langchain_community.chat_models import ChatOllama
from langchain_core.messages import HumanMessage, SystemMessage

# Instrument LangChain so Phoenix records the LLM calls
LangChainInstrumentor().instrument()

llm = ChatOllama(model="mistral:v0.2")
messages = [
    SystemMessage(content=system_prompt),
    HumanMessage(content=routing_prompt.format(input=prompt)),
]

result = llm.invoke(messages).content

@feishen feishen added bug Something isn't working triage issues that need triage labels May 7, 2024

dosubot bot commented May 7, 2024

To address the issue of separating system and human messages in the ChatOllama input, you can modify your approach to ensure that messages are categorized correctly. Here's a tailored solution:

  1. Define separate classes for SystemMessage and HumanMessage, both inheriting from a base Message class. This allows for a clear distinction between messages generated by the system and those from users.

  2. Adjust the MessagesPayload class to include two lists: one for system_messages and another for human_messages. This ensures that your payload categorizes messages appropriately.

  3. Modify the MessagesResponse class to have separate fields for system and human messages, allowing responses to distinguish clearly between the two types.

Here's a simplified example to illustrate these modifications:

from typing import List
from pydantic import BaseModel

class Message(BaseModel):
    role: str  # "system", "assistant", or "user"
    content: str

class SystemMessage(Message):
    pass

class HumanMessage(Message):
    pass

class MessagesPayload(BaseModel):
    system_messages: List[SystemMessage] = []
    human_messages: List[HumanMessage] = []

class MessagesResponse(BaseModel):
    system_message: SystemMessage
    human_message: HumanMessage

This approach ensures that system and human messages are processed and displayed separately, aligning with the structure required by the ChatOllama class. Make sure that the ChatOllama class and its invoke method can handle this new structure. If there are specific requirements or constraints, further adjustments might be necessary.
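
For instance, a payload built from the models above might look like the following (a hypothetical usage sketch; MessagesPayload and the message classes come from the snippet above, not from LangChain itself, and the content strings are placeholders):

# Hypothetical usage of the models sketched above; content strings are placeholders.
payload = MessagesPayload(
    system_messages=[SystemMessage(role="system", content="You are a routing assistant.")],
    human_messages=[HumanMessage(role="user", content="Route this request: check order status")],
)
print(payload)  # the two message types stay in separate, clearly labeled fields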

About Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.


@mikeldking
Contributor

Hey @feishen, thanks for the ticket! For some reason, LangChain sometimes combines the system and user templates into a single prompt - we'll look to make this more user-friendly in the future. Thanks for the feedback!
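
For illustration, the behavior described above amounts to the separate chat messages being flattened into a single prompt string before the span is recorded. A minimal sketch of that kind of flattening (the join format and role labels are assumptions for illustration, not LangChain's actual template):

# Sketch only: role-prefixed contents concatenated into one prompt string.
# The separator and labels are assumptions; LangChain's real formatting differs.
from langchain_core.messages import HumanMessage, SystemMessage

def flatten_messages(messages):
    return "\n".join(f"{m.type}: {m.content}" for m in messages)

messages = [
    SystemMessage(content="You are a routing assistant."),
    HumanMessage(content="Route this request: check order status"),
]
print(flatten_messages(messages))
# system: You are a routing assistant.
# human: Route this request: check order status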

@mikeldking mikeldking added backlog and removed triage issues that need triage labels May 7, 2024
@feishen
Author

feishen commented May 8, 2024

Thank you, @mikeldking. Looking forward to getting this fixed.

@mikeldking mikeldking added help wanted Extra attention is needed good first issue Good for newcomers labels May 13, 2024