
[Issue]: Sample based on SemanticKernel with content copied from Example04_Dynamic_GroupChat_Coding_Task doesn't work #2688

Open
feynmanloo opened this issue May 15, 2024 · 5 comments


@feynmanloo

Describe the issue

I wrote a sample based on SemanticKernel with the content copied from Example04_Dynamic_GroupChat_Coding_Task, but it doesn't work. I don't know what went wrong.
This is my code : https://github.com/feynmanloo/AutoGen.BasicSamples/blob/master/Example04_Dynamic_GroupChat_Coding_Task.cs

Steps to reproduce

No response

Screenshots and logs

"C:\Users\Feynman Loo\AppData\Local\Programs\Rider\plugins\dpa\DotFiles\JetBrains.DPA.Runner.exe" --handle=23160 --backend-pid=25060 --etw-collect-flags=67108622 --detach-event-name=dpa.detach.23160 "C:/Users/Feynman Loo/Documents/Workspaces/AutoGen.BasicSamples/bin/Debug/net8.0/AutoGen.BasicSamples.exe"
Hello, World!
from: admin

{
    "to": "coder",
    "task": "Write a C# program to calculate the 39th Fibonacci number and save the result in result.txt",
    "context": ""
}

Unhandled exception. System.ArgumentException: Invalid message type
at AutoGen.SemanticKernel.SemanticKernelAgent.<>c.b__11_0(IMessage m)
at System.Linq.Enumerable.SelectEnumerableIterator`2.MoveNext()
at System.Linq.Enumerable.Any[TSource](IEnumerable`1 source, Func`2 predicate)
at AutoGen.SemanticKernel.SemanticKernelAgent.BuildChatHistory(IEnumerable`1 messages)
at AutoGen.SemanticKernel.SemanticKernelAgent.GenerateReplyAsync(IEnumerable`1 messages, GenerateReplyOptions options, CancellationToken cancellationToken)
at AutoGen.Core.GroupChat.SelectNextSpeakerAsync(IAgent currentSpeaker, IEnumerable`1 conversationHistory)
at AutoGen.Core.GroupChat.CallAsync(IEnumerable`1 conversationWithName, Int32 maxRound, CancellationToken ct)
at AutoGen.Core.AgentExtension.SendMessageToGroupAsync(IAgent _, IGroupChat groupChat, IEnumerable`1 chatHistory, Int32 maxRound, CancellationToken ct)
at AutoGen.Core.AgentExtension.SendAsync(IAgent agent, IAgent receiver, IEnumerable`1 chatHistory, Int32 maxRound, CancellationToken ct)
at AutoGen.Core.AgentExtension.InitiateChatAsync(IAgent agent, IAgent receiver, String message, Int32 maxRound, CancellationToken ct)
at AutoGen.BasicSamples.Example04_Dynamic_GroupChat_Coding_Task.RunAsync(IKernelBuilder kernelBuilder, OpenAIPromptExecutionSettings settings) in C:\Users\Feynman Loo\Documents\Workspaces\AutoGen.BasicSamples\Example04_Dynamic_GroupChat_Coding_Task.cs:line 235
at Program.<Main>$(String[] args) in C:\Users\Feynman Loo\Documents\Workspaces\AutoGen.BasicSamples\Program.cs:line 20
at Program.<Main>(String[] args)

Process finished with exit code -532,462,766.

Additional Information

No response

@feynmanloo
Author

@LittleLittleCloud plz

@LittleLittleCloud
Collaborator

LittleLittleCloud commented May 15, 2024

It seems you forgot to register a message connector for the helper agent (and maybe the admin agent as well)?

https://github.com/feynmanloo/AutoGen.BasicSamples/blob/07d03885cb3a7b83cdd217f45699da49aa1af355/Example04_Dynamic_GroupChat_Coding_Task.cs#L29
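
For reference, here is a minimal sketch of what registering the connector can look like. The extension names and constructor shape are taken from the AutoGen.SemanticKernel / AutoGen.Core packages as used in the official samples; the model id and API key are placeholders and may differ between versions:

```csharp
using AutoGen.Core;
using AutoGen.SemanticKernel;
using AutoGen.SemanticKernel.Extension;
using Microsoft.SemanticKernel;

// Build the kernel the same way the sample does (model id and API key are placeholders).
var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "gpt-3.5-turbo", apiKey: "your-api-key")
    .Build();

// Every SemanticKernelAgent that joins the group chat needs the message connector.
// Without it, AutoGen message types (e.g. TextMessage from other agents) reach
// BuildChatHistory unconverted and the "Invalid message type" exception is thrown.
var helper = new SemanticKernelAgent(
        kernel: kernel,
        name: "helper",
        systemMessage: "You are a helpful AI assistant.")
    .RegisterMessageConnector()   // translate IMessage <-> ChatMessageContent
    .RegisterPrintMessage();      // optional: echo each message to the console
```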

@feynmanloo
Author

> It seems you forgot to register a message connector for the helper agent (and maybe the admin agent as well)?
>
> https://github.com/feynmanloo/AutoGen.BasicSamples/blob/07d03885cb3a7b83cdd217f45699da49aa1af355/Example04_Dynamic_GroupChat_Coding_Task.cs#L29

@LittleLittleCloud Is there any documentation about registering a message connector and the other APIs?
After I registered a message connector for the helper agent and the admin agent, I got another exception, shown below:

System.InvalidOperationException: Sequence contains no matching element
at System.Linq.ThrowHelper.ThrowNoMatchException()
at System.Linq.Enumerable.First[TSource](IEnumerable`1 source, Func`2 predicate)
at AutoGen.Core.GroupChat.SelectNextSpeakerAsync(IAgent currentSpeaker, IEnumerable`1 conversationHistory)
at AutoGen.Core.GroupChat.CallAsync(IEnumerable`1 conversationWithName, Int32 maxRound, CancellationToken ct)
at AutoGen.Core.AgentExtension.SendMessageToGroupAsync(IAgent _, IGroupChat groupChat, IEnumerable`1 chatHistory, Int32 maxRound, CancellationToken ct)
at AutoGen.Core.AgentExtension.SendAsync(IAgent agent, IAgent receiver, IEnumerable`1 chatHistory, Int32 maxRound, CancellationToken ct)
at AutoGen.Core.AgentExtension.SendAsync(IAgent agent, IAgent receiver, String message, IEnumerable`1 chatHistory, Int32 maxRound, CancellationToken ct)
at AutoGen.BasicSamples.Example04_Dynamic_GroupChat_Coding_Task.RunAsync(IKernelBuilder kernelBuilder, OpenAIPromptExecutionSettings settings) in C:\Users\Feynman Loo\Documents\Workspaces\AutoGen.BasicSamples\Example04_Dynamic_GroupChat_Coding_Task.cs:line 238

@LittleLittleCloud
Collaborator

LittleLittleCloud commented May 15, 2024

@feynmanloo I need to run your code for further investigation. In the meantime, the error indicates that the group admin fails to generate a next speaker that is one of the current group members. The failure can have various causes: hallucination (fabricating an agent that doesn't exist), failing to follow the prompt and generate the next speaker in the expected format (the legitimate format is From xxx), or a misconfiguration in the LLM.

@LittleLittleCloud
Collaborator

@feynmanloo I took a look at your code. The issue is caused by missing stop words in the admin's prompt settings. You might want to pass [":"] as the stop sequence when creating the admin. When I added the right stop sequence to the admin, the error went away.
[screenshot: admin agent configured with the stop sequence]
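
A minimal sketch of what passing the stop sequence could look like, assuming the admin is a SemanticKernelAgent as in the linked sample. OpenAIPromptExecutionSettings.StopSequences comes from the Semantic Kernel OpenAI connector; the settings constructor parameter is an assumption about the agent's signature:

```csharp
using AutoGen.SemanticKernel;
using AutoGen.SemanticKernel.Extension;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "gpt-3.5-turbo", apiKey: "your-api-key")
    .Build();

// The ":" stop sequence keeps the admin from generating "speaker: message"
// style continuations, so its reply stops right after the "From xxx" line
// that GroupChat uses to pick the next speaker.
var adminSettings = new OpenAIPromptExecutionSettings
{
    Temperature = 0,
    StopSequences = new[] { ":" },
};

var admin = new SemanticKernelAgent(
        kernel: kernel,
        name: "admin",
        systemMessage: "You are the group admin ...", // the admin prompt from the sample, elided here
        settings: adminSettings)
    .RegisterMessageConnector();
```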

However, the task still fails because the group admin does not generate the next speaker in the right format, even after I add the stop sequence. I notice that you are using llama3:7b as the backend model for all agents, including the group admin, yet the speaker-selection prompt used by the group admin was written for gpt-3.5/4. That might be why the admin fails to follow the prompt here: different LLMs behave differently even with the same prompt, and the next-speaker-selection prompt for the GPT series might not work well with the llama series.

So here are some suggestions you may try:

  • Use gpt-3.5/4 for the admin, and llama3 for the others.
  • If you still want to use llama3 for the admin, add fallback logic to the admin using middleware so it always returns a legitimate next speaker. For example, in the middleware, if the reply from the inner agent is not From xxx, return a hard-coded value (see the sketch after this list).
  • Manually orchestrate the agents, for example by having a separate agent generate a step and assign that step to a specific agent. This way you no longer need group chat and gain full flexibility in constructing the agentic workflow.
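
A rough sketch of the middleware fallback from the second suggestion. The RegisterMiddleware delegate shape and the TextMessage/GetContent helpers are assumed from AutoGen.Core; admin is the admin agent built earlier, and "coder" is an arbitrary hard-coded fallback speaker:

```csharp
using AutoGen.Core;

var adminWithFallback = admin.RegisterMiddleware(async (messages, options, innerAgent, ct) =>
{
    // Let the inner (llama3-backed) admin try to pick the next speaker first.
    var reply = await innerAgent.GenerateReplyAsync(messages, options, ct);
    var content = reply.GetContent();

    // If the model ignored the "From xxx" convention, substitute a legitimate
    // group member so GroupChat.SelectNextSpeakerAsync always finds a match.
    if (content is null || !content.TrimStart().StartsWith("From "))
    {
        return new TextMessage(Role.Assistant, "From coder", from: innerAgent.Name);
    }

    return reply;
});
```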
