Have been working with this and very impressed so far - it’s a step ahead of LangChain agents and seems to be receiving more attention/development than LangChain was interested in committing to agents.
FWIW, the “group research” and “chess” examples from the notebooks folder in their repo have been the best for explaining the utility of this tech to others - the meme generator does a good job of showing functions in stripped-down form but misses a lot of the important bits
However, from his examples (and his own admission) it seems that AutoGen isn't benefitting from full GPT-4-level performance even though he's pointed it directly at OpenAI's GPT-4 (and other LLMs). The back and forth between the agents does not produce great results, even though similar prompts fed directly into ChatGPT seem to give better ones.
This just reminds me: I have been wondering, if you get multiple instances of GPT-4 talking to each other, each seeded with a different personality prompt, do they have interesting conversations? I suspect it would devolve in to nonsense quickly, but I’ve never seen any chat log of two GPT instances talking. Does anyone have a reference for this? Thanks.
I built a DSL to facilitate this at https://prlang.com. I've had some success setting up agents to "act" out scenes, where each plays a different part, but it was limited in that conversations would decohere into nonsense after a bit.
Right but someone with API access could do this much more easily. I don’t really want to sit there copying and pasting back and forth. I’d rather write two or three starting prompts and have a few agents do all the work of talking.
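A minimal sketch of that no-copy-paste setup: two persona-seeded agents take turns automatically. `fake_llm` here is a stand-in for a real chat API call (e.g. OpenAI's chat completions endpoint); all the names are made up for illustration.

```python
# Two persona-seeded agents exchanging turns. Swap fake_llm for a real
# chat-completion call to try this against an actual model.

def fake_llm(system_prompt, history):
    # Placeholder: a real implementation would send `system_prompt` plus
    # `history` to a chat API and return the assistant's reply.
    persona = system_prompt.split(",")[0]
    return f"({persona} responds to: {history[-1]})"

def converse(persona_a, persona_b, opener, turns=4):
    history = [opener]
    personas = [persona_a, persona_b]
    for i in range(turns):
        reply = fake_llm(personas[i % 2], history)
        history.append(reply)
    return history

log = converse("a pirate, salty and boastful",
               "Queen Elizabeth I, regal and curious",
               "Tell me of your life on the high seas.")
for line in log:
    print(line)
```

With a real model behind `fake_llm`, each turn only ever sees one persona's system prompt plus the shared chat log, which is exactly the "few starting prompts, then let the agents talk" workflow.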
I can't find the script now, but my kids and I did this: we had Queen Elizabeth I talking to a pirate about his life on the high seas. It was quite fun. I wouldn't want to read a historical novel written that way, though...
A question for people researching LLMs and their capabilities:
Is there any reason to believe that the interaction of multiple agents (using the same model) will yield some emergent property that is beyond the capabilities of the agent model?
I'm not working with LLMs, but my intuition is that whatever these multi agent setups come up with could also be achieved by a single agent just talking to itself, as they all are "just guessing" what the most probable next token is.
Since a single inference is limited by context length, a multi-agent setup is able to process more context at each step of the reasoning chain, which might improve the overall quality. However, given how easy it is getting to fine-tune models, it's likely that multi-agent setups will make a lot of sense for splitting the workload and assigning each part to a specialized agent.
> a single inference is limited by context length,
Yes.
> multiple agents model is able to process more context at each steps of the reasoning chain
What?
How can a multi-agent model have more context at a single step? The single step runs on a single agent. It would be literally the same as a single agent?
The multi-agent approach is simply packaging up different “personas” for single steps; and yes, it is entirely reasonable to assume that given N configurations for an agent (different prompts, different temperatures, even different models) you would see emergent behaviour that a single agent wouldn’t.
For example, you might have a “creative agent” to scaffold something and a “conservative” agent to fix syntax errors.
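A hedged sketch of that “creative scaffold, conservative fixer” pattern: the same underlying model, two configurations differing only in prompt and temperature. `call_model` is a stub standing in for a real LLM call; here it simulates a creative draft with a syntax error and a fixer that repairs it.

```python
import ast

def call_model(system_prompt, user_msg, temperature):
    # Stub for a real chat-completion call. The "creative" config
    # returns a bold but broken draft; the "conservative" one fixes it.
    if "creative" in system_prompt:
        return "def greet(name)\n    return 'hi ' + name"  # missing colon
    return "def greet(name):\n    return 'hi ' + name"

def creative_then_fix(task):
    # High temperature for the scaffold, zero for the repair pass.
    draft = call_model("creative: sketch bold solutions", task, temperature=1.0)
    try:
        ast.parse(draft)          # cheap syntax check between the two agents
        return draft
    except SyntaxError:
        return call_model("conservative: fix syntax only, change nothing else",
                          draft, temperature=0.0)

code = creative_then_fix("write a greeting function")
print(code)
```

The interesting part is the deterministic check (`ast.parse`) sitting between the two agent calls - the handoff doesn't have to be pure natural language.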
…but what are you talking about with different context sizes? I think you’re mixing domain terms; context is the input to an LLM. I don’t know what you’re referring to, but multi agent setups make absolutely no difference to the context size.
Their comment uses two (valid) context lengths: "organizational total" and "single agent." The latter is a subset of the former.
By analogy: no agent can summarize War and Peace, but several agents can, Peace-wise (sorry). Like AI map reduce. The question is thus "why not use one agent for this recursive merger?" Answers maybe being:
1. Different scholars (Russian lit. agents, ...war strategists?, etc) pay attention to different things with valuable insights
2. Multiple readers parallelize well, and some are faster than others
3. Managers can direct talent to (re)read chapters most relevant to their specialties, and coordinate meta-learning and communication
You might not get much mileage out of this approach with book summaries, but other domains are a different story (sorry).
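The “AI map-reduce” analogy above can be sketched in a few lines: split the text into context-sized chunks, summarize each (map), then summarize the summaries (reduce), recursing until the result fits a single context. `summarize` is a stub for a per-agent LLM call; with real agents, the map step is where the parallel “readers” come in.

```python
def summarize(text, max_len=40):
    # Stub: a real agent would call an LLM here; we just truncate.
    return text[:max_len]

def chunk(text, size):
    return [text[i:i + size] for i in range(0, len(text), size)]

def tree_summarize(text, context_size=200):
    if len(text) <= context_size:          # fits a single agent's context
        return summarize(text)
    partials = [summarize(c) for c in chunk(text, context_size)]  # map
    return tree_summarize(" ".join(partials), context_size)       # reduce

summary = tree_summarize("war and peace " * 500)
print(len(summary))
```

Each level of the tree is still a single-agent, single-context call; the multi-agent part is just that many such calls cover more total text than any one call could.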
Yes, multiple agents with different personas will give different takes and may lead to emergent behaviour, eg. discussing the book.
Yes, they could run in parallel.
No, any single multi agent step will not have any more context than any other single step.
If you believe that the Nth prompt in a chat between multiple agents has “more context” than the Nth prompt in a chat between a single agent (and itself), you don’t understand how this works.
…or you are choosing to invent your own definition of “context”.
I think this is right in line with the utility of multi-agent models, whether distributing tasks to specialized agents trained on domain knowledge or collaborating with context-aware agents. I think context is where we are going to find limitations early on, especially when models are expected to work on live data. Rather than constantly retraining a model, you leverage a model that is already primed through in-context learning based on previous interactions and relevant data.
When you give it a specific role it essentially hones in on the relevant part of the training data. Researcher in X field? Papers from that field get priority in formulating responses and the accuracy of token prediction for contextually relevant tasks goes up.
OTOH, if you try to go 'meta' - ie. you give it a scenario where it imagines a group of scholars chatting with each other, then it hones in on situations where there is a dialogue amongst a group (ie. a play/script).
In a way it is the same thing, agents are mostly an abstraction that make it easier to know what’s going on.
I think of agents more or less as python classes with a mixture of natural language and code functions. You design them to do something with information they produce, and to interface with other agents or “tools” in some way.
But all the agents can be the same language model under the hood, they are frames used to build different kinds of contexts.
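That “python classes over one shared model” framing, as a sketch (names are illustrative, not a real framework API): each Agent is just a persona prompt plus optional tools, all wrapping the same `model_fn`.

```python
class Agent:
    def __init__(self, name, persona, model_fn, tools=None):
        self.name = name
        self.persona = persona
        self.model_fn = model_fn        # the one shared language model
        self.tools = tools or {}        # tool name -> python function

    def respond(self, message):
        # The persona is the "frame": it shapes the context the shared
        # model sees, nothing more.
        prompt = f"[{self.persona}] {message}"
        reply = self.model_fn(prompt)
        # Dispatch to a tool if the model "asked" for one by convention.
        for tool_name, fn in self.tools.items():
            if reply.startswith(f"CALL {tool_name}"):
                return fn(reply.removeprefix(f"CALL {tool_name} "))
        return reply

shared_model = lambda prompt: prompt.upper()   # stand-in for the LLM
writer = Agent("writer", "flowery novelist", shared_model)
coder = Agent("coder", "terse engineer", shared_model)
print(writer.respond("hello"))   # same model, different frame
```

Both agents call the identical model; only the constructed context differs, which is the whole point of the abstraction.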
And yes I think the idea is that emergent behaviour can be useful. This comes to mind
Given we know different prompts perform better on different tasks (via evals, papers, etc), you can think of multiple agents interacting (especially when there's a specialized "router" or orchestrator) as sub problems of a larger task being solved by "agents" specialized for that task - prompts + context crafted for that sub-problem.
* sometimes we want an LLM with longer context, faster speed, higher quality, etc: so even in a model family, in the same job, diff model configs
* we do a lot of prompt tuning for agent calls, like what a good Splunk query is, what SQL tables are currently available, what a good chart is, how to use a graph library, ...
* we also do accompanying code-level work, like running a generated python data analysis in a sandbox and feeding back exceptions to the LLM, or checking for parse errors when running a DB query, which feed back to the LLM
* When working directly on data, we might run it through the LLM, which might get into parallel chunked calls, a summary tree, etc, where a single LLM call would be insufficient, costly, slow, etc
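The exception-feedback loop from the code-level point above can be sketched like this. `llm_fix` is a stub for a model call; a real system would run the code in a proper sandbox (subprocess or container), not bare `exec()`.

```python
import traceback

def llm_fix(code, error):
    # Stub: pretend the model reads the traceback and repairs the typo.
    # A real implementation would send `code` and `error` to an LLM.
    return code.replace("pritn", "print")

def run_with_feedback(code, max_attempts=3):
    for _ in range(max_attempts):
        try:
            exec(compile(code, "<generated>", "exec"), {})
            return code              # success: return the working code
        except Exception:
            code = llm_fix(code, traceback.format_exc())
    raise RuntimeError("could not repair generated code")

fixed = run_with_feedback("pritn('hello')")
print(fixed)
```

The loop structure is the same whether the feedback is a Python traceback, a SQL parse error, or a chart-library complaint: execute, capture the error, hand it back, retry.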
> Is there any reason to believe that the interaction of multiple agents (using the same model) will yield some emergent property that is beyond the capabilities of the agent model?
If you write a short story it's often better to split it into parts (make an outline, write the story, edit the story) than if you would try to do the whole process at once. The same can be true for LLMs I suppose.
In an LLM sense this would be like the different system prompts are sampling different parts of the training distribution, but I'm not able to validate such a claim or know if someone has validated it before.
From my experience it's a modularization technique. It makes it easier to reason about and improve the system. For example, instead of one big model capable of doing anything, you can separate the system into specialized subsystems with different prompts and improve them over time.
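The outline / write / edit decomposition mentioned above, as a minimal pipeline: one model, three specialized prompts run in sequence, each stage's output feeding the next. `model` is a stub that just tags the text with its stage name so the data flow is visible.

```python
def model(system_prompt, text):
    # Stub for an LLM call; a real one would transform `text` per the prompt.
    stage = system_prompt.split(":")[0]
    return f"{stage}({text})"

STAGES = [
    "outline: produce a bullet outline",
    "write: expand the outline into prose",
    "edit: tighten grammar and pacing",
]

def pipeline(task):
    result = task
    for prompt in STAGES:
        result = model(prompt, result)
    return result

print(pipeline("a story about a lighthouse"))
```

Because each stage is a separate call with its own prompt, you can evaluate and improve them independently, which is the modularization benefit described.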
Mixture of experts: Make each model world-class within a single domain.
If adding one more common-sense Q&A example makes the calculus-bot even slightly worse at calculus, don't do it.
The “mixture of experts” concept in LLMs is a way of training a single model, it’s not based on training many different models (although that was the idea when the term was originally coined).
The breakthrough I've had is realizing how important it is to control the conversation between agents.
Just like in our work environments and in our relationships, HOW conversations occur largely determines the impact of the conversation, with or without AutoGen.
We're building a multi-agent postgres data analytics tool. If you're building agentic software, join the conversation: https://youtu.be/4o8tymMQ5GM
Unless I'm missing something, how is this library different from prompting a single chatbot: "Write a dialog in which A, B, and C, each playing a different role, have a conversation and do something D"?
You can have the character description more front and center, if that makes any sense.
So instead of diluting attention across three separate character descriptions, your model will see just the chat log and the single description of the persona it should respond from. This may or may not make a difference.
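The two framings, as concrete message lists for a chat API (role names follow the common system/user convention; whether the second framing actually behaves differently is an empirical claim, not a guarantee):

```python
def single_prompt(personas, topic):
    # One call: the model plays all parts in one linear response.
    return [{"role": "user",
             "content": f"Write a dialog in which {', '.join(personas)} "
                        f"talk about {topic}."}]

def per_persona_prompt(persona, chat_log):
    # One call per turn: the model sees the log plus ONE persona only.
    msgs = [{"role": "system",
             "content": f"You are {persona}. Reply in character."}]
    msgs += [{"role": "user", "content": line} for line in chat_log]
    return msgs

print(single_prompt(["A", "B", "C"], "D"))
print(per_persona_prompt("B", ["A: hello"]))
```

In the second shape, attention is never split across three character descriptions at once, which is the dilution point made above.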
Maybe it depends on the model but I find you'll get a different result if you say "write a dialog in which, A, B, and C talk about D" versus "read what A said and reply as B". The latter will result in each participant talking longer.
Not sure talking longer is the goal. More so, the focus on and separation of each persona facilitates an interplay and dynamic that a single attention window over one linear response (playing A, B, and C at once) cannot represent nearly as robustly, since the main inference remains the primary focus. Would love to hear other opinions chime in here.
Is having conversations amongst agents like treating each agent as a traditional node? Maybe in the future there will be millions of nodes (agents) conversing, and maybe this is how next-gen AGI will form.
It doesn’t help you inherently solve the problem per se, but what it distinctively allows you to do is keep the human in the loop, who can assist the agents in solving problems.
To some degree it can also keep problems in the logic chain from snowballing and causing the overall objective to fail because there’s invalid logic in the sequence.
I noticed AutoGen creates a new Docker container each time code is executed by agents (default behaviour, can be turned off), so it's as safe as you think Docker is.
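For reference, the Docker behaviour is controlled through the executing agent's `code_execution_config`. A minimal config fragment along these lines (check the current AutoGen docs for exact option names in your version):

```python
import autogen

# Agent that executes code the other agents produce; with use_docker
# enabled, each execution runs in a container rather than on the host.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config={
        "work_dir": "coding",   # where generated code files are written
        "use_docker": True,     # set False to run directly on the host
    },
)
```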