
Microsoft AutoGen

Build LLM applications via multiple agents

AutoGen provides a multi-agent conversation framework as a high-level abstraction. It is an open-source library for enabling next-generation LLM applications with multi-agent collaboration, teachability, and personalization. With this framework, users can build LLM workflows. Agent modularity and conversation-based programming simplify development and enable reuse for developers. End users benefit from multiple agents independently learning and collaborating on their behalf, enabling them to accomplish more with less work. Benefits of the multi-agent approach with AutoGen include agents that can be backed by various LLM configurations; native support for a generic form of tool usage through code generation and execution; and a special agent, the Human Proxy Agent, that enables easy integration of human feedback and involvement at different levels.
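To make these pieces concrete, here is a minimal sketch of a two-agent workflow using the open-source pyautogen package (v0.2-style API). In the library, the human-proxy role described above is implemented by the UserProxyAgent class. The model name, API-key handling, working directory, and task message below are illustrative assumptions, not part of the announcement.

```python
import os
import autogen

# LLM configuration backing the assistant agent (model name is illustrative).
config_list = [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}]

# An LLM-backed agent that writes replies and, when useful, proposes code.
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)

# A proxy agent that executes the code the assistant proposes and relays results;
# set human_input_mode="ALWAYS" instead to keep a person in the loop at every turn.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# The two agents then converse automatically until the task is solved or terminated.
user_proxy.initiate_chat(
    assistant,
    message="Plot NVDA and TSLA stock price change YTD and save the chart to a file.",
)
```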

Easily build LLM workflows

With AutoGen, building a complex multi-agent conversation system boils down to:

  • Defining a set of agents with specialized capabilities and roles.
  • Defining the interaction behavior between agents, i.e., what an agent should reply when it receives messages from another agent (a minimal sketch of both steps follows this list).
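The sketch below illustrates those two steps with pyautogen: a small set of role-specialized agents, plus a group chat whose manager routes messages between them. The agent names, system messages, round limit, and task are assumptions made for the example, not prescribed by AutoGen.

```python
import os
import autogen

config_list = [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}]
llm_config = {"config_list": config_list}

# Step 1: agents with specialized capabilities and roles.
planner = autogen.AssistantAgent(
    name="planner",
    system_message="Break the user's task into concrete, ordered steps.",
    llm_config=llm_config,
)
coder = autogen.AssistantAgent(
    name="coder",
    system_message="Write Python code that implements the current step.",
    llm_config=llm_config,
)
executor = autogen.UserProxyAgent(
    name="executor",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# Step 2: interaction behavior — a group chat in which a manager agent
# selects which agent speaks next and forwards messages between them.
groupchat = autogen.GroupChat(agents=[planner, coder, executor], messages=[], max_round=10)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

executor.initiate_chat(manager, message="Fetch the latest CPI data and plot it.")
```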
AutoGen is an open-source, community-driven project under active development (as a spinoff from FLAML, a fast library for automated machine learning and tuning), which encourages contributions from individuals of all backgrounds. Many Microsoft Research collaborators have made great contributions to this project, including academic contributors like Pennsylvania State University and the University of Washington, and product teams like Microsoft Fabric and ML.NET. AutoGen aims to provide an effective and easy-to-use framework for developers to build next-generation applications, and already demonstrates promising opportunities to build creative applications and provide a large space for innovation.

Microsoft AutoGen: Enabling next-generation large language model applications


“Capabilities like AutoGen are poised to fundamentally transform and extend what large language models are capable of. This is one of the most exciting developments I have seen in AI recently.”

Doug Burger, Technical Fellow, Microsoft

Figure 1. AutoGen enables complex LLM-based workflows using multi-agent conversations. (Left) AutoGen agents are customizable and can be based on LLMs, tools, humans, or a combination of them. (Top-right) Agents can converse to solve tasks. (Bottom-right) The framework supports many additional complex conversation patterns.

It requires significant effort and expertise to design, implement, and optimize a workflow that can leverage the full potential of large language models (LLMs). Automating these workflows has tremendous value. As developers create increasingly complex LLM-based applications, workflows will inevitably grow more intricate. The design space for such workflows can be vast and complex, which heightens the challenge of orchestrating an optimal workflow with robust performance.

AutoGen is a framework that simplifies the orchestration, optimization, and automation of LLM workflows. It offers customizable and conversable agents that leverage the strongest capabilities of the most advanced LLMs, such as GPT-4, while addressing their limitations by integrating with humans and tools and by supporting conversations between multiple agents via automated chat.
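As a brief illustration of the human-and-tool integration mentioned above, the following sketch (again using the pyautogen API, with illustrative settings) asks a person for feedback only when the assistant believes the task is complete, while letting the proxy agent execute any code the assistant proposes in the meantime.

```python
import os
import autogen

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}]},
)

# "TERMINATE" requests human feedback only when the assistant signals it is done;
# "ALWAYS" would insert a human turn after every assistant message.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="TERMINATE",
    max_consecutive_auto_reply=5,
    code_execution_config={"work_dir": "scratch", "use_docker": False},
)

user_proxy.initiate_chat(assistant, message="Summarize the key findings in data.csv.")
```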
