<a href="https://autoagent-ai.github.io/docs"><img src="https://img.shields.io/badge/Documentation-000?logo=googledocs&logoColor=FFE165&style=for-the-badge" alt="Check out the documentation"></a>
Welcome to AutoAgent! AutoAgent is a **Fully-Automated** and highly **Self-Developing** framework that enables users to create and deploy LLM agents through **Natural Language Alone**.
- Automatically constructs and orchestrates collaborative agent systems purely through natural dialogue, eliminating the need for manual coding or technical configuration.
- Democratizes AI development by allowing anyone, regardless of coding experience, to create and customize their own agents, tools, and workflows using natural language alone.
- Dynamically creates, optimizes, and adapts agent workflows based on high-level task descriptions, even when users cannot fully specify implementation details.
- Enables controlled code generation for creating tools, agents, and workflows through iterative self-improvement, supporting both single-agent creation and multi-agent workflow generation.
<li><strong>[2025, Feb 17]</strong>: 🎉🎉We've updated and released AutoAgent v0.2.0 (formerly known as MetaChain). Detailed changes include: 1) fixed bugs with different LLM providers reported in issues; 2) added automatic installation of AutoAgent in the container environment, as requested in issues; 3) added easier-to-use commands for the CLI mode; 4) renamed the project to AutoAgent for better understanding.</li>
<li><strong>[2025, Feb 10]</strong>: 🎉🎉We've released <b>MetaChain</b>, including the framework, evaluation code, and CLI mode! Check our <a href="https://arxiv.org/abs/2502.05957">paper</a> for more details.</li>
AutoAgent features a ready-to-use multi-agent system accessible through user mode on the start page. This system serves as a comprehensive AI research assistant designed for information retrieval, complex analytical tasks, and comprehensive report generation.
The most distinctive feature of AutoAgent is its natural language customization capability. Unlike other agent frameworks, AutoAgent allows you to create tools, agents, and workflows using natural language alone. Simply choose `agent editor` or `workflow editor` mode to start your journey of building agents through conversations.
You can also create agent workflows from natural language descriptions with the `workflow editor` mode, as shown in the following figure. (Note: this mode does not currently support tool creation.)
We use Docker to containerize the agent-interactive environment, so please install [Docker](https://www.docker.com/) first. You don't need to pull the pre-built image manually; AutoAgent will **automatically pull the pre-built image that matches your machine's architecture**.
<span id='api-keys-setup'/>
### API Keys Setup
Create an environment variable file based on `.env.template` and set the API keys for the LLMs you want to use. Not every API key is required; set only the ones you need.
```bash
# Required: your own GitHub token
GITHUB_AI_TOKEN=
# Optional API keys
OPENAI_API_KEY=
DEEPSEEK_API_KEY=
ANTHROPIC_API_KEY=
GEMINI_API_KEY=
HUGGINGFACE_API_KEY=
GROQ_API_KEY=
XAI_API_KEY=
MISTRAL_API_KEY=
```
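To illustrate the "use what you need" behavior, here is a minimal sketch of how a `.env`-style file can be parsed so that only the keys you actually set take effect. This is not AutoAgent's real loader (the project handles this internally); it simply shows why empty optional keys are harmless:

```python
# Minimal sketch (NOT AutoAgent's actual loader): keep only the
# API keys that have a non-empty value, skipping the rest.
def load_env(text):
    """Parse .env-style text, keeping only non-empty assignments."""
    env = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blank lines, comments, and malformed lines
        key, _, value = line.partition("=")
        if value.strip():
            env[key.strip()] = value.strip()
    return env

sample = """\
# Required: your own GitHub token
GITHUB_AI_TOKEN=ghp_example
# Optional API keys
OPENAI_API_KEY=
ANTHROPIC_API_KEY=sk-ant-example
"""
print(sorted(load_env(sample)))  # → ['ANTHROPIC_API_KEY', 'GITHUB_AI_TOKEN']
```

Keys left empty in your `.env` file (here, `OPENAI_API_KEY`) are simply ignored.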
<span id='start-with-cli-mode'/>
### Start with CLI Mode
> [🚨 **News**] We have added an easier-to-use command to start the CLI mode and fixed bugs with different LLM providers reported in issues. Follow the steps below to start the CLI mode with different LLM providers and much less configuration.
#### Command Options:
You can run `auto main` to start the full version of AutoAgent, including `user mode`, `agent editor`, and `workflow editor`. Alternatively, you can run `auto deep-research` to start a more lightweight `user mode`, just like the [Auto-Deep-Research](https://github.com/HKUDS/Auto-Deep-Research) project. The configuration options for these commands are shown below.
- `--container_name`: Name of the Docker container (default: `deepresearch`)
- `--port`: Port for the container (default: 12346)
- `COMPLETION_MODEL`: The LLM model to use; the model name should follow the [LiteLLM](https://github.com/BerriAI/litellm) naming convention. (Default: `claude-3-5-sonnet-20241022`)
- `DEBUG`: Enable debug mode for detailed logs (default: False)
- `API_BASE_URL`: The base URL for the LLM provider (default: None)
- `FN_CALL`: Enable function calling (default: None). Most of the time you can ignore this option, because the default is already set based on the model name.
- `git_clone`: Clone the AutoAgent repository into the local environment (only supported with the `auto main` command; default: True)
- `test_pull_name`: The name of the branch to pull (only supported with the `auto main` command; default: `autoagent_mirror`)
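Putting these options together, a typical invocation that overrides several defaults might look like the following (illustrative only; the container name, port, and model are arbitrary examples, with `COMPLETION_MODEL` and `DEBUG` passed as environment variables as shown in the provider sections below):

```shell
# Example: custom container name/port, GPT-4o as the backing model, verbose logs
COMPLETION_MODEL=gpt-4o DEBUG=True auto main --container_name my_agent --port 12347
```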
#### More details about `git_clone` and `test_pull_name`
In the `agent editor` and `workflow editor` modes, AutoAgent clones a mirror of its own repository into the local agent-interactive environment so that it can automatically update itself, for example by creating new tools, agents, and workflows. So if you want to use the `agent editor` or `workflow editor` mode, set `git_clone` to True and set `test_pull_name` to 'autoagent_mirror' or another branch.
#### `auto main` with different LLM Providers
The following shows how to use the full version of AutoAgent with the `auto main` command and different LLM providers. If you want to use the `auto deep-research` command instead, refer to the [Auto-Deep-Research](https://github.com/HKUDS/Auto-Deep-Research) project for more details.
##### Anthropic
* set the `ANTHROPIC_API_KEY` in the `.env` file.
```bash
ANTHROPIC_API_KEY=your_anthropic_api_key
```
* run the following command to start AutoAgent.
```bash
auto main # default model is claude-3-5-sonnet-20241022
```
##### OpenAI
* set the `OPENAI_API_KEY` in the `.env` file.
```bash
OPENAI_API_KEY=your_openai_api_key
```
* run the following command to start AutoAgent.
```bash
COMPLETION_MODEL=gpt-4o auto main
```
##### Mistral
* set the `MISTRAL_API_KEY` in the `.env` file.
```bash
MISTRAL_API_KEY=your_mistral_api_key
```
* run the following command to start AutoAgent.
```bash
COMPLETION_MODEL=mistral/mistral-large-2407 auto main
```
##### Gemini - Google AI Studio
* set the `GEMINI_API_KEY` in the `.env` file.
```bash
GEMINI_API_KEY=your_gemini_api_key
```
* run the following command to start AutoAgent.
```bash
COMPLETION_MODEL=gemini/gemini-2.0-flash auto main
```
##### Huggingface
* set the `HUGGINGFACE_API_KEY` in the `.env` file.
```bash
HUGGINGFACE_API_KEY=your_huggingface_api_key
```
* run the following command to start AutoAgent.
```bash
COMPLETION_MODEL=huggingface/meta-llama/Llama-3.3-70B-Instruct auto main
```
##### Groq
* set the `GROQ_API_KEY` in the `.env` file.
```bash
GROQ_API_KEY=your_groq_api_key
```
* run the following command to start AutoAgent.
```bash
COMPLETION_MODEL=groq/deepseek-r1-distill-llama-70b auto main
```

<em>Start Page of AutoAgent.</em>
### Tips
#### Import browser cookies into the browser environment
You can import your browser cookies into the browser environment so the agent can better access certain websites. For more details, please refer to the [cookies](./AutoAgent/environment/cookie_json/README.md) folder.
#### Add your own API keys for third-party Tool Platforms
If you want to create tools from third-party tool platforms, such as RapidAPI, you should subscribe to the tools on the platform and add your own API keys by running [process_tool_docs.py](./process_tool_docs.py).
```bash
python process_tool_docs.py
```
More features coming soon! 🚀 A **Web GUI interface** is under development.
- [Join our Slack workspace](https://join.slack.com/t/AutoAgent-workspace/shared_invite/zt-2zibtmutw-v7xOJObBf9jE2w3x7nctFQ) - Here we talk about research, architecture, and future development.
Rome wasn't built in a day. AutoAgent stands on the shoulders of giants, and we are deeply grateful for the outstanding work that came before us. Our framework architecture draws inspiration from [OpenAI Swarm](https://github.com/openai/swarm), while our user mode's three-agent design benefits from [Magentic-One](https://github.com/microsoft/autogen/tree/main/python/packages/autogen-magentic-one)'s insights. We've also learned from [OpenHands](https://github.com/All-Hands-AI/OpenHands) for documentation structure, and from many other excellent projects for agent-environment interaction design. We express our sincere gratitude and respect to all these pioneering works that have been instrumental in shaping AutoAgent.