AutoAgents is a multi-agent framework built in Rust for creating intelligent, autonomous agents powered by Large Language Models (LLMs). The framework enables agents that can reason, act, remember, and collaborate through a modular architecture with swappable components.
Key Design Principles:
For installation instructions, see page 2.1. For hands-on examples, see page 6.
Sources: README.md20-28 docs/src/introduction.md1-15
AutoAgents provides a complete framework for building LLM-powered agents with:
- `ReActAgent` and `BasicAgent` executors implementing reasoning patterns
- `ToolT` and `ToolRuntime` traits for connecting agents to external capabilities
- `SlidingWindowMemory` and extensible memory backends for context retention
- `ActorAgent` with pub/sub topics for agent communication
- `LLMProvider` trait supporting 12+ providers (OpenAI, Anthropic, Ollama, local models)

The framework supports cloud-native deployments with API-based LLMs, edge-native deployments with local inference (LlamaCpp, Mistral-rs), and hybrid architectures combining both.
Sources: README.md32-56 Cargo.lock471-633
AutoAgents uses a layered architecture separating agent definition, execution logic, and runtime infrastructure. The following diagram shows the major components and their relationships using actual code entities from the codebase.
High-Level System Architecture
Key Layers:
- `Task` instances containing prompts and context
- The `#[agent]` macro generates `AgentDeriveT` implementations with metadata
- `AgentBuilder` wires agents with LLM, memory, and tools; `DirectAgent` or `ActorAgent` handle execution
- `LLMProvider`, `Memory`, and `ToolRuntime` provide the foundational capabilities

See page 1.1 for detailed architecture diagrams and page 1.2 for workspace organization.
Sources: README.md154-266 Cargo.toml1-12
The AutoAgents workspace is organized into specialized crates, each handling distinct concerns. This modular design enables selective dependency inclusion and clear separation of responsibilities.
Workspace Crate Structure
| Crate | Purpose | Key Exports |
|---|---|---|
| `autoagents` | Main API facade | Re-exports core types, feature flags |
| `autoagents-core` | Core framework | `AgentBuilder`, `ReActAgent`, `DirectAgent`, `ActorAgent`, `Task` |
| `autoagents-llm` | LLM abstraction | `LLMProvider`, `ChatProvider`, `LLMBuilder`, provider implementations |
| `autoagents-derive` | Procedural macros | `#[agent]`, `#[tool]`, `#[derive(ToolInput)]`, `#[derive(AgentOutput)]` |
| `autoagents-toolkit` | Reusable tools | WolframAlpha, file operations, search, code analysis tools |
| `autoagents-qdrant` | Vector store | Qdrant integration for RAG patterns |
| `autoagents-llamacpp` | Local inference | LlamaCpp backend for GGUF models |
| `autoagents-mistral-rs` | Local inference | Mistral-rs backend for local models |
See page 1.2 for detailed dependency information and publishing order.
Sources: Cargo.toml1-97 README.md317-334
AutoAgents is built around several fundamental concepts that work together to enable intelligent agent behavior. These are covered in detail on page 1.3, but here's a brief overview:
Agents are defined using the #[agent] macro and implement the AgentDeriveT trait. An agent consists of:
- A typed output schema declared with `#[derive(AgentOutput)]`
- Optional lifecycle hooks via the `AgentHooks` trait

Agent Definition Pattern:
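The metadata that the macro wires up can be imagined roughly as follows. This is a hypothetical, dependency-free sketch: the trait name `AgentMeta` and its methods are invented for illustration and do not match the real `AgentDeriveT` trait in `autoagents-core`.

```rust
// Hypothetical sketch of the kind of metadata trait an #[agent]-style
// macro might implement for a struct; names are illustrative only.
trait AgentMeta {
    fn name(&self) -> &'static str;
    fn description(&self) -> &'static str;
}

struct MathAgent;

impl AgentMeta for MathAgent {
    fn name(&self) -> &'static str { "math_agent" }
    fn description(&self) -> &'static str { "Solves arithmetic problems" }
}

fn main() {
    let agent = MathAgent;
    // prints "math_agent: Solves arithmetic problems"
    println!("{}: {}", agent.name(), agent.description());
}
```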
See page 3.1 for detailed agent system documentation.
Sources: README.md154-212
Executors implement the reasoning logic for agents. AutoAgents provides two built-in executors:
- `ReActAgent`: implements the ReAct (Reasoning + Acting) pattern with tool use in a loop
- `BasicAgent`: simple single-turn execution without iterative reasoning

The `ReActAgent` executor follows a loop that begins with a `Task` containing the user prompt. See page 3.3 for ReAct executor details.
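As a rough, dependency-free illustration of the ReAct control flow (not the actual `ReActAgent` implementation), the "LLM" below is a scripted closure and the only tool is a doubling function:

```rust
// Mock ReAct-style loop: the model either requests a tool call or finishes.
enum Step {
    Act { tool: &'static str, input: i64 },
    Finish(String),
}

fn react_loop(mut llm: impl FnMut(&str) -> Step, task: &str) -> String {
    let mut observation = task.to_string();
    for _ in 0..5 {                       // cap iterations to avoid runaway loops
        match llm(&observation) {
            Step::Act { tool, input } => {
                // Dispatch to the named tool and feed the result back in.
                let result = match tool {
                    "double" => input * 2,
                    _ => 0,
                };
                observation = format!("observation: {result}");
            }
            Step::Finish(answer) => return answer,
        }
    }
    "max iterations reached".to_string()
}

fn main() {
    let mut calls = 0;
    let answer = react_loop(
        |obs| {
            calls += 1;
            if calls == 1 {
                Step::Act { tool: "double", input: 21 }
            } else {
                Step::Finish(obs.to_string())
            }
        },
        "double 21",
    );
    println!("{answer}");   // prints "observation: 42"
}
```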
Sources: README.md154-266
Agents can run in two modes:
DirectAgent: Synchronous execution with .run(task) or .run_stream(task) methods. Suitable for single-agent workflows and CLI applications.
ActorAgent: Event-driven execution using pub/sub topics via the Ractor actor framework. Agents subscribe to Topic<Task> channels and emit structured events. Enables multi-agent coordination patterns.
Execution Mode Comparison:

| Mode | Invocation | Coordination | Typical Use |
|---|---|---|---|
| `DirectAgent` | `.run(task)` / `.run_stream(task)` | Single agent, synchronous | Single-agent workflows, CLI applications |
| `ActorAgent` | Publish to `Topic<Task>` | Event-driven pub/sub via Ractor | Multi-agent coordination |
See page 3.2 for execution patterns and page 3.4 for environment and runtime details.
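As a loose analogy for actor-mode pub/sub (the real `ActorAgent` uses the Ractor framework with typed `Topic<Task>` channels, not raw threads), a topic can be mocked with an mpsc channel and a subscriber thread:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical stand-in for actor-mode pub/sub: the "topic" is a channel
// and the subscribed "agent" runs on its own thread.
struct Task { prompt: String }

fn publish_and_collect(prompts: &[&str]) -> Vec<String> {
    let (tx, rx) = mpsc::channel::<Task>();

    // Subscriber agent: drains the topic until the publisher hangs up.
    let subscriber = thread::spawn(move || {
        let mut handled = Vec::new();
        while let Ok(task) = rx.recv() {
            handled.push(format!("handled: {}", task.prompt));
        }
        handled
    });

    for p in prompts {
        tx.send(Task { prompt: p.to_string() }).unwrap();
    }
    drop(tx);                       // close the topic so the subscriber exits
    subscriber.join().unwrap()
}

fn main() {
    println!("{:?}", publish_and_collect(&["summarize report", "draft reply"]));
}
```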
Sources: README.md154-266
Tools extend agent capabilities through the ToolT and ToolRuntime traits. The #[tool] macro generates the metadata, while users implement ToolRuntime::execute() for the execution logic.
Tool Definition:
- `#[tool]` macro on the struct generates the `ToolT` implementation
- `#[derive(ToolInput)]` on the input struct generates the JSON schema
- `impl ToolRuntime` provides the execution logic, returning `Result<Value, ToolCallError>`

AutoAgents supports three tool execution modes; see page 4 for comprehensive tool documentation.
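To make the `execute()` contract concrete, here is a dependency-free sketch of a `ToolRuntime`-style trait. The JSON `Value` return type is replaced with `String` and the error type is simplified, so this mirrors only the shape, not the crate's actual definitions:

```rust
use std::collections::HashMap;

// Simplified stand-ins for the crate's error and result types.
#[derive(Debug, PartialEq)]
struct ToolCallError(String);

trait ToolRuntime {
    fn execute(&self, args: &HashMap<String, String>) -> Result<String, ToolCallError>;
}

struct Adder;

impl ToolRuntime for Adder {
    fn execute(&self, args: &HashMap<String, String>) -> Result<String, ToolCallError> {
        // Parse a named integer argument, surfacing failures as tool errors.
        let get = |k: &str| -> Result<i64, ToolCallError> {
            args.get(k)
                .ok_or_else(|| ToolCallError(format!("missing arg: {k}")))?
                .parse()
                .map_err(|_| ToolCallError(format!("bad number for {k}")))
        };
        Ok((get("a")? + get("b")?).to_string())
    }
}

fn main() {
    let args = HashMap::from([
        ("a".to_string(), "2".to_string()),
        ("b".to_string(), "3".to_string()),
    ]);
    println!("{:?}", Adder.execute(&args));   // prints Ok("5")
}
```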
Sources: README.md168-192
Memory systems maintain conversational context and agent state. The primary implementation is SlidingWindowMemory, which keeps a rolling window of recent messages.
Memory is configured via `AgentBuilder`:

```rust
AgentBuilder::new(executor)
    .memory(Box::new(SlidingWindowMemory::new(10)))
```
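The eviction behavior of a sliding window can be illustrated with a small stand-alone analogue (this is not the real `SlidingWindowMemory` API, just the same idea over a `VecDeque`):

```rust
use std::collections::VecDeque;

// Illustrative sliding-window buffer: keeps at most `capacity` recent
// messages, evicting the oldest when full.
struct WindowMemory {
    capacity: usize,
    messages: VecDeque<String>,
}

impl WindowMemory {
    fn new(capacity: usize) -> Self {
        assert!(capacity > 0, "window must hold at least one message");
        Self { capacity, messages: VecDeque::new() }
    }

    fn push(&mut self, msg: impl Into<String>) {
        if self.messages.len() == self.capacity {
            self.messages.pop_front();          // evict oldest message
        }
        self.messages.push_back(msg.into());
    }

    fn context(&self) -> Vec<&str> {
        self.messages.iter().map(String::as_str).collect()
    }
}

fn main() {
    let mut mem = WindowMemory::new(2);
    mem.push("hello");
    mem.push("how are you?");
    mem.push("fine, thanks");                   // "hello" falls out of the window
    println!("{:?}", mem.context());            // prints ["how are you?", "fine, thanks"]
}
```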
See page 3.6 for memory system details.
Sources: README.md213 README.md234-238
Task is the work unit passed to agents, containing the user prompt and any associated context.
Tasks are created with Task::new(prompt) and passed to agent.run(task) or published to topics in actor mode.
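A minimal stand-in for this shape, with illustrative field names that may not match the real `Task` type:

```rust
// Hypothetical Task work unit: a prompt plus optional context strings.
#[derive(Debug, Clone)]
struct Task {
    prompt: String,
    context: Vec<String>,
}

impl Task {
    fn new(prompt: impl Into<String>) -> Self {
        Self { prompt: prompt.into(), context: Vec::new() }
    }
}

fn main() {
    let task = Task::new("Summarize the quarterly report");
    println!("{} ({} context items)", task.prompt, task.context.len());
}
```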
Sources: README.md244
AutoAgents abstracts LLM interaction through the LLMProvider trait hierarchy, enabling seamless switching between 12+ backends. All providers implement the same interface, allowing agents to remain backend-agnostic.
Provider Abstraction Layer:
Cloud Providers (require API keys, internet connectivity):
| Provider | Feature Flag | Models |
|---|---|---|
| OpenAI | openai | gpt-4, gpt-4o, gpt-3.5-turbo |
| Anthropic | anthropic | claude-3-opus, claude-3-sonnet, claude-3-haiku |
| Google | google | gemini-pro, gemini-flash |
| Groq | groq | Fast inference for various models |
| Azure OpenAI | azure_openai | Enterprise OpenAI deployment |
| xAI | xai | grok-* models |
| DeepSeek | deepseek | deepseek-coder, deepseek-chat |
| Phind | phind | Developer-focused models |
| OpenRouter | openrouter | Unified API for multiple providers |
Local Providers (offline operation, privacy-focused):
| Provider | Crate / Feature | Capabilities |
|---|---|---|
| Ollama | ollama feature | Self-hosted models via localhost:11434 |
| LlamaCpp | autoagents-llamacpp | GGUF models, GPU offloading |
| Mistral-rs | autoagents-mistral-rs | Rust-native inference engine |
All providers use the `LLMBuilder` pattern for consistent configuration.
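As an illustration of that builder style (method and field names below are invented for the sketch, not the crate's actual `LLMBuilder` API):

```rust
// Hypothetical builder mirroring the LLMBuilder pattern: chainable setters
// that accumulate configuration, finalized by build().
#[derive(Debug, Default, PartialEq)]
struct LlmConfig {
    model: String,
    api_key: String,
    temperature: f32,
}

#[derive(Default)]
struct LlmBuilder {
    config: LlmConfig,
}

impl LlmBuilder {
    fn new() -> Self { Self::default() }
    fn model(mut self, m: &str) -> Self { self.config.model = m.into(); self }
    fn api_key(mut self, k: &str) -> Self { self.config.api_key = k.into(); self }
    fn temperature(mut self, t: f32) -> Self { self.config.temperature = t; self }
    fn build(self) -> LlmConfig { self.config }
}

fn main() {
    let cfg = LlmBuilder::new()
        .model("gpt-4o")
        .api_key("sk-...")
        .temperature(0.2)
        .build();
    println!("{cfg:?}");
}
```

Because every setter consumes and returns `self`, swapping backends is a matter of changing the configured values, not the call sites, which is what keeps agents backend-agnostic.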
See page 5 for comprehensive LLM integration documentation.
Sources: README.md59-95 crates/autoagents/Cargo.toml12-26 README.md252-261
AutoAgents implements the ReAct (Reasoning and Acting) pattern through the ReActExecutor trait, enabling agents to iteratively reason about problems and take actions using available tools.
Agents can produce type-safe outputs using the `#[derive(AgentOutput)]` macro, with automatic JSON schema generation for validation.
The framework supports sandboxed tool execution using WebAssembly through the wasm_runner example, providing secure and cross-platform tool isolation.
Type-safe pub/sub communication via Topic<Task> channels enables coordination between multiple agents with shared memory and context.