
Overview

The hard part of building agents (or any LLM application) is making them reliable. They may work in a prototype, but they often fail when applied to real-world use cases.

Why do agents fail?

When an agent fails, it is usually because an LLM call inside the agent took the wrong action or didn't do what we expected. LLM calls fail for one of two reasons:
  1. The underlying LLM is not capable enough
  2. The “right” context was not passed to the LLM
More often than not, it is actually the second reason that makes agents unreliable. Context engineering is providing the right information and tools, in the right format, so the LLM can accomplish a task. It is the number one job of AI engineers: the lack of the "right" context is the biggest blocker to more reliable agents, and LangChain's agent abstractions are uniquely designed to facilitate context engineering.
New to context engineering? Start with the conceptual overview to understand the different types of context and when to use them.
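To make the idea concrete, here is a hedged sketch of what "the right information and tools in the right format" can look like in practice. The `retrieve_docs` helper, the message shapes, and the tool schema are hypothetical stand-ins for illustration, not a specific LangChain API:

```python
# Illustrative sketch: context engineering means assembling what the model
# sees — instructions, relevant facts, and tool definitions — rather than
# only forwarding the user's question. All helpers here are stubs.

def retrieve_docs(query: str) -> list[str]:
    """Stub: fetch the handful of documents relevant to this query."""
    return ["Refunds are allowed within 30 days of purchase."]

def build_messages(user_question: str) -> list[dict]:
    context = "\n".join(retrieve_docs(user_question))
    return [
        # The "right information, in the right format": explicit instructions
        # plus only the context the model needs for this task.
        {"role": "system",
         "content": f"You are a support agent.\nRelevant policy:\n{context}"},
        {"role": "user", "content": user_question},
    ]

# The "right tools": schemas the model can invoke when text alone isn't enough.
tools = [{
    "name": "lookup_order",
    "description": "Look up an order by its ID.",
    "parameters": {
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
    },
}]
```

The point of the sketch is the selection step: the model receives the few facts and tool definitions it needs for this task, formatted explicitly, instead of everything the application knows.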

The agent loop

A typical agent loop consists of two main steps, sketched in code below:
  1. Model call - the LLM is called with the prompt and the available tools; it returns either a final response or a request to execute one or more tools
  2. Tool execution - the requested tools are executed, and their results are returned to the model

These two steps repeat until the model responds without requesting any tools.
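Here is a minimal sketch of that loop in plain Python. `call_model` is a hypothetical stand-in for a real chat-model client, and the tool registry is illustrative rather than a LangChain API:

```python
from typing import Any

def get_weather(city: str) -> str:
    """Example tool: return a (stubbed) weather report for a city."""
    return f"It is sunny in {city}."

# Hypothetical tool registry mapping tool names to callables.
TOOLS = {"get_weather": get_weather}

def call_model(messages: list[dict], tools: dict) -> dict:
    """Hypothetical model call. Assumed to return either
    {"content": "..."} for a final answer, or
    {"tool_calls": [{"name": ..., "args": {...}}]} for tool requests."""
    raise NotImplementedError  # stand-in for a real LLM client

def run_agent(user_input: str) -> str:
    messages: list[dict[str, Any]] = [{"role": "user", "content": user_input}]
    while True:
        # Step 1: model call — the LLM sees the conversation and the tools.
        response = call_model(messages, TOOLS)
        tool_calls = response.get("tool_calls")
        if not tool_calls:
            # No tool requests: the model produced a final answer.
            return response["content"]
        # Step 2: tool execution — run each requested tool and feed the
        # results back so the next model call can use them.
        messages.append({"role": "assistant", "tool_calls": tool_calls})
        for call in tool_calls:
            result = TOOLS[call["name"]](**call["args"])
            messages.append(
                {"role": "tool", "name": call["name"], "content": result}
            )
```

Everything an agent framework adds (prompt management, state, error handling, streaming) layers on top of this basic loop; the context-engineering question is what goes into `messages` and `TOOLS` on each iteration.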