In this tutorial we will build a custom agent that can answer questions about a SQL database using LangGraph. LangChain offers built-in agent implementations, built on LangGraph primitives; when deeper customization is required, agents can be implemented directly in LangGraph. This guide demonstrates an example implementation of a SQL agent. You can find a tutorial that builds a SQL agent using higher-level LangChain abstractions here.
Building Q&A systems over SQL databases requires executing model-generated SQL queries, which carries inherent risks. Make sure that your database connection permissions are always scoped as narrowly as possible for your agent's needs. This mitigates, though does not eliminate, the risks of building a model-driven system.
The prebuilt agent lets us get started quickly, but it relies on the system prompt to constrain its behavior: for example, we instructed the agent to always start with the "list tables" tool, and to always run a query-checker tool before executing a query. We can enforce a higher degree of control in LangGraph by customizing the agent. Here, we implement a simple ReAct agent setup, with dedicated nodes for specific tool calls. We will use the same state as the prebuilt agent.
```python
from langchain.chat_models import init_chat_model

# Follow the steps here to configure your credentials:
# https://docs.aws.amazon.com/bedrock/latest/userguide/getting-started.html
model = init_chat_model(
    "anthropic.claude-3-5-sonnet-20240620-v1:0",
    model_provider="bedrock_converse",
)
```
You will be creating a SQLite database for this tutorial. SQLite is a lightweight database that is easy to set up and use. We will be loading the chinook database, a sample database that represents a digital media store. For convenience, we have hosted the database (Chinook.db) on a public GCS bucket.
```python
import pathlib

import requests

url = "https://storage.googleapis.com/benchmarks-artifacts/chinook/Chinook.db"
local_path = pathlib.Path("Chinook.db")

if local_path.exists():
    print(f"{local_path} already exists, skipping download.")
else:
    response = requests.get(url)
    if response.status_code == 200:
        local_path.write_bytes(response.content)
        print(f"File downloaded and saved as {local_path}")
    else:
        print(f"Failed to download the file. Status code: {response.status_code}")
```
We will use a handy SQL database wrapper available in the langchain_community package to interact with the database. The wrapper provides a simple interface to execute SQL queries and fetch results:
```python
from langchain_community.utilities import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///Chinook.db")

print(f"Dialect: {db.dialect}")
print(f"Available tables: {db.get_usable_table_names()}")
print(f'Sample output: {db.run("SELECT * FROM Artist LIMIT 5;")}')
```
To equip the agent with tools, use the SQLDatabaseToolkit from the langchain_community package. It bundles tools for listing tables, fetching schemas, checking queries, and executing them:
```python
from langchain_community.agent_toolkits import SQLDatabaseToolkit

toolkit = SQLDatabaseToolkit(db=db, llm=model)
tools = toolkit.get_tools()

for tool in tools:
    print(f"{tool.name}: {tool.description}\n")
```
```
sql_db_query: Input to this tool is a detailed and correct SQL query, output is a result from the database. If the query is not correct, an error message will be returned. If an error is returned, rewrite the query, check the query, and try again. If you encounter an issue with Unknown column 'xxxx' in 'field list', use sql_db_schema to query the correct table fields.

sql_db_schema: Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables. Be sure that the tables actually exist by calling sql_db_list_tables first! Example Input: table1, table2, table3

sql_db_list_tables: Input is an empty string, output is a comma-separated list of tables in the database.

sql_db_query_checker: Use this tool to double check if your query is correct before executing it. Always use this tool before executing a query with sql_db_query!
```
We construct dedicated nodes for the following steps:
Listing DB tables
Calling the “get schema” tool
Generating a query
Checking the query
Putting these steps in dedicated nodes lets us (1) force tool-calls when needed, and (2) customize the prompts associated with each step.
```python
from typing import Literal

from langchain.messages import AIMessage
from langchain_core.runnables import RunnableConfig
from langgraph.graph import END, START, MessagesState, StateGraph
from langgraph.prebuilt import ToolNode

get_schema_tool = next(tool for tool in tools if tool.name == "sql_db_schema")
get_schema_node = ToolNode([get_schema_tool], name="get_schema")

run_query_tool = next(tool for tool in tools if tool.name == "sql_db_query")
run_query_node = ToolNode([run_query_tool], name="run_query")


# Example: create a predetermined tool call
def list_tables(state: MessagesState):
    tool_call = {
        "name": "sql_db_list_tables",
        "args": {},
        "id": "abc123",
        "type": "tool_call",
    }
    tool_call_message = AIMessage(content="", tool_calls=[tool_call])

    list_tables_tool = next(tool for tool in tools if tool.name == "sql_db_list_tables")
    tool_message = list_tables_tool.invoke(tool_call)
    response = AIMessage(f"Available tables: {tool_message.content}")

    return {"messages": [tool_call_message, tool_message, response]}


# Example: force a model to create a tool call
def call_get_schema(state: MessagesState):
    # Note that LangChain enforces that all models accept `tool_choice="any"`
    # as well as `tool_choice=<string name of tool>`.
    llm_with_tools = model.bind_tools([get_schema_tool], tool_choice="any")
    response = llm_with_tools.invoke(state["messages"])

    return {"messages": [response]}


generate_query_system_prompt = """
You are an agent designed to interact with a SQL database.
Given an input question, create a syntactically correct {dialect} query to run,
then look at the results of the query and return the answer. Unless the user
specifies a specific number of examples they wish to obtain, always limit your
query to at most {top_k} results.

You can order the results by a relevant column to return the most interesting
examples in the database. Never query for all the columns from a specific table,
only ask for the relevant columns given the question.

DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.
""".format(
    dialect=db.dialect,
    top_k=5,
)


def generate_query(state: MessagesState):
    system_message = {
        "role": "system",
        "content": generate_query_system_prompt,
    }
    # We do not force a tool call here, to allow the model to
    # respond naturally when it obtains the solution.
    llm_with_tools = model.bind_tools([run_query_tool])
    response = llm_with_tools.invoke([system_message] + state["messages"])

    return {"messages": [response]}


check_query_system_prompt = """
You are a SQL expert with a strong attention to detail.
Double check the {dialect} query for common mistakes, including:
- Using NOT IN with NULL values
- Using UNION when UNION ALL should have been used
- Using BETWEEN for exclusive ranges
- Data type mismatch in predicates
- Properly quoting identifiers
- Using the correct number of arguments for functions
- Casting to the correct data type
- Using the proper columns for joins

If there are any of the above mistakes, rewrite the query. If there are no mistakes,
just reproduce the original query.

You will call the appropriate tool to execute the query after running this check.
""".format(dialect=db.dialect)


def check_query(state: MessagesState):
    system_message = {
        "role": "system",
        "content": check_query_system_prompt,
    }

    # Generate an artificial user message to check
    tool_call = state["messages"][-1].tool_calls[0]
    user_message = {"role": "user", "content": tool_call["args"]["query"]}
    llm_with_tools = model.bind_tools([run_query_tool], tool_choice="any")
    response = llm_with_tools.invoke([system_message, user_message])
    response.id = state["messages"][-1].id

    return {"messages": [response]}
```
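The system prompt asks the model not to issue DML statements, but prompts are not guarantees. As defense in depth, you could also reject non-SELECT statements programmatically before they reach the database. A minimal sketch; the function name and keyword list are illustrative, and a production guard should use a proper SQL parser rather than string matching:

```python
# Keywords that indicate a statement modifies data or schema.
DISALLOWED_KEYWORDS = {"insert", "update", "delete", "drop", "alter", "create", "truncate"}


def is_read_only(query: str) -> bool:
    """Crude check that a query is a plain SELECT (or CTE) with no DML/DDL keywords."""
    tokens = query.lower().split()
    if not tokens or tokens[0] not in {"select", "with"}:
        return False
    return not any(tok.strip("(),;") in DISALLOWED_KEYWORDS for tok in tokens)


print(is_read_only("SELECT Name FROM Artist LIMIT 5;"))  # True
print(is_read_only("DROP TABLE Artist;"))  # False
```

Such a check could run inside the query-execution node, returning an error message to the model instead of executing a disallowed statement.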
We can now assemble these steps into a workflow using the Graph API. At the query-generation step we define a conditional edge that routes to the query checker if a query was generated, or ends the run if no tool calls are present, meaning the LLM has delivered a final response.
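The routing decision itself is simple: inspect the last message and branch on whether it contains tool calls. It can be sketched independently of LangGraph; the function name and the stand-in message class below are illustrative (in the real graph, the messages are LangChain AIMessage objects and the end target is the END constant):

```python
from dataclasses import dataclass, field
from typing import Literal


@dataclass
class FakeAIMessage:
    """Stand-in for a chat message with an optional list of tool calls."""
    content: str = ""
    tool_calls: list = field(default_factory=list)


def should_continue(state: dict) -> Literal["check_query", "__end__"]:
    """Route to the query checker if the model requested a tool call, else end."""
    last_message = state["messages"][-1]
    if getattr(last_message, "tool_calls", None):
        return "check_query"
    return "__end__"


# Model produced a final answer: no tool calls, so the run ends.
print(should_continue({"messages": [FakeAIMessage(content="There are 275 artists.")]}))  # __end__

# Model requested a query execution: route to the checker node first.
call = {"name": "sql_db_query", "args": {"query": "SELECT COUNT(*) FROM Artist"}, "id": "1"}
print(should_continue({"messages": [FakeAIMessage(tool_calls=[call])]}))  # check_query
```

In the graph, this function would be registered with `add_conditional_edges` on the query-generation node, mapping `"check_query"` to the checker node and the end value to termination.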