Get started using OpenAI chat models in LangChain.
You can find information about OpenAI’s latest models, their costs, context windows, and supported input types in the OpenAI Platform docs.
API Reference: For detailed documentation of all features and configuration options, head to the ChatOpenAI API reference.
Chat Completions API compatibility: ChatOpenAI is fully compatible with OpenAI’s Chat Completions API. If you are looking to connect to other model providers that support the Chat Completions API, you can do so – see instructions.
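As a rough sketch, pointing ChatOpenAI at another Chat Completions-compatible endpoint typically only requires overriding base_url and api_key; the provider URL, key, and model name below are placeholders:

```python
from langchain_openai import ChatOpenAI

# Hypothetical Chat Completions-compatible provider; substitute your provider's
# endpoint URL, API key, and model name.
llm = ChatOpenAI(
    model="provider-model-name",
    base_url="https://example-provider.com/v1",
    api_key="your-provider-api-key",
)
```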
Now we can instantiate our model object and generate responses:
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-5-nano",
    # stream_usage=True,
    # temperature=None,
    # max_tokens=None,
    # timeout=None,
    # reasoning_effort="low",
    # max_retries=2,
    # api_key="...",  # If you prefer to pass api key in directly
    # base_url="...",
    # organization="...",
    # other params...
)
```
See the ChatOpenAI API Reference for the full set of available model parameters.
Token parameter deprecation: OpenAI deprecated max_tokens in favor of max_completion_tokens in September 2024. While max_tokens is still supported for backwards compatibility, it’s automatically converted to max_completion_tokens internally.
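For example, a minimal sketch of capping output length (the model name and limit here are arbitrary):

```python
from langchain_openai import ChatOpenAI

# max_tokens still works, but is translated to max_completion_tokens under the hood
llm_capped = ChatOpenAI(model="gpt-4.1-mini", max_tokens=256)
```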
```python
messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
ai_msg
```
OpenAI’s Chat Completions API does not stream token usage statistics by default (see API reference here). To recover token counts when streaming with ChatOpenAI or AzureChatOpenAI, set stream_usage=True as an initialization parameter or on invocation:
from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-4.1-mini", stream_usage=True)
Azure OpenAI v1 API support: As of langchain-openai>=1.0.1, ChatOpenAI can be used directly with Azure OpenAI endpoints using the new v1 API. This provides a unified way to use OpenAI models whether hosted on OpenAI or Azure. For the traditional Azure-specific implementation, continue to use AzureChatOpenAI.
Using Azure OpenAI v1 API with API Key
To use ChatOpenAI with Azure OpenAI, set the base_url to your Azure endpoint with /openai/v1/ appended:
from langchain_openai import ChatOpenAIllm = ChatOpenAI( model="gpt-5-mini", # Your Azure deployment name base_url="https://{your-resource-name}.openai.azure.com/openai/v1/", api_key="your-azure-api-key")response = llm.invoke("Hello, how are you?")print(response.content)
Using Azure OpenAI with Microsoft Entra ID
The v1 API adds native support for Microsoft Entra ID (formerly Azure AD) authentication with automatic token refresh. Pass a token provider callable to the api_key parameter:
```python
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from langchain_openai import ChatOpenAI

# Create a token provider that handles automatic refresh
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(),
    "https://cognitiveservices.azure.com/.default",
)

llm = ChatOpenAI(
    model="gpt-5-mini",  # Your Azure deployment name
    base_url="https://{your-resource-name}.openai.azure.com/openai/v1/",
    api_key=token_provider,  # Callable that handles token refresh
)

# Use the model as normal
messages = [
    ("system", "You are a helpful assistant."),
    ("human", "Translate 'I love programming' to French."),
]
response = llm.invoke(messages)
print(response.content)
```
The token provider is a callable that automatically retrieves and refreshes authentication tokens, eliminating the need to manually manage token expiration.
Installation requirements: To use Microsoft Entra ID authentication, install the Azure Identity library:
```bash
pip install azure-identity
```
You can also pass a token provider callable to the api_key parameter when using asynchronous functions. You must import DefaultAzureCredential from azure.identity.aio:
```python
from azure.identity.aio import DefaultAzureCredential
from langchain_openai import ChatOpenAI

credential = DefaultAzureCredential()

llm_async = ChatOpenAI(
    model="gpt-5-nano",
    api_key=credential,
)

# Use async methods when using async callable
response = await llm_async.ainvoke("Hello!")
```
When using an async callable for the API key, you must use async methods (ainvoke, astream, etc.). Sync methods will raise an error.
OpenAI has a tool calling (we use “tool calling” and “function calling” interchangeably here) API that lets you describe tools and their arguments, and have the model return a JSON object with a tool to invoke and the inputs to that tool. Tool calling is extremely useful for building tool-using chains and agents, and for getting structured outputs from models more generally.
With ChatOpenAI.bind_tools, we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model. Under the hood these are converted to OpenAI tool schemas, which look like:
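Roughly (a simplified sketch of the shape, not the exact payload):

```python
{
    "type": "function",
    "function": {
        "name": "...",         # tool name
        "description": "...",  # tool description
        "parameters": {...},   # JSON Schema describing the tool's arguments
    },
}
```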
```python
from pydantic import BaseModel, Field


class GetWeather(BaseModel):
    """Get the current weather in a given location"""

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


llm_with_tools = llm.bind_tools([GetWeather])
```
```python
ai_msg = llm_with_tools.invoke(
    "what is the weather like in San Francisco",
)
ai_msg
```
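The resulting AIMessage records the model’s tool calls in structured form; a quick way to inspect them:

```python
# Each entry carries the tool name, parsed arguments, and a call id
for tool_call in ai_msg.tool_calls:
    print(tool_call["name"], tool_call["args"])
```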
As of Aug 6, 2024, OpenAI supports a strict argument when calling tools that will enforce that the tool argument schema is respected by the model. See more.
If strict=True, the tool definition will also be validated, and only a subset of JSON Schema is accepted. Crucially, schemas cannot have optional arguments (those with default values). Read the full docs on what types of schema are supported.
```python
llm_with_tools = llm.bind_tools([GetWeather], strict=True)
ai_msg = llm_with_tools.invoke(
    "what is the weather like in San Francisco",
)
ai_msg
```
OpenAI’s structured output feature can be used simultaneously with tool-calling. The model will either generate tool calls or a response adhering to a desired schema. See example below:
```python
from langchain_openai import ChatOpenAI
from pydantic import BaseModel


def get_weather(location: str) -> str:
    """Get weather at a location."""
    return "It's sunny."


class OutputSchema(BaseModel):
    """Schema for response."""

    answer: str
    justification: str


llm = ChatOpenAI(model="gpt-4.1")

structured_llm = llm.bind_tools(
    [get_weather],
    response_format=OutputSchema,
    strict=True,
)

# Response contains tool calls:
tool_call_response = structured_llm.invoke("What is the weather in SF?")

# structured_response.additional_kwargs["parsed"] contains parsed output
structured_response = structured_llm.invoke(
    "What weighs more, a pound of feathers or a pound of gold?"
)
```
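As noted in the comments above, the parsed Pydantic object rides along in additional_kwargs; a small usage sketch:

```python
parsed = structured_response.additional_kwargs["parsed"]  # an OutputSchema instance
print(parsed.answer)
print(parsed.justification)
```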
OpenAI supports the specification of a context-free grammar for custom tool inputs in lark or regex format. See OpenAI docs for details. The format parameter can be passed into @custom_tool as shown below:
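A rough sketch of what that could look like, assuming custom_tool is importable from langchain_openai and that format accepts OpenAI’s grammar specification dict (the grammar and tool below are made up for illustration):

```python
from langchain_openai import ChatOpenAI, custom_tool  # assumed import location for custom_tool

# Hypothetical lark grammar restricting tool input to simple additions like "2 + 3"
MATH_GRAMMAR = """
start: NUMBER " + " NUMBER
NUMBER: /[0-9]+/
"""


@custom_tool(format={"type": "grammar", "syntax": "lark", "definition": MATH_GRAMMAR})
def do_math(input_string: str) -> str:
    """Add the two numbers in the input string."""
    left, right = input_string.split(" + ")
    return str(int(left) + int(right))


# Custom tools are a Responses API feature, so bind them like any other tool
llm = ChatOpenAI(model="gpt-5", use_responses_api=True)
llm_with_tools = llm.bind_tools([do_math])
```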
OpenAI supports a Responses API that is oriented toward building agentic applications. It includes a suite of built-in tools, including web and file search. It also supports management of conversation state, allowing you to continue a conversational thread without explicitly passing in previous messages, as well as the output from reasoning processes. ChatOpenAI will route to the Responses API if one of these features is used. You can also specify use_responses_api=True when instantiating ChatOpenAI.
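For example, a minimal sketch of opting in and continuing a thread via conversation state (assuming previous_response_id is accepted as an invocation kwarg and the response id is surfaced in response_metadata):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4.1-mini", use_responses_api=True)

first = llm.invoke("Hi, I'm Bob.")
# Continue the thread without re-sending the earlier messages
second = llm.invoke(
    "What is my name?",
    previous_response_id=first.response_metadata["id"],
)
```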
from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-4.1-mini")tool = {"type": "web_search_preview"}llm_with_tools = llm.bind_tools([tool])response = llm_with_tools.invoke("What was a positive news story from today?")
Note that the response includes structured content blocks that include both the text of the response and OpenAI annotations citing its sources. The output message will also contain information from any tool invocations:
```python
response.content_blocks
```
```
[{'type': 'server_tool_call',
  'name': 'web_search',
  'args': {'query': 'positive news stories today', 'type': 'search'},
  'id': 'ws_68cd6f8d72e4819591dab080f4b0c340080067ad5ea8144a'},
 {'type': 'server_tool_result',
  'tool_call_id': 'ws_68cd6f8d72e4819591dab080f4b0c340080067ad5ea8144a',
  'status': 'success'},
 {'type': 'text',
  'text': 'Here are some positive news stories from today...',
  'annotations': [{'end_index': 410,
    'start_index': 337,
    'title': 'Positive News | Real Stories. Real Positive Impact',
    'type': 'citation',
    'url': 'https://www.positivenews.press/?utm_source=openai'},
   {'end_index': 969,
    'start_index': 798,
    'title': "From Green Innovation to Community Triumphs: Uplifting US Stories Lighting Up September 2025 | That's Great News",
    'type': 'citation',
    'url': 'https://info.thatsgreatnews.com/from-green-innovation-to-community-triumphs-uplifting-us-stories-lighting-up-september-2025/?utm_source=openai'}],
  'id': 'msg_68cd6f8e8d448195a807b89f483a1277080067ad5ea8144a'}]
```
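To pull the citations out of the text block, for instance:

```python
# Find the text block and list its citation annotations
text_block = next(b for b in response.content_blocks if b["type"] == "text")
for annotation in text_block.get("annotations", []):
    print(annotation["url"])
```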
You can recover just the text content of the response as a string by using response.text. For example, to stream response text:
```python
for token in llm_with_tools.stream("..."):
    print(token.text, end="|")
```
from langchain_openai import ChatOpenAIllm = ChatOpenAI(model="gpt-4.1-mini")tool = {"type": "image_generation", "quality": "low"}llm_with_tools = llm.bind_tools([tool])ai_message = llm_with_tools.invoke( "Draw a picture of a cute fuzzy cat with an umbrella")
```python
import base64

from IPython.display import Image

# Pull the generated image block out of the message content and render it
image = next(
    item for item in ai_message.content_blocks if item["type"] == "image"
)
Image(base64.b64decode(image["base64"]), width=200)
```