Getting Started With LangChain for Beginners

This tutorial demonstrates how to use the LangChain framework to connect with OpenAI and other LLMs, work with various chains, and build a basic chatbot with history.

By Amit Chaudhary · Mar. 28, 25 · Tutorial

Large language models (LLMs) like OpenAI’s GPT-4 and Hugging Face models are powerful, but using them effectively in applications requires more than just calling an API. LangChain is a framework that simplifies working with LLMs, enabling developers to create advanced AI applications with ease.

In this article, we’ll cover:

  1. What is LangChain?
  2. How to install and set up LangChain
  3. Basic usage: accessing OpenAI LLMs, models on Hugging Face, prompt templates, and chains
  4. A simple LangChain chatbot example

What Is LangChain?

LangChain is an open-source framework designed to help developers build applications powered by LLMs. It provides tools to structure LLM interactions, manage memory, integrate APIs, and create complex workflows.

Benefits of LangChain

  • Simplifies handling prompts and responses
  • Supports multiple LLM providers (OpenAI, Hugging Face, Anthropic, etc.)
  • Enables memory, retrieval, and chaining multiple AI calls
  • Supports building chatbots, agents, and AI-powered apps

A Step-by-Step Guide

Step 1: Installation

To get started, install LangChain and the OpenAI integration packages with pip. Open your terminal and run the following command:

Shell
 
pip install langchain langchain_openai openai


Set up your API key as an environment variable:

Python
 
import os
os.environ["OPENAI_API_KEY"] = "your-api-key-here"
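
A note on the key: hardcoding it in source code is risky. As a small pure-Python sketch (the helper name is hypothetical, not part of LangChain), you can read the key from the environment and fail fast if it is missing:

```python
import os

def require_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Return the API key from the environment, or raise a clear error."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"{name} is not set; export it before running.")
    return key

# Example: set the variable (e.g., exported in your shell), then read it back
os.environ["OPENAI_API_KEY"] = "your-api-key-here"
print(require_api_key())
```

This way, a missing key produces one clear error at startup instead of a confusing authentication failure deep inside an API call.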


Step 2: Using LangChain’s ChatOpenAI

Now, let’s use OpenAI’s model to generate text.

Basic Example: Generating a Response

Python
 
from langchain_openai import ChatOpenAI

# Initialize the model
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.5)

# Write your prompt
prompt = "What is LangChain?"

# print the response
print(llm.invoke(prompt))


Explanation

  • from langchain_openai import ChatOpenAI. This imports the ChatOpenAI class from the langchain_openai package, which lets us use OpenAI’s GPT-based models for conversational AI.
  • ChatOpenAI(). This initializes the chat model.
  • model="gpt-3.5-turbo". OpenAI offers several models, so we pass the one we want to use for the response; if none is specified, ChatOpenAI defaults to gpt-3.5-turbo.
  • temperature=0.5. ChatOpenAI is initialized with a temperature of 0.5. Temperature controls randomness in the response:
    • 0.0: Deterministic (always returns the same output for the same input).
    • 0.7: More creative/random responses.
    • 1.0: Highly random and unpredictable responses.
    • Since temperature = 0.5, it balances creativity and reliability.
  • prompt = "What is LangChain?". Here we define the prompt that will be sent to the ChatOpenAI model for processing.
  • llm.invoke(prompt). This sends the prompt to the LLM and returns the response.

Step 3: Using Other LLMs With HuggingFacePipeline

Python
 
from langchain_huggingface import HuggingFacePipeline

# Initialize the model; here we use google/flan-t5-base
# (flan-t5 is a seq2seq model, so the pipeline task is "text2text-generation")
llm = HuggingFacePipeline.from_model_id(
    model_id="google/flan-t5-base",
    task="text2text-generation",
    pipeline_kwargs={"max_new_tokens": 200, "temperature": 0.1},
)

# print the response
print(llm.invoke("What is Deep Learning?"))


# In summary, here we learned about using a different LLM with LangChain:
# instead of OpenAI, we used a model hosted on Hugging Face.
# This lets us interact with models uploaded by the community.


Step 4: Chaining Prompts With LLMs

LangChain lets you connect prompts and models into chains.

Python
 
# Prompt templates and chaining using LangChain
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model_name="gpt-4o", temperature=0.9)

# PromptTemplate lets you build prompts that accept variables;
# a template can have multiple variables
template = "What is the impact on my health, if I eat {food} and drink {drink}?"
prompt = PromptTemplate.from_template(template)

# Chains come into the picture to go beyond a single LLM call:
# they tie prompts and LLMs together into a sequence of calls.
# Here we initialize our chain with the prompt and the LLM
chain = prompt | llm

# Invoke the chain with the food parameter set to "Bread" and drink set to "wine"
print(chain.invoke({"food": "Bread", "drink": "wine"}))


Why Use LangChain?

  • Automates the process of formatting prompts
  • Helps in multi-step workflows
  • Makes code modular and scalable
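
The `prompt | llm` pipe syntax is worth pausing on. As a mental model only (a pure-Python sketch, not LangChain's actual implementation), each piece is a step with an invoke method, and `|` glues steps into a pipeline:

```python
class Step:
    """A minimal runnable: wraps a function with invoke(), composable via |."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # (a | b).invoke(x) == b.invoke(a.invoke(x))
        return Step(lambda value: other.invoke(self.invoke(value)))

# A toy "prompt template" step and a toy "model" step
prompt = Step(lambda vars: "What is the impact on my health, if I eat {food} and drink {drink}?".format(**vars))
fake_llm = Step(lambda text: f"ANSWER({text})")

chain = prompt | fake_llm
print(chain.invoke({"food": "Bread", "drink": "wine"}))
# → ANSWER(What is the impact on my health, if I eat Bread and drink wine?)
```

LangChain's real runnables add streaming, batching, and async on top, but the composition idea is the same: the output of each step flows into the next.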

Step 5: Chain Multiple Tasks in a Sequence

LangChain lets you combine multiple chains, where the output of the first chain is used as the input to the second.

Python
 
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI
from langchain.prompts import PromptTemplate

llm = ChatOpenAI(model_name="gpt-4o", temperature=0)

# First template and chain
template = "Which is the most {adjective} building in the world?"
prompt = PromptTemplate.from_template(template)
chain = prompt | llm | StrOutputParser()

# Second template, chained with the first: the first chain's output
# is bound to the {building} variable of the second prompt
template_second = "Tell me more about the {building}?"
prompt_second = PromptTemplate.from_template(template_second)
chain_second = {"building": chain} | prompt_second | llm | StrOutputParser()

# Invoke the chained calls, passing the value for the first chain's parameter
print(chain_second.invoke({"adjective": "famous"}))


Why Use Sequential Chains?

  • Merge chains by using the output of one as the input of the next
  • Execute a series of chains in order
  • Create a seamless flow of processing steps
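
The dict in `{"building": chain}` is the key trick: the first chain's output gets bound to a template variable of the second. A pure-Python sketch of that data flow (the toy functions stand in for LLM calls and are not LangChain internals):

```python
def first_chain(inputs):
    # Stand-in for: prompt | llm | StrOutputParser()
    return f"the most {inputs['adjective']} building"

def second_chain(inputs):
    # The mapping step: bind the first chain's output to {building}
    building = first_chain(inputs)
    prompt = f"Tell me more about {building}?"
    # Stand-in for the second LLM call
    return f"LLM_RESPONSE({prompt})"

print(second_chain({"adjective": "famous"}))
# → LLM_RESPONSE(Tell me more about the most famous building?)
```

The caller only supplies the first chain's input; every intermediate value is produced and routed by the chain itself.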

Step 6: Adding Memory (Chatbot Example)

Want your chatbot to remember past conversations? LangChain Memory helps!

Python
 
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain_openai import ChatOpenAI

# Initialize model with memory
llm = ChatOpenAI(model="gpt-3.5-turbo")
memory = ConversationBufferMemory()

# Create a conversation chain
conversation = ConversationChain(llm=llm, memory=memory)

# Start chatting!
print(conversation.invoke("Hello! How is the weather today?")["response"])
print(conversation.invoke("Can I go biking today?")["response"])


Why Use Memory?

  • Enables AI to remember past inputs
  • Creates a more interactive chatbot
  • Supports multiple types of memory (buffer, summarization, vector, etc.)
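
Under the hood, buffer memory simply keeps the running transcript and prepends it to each new prompt. A stripped-down pure-Python sketch of that idea (a hypothetical class, not LangChain's implementation):

```python
class BufferMemory:
    """Keeps the full conversation and replays it before every new turn."""
    def __init__(self):
        self.turns = []

    def save(self, user, ai):
        self.turns.append(("Human", user))
        self.turns.append(("AI", ai))

    def as_context(self):
        return "\n".join(f"{role}: {text}" for role, text in self.turns)

memory = BufferMemory()
memory.save("Hello! How is the weather today?", "Sunny and mild.")
memory.save("Can I go biking today?", "Yes, conditions look great.")

# The next prompt sent to the model would be prefixed with the whole history:
print(memory.as_context())
```

Because the whole buffer is resent on every turn, long conversations grow the prompt quickly; that is why LangChain also offers summarization and vector-backed memory variants.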

What’s Next?

Here, we have explored some basic components of LangChain. Next, explore the following to tap into its real power:

  • Explore LangChain agents for AI-driven decision-making
  • Implement retrieval-augmented generation (RAG) to fetch real-time data

Opinions expressed by DZone contributors are their own.
