LangChain in Action: How to Build Intelligent AI Applications Easily and Efficiently
LangGraph
What Is LangGraph?
LangGraph is a stateful orchestration library built on top of LangChain, designed to simplify the creation of multi-step, reactive, memory-aware agents. While LangChain provides the building blocks (tools, prompts, memory, chains), LangGraph introduces graphs — workflows in which language models act as control functions, deciding what to do next and maintaining state over time. The result is a framework for complex, multi-step agent workflows that adapt to changing context and user input.
Unlike simple LLM pipelines, LangGraph agents behave more like state machines, with checkpoints, tool invocations, and conversational memory woven into a dynamic process.
Core LangGraph Concepts & Components
createReactAgent
This is the easiest and most popular entry point into LangGraph. It creates a ReAct-style agent (reason and act): one that loops through steps until it reaches a final answer.
It automatically:
- Wraps a language model in a decision-making loop
- Connects tools so the LLM can call them
- Handles memory across interactions
- Implements reactivity — the model can revise steps based on tool outputs
This is best for situations where you want an agent to reason + act + observe + repeat, until a stopping condition is met.
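The reason + act + observe loop can be illustrated with a hand-rolled sketch. This is a simplification for intuition only: `decide` is a hypothetical stand-in for the LLM's next-step decision, and `tools` for the tool registry that `createReactAgent` manages for you.

```typescript
// A simplified illustration of the reason → act → observe loop.
// `decide` stands in for the LLM; `tools` is a hypothetical tool registry.
type Decision =
  | { type: "tool"; name: string; input: string }
  | { type: "final"; answer: string };

type ToolFn = (input: string) => string;

function runReactLoop(
  decide: (observations: string[]) => Decision,
  tools: Record<string, ToolFn>,
  maxSteps = 5,
): string {
  const observations: string[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const decision = decide(observations); // reason
    if (decision.type === "final") return decision.answer; // stop condition
    const result = tools[decision.name](decision.input); // act
    observations.push(result); // observe, then repeat
  }
  return "Stopped: max steps reached";
}

// Example: an agent that looks up one fact, then answers.
const answer = runReactLoop(
  (obs) =>
    obs.length === 0
      ? { type: "tool", name: "lookup", input: "capital of France" }
      : { type: "final", answer: `It is ${obs[0]}.` },
  { lookup: () => "Paris" },
);
console.log(answer); // "It is Paris."
```

The stopping condition here is either a final answer or a step cap; real agents add token budgets and error handling on top of the same skeleton.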
MemorySaver
LangGraph includes this utility to provide state persistence. It stores previous user/agent messages or tool outputs tied to a thread_id.
Think of it like chat memory, where you can:
- Retain context between sessions
- Resume previous conversations
- Support multi-turn flows
MemorySaver stores everything in memory, which is ideal for development or short-lived sessions. In production, LangGraph also allows swapping in persistent backends like Redis or databases.
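The core idea behind MemorySaver can be shown with a toy in-memory store keyed by thread_id. This is a sketch of the concept only, not the actual MemorySaver implementation.

```typescript
// Toy illustration of thread-scoped memory: each thread_id maps to its
// own message history, so separate conversations never mix.
type Message = { role: "user" | "assistant"; content: string };

class ToyMemorySaver {
  private threads = new Map<string, Message[]>();

  append(threadId: string, message: Message): void {
    const history = this.threads.get(threadId) ?? [];
    history.push(message);
    this.threads.set(threadId, history);
  }

  load(threadId: string): Message[] {
    return this.threads.get(threadId) ?? [];
  }
}

const toyMemory = new ToyMemorySaver();
toyMemory.append("42", { role: "user", content: "Weather in SF?" });
toyMemory.append("42", { role: "assistant", content: "Sunny, 18°C" });
toyMemory.append("7", { role: "user", content: "Unrelated thread" });

console.log(toyMemory.load("42").length); // 2 — thread "7" is isolated
```

Swapping the `Map` for Redis or a database table is exactly the kind of substitution LangGraph's persistent checkpointer backends make.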
Runnable Interfaces and Graph Nodes
LangGraph is built around runnables, which are composable functions that take input and return output. When you build a custom graph (instead of using createReactAgent), you define:
- Nodes: These are individual steps in the graph (e.g., call an LLM, call a tool, transform input)
- Edges: Define how data flows from one node to the next
- Conditions: The logic that determines which path to follow
This low-level flexibility is perfect when you want to explicitly define a workflow, such as:
- Step 1: Extract entities
- Step 2: Search for documents
- Step 3: Summarize results
- Step 4: Respond to user
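A workflow like the one above boils down to nodes (functions over shared state) and edges (which node runs next). The following is a hand-rolled sketch of that idea in plain TypeScript; a real graph would use LangGraph's StateGraph, and the node bodies here are placeholder transforms.

```typescript
// Hand-rolled node/edge workflow: each node is a function over shared
// state, and the edge table says which node runs next.
type FlowState = { input: string; entities?: string[]; docs?: string[]; answer?: string };
type FlowNode = (s: FlowState) => FlowState;

const flowNodes: Record<string, FlowNode> = {
  extract: (s) => ({ ...s, entities: s.input.split(" ") }),
  search: (s) => ({ ...s, docs: (s.entities ?? []).map((e) => `doc about ${e}`) }),
  summarize: (s) => ({ ...s, answer: `Found ${s.docs?.length ?? 0} documents` }),
};

// Edges: extract → search → summarize → end (null terminates the run)
const flowEdges: Record<string, string | null> = {
  extract: "search",
  search: "summarize",
  summarize: null,
};

function runFlow(start: string, state: FlowState): FlowState {
  let current: string | null = start;
  while (current !== null) {
    state = flowNodes[current](state);
    current = flowEdges[current];
  }
  return state;
}

const flowResult = runFlow("extract", { input: "weather forecasts" });
console.log(flowResult.answer); // "Found 2 documents"
```

Conditions enter the picture when an edge is computed from state instead of read from a static table, which is what conditional edges do in a real graph.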
StateGraph and Custom Nodes
If you need full control, you can drop down to LangGraph's StateGraph class and construct the graph yourself. This involves:
- StateGraph: the full orchestrated system, built around a shared state schema
- Nodes and edges registered with addNode, addEdge, and addConditionalEdges, wrapping steps like LLM calls, tool executions, or branching logic
This enables building decision trees, loops, retries, or workflows with conditional behavior (e.g., different logic for internal users vs. external users).
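Conditional behavior of that kind reduces to a routing function that inspects state and returns the name of the next node; in LangGraph this is the function you pass to addConditionalEdges. The state fields and node names below are illustrative, not part of any API.

```typescript
// A routing function: inspects state and names the next node.
// Field names (userType, retries) are hypothetical examples.
type RoutingState = { userType: "internal" | "external"; retries: number };

function routeNext(state: RoutingState): "internalFlow" | "externalFlow" | "retry" {
  // Retry a failed step up to twice before continuing.
  if (state.retries > 0 && state.retries < 3) return "retry";
  return state.userType === "internal" ? "internalFlow" : "externalFlow";
}

console.log(routeNext({ userType: "internal", retries: 0 })); // "internalFlow"
console.log(routeNext({ userType: "external", retries: 1 })); // "retry"
```

Because the router is a plain function, it is easy to unit-test branch coverage independently of the LLM.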
State Objects and thread_id
LangGraph supports thread-based state tracking, meaning each conversation is uniquely identified by a thread_id. This is passed when invoking the agent and allows LangGraph to:
- Recall previous context
- Resume workflows
- Avoid duplicated tool calls
The underlying state object can include messages, tool results, metadata, or arbitrary variables, depending on your application.
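A typical per-thread state shape can be sketched as a TypeScript type, with a lookup keyed by thread_id. The field names here are illustrative; real LangGraph state is defined by your own schema.

```typescript
// Illustrative shape for per-thread state: messages, tool results, and
// arbitrary metadata, all looked up by thread_id.
type ThreadState = {
  messages: { role: string; content: string }[];
  toolResults: Record<string, string>;
  metadata: Record<string, unknown>;
};

const stateByThread = new Map<string, ThreadState>();

function getThreadState(threadId: string): ThreadState {
  let state = stateByThread.get(threadId);
  if (!state) {
    state = { messages: [], toolResults: {}, metadata: {} };
    stateByThread.set(threadId, state);
  }
  return state;
}

// Caching a tool result under a stable key avoids duplicated tool calls
// within the same thread.
const threadState = getThreadState("42");
threadState.toolResults["get_weather:SF"] ??= "Sunny, 18°C";
console.log(Object.keys(threadState.toolResults).length); // 1
```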
Built-In Graph Templates
Besides createReactAgent, the broader LangChain ecosystem offers related agent constructors (less common, but powerful):
- createToolCallingAgent (from LangChain): optimized for agents that primarily call tools
- createRetrievalChain (from LangChain): well suited to question answering over documents
- createOpenAIFunctionsAgent (from LangChain): uses OpenAI’s function-calling format for structured outputs
These helpers let you skip the boilerplate and focus on your logic, and their outputs can still be combined with LangGraph’s memory and reactivity.
Hooks and Middleware
LangGraph allows injecting custom hooks during execution — for debugging, logging, or modifying behavior. For example, you could:
- Log tool usage
- Add retry logic on tool failure
- Measure token usage at each step
This is powerful for building production-ready applications where observability and resilience are required.
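Retry-with-logging of this sort can be sketched as a wrapper around any async tool call. This is an illustration of the pattern, not a LangGraph API; the `flaky` function below is a made-up example that fails once before succeeding.

```typescript
// Wrap an async function with simple retry-and-log behavior. Any tool
// call can be wrapped this way before handing it to an agent.
async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 1; i <= attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      console.log(`Attempt ${i} failed, retrying...`);
    }
  }
  throw lastError;
}

// Example: a flaky call that succeeds on the second attempt.
let flakyCalls = 0;
const flaky = async () => {
  flakyCalls++;
  if (flakyCalls < 2) throw new Error("transient failure");
  return "ok";
};

withRetry(flaky).then((result) => console.log(result)); // "ok"
```

The same wrapper shape works for logging tool usage or recording token counts: intercept the call, add the side effect, return the result unchanged.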
Summary: Why LangGraph?
LangGraph gives you agent-level control with framework-level structure. Instead of manually stitching together tools, LLMs, and memory, LangGraph offers:
- A graph-based structure for multi-step reasoning
- Built-in memory management with thread state
- Tool orchestration with flexible invocation logic
- Support for both declarative templates and full control
It sits between the simplicity of a LangChain chain and the full complexity of designing your own orchestration framework.
Building a Weather Agent with LangGraph and LangChain
In this post, we’ll explore how to create a conversational agent that can answer weather-related questions using OpenAI and a real-world weather API. We’ll leverage LangGraph — a stateful graph framework built on top of LangChain — to manage agent memory and tool usage across multiple interactions. LangGraph makes it easy to build reactive, multi-step language agent workflows that persist state between user queries.
We’ll create a custom tool that queries the OpenWeatherMap API and integrate it into a LangGraph-powered agent using createReactAgent(). This allows us to maintain context between user messages, like following up a weather query about San Francisco with another about New York, all while staying within the same session memory.
// index.ts
import { Tool } from "@langchain/core/tools";
import { ChatOpenAI } from "@langchain/openai";
import { MemorySaver } from "@langchain/langgraph";
import { HumanMessage } from "@langchain/core/messages";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import * as dotenv from "dotenv";

dotenv.config();

// ----------------------
// Custom Weather Tool
// ----------------------
class WeatherTool extends Tool {
  name = "get_weather";
  description =
    "Fetches current weather data for a given city. Use this when asked about weather.";

  async _call(city: string): Promise<string> {
    try {
      const url = `${process.env.WEATHER_API_URL}?q=${encodeURIComponent(
        city,
      )}&appid=${process.env.WEATHER_API_KEY}&units=metric`;
      const res = await fetch(url);
      if (!res.ok) return `Weather API error: ${res.statusText}`;
      const data = await res.json();
      const weather = data.weather?.[0]?.description ?? "No description";
      const temp = data.main?.temp ?? "N/A";
      const feels_like = data.main?.feels_like ?? "N/A";
      return `Weather in ${city}: ${weather}, Temp: ${temp}°C, Feels like: ${feels_like}°C`;
    } catch (err) {
      return `Error: ${(err as Error).message}`;
    }
  }
}

// ----------------------
// Setup LLM Agent
// ----------------------
const weatherTool = new WeatherTool();
const agentModel = new ChatOpenAI({ temperature: 0 });
const memory = new MemorySaver();

const agent = createReactAgent({
  llm: agentModel,
  tools: [weatherTool],
  checkpointSaver: memory,
});

// ----------------------
// Use the Agent
// ----------------------
const run = async () => {
  const first = await agent.invoke(
    { messages: [new HumanMessage("What is the weather in San Francisco?")] },
    { configurable: { thread_id: "42" } },
  );
  console.log("[SF]", first.messages[first.messages.length - 1].content);

  const second = await agent.invoke(
    { messages: [new HumanMessage("what about New York?")] },
    { configurable: { thread_id: "42" } },
  );
  console.log("[NY]", second.messages[second.messages.length - 1].content);
};

run();
Defining a Custom Tool for Weather Data
The application begins by creating a custom tool responsible for fetching live weather information. This tool is designed to integrate with the agent system and follows the interface conventions provided by LangChain. The tool is constructed to accept a city name, make an HTTP request to the OpenWeatherMap API using environment variables for the endpoint and key, and return a formatted string describing the current weather conditions. This encapsulation allows the language model to rely on external functionality when needed — in this case, accessing real-time weather data — without embedding any logic directly in the model itself.
Initializing the Language Model and Memory System
Next, a language model from OpenAI is initialized with a low temperature setting to encourage deterministic outputs. This model is the core reasoning engine for the agent. Alongside the model, a memory component is instantiated using LangGraph’s MemorySaver. This is a built-in checkpointing mechanism that allows the agent to persist state across multiple invocations. Unlike stateless chains, which reset between inputs, LangGraph agents can maintain conversational context, which is critical for building multi-turn, reactive applications.
Creating the Reactive Agent with LangGraph
The core of the logic centers on createReactAgent, a utility provided by LangGraph. This function constructs a graph-based agent that operates in a loop: it processes user input, reasons about the next step using the language model, determines whether any tools should be used, invokes the relevant tool if needed, and updates the memory accordingly. The “react” in the name comes from the ReAct (Reason + Act) pattern: the agent re-evaluates its plan based on the results of tool usage or prior steps, producing a dynamic, adaptive workflow rather than a static sequence of actions.
The agent is configured with the language model, the weather tool, and the memory system. With this setup, it is capable of not only generating responses but also choosing when to rely on tools and preserving the history of the interaction for continuity.
Executing Multi-Turn Interactions with Shared Memory
The final section of the code demonstrates how the agent is used across multiple user inputs. Two queries are made sequentially, each asking about the weather in different cities. Both invocations include the same thread identifier, which signals to the agent that they belong to the same conversation. This is where LangGraph’s memory system becomes essential — it allows the agent to treat the second message as a follow-up to the first, leveraging the full context of the ongoing dialogue.
By preserving the message history, LangGraph enables more natural interactions where the user doesn’t need to repeat themselves. The agent retains what was previously discussed and builds on it, just like a human would in a real conversation.
Leveraging LangGraph for Stateful, Tool-Augmented Agents
This code illustrates LangGraph’s core strength: it allows developers to construct agents that are not only intelligent but also stateful and tool-aware. Unlike single-turn language models, LangGraph agents can reactively navigate complex workflows, remember previous steps, and incorporate external tools in a seamless way. This design is particularly useful in cases where the model must perform reasoning and action over multiple steps — such as retrieving information, validating data, or responding to follow-up questions.