Sign up for my FREE upcoming seminar at Soft Uni:
LangChain in Action: How to Build Intelligent AI Applications Easily and Efficiently?

Harnessing LangChain Agents: Building Smarter AI Interactions

In the world of AI and large language models, agents are a core concept that makes applications more dynamic and adaptable. Simply put, an agent in LangChain is an intelligent component that uses a language model to perform tasks by interacting with various tools and external systems. Unlike static workflows, agents can decide which tools to use and in what sequence, based on the context of a query or a task.

Agents are not just passive responders - they are capable of reasoning, planning, and taking actions autonomously. By leveraging LangChain's framework, agents can interact with APIs, databases, vector stores, and more to provide intelligent, multi-step solutions to complex problems.

LangChain Agents

Use Cases for Agents

Knowledge-Driven Customer Support

Building an AI-powered agent that helps resolve customer queries by fetching relevant information from various data sources.

  • 🔥 The agent uses a vector database to retrieve relevant FAQ or documentation snippets.
  • 🔥 It queries a CRM API to pull customer details or order history.
  • 🔥 It combines these inputs to provide a context-aware, personalized response to the user.

Example: A customer asks, "Can you help me with my order status and explain the return policy?" The agent retrieves the order details from the API and fetches the policy from a document store, presenting a cohesive response.

Financial Assistant for Investment Recommendations

Assisting users in making informed investment decisions by fetching and analyzing financial data.

  • 🔥 The agent queries real-time stock price APIs or financial news APIs.
  • 🔥 It uses a calculator tool to assess portfolio performance or risk analysis.
  • 🔥 Based on the user’s preferences (e.g., "low-risk investments"), it recommends potential options.

Example: A user asks, "What are some low-risk stocks to invest in this month?" The agent retrieves market data, filters based on volatility, and explains its recommendations.
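The volatility-filtering step from this example can be sketched in plain TypeScript. The data shape, sample values, and risk threshold below are invented for illustration; a real agent would pull quotes from a market-data API and derive the threshold from the user's stated preference:

```typescript
// Hypothetical sketch of the "filter based on volatility" step.
interface StockQuote {
  symbol: string;
  price: number;
  volatility: number; // e.g. 30-day annualized volatility (invented values)
}

const quotes: StockQuote[] = [
  { symbol: "AAA", price: 120, volatility: 0.12 },
  { symbol: "BBB", price: 45, volatility: 0.45 },
  { symbol: "CCC", price: 80, volatility: 0.18 },
];

// "Low-risk" here is an assumed cutoff; an agent would infer it from the query.
function lowRisk(data: StockQuote[], maxVolatility = 0.2): string[] {
  return data.filter((q) => q.volatility <= maxVolatility).map((q) => q.symbol);
}
```

An agent would run this kind of filter as one tool call inside a longer reasoning chain, then explain the surviving candidates to the user.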

Automated Travel Planner

Planning trips by integrating multiple data sources, like flight search engines, weather APIs, and hotel booking services.

  • 🔥 The agent queries a flight search API to find the best deals for a destination.
  • 🔥 It retrieves weather data to check suitable travel dates.
  • 🔥 It interacts with a hotel booking API to recommend accommodations within the user’s budget.

Example: A user asks, "Plan a trip to Paris next month with flights under $500 and a hotel close to the Eiffel Tower." The agent provides a complete itinerary with flights, weather forecasts, and hotel suggestions.


Why These Use Cases Shine

These use cases highlight the agent’s ability to:

  • 🔥Integrate Multiple Tools: Interact with APIs, databases, and external tools.
  • 🔥Provide Personalized Responses: Tailor outputs based on user inputs and retrieved data.
  • 🔥Handle Complex, Multi-Step Tasks: Break down user requests into actionable steps and deliver a unified result.

Agents empower applications to go beyond static, single-step workflows, making them suitable for real-world challenges across industries.
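As a rough illustration of the multi-tool idea, here is a minimal tool registry and dispatcher in TypeScript. This is a hypothetical sketch, not LangChain's actual API; a LangChain agent automates the tool choice with an LLM, but the underlying pattern is the same:

```typescript
// Hypothetical sketch: each tool exposes a name and a call() function,
// and a dispatcher picks a tool by name. An agent replaces the hardcoded
// name lookup with LLM reasoning over the tool descriptions.
interface SimpleTool {
  name: string;
  call(input: string): string;
}

const tools: SimpleTool[] = [
  { name: "order_status", call: (id) => `Order ${id} is in transit.` },
  { name: "return_policy", call: () => "Items can be returned within 30 days." },
];

function dispatch(toolName: string, input: string): string {
  const tool = tools.find((t) => t.name === toolName);
  return tool ? tool.call(input) : `Unknown tool: ${toolName}`;
}
```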

Understanding Zero Shot React Description in AI Agents

Zero Shot React Description is a technique used in AI agents to make decisions dynamically without requiring prior training examples or fine-tuning. It enables the agent to reason about a problem, decide on an appropriate tool, and generate an action—all in real-time.

Why Use Zero Shot React Agents?

  • 🔥No Training Required: Unlike supervised learning, this method allows AI to solve tasks without pre-labeled examples.
  • 🔥Dynamic Decision Making: The agent can decide on the best tool to use based on the input question.
  • 🔥Explainable AI: The ReAct (Reason + Act) framework provides a step-by-step explanation of how the AI reaches a decision.
  • 🔥Efficient Use of Tools: The agent selects only relevant tools for the given task, making it adaptable to various scenarios.

Where Should You Use It?

  • 🔥AI-powered Assistants: When building AI agents that can dynamically call APIs, search databases, or use tools.
  • 🔥Automated Data Processing: When working with calculators, data parsers, or document analyzers.
  • 🔥Chatbots and Helpdesk Agents: Where the bot needs to determine whether to use an external tool or answer directly.
  • 🔥Scientific and Engineering Applications: To dynamically calculate formulas, process text data, or retrieve structured knowledge.

╔════════════════════════════════════════════════════════════════════════════╗
║                         🌟 EDUCATIONAL EXAMPLE 🌟                          ║
║                                                                            ║
║  📌 This is a minimal and short working example for educational purposes.  ║
║  ⚠️ Not optimized for production!                                          ║
║  📦 Versions Used:                                                         ║
║     - "@langchain/core": "^0.3.38"                                         ║
║     - "@langchain/openai": "^0.4.2"                                        ║
║  🔄 Note: LangChain is transitioning from a monolithic structure to a      ║
║      modular package structure. Ensure compatibility with future updates.  ║
╚════════════════════════════════════════════════════════════════════════════╝

import { Tool } from "@langchain/core/tools"; // used to define custom tools that an agent can use to perform specific tasks
import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents"; // used to create an agent that can execute tasks
import { ChatOpenAI } from "@langchain/openai"; // used to create an OpenAI language model
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts"; // used to create a prompt template for the agent
import dotenv from "dotenv"; // used to load environment variables from the .env file

dotenv.config(); // load environment variables from the .env file

const model = new ChatOpenAI({ // create an instance of the ChatOpenAI class
    modelName: "gpt-3.5-turbo",
    temperature: 0,
    openAIApiKey: process.env.OPENAI_API_KEY!,
})

class Calculator extends Tool { // define a custom tool that can perform basic arithmetic operations
    name = "calculator";
    description = "Perform basic arithmetic operations.";

    async _call(input: string): Promise<string> {
        try {
            const result = eval(input); // caution: eval executes arbitrary JS; fine for a demo, unsafe for production
            return `The result of ${input} is ${result}`;
        } catch (error) {
            return "Invalid mathematical expression.";
        }
    }
}

async function run() { // define an async function to run the agent
    const tools = [new Calculator()]; // create an array of tools

    const prompt = ChatPromptTemplate.fromMessages([ // create a prompt template for the agent
        ["system", "You are an AI assistant that can answer user questions using a calculator."],
        ["human", "{input}"],
        ["ai", "{agent_scratchpad}"], // Include agent_scratchpad
    ]);

    const agent = await createOpenAIFunctionsAgent({ // create an agent that can execute tasks
        llm: model,
        tools,
        prompt,
    });

    const executor = new AgentExecutor({ // create an executor that can execute tasks
        agent,
        tools,
        verbose: false, // set to true to see LangChain's verbose debug logs, which show detailed information about the agent's execution process.
    })

    const result = await executor.invoke({ input: "What is 42 times 7?" }); // execute the agent with the given input

    console.log("Agent Response:", result.output); // print the agent's response
}

run().catch(console.error); // run the function and catch any potential errors

Explanation of the Code

  • 🔥Imports Required Libraries: Loads LangChain, dotenv (for API keys), and OpenAI API.
  • 🔥Defines a Calculator Tool: Creates a custom tool for performing arithmetic calculations.
  • 🔥Creates an OpenAI GPT Model: Initializes gpt-3.5-turbo to process responses.
  • 🔥Defines a Structured Prompt: Gives the model a system instruction plus placeholders for the user input and the agent's scratchpad.
  • 🔥Creates an Agent: Uses createOpenAIFunctionsAgent() to build an AI assistant that selects tools via OpenAI function calling.
  • 🔥Executes the Agent Using an Executor: Runs the agent and retrieves the response dynamically.
  • 🔥Handles Errors Gracefully: Prevents crashes by catching errors at runtime.
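One caveat worth adding: the Calculator tool above evaluates its input with eval(), which will execute any JavaScript it receives. A minimal sketch of a safer _call body, assuming a whitelist of arithmetic characters is acceptable for your use case (this helper is hypothetical, not part of LangChain):

```typescript
// Validate that the input is a plain arithmetic expression before evaluating.
function safeArithmetic(input: string): string {
  // Allow only digits, whitespace, parentheses, and basic operators.
  if (!/^[\d\s+\-*/().%]+$/.test(input)) {
    return "Invalid mathematical expression.";
  }
  try {
    // Function() still evaluates JS, but the whitelist above restricts the
    // input to arithmetic; for production, prefer a real expression parser.
    const result = new Function(`return (${input});`)();
    return `The result of ${input} is ${result}`;
  } catch {
    return "Invalid mathematical expression.";
  }
}
```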

Conversational ReAct Description Agent

A Conversational ReAct Description Agent is a special type of AI agent that:

  • 🔥Remembers past interactions – It maintains memory of the conversation.
  • 🔥Uses reasoning before acting – It analyzes the query before deciding what action to take.
  • 🔥Utilizes tools dynamically – If needed, it calls external tools (e.g., a calculator) to solve queries.
  • 🔥Provides structured responses – The agent follows a predefined format when responding.

This type of agent is based on ReAct (Reason + Act), a concept that combines reasoning and action and enables the LLM to think step by step while interacting with external tools.
Instead of just generating an answer, the model follows this loop:
Thought: Thinks step-by-step through the problem.
Action: Calls a tool or API (e.g., search, math, API call).
Observation: Gets result from that tool.
Repeat: Continues reasoning with the new info.
Answer: Final result after enough steps.

You can use it for: AI Chatbots, Customer Support Bots, Virtual Assistants, Automation Agents
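The loop above can be simulated in plain TypeScript without any LLM. The "thought" and the tool choice below are hardcoded for illustration; in a real agent, the model produces them at each step:

```typescript
// Hand-rolled simulation of a single ReAct iteration (no LLM involved).
type Step = { thought: string; action?: string; observation?: string };

function reactLoop(question: string): { steps: Step[]; answer: string } {
  const steps: Step[] = [];
  // Thought: decide a tool is needed. Action: call the calculator.
  // Observation: record its result, then produce the final answer.
  const match = question.match(/(\d+)\s*\*\s*(\d+)/);
  if (match) {
    const result = Number(match[1]) * Number(match[2]);
    steps.push({
      thought: "This is a math question; I should use the calculator.",
      action: `calculator(${match[1]} * ${match[2]})`,
      observation: String(result),
    });
    return { steps, answer: `The answer is ${result}.` };
  }
  steps.push({ thought: "No tool needed; answer directly." });
  return { steps, answer: "I can only multiply in this sketch." };
}
```

A real ReAct agent repeats this Thought/Action/Observation cycle until the model decides it has enough information to answer.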


╔════════════════════════════════════════════════════════════════════════════╗
║                         🌟 EDUCATIONAL EXAMPLE 🌟                          ║
║                                                                            ║
║  📌 This is a minimal and short working example for educational purposes.  ║
║  ⚠️ Not optimized for production!                                          ║
║  📦 Versions Used:                                                         ║
║     - "@langchain/core": "^0.3.38"                                         ║
║     - "@langchain/openai": "^0.4.2"                                        ║
║  🔄 Note: LangChain is transitioning from a monolithic structure to a      ║
║      modular package structure. Ensure compatibility with future updates.  ║
╚════════════════════════════════════════════════════════════════════════════╝

import { ChatOpenAI } from "@langchain/openai"; // ✅ Enables interaction with OpenAI's API, allowing the AI model to process user input and generate responses.
import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents"; // ✅ Manages AI agents, enabling them to reason, select, and execute external tools dynamically.
import { Tool } from "langchain/tools"; // ✅ Defines custom tools (functions) that the AI agent can call to extend its capabilities (e.g., calculations, time retrieval).
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts"; // ✅ Helps structure and format prompts, ensuring AI follows a logical reasoning process before responding.
import dotenv from "dotenv"; // ✅ Loads environment variables from a `.env` file, allowing secure storage of API keys and configurations.
import readline from "readline" // ✅ Creates a command-line interface (CLI) for real-time user input and interaction with the AI.

dotenv.config(); // ✅ loads environment variables from .env file

const model = new ChatOpenAI({ temperature: 0, modelName: "gpt-3.5-turbo", apiKey: process.env.OPENAI_API_KEY }); // ✅ creates an instance of the ChatOpenAI class, which is a wrapper around OpenAI's API

class CalculatorTool extends Tool { // ✅ creates a custom tool that can perform arithmetic calculations
    name = "calculator";
    description = "Useful for when you need to answer questions about math";

    async _call(input: string) {
        try {
            return eval(input).toString(); // caution: eval executes arbitrary JS; demo only
        } catch (error) {
            console.error(error);
            return "Error: " + error;
        }
    }
}

class TimerTool extends Tool { // ✅ creates a custom tool that can track time
    name = "timer";
    description = "Useful for when you need to track time";

    async _call(input: string) {
        return new Date().toLocaleTimeString();
    }
}

class WeatherTool extends Tool { // ✅ creates a custom tool that can fetch weather data
    name = "weather";
    description = "Fetches the current weather for a given city. Provide the city name as input.";

    async _call(city: string) {
        if (!city) return "Error: Please provide a city name.";

        try {
            const response = await fetch(`${process.env.WEATHER_API_URL}?q=${city}&appid=${process.env.WEATHER_API_KEY}&units=metric`);
            const data = await response.json();

            if (data.cod !== 200) return `Error: ${data.message}`;
            
            return `The weather in ${data.name} is ${data.weather[0].description} with a temperature of ${data.main.temp}°C.`;
        } catch (error) {
            return "Error fetching weather data.";
        }
    }
}

async function run() {
    // ✅ Loads the tools into the AI agent.
    const tools = [ new CalculatorTool(), new TimerTool(), new WeatherTool() ]; 
    let chat_history: { role: string, content: string }[] = [] // 🔥 Memory to retain past interactions

    // ✅ Creates a structured prompt template that enforces reasoning before acting. 
    // The AI must analyze the question, decide if a tool is needed, 
    // execute the tool, and only then respond.
    const prompt = ChatPromptTemplate.fromMessages([ 
        ["system", "You are a helpful AI assistant with access to tools. Follow these steps:" +
            "1. Think about the user's question" + 
            "2. If a tool is needed, decide which one to use" + 
            "3. Call the tool and observe it's result" +
            "4. Respond to the user in a structured format" +
            "Do not respond until you have obsered the tool's result."
        ],
        // ✅ Stores the conversation history for better context awareness.
        new MessagesPlaceholder("chat_history"), 
        ["human", "{input}"], // ✅ Represents the user’s input.

        // ✅ A scratchpad where the agent stores its reasoning process and tool interactions.
        new MessagesPlaceholder("agent_scratchpad"), 
    ]);

    // ✅ Create an AI agent that can reason and use external tools when necessary.
    const agent = await createOpenAIFunctionsAgent({ llm: model, tools, prompt }); 

    // ✅ Create an executor to handle agent interactions and tool execution.
    const executor = new AgentExecutor({ agent, tools }); 
    
    // ✅ Set up a readline interface to enable user interaction via the terminal.
    const rl = readline.createInterface({ input: process.stdin, output: process.stdout }); 

    const askQuestion = async () => { // ✅ Asks the user for input and processes their question.
        // 1️⃣ Reads user input from the terminal.
        rl.question("You: ", async (input) => { 

            // 2️⃣ Passes the input to the AI agent for processing.
            const result = await executor.invoke({ input, chat_history }); // the AgentExecutor fills agent_scratchpad internally

            console.log("🤖 Agent:", result.output); // prints the AI's response to the user in the terminal.

            // 3️⃣ Stores conversation history to maintain context.
            chat_history.push({ role: 'user', content: input }); 

            // 3️⃣ Stores conversation history to maintain context.
            chat_history.push({ role: 'assistant', content: result.output }); 

            askQuestion(); // 4️⃣ Loops the function to enable continuous interaction.
        })
    }
    askQuestion(); // ✅ Starts the conversation loop.
}
run().catch(console.error); // ✅ Handles any errors that may occur during the execution.

How It All Comes Together

  • 1️⃣ User types a question → e.g., "What is 10 * 5 ?"
  • 2️⃣ AI analyzes the question
  • 3️⃣ AI decides whether it needs a tool (e.g., calculator, timer, or weather)
  • 4️⃣ AI executes the tool and observes the result
  • 5️⃣ AI constructs a structured response and replies
  • 6️⃣ The conversation continues until the user exits

Why Is This Approach Powerful?

  • Tool-based AI reasoning → The AI thinks before acting, ensuring accuracy.
  • Expandable → You can add more tools (e.g., weather API, currency converter).
  • Memory retention → The chatbot remembers past conversations.
  • Interactive → Works in real-time via the terminal.

This chatbot is a great example of LangChain’s ReAct (Reason + Act) Framework in action. By structuring the AI’s thinking process and enabling external tool usage, we create an assistant that goes beyond simple text responses—it can calculate, track time, use weather data, and even be extended with more functionality. With just a few more tools, this AI could become a powerful assistant for any domain!

Other LangChain Agents

  • 🔥MRKL (Modular Reasoning, Knowledge, and Language) System: A more advanced agent type that combines reasoning, knowledge retrieval, and language capabilities, selecting appropriate tools or APIs through modular reasoning. Best for complex, multi-step tasks that combine reasoning with external knowledge retrieval.
  • 🔥Self-Ask with Search: Uses a search tool to retrieve information before answering the user's query, searching iteratively until sufficient information is gathered. Best for open-domain question answering where the agent needs external information.
  • 🔥Multi-Action Agent: Can take multiple actions in a single turn, which is useful when a query requires invoking several tools simultaneously or in sequence. Best for complex workflows involving multiple APIs or tools, such as planning a trip with flight, hotel, and activity booking.
  • 🔥Tool-Only Agent: Does not use the LLM for reasoning; it directly invokes a predefined tool based on the user's query. Best for simplified scenarios where the user input maps directly to a single tool action.
  • And Others
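The Tool-Only Agent idea can be sketched as a direct keyword-to-tool router; no model call is involved. All routes and names below are invented for illustration:

```typescript
// Hypothetical sketch: map query patterns straight to tools, no LLM reasoning.
const toolRoutes: Array<{ pattern: RegExp; tool: (q: string) => string }> = [
  { pattern: /time/i, tool: () => "Tool: timer" },
  { pattern: /weather/i, tool: () => "Tool: weather" },
  { pattern: /[\d+\-*/]/, tool: () => "Tool: calculator" }, // digits or operators
];

// First matching route wins; unmatched queries get a fallback message.
function route(query: string): string {
  const match = toolRoutes.find((r) => r.pattern.test(query));
  return match ? match.tool(query) : "No matching tool.";
}
```

This trades flexibility for predictability: routing is instant and deterministic, but any query outside the patterns falls through.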

MCP - Model Context Protocol

Model Context Protocol (MCP) is a standard protocol that allows language models (LLMs) or AI agents to interact with external tools and APIs in a safe, controlled, and modular way. It defines a set of rules and standards for how AI agents interact with external tools.


// Project structure suggestion based on MCP (Model Context Protocol)
// - /client (CLI interface)
// - /server (Orchestration logic)
// - /tool (All external tool definitions)
// - /llm (LLM model config)
// - index.ts (entry point)

🧱 Project Folder Structure (MCP Pattern) This structure follows the MCP, where each layer has a single responsibility. It helps you scale and debug complex LLM applications by clearly separating the interface, logic, tools, and model.


// === FILE: /tool/CalculatorTool.ts ===
import { Tool } from "langchain/tools";

export class CalculatorTool extends Tool {
  name = "calculator";
  description = "Useful for when you need to answer questions about math";

  async _call(input: string) {
    try {
      return eval(input).toString(); // caution: eval executes arbitrary JS; demo only
    } catch (error) {
      console.error(error);
      return "Error: " + error;
    }
  }
}

🛠️ Calculator Tool This tool is part of the /tool layer in the MCP structure. It extends LangChain’s Tool class and allows the LLM agent to evaluate basic math expressions like 5 + 3 or 12 / 4. The _call() method uses JavaScript’s eval() to compute the result and returns it as a string. If the expression is invalid, it catches the error and returns a message.


// === FILE: /tool/TimerTool.ts ===
import { Tool } from "langchain/tools";

export class TimerTool extends Tool {
  name = "timer";
  description = "Useful for when you need to track time";

  async _call(input: string) {
    return new Date().toLocaleTimeString();
  }
}

⏰ Timer Tool This tool belongs to the /tool layer in the MCP (Model Context Protocol) structure. It allows the agent to return the current local time. When the tool is invoked, the _call() method simply responds with the system time using JavaScript’s toLocaleTimeString().
This is especially useful when a user asks something like “What time is it now?” — the agent can call this tool to give a real-time answer.


// === FILE: /tool/WeatherTool.ts ===
import { Tool } from "langchain/tools";

export class WeatherTool extends Tool {
  name = "weather";
  description = "Fetches the current weather for a given city. Provide the city name as input.";

  async _call(city: string) {
    if (!city) return "Error: Please provide a city name.";

    try {
      const response = await fetch(`${process.env.WEATHER_API_URL}?q=${city}&appid=${process.env.WEATHER_API_KEY}&units=metric`);
      const data = await response.json();

      if (data.cod !== 200) return `Error: ${data.message}`;

      return `The weather in ${data.name} is ${data.weather[0].description} with a temperature of ${data.main.temp}°C.`;
    } catch (error) {
      return "Error fetching weather data.";
    }
  }
}

🌤 Weather Tool This file lives in the /tool layer of the MCP architecture and defines a custom LangChain Tool that fetches real-time weather data for a given city using the OpenWeatherMap API.
The _call() method takes a city name as input, sends an API request, and returns a formatted string with the weather description and temperature. It also handles errors gracefully, providing fallback messages when input is missing or the API call fails.
This tool enables the agent to respond to natural language queries like: “What is the weather like in New York?” or “What is the temperature in London today?”
🔗 Weather API configuration (set these in your .env file):

  • WEATHER_API_URL — API endpoint, e.g., https://api.openweathermap.org/data/2.5/weather
  • WEATHER_API_KEY — API key, obtain from https://openweathermap.org/api
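The formatting step inside WeatherTool can be isolated and tested without hitting the API. The response shape below mirrors the fields the tool reads from the OpenWeatherMap /weather endpoint; the sample values in the usage are invented:

```typescript
// The fields WeatherTool reads from an OpenWeatherMap-style response.
interface WeatherResponse {
  cod: number;
  message?: string;
  name?: string;
  weather?: { description: string }[];
  main?: { temp: number };
}

// Pure formatting logic, separated from the fetch() call for easy testing.
function formatWeather(data: WeatherResponse): string {
  if (data.cod !== 200) return `Error: ${data.message}`;
  return `The weather in ${data.name} is ${data.weather![0].description} with a temperature of ${data.main!.temp}°C.`;
}
```

Splitting fetching from formatting like this lets you unit-test the tool's output without network access or an API key.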

// === FILE: /llm/openaiModel.ts ===
import { ChatOpenAI } from "@langchain/openai";
import dotenv from "dotenv";
dotenv.config();

export const model = new ChatOpenAI({
  temperature: 0,
  modelName: "gpt-3.5-turbo",
  apiKey: process.env.OPENAI_API_KEY,
});

🧠 LLM Setup - openaiModel.ts This file defines and exports the Large Language Model (LLM) configuration used in the MCP architecture. It lives in the /llm directory, which is responsible for reasoning and generating responses.
The code uses LangChain’s ChatOpenAI wrapper to configure and initialize the gpt-3.5-turbo model from OpenAI with a fixed temperature of 0 for deterministic, consistent outputs. The API key is securely loaded from environment variables via dotenv.
This abstraction keeps the LLM configuration modular and clean, allowing the orchestration layer to easily import and reuse the model wherever needed.


// === FILE: /server/agentExecutor.ts ===
import { createOpenAIFunctionsAgent, AgentExecutor } from "langchain/agents";
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";
import { model } from "../llm/openaiModel";
import { CalculatorTool } from "../tool/CalculatorTool";
import { TimerTool } from "../tool/TimerTool";
import { WeatherTool } from "../tool/WeatherTool";

export async function createExecutor() {
  const tools = [new CalculatorTool(), new TimerTool(), new WeatherTool()];

  const prompt = ChatPromptTemplate.fromMessages([
    [
      "system",
      "You are a helpful AI assistant with access to tools. Follow these steps:" +
        "1. Think about the user's question" +
        "2. If a tool is needed, decide which one to use" +
        "3. Call the tool and observe it's result" +
        "4. Respond to the user in a structured format" +
        "Do not respond until you have obsered the tool's result."
    ],
    new MessagesPlaceholder("chat_history"),
    ["human", "{input}"],
    new MessagesPlaceholder("agent_scratchpad"),
  ]);

  const agent = await createOpenAIFunctionsAgent({ llm: model, tools, prompt });
  return new AgentExecutor({ agent, tools });
}

🧠 Agent Orchestration - agentExecutor.ts This file sets up the MCP server layer, which orchestrates communication between the reasoning model (LLM) and external tools.
Here’s what happens:

  • 🔥 Tools Registered: It imports and registers CalculatorTool, TimerTool, and WeatherTool to extend the agent’s capabilities.
  • 🔥 Prompt Setup: The ChatPromptTemplate instructs the agent to reason before acting — guiding it to think, choose a tool, execute it, and only then respond.
  • 🔥 Agent Creation: createOpenAIFunctionsAgent binds the LLM, tools, and prompt together into a reasoning agent.
  • 🔥 Executor Ready: The final AgentExecutor is returned — it handles the actual runtime interactions and tool execution logic.

🔁 This is the thinking and decision-making hub in the MCP structure, sitting between the tools and the client interface.


// === FILE: /client/cli.ts ===
import readline from "readline";
import { createExecutor } from "../server/agentExecutor";

export async function startCLI() {
  const executor = await createExecutor();
  const rl = readline.createInterface({ input: process.stdin, output: process.stdout });

  let chat_history: { role: string; content: string }[] = [];

  const askQuestion = async () => {
    rl.question("You: ", async (input) => {
      const result = await executor.invoke({ input, chat_history }); // the AgentExecutor fills agent_scratchpad internally
      console.log("🤖 Agent:", result.output);
      chat_history.push({ role: "user", content: input });
      chat_history.push({ role: "assistant", content: result.output });
      askQuestion();
    });
  };

  askQuestion();
}

🧑‍💻 CLI Interface – cli.ts This file represents the MCP client layer — a simple command-line interface where the user interacts with the AI agent.
What it does:

  • 🔥 Connects to the Orchestrator: It imports and initializes the AgentExecutor from the server.
  • 🔥 Handles Input/Output: Uses Node’s readline module to prompt user input and display agent responses in real-time.
  • 🔥 Maintains Memory: It stores the full conversation history (chat_history) so the AI can reason with past context.
  • 🔥 Enables Continuous Conversation: The askQuestion() function loops recursively, enabling an ongoing chat flow.

🎯 In MCP terms, this is the UI layer where human input flows into the system and responses are surfaced — a lightweight, terminal-based frontend for your intelligent assistant.
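One practical extension, sketched under the assumption that sessions can run long: chat_history grows on every turn, so trimming it to the most recent messages before each invoke keeps prompts bounded. This helper is hypothetical, not part of the code above, and the shape matches the chat_history entries used in cli.ts:

```typescript
// Keep the conversation history bounded for long-running CLI sessions.
type ChatMessage = { role: string; content: string };

function trimHistory(history: ChatMessage[], maxMessages = 6): ChatMessage[] {
  // Keep only the last maxMessages entries (the most recent context).
  return history.slice(-maxMessages);
}
```

A fancier version could summarize the dropped turns instead of discarding them, but a sliding window is often enough for a demo.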


// === FILE: index.ts ===
import { startCLI } from "./client/cli";

startCLI().catch(console.error);

📌 Entry Point – index.ts This file serves as the starting point of the MCP-based application.
What it does:

  • 🔥 Bootstraps the entire system by calling startCLI() — the function that launches your command-line interface.
  • 🔥 Triggers the full reasoning loop: from capturing user input, to tool invocation, to AI response.
  • 🔥 Handles errors gracefully by catching and logging any issues during startup.

🧱 In the MCP architecture, index.ts is the control hub that wires together the client, server, LLM, and tools — keeping the flow modular and maintainable.

Conclusion

LangChain agents revolutionize how applications interact with large language models by enabling dynamic, context-aware, and intelligent behavior. Through the examples explored — the Zero Shot ReAct agent, the Conversational ReAct agent, and the MCP-structured project — we see how agents can leverage tools to solve real-world problems, from performing arithmetic operations to fetching weather data and maintaining multi-turn conversational context.

These agents highlight the flexibility of LangChain in combining LLM reasoning with external integrations, making it a powerful framework for building next-generation AI-powered applications. By incorporating robust error handling, dynamic queries, and tool-based actions, LangChain empowers developers to create scalable, task-oriented systems that deliver seamless user experiences across industries. Whether you're building chatbots, financial assistants, or travel planners, LangChain's agent framework opens up a world of possibilities for intelligent, interactive solutions.