Unlocking the Power of AI: Mastering LangChain Prompts

In a world where artificial intelligence is rapidly becoming a cornerstone of innovation, the ability to craft precise and effective prompts is a game-changer. Imagine interacting with AI systems that not only understand your instructions but also deliver responses that are contextually rich and tailored to your needs. Whether you're building a virtual assistant, an intelligent chatbot, or a powerful automation tool, mastering LangChain prompts is your key to harnessing the full potential of AI.

In this tutorial, we'll take you on a journey through the art and science of LangChain prompts. You'll learn how to design prompts that drive accurate, creative, and impactful AI responses. From the basics of setting up your environment to advanced techniques for prompt optimization, this guide will equip you with the skills to create AI-driven applications that truly resonate with users.

LangChain Prompts

In LangChain, there are several types of prompts that cater to different use cases and models. Here's an overview of the primary types of prompts available in LangChain:

Text Prompts

Basic Prompt: A straightforward prompt where you provide a simple string input to generate a response.
Prompt Template: A more sophisticated version where the prompt contains placeholders for variables. You can dynamically fill these placeholders with values at runtime.
Example: "Write a short story about "{subject}" in the style of  "{genre}"."  

Basic Text Prompt Example

In this case, we directly provide a simple string to the model, without using placeholders or complex structures.


// src/index.ts
import { OpenAI } from "@langchain/openai";
import * as dotenv from "dotenv";
dotenv.config();

async function main() {
  // Create the model (the OpenAI class targets the completions API,
  // so it expects a completions model such as gpt-3.5-turbo-instruct)
  const model = new OpenAI({
    modelName: "gpt-3.5-turbo-instruct",
    temperature: 0.7,
    openAIApiKey: process.env.OPENAI_API_KEY!,  // Ensure the API key is loaded
  });

  // Basic prompt: directly provide a prompt string
  const prompt = "Tell me a joke about a cat.";

  // generate() accepts an array of prompts and returns a batch of generations
  const response = await model.generate([prompt]);

  // Access the result from the response object
  console.log(response.generations[0][0].text);  // First response from the model
}

main().catch(console.error);
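
In newer LangChain releases, the simpler invoke method can replace generate for a single prompt. A minimal sketch of the same call, assuming a recent @langchain/openai package and an ESM setup with top-level await:


import { OpenAI } from "@langchain/openai";

const model = new OpenAI({
  modelName: "gpt-3.5-turbo-instruct",
  temperature: 0.7,
});

// invoke() takes one prompt string and resolves directly to the completion text
const text = await model.invoke("Tell me a joke about a cat.");
console.log(text);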

Prompt Template

In this version, we create a more sophisticated prompt with placeholders

({subject}, {genre})

that are dynamically filled at runtime.


import { OpenAI } from "@langchain/openai";  // For OpenAI models
import { PromptTemplate } from "@langchain/core/prompts";  // For PromptTemplate from core
import * as dotenv from "dotenv";
dotenv.config();

async function main() {
  // Create the model (again, the OpenAI class needs a completions model)
  const model = new OpenAI({
    modelName: "gpt-3.5-turbo-instruct",
    temperature: 0.7,
    openAIApiKey: process.env.OPENAI_API_KEY!,  // Ensure the API key is loaded
  });

  // Create a prompt template with placeholders
  const promptTemplate = new PromptTemplate({
    inputVariables: ['subject', 'genre'],
    template: "Write a short story about {subject} in the style of {genre}.",
  });

  // Fill the template with actual values
  const prompt = await promptTemplate.format({
    subject: "a dog",
    genre: "science fiction",
  });

  // Use the model to generate a response and await the Promise
  const response = await model.generate([prompt]);  // Pass the prompt as an array

  // Access the result from the response object
  const result = response.generations[0][0].text;  // Get the first result from the response

  // Output the result to the console
  console.log(result);
}

main().catch(console.error);
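
As a variation, the template and model can be composed into a single runnable with pipe() (LangChain's expression-language style), so formatting and generation happen in one invoke call. A minimal sketch reusing the promptTemplate and model from the example above:


// Compose template -> model; invoke() fills the placeholders and runs the model
const chain = promptTemplate.pipe(model);

const story = await chain.invoke({
  subject: "a dog",
  genre: "science fiction",
});
console.log(story);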

Chat Prompts

ChatPromptTemplate: Designed specifically for chat-based models like OpenAI's gpt-3.5-turbo. This allows you to structure a conversation with multiple roles (e.g., "system", "user", "assistant"). We'll create a chat template that simulates a dialogue where the system sets the rules for generating a joke, and then the user interacts by providing input.


import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from '@langchain/core/prompts';
import * as dotenv from "dotenv";
import * as readline from 'readline';

dotenv.config();

// Initialize readline interface for terminal input
const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

async function main() {

  // Create model
  const model = new ChatOpenAI({
    modelName: "gpt-3.5-turbo",
    temperature: 0.7,
    openAIApiKey: process.env.OPENAI_API_KEY!,  // Ensure the API key is loaded
  });

  // Prompt the user for input from the terminal
  rl.question('Please enter a word for the joke: ', async (inputWord) => {
    
    // Create a structured Chat Prompt Template
    const prompt = ChatPromptTemplate.fromMessages([
      // System message sets the tone or the rules for the assistant
      ["system", "You are a helpful assistant that generates creative and funny jokes based on the input word provided by the user."],
      
      // The assistant is ready for the user's input
      ["assistant", "Hello! I'm here to brighten your day with a joke. What's the word you'd like me to base the joke on?"],
      
      // The human (user) provides the input word
      ["human", "{input}"],

      // The assistant responds with a joke based on the user's word
      ["assistant", "Let me think... Here's a joke for you about {input}:"]
    ]);

    // Format the prompt with the input from the user
    const formattedMessages = await prompt.formatMessages({ input: inputWord });

    // Log the conversation flow (system, assistant, human) before the model generates the final joke
    formattedMessages.forEach((message) => {
      console.log(`[${message._getType().toUpperCase()}]: ${message.content}`);
    });

    // Generate the response using the model and the prompt
    const response = await model.generate([formattedMessages]);

    // Output the assistant's generated response (the joke)
    console.log(`[ASSISTANT]: ${response.generations[0][0].text}`);

    // Close the readline interface
    rl.close();
  });
}

main().catch(console.error);

Explanation:

- Expanded chat roles:
  - System: sets the rules, stating that the assistant should generate creative, funny jokes based on the word provided by the user.
  - Assistant (initial message): opens the conversation, asking the user for a word to base the joke on.
  - Human: carries the user's input word via the {input} placeholder.
  - Assistant (joke generation): generates the joke based on the word provided.
- Dynamic prompt generation: the {input} placeholder is filled with whatever word the user types when the prompt is formatted.
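
The same conversation can also be condensed into a runnable chain, where a StringOutputParser extracts plain text from the chat message. A minimal, self-contained sketch, assuming an ESM setup with top-level await:


import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo", temperature: 0.7 });

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant that generates creative and funny jokes based on the input word provided by the user."],
  ["human", "{input}"],
]);

// Chain: template -> chat model -> parser that returns the message content as a string
const chain = prompt.pipe(model).pipe(new StringOutputParser());

const joke = await chain.invoke({ input: "dog" });
console.log(joke);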

Few-Shot Prompts

Few-shot prompting is a technique used in natural language processing where a model is given a few examples of a task in the prompt to help it understand how to respond to new, similar inputs. Instead of fine-tuning the model, you provide a small set of demonstrations (e.g., input-output pairs) as part of the prompt, from which the model infers the pattern and applies it to new tasks. It is useful when you want the model to generalize behavior from a few examples without needing extensive training data.


import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from '@langchain/core/prompts';
import * as dotenv from "dotenv";
import * as readline from 'readline';

dotenv.config();

// Initialize readline interface for terminal input
const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

async function main() {

  // Create model (temperature 0 keeps arithmetic answers deterministic)
  const model = new ChatOpenAI({
    modelName: "gpt-3.5-turbo",
    temperature: 0,
    openAIApiKey: process.env.OPENAI_API_KEY!,  // Ensure the API key is loaded
  });

  // Prompt the user for a math problem from the terminal
  rl.question('Please enter a math problem: ', async (inputProblem) => {

    // Few-shot prompt: two worked examples prime the model on the expected format
    const prompt = ChatPromptTemplate.fromMessages([
      // System message sets the task and the answer format
      ["system", "You are a helpful assistant that solves math problems and replies in the format 'Answer: <result>'."],

      // Shot 1
      ["human", "What is 2 + 2?"],
      ["assistant", "Answer: 4"],

      // Shot 2
      ["human", "What is 5 * 3?"],
      ["assistant", "Answer: 15"],

      // The new problem supplied by the user
      ["human", "{input}"]
    ]);

    // Format the prompt with the input from the user
    const formattedMessages = await prompt.formatMessages({ input: inputProblem });

    // Log the conversation flow (system, examples, human) before generation
    formattedMessages.forEach((message) => {
      console.log(`[${message._getType().toUpperCase()}]: ${message.content}`);
    });

    // Generate the response using the model and the prompt
    const response = await model.generate([formattedMessages]);

    // Output the assistant's generated answer
    console.log(`[ASSISTANT]: ${response.generations[0][0].text}`);

    // Close the readline interface
    rl.close();
  });
}

main().catch(console.error);

Explanation:

- Few-Shot Prompting: In this example, the assistant is given two examples of math problem-solving (2 + 2 and 5 * 3) before being asked to solve the user's math problem. These examples act as "shots" to prime the model on how to respond to similar queries.
- Custom Input: The user provides a math problem, and the prompt template is dynamically adjusted with their input.
- Model Invocation: After constructing the prompt with few-shot examples, the model generates a response based on the new input.
- This setup leverages LangChain's structured prompting to guide the model with context.

Conditional Prompts

Conditional Prompts in LangChain refer to dynamically adjusting the content or structure of a prompt based on certain conditions or input data. This allows the assistant to tailor its responses to different scenarios. For example, the system can switch between different tones, instructions, or behaviors based on keywords, user preferences, or contextual factors, ensuring more relevant and context-specific interactions. This flexibility enhances the model's adaptability to various use cases without requiring hardcoded responses.

The idea is that based on certain conditions (like user input or context), the system will modify the behavior of the assistant.

In this case, if the user's input contains the word "tech", the assistant gives a technology-focused response; otherwise it gives a generic one.


import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from '@langchain/core/prompts';
import * as dotenv from "dotenv";
import * as readline from 'readline';

dotenv.config();

// Initialize readline interface for terminal input
const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

async function main() {

  // Create model
  const model = new ChatOpenAI({
    modelName: "gpt-3.5-turbo",
    temperature: 0.7,
    openAIApiKey: process.env.OPENAI_API_KEY!,  // Ensure the API key is loaded
  });

  // Prompt the user for a topic from the terminal
  rl.question('Please enter a topic: ', async (inputWord) => {

    // Conditional prompt logic
    let conditionalMessage;
    if (inputWord.toLowerCase().includes("tech")) {
      conditionalMessage = "You are a tech-savvy assistant who provides information and jokes about technology.";
    } else {
      conditionalMessage = "You are a friendly assistant who provides generic jokes and information.";
    }

    // Create a structured Chat Prompt Template
    const prompt = ChatPromptTemplate.fromMessages([
      // System message is conditionally set based on the input
      ["system", conditionalMessage],

      // Assistant gets ready to respond to the user's topic
      ["assistant", "Hi! I see you're interested in {input}. Let's see what I can come up with."],

      // The human provides the topic
      ["human", "{input}"],

      // The assistant responds with a joke or fact about the topic
      ["assistant", "Here's something fun about {input}: "]
    ]);

    // Format the prompt with the input from the user
    const formattedMessages = await prompt.formatMessages({ input: inputWord });

    // Log the conversation flow (system, assistant, human) before the model generates the final response
    formattedMessages.forEach((message) => {
      console.log(`[${message._getType().toUpperCase()}]: ${message.content}`);
    });

    // Generate the response using the model and the prompt
    const response = await model.generate([formattedMessages]);

    // Output the assistant's generated response (the joke or fact)
    console.log(`[ASSISTANT]: ${response.generations[0][0].text}`);

    // Close the readline interface
    rl.close();
  });
}

main().catch(console.error);

Explanation:

- Conditional Prompts: Based on whether the input contains the word "tech", the assistant changes its behavior, focusing on either tech-related or generic responses.
- The "system" message is set dynamically based on this condition, changing the way the assistant interacts.

This shows how LangChain can use conditional logic to modify the behavior of an assistant depending on the input.
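
The if/else above scales poorly past a couple of branches. One illustrative alternative is a lookup table of system messages keyed by topic; the personas map and helper below are hypothetical, not LangChain APIs:


// Hypothetical topic -> persona lookup; falls back to a generic persona
const personas: Record<string, string> = {
  tech: "You are a tech-savvy assistant who provides information and jokes about technology.",
  sport: "You are an enthusiastic assistant who provides sports trivia and jokes.",
  food: "You are a witty assistant who provides food facts and cooking jokes.",
};

function systemMessageFor(topic: string): string {
  const key = Object.keys(personas).find((k) => topic.toLowerCase().includes(k));
  return key
    ? personas[key]
    : "You are a friendly assistant who provides generic jokes and information.";
}

// Usage: replaces the if/else when building the prompt
const conditionalMessage = systemMessageFor("food tech trends");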

Chained Prompts

Chained Prompts refer to a technique where multiple prompts are linked together in a sequence, with the output of one prompt serving as the input or context for the next. This allows for more dynamic and interactive exchanges, where the model can build on previous responses to generate more detailed or follow-up answers. It's useful for creating more complex conversations or multi-step tasks, such as asking for initial input, generating a response, and then using that response to drive further questions or actions.

Here's an example of how you can implement chained prompts using LangChain's ChatOpenAI and ChatPromptTemplate. The idea is to first ask a question, take the model's response, and use it to drive a follow-up question in a chained sequence.


import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from '@langchain/core/prompts';
import * as dotenv from "dotenv";
import * as readline from 'readline';

dotenv.config();

// Initialize readline interface for terminal input
const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

async function main() {

  // Create model
  const model = new ChatOpenAI({
    modelName: "gpt-3.5-turbo",
    temperature: 0.7,
    openAIApiKey: process.env.OPENAI_API_KEY!,  // Ensure the API key is loaded
  });

  // Prompt the user for input from the terminal
  rl.question('Please enter your favorite animal: ', async (inputAnimal) => {
    
    // First prompt: Ask a question about the user's favorite animal
    const firstPrompt = ChatPromptTemplate.fromMessages([
      ["system", "You are a helpful assistant that provides interesting facts."],
      ["human", "Tell me an interesting fact about {animal}."]
    ]);

    // Format the first prompt with the user's input
    const firstFormattedMessages = await firstPrompt.formatMessages({ animal: inputAnimal });

    // Log the conversation flow for the first question
    firstFormattedMessages.forEach((message) => {
      console.log(`[${message._getType().toUpperCase()}]: ${message.content}`);
    });

    // Generate the response using the model
    const firstResponse = await model.generate([firstFormattedMessages]);
    const animalFact = firstResponse.generations[0][0].text;

    // Output the model's response (interesting fact)
    console.log(`[ASSISTANT]: ${animalFact}`);

    // Chain the second prompt using the model's response
    rl.question('Would you like to know more details about this fact? (yes/no): ', async (inputAnswer) => {

      // Second prompt: Follow-up question based on user's answer
      const secondPrompt = ChatPromptTemplate.fromMessages([
        ["system", "You are a helpful assistant that provides additional information."],
        ["human", "User asked about {fact}. Now, they would like more details: {answer}."]
      ]);

      // Format the second prompt with the user's answer and the fact from the first response
      const secondFormattedMessages = await secondPrompt.formatMessages({ fact: animalFact, answer: inputAnswer });

      // Log the conversation flow for the second question
      secondFormattedMessages.forEach((message) => {
        console.log(`[${message._getType().toUpperCase()}]: ${message.content}`);
      });

      // Generate the second response using the model
      const secondResponse = await model.generate([secondFormattedMessages]);

      // Output the assistant's second response
      console.log(`[ASSISTANT]: ${secondResponse.generations[0][0].text}`);

      // Close the readline interface
      rl.close();
    });
  });
}

main().catch(console.error);

Explanation:

- First Question: The program first asks the user for their favorite animal. Then, it sends this input to the model, which returns an interesting fact about the animal.
- Chaining: After the model returns the fact, the program asks the user if they want to learn more about the fact (yes/no). Based on their response, the second prompt is sent to the model, and it generates additional details if requested.

This demonstrates a simple example of "Chained Prompts" where the output of the first interaction is used as input for the next.

Prompt Variables and Serialization

Prompt Variables refer to placeholders within a prompt template that can be dynamically replaced with specific user inputs or data at runtime. They allow the same prompt structure to be reused while adapting to different inputs.

Serialization in this context involves formatting and preparing the prompt by replacing these variables with actual values before passing it to the model. This ensures the model receives a fully constructed prompt, enabling it to generate contextually relevant responses based on the provided inputs.
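
Before looking at a full program, here is the substitution step in isolation: a minimal sketch showing how format() turns a template plus variables into the final prompt string (the "octopus" value is just an illustration):


import { PromptTemplate } from "@langchain/core/prompts";

async function demo() {
  const template = PromptTemplate.fromTemplate(
    "Tell me an interesting fact about {favorite_animal}."
  );

  // format() replaces the variable and returns the fully constructed prompt
  const serialized = await template.format({ favorite_animal: "the octopus" });
  console.log(serialized);
  // -> "Tell me an interesting fact about the octopus."
}

demo().catch(console.error);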

Here's a simple example of using prompt variables and serialization together: we define a prompt with a variable, and the prompt template serializes the user's input before the model generates a response.


import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from '@langchain/core/prompts';
import * as dotenv from "dotenv";
import * as readline from 'readline';

dotenv.config();

// Initialize readline interface for terminal input
const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

async function main() {

  // Create model
  const model = new ChatOpenAI({
    modelName: "gpt-3.5-turbo",
    temperature: 0.7,
    openAIApiKey: process.env.OPENAI_API_KEY!,  // Ensure the API key is loaded
  });

  // Prompt the user for input from the terminal
  rl.question('Please enter your favorite animal: ', async (inputAnimal) => {

    // Prompt template with the {favorite_animal} variable
    const prompt = ChatPromptTemplate.fromMessages([
      ["system", "You are a helpful assistant that provides interesting facts."],
      ["human", "Tell me an interesting fact about {favorite_animal}."]
    ]);

    // Serialization: formatMessages replaces the variable with the user's input,
    // producing the fully constructed messages the model will receive
    const formattedMessages = await prompt.formatMessages({ favorite_animal: inputAnimal });

    // Log the serialized messages before sending them to the model
    formattedMessages.forEach((message) => {
      console.log(`[${message._getType().toUpperCase()}]: ${message.content}`);
    });

    // Generate the response using the model
    const response = await model.generate([formattedMessages]);

    // Output the model's response (interesting fact)
    console.log(`[ASSISTANT]: ${response.generations[0][0].text}`);

    // Close the readline interface
    rl.close();
  });
}

main().catch(console.error);

Explanation:

- Prompt Variables: The {favorite_animal} placeholder in the prompt template is replaced with the actual user input at runtime.
- Serialization: The prompt.formatMessages function serializes the input into a structured format (replacing variables) before sending it to the model for generation.

This simple example takes an animal name as input and generates an interesting fact about that animal using the model.

Advanced/Custom Prompts

Advanced/custom prompts in LangChain involve structuring and fine-tuning how a language model interacts with inputs by defining a multi-step conversation flow. Instead of sending a single prompt, custom prompts use a combination of system, assistant, and user messages to guide the model's behavior, tone, and response style. This approach gives greater control over the model's output, enabling complex interactions such as storytelling, creative writing, or domain-specific tasks, and ensures the model adheres to the desired structure throughout the conversation.

Here's an example of an advanced/custom prompt using LangChain. In this case, we'll craft a prompt that asks the assistant to generate a fictional story based on the user's input for a character's name and a location:


import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from '@langchain/core/prompts';
import * as dotenv from "dotenv";
import * as readline from 'readline';

dotenv.config();

// Initialize readline interface for terminal input
const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

async function main() {

  // Create model
  const model = new ChatOpenAI({
    modelName: "gpt-3.5-turbo",
    temperature: 0.7,
    openAIApiKey: process.env.OPENAI_API_KEY!,  // Ensure the API key is loaded
  });

  // Prompt the user for a character's name, then a location
  rl.question("Please enter a character's name: ", async (inputName) => {
    rl.question('Please enter a location: ', async (inputLocation) => {

      // Custom multi-step prompt: the system sets the rules, the assistant
      // confirms the task, and the human message carries the user's inputs
      const prompt = ChatPromptTemplate.fromMessages([
        ["system", "You are a creative storyteller. Write short fictional stories of under 200 words with a clear beginning, middle, and end."],
        ["assistant", "I'd love to write a story for you. Give me a character and a location."],
        ["human", "Write a story about a character named {name} set in {location}."]
      ]);

      // Format the prompt with the user's inputs
      const formattedMessages = await prompt.formatMessages({
        name: inputName,
        location: inputLocation,
      });

      // Log the conversation flow before the model generates the story
      formattedMessages.forEach((message) => {
        console.log(`[${message._getType().toUpperCase()}]: ${message.content}`);
      });

      // Generate the story using the model
      const response = await model.generate([formattedMessages]);

      // Output the assistant's generated story
      console.log(`[ASSISTANT]: ${response.generations[0][0].text}`);

      // Close the readline interface
      rl.close();
    });
  });
}

main().catch(console.error);

Explanation:

- The code prompts the user for a character's name and a location.
- It builds a chat prompt using ChatPromptTemplate with system, assistant, and human messages to instruct the model.
- The model responds with a fictional story based on the inputs.

These types provide a wide range of flexibility, allowing developers to tailor their prompts to the specific needs of their application, whether they are working with basic text generation, complex conversational AI, or multi-step workflows.