LangChain in Action: How to Build Intelligent AI Applications Easily and Efficiently?
The Art of LangChain Prompts: Designing Intelligent AI Interactions
In a world where artificial intelligence is rapidly becoming a cornerstone of innovation, the ability to craft precise and effective prompts is a game-changer. Imagine interacting with AI systems that not only understand your instructions but also deliver responses that are contextually rich and tailored to your needs. Whether you're building a virtual assistant, an intelligent chatbot, or a powerful automation tool, mastering LangChain prompts is your key to harnessing the full potential of AI.
In this tutorial, we'll take you on a journey through the art and science of LangChain prompts. You'll learn how to design prompts that drive accurate, creative, and impactful AI responses. From the basics of setting up your environment to advanced techniques for prompt optimization, this guide will equip you with the skills to create AI-driven applications that truly resonate with users.

In LangChain, there are several types of prompts that cater to different use cases and models. Here's an overview of the primary types of prompts available in LangChain:
Basic Text Prompt Example: Chat with a Historian Using LangChain
In LangChain, a text prompt template is a structured way of creating prompts for LLMs (like GPT-3.5 or GPT-4). Instead of hardcoding prompts, prompt templates allow dynamic generation of text prompts, making them flexible and reusable.
This example demonstrates how to use LangChain with OpenAI’s GPT model to act as a historian assistant. We define a structured conversation where:
- The system sets a role for the assistant (a historian).
- The user asks a historical question (e.g., "Who is Gaius Marius?").
- The assistant generates a response based on its training data.
╔════════════════════════════════════════════════════════════════════════════╗
║ 🌟 EDUCATIONAL EXAMPLE 🌟 ║
║ ║
║ 📌 This is a minimal and short working example for educational purposes. ║
║ ⚠️ Not optimized for production! ║
║ ║
║ 📦 Versions Used: ║
║ - "@langchain/core": "^0.3.38" ║
║ - "@langchain/openai": "^0.4.2" ║
║ ║
║ 🔄 Note: LangChain is transitioning from a monolithic structure to a ║
║ modular package structure. Ensure compatibility with future updates. ║
╚════════════════════════════════════════════════════════════════════════════╝
// src/index.ts
// Import the ChatOpenAI class from the @langchain/openai package for working with OpenAI models
import { ChatOpenAI } from "@langchain/openai";
// Import the HumanMessage and SystemMessage classes from @langchain/core for handling messages
import { HumanMessage, SystemMessage } from "@langchain/core/messages";
// Import dotenv to load environment variables from the .env file
import dotenv from "dotenv";

dotenv.config();

const model = new ChatOpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY!,
  modelName: "gpt-3.5-turbo", // Select the OpenAI model to use
  temperature: 0.3, // Set the model's temperature
});

async function run() {
  const messages = [
    new SystemMessage("You are a helpful historian."), // Create a system message
    new HumanMessage("Who is Gaius Marius?"), // Create a human message
  ];
  const response = await model.generate([messages]); // Execute the model
  const result = response.generations[0][0].text; // Extract the generated text from the response object
  console.log(result); // Print the generated response to the console
}

run().catch(console.error); // Run the function and log any errors that occur
Structured Chat Messages
- SystemMessage: Establishes the assistant’s role as a historian.
- HumanMessage: Represents the user’s question ("Who is Gaius Marius?").
- AI Response: OpenAI processes these messages and generates a historically accurate answer.
Model Configuration
- Model Name: "gpt-3.5-turbo" – Optimized for chat applications.
- Temperature: 0.3 – Keeps responses factual and controlled.
Response Extraction
- The function response.generations[0][0].text extracts the AI’s reply.
- The response is then logged to the console.
Prompt Template
PromptTemplate in LangChain is a tool for dynamically creating prompts using templates/placeholders with variable input data.
How does it work?
- Define a template with placeholders ({variable_name} placeholder).
- Replace the placeholders with real values using ".format({...})".
- Get a personalized prompt ready for the model.
The example below uses one such placeholder ({historicalFigure}) that is dynamically filled at runtime.
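Under the hood, formatting a template is plain string substitution. The following dependency-free sketch mirrors the mechanics (the `fillTemplate` helper is made up for illustration and is not part of LangChain):

```typescript
// Hypothetical helper mirroring what PromptTemplate.format() does:
// replace each {variable} placeholder with the supplied value.
function fillTemplate(template: string, values: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (match, name) =>
    name in values ? values[name] : match // leave unknown placeholders untouched
  );
}

const template = "Write a historical story about {historicalFigure}.";
const prompt = fillTemplate(template, { historicalFigure: "Hannibal" });
// prompt === "Write a historical story about Hannibal."
```

The real PromptTemplate adds input validation and async formatting on top, but the substitution step is the core idea.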
// src/index.ts
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";
import dotenv from "dotenv";

dotenv.config();

// Create an instance of ChatOpenAI with API key, model selection, and temperature setting
const model = new ChatOpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY!, // Retrieve the API key from environment variables
  modelName: "gpt-3.5-turbo", // Specify the OpenAI model to use
  temperature: 0.7, // Controls randomness of responses (lower = more deterministic)
});

async function run() {
  // Create a prompt template with a placeholder for a historical figure
  const promptTemplate = new PromptTemplate({
    inputVariables: ["historicalFigure"], // Variables to be dynamically replaced
    template: "Write a historical story about {historicalFigure}, including key moments from their life.",
  });

  // Format the template with the actual value for 'historicalFigure'
  const prompt = await promptTemplate.format({ historicalFigure: "Mad Jack" });

  // Define the message sequence for the chat model
  const messages = [
    // This message sets the AI's role as a historian
    new SystemMessage("You are a professional historian who writes detailed and engaging historical accounts."),
    new HumanMessage(prompt), // This message provides the specific historical story request
  ];

  // Generate a response from the AI model based on the provided messages
  const response = await model.generate([messages]);

  // Extract the generated text from the response object
  const result = response.generations[0][0].text;

  // Print the generated historical story to the console
  console.log(result);
}

// Run the function and catch any errors
run().catch(console.error);
Why is it useful?
- ✅ Flexibility – Use a single template for different inputs.
- ✅ Automation – Generate dynamic prompts without manual intervention.
- ✅ Optimization – Helps in structuring and efficiently generating text.
Chat Prompts
The ChatPromptTemplate is designed specifically for chat-based models like OpenAI's gpt-3.5-turbo. It allows structuring a conversation with multiple roles, such as:
- System: Sets the rules and context.
- User (Human): Inputs questions or prompts.
- Assistant: Provides responses based on the model.
In this example, we create a chat template where the system assigns the assistant a role as a quantum physics expert. The user then asks a question, and the AI responds accordingly.
import { ChatOpenAI } from "@langchain/openai"; // Import the ChatOpenAI class
import { ChatPromptTemplate } from "@langchain/core/prompts"; // Import the ChatPromptTemplate class
import dotenv from "dotenv"; // Import the dotenv module
import readline from "readline"; // Import the readline module

// Load environment variables from the .env file
dotenv.config();

const model = new ChatOpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY!, // Ensure the API key is loaded
  modelName: "gpt-3.5-turbo", // Define which OpenAI model to use
  temperature: 0.7, // Adjust the randomness (lower = more factual, higher = more creative)
});

// Initialize readline interface for user input in the terminal
const rl = readline.createInterface({
  input: process.stdin, // Set standard input (keyboard)
  output: process.stdout, // Set standard output (console)
});

async function run() {
  // Prompt the user to ask about quantum physics
  rl.question("Ask a question in the field of quantum physics: ", async (userQuestion) => {
    // Define a structured chat prompt template
    const promptTemplate = ChatPromptTemplate.fromMessages([
      // Set the assistant's role and tone
      ["system", "You are a quantum physics expert who explains concepts in a simple yet detailed way."],
      ["human", "{userQuestion}"], // Placeholder for the user's question
    ]);

    // Format the template with the actual user question
    const formattedMessages = await promptTemplate.formatMessages({ userQuestion });

    // Print each message in the chat for better visibility
    formattedMessages.forEach((message) => {
      console.log(`[${message._getType().toUpperCase()}]: ${message.content}`);
    });

    // Generate a response using the formatted messages
    const response = await model.generate([formattedMessages]);

    // Print the AI's detailed explanation
    console.log(`[ASSISTANT]: ${response.generations[0][0].text}`);

    // Close the readline interface after generating the response
    rl.close();
  });
}

// Run the function and catch any potential errors
run().catch(console.error);
Explanation:
- System: The system message establishes the assistant’s role as a quantum physics expert. It guides the AI's behavior to provide detailed yet clear explanations.
- Human (User Input): The user asks a question about quantum physics. The input is dynamically inserted into the prompt.
- Assistant (AI Response): The AI responds based on the structured chat prompt. The system enforces the tone and detail level of the assistant’s reply.
Dynamic Prompt Generation
- The ChatPromptTemplate structures the conversation before sending it to the model.
- The placeholder userQuestion is dynamically replaced with real user input at runtime.
- The AI processes the structured messages to generate a precise and helpful answer.
Few-Shot Prompts
Few-shot prompting is a technique in natural language processing where the model is given a few examples of a task in the prompt to help it understand how to respond to new, similar inputs. Instead of fine-tuning the model, you provide a small set of demonstrations (e.g., input-output pairs) as part of the prompt. The model infers the pattern from these examples and applies it to the new task. It is useful when you want the model to generalize behavior from a few examples without needing extensive training data.
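The pattern is easiest to see as plain data: a few worked input/output pairs are prepended to the conversation before the real question. A dependency-free sketch (message shapes simplified; real LangChain messages are class instances, and `buildFewShotPrompt` is a made-up helper name):

```typescript
type Message = { role: "system" | "human" | "ai"; content: string };

// Few-shot examples: worked input/output pairs the model should imitate.
const examples: Array<[string, string]> = [
  ["What is 2 + 2?", "2 + 2 = 4"],
  ["What is 5 * 3?", "5 * 3 = 15"],
];

function buildFewShotPrompt(question: string): Message[] {
  // Each example becomes a human/ai message pair.
  const shots: Message[] = examples.flatMap(([q, a]) => [
    { role: "human", content: q },
    { role: "ai", content: a },
  ]);
  return [
    { role: "system", content: "You are a math tutor. Answer in the same style as the examples." },
    ...shots,
    { role: "human", content: question }, // the real question comes last
  ];
}

const messages = buildFewShotPrompt("What is 7 - 4?");
// messages: 1 system + 2 example pairs + 1 final human message = 6 entries
```

The model sees the two solved problems first and imitates their format when answering the new one.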
import { ChatOpenAI } from "@langchain/openai"; // Import the ChatOpenAI class
import { ChatPromptTemplate } from "@langchain/core/prompts"; // Import the ChatPromptTemplate class
import dotenv from "dotenv"; // Import the dotenv module
import readline from "readline"; // Import the readline module

// Load environment variables from the .env file
dotenv.config();

const model = new ChatOpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY!, // Ensure the API key is loaded
  modelName: "gpt-3.5-turbo", // Define which OpenAI model to use
  temperature: 0, // Keep arithmetic answers deterministic
});

// Initialize readline interface for user input in the terminal
const rl = readline.createInterface({
  input: process.stdin, // Set standard input (keyboard)
  output: process.stdout, // Set standard output (console)
});

async function run() {
  // Prompt the user for a math problem
  rl.question("Enter a math problem: ", async (userProblem) => {
    // Few-shot prompt: two worked examples precede the user's problem
    const promptTemplate = ChatPromptTemplate.fromMessages([
      ["system", "You are a math tutor. Answer in the same style as the examples."],
      ["human", "What is 2 + 2?"], // Example 1: input
      ["ai", "2 + 2 = 4"], // Example 1: expected output
      ["human", "What is 5 * 3?"], // Example 2: input
      ["ai", "5 * 3 = 15"], // Example 2: expected output
      ["human", "{problem}"], // Placeholder for the user's problem
    ]);

    // Format the template with the actual user input
    const formattedMessages = await promptTemplate.formatMessages({ problem: userProblem });

    // Print each message in the chat for better visibility
    formattedMessages.forEach((message) => {
      console.log(`[${message._getType().toUpperCase()}]: ${message.content}`);
    });

    // Generate a response using the formatted messages
    const response = await model.generate([formattedMessages]);

    // Print the model's answer
    console.log(`[ASSISTANT]: ${response.generations[0][0].text}`);

    // Close the readline interface after generating the response
    rl.close();
  });
}

// Run the function and catch any potential errors
run().catch(console.error);
Explanation:
- Few-Shot Prompting: In this example, the assistant is given two examples of math problem-solving (2 + 2 and 5 * 3) before being asked to solve the user's math problem. These examples act as "shots" to prime the model on how to respond to similar queries.
- Custom Input: The user provides a math problem, and the prompt template is dynamically adjusted with their input.
- Model Invocation: After constructing the prompt with few-shot examples, the model generates a response based on the new input.
- This setup leverages LangChain's structured prompting to guide the model with context.
Conditional Prompts
Conditional Prompts in LangChain refer to dynamically adjusting the content or structure of a prompt based on certain conditions or input data. This allows the assistant to tailor its responses to different scenarios. For example, the system can switch between different tones, instructions, or behaviors based on keywords, user preferences, or contextual factors, ensuring more relevant and context-specific interactions. This flexibility enhances the model's adaptability to various use cases without requiring hardcoded responses.
The idea is that based on certain conditions (like user input or context), the system will modify the behavior of the assistant.
In this case, if the user's input contains the word "tech", the assistant gives a technology-focused response; otherwise, it gives a generic one.
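The branching itself is ordinary application code that runs before the prompt is built. A minimal sketch of just the selection logic (the `pickSystemMessage` helper name is made up for illustration):

```typescript
// Pick a system message based on keywords in the user's input.
function pickSystemMessage(input: string): string {
  if (input.toLowerCase().includes("tech")) {
    return "You are a tech-savvy assistant who provides information and jokes about technology.";
  }
  return "You are a friendly assistant who provides generic jokes and information.";
}

// The chosen message then becomes the first ("system") entry of the chat prompt.
const systemMessage = pickSystemMessage("the latest tech news");
```

The same pattern extends to any condition: user preferences, detected language, time of day, or earlier conversation state.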
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import * as dotenv from "dotenv";
import * as readline from "readline";

dotenv.config();

// Initialize readline interface for terminal input
const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

export async function main() {
  // Create the model
  const model = new ChatOpenAI({
    modelName: "gpt-3.5-turbo",
    temperature: 0.7,
    openAIApiKey: process.env.OPENAI_API_KEY!, // Ensure the API key is loaded
  });

  // Prompt the user for input from the terminal
  rl.question("Please enter a topic: ", async (inputWord) => {
    // Conditional prompt logic
    let conditionalMessage;
    if (inputWord.toLowerCase().includes("tech")) {
      conditionalMessage = "You are a tech-savvy assistant who provides information and jokes about technology.";
    } else {
      conditionalMessage = "You are a friendly assistant who provides generic jokes and information.";
    }

    // Create a structured chat prompt template
    const prompt = ChatPromptTemplate.fromMessages([
      // The system message is conditionally set based on the input
      ["system", conditionalMessage],
      // The assistant gets ready to respond to the user's topic
      ["assistant", "Hi! I see you're interested in {input}. Let's see what I can come up with."],
      // The human provides the topic
      ["human", "{input}"],
      // The assistant responds with a joke or fact about the topic
      ["assistant", "Here's something fun about {input}: "],
    ]);

    // Format the prompt with the input from the user
    const formattedMessages = await prompt.formatMessages({ input: inputWord });

    // Log the conversation flow (system, assistant, human) before the model generates the final response
    formattedMessages.forEach((message) => {
      console.log(`[${message._getType().toUpperCase()}]: ${message.content}`);
    });

    // Generate the response using the model and the prompt
    const response = await model.generate([formattedMessages]);

    // Output the assistant's generated response (the joke or fact)
    console.log(`[ASSISTANT]: ${response.generations[0][0].text}`);

    // Close the readline interface
    rl.close();
  });
}

main().catch(console.error);
Explanation:
- Conditional Prompts: Based on whether the input contains the word "tech", the assistant changes its behavior, focusing on either tech-related or generic responses.
- The "system" message is set dynamically based on this condition, changing the way the assistant interacts.
This shows how LangChain can use conditional logic to modify the behavior of an assistant depending on the input.
Chained Prompts
Chained Prompts refer to a technique where multiple prompts are linked together in a sequence, with the output of one prompt serving as the input or context for the next. This allows for more dynamic and interactive exchanges, where the model can build on previous responses to generate more detailed or follow-up answers. It's useful for creating more complex conversations or multi-step tasks, such as asking for initial input, generating a response, and then using that response to drive further questions or actions.
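Stripped of the API calls, chaining is just threading one response into the next prompt. A dependency-free sketch with a stubbed model (`askModel` and `chainedPrompts` are made-up names; calls are made synchronous for brevity, whereas real model calls are async):

```typescript
// Stub standing in for a real LLM call; returns a canned reply.
function askModel(prompt: string): string {
  return `Canned answer to: ${prompt}`;
}

function chainedPrompts(animal: string): string {
  // Step 1: build the first prompt from user input and "call the model".
  const fact = askModel(`Tell me an interesting fact about ${animal}.`);
  // Step 2: thread the first response into the second prompt.
  return askModel(`The user was told: "${fact}". Provide more details about that fact.`);
}

const answer = chainedPrompts("octopus");
```

Swapping `askModel` for a real `model.generate` call (and adding `await`) turns this skeleton into the full example below.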
The example below implements chained prompts with LangChain's ChatOpenAI and ChatPromptTemplate: the program first asks a question, then takes the model's response and uses it to drive a follow-up question in a chained sequence.
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import * as dotenv from "dotenv";
import * as readline from "readline";

dotenv.config();

// Initialize readline interface for terminal input
const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

export async function main() {
  // Create the model
  const model = new ChatOpenAI({
    modelName: "gpt-3.5-turbo",
    temperature: 0.7,
    openAIApiKey: process.env.OPENAI_API_KEY!, // Ensure the API key is loaded
  });

  // Prompt the user for input from the terminal
  rl.question("Please enter your favorite animal: ", async (inputAnimal) => {
    // First prompt: ask a question about the user's favorite animal
    const firstPrompt = ChatPromptTemplate.fromMessages([
      ["system", "You are a helpful assistant that provides interesting facts."],
      ["human", "Tell me an interesting fact about {animal}."],
    ]);

    // Format the first prompt with the user's input
    const firstFormattedMessages = await firstPrompt.formatMessages({ animal: inputAnimal });

    // Log the conversation flow for the first question
    firstFormattedMessages.forEach((message) => {
      console.log(`[${message._getType().toUpperCase()}]: ${message.content}`);
    });

    // Generate the response using the model
    const firstResponse = await model.generate([firstFormattedMessages]);
    const animalFact = firstResponse.generations[0][0].text;

    // Output the model's response (the interesting fact)
    console.log(`[ASSISTANT]: ${animalFact}`);

    // Chain the second prompt using the model's response
    rl.question("Would you like to know more details about this fact? (yes/no): ", async (inputAnswer) => {
      // Second prompt: follow-up question based on the user's answer
      const secondPrompt = ChatPromptTemplate.fromMessages([
        ["system", "You are a helpful assistant that provides additional information."],
        ["human", "User asked about {fact}. Now, they would like more details: {answer}."],
      ]);

      // Format the second prompt with the user's answer and the fact from the first response
      const secondFormattedMessages = await secondPrompt.formatMessages({ fact: animalFact, answer: inputAnswer });

      // Log the conversation flow for the second question
      secondFormattedMessages.forEach((message) => {
        console.log(`[${message._getType().toUpperCase()}]: ${message.content}`);
      });

      // Generate the second response using the model
      const secondResponse = await model.generate([secondFormattedMessages]);

      // Output the assistant's second response
      console.log(`[ASSISTANT]: ${secondResponse.generations[0][0].text}`);

      // Close the readline interface
      rl.close();
    });
  });
}

main().catch(console.error);
Explanation:
- First Question: The program first asks the user for their favorite animal. Then, it sends this input to the model, which returns an interesting fact about the animal.
- Chaining: After the model returns the fact, the program asks the user if they want to learn more about the fact (yes/no). Based on their response, the second prompt is sent to the model, and it generates additional details if requested.
This demonstrates a simple example of "Chained Prompts" where the output of the first interaction is used as input for the next.
Prompt Variables and Serialization
Prompt Variables refer to placeholders within a prompt template that can be dynamically replaced with specific user inputs or data at runtime. They allow the same prompt structure to be reused while adapting to different inputs.
Serialization in this context involves formatting and preparing the prompt by replacing these variables with actual values before passing it to the model. This ensures the model receives a fully constructed prompt, enabling it to generate contextually relevant responses based on the provided inputs.
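Concretely, serialization ends with a JSON request body. The sketch below fills a template variable and serializes the resulting message list into roughly the shape a chat completion API receives (the `serializePrompt` helper is made up for illustration, and the request shape is simplified):

```typescript
type ChatMessage = { role: "system" | "user"; content: string };

// Fill the template, then serialize the message list into the JSON body
// that a chat completion API ultimately receives.
function serializePrompt(template: string, values: Record<string, string>): string {
  // Replace each {variable} with its value; leave unknown placeholders as-is.
  const content = template.replace(/\{(\w+)\}/g, (_, name) => values[name] ?? `{${name}}`);
  const messages: ChatMessage[] = [
    { role: "system", content: "You are a helpful assistant that provides interesting facts." },
    { role: "user", content },
  ];
  return JSON.stringify({ model: "gpt-3.5-turbo", messages });
}

const body = serializePrompt("Tell me an interesting fact about {favorite_animal}.", {
  favorite_animal: "owl",
});
```

LangChain performs this variable substitution and message construction for you via formatMessages; the sketch just makes the intermediate steps visible.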
The example below demonstrates prompt variables and serialization: we define a prompt with a variable, and the template serializes the user's input into the final prompt before the model generates a response.
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import * as dotenv from "dotenv";
import * as readline from "readline";

dotenv.config();

// Initialize readline interface for terminal input
const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

export async function main() {
  // Create the model
  const model = new ChatOpenAI({
    modelName: "gpt-3.5-turbo",
    temperature: 0.7,
    openAIApiKey: process.env.OPENAI_API_KEY!, // Ensure the API key is loaded
  });

  // Prompt the user for input from the terminal
  rl.question("Please enter your favorite animal: ", async (inputAnimal) => {
    // Prompt template containing the {favorite_animal} variable
    const prompt = ChatPromptTemplate.fromMessages([
      ["system", "You are a helpful assistant that provides interesting facts."],
      ["human", "Tell me an interesting fact about {favorite_animal}."],
    ]);

    // Serialization: formatMessages replaces the variable with the user's input,
    // producing the fully constructed messages the model receives
    const formattedMessages = await prompt.formatMessages({ favorite_animal: inputAnimal });

    // Log the serialized conversation before generation
    formattedMessages.forEach((message) => {
      console.log(`[${message._getType().toUpperCase()}]: ${message.content}`);
    });

    // Generate the response using the model
    const response = await model.generate([formattedMessages]);

    // Output the generated fact
    console.log(`[ASSISTANT]: ${response.generations[0][0].text}`);

    // Close the readline interface
    rl.close();
  });
}

main().catch(console.error);
Explanation:
- Prompt Variables: The placeholder {favorite_animal} in the prompt template is replaced with the actual user input at runtime.
- Serialization: The prompt.formatMessages function serializes the input into a structured format (replacing variables) before sending it to the model for generation.
This simple example takes an animal name as input and generates an interesting fact about that animal using the model.
Advanced/Custom Prompts
Advanced/Custom Prompts in LangChain involve structuring and fine-tuning how a language model interacts with inputs by defining a multi-step conversation flow. Instead of sending a single prompt, custom prompts use a combination of system, assistant, and user messages to guide the model’s behavior, tone, and response style. This approach allows for greater control over the model's output, enabling complex interactions such as stories, creative tasks, or specific domain-focused tasks, and ensures that the model adheres to the desired structure throughout the conversation.
The example below shows an advanced/custom prompt that asks the assistant to generate a fictional story based on the user's input for a character's name and a location:
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import * as dotenv from "dotenv";
import * as readline from "readline";

dotenv.config();

// Initialize readline interface for terminal input
const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

export async function main() {
  // Create the model
  const model = new ChatOpenAI({
    modelName: "gpt-3.5-turbo",
    temperature: 0.9, // Higher temperature for more creative storytelling
    openAIApiKey: process.env.OPENAI_API_KEY!, // Ensure the API key is loaded
  });

  // Ask for the two story inputs
  rl.question("Enter a character's name: ", (characterName) => {
    rl.question("Enter a location: ", async (location) => {
      // Multi-step prompt: the system message sets the role, the assistant
      // message sets the tone, and the human message carries the actual request
      const prompt = ChatPromptTemplate.fromMessages([
        ["system", "You are a creative fiction writer who crafts short, vivid stories."],
        ["assistant", "I will write a short story with a clear beginning, middle, and end."],
        ["human", "Write a fictional story about {character} set in {location}."],
      ]);

      // Fill both variables with the user's input
      const formattedMessages = await prompt.formatMessages({
        character: characterName,
        location: location,
      });

      // Log the conversation flow before generation
      formattedMessages.forEach((message) => {
        console.log(`[${message._getType().toUpperCase()}]: ${message.content}`);
      });

      // Generate the story
      const response = await model.generate([formattedMessages]);

      // Output the generated story
      console.log(`[ASSISTANT]: ${response.generations[0][0].text}`);

      // Close the readline interface
      rl.close();
    });
  });
}

main().catch(console.error);
Explanation:
- The code prompts the user for a character's name and a location.
- It formats a chat prompt using ChatPromptTemplate with system and assistant messages to instruct the model.
- The model responds with a fictional story based on the inputs.
Conclusions
In this guide, we explored the versatility and power of LangChain prompts in unlocking the full potential of AI-driven applications. From basic text prompts to advanced techniques like chained prompts, conditional prompts, and few-shot learning, LangChain provides a comprehensive toolkit for developers to craft precise, context-aware, and impactful prompts.
By mastering these techniques, you can design AI-driven systems that not only understand user inputs but also respond with creativity and accuracy. Whether you're building conversational agents, virtual assistants, or innovative automation tools, LangChain's flexible prompting system ensures that your applications deliver exceptional user experiences.
As you continue your journey with LangChain, experiment with different prompt structures, customize them for your use case, and fine-tune them for optimal performance. The ability to craft effective prompts is a game-changing skill that sets the foundation for building intelligent, AI-powered solutions. So dive in, innovate, and harness the true potential of LangChain prompts!