Unlock Long-Term Memory with LangChain: Persistent Conversations Simplified
Long-Term Memory
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, AIMessage } from "@langchain/core/messages";
import * as dotenv from "dotenv";

dotenv.config();

async function main() {
  const chatModel = new ChatOpenAI({
    modelName: "gpt-3.5-turbo",
    temperature: 0.7,
    openAIApiKey: process.env.OPENAI_API_KEY!,
  });

  // Simple in-memory storage for chat history
  const memoryStore: { role: string; content: string }[] = [
    { role: "human", content: "Hi! What can you do?" },
    {
      role: "ai",
      content:
        "Hello! I can assist with a variety of tasks, such as answering questions, summarizing content, or helping with programming.",
    },
  ];

  // Add a new message to the memory store
  function addToMemory(role: string, content: string) {
    memoryStore.push({ role, content });
  }

  // New user input
  const userInput = "Can you remind me what you said earlier?";
  addToMemory("human", userInput);

  // Convert the memory store into LangChain message objects
  const conversationHistory = memoryStore.map((msg) =>
    msg.role === "human" ? new HumanMessage(msg.content) : new AIMessage(msg.content)
  );

  // Generate the AI response; invoke() replaces the deprecated call() method
  const response = await chatModel.invoke(conversationHistory);
  const responseText =
    typeof response.content === "string" ? response.content : "";

  // Add the AI response to memory
  addToMemory("ai", responseText);

  // Output the full conversation
  console.log("Conversation History:");
  memoryStore.forEach((msg) => {
    console.log(`${msg.role.toUpperCase()}: ${msg.content}`);
  });

  console.log("\nAI Response:");
  console.log(responseText);
}

main().catch(console.error);
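Because the full history is resent to the model on every call, the prompt grows with each turn and will eventually exceed the model's context window. Below is a minimal sketch of one common mitigation, keeping only the most recent messages; the `trimMemory` helper and the window size are illustrative additions, not part of the script above (a production system might instead count tokens or summarize older turns):

```typescript
type StoredMessage = { role: string; content: string };

// Keep only the last `maxMessages` entries so the prompt stays bounded.
function trimMemory(memory: StoredMessage[], maxMessages: number): StoredMessage[] {
  return memory.slice(-maxMessages);
}

const history: StoredMessage[] = [
  { role: "human", content: "Hi! What can you do?" },
  { role: "ai", content: "I can answer questions." },
  { role: "human", content: "Tell me a joke." },
  { role: "ai", content: "Why did the array cross the road?" },
  { role: "human", content: "Can you remind me what you said earlier?" },
];

// Only the three most recent messages are sent to the model.
const recent = trimMemory(history, 3);
console.log(recent.map((m) => m.role).join(","));
// → human,ai,human
```

Trimming is lossy, so anything older than the window is forgotten; choose the window size to balance context retention against prompt cost.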
The script above demonstrates how to implement conversation memory with LangChain's ChatOpenAI model. It stores the conversation history in a simple in-memory array and replays the entire history on each call, so the model retains context across multiple interactions and can produce context-aware responses. Note that a plain array is not truly long-term memory: it disappears when the process exits. Whether you're building chatbots, virtual assistants, or other interactive AI tools, the same pattern applies; for genuinely persistent conversations you would back the array with durable storage such as a file, database, or vector store.
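To make the memory survive restarts, one simple approach is to serialize the history to disk between runs. This is a minimal sketch under assumed names: the `chat_memory.json` path, `loadMemory`, and `saveMemory` are hypothetical additions for illustration, not part of the original script or of LangChain's API:

```typescript
import * as fs from "fs";

type StoredMessage = { role: string; content: string };

// Hypothetical file used to persist the chat history between process runs.
const MEMORY_FILE = "chat_memory.json";

// Load prior history from disk, or start fresh if no file exists yet.
function loadMemory(): StoredMessage[] {
  if (fs.existsSync(MEMORY_FILE)) {
    return JSON.parse(fs.readFileSync(MEMORY_FILE, "utf-8"));
  }
  return [];
}

// Write the full history back so the next run picks up the same context.
function saveMemory(memory: StoredMessage[]): void {
  fs.writeFileSync(MEMORY_FILE, JSON.stringify(memory, null, 2));
}

const memoryStore = loadMemory();
memoryStore.push({ role: "human", content: "Hi! What can you do?" });
saveMemory(memoryStore);

console.log(`Persisted ${loadMemory().length} message(s)`);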
https://js.langchain.com/docs/

Conclusion
...