In this tutorial, we walk you through building a fully functional chatbot on Google Colab that uses Anthropic's Claude model alongside Mem0 for seamless memory recall. Combining LangGraph's intuitive state-machine orchestration with Mem0's robust vector-based memory store allows our assistant to remember past conversations, retrieve relevant details on demand, and maintain natural continuity across sessions. Whether you're building support bots, virtual assistants, or interactive demos, this guide equips you with a solid foundation for memory-driven AI experiences.
!pip install -qU langgraph mem0ai langchain langchain-anthropic anthropic
First, we install and upgrade LangGraph, the Mem0 AI client, LangChain with its Anthropic connector, and the core Anthropic SDK, ensuring we have the latest versions of every library required to build a memory-driven Claude chatbot in Google Colab. Installing them upfront avoids dependency conflicts and keeps the setup process smooth.
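As an optional sanity check (an illustrative snippet, not part of the original tutorial), you can confirm that the packages installed correctly before proceeding:

# Optional sanity check: print the installed version of each package.
import importlib.metadata

for pkg in ["langgraph", "mem0ai", "langchain", "langchain-anthropic", "anthropic"]:
    print(pkg, importlib.metadata.version(pkg))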
import os
from typing import Annotated, TypedDict, List
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage
from langchain_anthropic import ChatAnthropic
from mem0 import MemoryClient
We import the essential building blocks for our Colab chatbot: os for the API-key environment variables, Python's typing helpers (Annotated, TypedDict, List) to define the conversation state, LangGraph's StateGraph and add_messages to orchestrate the chat flow, LangChain's message classes and ChatAnthropic wrapper for Claude, and Mem0's MemoryClient for persistent memory.
os.environ["ANTHROPIC_API_KEY"] = "Use Your Own API Key"
MEM0_API_KEY = "Use Your Own API Key"
We inject our Anthropic and Mem0 credentials into an environment variable and a local variable, so the ChatAnthropic client and the Mem0 memory store can authenticate without hard-coding sensitive keys throughout the notebook. Centralizing the API keys here keeps the code cleanly separated from secrets while still giving seamless access to the Claude model and the persistent memory layer.
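If you prefer not to paste keys directly into a cell, a minimal alternative (a sketch using only the standard library, not part of the original setup) is to prompt for them at runtime:

# Alternative: prompt for the keys at runtime instead of hard-coding them.
import os
from getpass import getpass

os.environ["ANTHROPIC_API_KEY"] = getpass("Anthropic API key: ")
MEM0_API_KEY = getpass("Mem0 API key: ")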
llm = ChatAnthropic(
model="claude-3-5-haiku-latest",
temperature=0.0,
max_tokens=1024,
anthropic_api_key=os.environ["ANTHROPIC_API_KEY"]
)
mem0 = MemoryClient(api_key=MEM0_API_KEY)
We initialize our conversational AI core: first, we create a ChatAnthropic instance configured to use Claude 3.5 Haiku at temperature zero for deterministic answers, with up to 1,024 tokens per response, authenticating with the Anthropic key we stored earlier. Then we spin up a Mem0 MemoryClient with our Mem0 API key, giving the bot a persistent, vector-based memory store to save and retrieve past interactions with ease.
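Before wiring anything into a graph, a quick smoke test (optional and illustrative) confirms both clients are authenticated:

# Optional smoke test: verify that both clients respond.
resp = llm.invoke("Reply with the single word: ready")
print(resp.content)  # Expect something like "ready"

# A search for a fresh user ID should simply return no results.
print(mem0.search("anything", user_id="smoke_test_user"))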
class State(TypedDict):
    messages: Annotated[List[HumanMessage | AIMessage], add_messages]
    mem0_user_id: str

graph = StateGraph(State)

def chatbot(state: State):
    messages = state["messages"]
    user_id = state["mem0_user_id"]
    # Retrieve memories relevant to the user's latest message.
    memories = mem0.search(messages[-1].content, user_id=user_id)
    context = "\n".join(f"- {m['memory']}" for m in memories)
    system_message = SystemMessage(content=(
        "You are a helpful customer support assistant. "
        "Use the context below to personalize your answers:\n" + context
    ))
    full_msgs = [system_message] + messages
    ai_resp: AIMessage = llm.invoke(full_msgs)
    # Persist the new exchange so future turns can recall it.
    mem0.add(
        f"User: {messages[-1].content}\nAssistant: {ai_resp.content}",
        user_id=user_id
    )
    return {"messages": [ai_resp]}
We define the conversation state schema and hook it into the LangGraph state machine: the State TypedDict tracks the message history and the Mem0 user ID, and graph = StateGraph(State) sets up the flow controller. Inside chatbot, the user's latest message is used to query Mem0 for relevant memories, a system prompt is built with that retrieved context, Claude generates the reply, and the new exchange is saved back to Mem0 before the assistant's response is returned.
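To make the retrieval step concrete, here is a small, self-contained sketch of the result shape the code above assumes mem0.search returns (a list of dicts with a "memory" key; the real payload may carry extra fields such as scores and IDs):

# Illustrative only: the structure the chatbot node expects from search results.
fake_memories = [
    {"memory": "User's name is Alice."},
    {"memory": "Alice prefers email support over phone calls."},
]

context = "\n".join(f"- {m['memory']}" for m in fake_memories)
print(context)
# - User's name is Alice.
# - Alice prefers email support over phone calls.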
graph.add_node("chatbot", chatbot)
graph.add_edge(START, "chatbot")
graph.add_edge("chatbot", "chatbot")
compiled_graph = graph.compile()
We wire our chatbot function into the LangGraph flow, registering it as a node named "chatbot" and connecting the built-in START marker to that node, which is where each conversation begins. We then add a self-loop edge so every new user message re-enters the same logic. Calling graph.compile() transforms this node-and-edge configuration into an optimized graph object that automatically manages each turn of the chat session.
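If you want to confirm the wiring, a compiled LangGraph graph can render its own topology; this optional one-liner assumes the grandalf package is installed (pip install grandalf):

# Optional: print an ASCII diagram of the compiled graph's nodes and edges.
compiled_graph.get_graph().print_ascii()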
def run_conversation(user_input: str, mem0_user_id: str):
    config = {"configurable": {"thread_id": mem0_user_id}}
    state = {"messages": [HumanMessage(content=user_input)], "mem0_user_id": mem0_user_id}
    for event in compiled_graph.stream(state, config):
        for node_output in event.values():
            if node_output.get("messages"):
                print("Assistant:", node_output["messages"][-1].content)
                # Return after the first reply; the self-loop would otherwise keep streaming.
                return
if __name__ == "__main__":
    print("Welcome! (type 'exit' to quit)")
    mem0_user_id = "customer_123"
    while True:
        user_in = input("You: ")
        if user_in.lower() in ["exit", "quit", "bye"]:
            print("Assistant: Goodbye!")
            break
        run_conversation(user_in, mem0_user_id)
We tie everything together by defining run_conversation, which wraps the user's input in LangGraph state, streams it through the compiled graph to invoke the chatbot node, and prints Claude's reply. The __main__ guard then launches a simple REPL loop that prompts us for messages, routes them through our memory-backed graph, and exits gracefully when we type "exit".
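For instance, two turns in a row (a hypothetical exchange, shown only to illustrate memory continuity) might look like this:

# Hypothetical two-turn session illustrating memory recall across calls.
run_conversation("Hi, my name is Alice and my order #1234 hasn't arrived.", "customer_123")
run_conversation("Any update on my order?", "customer_123")
# On the second turn, Mem0 retrieves the earlier exchange, so the assistant
# can reference Alice and order #1234 without being told again.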
In summary, we have assembled a conversational AI pipeline that pairs Anthropic's latest Claude model with Mem0's persistent memory capabilities, all orchestrated through LangGraph in Google Colab. This architecture lets our bot recall user-specific details, adapt its responses over time, and deliver personalized support. From here, consider experimenting with richer memory strategies, fine-tuning Claude's prompts, or integrating additional tools into the graph.
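As one example of a richer memory strategy, you could tag exchanges when saving them and cap how many retrieved memories reach the prompt. The sketch below assumes Mem0's hosted API accepts a metadata argument on add(), which its documentation describes; adjust to your plan:

# Sketch: tag saved exchanges and keep the prompt lean (illustrative only).
mem0.add(
    "User: I was double-charged in March.\nAssistant: Refund issued for the duplicate charge.",
    user_id="customer_123",
    metadata={"topic": "billing"},  # assumed metadata support on the hosted API
)

# Use only the top few retrieved memories when building the context string.
memories = mem0.search("billing issue", user_id="customer_123")
top_context = "\n".join(f"- {m['memory']}" for m in memories[:3])
print(top_context)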
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for social good. His most recent endeavor is the launch of the AI media platform Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable to a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.