Hello everyone, and welcome to this tutorial! In this post, we will create a comprehensive, end-to-end multi-AI-agent application using LangGraph, AstraDB, and Llama 3.1. If you are unfamiliar with LangGraph, it is a framework for building stateful, multi-actor LLM applications: it manages the agents' state and the communication between them. We will walk through the project architecture, implement it step by step with these tools, and conclude with the final working application.
The architecture of our project includes a query router that decides where each user question should go, an AstraDB vector store for questions covered by our documents, a Wikipedia search tool for everything else, and Llama 3.1 as the underlying language model.
We will start by creating a vector database with AstraDB and then add nodes to handle user queries. Here’s a step-by-step guide:
First, create a serverless vector database in the AstraDB console: sign in at astra.datastax.com, choose "Create Database", and give it a name (for example, testDatabase) and a region (for example, us-east-1).
Next, generate an application token, store it securely, and note your database ID.
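A common pattern is to keep these credentials out of the code. Here is a minimal sketch that reads them from environment variables (the variable names are my own choice, not fixed by AstraDB):

import os

# Assumed environment variable names; set these in your shell or a .env file
ASTRA_DB_APPLICATION_TOKEN = os.environ["ASTRA_DB_APPLICATION_TOKEN"]
ASTRA_DB_ID = os.environ["ASTRA_DB_ID"]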
We will connect to AstraDB using cassio, the library that bridges Cassandra/AstraDB and LangChain:

import cassio

cassio.init(token=ASTRA_DB_APPLICATION_TOKEN, database_id=ASTRA_DB_ID)
We'll read data from websites, split it into chunks, and convert the chunks to vectors before storing them in the database:

from langchain_community.document_loaders import WebBaseLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Cassandra

# Loading data from the web
urls = ["https://example.com/agent", "https://example.com/prompt", "https://example.com/adversarial"]
docs = [WebBaseLoader(url).load() for url in urls]
docs_list = [item for sublist in docs for item in sublist]

# Splitting data into chunks
text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=0)
doc_splits = text_splitter.split_documents(docs_list)

# Converting chunks to vectors and storing them in AstraDB
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
vector_store = Cassandra(embedding=embeddings, table_name="QA_mini_demo", session=None, keyspace=None)
vector_store.add_documents(doc_splits)
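Before wiring this into the graph, it is worth sanity-checking retrieval. A quick sketch (the test query is just an example):

# Expose the vector store as a LangChain retriever and run a test query
retriever = vector_store.as_retriever()
docs = retriever.invoke("What is an agent?")
print(docs[0].page_content[:200])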
For our language model, we will use Llama 3.1 served through the Groq API:

from langchain_groq import ChatGroq

llm = ChatGroq(groq_api_key=groq_api_key, model_name="llama-3.1-70b-versatile")
# RouteQuery is the routing schema; a sketch of it follows below
structured_llm = llm.with_structured_output(RouteQuery)
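The RouteQuery schema referenced above constrains the model to answer with one of two data sources. A minimal sketch as a Pydantic model (the field name and allowed values are assumptions, chosen to match the routing logic below):

from typing import Literal
from pydantic import BaseModel, Field

class RouteQuery(BaseModel):
    """Route a user query to the most relevant data source."""
    datasource: Literal["vectorstore", "wiki_search"] = Field(
        description="Given a user question, choose to route it to the vector store or Wikipedia."
    )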
Prompt engineering will help route user queries to the appropriate node:

from langchain_core.prompts import ChatPromptTemplate

# Define the routing prompt
system = (
    "You are an expert at routing user queries to either the vector store or Wikipedia. "
    "Use the vector store for questions on agents, prompt engineering, and adversarial attacks; "
    "otherwise, use wiki search."
)
route_prompt = ChatPromptTemplate.from_messages([("system", system), ("human", "{question}")])

# Creating a question router by chaining the prompt into the structured LLM
question_router = route_prompt | structured_llm
response = question_router.invoke({"question": "What is an agent?"})
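The graph we are about to build refers to a shared state and to node functions. Here is a minimal sketch of those pieces, assuming the retriever created earlier and Wikipedia search via langchain_community (the exact state fields are my own convention, not required by LangGraph):

from typing import List
from typing_extensions import TypedDict
from langchain_community.tools import WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper

wiki = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper(top_k_results=1, doc_content_chars_max=500))

class GraphState(TypedDict):
    question: str
    documents: List[str]

def retrieve(state):
    # Pull relevant chunks from the AstraDB vector store
    documents = retriever.invoke(state["question"])
    return {"documents": documents, "question": state["question"]}

def wiki_search(state):
    # Fall back to Wikipedia for out-of-domain questions
    result = wiki.invoke({"query": state["question"]})
    return {"documents": [result], "question": state["question"]}

def route_question(state):
    # Ask the structured LLM which data source to use
    source = question_router.invoke({"question": state["question"]})
    return source.datasource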
We will define the different nodes and connect them with conditional edges:

from langgraph.graph import StateGraph, START, END

workflow = StateGraph(GraphState)
# Defining nodes
workflow.add_node("wiki_search", wiki_search)
workflow.add_node("retrieve", retrieve)
# Adding edges: route_question decides where each query starts
workflow.add_conditional_edges(START, route_question, {"wiki_search": "wiki_search", "vectorstore": "retrieve"})
workflow.add_edge("retrieve", END)
workflow.add_edge("wiki_search", END)
# Compile the workflow
app = workflow.compile()
Finally, let’s run the application. The input is a dict matching the graph state:

for output in app.stream({"question": "What is an agent?"}):
    print(output)
Because the question is about agents, the router sends it to the vector store, and the streamed output comes from the retrieve node.
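If you want just the final state rather than per-node streaming output, a compiled LangGraph graph can also be invoked directly. A short sketch, assuming the GraphState fields above:

# invoke() runs the graph to completion and returns the final state
final_state = app.invoke({"question": "What is an agent?"})
print(final_state["documents"])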
This comprehensive project combines multiple AI agents using LangGraph and AstraDB, and integrates with an LLM such as Llama 3.1. By following these steps, you can effectively manage and route user queries to appropriate data sources, making your chatbots and AI tools more versatile and efficient.
Thank you for following along! For an in-depth tutorial, please watch the video using the link below: