In this basic tutorial we will build a LangGraph agent in Python, with no LLM, as an example.
Our agent will do the following:
- Receive a user's question
- Decide whether it needs to search for information (simulated)
- Respond with or without searching
- Save state between nodes
First, we will install the libraries:

```python
pip install langgraph langchain openai
```
Key Concepts of LangGraph
| n8n | LangGraph |
|---|---|
| Node | Function |
| Workflow | Graph |
| Global variables | State |
| IF condition | Conditional edges |
Now we are going to use this example:

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

# 1. Define the global state
class AgentState(TypedDict):
    question: str
    needs_search: bool
    answer: str

# 2. Node: decide whether to search
def decide_search(state: AgentState):
    question = state["question"]
    needs_search = "who" in question.lower() or "what is" in question.lower()
    print("Decide search:", needs_search)
    return {"needs_search": needs_search}

# 3. Node: fake (simulated) search
def search_node(state: AgentState):
    print("Looking for info...")
    fake_info = "LangGraph is a library of graphs for AI agents."
    return {"answer": f"Found this: {fake_info}"}

# 4. Node: respond without searching
def answer_direct(state: AgentState):
    return {"answer": "I don't need to search, I will respond with my base knowledge."}

# 5. Create the graph
builder = StateGraph(AgentState)
builder.add_node("decide_search", decide_search)
builder.add_node("search", search_node)
builder.add_node("direct_answer", answer_direct)

# Initial flow
builder.set_entry_point("decide_search")

# IF-style conditional
builder.add_conditional_edges(
    "decide_search",
    lambda state: "search" if state["needs_search"] else "direct_answer",
)

# End edges
builder.add_edge("search", END)
builder.add_edge("direct_answer", END)

graph = builder.compile()

# 6. Execute
if __name__ == "__main__":
    result = graph.invoke({"question": "What is LangGraph?"})
    print("\nFinal result:")
    print(result)
```
```python
class AgentState(TypedDict):
    question: str
    needs_search: bool
    answer: str
```
What is State in LangGraph
It’s the object that travels between nodes.
Like global workflow variables in n8n.
```python
state = {
    "question": "What is LangGraph?",
    "needs_search": True,
    "answer": "text",
}
```
Why TypedDict
Because LangGraph needs to know which fields the state contains and what type each one has.
It's not required, but highly recommended.
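To see why: a `TypedDict` is still a plain dict at runtime, so nothing changes in execution; the annotations exist for static checkers (mypy, pyright) and for LangGraph's state schema. A minimal sketch:

```python
from typing import TypedDict

class AgentState(TypedDict):
    question: str
    needs_search: bool
    answer: str

# At runtime this is just a dict -- the type only guides tooling.
state: AgentState = {
    "question": "What is LangGraph?",
    "needs_search": False,
    "answer": "",
}
print(type(state) is dict)  # True
```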
```python
def decide_search(state: AgentState):
    question = state["question"]
    needs_search = "who" in question.lower() or "what is" in question.lower()
    print("Decide search:", needs_search)
    return {"needs_search": needs_search}
```
What is a node in LangGraph
A node is a plain Python function.
It always receives the current state and returns a partial update, for example:

```python
{"needs_search": False}
```

LangGraph automatically merges this update into the global state.
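Conceptually, that merge is a dict update. This is a simplified sketch of the mechanics, not LangGraph's actual implementation (which also supports reducers for concurrent branches):

```python
# Global state before the node runs
state = {"question": "What is LangGraph?", "needs_search": False, "answer": ""}

# Partial update returned by a node
node_output = {"needs_search": True}

# Conceptual merge: the node's keys overwrite the matching state keys
state = {**state, **node_output}
print(state["needs_search"])  # True
```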
```python
def search_node(state: AgentState):
    print("Searching...")
    fake_info = "LangGraph is an AI graph library."
    return {"answer": f"Found this: {fake_info}"}
```
Important
This node simulates a tool (like Google Search, RAG, or a database).
In a real system, this is where the actual API call, retrieval, or query would go.
The node returns only:

```python
{"answer": "..."}
```
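A sketch of what a real version could look like. `search_web` is a hypothetical helper standing in for a Google Search API, RAG retriever, or database query; here it is stubbed so the example runs offline:

```python
def search_web(query: str) -> str:
    # Hypothetical tool -- replace with a real API call, retriever, or DB query.
    return f"(stub) top result for: {query}"

def search_node(state: dict) -> dict:
    # Call the tool with the user's question and return a partial state update.
    info = search_web(state["question"])
    return {"answer": f"Found this: {info}"}

print(search_node({"question": "What is LangGraph?"}))
```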
Node 3: direct response
```python
def answer_direct(state: AgentState):
    return {"answer": "I don't need to search, I will respond with my base knowledge."}
```
Simple node that responds directly.
In a real agent, it would be an LLM call.
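For illustration, here is how that could look with the LLM stubbed out. `fake_llm` is a hypothetical stand-in so the sketch runs offline; in a real agent you would swap it for a chat model call:

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for a real LLM call (e.g. a chat model's invoke method).
    return f"(stubbed LLM reply to: {prompt!r})"

def answer_direct(state: dict) -> dict:
    # Answer directly from the model, without any search step.
    return {"answer": fake_llm(state["question"])}

print(answer_direct({"question": "What is LangGraph?"}))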
builder = StateGraph(AgentState)
What is builder
It is the workflow editor: you tell it which nodes exist and how they connect.

```python
builder.add_node("decide_search", decide_search)
builder.add_node("search", search_node)
builder.add_node("direct_answer", answer_direct)
```
The format is:

```python
add_node("node_name", function)
```

It is the same as creating nodes visually in n8n.
```python
builder.set_entry_point("decide_search")
```
This defines where the workflow starts.
It would be the trigger node in n8n.
```python
builder.add_conditional_edges(
    "decide_search",
    lambda state: "search" if state["needs_search"] else "direct_answer",
)
```
This is key.
After decide_search, LangGraph executes the routing function (the lambda state: ...):
- If needs_search == True → it goes to the "search" node
- If not → it goes to the "direct_answer" node
It’s literally:
IF needs_search → search ELSE → direct_answer
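The same routing can be written as a named function instead of a lambda; this is functionally identical, just easier to read and test in isolation:

```python
def route(state: dict) -> str:
    # Pick the next node's name based on the current state.
    return "search" if state["needs_search"] else "direct_answer"

print(route({"needs_search": True}))   # search
print(route({"needs_search": False}))  # direct_answer
```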
Final nodes
```python
builder.add_edge("search", END)
builder.add_edge("direct_answer", END)
```
This means:
- After search → finish
- After direct_answer → finish
If you omit these edges, LangGraph won't know when to stop.
```python
graph = builder.compile()
```
This compiles the builder into an executable graph (the runtime engine).
```python
result = graph.invoke({"question": "What is LangGraph?"})
```
What does invoke do
- Creates the initial state
- Executes nodes in order
- Follows the conditional edges
- Merges states
- Returns the final state
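The steps above can be simulated by hand for this particular graph, using plain Python. This sketch is only a conceptual model of what invoke does here, not LangGraph's real engine:

```python
def decide_search(state):
    q = state["question"].lower()
    return {"needs_search": "who" in q or "what is" in q}

def search_node(state):
    return {"answer": "Found this: LangGraph is a library of graphs for AI agents."}

def answer_direct(state):
    return {"answer": "I don't need to search, I will respond with my base knowledge."}

def invoke(initial):
    state = dict(initial)                 # create the initial state
    state.update(decide_search(state))    # entry point node, merged into state
    nxt = search_node if state["needs_search"] else answer_direct  # conditional edge
    state.update(nxt(state))              # chosen node runs, then END
    return state                          # final state

print(invoke({"question": "What is LangGraph?"}))
```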
Initial state
```python
{"question": "What is LangGraph?"}
```
LangGraph will add the fields returned by each node (needs_search and answer).
```python
print(result)
```
Returns something like:

```python
{
    "question": "What is LangGraph?",
    "needs_search": True,
    "answer": "Found this: LangGraph is a library of graphs for AI agents.",
}
```
And to visualize the graph:

```python
with open("summary_workflow.png", "wb") as f:
    f.write(graph.get_graph().draw_mermaid_png())
```
