Today we are going to learn how to use Tools in LangChain to create AI Agents. Before starting this tutorial, I recommend reading the one on how to install LangChain.
First: What are Tools in LangChain?
In the context of a LangChain Agent, a Tool is any external function or resource that the Large Language Model (LLM) can invoke to obtain information or perform an action beyond its own knowledge or computational capabilities.
Think of the Agent as a CEO: it can think and reason, but it needs its “employees” (the Tools) to do the real work (searching data, calculating, sending emails, etc.).
The simplest and most common way to create a Tool in LangChain is to wrap a normal Python function so that the LLM can understand its purpose and how to invoke it.
Example 1: A simple calculation function
If you want your Agent to add numbers, remember that the LLM cannot be trusted to do the calculation perfectly itself, but it can use a Tool:
```python
from langchain_core.tools import tool

# @tool: This decorator converts the Python function into a LangChain Tool.
@tool
def sum_two_numbers(a: int, b: int) -> int:
    """Adds two integers and returns the result."""
    return a + b

# The Agent will read:
# - The name of the function: sum_two_numbers
# - The docstring: "Adds two integers and returns the result."
#   (This is the key for it to know when to use it!)
# - The parameters: a and b (which the LLM must provide)
```
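Before giving the Tool to an Agent, you can inspect and invoke it directly. Here is a minimal sketch using the `sum_two_numbers` Tool defined above:

```python
# A @tool-decorated function becomes a StructuredTool carrying the
# metadata the LLM will see:
print(sum_two_numbers.name)         # "sum_two_numbers"
print(sum_two_numbers.description)  # the docstring
print(sum_two_numbers.args)         # the parameter schema for a and b

# Tools are invoked with a dict of arguments:
result = sum_two_numbers.invoke({"a": 2, "b": 3})
print(result)  # 5
```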
2. External Resources (Prebuilt Tools)
Tools can also be wrappers around complex services or APIs. LangChain already ships with many prebuilt tools ready to use:
| Tool Type | Example and Purpose |
| --- | --- |
| Internet Search | `TavilySearch` or Google Search. The Agent uses it when the question requires up-to-date information (e.g. "Who won the last NBA game?"). |
| Code Execution | `PythonREPLTool`. Allows the Agent to execute Python code for complex tasks or validation. |
| Databases | Tools for interacting with SQL or vector databases (e.g. for RAG). |
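As an illustration, here is a minimal sketch using the prebuilt Tavily search tool (this assumes a Tavily account with a `TAVILY_API_KEY` environment variable set, and the import path may differ between LangChain versions):

```python
# Prebuilt internet-search tool from langchain_community.
# Assumes TAVILY_API_KEY is set in the environment.
from langchain_community.tools.tavily_search import TavilySearchResults

search_tool = TavilySearchResults(max_results=2)

# Like any Tool, it can be invoked directly or handed to an Agent.
results = search_tool.invoke("Who won the last NBA game?")
print(results)
```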
To make your Agent (which uses your remote Ollama model) use these Tools, you need to provide them to it.
Installing the necessary libraries:
```bash
pip install langchain langchain_community
```
Now, create a connection object to ChatOllama:
```python
# 2. Importing the class from the community package
from langchain_community.chat_models import ChatOllama
from langchain_core.prompts import ChatPromptTemplate
# ... other imports

# ⚠️ YOUR REMOTE OLLAMA SERVER
OLLAMA_SERVER_URL = "URL OLLAMA"
MODEL_NAME = "llama3.2:3b"

# 3. Initializing the LLM (the rest of the code is the same)
try:
    llm = ChatOllama(
        model=MODEL_NAME,
        base_url=OLLAMA_SERVER_URL,
        temperature=0.7
    )
    print(f"✅ Connected to ChatOllama at: {OLLAMA_SERVER_URL}")
    # ... continue with bind_tools(mis_tools) and agent creation.
except Exception as e:
    print(f"❌ Error initializing or binding: {e}")
```
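Before wiring up any Tools, it is worth sanity-checking the connection with a plain chat call. A minimal sketch, assuming the `llm` object created above and a reachable Ollama server:

```python
# Quick connectivity check: a plain chat call, no Tools involved.
reply = llm.invoke("Say hello in one short sentence.")
print(reply.content)
```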
Example Tools:
```python
from langchain_core.tools import tool

@tool
def get_actual_temperature(city: str) -> str:
    """
    Returns the current temperature in a specific city.
    Useful for weather-related questions.
    """
    if "madrid" in city.lower():
        return "25°C, sunny"
    elif "london" in city.lower():
        return "12°C, cloudy and rainy"
    else:
        return f"No weather data available for {city}."

@tool
def get_stock_price(symbol: str) -> str:
    """
    Returns the current price of a stock using its symbol (e.g. 'GOOG').
    Useful for financial questions.
    """
    symbol = symbol.upper()
    if symbol == "GOOG":
        return "Price: $175.40 USD"
    elif symbol == "AAPL":
        return "Price: $190.15 USD"
    else:
        return f"Stock symbol not found: {symbol}"

mis_tools = [get_actual_temperature, get_stock_price]
```
Later, in LangChain, you use a technique called Function Calling (or Tool Calling) so that the LLM decides which tool to use in its reasoning chain.
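As a quick illustration of Tool Calling, you can bind the Tools to the model and inspect what it decides to call. This is a minimal sketch assuming the `llm` and `mis_tools` objects from the snippets above; note that it requires a model and a ChatOllama version with tool-calling support (the `ChatOllama` in the newer `langchain_ollama` package supports `bind_tools`):

```python
# Bind the Tools so the model knows their names, descriptions and parameters.
llm_with_tools = llm.bind_tools(mis_tools)

ai_msg = llm_with_tools.invoke("What is the current temperature in Madrid?")

# The model does not execute the Tool itself; it returns a structured request
# describing which Tool to call and with which arguments:
print(ai_msg.tool_calls)
# e.g. [{'name': 'get_actual_temperature', 'args': {'city': 'Madrid'}, ...}]
```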
Now we create an Agent Chain that will manage the reasoning and execution of the tools.
```python
from langchain.agents import initialize_agent, AgentType

# Initialize the agent (note: the keyword is `agent`, not `agent_type`)
agent = initialize_agent(
    tools=mis_tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)

# Execute questions
question1 = "What is the current temperature in Madrid?"
response1 = agent.invoke(question1)

question2 = "What is the price of GOOG?"
response2 = agent.invoke(question2)

print("💬 Temperature:", response1)
print("💬 Price GOOG:", response2)
```
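Note that `agent.invoke` returns a dict with "input" and "output" keys, so you will usually want just the final answer (a small sketch following the code above). Also be aware that `initialize_agent` is deprecated in recent LangChain releases in favor of newer agent constructors, so this pattern targets older versions.

```python
# agent.invoke wraps the result in a dict; the final answer is under "output".
print(response1["output"])  # e.g. "The current temperature in Madrid is 25°C, sunny"
```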
