Today we are going to learn how to connect LangChain to our deployed Ollama server, with a small example of an AI tool.

First, you need Ollama deployed: here is how.
Once Ollama is deployed you will have its endpoint, and you can then use LangChain with Python to connect to it remotely.
You need the full address (IP or domain plus port) where your Ollama server is listening.
Make sure that:
- Ollama listens on an external interface (OLLAMA_HOST=0.0.0.0), not only on localhost;
- the port (11434 by default) is open in your firewall;
- the model you plan to use has already been downloaded on the server.
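Before configuring LangChain, you can check that the server is reachable from your machine. The snippet below is a minimal sketch (the URL is a placeholder to replace with your own); it calls Ollama's /api/tags endpoint, which lists the models installed on the server.

import requests

# ⚠️ Placeholder: replace with your server's IP/domain and port
OLLAMA_SERVER_URL = "http://YOUR_IP_OR_DOMAIN:11434"

try:
    # /api/tags lists the models installed on the Ollama server
    response = requests.get(f"{OLLAMA_SERVER_URL}/api/tags", timeout=5)
    response.raise_for_status()
    models = [m["name"] for m in response.json().get("models", [])]
    print(f"Server reachable. Available models: {models}")
except requests.RequestException as e:
    print(f"Cannot reach Ollama: {e} (check the firewall and OLLAMA_HOST)")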
Configure LangChain
Install the library:
pip install langchain langchain-community langchain-core
Configure Ollama URL: Import the Ollama class and specify your server’s URL using the base_url parameter.
from langchain_community.llms import Ollama
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser

# ⚠️ CRITICAL STEP: replace with your server's IP/domain and port
OLLAMA_SERVER_URL = "http://YOUR_IP_OR_DOMAIN:11434"
MODEL_NAME = "llama3.2:3b"  # Ensure this model is downloaded on your server

# 1. Initialize the LLM with the remote server URL
try:
    llm = Ollama(
        model=MODEL_NAME,
        base_url=OLLAMA_SERVER_URL,
        temperature=0.7  # Optional: controls randomness (lower = more deterministic)
    )
    print(f"Ollama client configured for: {OLLAMA_SERVER_URL}")
except Exception as e:
    print(f"Error initializing the connection. Check the URL and server accessibility. Detail: {e}")
# Note: the actual network call only happens on the first request;
# if it fails there, it is a network issue (firewall or OLLAMA_HOST misconfigured)
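The PromptTemplate and StrOutputParser imports come into play when you wrap the model in a small chain. The following is a minimal sketch (the prompt wording and the question are just examples):

# 2. Build a simple prompt -> model -> string-output chain (LCEL syntax)
prompt = PromptTemplate.from_template(
    "Answer the following question concisely:\n{question}"
)
chain = prompt | llm | StrOutputParser()

# The chain is invoked with a dictionary holding the template variables
answer = chain.invoke({"question": "What is LangChain used for?"})
print(answer)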
Testing Connection
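A quick way to verify the remote connection is to send a single prompt directly to the model. The snippet below is a minimal sketch reusing the llm object configured above (the prompt text is just an example):

# Send one prompt to the remote model; this is where the actual
# network call to the Ollama server happens
try:
    response = llm.invoke("Say hello in one short sentence.")
    print(f"Model response: {response}")
except Exception as e:
    print(f"Request failed: {e} (check the firewall and OLLAMA_HOST on the server)")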
