How to activate GPU access from a Docker container, for example, to access an LLM model.

Reading time: 2 minutes. Enabling GPU access is essential if we need to run an LLM model and use the GPU's VRAM. Prerequisites: make sure you have an NVIDIA GPU installed on your machine, with the drivers installed and up to date. Check it with: nvidia-smi You should see your GPU and driver version. Additionally, Docker must be installed. NVIDIA Container … Read more
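The checks above can be sketched as shell commands. This is a minimal sketch assuming the NVIDIA Container Toolkit is already installed and registered with Docker; the CUDA image tag is illustrative, not from the original post.

```shell
# Verify the driver is visible on the host
nvidia-smi

# Run nvidia-smi *inside* a container to confirm GPU passthrough works
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi

# Expose only one specific GPU to a container
docker run --rm --gpus '"device=0"' nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

If the second command prints the same GPU table as the first, containers can see the GPU.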

Using vLLM with Docker to Deploy Our LLM Models in Production

Reading time: 2 minutes. vLLM is an optimized inference server (it uses PagedAttention) that supports models such as Llama 3, Mistral, Gemma, Phi, Qwen, etc. It offers an OpenAI-compatible API, perfect for easy integration. We will create the Docker Compose file that lets us deploy it: File: docker-compose.yml version: "3.9" services: vllm: image: vllm/vllm-openai:latest container_name: vllm restart: unless-stopped ports: - "8000:8000" … Read more
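The truncated compose file from the excerpt can be sketched in full as follows. The model name, cache volume, and GPU reservation are assumptions for illustration, not values from the original post.

```yaml
version: "3.9"
services:
  vllm:
    image: vllm/vllm-openai:latest
    container_name: vllm
    restart: unless-stopped
    ports:
      - "8000:8000"
    volumes:
      - ./models:/root/.cache/huggingface   # assumed model cache location
    command: --model mistralai/Mistral-7B-Instruct-v0.2   # assumed model
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

With this running, the OpenAI-compatible API is served at http://localhost:8000/v1.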

Types of AI Agents available with LangChain

Reading time: < 1 minute. Today we are going to see which types of AI agents are available in LangChain. ➡️ The most used. What it does: the LLM decides which tool to use at each step based only on the textual description of the tools. It needs no examples or prior instances. For: Typical usage example: agent = initialize_agent( tools=mis_tools, llm=llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, … Read more
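The zero-shot idea above can be illustrated without LangChain at all: the agent picks a tool using only each tool's textual description. In this stdlib-only sketch the "LLM" is stubbed as a keyword matcher; all names are illustrative, and real usage goes through `initialize_agent` as in the excerpt.

```python
# Minimal sketch of zero-shot tool selection: the model (stubbed here)
# chooses a tool using ONLY the tools' textual descriptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    func: Callable[[str], str]

tools = [
    Tool("calculator", "useful for arithmetic questions",
         lambda q: str(eval(q))),  # eval is fine for a toy sketch only
    Tool("greeter", "useful for greeting a person by name",
         lambda q: f"Hello, {q}!"),
]

def zero_shot_select(question: str, available: list[Tool]) -> Tool:
    # Stand-in for the LLM: score each tool by description-word overlap.
    words = set(question.lower().split())
    return max(available,
               key=lambda t: len(words & set(t.description.lower().split())))

tool = zero_shot_select("an arithmetic question about 2+3", tools)
print(tool.name, tool.func("2+3"))  # → calculator 5
```

The real agent does the same thing, except the scoring is done by the LLM reading the descriptions inside its prompt.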

Adding DuckDuckGo Internet Searches to Llama 3.2 8b Using Ollama and LangChain for AI Agents

Reading time: < 1 minute. Today we will give our AI agent a tool that allows it to perform internet searches. First, we install the necessary libraries: pip install langchain langchain_community pip install ddgs Now we initialize the connection to the Ollama server (https://devcodelight.com/ollama-con-llama-3-2-in-docker/): from langchain_community.chat_models import ChatOllama from langchain_core.prompts import ChatPromptTemplate # … other imports … # ⚠️ YOUR REMOTE … Read more
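The shape of such a search tool can be sketched without network access: a plain function whose docstring doubles as the description the agent reads. The backend here is a stub; with the `ddgs` package from the excerpt installed, it would call the real search API instead.

```python
# Sketch of a web-search tool for an agent. The search backend is stubbed
# so the example is self-contained; the dictionary content is invented.
def duckduckgo_search(query: str) -> str:
    """Search the internet and return a short text summary of the results."""
    # Stubbed backend. With `ddgs` installed, this body would instead do
    # something like: DDGS().text(query, max_results=3)
    fake_index = {
        "langchain": "LangChain is a framework for building LLM applications.",
    }
    return fake_index.get(query.lower(), "No results found.")

print(duckduckgo_search("LangChain"))
```

In LangChain, this function is then registered as a Tool so the agent can decide when a question needs a live search.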

Using Tools in LangChain 1.0 to Create AI Agents

Reading time: 3 minutes. Today we are going to learn how to use Tools in LangChain to create AI agents. To follow this tutorial properly, I recommend first visiting the one on how to install LangChain. First: what are Tools in LangChain? In the context of a LangChain Agent, a Tool is any external function or resource that the Large … Read more
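The definition above, a Tool as a named external function plus a description the model can read, can be sketched in plain Python. The decorator and registry here are illustrative, not LangChain's actual API.

```python
# Hedged sketch of the Tool concept: each tool is a function registered
# with a name and a human-readable description for the LLM.
tools = {}

def tool(description):
    """Register a function as a tool with a description the agent can read."""
    def wrap(func):
        tools[func.__name__] = {"description": description, "func": func}
        return func
    return wrap

@tool("Returns the length of a text string.")
def text_length(text: str) -> str:
    return str(len(text))

@tool("Converts text to uppercase.")
def shout(text: str) -> str:
    return text.upper()

def render_tool_prompt() -> str:
    # Roughly what gets injected into the agent's prompt.
    return "\n".join(f"{name}: {t['description']}" for name, t in tools.items())

print(render_tool_prompt())
print(tools["text_length"]["func"]("hello"))  # → 5
```

LangChain's own `@tool` decorator follows the same pattern, taking the description from the function's docstring.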

What is a Chain in LangChain?

Reading time: < 1 minute. What is a "Chain" in LangChain? It is a specific instruction: llm_chain = prompt | llm | output_parser This is a perfect, modern example of what a Chain means in the LangChain architecture (specifically using the LangChain Expression Language, or LCEL). A Chain in LangChain is the structured sequence of steps or … Read more
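What the `|` operator does in that line can be mimicked with plain Python: each stage is a callable, and `|` composes them left to right. This is a sketch of the LCEL piping idea, not LangChain's real Runnable class; the stage implementations are stand-ins.

```python
# Toy Runnable: `a | b` builds a new stage that runs a, then feeds b.
class Runnable:
    def __init__(self, func):
        self.func = func

    def invoke(self, x):
        return self.func(x)

    def __or__(self, other):
        return Runnable(lambda x: other.invoke(self.invoke(x)))

prompt = Runnable(lambda q: f"Answer briefly: {q}")
llm = Runnable(lambda p: p.upper())        # stand-in for a real model call
output_parser = Runnable(lambda s: s.strip())

llm_chain = prompt | llm | output_parser
print(llm_chain.invoke("what is a chain?"))  # → ANSWER BRIEFLY: WHAT IS A CHAIN?
```

In real LCEL, prompts, models, and parsers all implement this Runnable interface, which is why they compose with `|` and run with `.invoke()`.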

Using LangChain with Ollama to Create an AI Agent Tool

Reading time: < 1 minute. Today we are going to learn how to connect LangChain to our deployed Ollama server, with a small example: an AI Tool. We first need to have Ollama deployed: here is how. Once Ollama is deployed, you will obtain the endpoint, and can then use LangChain with Python to connect to it remotely. You … Read more
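The remote connection boils down to HTTP calls against the Ollama endpoint. This stdlib-only sketch builds a request for Ollama's /api/chat endpoint without sending it; the host URL and model name are placeholders you would replace with your own deployment's values.

```python
# Hedged sketch of talking to a remote Ollama server over its HTTP API.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # replace with your endpoint

def build_chat_request(model: str, user_message: str) -> urllib.request.Request:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": False,
    }
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("llama3.2", "Hello!")
print(json.loads(req.data)["model"])  # → llama3.2
# urllib.request.urlopen(req) would send it once the server is reachable.
```

LangChain's ChatOllama wraps exactly this kind of call, so pointing it at the same base URL gives the remote connection the post describes.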