Tutorial: Building an Agent with LangGraph Without an LLM Connection

Reading time: 3 minutes. We will build a LangGraph agent with Python and no LLM as an example for a basic tutorial. Our agent will do the following: receive a user's question, determine whether it needs to search for information (simulated), respond with or without searching, and save state between nodes. First, we will install the library: pip install … Read more
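
As a taste of the flow described above, a minimal sketch could look like this (assuming langgraph is installed; the node names, state fields, and keyword-based decision are illustrative, not the article's exact code):

from typing import TypedDict
from langgraph.graph import StateGraph, END

class AgentState(TypedDict):
    # State shared between nodes (field names are illustrative)
    question: str
    needs_search: bool
    answer: str

def decide(state: AgentState) -> AgentState:
    # Decide, without any LLM, whether the question needs a (simulated) search
    state["needs_search"] = "search" in state["question"].lower()
    return state

def fake_search(state: AgentState) -> AgentState:
    # Simulated search step
    state["answer"] = f"Simulated search result for: {state['question']}"
    return state

def respond(state: AgentState) -> AgentState:
    # Answer directly if no search result was stored
    if not state.get("answer"):
        state["answer"] = f"Direct answer to: {state['question']}"
    return state

graph = StateGraph(AgentState)
graph.add_node("decide", decide)
graph.add_node("search", fake_search)
graph.add_node("respond", respond)
graph.set_entry_point("decide")
graph.add_conditional_edges("decide", lambda s: "search" if s["needs_search"] else "respond")
graph.add_edge("search", "respond")
graph.add_edge("respond", END)

app = graph.compile()
print(app.invoke({"question": "search the weather in Madrid", "needs_search": False, "answer": ""}))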

Current Trends in Mobile Application Development and How to Apply Them Step by Step

Reading time: 2 minutes. Mobile development is going through one of its most interesting phases. The arrival of lightweight models such as Llama 3.2, more mature frameworks, and the boom of AI-driven apps are redefining how we build software. In this tutorial, we will review the most relevant trends and teach you how to apply them in your own … Read more

Language Models: General or Instructional – A Key Comparison

Reading time: < 1 minute. We need to understand that not all models are created equal. There is a key difference between general models and instruct models. General models. Example: Question: "Explain how a neural network works". Response from a general model: it may give an extensive explanation, include unnecessary concepts, or jump between topics. Instruct models. Example: Question: "Explain how a neural network works step by … Read more

How to enable GPU access from a Docker container, for example to run an LLM model.

Reading time: 2 minutes. Enabling GPU access is essential if we need to run an LLM model and use the GPU's VRAM. Prerequisites: make sure you have an NVIDIA GPU installed on your machine with up-to-date drivers. Check with: nvidia-smi. You should see your GPU and driver version. Additionally, Docker must be installed. NVIDIA Container … Read more
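
Once the container is running with GPU access (for example, started with docker run --gpus all), a quick way to confirm the GPU is visible from inside it is a small check like this (a minimal sketch, assuming an image that ships PyTorch; not part of the original article):

# Quick check that the GPU is visible from inside the container
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
else:
    print("No GPU visible: check the NVIDIA Container Toolkit and the --gpus flag")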

Using vLLM with Docker to Deploy Our LLM Models in Production

Reading time: 2 minutes. vLLM is an optimized inference server (it uses PagedAttention) that supports models like Llama 3, Mistral, Gemma, Phi, Qwen, etc. It offers an OpenAI-compatible API, perfect for easy integration. We will create the Docker Compose file that lets us deploy it: File: docker-compose.yml version: "3.9" services: vllm: image: vllm/vllm-openai:latest container_name: vllm restart: unless-stopped ports: - "8000:8000" … Read more
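
Because vLLM exposes an OpenAI-compatible API, a Python client can be as simple as the sketch below (assuming the Compose service above is listening on localhost:8000; the model name is a placeholder for whatever model you actually serve):

# Minimal client for the OpenAI-compatible endpoint served by vLLM
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # placeholder: use the model you deployed
    messages=[{"role": "user", "content": "Summarize what vLLM is in one sentence."}],
)
print(response.choices[0].message.content)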

Applying Artificial Intelligence to Development: How to Harness It Without Getting Complicated

Reading time: < 1 minute. Artificial intelligence (AI) has stopped being the stuff of science fiction and has become a common tool in software development. AI offers unique opportunities to increase productivity, improve software quality, and shorten development time. However, many developers feel that integrating it can be complex or overwhelming. In this article, we explore how to use … Read more

Types of AI Agents available with LangChain

Reading time: < 1 minute. Today we are going to see what types of AI agents are available with LangChain. ➡️ The most used. What it does: the LLM decides which tool to use at each step based only on the textual description of the tools. It needs no examples or prior instances. For: Typical usage example: agent = initialize_agent( tools=mis_tools, llm=llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, … Read more
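
As a rough completion of the truncated snippet above (assuming llm is a chat model you have already configured, here ChatOllama pointing at a local Ollama server, and with a hypothetical tool standing in for mis_tools):

from langchain.agents import initialize_agent, AgentType
from langchain_community.chat_models import ChatOllama
from langchain_core.tools import Tool

def get_length(text: str) -> str:
    # Hypothetical tool: returns the number of characters in the given text
    return str(len(text))

mis_tools = [
    Tool(
        name="text_length",
        func=get_length,
        description="Returns the number of characters in a text.",
    )
]

llm = ChatOllama(model="llama3.2")  # assumes a local Ollama server

agent = initialize_agent(
    tools=mis_tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
print(agent.run("How many characters does the word 'LangChain' have?"))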

Adding DuckDuckGo Internet Searches to Llama 3.2 8b Using Ollama and LangChain for AI Agents

Reading time: < 1 minute. Today we will give our AI agent a tool that allows it to perform internet searches. First, we will install the necessary libraries: pip install langchain langchain_community pip install ddgs Now we initialize the connection with the Ollama server (https://devcodelight.com/ollama-con-llama-3-2-in-docker/): from langchain_community.chat_models import ChatOllama from langchain_core.prompts import ChatPromptTemplate # … other imports … # ⚠️ YOUR REMOTE … Read more
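
For reference, the DuckDuckGo tool can also be tried on its own before wiring it into the agent's tools list (a minimal sketch; the query is arbitrary):

from langchain_community.tools import DuckDuckGoSearchRun

search = DuckDuckGoSearchRun()
# Runs a live DuckDuckGo search and returns the result snippets as text
print(search.invoke("latest stable Python version"))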

Using Tools in LangChain 1.0 to Create AI Agents

Reading time: 3 minutes. Today we are going to learn how to use Tools in LangChain to create AI agents. To follow this tutorial properly, I recommend first visiting the one on how to install LangChain. First: what are Tools in LangChain? In the context of a LangChain agent, a Tool is any external function or resource that the Large … Read more
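
As a preview of what the article covers, any Python function can be exposed as a Tool with the @tool decorator (a minimal sketch; the multiply function is a hypothetical example, not taken from the article):

from langchain_core.tools import tool

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result."""
    return a * b

# The docstring becomes the description the LLM uses to decide when to call the tool
print(multiply.name, "-", multiply.description)
print(multiply.invoke({"a": 6, "b": 7}))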

What is a Chain in LangChain?

Reading time: < 1 minute. What is a "Chain" in LangChain? It is a specific instruction: llm_chain = prompt | llm | output_parser This is a perfect, modern example of what a Chain means in the LangChain architecture (specifically using the LangChain Expression Language, or LCEL). A Chain in LangChain is the structured sequence of steps or … Read more
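
Filled out, that one-liner could look like the sketch below (assuming a local Ollama chat model; the prompt text and model name are illustrative, and any chat model can be swapped in):

from langchain_community.chat_models import ChatOllama
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_template("Explain {topic} in one short paragraph.")
llm = ChatOllama(model="llama3.2")
output_parser = StrOutputParser()

# Each step's output feeds the next: prompt -> model -> plain-string parser
llm_chain = prompt | llm | output_parser
print(llm_chain.invoke({"topic": "what a Chain is in LangChain"}))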