How to Create Your Own Autonomous AI Agent with AutoGen Studio

Reading time: 2 minutes

Learn how to create an autonomous intelligent agent that can plan, reason, and execute tasks on its own, without needing any AI expertise. Just follow these steps.


Autonomous agents are revolutionizing personal and professional productivity thanks to platforms like AutoGen Studio, which lets you build assistants that plan, reason, and execute tasks on their own.

Requirements

AutoGen Studio

Python 3.10+ installed

An OpenAI API key (or another LLM such as Claude, Gemini, or an open-source model)

Visual Studio Code https://code.visualstudio.com/

Install AutoGen

pip install pyautogen
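
Optionally, if you also want the AutoGen Studio web interface listed in the requirements, it is distributed as a separate package (a sketch, assuming the autogenstudio package and its ui command):

pip install autogenstudio
autogenstudio ui --port 8081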

AutoGen is a Microsoft library for creating intelligent conversational agents. You can easily integrate it with LLMs like GPT-4 or LLaMA.

Create your first agent in Python

Save this file as agent.py:

from autogen import AssistantAgent, UserProxyAgent

# Define the LLM (default OpenAI GPT-4)
config = {
    "llm_config": {
        "model": "gpt-4",
        "api_key": "YOUR_OPENAI_API_KEY",
    }
}

# Create an assistant agent
asistente = AssistantAgent(name="Assistant", **config)

# Create a user proxy agent
usuario = UserProxyAgent(
    name="User",
    code_execution_config={"work_dir": "agent-workdir"},
)

# Start the conversation
usuario.initiate_chat(
    asistente,
    message="I want you to organize my week and make me a content plan.",
)
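
Run it with python agent.py. As a small sketch (assuming your key is exported as the standard OPENAI_API_KEY environment variable), you can avoid hardcoding the key in the file:

import os

# Read the key from the environment instead of hardcoding it
config = {
    "llm_config": {
        "model": "gpt-4",
        "api_key": os.environ["OPENAI_API_KEY"],
    }
}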

Add external tools (plugins)

You can connect your agent to external tools, such as a local Python code executor or third-party APIs like Buffer or Zapier (see the scenario below).

For example, to let the agent execute Python dynamically:

code_execution_config={"use_docker": False}

This allows the agent to write and execute code for you, as if it were a virtual programming assistant. A minimal sketch of the full proxy-agent setup follows (human_input_mode="NEVER" is an assumption so it can run unattended).
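
from autogen import UserProxyAgent

usuario = UserProxyAgent(
    name="User",
    human_input_mode="NEVER",  # don't pause to ask for human input
    code_execution_config={
        "work_dir": "agent-workdir",  # generated scripts are saved and run here
        "use_docker": False,          # execute directly on your machine instead of in Docker
    },
)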

Example of use:

Scenario: you are a content creator. Tell the agent: "Make me a calendar of posts about artificial intelligence for this month, focused on LinkedIn and TikTok." The agent can generate post ideas, create viral titles, and schedule them using your Buffer or Zapier API.
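
To actually let the agent schedule posts, you can register a tool function with AutoGen. Below is a hedged sketch: schedule_post is a hypothetical helper (Buffer's real API is not shown), and it assumes the asistente and usuario agents from the earlier script:

from typing import Annotated
import autogen

def schedule_post(
    text: Annotated[str, "The post content"],
    when: Annotated[str, "Publish time in ISO 8601 format"],
) -> str:
    # Hypothetical placeholder: call your Buffer or Zapier integration here
    return f"Post scheduled for {when}"

# The assistant proposes the call; the user proxy executes it
autogen.register_function(
    schedule_post,
    caller=asistente,
    executor=usuario,
    description="Schedule a social media post",
)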

Extra: Configure AutoGen to use Ollama

To use an Ollama model as a custom LLM, AutoGen needs a class that defines how to connect to the model.

Create a file called ollama_llm.py with this class:

import requests

class OllamaLLM:
    def __init__(self, model="llama3", base_url="http://localhost:11434"):
        self.model = model
        self.base_url = base_url

    def __call__(self, prompt, **kwargs):
        # stream=False makes Ollama return a single JSON object instead of a stream
        response = requests.post(
            f"{self.base_url}/api/generate",
            json={"model": self.model, "prompt": prompt, "stream": False},
        )
        return response.json()["response"]
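
A quick sanity check (assuming ollama serve is running and you have pulled the model with ollama pull llama3):

llm = OllamaLLM(model="llama3")
print(llm("Say hello in one short sentence."))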

Use your custom LLM in AutoGen

from autogen import AssistantAgent, UserProxyAgent
from ollama_llm import OllamaLLM

llm = OllamaLLM(model="llama3")

asistente = AssistantAgent(
    name="Assistant",
    llm_config={"completion_fn": llm},
)

usuario = UserProxyAgent(name="User")
usuario.initiate_chat(asistente, message="Tell me what AutoGen is in simple language.")

Now the agent responds using Ollama locally, with no API keys or internet connection required. You can easily switch models (mistral, phi3, codellama, etc.).
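
One caveat: depending on your pyautogen version, llm_config may not accept a completion_fn key, so treat the snippet above as a sketch of the idea. A commonly documented alternative is to point AutoGen at Ollama's OpenAI-compatible endpoint instead (the api_key value is a placeholder; the local server ignores it):

from autogen import AssistantAgent

asistente = AssistantAgent(
    name="Assistant",
    llm_config={
        "config_list": [
            {
                "model": "llama3",
                "base_url": "http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
                "api_key": "ollama",  # placeholder; not checked by the local server
            }
        ]
    },
)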
