
How to Create Your Own Autonomous AI Agent with AutoGen Studio

Reading time: 2 minutes

Learn how to create an autonomous intelligent agent that can plan, reason, and execute tasks on its own, with no AI expertise required. Just follow these steps.


Autonomous agents are revolutionizing personal and professional productivity thanks to platforms like AutoGen Studio, which let you build assistants that plan, reason, and act on your behalf.

Requirements

AutoGen Studio

Python 3.10+ installed

OpenAI API key (or another LLM such as Claude, Gemini, or an open-source model)

Visual Studio Code https://code.visualstudio.com/
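You can confirm the Python requirement from the interpreter itself (a quick check, nothing AutoGen-specific):

```python
import sys

# The tutorial assumes Python 3.10 or newer.
meets_requirement = sys.version_info >= (3, 10)
print(f"Python {sys.version_info.major}.{sys.version_info.minor} - "
      f"{'OK' if meets_requirement else 'upgrade needed'}")
```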

Install AutoGen

pip install pyautogen

AutoGen is a Microsoft library for creating intelligent conversational agents. You can easily integrate it with LLMs like GPT-4 or LLaMA.

Create your first agent in Python

Save this file as agent.py:

from autogen import AssistantAgent, UserProxyAgent

# Define the LLM (default OpenAI GPT-4)
config = {
    "llm_config": {
        "model": "gpt-4",
        "api_key": "YOUR_OPENAI_API_KEY",
    }
}

# Create an assistant agent
asistente = AssistantAgent(name="Assistant", **config)

# Create a user proxy agent
usuario = UserProxyAgent(name="User", code_execution_config={"work_dir": "agent-workdir"})

# Start the conversation
usuario.initiate_chat(asistente, message="I want you to organize my week and make me a content plan.")
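Note that recent versions of pyautogen usually expect the model credentials inside a config_list; if the dictionary above is rejected by your installed version, this equivalent shape is worth trying (a sketch, not verified against any specific release):

```python
# Hedged alternative: the config_list form used by newer pyautogen releases.
config = {
    "llm_config": {
        "config_list": [
            {"model": "gpt-4", "api_key": "YOUR_OPENAI_API_KEY"},
        ],
    }
}
```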

Add external tools (plugins)

You can connect your agent to external tools and third-party services.

For example, to let the agent execute Python code dynamically:

code_execution_config={"use_docker": False}

This lets the agent write and execute code for you, like a virtual programming assistant.
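To make concrete what use_docker=False implies, here is a toy sketch (plain Python, not AutoGen internals) of model-generated code being executed directly in your own process, with no container isolation:

```python
import contextlib
import io

# Pretend this string came back from the LLM.
generated_code = "print(2 + 2)"

# Execute it locally and capture stdout, much as the agent's workdir runner would.
buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
    exec(generated_code)  # no Docker sandbox: runs with your user's permissions

result = buffer.getvalue().strip()
print(result)  # → 4
```

This is also why use_docker=False is best reserved for models and prompts you trust.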

Example of use:

Scenario: you are a content creator. Tell the agent: "Make me a posting calendar about artificial intelligence for this month, focused on LinkedIn and TikTok." The agent can generate post ideas, create viral titles, and schedule them through your Buffer or Zapier API.

Extra: Configure AutoGen to use Ollama

AutoGen needs a class that defines how to connect to the model, so we will wrap the Ollama model as a custom LLM.

Create a file called ollama_llm.py with this class:

import requests

class OllamaLLM:
    def __init__(self, model="llama3", base_url="http://localhost:11434"):
        self.model = model
        self.base_url = base_url

    def __call__(self, prompt, **kwargs):
        # "stream": False makes Ollama return a single JSON object
        # instead of its default stream of JSON lines.
        response = requests.post(
            f"{self.base_url}/api/generate",
            json={"model": self.model, "prompt": prompt, "stream": False},
        )
        return response.json()["response"]
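Because the wrapper is just a callable that takes a prompt and returns text, you can sanity-check the calling convention with a stub before pointing it at a live Ollama server (StubLLM is hypothetical and stands in for the HTTP round trip):

```python
class StubLLM:
    """Hypothetical stand-in for OllamaLLM: same interface, no server needed."""

    def __init__(self, model="llama3"):
        self.model = model

    def __call__(self, prompt, **kwargs):
        # A real OllamaLLM would POST the prompt to /api/generate here.
        return f"[{self.model}] echo: {prompt}"

llm = StubLLM()
print(llm("hello"))  # → [llama3] echo: hello
```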

Use your custom LLM in AutoGen

from autogen import AssistantAgent, UserProxyAgent
from ollama_llm import OllamaLLM

llm = OllamaLLM(model="llama3")

asistente = AssistantAgent(
    name="Assistant",
    llm_config={"completion_fn": llm},
)

usuario = UserProxyAgent(name="User")
usuario.initiate_chat(asistente, message="Tell me what AutoGen is in simple language.")

The agent now responds using Ollama locally, with no API keys or internet connection required. You can easily switch models (mistral, phi3, codellama, etc.).
