Today we will identify which Ollama Cloud models do not return the error: {"error":"this model requires a subscription, upgrade for access: https://ollama.com/upgrade (ref: 374c814e-dc9c-4fc1-a898-0bbc57cb0d0a)"}
To determine this, we can run the following script, which tells us which models are usable on the cloud free tier:
cat <<"EOF" > /tmp/test_models.sh
#!/bin/bash
# Token for https://ollama.com — replace with your own.
TOKEN="YOUR_OLLAMA_TOKEN"
MODELS=(
  "qwen3-next:80b" "deepseek-v4-flash"
  "ministral-3:14b" "ministral-3:8b" "ministral-3:3b"
  "gemma3:4b" "gemma3:12b" "gemma3:27b" "gemma4:31b"
  "glm-4.6" "glm-4.7" "glm-5" "glm-5.1"
  "kimi-k2.5" "kimi-k2.6"
  "minimax-m2" "minimax-m2.1" "minimax-m2.5" "minimax-m2.7"
  "deepseek-v3.1:671b" "deepseek-v3.2" "deepseek-v4-pro"
  "qwen3.5:397b" "qwen3-coder:480b"
  "nemotron-3-nano:30b"
  "gpt-oss:20b" "gpt-oss:120b"
  "rnj-1:8b" "cogito-2.1:671b"
)
for MODEL in "${MODELS[@]}"; do
  # Send a minimal chat request to each model's :cloud variant.
  RESULT=$(curl -s https://ollama.com/v1/chat/completions \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $TOKEN" \
    -d "{\"model\": \"${MODEL}:cloud\", \"messages\": [{\"role\": \"user\", \"content\": \"hi\"}], \"max_tokens\": 5}" \
    --max-time 15)
  # Any response containing "error" means the model is not on the free tier.
  if echo "$RESULT" | grep -q "error"; then
    echo "❌ ${MODEL}:cloud"
  else
    echo "✅ ${MODEL}:cloud"
  fi
done
EOF
bash /tmp/test_models.sh
In my test, it has been:
✅ qwen3-next:80b:cloud
❌ deepseek-v4-flash:cloud
✅ ministral-3:14b:cloud
✅ ministral-3:8b:cloud
✅ ministral-3:3b:cloud
✅ gemma3:4b:cloud
✅ gemma3:12b:cloud
✅ gemma3:27b:cloud
✅ gemma4:31b:cloud
✅ glm-4.6:cloud
✅ glm-4.7:cloud
❌ glm-5:cloud
❌ glm-5.1:cloud
❌ kimi-k2.5:cloud
❌ kimi-k2.6:cloud
✅ minimax-m2:cloud
✅ minimax-m2.1:cloud
✅ minimax-m2.5:cloud
❌ minimax-m2.7:cloud
✅ deepseek-v3.1:671b:cloud
✅ deepseek-v3.2:cloud
❌ deepseek-v4-pro:cloud
✅ qwen3.5:397b:cloud
✅ qwen3-coder:480b:cloud
✅ nemotron-3-nano:30b:cloud
✅ gpt-oss:20b:cloud
✅ gpt-oss:120b:cloud
✅ rnj-1:8b:cloud
✅ cogito-2.1:671b:cloud
So the models available on the free tier are: qwen3-next:80b, ministral-3:14b, ministral-3:8b, ministral-3:3b, gemma3:4b, gemma3:12b, gemma3:27b, gemma4:31b, glm-4.6, glm-4.7, minimax-m2, minimax-m2.1, minimax-m2.5, deepseek-v3.1:671b, deepseek-v3.2, qwen3.5:397b, qwen3-coder:480b, nemotron-3-nano:30b, gpt-oss:20b, gpt-oss:120b, rnj-1:8b, cogito-2.1:671b
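If you want to feed these results into other tooling, the tester's ✅/❌ output can be filtered down to a plain list of usable model names. A minimal sketch, assuming the exact "✅ model:cloud" / "❌ model:cloud" line format printed by the script above (the sample RESULTS variable here stands in for a real run):

```shell
#!/bin/bash
# Sample output as produced by /tmp/test_models.sh (hypothetical subset).
RESULTS='✅ qwen3-next:80b:cloud
❌ deepseek-v4-flash:cloud
✅ gpt-oss:120b:cloud'

# Keep only ✅ lines, then strip the marker and the :cloud suffix.
AVAILABLE=$(printf '%s\n' "$RESULTS" | grep '^✅' | sed 's/^✅ //; s/:cloud$//')
echo "$AVAILABLE"
```

In a real pipeline you would replace the RESULTS variable with `bash /tmp/test_models.sh` command substitution and keep the same grep/sed filter.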
