Do you want to teach your AI model super specific knowledge? Do you need it to respond like an expert in a specific niche? Then fine-tuning is your best friend… and it’s easier than you think!

You will learn how to train Llama 3.2 with your own dataset using Google Colab and Unsloth, and produce a model you can use in Ollama for your apps, projects or APIs.
Perfect if you want to build an AI for your store, game, business, target audience… whatever you need.
Preparing your Dataset
The dataset must be in JSONL format, where each line has this structure:
{"instruction":"User Question", "input":"", "output":"Correct Answer"}
Example:
{"instruction":"What is QuieroLibros?", "input":"", "output":"QuieroLibros is a platform where users buy and sell used books."}
Tip: use at least 100-300 examples to see a noticeable change.
The more consistent the style, the better your AI will be.
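If you prefer to build the JSONL file programmatically instead of writing it by hand, a minimal sketch looks like this (the `examples` list and the second Q&A pair are illustrative placeholders, not real data):

```python
import json

# Hypothetical Q&A pairs; replace with your own examples
examples = [
    {
        "instruction": "What is QuieroLibros?",
        "input": "",
        "output": "QuieroLibros is a platform where users buy and sell used books.",
    },
    {
        "instruction": "How do I sell a book on QuieroLibros?",
        "input": "",
        "output": "Create a listing with the book's title, condition and price.",
    },
]

# JSONL = one JSON object per line
with open("dataset.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```

Reading the file back line by line with `json.loads` is an easy way to validate it before training.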
Setting Up the Environment in Google Colab
Install Unsloth:
!pip install unsloth
Load the base model:
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.2-1b",
)
Load the dataset:
from datasets import load_dataset

dataset = load_dataset("json", data_files="dataset.jsonl")
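The trainer below consumes a single text field per example, so each instruction/input/output row usually has to be folded into one prompt string first. A minimal sketch, assuming a simple Alpaca-style template (the template wording and the `text` field name are a common convention, not something fixed by Unsloth):

```python
def format_example(ex):
    # Fold instruction/input/output into one training string
    prompt = f"### Instruction:\n{ex['instruction']}\n"
    if ex["input"]:
        prompt += f"### Input:\n{ex['input']}\n"
    prompt += f"### Response:\n{ex['output']}"
    return {"text": prompt}
```

With the Hugging Face `datasets` object above, you would apply it with `dataset = dataset.map(format_example)` and point the trainer at the `text` column (in recent `trl` versions, via `dataset_text_field="text"`).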
Training (Fine-Tuning)
Configure LoRA (faster and cheaper):
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
)
Train:
from trl import SFTTrainer
from transformers import TrainingArguments

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset["train"],
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=1,
        learning_rate=2e-5,
        num_train_epochs=3,
        output_dir="outputs",
    ),
)
trainer.train()
Note: SFTTrainer comes from the trl library, not from unsloth.
Export the Model for Ollama
Ollama loads models in GGUF format, so export with Unsloth's GGUF helper:
model.save_pretrained_gguf("my_lora_model", tokenizer)
Download the folder my_lora_model.
Create a file Modelfile:
FROM ./my_lora_model
TEMPLATE """{{ .Prompt }}"""
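A Modelfile can also set a system message and generation parameters. A slightly fuller sketch (the SYSTEM text and temperature value here are illustrative, not required):

```
FROM ./my_lora_model
TEMPLATE """{{ .Prompt }}"""
SYSTEM """You are the QuieroLibros assistant. Answer questions about the platform."""
PARAMETER temperature 0.7
```

`SYSTEM` and `PARAMETER` are standard Modelfile directives; tune the values to your use case.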
And finally:
ollama create my-model -f Modelfile
ollama run my-model
Ready! You now have your customized model with your knowledge.
How to Use It Well
Ask it specific questions about the dataset to check whether it has really learned.
Example:
“Explain what is QuieroLibros and how it works.”
If it answers in your style, total success!
Tips for leveling up
Add diverse but consistent datasets.
Use chain-of-thought if you want longer reasoning.
Update your dataset based on what you observe in your tests.
To train larger models, scale up gradually rather than jumping straight to the biggest size.
Training your own model is no longer the domain of large corporations. With Llama 3.2 + Unsloth + Google Colab, you have power, ease and full control.
This process lets you create a model centered on your project and perfectly aligned with your purpose. And that, friend, is pure gold today.
