Definition: The process of further training a pre-trained LLM on a specialized dataset to alter its behavior or improve performance on specific tasks. A high-effort approach for large-scale, specialized needs.
Fine-tuning involves taking a pre-trained Large Language Model and continuing to train it on a specialized dataset to modify its behavior or improve its performance on specific tasks.
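To make "continuing to train on a specialized dataset" concrete, here is a minimal toy sketch in plain Python: a "pre-trained" one-parameter linear model is further trained by gradient descent on a small specialized dataset, shifting its behavior toward the new task. All names and numbers are hypothetical stand-ins; real fine-tuning operates the same way on billions of weights.

```python
# Toy illustration of fine-tuning: start from a "pre-trained" weight and
# continue gradient-descent training on a specialized dataset.
# Hypothetical example; not a real LLM training loop.

def mse(w, data):
    """Mean squared error of the model y = w * x over (x, y) pairs."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def fine_tune(w, data, lr=0.01, steps=100):
    """Continue training an existing weight w on a specialized dataset."""
    for _ in range(steps):
        # Gradient of MSE with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# "Pre-training" produced w = 2.0 (the model maps x -> 2x).
pretrained_w = 2.0

# Specialized task: the desired mapping is x -> 3x.
specialized_data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

before = mse(pretrained_w, specialized_data)
tuned_w = fine_tune(pretrained_w, specialized_data)
after = mse(tuned_w, specialized_data)
print(before, after)  # loss drops as the weight moves toward 3.0
```

The key point the sketch captures: fine-tuning changes the model's weights themselves, which is why it alters behavior globally rather than just informing a single response.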
| Aspect | Fine-Tuning | RAG |
|---|---|---|
| What changes | The model's weights (behavior) | The context provided to the model |
| Effort level | High (requires large datasets, compute) | Medium (requires knowledge base setup) |
| Use case | Altering fundamental model behavior | Grounding responses in specific facts |
| Cost | Higher | Lower |
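The RAG column above ("the context provided to the model") can be sketched in a few lines of plain Python: retrieve the most relevant document from a small knowledge base, then prepend it to the prompt. The knowledge base and the word-overlap scoring are hypothetical stand-ins; production RAG systems use embedding-based vector search instead.

```python
# Minimal sketch of the RAG pattern: retrieve relevant text, then ground
# the model's input in it. Word-overlap scoring is a toy stand-in for
# embedding-based retrieval.

knowledge_base = [
    "Fine-tuning updates a model's weights using a specialized dataset.",
    "RAG retrieves relevant documents and adds them to the model's context.",
    "Prompt engineering shapes model output without any retraining.",
]

def retrieve(query, docs):
    """Return the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(docs, key=lambda d: len(query_words & set(d.lower().split())))

def build_prompt(query, docs):
    """Ground the model's input in retrieved context (the RAG step)."""
    context = retrieve(query, docs)
    return f"Context: {context}\n\nQuestion: {query}"

prompt = build_prompt("How does RAG retrieve documents for context", knowledge_base)
print(prompt)
```

Note that nothing about the model changes here; only its input does, which is why RAG is the lower-effort option in the table above.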
Fine-tuning makes sense when:

- You need to alter the model's fundamental behavior, not just supply it with facts
- You have a large, high-quality specialized dataset and the compute budget to train on it
- Outputs must consistently follow specific frameworks and terminology at scale
For most research applications, fine-tuning is overkill. Better approaches include:

- Retrieval-Augmented Generation (RAG), which grounds responses in a specific knowledge base without changing the model's weights
- Supplying the relevant facts directly in the model's context through prompting
Fine-tuning is a high-investment approach best suited for organizations with specific, large-scale needs—such as building a centralized research repository that needs to consistently apply organizational frameworks and terminology across all outputs.
Large Language Model (LLM): An AI system trained on vast amounts of text to predict and generate human-like language. Best understood as a concept-transformation engine rather than a knowledge database.
Retrieval-Augmented Generation (RAG): A technique that enhances LLM responses by first retrieving relevant information from a specific knowledge base, then using that information to ground the model's output.