Fine-Tuning
The process of further training a base LLM on a specific dataset to specialize its outputs for a domain, format, or behavior.
Full definition
Fine-tuning takes a pre-trained LLM and trains it further on your own dataset, typically 1,000+ input-output pairs. The model weights are updated, locking in domain-specific knowledge, output format, or tone. Fine-tuning is most useful when prompting cannot reliably enforce output structure, when prompt length is a cost or latency bottleneck, or when you need a domain dialect the base model lacks. The trade-off is operational. A fine-tuned model is harder to update, ties you to a specific base version, and requires retraining when the base model upgrades.
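Fine-tuning datasets are usually supplied as input-output pairs, commonly serialized one example per line as JSONL. A minimal sketch of building and validating such a file, assuming a generic `input`/`output` field layout (the field names and example records are illustrative, not any specific provider's schema):

```python
import json

# Illustrative input-output pairs; a real dataset would have 1,000+.
pairs = [
    {"input": "Summarize: Q3 revenue rose 12% on cloud growth.",
     "output": '{"summary": "Q3 revenue up 12%, driven by cloud."}'},
    {"input": "Summarize: Churn fell to 2.1% after the pricing change.",
     "output": '{"summary": "Churn down to 2.1% post-pricing change."}'},
]

def to_jsonl(pairs):
    """Serialize pairs to JSONL: one training example per line."""
    return "\n".join(json.dumps(p, ensure_ascii=False) for p in pairs)

def validate(jsonl_text):
    """Check every line parses as JSON with non-empty input/output fields."""
    for i, line in enumerate(jsonl_text.splitlines(), start=1):
        record = json.loads(line)
        for field in ("input", "output"):
            if not record.get(field, "").strip():
                raise ValueError(f"line {i}: missing or empty {field!r}")
    return True

jsonl_text = to_jsonl(pairs)
validate(jsonl_text)
```

Note that every output above shares one structure (a JSON object with a `summary` key); that consistency is what lets fine-tuning lock in an output format that prompting alone could not reliably enforce.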
Frequently asked
Is fine-tuning expensive?
Less than it used to be, but the operational overhead of maintaining a fine-tuned model often outweighs the per-token savings.
When should you fine-tune?
When you have 1,000+ examples and a specific behavior or output format that prompting cannot reliably enforce.