What is Model Distillation?
Model distillation is a training technique in which a smaller or cheaper student model learns to approximate the behavior of a larger teacher model by matching the teacher's outputs, internal representations, or probability distributions over tokens or classes.
In distillation, the teacher model produces targets that are richer than hard labels. For classification, the teacher provides a soft probability distribution over classes, which contains information about class similarities. For language models, the teacher can provide next-token distributions, sampled outputs, or preference signals that the student learns to imitate. The student is trained with a loss that combines the original supervised objective, if available, with a distillation objective such as KL divergence between student and teacher distributions at a chosen temperature. Distillation can also be applied to intermediate layers, attention maps, or hidden states to transfer representational structure. In generative AI, distillation is used to reduce inference latency and cost, enable on-device deployment, and produce specialized models that behave like an expensive frontier model for a narrow set of tasks.
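The combined objective described above can be sketched in a few lines. This is a minimal, framework-free illustration of the classic formulation (temperature-scaled soft targets plus a hard-label cross-entropy term); the function names, the `alpha` weighting, and the toy logits are illustrative, not a production recipe.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def distillation_loss(student_logits, teacher_logits, true_label,
                      alpha=0.5, temperature=2.0):
    """Weighted sum of hard-label cross-entropy and a soft KL term.

    The temperature**2 factor keeps gradient magnitudes comparable
    across temperatures, following the standard formulation.
    """
    # Soft targets: teacher and student distributions at temperature T.
    p_teacher = softmax(teacher_logits, temperature)
    p_student_t = softmax(student_logits, temperature)
    soft_loss = kl_divergence(p_teacher, p_student_t) * temperature ** 2

    # Hard targets: ordinary cross-entropy at temperature 1.
    p_student = softmax(student_logits)
    hard_loss = -math.log(p_student[true_label] + 1e-12)

    return alpha * hard_loss + (1 - alpha) * soft_loss
```

When the student's logits exactly match the teacher's, the KL term vanishes and only the supervised term remains, which is one way to sanity-check an implementation.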
Where it is used and why it matters
Distillation is common in AI infrastructure and MLOps because it turns a high-quality but costly model into a production-friendly one. Teams distill for higher throughput, lower memory footprint, and simpler serving. It also matters for governance: the student can inherit biases, safety gaps, or memorized copyrighted content present in the teacher. Robust evaluation is necessary to ensure the student maintains quality and safety while meeting performance targets.
Types
- Logit distillation: the student matches the teacher's output probability distribution (soft labels), typically at an elevated temperature.
- Sequence distillation: the student trains on teacher-generated sequences used as supervised targets.
- Feature distillation: the student matches the teacher's internal representations at selected layers.
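The three variants above differ mainly in what signal the student imitates. The toy sketch below contrasts them; the function names are illustrative, the teacher is a stand-in lambda, and real implementations would operate on tensors from actual models.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

# 1. Logit distillation: cross-entropy of student probabilities
#    against the teacher's soft targets.
def logit_distillation_loss(student_logits, teacher_logits):
    p_teacher = softmax(teacher_logits)
    p_student = softmax(student_logits)
    return -sum(t * math.log(s + 1e-12) for t, s in zip(p_teacher, p_student))

# 2. Sequence distillation: teacher generations become the
#    supervised training targets for the student.
def build_sequence_dataset(prompts, teacher_generate):
    return [(prompt, teacher_generate(prompt)) for prompt in prompts]

# 3. Feature distillation: mean squared error between hidden
#    states at matched layers (assumes equal dimensions here;
#    in practice a learned projection aligns mismatched sizes).
def feature_distillation_loss(student_hidden, teacher_hidden):
    return sum((s - t) ** 2
               for s, t in zip(student_hidden, teacher_hidden)) / len(student_hidden)
```

In practice these are often mixed, e.g. sequence-level targets plus a logit-matching term at each position.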
FAQs
- Is distillation the same as fine-tuning?
  No. Fine-tuning adapts a model to new data or behaviors. Distillation transfers behavior from a teacher model to a student, typically without changing the teacher.
- Do you need labeled data to distill?
  Not always. You can distill using teacher-generated labels on unlabeled inputs, although some tasks benefit from ground-truth labels.
- How do you measure distillation success?
  Compare task metrics, calibration, latency, cost per token, and safety evaluations between student and teacher.
- Does distillation reduce hallucinations?
  It can if the teacher is better grounded and the training data is curated, but the student can also inherit the teacher's hallucination patterns if not evaluated carefully.
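Two of the simplest student-versus-teacher comparisons are prediction agreement and metric retention. A minimal sketch, assuming you already have per-example predictions and an aggregate task metric for each model (the helper names and numbers are illustrative):

```python
def top1_agreement(student_preds, teacher_preds):
    """Fraction of inputs where the student's prediction matches the teacher's."""
    assert len(student_preds) == len(teacher_preds)
    matches = sum(s == t for s, t in zip(student_preds, teacher_preds))
    return matches / len(student_preds)

def quality_retention(student_metric, teacher_metric):
    """Share of the teacher's task metric (e.g. accuracy) the student retains."""
    return student_metric / teacher_metric

# Toy example: the student disagrees with the teacher on 1 of 4 inputs
# and keeps most of the teacher's accuracy.
agreement = top1_agreement([0, 1, 2, 1], [0, 1, 2, 2])   # 0.75
retention = quality_retention(0.88, 0.92)                 # ~0.957
```

Agreement alone can be misleading (a student can agree on easy cases and fail on hard ones), so it is best read alongside calibration, latency, cost, and safety evaluations as the FAQ above suggests.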