
Knowledge Distillation (KD) transfers the knowledge of a large pretrained teacher model into a smaller student model, but the training curriculum—i.e., the schedule and weighting of the distillation signal—remains an open design question. We introduce PLS-KD (Progressive Learning Scheduling for Knowledge Distillation), a family of curriculum-based distillation strategies that modulate how soft targets from a 128-hour teacher are presented to a 32-hour student over 200 epochs.
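
To make the idea of a distillation curriculum concrete, the sketch below shows one way such a schedule could be realized in PyTorch. The linear ramp for the mixing weight `alpha`, the temperature of 2.0, and the function name `pls_kd_loss` are illustrative assumptions for this sketch; they are not the paper's actual PLS-KD schedules, which are a design space the abstract leaves open.

```python
import torch
import torch.nn.functional as F

def pls_kd_loss(student_logits, teacher_logits, labels,
                epoch, total_epochs=200, temperature=2.0):
    """Hypothetical progressive distillation loss.

    The KD weight alpha ramps linearly from 0 to 1 over training, so
    early epochs emphasize the hard-label loss and later epochs lean
    on the teacher's soft targets. The linear ramp and the temperature
    value are assumptions for illustration, not the paper's schedule.
    """
    alpha = min(epoch / total_epochs, 1.0)  # assumed linear curriculum

    # Hard-label cross-entropy against the ground-truth targets.
    ce = F.cross_entropy(student_logits, labels)

    # Soft-target KL divergence at temperature T, scaled by T^2
    # (the standard Hinton-style correction for softened gradients).
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    return (1.0 - alpha) * ce + alpha * kd
```

Any monotone schedule (step, cosine, or exponential ramps) could be substituted for the linear `alpha` here; the curriculum question the abstract poses is precisely which of these weightings serves the student best.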
