

PHI-S: Distribution Balancing for Label-Free Multi-Teacher Distillation

October 2, 2024
作者: Mike Ranzinger, Jon Barker, Greg Heinrich, Pavlo Molchanov, Bryan Catanzaro, Andrew Tao
cs.AI

Abstract

Various visual foundation models have distinct strengths and weaknesses, both of which can be improved through heterogeneous multi-teacher knowledge distillation without labels, termed "agglomerative models." We build upon this body of work by studying the effect of the teachers' activation statistics, particularly the impact of the loss function on the resulting student model quality. We explore a standard toolkit of statistical normalization techniques to better align the different distributions and assess their effects. Further, we examine the impact on downstream teacher-matching metrics, which motivates the use of Hadamard matrices. With these matrices, we demonstrate useful properties, showing how they can be used for isotropic standardization, where each dimension of a multivariate distribution is standardized using the same scale. We call this technique "PHI Standardization" (PHI-S) and empirically demonstrate that it produces the best student model across the suite of methods studied.
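The isotropic standardization the abstract describes can be sketched in a few lines: rotate centered features into the PCA eigenbasis, apply a normalized Hadamard rotation so the total variance is spread evenly across dimensions, then divide by a single shared scalar. This is an illustrative reconstruction from the abstract alone, not the authors' reference implementation; the function name `phi_s` and the power-of-two dimension restriction (from the Sylvester Hadamard construction) are assumptions.

```python
import numpy as np

def hadamard(d):
    # Sylvester construction of a Hadamard matrix; d must be a power of two.
    H = np.array([[1.0]])
    while H.shape[0] < d:
        H = np.block([[H, H], [H, -H]])
    return H

def phi_s(X):
    """Sketch of PHI Standardization for an (n, d) feature matrix X.

    Because every entry of the orthonormal Hadamard matrix has the same
    magnitude, the rotated distribution has identical variance in every
    dimension, so one scalar suffices to standardize all of them.
    """
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = np.cov(Xc, rowvar=False)
    eigvals, U = np.linalg.eigh(cov)       # PCA rotation (diagonalizes cov)
    d = X.shape[1]
    Hn = hadamard(d) / np.sqrt(d)          # orthonormal Hadamard rotation
    R = Hn @ U.T                           # combined orthogonal transform
    sigma = np.sqrt(eigvals.mean())        # single shared scale
    return (Xc @ R.T) / sigma
```

Note that, unlike full whitening, this leaves cross-dimension correlations intact: only the rotation and one global scale are applied, so every dimension ends up with unit variance without distorting the distribution's shape.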

