Multiresolution Knowledge Distillation for Anomaly Detection

Title: Multiresolution Knowledge Distillation for Anomaly Detection
Publication Type: Conference Paper
Year of Publication: 2021
Authors: Salehi, M., N. Sadjadi, S. Baselizadeh, M. H. Rohban, and H. R. Rabiee
Conference Name: Conference on Computer Vision and Pattern Recognition (CVPR)
Date Published: 06/2021
Conference Location: Virtual
Abstract: Unsupervised representation learning has proved to be a critical component of anomaly detection/localization in images. The challenges to learn such a representation are two-fold. Firstly, the sample size is not often large enough to learn a rich generalizable representation through conventional techniques. Secondly, while only normal samples are available at training, the learned features should be discriminative of normal and anomalous samples. Here, we propose to use the “distillation” of features at various layers of an expert network, pre-trained on ImageNet, into a simpler cloner network to tackle both issues. We detect and localize anomalies using the discrepancy between the expert and cloner networks’ intermediate activation values given the input data. We show that considering multiple intermediate hints in distillation leads to better exploiting the expert’s knowledge and a more distinctive discrepancy compared to solely utilizing the last layer activation values. Notably, previous methods either fail in precise anomaly localization or need expensive region-based training. In contrast, with no need for any special or intensive training procedure, we incorporate interpretability algorithms in our novel framework for localization of anomalous regions. Despite the striking contrast between some test datasets and ImageNet, we achieve competitive or significantly superior results compared to the SOTA methods on MNIST, F-MNIST, CIFAR-10, MVTecAD, Retinal-OCT, and two Medical datasets on both anomaly detection and localization.
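The core scoring idea in the abstract, measuring the discrepancy between expert and cloner activations across several intermediate layers rather than only the last one, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy NumPy arrays stand in for intermediate feature maps, and the per-layer combination of a Euclidean term with a directional (cosine) term is one plausible choice of discrepancy, not necessarily the exact loss used in the paper.

```python
import numpy as np

def layer_discrepancy(expert_act, cloner_act):
    """Discrepancy between one pair of intermediate activations.

    Combines a squared-Euclidean term with a directional (1 - cosine
    similarity) term over the flattened feature maps. Illustrative only.
    """
    e = expert_act.ravel()
    c = cloner_act.ravel()
    euclidean = np.sum((e - c) ** 2)
    cosine = 1.0 - np.dot(e, c) / (np.linalg.norm(e) * np.linalg.norm(c) + 1e-12)
    return euclidean + cosine

def anomaly_score(expert_acts, cloner_acts):
    """Total score: sum of per-layer discrepancies over the hint layers.

    `expert_acts` / `cloner_acts` are lists of activation arrays, one per
    chosen intermediate layer of each network.
    """
    return sum(layer_discrepancy(e, c) for e, c in zip(expert_acts, cloner_acts))

# Toy usage: on normal inputs the cloner mimics the expert closely, so the
# score is near zero; on anomalous inputs the activations diverge and the
# score grows.
rng = np.random.default_rng(0)
normal_acts = [rng.normal(size=(4, 8)) for _ in range(3)]   # stand-in "layers"
print(anomaly_score(normal_acts, normal_acts))              # near zero
print(anomaly_score(normal_acts, [a + 0.5 for a in normal_acts]))  # larger
```

In this toy setup, a threshold on the score would separate inputs the cloner reproduces well (normal) from those it does not (anomalous); localization in the paper additionally uses interpretability methods over this discrepancy, which the sketch does not cover.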