Adversarially Learning Occlusions by Backpropagation for Face Recognition

Bibliographic Details
Title: Adversarially Learning Occlusions by Backpropagation for Face Recognition
Authors: Caijie Zhao, Ying Qin, Bob Zhang
Source: Sensors, Vol 23, Iss 20, p 8559 (2023)
Publisher Information: MDPI AG, 2023.
Publication Year: 2023
Collection: LCC:Chemical technology
Subject Terms: occluded face recognition, deep neural network, end-to-end, adversarial learning, Chemical technology, TP1-1185
More Details: With the advancement of deep neural networks, face recognition methods have achieved great success in research and are now being applied at a human level. However, existing face recognition models fail to achieve state-of-the-art performance in recognizing occluded face images, which are common in real-world scenarios. One potential reason for this is the lack of large-scale training datasets, since labelling occlusions is labour-intensive and costly. To resolve these issues, we propose the Adversarially Learning Occlusions by Backpropagation (ALOB) model, a simple yet powerful double-network framework that mitigates manual labelling by contrastively learning corrupted features against personal identity labels, thereby maximizing the loss. To investigate the performance of the proposed method, we compared our model to existing state-of-the-art methods, which operate under the supervision of occlusion learning, in various experiments. Extensive experimentation on LFW, AR, MFR2, and other synthetic masked or occluded datasets confirmed the effectiveness of the proposed model in occluded face recognition, with better results in both masked face recognition and general face recognition. On the AR datasets, the ALOB model outperformed other advanced methods, obtaining a 100% recognition rate for images with sunglasses (protocols 1 and 2). We also achieved the highest accuracies of 94.87% and 92.05%, and TAR@FAR = 1 × 10−3 of 78.93% and 71.57%, on LFW-OCC-2.0 and LFW-OCC-3.0, respectively. Furthermore, the proposed method generalizes well in terms of FR and MFR, yielding superior results on three datasets, LFW, LFW-Masked, and MFR2, with accuracies of 98.77%, 97.62%, and 93.76%, respectively.
Document Type: article
File Description: electronic resource
Language: English
ISSN: 1424-8220
Relation: https://www.mdpi.com/1424-8220/23/20/8559; https://doaj.org/toc/1424-8220
DOI: 10.3390/s23208559
Access URL: https://doaj.org/article/73d11d3b7d1245d6af930c5cbd6d1ea9
Accession Number: edsdoj.73d11d3b7d1245d6af930c5cbd6d1ea9
Database: Directory of Open Access Journals