Face recognition based on deep convolutional neural networks (CNNs) has achieved superior accuracy. Despite the high discriminability of the deep features generated by CNNs, their vulnerability is often overlooked, leading to security and privacy concerns, particularly the risk of reconstructing face images from deep templates. In this paper, we propose a method for generating high-definition (HD) face images from deep features. Specifically, the deep features extracted by a CNN are mapped to the input (latent vector) of a pre-trained StyleGAN2 model using a regression model; HD face images are then generated from the latent vector by the pre-trained StyleGAN2 model. To evaluate our method, we extracted face features from the generated HD face images and compared them against the bona fide face features. Although simple, our face image reconstruction method proves effective in the experiments, achieving an attack performance as high as SAR = 46.08% (18.30%) @ FAR = 0.1 threshold under the type-I (type-II) attack setting. Moreover, the experiments indicate that 50.7% of the generated HD face images can pass one commercial off-the-shelf (COTS) liveness detection system.
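The feature-to-latent mapping step described in the abstract could be sketched as a simple regression from face templates to generator latent vectors. The sketch below is a minimal illustration, not the authors' implementation: the 512-dimensional feature and latent sizes, the ridge regressor, and the random arrays standing in for real (embedding, latent) training pairs are all assumptions for demonstration.

```python
# Hypothetical sketch: regress from deep face features to StyleGAN2-style
# latent vectors. Dimensions and data are illustrative placeholders.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
FEAT_DIM, LATENT_DIM, N_PAIRS = 512, 512, 1000

# Training pairs: embeddings extracted by a face-recognition CNN, paired
# with the latent vectors that generated the corresponding face images.
features = rng.standard_normal((N_PAIRS, FEAT_DIM))
latents = rng.standard_normal((N_PAIRS, LATENT_DIM))

# Fit a linear (ridge) regression mapping features -> latent vectors.
mapper = Ridge(alpha=1.0).fit(features, latents)

# Attack time: a leaked template is mapped to a latent vector, which would
# then be fed to the pre-trained StyleGAN2 generator to synthesize a face.
leaked_template = rng.standard_normal((1, FEAT_DIM))
predicted_latent = mapper.predict(leaked_template)
print(predicted_latent.shape)  # (1, 512)
```

In practice the regression model would be trained on pairs produced by sampling latents, generating faces with StyleGAN2, and extracting their embeddings with the target CNN; the generated image would then be compared against the enrolled template to mount the attack.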
|Title of host publication||BIOSIG 2021 - Proceedings of the 20th International Conference of the Biometrics Special Interest Group|
|Editors||Arslan Brömme, Christoph Busch, Naser Damer, Antitza Dantcheva, Marta Gómez-Barrero, Kiran Raja, Christian Rathgeb, Ana F. Sequeira, Andreas Uhl|
|Publisher||Gesellschaft für Informatik (GI)|
|Number of pages||10|
|Publication status||Published - 2021|
|Event||20th International Conference of the Biometrics Special Interest Group, BIOSIG 2021 - Darmstadt, Germany|
Duration: 2021 Sept 15 → 2021 Sept 17
|Name||Lecture Notes in Informatics (LNI), Proceedings - Series of the Gesellschaft für Informatik (GI)|
|Conference||20th International Conference of the Biometrics Special Interest Group, BIOSIG 2021|
|Period||15/9/21 → 17/9/21|
Bibliographical note
Funding Information:
This work was supported by a grant from the Ministry of Higher Education (MOHE) Malaysia through the Fundamental Research Grant Scheme (FRGS/1/2018/ICT02/MUSM/03/3). The authors would like to thank Dr. Mai Guangcan for kindly providing the reconstructed face images.
© 2021 Gesellschaft für Informatik (GI). All rights reserved.
All Science Journal Classification (ASJC) codes
- Computer Science Applications