
DisP+V: A Unified Framework for Disentangling Prototype and Variation From Single Sample per Person.

Publication: Journal Article
Pang, M; Wang, B; Ye, M; Cheung, Y-M; Chen, Y; Wen, B
Published in: IEEE Transactions on Neural Networks and Learning Systems
February 2023

Single sample per person face recognition (SSPP FR) is one of the most challenging problems in FR due to the extreme lack of enrolment data. To date, the most popular SSPP FR methods are the generic learning methods, which recognize query face images based on the so-called prototype plus variation (i.e., P+V) model. However, the classic P+V model suffers from two major limitations: 1) it linearly combines the prototype and variation images in the observational pixel-spatial space and cannot generalize to multiple nonlinear variations, e.g., poses, which are common in face images and 2) it would be severely impaired once the enrolment face images are contaminated by nuisance variations. To address the two limitations, it is desirable to disentangle the prototype and variation in a latent feature space and to manipulate the images in a semantic manner. To this end, we propose a novel disentangled prototype plus variation model, dubbed DisP+V, which consists of an encoder-decoder generator and two discriminators. The generator and discriminators play two adversarial games such that the generator nonlinearly encodes the images into a latent semantic space, where the more discriminative prototype feature and the less discriminative variation feature are disentangled. Meanwhile, the prototype and variation features can guide the generator to generate an identity-preserved prototype and the corresponding variation, respectively. Experiments on various real-world face datasets demonstrate the superiority of our DisP+V model over the classic P+V model for SSPP FR. Furthermore, DisP+V demonstrates its unique characteristics in both prototype recovery and face editing/interpolation.
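The classic P+V model the abstract contrasts against can be illustrated concretely: a query face is expressed as a linear combination of gallery prototype images plus atoms from a generic variation dictionary, and identity is decided by which prototype best explains the query once the variation part is removed. The sketch below is an illustrative least-squares toy on synthetic data, not the authors' implementation; the array sizes and the residual-based decision rule are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_ids, n_vars = 64, 5, 8          # toy sizes: pixel dim, gallery identities, variation atoms

P = rng.normal(size=(d, n_ids))      # prototype dictionary: one enrolment image per person
V = rng.normal(size=(d, n_vars))     # generic variation dictionary (e.g. lighting offsets)

true_id = 3
y = P[:, true_id] + 0.3 * (V @ rng.normal(size=n_vars))   # query = prototype + variation

# Classic P+V: solve y ~= [P V] c in the pixel space (linear, per the abstract's critique)
D = np.hstack([P, V])
coef, *_ = np.linalg.lstsq(D, y, rcond=None)
a, b = coef[:n_ids], coef[n_ids:]    # prototype coefficients / variation coefficients

# Identify by smallest per-identity residual after subtracting the shared variation part
residuals = [np.linalg.norm(y - a[i] * P[:, i] - V @ b) for i in range(n_ids)]
pred = int(np.argmin(residuals))
```

Because the combination happens directly in pixel space, nonlinear variations such as pose changes break this model; that limitation is precisely what motivates moving the disentanglement into a latent feature space, as DisP+V does.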


Published In

IEEE Transactions on Neural Networks and Learning Systems

DOI

10.1109/TNNLS.2021.3103194

EISSN

2162-2388

ISSN

2162-237X

Publication Date

February 2023

Volume

34

Issue

2

Start / End Page

867 / 881

Related Subject Headings

  • Pattern Recognition, Automated
  • Neural Networks, Computer
  • Humans
  • Face
  • Algorithms
 

Citation

APA: Pang, M., Wang, B., Ye, M., Cheung, Y.-M., Chen, Y., & Wen, B. (2023). DisP+V: A Unified Framework for Disentangling Prototype and Variation From Single Sample per Person. IEEE Transactions on Neural Networks and Learning Systems, 34(2), 867–881. https://doi.org/10.1109/tnnls.2021.3103194

Chicago: Pang, Meng, Binghui Wang, Mang Ye, Yiu-Ming Cheung, Yiran Chen, and Bihan Wen. “DisP+V: A Unified Framework for Disentangling Prototype and Variation From Single Sample per Person.” IEEE Transactions on Neural Networks and Learning Systems 34, no. 2 (February 2023): 867–81. https://doi.org/10.1109/tnnls.2021.3103194.

ICMJE: Pang M, Wang B, Ye M, Cheung Y-M, Chen Y, Wen B. DisP+V: A Unified Framework for Disentangling Prototype and Variation From Single Sample per Person. IEEE transactions on neural networks and learning systems. 2023 Feb;34(2):867–81.

MLA: Pang, Meng, et al. “DisP+V: A Unified Framework for Disentangling Prototype and Variation From Single Sample per Person.” IEEE Transactions on Neural Networks and Learning Systems, vol. 34, no. 2, Feb. 2023, pp. 867–81. Epmc, doi:10.1109/tnnls.2021.3103194.

NLM: Pang M, Wang B, Ye M, Cheung Y-M, Chen Y, Wen B. DisP+V: A Unified Framework for Disentangling Prototype and Variation From Single Sample per Person. IEEE transactions on neural networks and learning systems. 2023 Feb;34(2):867–881.
