Face Identification Using DCGANs and Basic Manipulations in Combination with Data Augmentation
Despite the strong performance of 2D and 3D recognition, DNN-based face recognition has encountered several difficulties, chief among them the challenge of gathering enough training images, since DNNs typically need a large amount of data to learn effectively. A large dataset is generally required to reach high recognition accuracy, and because DNNs have strong learning capacity they benefit from many different views of each subject's face. However, collecting such a dataset for every class is both impractical and labour-intensive. Image augmentation techniques fall into two main types: traditional and generative. Traditional data augmentation includes geometric transformations, random cropping, kernel filters, colour space augmentation, and noise injection.
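A minimal sketch of the traditional augmentations listed above, using torchvision; the specific transform parameters and the 160-pixel crop size are illustrative assumptions, not values taken from the paper.

```python
import torch
from torchvision import transforms

# Each transform below corresponds to one of the "traditional" techniques
# named in the text; parameters are illustrative, not the paper's settings.
basic_augment = transforms.Compose([
    transforms.RandomRotation(degrees=10),                 # geometric transformation
    transforms.RandomResizedCrop(160, scale=(0.8, 1.0)),   # random cropping
    transforms.ColorJitter(brightness=0.2, contrast=0.2,
                           saturation=0.2),                # colour space augmentation
    transforms.GaussianBlur(kernel_size=3),                # kernel filter
    transforms.ToTensor(),
    transforms.Lambda(lambda x: x + 0.01 * torch.randn_like(x)),  # noise injection
])
```

Applied inside a training data loader, such a pipeline yields a slightly different perturbation of every face image each epoch, which is what makes these simple manipulations useful for enlarging a small per-subject dataset.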
Rapid technological advances have made it possible to significantly increase the accuracy of facial recognition, and face detection and recognition techniques are now used in many applications, including identity verification, security, surveillance, and access control. The authors review regional and international methods for person re-identification, emphasising the use of soft biometric traits such as face shape, hair colour, skin tone, eye shape, and eye colour, combined with local and global characteristics such as shape, colour, and texture, to re-identify persons from their faces. In this study, images produced by a combination of DCGAN and conventional transformations were used to train FaceNet as the facial recognition model.
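For context, a minimal DCGAN generator sketch in PyTorch of the kind used to synthesise additional face images; the layer widths, 100-dimensional latent vector, and 64x64 output resolution are illustrative assumptions rather than the configuration used in the paper.

```python
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """Upsamples a latent vector to a synthetic face image via transposed convolutions."""
    def __init__(self, latent_dim=100, feature_maps=64, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            # latent vector z -> 4x4 feature map
            nn.ConvTranspose2d(latent_dim, feature_maps * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feature_maps * 8), nn.ReLU(True),
            # 4x4 -> 8x8
            nn.ConvTranspose2d(feature_maps * 8, feature_maps * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 4), nn.ReLU(True),
            # 8x8 -> 16x16
            nn.ConvTranspose2d(feature_maps * 4, feature_maps * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 2), nn.ReLU(True),
            # 16x16 -> 32x32
            nn.ConvTranspose2d(feature_maps * 2, feature_maps, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps), nn.ReLU(True),
            # 32x32 -> 64x64 RGB image with values in [-1, 1]
            nn.ConvTranspose2d(feature_maps, channels, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        # z has shape (batch, latent_dim, 1, 1)
        return self.net(z)
```

Once trained adversarially against a discriminator on the real face crops, the generator can produce novel, realistic-looking views that supplement the conventionally augmented images.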
Related approaches include Local Binary Pattern Histograms (LBPH), Tensor Robust Principal Component Analysis (TRPCA), and Principal Component Analysis (PCA). PCA is frequently used to reduce dimensionality: it compresses the dataset by retaining only the directions that contribute most to the variance. The principal components (the eigenvectors) of the data and their corresponding eigenvalues are obtained by decomposing the covariance matrix. The LFW dataset is the benchmark dataset for face verification and recognition; it contains 13,233 facial photos of 5,749 persons and presents several challenges, including varied face poses, expressions, and lighting conditions as well as partial occlusion.
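A short sketch of PCA as described above: the covariance matrix of the flattened face images is decomposed and only the eigenvectors with the largest eigenvalues are kept. The function name, variable names, and the number of retained components are illustrative.

```python
import numpy as np

def pca_eigenfaces(X, n_components=50):
    """X: (n_samples, n_pixels) matrix of flattened, grayscale face images."""
    mean_face = X.mean(axis=0)
    centered = X - mean_face
    cov = np.cov(centered, rowvar=False)            # covariance matrix of pixel dimensions
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigendecomposition (symmetric matrix)
    order = np.argsort(eigvals)[::-1]               # sort by decreasing explained variance
    components = eigvecs[:, order[:n_components]]   # principal components (eigenfaces)
    projected = centered @ components               # compressed representation
    return projected, components, mean_face
```

Note that for high-resolution images the pixel covariance matrix becomes very large, so in practice an SVD of the centered data (or the smaller Gram matrix) is usually computed instead; the result is the same set of principal components.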
Additionally, the experimental evaluation shows that using DCGANs together with basic manipulations for data augmentation, and FaceNet + SVM for face recognition, yields a significant improvement in accuracy over basic manipulations alone (geometric transformations, brightness changes, filtering, etc.). FaceNet was used to extract facial features from the augmented face dataset, and an SVM was used to classify them. Various experiments and comparisons with widely used data augmentation and face recognition techniques demonstrate the usefulness of the proposed method, and the images produced by the proposed data augmentation technique are sufficiently realistic to improve face recognition performance.
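A minimal sketch of the FaceNet + SVM classification step described above. The FaceNet embeddings are assumed to be precomputed 512-dimensional vectors per face crop (the paper does not specify a particular implementation), and the SVM kernel choice is illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import Normalizer

def train_face_classifier(embeddings, labels):
    """embeddings: (n_faces, 512) FaceNet vectors; labels: subject identities."""
    normed = Normalizer(norm="l2").fit_transform(embeddings)  # unit-length embeddings
    clf = SVC(kernel="linear", probability=True)               # identity classifier
    clf.fit(normed, labels)
    return clf

def identify(clf, embedding):
    """Predict the identity of a single face from its FaceNet embedding."""
    normed = Normalizer(norm="l2").fit_transform(embedding.reshape(1, -1))
    return clf.predict(normed)[0]
```

L2-normalising the embeddings before classification is a common choice because FaceNet distances are defined on the unit hypersphere, making a linear SVM a natural fit for the resulting feature space.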