Cross-Gender and 1-to-N Face Recognition Error Analysis of Gender Misclassified Images
Abstract
A number of recent research studies have shown that face recognition
accuracy is meaningfully worse for females than males. Gender classification
algorithms also exhibit demographic disparities: one commercial classifier gives a 7% error
rate for African-American females vs. 0.5% for Caucasian males. In response
to these observations, we consider one primary question: do errors in gender
classification lead to errors in facial recognition? We approach this question
by focusing on two main areas: (1) do gender-misclassified images generate
higher similarity scores with different individuals from the false-gender
category than with those from their true-gender category? and (2) what is the
impact of gender-misclassified images on the recognition accuracy of the
system? We find
that (1) for all demographic groups, except African-American males,
non-mated pairs of subjects with at least one gender-misclassified image have
a higher False Match Rate (FMR) with their ground-truth gender group than
with their erroneously predicted gender group. (2) Similarly, on average and
across demographic groups, gender-misclassified subjects still have higher
similarity scores with subjects of their true gender than with subjects of the
falsely classified gender. (3) There was no significant impact on the 1-to-N accuracy
when using the open-source algorithm, ArcFace, whereas for the commercial
matcher, there seems to be a decline in performance accuracy for
misclassified images. To our knowledge, this is the first work to analyze
match scores for gender-misclassified images against both the false-gender
and true-gender categories and to extend the analysis to the identification
(1-to-N) setting.
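To make the first comparison concrete, the sketch below illustrates how a False Match Rate could be computed for one gender-misclassified subject's non-mated pairs, once against subjects of the ground-truth gender group and once against subjects of the erroneously predicted gender group. This is a minimal illustration under assumed inputs, not the authors' code: the score values, the threshold, and the function name are hypothetical.

import numpy as np

def false_match_rate(nonmated_scores, threshold):
    # FMR: fraction of non-mated similarity scores at or above the decision threshold.
    scores = np.asarray(nonmated_scores, dtype=float)
    return float(np.mean(scores >= threshold))

# Hypothetical non-mated similarity scores for one gender-misclassified subject,
# compared against the ground-truth gender group and against the erroneously
# predicted gender group (illustrative values only, not data from the paper).
scores_true_gender = [0.18, 0.42, 0.31, 0.55, 0.27]
scores_false_gender = [0.12, 0.22, 0.19, 0.30, 0.25]

threshold = 0.40  # illustrative operating threshold, not taken from the paper

print("FMR vs. true-gender group: ", false_match_rate(scores_true_gender, threshold))
print("FMR vs. false-gender group:", false_match_rate(scores_false_gender, threshold))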