    Scholarship Repository at Florida Tech
    College of Engineering and Science
    Theses/Dissertations

    Cross-Gender and 1-to-N Face Recognition Error Analysis of Gender Misclassified Images

    VELAACHU-THESIS-2022.pdf (4.983Mb)
    Date
    2022-05
    Author
    Vela Achu, Paloma
    Abstract
    A number of recent research studies have shown that face recognition accuracy is meaningfully worse for females than for males. Gender classification algorithms also perform unevenly: one commercial classifier gives a 7% error rate for African-American females versus 0.5% for Caucasian males. In response to these observations, we consider one primary question: do errors in gender classification lead to errors in face recognition? We approach this question by focusing on two main areas: (1) do gender-misclassified images generate higher similarity scores with different individuals from the false-gender category than from their true-gender category? (2) What is the impact of gender-misclassified images on the performance accuracy of the system? We find that (1) for all demographic groups except African-American males, non-mated pairs of subjects with at least one gender-misclassified image have a higher False Match Rate (FMR) within their ground-truth gender than within their erroneously projected gender group. Similarly, on average and across demographic groups, gender-misclassified subjects still have higher similarity scores with subjects of their true gender than with those of the falsely classified gender. (2) There was no significant impact on 1-to-N accuracy when using the open-source algorithm ArcFace, whereas the commercial matcher shows a decline in performance accuracy for misclassified images. To our knowledge, this is the first work to analyze match scores for gender-misclassified images against both the false-gender category and the true-gender category and to extend the analysis to an identification (1-to-N) setting.
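    The abstract's two headline metrics can be stated concretely. A minimal sketch (not from the thesis; all function names, scores, and the threshold are illustrative assumptions): FMR is the fraction of non-mated comparisons whose similarity score clears the decision threshold, and a 1-to-N search succeeds at rank 1 when the top-scoring gallery identity matches the probe's true identity.

    ```python
    def false_match_rate(non_mated_scores, threshold):
        """Fraction of non-mated (different-subject) comparisons whose
        similarity score meets or exceeds the decision threshold."""
        false_matches = sum(1 for s in non_mated_scores if s >= threshold)
        return false_matches / len(non_mated_scores)

    def rank1_hit(probe_scores, true_id):
        """In 1-to-N identification, the search is a rank-1 hit when the
        highest-scoring gallery identity is the probe's true identity."""
        best_id = max(probe_scores, key=probe_scores.get)
        return best_id == true_id

    # Toy usage with made-up similarity scores
    non_mated = [0.12, 0.35, 0.61, 0.28, 0.55]
    print(false_match_rate(non_mated, threshold=0.5))  # 2 of 5 -> 0.4

    gallery = {"subject_a": 0.81, "subject_b": 0.67, "subject_c": 0.49}
    print(rank1_hit(gallery, "subject_a"))  # True
    ```

    The thesis compares FMR computed over two partitions of the non-mated pairs (true-gender vs. falsely projected gender group); in this sketch that would mean calling `false_match_rate` once per partition at the same threshold.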
    URI
    http://hdl.handle.net/11141/3538
    Collections
    • Theses/Dissertations

    DSpace software copyright © 2002-2015  DuraSpace
    Contact Us | Send Feedback
    Theme by @mire NV