Bias, Misuse, and Accountability in Facial Recognition Technology: Ethical, Legal, and Organizational Imperatives in the Age of Reverse Image Search

Authors

  • Lord DORDUNOO, Doctoral Candidate, Marymount University, Arlington, VA, USA

Abstract

Facial recognition technology (FRT) has become an increasingly pervasive tool in law enforcement, immigration control, and commercial applications, yet its adoption raises pressing ethical, legal, and organizational concerns. Numerous studies have documented significant racial and gender biases in algorithmic performance, resulting in disproportionate misidentification of minority groups and undermining public trust in technology-driven security solutions. Beyond these technical shortcomings, the misuse of reverse image search applications, such as the use of PimEyes or TinEye to dox immigration officers, has exacerbated these risks by weaponizing digital tools in ways that amplify existing societal tensions. This paper critically examines how the intersection of algorithmic bias and the misuse of search technologies creates a compounded civil rights crisis, threatening constitutional protections, organizational legitimacy, and consumer trust. Drawing on recent policy reports, case studies, and scholarly literature, the study argues for a multi-stakeholder corrective framework emphasizing bias audits, inclusive dataset design, stronger regulatory oversight, and organizational accountability. By addressing these systemic deficiencies, stakeholders can simultaneously improve the accuracy of FRT, safeguard civil liberties, and foster inclusive innovation that builds sustainable trust in emerging technologies.

Published

2025-11-23