Algorithmic Bias and the Limits of U.S. Law: Lessons from the EU AI Act’s Risk-Based Framework
Keywords:
algorithmic bias, artificial intelligence, machine learning, automated decision-making, fairness in AI, discrimination law, civil rights, equal protection, data privacy law, bias in algorithms, predictive analytics, transparency and accountability, GDPR, CCPA, employment law and AI, criminal justice algorithms, risk assessment tools, legal liability for AI, ethical AI, technology and human rights

Abstract
This article examines the growing problem of algorithmic bias in the United States and argues that the current sector-specific legal framework is insufficient to address the discriminatory harms produced by AI systems. It highlights how automated tools used in employment, lending, housing, healthcare, and criminal justice often replicate or intensify social inequalities, while U.S. laws such as Title VII, the Equal Credit Opportunity Act (ECOA), and the Fair Housing Act provide only reactive, post-hoc remedies. The article contrasts this fragmented model with the European Union’s AI Act, which establishes a comprehensive, risk-based regulatory structure requiring pre-deployment audits, transparency, human oversight, and bias testing for high-risk AI systems. Comparing the reactive U.S. approach with the EU’s preventive model, the article concludes that the United States could benefit from hybrid reforms that integrate risk assessments and mandatory bias mitigation into existing civil-rights statutes. Such an approach would strengthen accountability, reduce algorithmic discrimination, and create a more future-proof legal environment for the governance of automated decision-making technologies.
References
1. Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and machine learning: Limitations and opportunities. https://fairmlbook.org/
2. O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing.
3. Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
4. Kleinberg, J., Ludwig, J., Mullainathan, S., & Sunstein, C. R. (2018). Discrimination in the age of algorithms. Journal of Legal Analysis, 10(1), 113–174. https://doi.org/10.1093/jla/lax001
5. Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. Proceedings of Machine Learning Research, 81, 149–159.
6. European Union. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council (General Data Protection Regulation). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016R0679
7. U.S. Equal Employment Opportunity Commission. (2023). EEOC guidance on AI and employment discrimination. https://www.eeoc.gov/ai-guidance
8. Executive Office of the President. (2016). Big data: A report on algorithmic systems, opportunity, and civil rights. Washington, DC: U.S. Government. https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/2016_0504_data_discrimination.pdf
9. State of California. (2022). California Consumer Privacy Act (CCPA). https://oag.ca.gov/privacy/ccpa
10. State v. Loomis, 881 N.W.2d 749 (Wis. 2016).
11. State v. Jackson, 388 P.3d 935 (Wash. 2017).
12. United States v. Microsoft Corp., 138 S. Ct. 1186 (2018) (addressing government access to stored data and cross-border privacy compliance).
13. West, S. M., Whittaker, M., & Crawford, K. (2019). Discriminating systems: Gender, race, and power in AI. AI Now Institute, New York University. https://ainowinstitute.org/discriminatingsystems.pdf
14. Partnership on AI. (2020). Fair, transparent, and accountable AI. https://www.partnershiponai.org/fairness/