Legal Frameworks for AI in Immigration Processing
Artificial intelligence is reshaping administrative decision-making in the United States. From biometric screening of travelers at borders to automated review of documents in visa and asylum applications, algorithmic systems are increasingly tasked with determining individuals' rights and immigration status. Supporters argue that automation will accelerate notoriously lengthy immigration proceedings and reduce human error. But integrating AI into an area as consequential as immigration raises questions of due process, accountability, and discrimination. As federal agencies increasingly turn to algorithmic tools to assist, or even replace, human adjudicators, courts and legislatures must anticipate how automated decision-making will alter constitutional and administrative law frameworks in the United States. Meaningful reform requires that administrative and constitutional doctrines evolve to guarantee transparency, oversight, and fairness in the use of algorithmic systems within the modern administrative state.
The administrative structure of U.S. immigration policy has long been marked by tension between equity and expedience. Since 2003, when the Department of Homeland Security assumed responsibility for immigration, enforcement and adjudication have been split among several agencies: U.S. Citizenship and Immigration Services, Immigration and Customs Enforcement, and Customs and Border Protection. With these agencies handling enormous caseloads of applications and enforcement proceedings annually, AI appears to offer a ready solution to administrative congestion. Early implementations, like the Automated Biometric Identification System (IDENT) and the Homeland Advanced Recognition Technology (HART) database, enable identity verification by collecting and analyzing biometric data such as fingerprints, facial images, and iris scans [1]. In other contexts, automated tools have been used to triage asylum applications, flag potentially fraudulent filings, and predict visa overstays [2].
Though it seems a tempting remedy, introducing AI into these procedures creates complex legal concerns. The Administrative Procedure Act of 1946 directs courts to set aside agency action that is “arbitrary” or “capricious,” and due process requires that affected individuals receive meaningful notice and an opportunity to respond [3]. Automated systems often fail to provide a clear statement of reasoning for their outputs, making it difficult for applicants and agency staff alike to identify and contest errors or bias in a model. This “black box” problem has prompted questions about whether algorithmic adjudication can provide the individualized consideration the Fifth Amendment demands. A recent case highlighting these concerns is Mobley v. Workday, Inc., in which the plaintiff alleged that his rejection from over 100 job applications screened by the vendor’s AI-powered system violated federal discrimination law. Though Workday, Inc. maintains that it merely provides software, the Equal Employment Opportunity Commission has argued that the vendor may be liable because its algorithmic decision-making tool participated in screening out certain candidates [4]. Similar concerns have already reached courts in other administrative contexts. In State v. Loomis, 881 N.W.2d 749 (Wis. 2016), the Wisconsin Supreme Court upheld the use of a risk assessment algorithm at sentencing while cautioning that defendants must be able to contest algorithmic assessments that affect their liberty [5]. Likewise, Department of Commerce v. New York, 139 S. Ct. 2551 (2019), reaffirmed that courts must be able to review the genuine basis for administrative determinations, emphasizing transparency in agency reasoning [6]. These cases indicate that reliance on algorithms comes with a price: for AI to inform immigration adjudications, agencies must provide explainability and access to meaningful review, a requirement that most existing technological systems cannot yet fulfill.
The use of AI in legal processes presents an additional issue: the risk of algorithmic bias. Because AI models are trained on historical immigration data, they risk reproducing patterns of discrimination by nationality, ethnicity, or socioeconomic status. The Department of Homeland Security's Office for Civil Rights and Civil Liberties has cautioned that predictive algorithms can "amplify inequities inherent in human decision-making" [7]. For example, a 2023 Government Accountability Office report found that automated visa fraud detection systems disproportionately flagged applicants from certain countries even when their application profiles were nearly identical to those of applicants from elsewhere [8]. Moreover, affected applicants often have little procedural recourse: algorithmic risk scores are rarely disclosed, so the disparities they produce go unexplained. Although emerging proposals such as the Algorithmic Accountability Act and the White House’s Blueprint for an AI Bill of Rights aim to promote fairness in automated decision-making, no binding legislation currently guarantees due process protections when federal agencies use such systems [9].
Courts have only recently begun to grapple with these questions, and future holdings will demarcate the boundaries of algorithmic accountability in administrative law. Some legal scholars, such as Cary Coglianese and David Lehr, argue that the APA's usual requirements of reasoned decision-making should extend to algorithms: agencies must document how data were collected, how models were trained, and what safeguards prevent fundamental errors [10]. Others propose a "right to explanation," similar to the one embedded in the European Union's General Data Protection Regulation, which would require agencies to disclose the reasons for algorithmic decisions affecting individual rights [11].
Beyond these procedural issues, the use of AI in immigration adjudication also raises equal protection concerns. The Supreme Court has repeatedly held that the Due Process Clause of the Fifth Amendment prohibits arbitrary discrimination in immigration regulations. In Yick Wo v. Hopkins, 118 U.S. 356 (1886), the Court famously held that laws applied “with an evil eye and an unequal hand” violate equal protection principles [12]. If an algorithmic system consistently disadvantages applicants from a particular region or demographic group, it could raise constitutional concerns under this doctrine.
The future of AI in immigration adjudication will turn on how courts balance administrative efficiency against constitutional fairness. One potential path forward is a hybrid model of algorithmic regulation in which AI tools assist, but do not replace, human adjudicators. Under this model, human officers would retain ultimate decision-making authority, and algorithmic recommendations would be subject to regular auditing. Agencies could also establish internal review boards of technologists, ethicists, and immigration lawyers to examine algorithmic systems for neutrality and legal compliance prior to deployment [13].
In conclusion, AI should be employed in immigration law not to supplant human judgment but to supplement it. Used properly, AI can streamline document review, detect fraud, and allocate resources more effectively. Yet these technological advantages must not come at the expense of fundamental rights. Today's courts and legislatures face both unprecedented problems and unprecedented tools, and they must proceed with the thoughtful oversight that this rapidly evolving intersection of law and technology requires.
Edited by Lola Castorina
Endnotes
[1] U.S. Department of Homeland Security, IDENT/HART Biometric Systems Overview, DHS Privacy Office Report (U.S. Department of Homeland Security, 2022), online at https://www.dhs.gov/privacy (visited October 23, 2024).
[2] U.S. Department of Homeland Security, IDENT/HART Biometric Systems Overview, DHS Privacy Office Report (U.S. Department of Homeland Security, 2022), online at https://www.dhs.gov/privacy (visited October 23, 2024).
[3] Administrative Procedure Act, 5 U.S.C. § 706 (1946), online at https://www.law.cornell.edu/uscode/text/5/706 (visited October 23, 2024).
[4] Mobley v. Workday, Inc., No. 3:23-cv-00770 (N.D. Cal. filed Feb. 17, 2023); see also Daniel Wiessner, “EEOC Says Workday Covered by Anti-Bias Laws in AI Discrimination Case,” Reuters (Reuters, Apr. 11, 2024), online at https://www.reuters.com/legal/transactional/eeoc-says-workday-covered-by-anti-bias-laws-ai-discrimination-case-2024-04-11/ (visited October 23, 2024).
[5] State v. Loomis, 881 N.W.2d 749 (Wis. 2016), online at https://casetext.com/case/state-v-loomis-3 (visited October 23, 2024).
[6] Department of Commerce v. New York, 139 S. Ct. 2551 (2019), online at https://www.oyez.org/cases/2018/18-966 (visited October 23, 2024).
[7] U.S. Department of Homeland Security, Office for Civil Rights and Civil Liberties, Civil Rights Implications of Algorithmic Decision-Making (U.S. Department of Homeland Security, 2021), online at https://www.dhs.gov/office-civil-rights-and-civil-liberties (visited October 23, 2024).
[8] U.S. Government Accountability Office, Artificial Intelligence: DHS Should Improve Oversight of Automated Immigration Systems, GAO-23-104612 (U.S. Government Accountability Office, 2023), online at https://www.gao.gov/products/gao-23-104612 (visited October 23, 2024).
[9] Algorithmic Accountability Act of 2023, H.R. 5628, 118th Cong. (2023); Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People (The White House Office of Science and Technology Policy, Oct. 2022), online at https://www.whitehouse.gov/ostp/ai-bill-of-rights/ (visited October 23, 2024).
[10] Cary Coglianese & David Lehr, “Regulating by Robot: Administrative Decision-Making in the Machine-Learning Era,” Georgetown Law Journal 105 (2017), online at https://scholarship.law.upenn.edu/faculty_scholarship/1734/ (visited October 23, 2024).
[11] European Union, General Data Protection Regulation, Article 22 (2016), online at https://gdpr-info.eu/art-22-gdpr/ (visited October 23, 2024).
[12] Yick Wo v. Hopkins, 118 U.S. 356 (1886), online at https://www.law.cornell.edu/supremecourt/text/118/356 (visited October 23, 2024).
[13] Elizabeth E. Joh, “Policing by Machine: Algorithmic Governance in Immigration Enforcement,” Stanford Law Review Online 76 (Stanford Law Review, 2024), online at https://www.stanfordlawreview.org/online/policing-by-machine (visited October 23, 2024).