How Can the United States Protect Citizens from Opaque AI Algorithms?

2026/02/06 20:05
8 min read

In today’s world, artificial intelligence increasingly shapes everyday life. Every time we apply for a loan, search for a job, or purchase insurance online, algorithms are “watching” and assessing us. In practice, however, these algorithms are often far from transparent. They can make mistakes and operate in a biased manner, especially in facial recognition systems. For example, research from the Massachusetts Institute of Technology showed that AI-based facial recognition systems produce far higher error rates when identifying women with darker skin tones: 34.7%, compared to 0.8% for light-skinned men. This is not merely a statistic: it affects the safety of U.S. citizens, creating risks to privacy and opening the door to discrimination based on race, age, gender, or any other characteristic.

How can America protect citizens from the unlawful use of their biometric data in artificial intelligence systems, and what mechanisms should be put in place today? We discussed these questions with Nodari Gorgiladze.

About the Expert

Nodari Gorgiladze is an expert in digital law and a researcher in the legal regulation of artificial intelligence, biometric technologies, and digital assets; Program Director at NOTA Digital Currencies Research Center Inc.; and founder of the non-profit Institute for Digital Asset Systems & Tokenization Inc. He is the author of scholarly publications on algorithmic transparency, human-rights protection, and liability for automated decision-making, as well as the inventor of the U.S.-patented Universal Asset Tokenization and Verification Protocol (UATVP™) for legally recognized tokenization of real-world assets.

1. Mr. Nodari, what do we mean by “opaque algorithms” in the United States?

Opaque algorithms are automated systems that make decisions about individuals without providing a clear explanation of the logic behind those decisions. These include credit models, insurance risk-assessment systems, candidate-screening algorithms for employment, as well as biometric technologies—facial and voice recognition.

The problem is that the algorithms themselves, the data used to train them, and the decision criteria are closed—protected as trade secrets or as elements of security systems. As a result, a citizen does not understand why an application was denied, why they were labeled as “high-risk,” or why the system made an error.

Statistics. The 2025 Foundation Model Transparency Index, an independent academic ranking of AI model transparency, shows that transparency among major developers has declined: the average score across 13 companies is around 40 out of 100, down from 58/100 in 2024.

2. What are the main risks you see specifically regarding biometric data?

With biometric data, the risks are even higher. Unlike a password or document number, biometrics cannot be changed. If facial data, fingerprints, or voice prints are collected improperly, used without consent, or compromised, a person may lose control over their identity for life.

In the United States, numerous cases have already been documented in which facial recognition algorithms incorrectly identified individuals, including members of minority communities, leading to wrongful detentions or investigations. That is why transparency, auditability, and legal oversight of biometric algorithms are now treated as fundamental civil-rights issues.

Legal precedent. In California, the case Derek Mobley v. Workday, Inc. is currently being litigated. The plaintiff alleges that AI-based automated candidate-screening tools resulted in discrimination on the basis of age, race, and disability. A federal court allowed part of the claims to proceed, and the case is considered one of the first in the United States in which AI hiring algorithms have come under judicial scrutiny.

3. What role does the state play today in the U.S. in overseeing the use of AI algorithms? Can we speak of real protection of citizens’ rights?

At the federal level, the United States still does not have a single comprehensive law. Instead, there is a mosaic of regulatory approaches: certain powers lie with the Federal Trade Commission (FTC), the Department of Justice, financial and labor market regulators, and agencies responsible for personal data protection.

An important step has been the strengthened role of the FTC, which increasingly treats algorithmic discrimination as a form of unfair or deceptive practice. This means that companies may be held liable if their algorithms lead to systemic violations of consumer rights—even if those violations were “automated.”

At the same time, real protection of citizens’ rights largely depends on regulation at the state level. For example, Illinois, California, and New York have already implemented or are developing specific rules regarding biometric data and automated decision-making systems.

The main challenge is that technology evolves faster than the law. Today, the U.S. government is moving more toward framework-based oversight than strict bans. The focus is on building mechanisms of audit, accountability, and explainability that can contain risks without stopping innovation. The balance between protecting citizens’ rights and supporting technological development has become a key theme in U.S. regulatory policy on AI.

Fact. In 2025, California introduced and debated the Transparency in Frontier Artificial Intelligence Act (SB-53), a legislative initiative that would require companies developing artificial intelligence to publish public documentation on the potential risks of their models. SB-53 is one of the first attempts in the United States to legally enshrine transparency requirements for frontier AI models at the state level.

4. People often talk about differences between American and European approaches to regulating AI and biometrics. In your view, what are the key differences between the U.S. and the EU in protecting citizens from opaque algorithms?

The difference between the U.S. and the European Union lies in the regulatory model. The EU has chosen a centralized, codified approach, while the U.S. operates through a combination of sector-specific regulation, state laws, and court practice.

In the EU, the basic element of protection is the GDPR, which classifies biometric data as a special category of sensitive personal data. Processing is prohibited by default unless there is a clear legal basis. In addition, the EU adopted the AI Act in 2024, which introduces a risk-based model. Facial recognition and biometric identification systems are classified as high-risk, and some practices—such as mass remote biometric surveillance in public spaces—are restricted or prohibited.

In the U.S., the situation is different. Because there is no single federal law that comprehensively regulates biometric data in AI, the level of protection depends on jurisdiction and the context in which the technology is used.

Another important difference is the approach to liability. In the EU, violations of rules on biometric data processing can lead to significant administrative fines, incentivizing preventive compliance. In the U.S., civil lawsuits play the central role, and courts become the primary mechanism for protecting human rights.

Thus, while the EU emphasizes preventive regulation and bans, the U.S. emphasizes flexibility, judicial control, and experimentation at the state level.

Fact. Protection of biometric data in U.S. AI systems is built from the laws of individual states (for example, BIPA in Illinois), sectoral regulation (finance, healthcare), guidance documents (such as the NIST AI Risk Management Framework), and judicial precedents.

5. Finally—the most important question. Mr. Nodari, what exactly can and should the United States change to effectively protect citizens from unlawful use of biometric data in AI systems?

First and foremost, the United States must move from fragmented regulation to a systemic approach. In my work, I identify several key directions that should be enshrined at the national level.

First, it is necessary to establish a duty of notice and informed consent for the collection and use of biometric data. To date, the most developed example is the Illinois Biometric Information Privacy Act (BIPA), which explicitly requires prior written notice and consent. This mechanism has become the basis for hundreds of lawsuits and settlements, confirming its effectiveness as a rights-protection tool.

Second, the U.S. needs a real right to challenge automated decisions made using AI. My research substantiates the need to codify the right to demand human review of such a decision and an explanation of the logic used in biometric systems.

Third, it is important to introduce mandatory audits and risk assessments for AI systems that process biometric data. In the U.S., this approach has begun to take shape through the NIST AI Risk Management Framework, but it remains voluntary. Moving toward mandatory audits for high-risk systems is a crucial step.

Fourth, it is necessary to clearly define limits on law-enforcement use of biometrics. In the United States, there are already examples of local bans or restrictions (including in San Francisco and Boston), where police use of facial recognition technologies has been significantly limited. These cases show that legal intervention is possible and effective.

Finally, the U.S. needs clear legal standards for qualifying harm caused by algorithms. Today, proving rights violations in the biometric sphere is difficult due to the lack of understandable liability standards.

For reference. The development of legal mechanisms to protect individuals from unlawful use of their biometric data in artificial intelligence systems occupies a special place in Nodari Gorgiladze’s work. A dedicated study on this issue was published in 2025 in the specialized journal “Academic Visions.”

Editorial Conclusion

The interview with Nodari Gorgiladze demonstrates that the problem of opaque AI algorithms in the United States is not merely a technical issue, but a legal and societal challenge.

Nodari Gorgiladze’s contribution lies in the systematic integration of analysis of U.S. case law, state-level regulation, and international approaches, which helps move the discussion of AI risks from abstraction into the domain of concrete legal mechanisms. The focus is on building legal safeguards that ensure transparency, accountability, and real protection of human rights—precisely the principles that have already proven effective in certain U.S. jurisdictions and can be scaled at the federal level.
