After a mass biometric leak, human.tech’s Shady El Damaty explains urgent defenses, the risks of centralized ID, and privacy-first cryptographic fixes.

Mass Biometric Leak Exposes the Perils of Centralized Identity: human.tech Co-Founder Shady El Damaty Weighs In


A recent mass leak of biometric and national ID records in Pakistan has thrown into sharp relief a problem many technologists have warned about for years: when identity becomes centralized, a single breach becomes a systemic failure. Millions of people, from those who rely on IDs for banking and benefits to those at heightened risk of harassment or surveillance, suddenly face a cascade of harms that start with fraud and can end in long-term erosion of trust. In this climate, the question is no longer whether a breach will happen but how societies can design identity systems that survive when they do.

We spoke with Shady El Damaty, co-founder and CEO of human.tech, about the human and technical consequences of these leaks and what practical alternatives look like. El Damaty frames the problem bluntly: many people worldwide either lack any reliable ID or are trapped in systems where governments and corporations hold vast stores of sensitive data that can, and do, leak.

“Biometrics are sacred,” he says, arguing that fingerprints and face scans must be kept as close to the person as possible, never hoarded in centralized honeypots that invite attackers. Across the conversation, he lays out both immediate triage steps and a longer roadmap rooted in cryptography, decentralization and multi-stakeholder governance.

Q1. Please briefly describe your role at human.tech and the core problem the project is trying to solve.

I’m the co-founder and CEO of human.tech by Holonym, and the core problem we’re solving is pretty simple: a huge number of people in the world either don’t have IDs at all or have lost them because they’re displaced or stateless, and without some way to securely prove who they are they can’t access the most basic services or humanitarian aid.

At the same time, the rest of us are stuck in systems where identity is controlled by governments or corporations that leak data and use it to track people, and that model is breaking down fast. With deepfakes and bots flooding the internet, it’s getting harder to tell who is real, so what we’re building is a privacy-preserving way for humans to prove their personhood online without handing over control of their identity to centralized institutions that don’t have their best interests at heart.

Q2. The recent Pakistan breach exposed millions of biometric and national ID records. In your view, what are the single biggest immediate and long-term risks from a leak of this scale?

The immediate risk is very human: when your fingerprints and national ID are floating around on the dark web, you are suddenly more vulnerable to targeted scams, harassment, financial fraud, SIM swaps, and, in some cases, even physical harm if malicious actors tie that data back to where you live or who your family is. This isn’t an abstract possibility; it’s the lived reality for millions of people right now.

The long-term risk is that these leaks don’t just disappear; they compound across government and corporate breaches, and all of that data eventually gets fed into the machine learning models training the systems that will decide whether you can obtain a loan, what your health insurance premium costs, and whether you are a real person or a bot.

As deepfakes become indistinguishable from reality, we risk entering a future where people cannot prove their own humanity anymore. That sounds dystopian, but we are already watching the early signs, and it’s not just about fraud at scale; it’s about eroding the basic fabric of trust that society relies on.

Q3. Biometrics are often treated as immutable: “you can’t change a fingerprint.” From an engineering threat-model perspective, how should designers treat biometric data differently to reduce lifelong risk for users?

Biometrics are sacred. They are approximations of your physical instantiation and can be misused in the wrong hands. They cannot be rotated and can be used to create permanent identifiers, because you cannot change them the way you would a password. The way to protect them is to keep biometrics as close to the user as possible. They should never be stored in plain text on any third-party server.

They should only be kept in places that are fully under the control of the user. Sadly, current standard practice is to store biometrics in centralized honeypots of fingerprint or facial data that will inevitably leak. This does not need to be the case today. Advances like zero-knowledge proofs and multi-party computation allow you to unlock a credential without the biometric being stored or transmitted. The emphasis has to be on designing for the inevitability of breach so that even if the infrastructure gets compromised, the individual’s biometric cannot be reused against them.
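To make the keep-it-local idea concrete, here is a minimal Python sketch under stated assumptions, not human.tech’s actual design: the raw template never leaves the device, only a derived verifier is registered with the service, and the example deliberately ignores sensor noise, which real systems handle with fuzzy extractors or multi-party computation.

```python
import hashlib
import hmac
import os

# Hypothetical enroll/unlock flow (illustrative only): the raw biometric
# template stays on the device; only a non-reversible derived verifier is
# ever registered remotely. Real deployments must also tolerate noisy scans,
# which this toy example does not attempt.

def enroll(biometric_template: bytes) -> tuple[bytes, bytes]:
    """Run once on the user's device. Returns (device_salt, verifier).
    The salt stays on the device; only the verifier is registered."""
    device_salt = os.urandom(32)
    unlock_key = hashlib.pbkdf2_hmac("sha256", biometric_template, device_salt, 200_000)
    verifier = hashlib.sha256(unlock_key).digest()  # safe to store server-side
    return device_salt, verifier

def unlock(biometric_template: bytes, device_salt: bytes, verifier: bytes) -> bool:
    """Re-derive the key locally from a fresh scan and compare against the
    registered verifier. No biometric data is transmitted or stored."""
    unlock_key = hashlib.pbkdf2_hmac("sha256", biometric_template, device_salt, 200_000)
    return hmac.compare_digest(hashlib.sha256(unlock_key).digest(), verifier)

# Example with a pretend fingerprint reading as raw bytes.
scan = b"example-fingerprint-template"
salt, registered_verifier = enroll(scan)
assert unlock(scan, salt, registered_verifier)
```

The point of the sketch is the data flow, not the primitives: even if the service’s database leaks, an attacker obtains only a salted verifier, never a reusable biometric template.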

Q4. What architectural or operational failures make centralized national ID systems especially vulnerable to catastrophic breaches?

Centralization concentrates both the honey and the bear, creating a single point that attackers can go after and a single institution that can decide who is in and who is out. The combination is explosive because it means both mass breach and mass exclusion are always just one decision or one vulnerability away.

Q5. How can zero-knowledge proofs or other modern cryptographic techniques reduce harm when identity systems are compromised, explained simply for a general audience?

Zero-knowledge proofs let you prove something about yourself without revealing the underlying data, so, for example, you could prove you are over 18 without giving away your birthdate. We use this kind of cryptography in Human Passport so you can prove you are a unique human without handing over your face scan or your national ID, and the magic of this is that if a database gets breached, there is no honeypot of biometrics to steal, because nothing sensitive is stored in the first place.
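For readers who want to see the zero-knowledge idea in miniature, the sketch below is a toy Schnorr proof of knowledge in Python, not Human Passport’s actual circuitry: the prover convinces a verifier that they know a secret without ever revealing it, which is the same principle behind proving “over 18” without disclosing a birthdate. The group parameters are deliberately tiny and the names are illustrative.

```python
import hashlib
import secrets

# Toy Schnorr proof of knowledge (illustrative only): prove knowledge of a
# secret x with y = g^x mod p, without revealing x. Real identity systems use
# production-grade groups and richer circuits (e.g. range proofs over a
# birthdate), not this demonstration-sized setup.

p = 2**127 - 1          # toy modulus (a Mersenne prime); far too small for real use
g = 3                   # base used for the demonstration
q = p - 1               # exponents are reduced modulo the group order

def prove(x: int, y: int) -> tuple[int, int]:
    """Prover: knows x with y = pow(g, x, p). Returns (commitment, response)."""
    k = secrets.randbelow(q)
    r = pow(g, k, p)                                   # commitment
    c = int.from_bytes(hashlib.sha256(f"{g}{y}{r}".encode()).digest(), "big") % q
    s = (k + c * x) % q                                # response hides x behind k
    return r, s

def verify(y: int, r: int, s: int) -> bool:
    """Verifier: checks g^s == r * y^c (mod p) without learning x."""
    c = int.from_bytes(hashlib.sha256(f"{g}{y}{r}".encode()).digest(), "big") % q
    return pow(g, s, p) == (r * pow(y, c, p)) % p

secret_x = secrets.randbelow(q)          # e.g. a credential secret, never shared
public_y = pow(g, secret_x, p)
commitment, response = prove(secret_x, public_y)
assert verify(public_y, commitment, response)
```

The verifier learns that the statement is true and nothing else; no secret, and crucially no biometric, has to sit in a database for the check to work.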

Q6. For financial rails and payment onboarding, what defenses should banks and payment providers prioritize to stop fraudsters who scale attacks faster than human verification?

They need to stop pretending that bolting on more surveillance will save them. What will save them is adopting privacy-first verification methods that still establish uniqueness and humanity, and that means adopting cryptographic techniques that scale as fast as the fraudsters do. Humans will never be able to click “approve” as fast as a bot can spin up ten thousand fake identities, but protocols can.

Q7. Is a decentralized identity model practical at the national scale, or do you see hybrid approaches as the only realistic path? What would governance for those hybrids look like?

Decentralized identity is absolutely possible at the national and even global scale, but it doesn’t mean governments and banks don’t have a role; it means their role changes. They should be participants, validators and stewards of standards, not custodians holding everyone’s personal data in one vault.

A hybrid model might look like governments helping issue or support credentials while individuals remain the controllers of their identity, and governance has to be multi-stakeholder with civil society and technologists at the table, so that no single entity can hijack the system for profit or control.

Q8. human.tech builds privacy-first recovery and key management. Can you summarize, at a high level, how you enable biometric-assisted recovery without storing reusable biometric templates centrally?

Yes, we use multi-party computation and other cryptographic techniques so that your biometric never exists as a reusable template sitting on some server; instead, it’s used as a local factor in a distributed recovery process, so the system can help you get back into your account without anyone holding a copy of your fingerprint or face that could later be lost, stolen, or abused.
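As a rough illustration of the distributed-recovery principle, and explicitly not human.tech’s protocol, the Python sketch below uses Shamir secret sharing: a recovery key is split into shares held by independent parties, any threshold of which can reconstruct it, so no single server ever holds the whole secret (or a biometric template).

```python
import secrets

# Shamir secret sharing sketch (illustrative only): split a recovery key into
# n shares so that any t of them reconstruct it, while fewer than t reveal
# nothing about the key.

PRIME = 2**521 - 1   # a Mersenne prime comfortably larger than a 256-bit secret

def split(secret: int, threshold: int, num_shares: int) -> list[tuple[int, int]]:
    """Random polynomial of degree threshold-1 with the secret as constant
    term; each share is a point (x, f(x)) on that polynomial."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, num_shares + 1)]

def recover(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 reconstructs the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

recovery_key = secrets.randbits(256)            # stands in for an account key
shares = split(recovery_key, threshold=3, num_shares=5)
assert recover(shares[:3]) == recovery_key      # any 3 of 5 shares suffice
```

In a biometric-assisted flow of the kind described above, a locally derived factor would gate the user’s own share; the sketch only shows why losing any one custodian, or breaching any one server, does not expose the key.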

Q9. If you were advising a government responding to this Pakistan breach, what are the top three immediate technical and policy actions you would recommend in the first 72 hours?

First, stop the bleeding: find the attack vector and secure the system so the breach doesn’t keep expanding. Second, immediately notify citizens and give them the tools to protect themselves, because too often governments hide breaches and people don’t know until it’s too late. Third, dump all logs for forensic analysis, rotate all keys, complete security audits and analyses, and conduct penetration tests to identify gaps.

After the dust settles, begin implementing privacy-preserving identity infrastructure now, because the tech exists: zero-knowledge proofs exist, decentralized identifiers exist, and the only reason they are not in place is that institutions move too slowly or frankly do not care enough about protecting their people.

Q10. What should ordinary people do right now if they suspect their national ID or biometric data has been exposed? Give a short prioritized checklist.

They should immediately update and strengthen all digital accounts that connect to that ID, enable multi-factor authentication wherever possible, watch for unusual activity like SIM swap attempts, and be very cautious about phishing calls or emails that use pieces of leaked data to sound convincing.

Use sites like haveibeenpwned.com or similar to scan the dark web for your personal data. Other services can erase your information for a fee. It can also be worth plugging into NGOs or civil groups that can help with digital security hygiene, because collective defense is always stronger than trying to figure it out alone.

Q11. Which downstream attacks (SIM swap, account takeover, targeted surveillance, deepfakes, etc.) should NGOs, banks and telcos treat as the highest priority after a leak, and how should they coordinate?

SIM swaps and account takeovers are always the first wave, so telcos and banks need to be working hand in hand to monitor for that and respond fast, but the deeper long-term risk is targeted surveillance and the use of biometric leaks to build deepfakes that impersonate people. Because of this, coordination has to include not just financial institutions but also media platforms, governments, and NGOs that can provide rapid alerts and education to those most at risk.

Q12. Looking 5–10 years ahead: what does a resilient, human-centric global identity system look like, technically and in governance, and what are the most important building blocks still missing?

A resilient system looks like one where identity is not a weapon that can be turned against you or taken from you, but a tool that unlocks your humanity online. Technically, it looks like decentralized identifiers, zero-knowledge proofs, and recovery systems that do not require handing your biometrics to a central server.

In terms of governance, it looks like shared stewardship with humans at the center, not corporations or states, and the missing piece today is really political will. The technology is here, the covenant of humanistic technologies lays out the principles, but we need leaders willing to adopt them instead of doubling down on centralized surveillance infrastructures that will keep failing us.

Interview Summary

The takeaway from El Damaty’s perspective is twofold. First, the immediate practical work after any large leak must be aggressive and transparent: stop the bleeding, secure the vector, notify affected people, rotate keys, hand logs to independent forensics and push for rapid containment. Second, longer-term resilience requires rethinking who controls identity. Techniques like zero-knowledge proofs and multi-party computation can allow people to prove facts about themselves (that they are unique, over a certain age, or eligible for a service) without handing raw biometrics to centralized databases where they can be harvested and reused.

For individuals, El Damaty’s advice is straightforward and urgent: lock down accounts, enable multi-factor authentication, watch for SIM-swap attempts and be skeptical of phishing that leverages leaked data. For institutions, the lesson is structural: privacy-preserving verification scales faster than human reviewers and must be part of the defense strategy, especially for banks, telcos and aid organizations that see the worst downstream effects.

Ultimately, the technology to build safer, human-centric identity systems already exists: decentralized identifiers, zero-knowledge proofs and privacy-first recovery flows are real and deployable. “The technology is here,” El Damaty says; what’s missing, he warns, is political will and a governance model that places people, not single vaults of data, at the center. Until that changes, each new breach will be a costly reminder that centralized identity remains a single point of failure for trust itself.
