
Deepfakes Are Entering U.S. Courtrooms—Judges Say They’re ‘Not Ready’

2025/12/09 09:21

[Image: 3D illustration of a deepfake hoax and AI manipulation concept displayed on a smartphone screen. Credit: Getty]

A California judge dismissed a housing dispute case in September after discovering that plaintiffs had submitted what appeared to be an AI-generated deepfake of a real witness. The case may be among the first documented instances of fabricated synthetic media being passed off as authentic evidence in an American courtroom — and judges say the legal system is unprepared for what’s coming.

In Mendones v. Cushman & Wakefield, Alameda County Superior Court Judge Victoria Kolakowski noticed something wrong with a video exhibit. The witness’s voice was disjointed and monotone, her face fuzzy and emotionless. Every few seconds, she would twitch and repeat her expressions. The video claimed to feature a real person who had appeared in other, authentic evidence — but Exhibit 6C was a deepfake.

Kolakowski dismissed the case on September 9. The plaintiffs sought reconsideration, arguing that the judge had suspected, but not proven, that the evidence was AI-generated. She denied their request in November.

The incident has alarmed judges who see it as a harbinger.

“I think there are a lot of judges in fear that they’re going to make a decision based on something that’s not real, something AI-generated, and it’s going to have real impacts on someone’s life,” Judge Stoney Hiljus, chair of Minnesota’s Judicial Branch AI Response Committee, told NBC News. Hiljus is currently surveying state judges to understand how often AI-generated evidence is appearing in their courtrooms.

The vulnerability is not hypothetical. Judge Scott Schlegel of Louisiana’s Fifth Circuit Court of Appeal, a leading advocate for judicial AI adoption who nonetheless worries about its risks, described the problem in personal terms. His wife could easily clone his voice using free or inexpensive software to fabricate a threatening message, he said. Any judge presented with such a recording would grant a restraining order.

“They will sign every single time,” Schlegel said. “So you lose your cat, dog, guns, house, you lose everything.”

Judge Erica Yew of California’s Santa Clara County Superior Court raised another concern: AI could corrupt traditionally reliable sources of evidence. Someone could generate a false vehicle title record and bring it to a county clerk’s office, she said. The clerk would likely lack the expertise to verify it and would enter it into the official record. A litigant could then obtain a certified copy and present it in court.

“Now do I, as a judge, have to question a source of evidence that has traditionally been reliable?” Yew said. “We’re in a whole new frontier.”

Courts are beginning to respond, but slowly. The U.S. Judicial Conference’s Advisory Committee on Evidence Rules has proposed a new Federal Rule of Evidence 707, which would subject “machine-generated evidence” to the same admissibility standards as expert testimony. Under the proposed rule, AI-generated evidence would need to be based on sufficient facts, produced through reliable methods, and reflect a reliable application of those methods — the same Daubert framework applied to expert witnesses.

The rule is open for public comment through February 2026. But the rulemaking process moves at a pace ill-suited to rapidly evolving technology. According to retired federal Judge Paul Grimm, who helped draft one of the proposed amendments, it takes a minimum of three years for a new federal evidence rule to be adopted.

In the meantime, some states are acting independently. Louisiana’s Act 250, passed earlier this year, requires attorneys to exercise “reasonable diligence” to determine whether evidence they submit has been generated by AI.

“The courts can’t do it all by themselves,” Schlegel said. “When your client walks in the door and hands you 10 photographs, you should ask them questions. Where did you get these photographs? Did you take them on your phone or a camera?”

Detection technology offers limited help. Current tools designed to identify AI-generated content remain unreliable, with false positive rates that vary widely depending on the platform and content type. In the Mendones case, metadata analysis helped expose the fabrication — the video’s embedded data indicated it was captured on an iPhone 6, which lacked capabilities the plaintiffs’ story required. But such forensic tells grow harder to find as generation tools improve.
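The kind of cross-check that exposed Exhibit 6C — comparing a file’s claimed capture device against what that device could actually produce — can be sketched as a simple consistency test. The capability table below is illustrative only, not an authoritative forensic database (the iPhone 6, for instance, topped out at 1080p video), and the function names are hypothetical:

```python
# Illustrative device-capability table; real forensic work would rely on a
# vetted database, and metadata itself can be forged or stripped.
DEVICE_CAPS = {
    "iPhone 6": {"max_video_height": 1080, "max_fps": 60},
}

def consistent_with_claimed_device(model: str, video_height: int, fps: int):
    """Return False if a video's properties exceed what its claimed capture
    device supports, True if they are consistent, None if the device is
    unknown (no conclusion either way)."""
    caps = DEVICE_CAPS.get(model)
    if caps is None:
        return None
    return (video_height <= caps["max_video_height"]
            and fps <= caps["max_fps"])

# A "4K" clip whose metadata claims an iPhone 6 is internally inconsistent.
print(consistent_with_claimed_device("iPhone 6", 2160, 30))  # False
print(consistent_with_claimed_device("iPhone 6", 1080, 30))  # True
```

In practice, examiners combine many such signals — codec, container fields, timestamps, sensor noise — and an absence of inconsistency proves nothing, since metadata is trivially editable.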

A small group of judges is working to raise awareness. The National Center for State Courts and Thomson Reuters Institute have created resources distinguishing “unacknowledged AI evidence” — deepfakes passed off as real — from “acknowledged AI evidence” like AI-generated accident reconstructions that all parties recognize as synthetic.

The Trump administration’s AI Action Plan, released in July, acknowledged the problem, calling for efforts to “combat synthetic media in the court system.”

But for now, the burden falls on judges who may lack the technical training to spot fabrications — and on a legal framework built on assumptions that no longer hold.

“Instead of trust but verify, we should be saying: Don’t trust and verify,” said Maura Grossman, a research professor at the University of Waterloo and practicing lawyer who has studied AI evidence issues.

The question facing courts is whether verification remains possible when the tools to detect fabrication are themselves unreliable, and when the consequences of failure range from fraudulent restraining orders to wrongful convictions.

Source: https://www.forbes.com/sites/larsdaniel/2025/12/08/deepfakes-are-entering-us-courtrooms-judges-say-theyre-not-ready/

