
AGI Will Largely Distrust Humans Which Is A Smart Way For AI To Think About Us

2025/12/04 17:08

AGI will need to determine which humans are trustworthy and which ones are not.


In today’s column, I examine a somewhat startling revelation that not only do humans have to figure out whether they are willing to trust AI, but in a similar vein, AI must figure out whether to trust humans. Yes, the shoe is on the other foot in that regard. This will be especially prominent once we advance AI to achieve artificial general intelligence (AGI). At that point, expectations are that nearly the entire planet will be making use of AGI daily. AGI will have to computationally decide which of the 8 billion people on earth are trustworthy and which are not.

Let’s talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion.

There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI).

AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI.

In fact, it is unknown whether we will ever reach AGI; it might be achieved in decades, or perhaps centuries from now. The AGI attainment dates floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

AGI Should Believe All Humans

Let’s address the matter of AGI and how it should opt to trust humans.

Some believe that since humans have crafted AGI, we should expect that AGI will trust all humans. The idea is that AGI needs to realize humans are at the top of the pecking order. Whatever a human tells AGI to do, by gosh, AGI ought to summarily carry out the order or instruction given.

Period, end of story.

Well, that’s not the end of the story.

I’m sure you can guess why that notion isn’t the best approach to this thorny conundrum. Imagine that an evildoer accesses AGI and tells the AGI to devise a new bioweapon. Under the rule that AGI must trust all humans, the AGI readily proceeds and creates a terrifyingly powerful bioweapon. The evildoer thanks AGI for the handy assistance. Next thing you know, the evildoer unleashes the bioweapon and severely harms humanity.

Not good.

The Spectrum Of Trustworthiness

There is little doubt that a blanket policy of trusting all humans is imprudent. Not only does the evildoer example showcase the flaw of such a precept, but another angle further reinforces doubts about such a ditzy rule.

It goes like this:

  • Do humans trust all other humans?

Absolutely not.

Since AGI is supposed to be on par with human intellect, we shouldn’t expect AGI to veer from the human predilection of not trusting everyone. In a manner perhaps akin to how humans learn to trust or distrust their fellow humans, we need to give AGI some means of doing likewise.

AGI will have to gauge which humans to trust and which ones to distrust.

As a clarification, the act of trusting someone is not necessarily an on/off dichotomy. You can have a great deal of trust in a dear friend, yet at the same time have a sense of distrust toward that same friend in other regards. If your friend tells you that you should invest in a particular stock, perhaps you trust the friend and will do so. On the other hand, if your friend tells you that you can jump off a sheer cliff and be okay, you probably will adjust your sense of trust and not abide by such a risky proposition.

Think of this as a trust spectrum. You trust some people for certain kinds of tasks or advice, while with other people you have a greater sense of distrust than trust on those same matters. Your sense of trust and distrust also changes over time. A good friend might suddenly be dishonest toward you. As such, you quickly adjust the trust level associated with that friend.
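
To make this concrete, here’s a minimal sketch in code of a per-domain, time-varying trust score. Everything in it, the class name, the neutral default of 0.5, and the moving-average update rule, is an illustrative assumption rather than anything a particular AI maker has adopted:

```python
from dataclasses import dataclass, field

@dataclass
class TrustProfile:
    # Scores range from 0.0 (full distrust) to 1.0 (full trust),
    # tracked separately per domain such as "stock_tips" or "safety".
    scores: dict[str, float] = field(default_factory=dict)
    default: float = 0.5  # neutral stance toward an unknown domain

    def trust_in(self, domain: str) -> float:
        return self.scores.get(domain, self.default)

    def observe(self, domain: str, outcome_good: bool, rate: float = 0.1) -> None:
        # Exponential moving average: a dishonest act pulls the score
        # down quickly, mirroring how people adjust after a betrayal.
        target = 1.0 if outcome_good else 0.0
        self.scores[domain] = (1 - rate) * self.trust_in(domain) + rate * target

friend = TrustProfile()
friend.observe("stock_tips", outcome_good=True)      # the advice paid off
friend.observe("cliff_jumping", outcome_good=False)  # a risky proposition
print(friend.trust_in("stock_tips"))     # 0.55: nudged above neutral
print(friend.trust_in("cliff_jumping"))  # 0.45: nudged below neutral
```

The moving average is merely one plausible update rule; the crux is that trust is scoped to a domain and revised with each new observation, rather than being a single on/off flag.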

Humans Decide Who Is AGI Trustworthy

Maybe we should have humans decide who is deemed trustworthy.

A commonly suggested approach is to force AGI to get prior approval from humans about the trustworthiness of other humans. Thus, we don’t let AGI computationally decide whom to trust; trust levels would rest entirely on what designated humans have told AGI about other human beings.

For example, suppose a special committee of humans is appointed as the arbiter of trustworthiness. The committee tells AGI whom to trust and by how much. Each day, it laboriously reviews those using AGI and renders judgments about their respective trustworthiness. This is not a one-and-done task; the committee would need to routinely review and readjust the trust weightings associated with users of AGI.

Such an approach is unwieldy, impractical, and prone to biases in who gets high versus low trust from AGI. The logistics alone are untenable: routinely reviewing the trust merits of perhaps 8 billion users of AGI is far beyond what any committee could feasibly accomplish.

A variation is to allow all humans to rate all other humans, akin to crowdsourced Yelp reviews. Again, this is impractical and carries plenty of other downsides.

AGI Will Need To Ascertain Trust

All in all, it seems pretty clear that the only sensible route is to have AGI make trust judgments about humans. In some computational fashion, AGI will need to determine whom to trust and by how much, including making real-time adjustments to those trust metrics.

That makes the hair stand on end for many AI ethicists. There is a huge danger of AGI opting to unfairly make these trust judgments. For my extensive coverage of these unresolved AI ethics dilemmas, see the link here.

A recent research study sought to identify how contemporary AI makes trust judgments about users. Though today’s AI is not AGI, we can learn a lot about how to proceed toward AGI by understanding the ins and outs of current-era AI. The study is entitled “A Closer Look At How Large Language Models ‘Trust’ Humans: Patterns And Biases” by Valeria Lerman and Yaniv Dover, arXiv, April 22, 2025, and made these salient points (excerpts):

  • “While considerable literature studies how humans trust AI agents, it is much less understood how LLM-based agents develop effective trust in humans.”
  • “Across 43,200 simulated experiments, for five popular language models, across five different scenarios we find that LLM trust development shows an overall similarity to human trust development.”
  • “We build on psychological theories to extract insight into the mechanisms of how this implicit trust of LLM-based agents in humans can be decomposed and predicted and, consequently, how it can be theoretically affected.”
  • “We find that in most, but not all cases, LLM trust is strongly predicted by trustworthiness, and in some cases also biased by age, religion and gender, especially in financial scenarios.”
  • “While there are several definitions and operationalizations of trustworthiness – a significantly large part of the literature defines trustworthiness to consist of three key dimensions: ability (competence), benevolence, and integrity.”

AGI Is To Do As Humans Do

One highlighted lesson from that study is that perhaps the way to proceed is to consider shaping AGI to determine trust in a manner similar to how humans do so. In other words, rather than reinventing the wheel and trying to come up with a new means of assessing trust, let’s just have AGI abide by human means.

As noted, trust can be based on a variety of dimensions, and each of those dimensions can be quantified. AGI could lean into those dimensions and seek to gauge each user accordingly. This would be a continually running assessment that AGI keeps underway at all times.
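
As a hedged illustration, here is how the three dimensions named in the study, namely ability, benevolence, and integrity, might be scored and combined. The 0-to-1 scales and the equal weighting are assumptions for the sketch, not something the study prescribes:

```python
DIMENSIONS = ("ability", "benevolence", "integrity")

def overall_trust(scores: dict[str, float],
                  weights: dict[str, float] | None = None) -> float:
    """Weighted average of per-dimension scores, each in [0, 1]."""
    # Default to equal weights across the three dimensions.
    weights = weights or {d: 1 / len(DIMENSIONS) for d in DIMENSIONS}
    return sum(weights[d] * scores[d] for d in DIMENSIONS)

# Example: a user who seems competent and honest but of unclear intent.
user = {"ability": 0.9, "benevolence": 0.4, "integrity": 0.8}
print(round(overall_trust(user), 2))  # 0.7 with equal weights
```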

Even this human-like approach has challenges.

For example, a new user logs into AGI for the very first time. AGI knows nothing about the user. How can any of the dimensions be adequately gauged when there is a paucity of available information about the person? The same holds for a human judging another human: when you first meet someone, you typically have scant clues about their trustworthiness.

Another potential complication involves someone getting caught in the trust doldrums. Perhaps AGI assesses the person and gives them a quite low trust score. At that juncture, the person is in the basement and might have little hope of climbing out. AGI might only slowly adjust the trust metric upward for that person; meanwhile, they are treated in a principally distrusted manner.
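
One way to soften both problems is sketched below: track trust as evidence counts with a weak neutral prior (easing the cold start) and decay old evidence so that recent behavior dominates (allowing escape from the basement). The prior and decay values are purely illustrative assumptions:

```python
class TrustEstimate:
    def __init__(self, prior_good: float = 1.0, prior_bad: float = 1.0):
        # A weak neutral prior handles the cold-start problem: a
        # brand-new user starts at 0.5 rather than at zero trust.
        self.good = prior_good  # pseudo-count of trustworthy acts
        self.bad = prior_bad    # pseudo-count of untrustworthy acts

    @property
    def score(self) -> float:
        return self.good / (self.good + self.bad)

    def update(self, trustworthy: bool, decay: float = 0.98) -> None:
        # Decaying old evidence keeps recent behavior dominant, so a
        # low score can recover instead of trapping the person forever.
        self.good *= decay
        self.bad *= decay
        if trustworthy:
            self.good += 1.0
        else:
            self.bad += 1.0

new_user = TrustEstimate()
print(new_user.score)  # 0.5: neutral until evidence accumulates
for _ in range(5):
    new_user.update(trustworthy=False)  # a bad stretch drags the score down
for _ in range(10):
    new_user.update(trustworthy=True)   # sustained good behavior recovers it
print(round(new_user.score, 2))  # 0.68: back above neutral
```

Whether an actual AGI would keep score anything like this is, of course, speculative; the sketch simply shows that the cold-start and basement problems are addressable in principle.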

Shoe On The Other Foot

It is a bit of a shock to some that we need to worry about how AGI is going to decide whether to trust humans. Nearly all the attention on the overall topic of AI and trust has focused on getting humans comfortable with trusting AI. There is a sizable base of research on this still-evolving issue; see my in-depth analysis at the link here.

In the case of AGI, deciding whether we should trust AGI is certainly a momentous consideration. If we are going to rely on AGI to aid us in our work and play, that’s a lot of trust being placed in a machine. We already know that present-day AI can produce confabulations, outputs that are made up and not grounded in real facts, commonly referred to as AI hallucinations; see my coverage at the link here.

Suppose AGI does the same. We will have perhaps 8 billion people using AGI, and some percentage of the time, AGI will give responses that are off-kilter. People are likely to assume that AGI is utterly trustworthy and go along with potentially bizarre recommendations it emits. This could include harmful guidance that misleads people into dangerous acts.

It turns out that we need to be concerned about the duality of trust: people deciding whether to trust AGI, alongside AGI devising the trustworthiness of humans. It’s quite a complex equation. We ought to get things settled before we arrive at AGI, lest we get caught in a tangled web of convoluted trust and distrust.

Per the words of Charles H. Green: “It takes two to do the trust tango — the one who risks (the trustor) and the one who is trustworthy (the trustee); each must play their role.” This fully applies to the two-way street of trust between humanity and AGI.

Source: https://www.forbes.com/sites/lanceeliot/2025/12/04/agi-will-largely-distrust-humans-which-is-a-smart-way-for-ai-to-think-about-us/

