
You Must Address These 4 Concerns To Deploy Predictive AI


Predictive AI routinely fails to deploy, so data scientists are spearheading a movement to focus on its business value. But stakeholders need a better understanding.

Eric Siegel (with Meta AI)

Most predictive AI projects fail to launch into production. The number crunching is sound and the data scientist delivers a viable machine learning model – but stakeholder objections sadly preclude deployment.

To better meet stakeholders where they are, ML professionals are spearheading a movement to focus on predictive AI’s business value. Rather than sticking with the traditional technical metrics that report on ML model performance, a proactive minority of data scientists bust out of their nerdy cubicle and deliver estimates of ML’s profit. By reporting on the potential earnings, these quants stand a much better chance of selling model deployment to their business-side counterparts.

But this new move from ML evaluation to ML valuation will face certain objections until the practice is better understood and more widely adopted. Here are four common stakeholder concerns about ML valuation and how to address them.

1) How Can We Trust Profit Forecasts That Rest On Assumptions?

A profit curve provides a more complete view of an ML model’s worth than any single number:

A profit curve for a targeted marketing campaign with a machine learning model. As more customers are contacted, profit rises and then falls back down. (Chart: Eric Siegel)

However, a profit curve alone doesn’t solve the business problem of planning and selling deployment. Why? Because it’s usually based on certain business assumptions – such as the costs of false positives and false negatives – and uncertainty in those assumptions can call the entire curve into question.

The solution to this dilemma is to make charts interactive. By moving sliders, the user can vary the settings for such unresolved factors and see how this changes the curve’s shape.

This interaction provides a much-needed intuition, a “feel” for how much these factors matter when making deployment decisions. As the shape of each chart responsively morphs, the user gets to visualize the impact of each factor. In many cases, changes to the curve remain within the range of acceptability, so deployment decisions can be made with confidence. In other cases, a curve may change drastically or detrimentally, signaling that the range of uncertainty is untenable. This means that ranges of uncertainty would need to be narrowed before gaining the confidence in model value needed to greenlight deployment.

This practice empowers you to valuate models despite uncertainties. You may not have direct knowledge of, for example, the monetary loss for each false positive, because that figure is known only to other business units, or because pinning it down would require new investigations or experimental discovery. By interactively altering the value of such variables, you gain instant insight into how much the uncertainty matters for driving deployment decisions. In this way, you can narrow that range, determining the limits within which the values would have to land for model deployment to be valuable. By viewing how the shapes of the curves morph and how other pertinent metrics change, you gain critical intuition as to how big a difference such factors make – whether a deployment plan may be copacetic nonetheless, or whether some factors are “too uncertain” to move forward without additional efforts to narrow the range of uncertainty.
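As a minimal sketch of the idea, a profit curve can be recomputed under several settings of an uncertain business factor, mimicking what dragging an interactive slider would show. The data is synthetic and the contact cost and revenue figures are illustrative assumptions, not values from the article:

```python
import numpy as np

def profit_curve(scores, responded, contact_cost, revenue_per_response):
    # Rank customers by descending model score, then accumulate profit
    # as each successive customer is contacted.
    order = np.argsort(-scores)
    hits = responded[order]
    gains = hits * revenue_per_response - contact_cost
    return np.cumsum(gains)

# Synthetic data: 1,000 customers whose response odds loosely track the score.
rng = np.random.default_rng(0)
scores = rng.random(1000)
responded = (rng.random(1000) < 0.05 + 0.25 * scores).astype(int)

# Recompute the curve for several values of an uncertain factor
# (the revenue per response) -- the "slider" positions.
for revenue in (20, 40, 60):
    curve = profit_curve(scores, responded, contact_cost=5,
                         revenue_per_response=revenue)
    k = int(np.argmax(curve))
    print(f"revenue/response ${revenue}: peak profit ${curve[k]} "
          f"after contacting {k + 1} customers")
```

In a notebook, the same recomputation could be wired to actual sliders (for example with a widget library), but even this loop shows how much the curve's peak and optimal contact count shift as the assumption varies.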

Even if you already hold fairly ideal visibility into the business factors, some of them will inevitably still be subject to potential change or uncertainty – there are always business variables that are subject to such “wiggle room.”

2) Does ML Valuation Perform An Audit On My Predictive AI Project?

Moving from standard ML evaluation to ML valuation does not constitute an audit in the usual sense of the word. In fact, doing so usually strengthens the perception of an ML model, rather than weakening it. The main outcome and purpose is to empower you to maximize deployed value and to demonstrate that potential value to your customers, colleagues and other decision makers. Stakeholders often perceive ML valuation as a validation of business value that they already intuitively believed was there.

This drives deployment. A value-oriented lens on model performance provides vital evidence to help you convince others and ensure that your model gets deployed – and that it gets deployed more optimally.

At the same time, certain “audits” help rather than hurt. Audits can be oriented toward unearthing, proving and communicating potential value – placing a spotlight on an initiative’s purpose and value so that the value will be realized. Moreover, in some cases assessing the potential business value might help you by revealing an addressable weakness in a model.

3) Isn’t Tracking Model Performance After Deployment Sufficient?

Most predictive AI projects plan to assess the business results only after the ML model is already deployed. Accordingly, most fail to deploy. This deploy-first approach to evaluation fails for a couple of reasons. The only way to pursue business value during model development is to appraise the model’s business value along the way. And the only way to make prudent business decisions as to whether to deploy, which model to deploy and precisely how to deploy is to drive those decisions according to business value. Moreover, without an estimation of value, the model will likely never get deployed, so the project won’t ever reach any post-deployment evaluation at all.

Explicitly planning for value increases value. It is possible that a model only evaluated technically could turn out to realize value if deployed – but that value would have been left unnecessarily to luck, since the process wouldn’t have explicitly optimized for value. What’s worse, the value would typically be nil, since most models that aren’t valuated aren’t deployed at all. Technical performance fails to compel stakeholders.

ML valuation as a practice also maintains ongoing value after deployment. By monitoring performance in business terms, changes to the model or to its deployment particulars (such as the decision boundary) can be driven to maximize business value. ML projects must be continually revisited and potentially redeployed, so model valuation is a must not only pre-deployment, but also “pre-redeployment.”

4) How Do We Navigate Tradeoffs Between Competing KPIs?

Money is never the only metric. Every predictive AI project must navigate tradeoffs between competing KPIs and strike a balance between them. The best way to do so is to visualize the tradeoff options.

For example, in addition to the bottom-line money saved with fraud detection, there’s another important consideration: the sheer number of times a legitimate transaction is disrupted – aka the number of false positives. A medium-sized bank may stand to win $26 million by placing the decision boundary where the savings curve peaks, but as they say, money isn’t everything. The direct cost of those disruptions is already factored into that bottom-line savings, but disruptions can also incur intangible or longer-term costs that haven’t been accounted for – for example, by contributing to the bank’s reputation for inconveniencing customers.

A small sacrifice to the monetary bottom line can sometimes greatly reduce transactional disruptions. In one case, false positives are reduced by 59% with only a 5% sacrifice in the bottom-line money saved – while also blocking 50% fewer transactions overall, which means cutting the disruption of commerce in half. A similar example for misinformation detection shows more misinformation prevented at the cost of only a small sacrifice to the bottom line.
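This kind of tradeoff sweep can be sketched as follows. The fraud rate, dollar figures and score distribution below are all illustrative assumptions (not the bank's actual numbers): the sweep finds the savings-maximizing threshold, then raises it to see how many disruptions can be avoided while keeping savings within 5% of the peak:

```python
import numpy as np

# Hypothetical fraud-detection scenario with synthetic scores.
rng = np.random.default_rng(1)
n = 50_000
is_fraud = rng.random(n) < 0.02
scores = np.clip(rng.normal(0.6 * is_fraud + 0.2, 0.1), 0.0, 1.0)

LOSS_AVOIDED = 500   # assumed loss avoided per fraudulent transaction caught
FP_COST = 10         # assumed direct cost per disrupted legitimate transaction

def kpis(threshold):
    # Savings and disruption count when blocking all transactions
    # scored at or above the threshold.
    flagged = scores >= threshold
    caught = int(np.sum(flagged & is_fraud))
    false_pos = int(np.sum(flagged & ~is_fraud))
    return caught * LOSS_AVOIDED - false_pos * FP_COST, false_pos

thresholds = np.arange(0.10, 0.95, 0.01)
best_t = max(thresholds, key=lambda t: kpis(t)[0])
peak_savings, peak_fps = kpis(best_t)

# Raise the threshold, trading a little savings for fewer disruptions,
# until savings dip more than 5% below the peak.
for t in thresholds[thresholds > best_t]:
    savings, fps = kpis(t)
    if savings < 0.95 * peak_savings:
        break
    print(f"threshold {t:.2f}: savings ${savings:,}, false positives {fps}")
```

Visualizing the two KPIs side by side across thresholds, rather than printing them, is what lets stakeholders literally see the balance they are striking.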

Get those models deployed! Addressing these four concerns will go a long way toward establishing ML valuation as a much-needed, widely adopted best practice – thereby greatly improving predictive AI’s deployment track record.

Source: https://www.forbes.com/sites/ericsiegel/2025/11/17/you-must-address-these-4-concerns-to-deploy-predictive-ai/

