The world's moving at a pace that'd make a cheetah look slow. We’re knee-deep in a tidal wave of tech advancements, radical business paradigm shifts, and full-blown cultural transformations. Trying to predict what comes next? That's the ultimate quest, and it takes more than a hunch.
In the trenches of Customer Relationship Management (CRM), there’s one number that now matters more than the rest: the lifetime value of each customer. It's not just important; it's the high-stakes game-changer.
Every business is hunting for that superior edge: better ways to mint value, refine the offer, hook the right customers, and, yes, turn a profit. For years, the Customer Lifetime Value (CLV) metric has been the bedrock, the compass guiding marketing spend and measuring overall success. Understanding the net benefit a company can realistically expect from its customer base isn't just "nice to know"; it's the key to the whole operation.
CLV has cemented itself as a cornerstone strategy because it’s a brilliant two-for-one: it reflects both the customer’s present spend and their future potential.
Forget the spreadsheets and guesswork of the past. In this piece, we’re drilling down into the nuts and bolts of how to leverage machine learning (ML) to forecast future CLV.
To put it simply, CLV represents the total value a customer brings to a company over their entire relationship. This concept has been discussed extensively in customer relationship management literature recently. It’s calculated by multiplying the average transaction value by the number of transactions and the retention time period:
CLV = Average Transaction Value × Number of Transactions × Retention Time Period
Let’s look at an example. Suppose you own a coffee shop where the average customer spends $5 per visit and visits twice a week, on average, for a period of 2 years. Here’s how you would calculate the CLV:
CLV = $5 (average transaction) × 2 (visits per week) × 52 (weeks in a year) × 2 (years) = $1,040
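If it helps to see the same arithmetic as code, here is a minimal Python sketch (the function name is purely illustrative):

```python
def simple_clv(avg_transaction_value, transactions_per_period, n_periods):
    """Back-of-the-envelope CLV: average spend x purchase frequency x retention."""
    return avg_transaction_value * transactions_per_period * n_periods

# Coffee-shop example: $5 per visit, 2 visits/week, 52 weeks/year, 2 years
print(simple_clv(5, 2, 52 * 2))  # 1040
```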
Why it matters: CLV helps you optimize ad spend, align CAC with value, focus sales on high-value segments, improve retention via personalized campaigns, and plan revenue with realistic targets. Using ML to analyze and predict CLV offers more accurate, actionable insights by learning from behavioral data at scale.
Transactions (one row per order/charge/renewal):
| user_id | ts | amount | currency | channel | sku | country | is_refund | variable_cost |
|----|----|----|----|----|----|----|----|----|
Users:
| user_id | signup_ts | country | device | acquisition_source | … |
|----|----|----|----|----|----|
Events (optional):
| user_id | ts | event_name | metadata_json |
|----|----|----|----|
We choose a prediction cutoff t₀ and horizon H (e.g., 30/90/180/365 days). All features must be computed using data up to and including t₀; labels come strictly after t₀ through t₀+H.
```sql
-- Parameters (set in your job): t0, horizon_days
WITH tx AS (
  SELECT
    user_id,
    ts,
    CASE WHEN is_refund THEN -amount ELSE amount END AS net_amount
  FROM transactions
),
label AS (
  SELECT
    user_id,
    SUM(net_amount) AS y_clv_h
  FROM tx
  WHERE ts > TIMESTAMP(:t0)
    AND ts <= TIMESTAMP_ADD(TIMESTAMP(:t0), INTERVAL :horizon_days DAY)
  GROUP BY user_id
),
history AS (
  SELECT
    user_id,
    COUNT(*) AS hist_txn_cnt,
    SUM(net_amount) AS hist_revenue,
    AVG(net_amount) AS hist_aov,
    MAX(ts) AS last_txn_ts,
    MIN(ts) AS first_txn_ts
  FROM tx
  WHERE ts <= TIMESTAMP(:t0)
  GROUP BY user_id
)
SELECT
  u.user_id,
  u.country,
  u.device,
  u.acquisition_source,
  h.hist_txn_cnt,
  h.hist_revenue,
  h.hist_aov,
  h.last_txn_ts,
  TIMESTAMP_DIFF(TIMESTAMP(:t0), h.last_txn_ts, DAY) AS recency_days,
  TIMESTAMP_DIFF(TIMESTAMP(:t0), h.first_txn_ts, DAY) AS tenure_days,
  COALESCE(l.y_clv_h, 0.0) AS label_y,
  TIMESTAMP(:t0) AS t0
FROM users u
LEFT JOIN history h USING (user_id)
LEFT JOIN label l USING (user_id);
```
```python
import numpy as np
import pandas as pd

# df has columns from the SQL above

def validate_leakage(df, t0_col="t0", last_txn_col="last_txn_ts"):
    # Users with no history (NaT last_txn_ts from the LEFT JOIN) are excluded from the check
    has_history = df[last_txn_col].notna()
    assert (df.loc[has_history, last_txn_col] <= df.loc[has_history, t0_col]).all(), \
        "Leakage: found events after t0 in features"

def add_basic_features(df):
    df["rfm_recency"] = df["recency_days"]
    df["rfm_frequency"] = df["hist_txn_cnt"].fillna(0)
    df["rfm_monetary"] = df["hist_aov"].fillna(0).clip(lower=0)
    # Approximate monthly ARPU over the user's tenure (at least one month)
    df["arpu"] = (df["hist_revenue"] / (df["tenure_days"] / 30).clip(lower=1)).fillna(0)
    df["log_hist_revenue"] = np.log1p(df["hist_revenue"].clip(lower=0))
    return df
```
Now let’s explore two ways to predict CLV using machine learning: by cohorts and by users.
The fundamental difference between these approaches is that in the first, we form cohorts of users based on a certain characteristic (e.g., users who registered on the same day), whereas in the second we treat each user individually, with no such grouping. The advantage of the cohort approach is greater prediction accuracy, but it comes with a downside: we must fix, in advance, the characteristic by which users are grouped into cohorts. With the per-user approach it is generally harder to predict each user’s CLV accurately; however, it allows us to slice the predicted CLV by any characteristic we like (e.g., the user’s country of origin, registration day, or the advertisement they clicked on).
It is also worth mentioning that CLV predictions are rarely made without a time constraint. A user can experience several “lifetimes” throughout their lifecycle, so CLV is usually considered over a specific period, such as 30, 90, or 365 days.
One of the most common ways to form user cohorts is by grouping them based on their registration day. This allows us to frame the task of predicting CLV as a time series prediction task. Essentially, our time series will represent the CLV of users over past periods, and the task will be to predict (extend) this time series into the future. This can be framed as a time-series task and extended to hierarchical models (e.g., country → region). Libraries like Nixtla offer advanced reconciliation and hierarchical tools.
```python
# df_tx: transactions with ['user_id', 'ts', 'amount', 'is_refund', 'signup_day']
import numpy as np
import pandas as pd

# Net revenue per transaction (refunds counted as negative amounts)
tx = df_tx.assign(net_amount=lambda x: np.where(x.is_refund, -x.amount, x.amount))

# Daily revenue per signup-day cohort
cohort_daily = (
    tx.groupby([pd.Grouper(key="ts", freq="D"), "signup_day"]).net_amount.sum()
      .rename("cohort_gmv")
      .reset_index()
)
```
Exponential Smoothing (statsmodels) as a strong baseline:
```python
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def forecast_cohort(series, steps=90):
    # series: pandas Series indexed by day for one cohort
    model = ExponentialSmoothing(series, trend="add", seasonal="add", seasonal_periods=7)
    fit = model.fit(optimized=True, use_brute=True)
    return fit.forecast(steps)
```
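A minimal usage sketch, assuming the `cohort_daily` frame built above: forecast each signup-day cohort over the horizon and sum the daily forecast to get that cohort’s predicted CLV.

```python
# Forecast each signup-day cohort 90 days ahead and sum into a cohort-level CLV figure
horizon = 90
cohort_clv = {}
for signup_day, grp in cohort_daily.groupby("signup_day"):
    series = grp.set_index("ts")["cohort_gmv"].asfreq("D", fill_value=0.0)
    if len(series) < 14:  # need at least two weekly cycles for the seasonal model
        continue
    cohort_clv[signup_day] = forecast_cohort(series, steps=horizon).clip(lower=0).sum()
```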
What is it? The “Buy ‘Til You Die” family models two hidden processes for each customer: (1) how often they make repeat purchases while they are alive and (2) when they drop out (churn). BG/NBD gives the expected number of future transactions and the probability a customer is still alive at any future time. Pairing it with Gamma–Gamma gives the expected spend per transaction, so multiplying the two yields a CLV forecast over a horizon.
BG/NBD in plain English
Pareto/NBD vs BG/NBD — BG/NBD assumes churn can only occur immediately after a purchase (simple and fast), while Pareto/NBD allows churn at any time (often fits long gaps better but is heavier to estimate).
Gamma–Gamma (monetary value)
Assumes each customer has a latent average order value; given that value, their observed order amounts are Gamma distributed, with customer‑to‑customer variation captured by a Gamma prior (hence Gamma–Gamma). It further assumes spend size is independent of purchase frequency conditional on the customer; if that is badly violated, prefer a supervised model. This approach also requires frequency > 0 (at least two purchases) to estimate an average order value; otherwise backfill with a cohort AOV or a supervised prediction.
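A quick way to sanity-check that independence assumption is to look at the correlation between purchase frequency and average order value among repeat buyers. A minimal pandas sketch, assuming the `df_tx` transactions frame used earlier (a correlation near zero supports using Gamma–Gamma):

```python
# Per-user purchase count and average order value (refunds excluded)
per_user = (
    df_tx.loc[~df_tx.is_refund]
         .groupby("user_id")["amount"]
         .agg(frequency="count", avg_order_value="mean")
)
# Gamma-Gamma assumes these are (roughly) uncorrelated among repeat buyers
repeaters = per_user[per_user["frequency"] > 1]
print(repeaters["frequency"].corr(repeaters["avg_order_value"]))
```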
Where it shines / watch‑outs
Models repeat purchases & churn, and spend given a purchase. Good with sparse data and early lifecycles.
```python
# pip install lifetimes
from lifetimes import BetaGeoFitter, GammaGammaFitter
from lifetimes.utils import summary_data_from_transaction_data

# Build the RFM summary (frequency, recency, T, monetary_value) up to the cutoff t0
summary = summary_data_from_transaction_data(
    transactions=df_tx,
    customer_id_col='user_id',
    datetime_col='ts',
    monetary_value_col='amount',
    observation_period_end=t0,  # pandas Timestamp
)

# BG/NBD: expected repeat purchases and probability of being "alive"
bgf = BetaGeoFitter(penalizer_coef=0.001).fit(
    summary['frequency'], summary['recency'], summary['T']
)

# Gamma-Gamma: expected spend per transaction, fitted on returning customers only
returning = summary[summary['frequency'] > 0]
ggf = GammaGammaFitter(penalizer_coef=0.001).fit(
    returning['frequency'], returning['monetary_value']
)

H = 180  # horizon in days
summary["pred_txn_H"] = bgf.conditional_expected_number_of_purchases_up_to_time(
    H, summary['frequency'], summary['recency'], summary['T']
)
summary["pred_spend_given_txn"] = ggf.conditional_expected_average_profit(
    summary['frequency'], summary['monetary_value']
)
summary["clv_H"] = summary["pred_txn_H"] * summary["pred_spend_given_txn"]
```
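For the Pareto/NBD comparison mentioned above, `lifetimes` exposes a fitter with essentially the same interface, so you can fit it on the same `summary` frame and compare forecasts (a sketch; the `_pareto` column name is just illustrative):

```python
from lifetimes import ParetoNBDFitter

# Pareto/NBD allows churn at any time, not only immediately after a purchase
pnbd = ParetoNBDFitter(penalizer_coef=0.001).fit(
    summary['frequency'], summary['recency'], summary['T']
)
summary["pred_txn_H_pareto"] = pnbd.conditional_expected_number_of_purchases_up_to_time(
    H, summary['frequency'], summary['recency'], summary['T']
)
```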
When predicting by users, we can build a model that forecasts each customer’s CLV using signals that describe the individual—purchases, on‑site behaviour (where available), pre‑signup exposure such as the ad or campaign that led to registration, and socio‑demographic attributes. Cohort‑level information like registration day can be folded in as additional descriptors. If we frame CLV as a regression target, any supervised regressor applies; in practice, gradient‑boosted trees (XGBoost, LightGBM, CatBoost) are reliable baselines for tabular data. After establishing this baseline, you can explore richer methods. A core limitation of standard tabular models is that they do not natively model sequences even though customer data often arrives as ordered events—purchase histories, in‑app navigation paths, and marketing‑touch sequences before registration. The classic workaround compresses sequences into aggregates (averages, dispersions, inter‑purchase intervals), but this discards temporal dynamics.
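To make the "compress the sequence into aggregates" workaround concrete, here is a small sketch that turns each user's purchase history into inter-purchase-interval features (assuming the `df_tx` frame used earlier; the feature names are illustrative). These can then be joined onto the per-user feature table before training:

```python
# Mean and dispersion of days between consecutive purchases per user
tx_sorted = df_tx.sort_values(["user_id", "ts"])
tx_sorted["ipi_days"] = tx_sorted.groupby("user_id")["ts"].diff().dt.days

seq_feats = (
    tx_sorted.groupby("user_id")["ipi_days"]
             .agg(ipi_mean="mean", ipi_std="std")
             .fillna(0)  # single-purchase users have no intervals
)
```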
```python
# pip install lightgbm
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import GroupKFold
from sklearn.metrics import mean_absolute_error

FEATURES = [
    "rfm_recency", "rfm_frequency", "rfm_monetary", "arpu",
    "tenure_days", "log_hist_revenue", "country", "device", "acquisition_source",
]

df = add_basic_features(df).fillna(0)
for c in ["country", "device", "acquisition_source"]:
    df[c] = df[c].astype("category")

X = df[FEATURES]
y = df["label_y"]

# Group by signup month or a cohort key to avoid temporal leakage
gkf = GroupKFold(n_splits=5)
groups = df["signup_month"]  # precomputed elsewhere

models, oof = [], np.zeros(len(df))
params = dict(
    objective="mae", metric="mae", learning_rate=0.05, num_leaves=64,
    min_data_in_leaf=200, feature_fraction=0.8, bagging_fraction=0.8, bagging_freq=1,
)

for tr, va in gkf.split(X, y, groups):
    dtr = lgb.Dataset(X.iloc[tr], label=y.iloc[tr])
    dva = lgb.Dataset(X.iloc[va], label=y.iloc[va])
    model = lgb.train(
        params, dtr, valid_sets=[dtr, dva], num_boost_round=3000,
        callbacks=[lgb.early_stopping(200), lgb.log_evaluation(200)],
    )
    oof[va] = model.predict(X.iloc[va])
    models.append(model)

print("OOF MAE:", mean_absolute_error(y, oof))
```
You’re probably wondering: Why MAE here, and how to choose a loss? We set objective="mae" (L1) and track metric="mae" because CLV labels are typically heavy‑tailed and outlier‑prone; L1 is robust to extreme values and aligns with WAPE—the business metric many teams report. If your objective is to punish large misses more strongly for high‑value customers, use L2 (MSE/RMSE). If planning needs P50/P90 scenarios for budgets and risk, use quantile loss (objective="quantile", alpha=0.5/0.9). For dollar amounts with many zeros and a continuous positive tail (insurance‑style severity), consider Tweedie (objective="tweedie", tweedie_variance_power≈1.2–1.8). For forecasting counts (e.g., number of purchases) use Poisson. In short, pick the loss that matches how decisions are made—targets, risk tolerance, and whether you optimize absolute error, tail risk, or ranking.
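As a quick illustration, here is how those alternative objectives look as LightGBM parameter dictionaries (a sketch; the specific values are placeholders to tune, not recommendations):

```python
# Quantile loss for P90 planning scenarios
params_p90 = dict(objective="quantile", alpha=0.9, metric="quantile")

# Tweedie for zero-inflated, right-skewed revenue
params_tweedie = dict(objective="tweedie", tweedie_variance_power=1.3, metric="tweedie")

# Poisson for purchase-count targets (e.g., number of orders in the horizon)
params_poisson = dict(objective="poisson", metric="poisson")
```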
The rise of Large Language Models (LLMs) is transforming the Customer Lifetime Value (CLV) prediction process by enhancing traditional models and enabling new data-driven insights.
LLMs impact CLV prediction primarily through their ability to process and generate nuanced text data, which was previously challenging to incorporate effectively.
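One concrete pattern (a minimal sketch, assuming you hold per-user free-text signals such as support tickets or reviews in a hypothetical `df_text` frame with `user_id` and `text` columns) is to embed the text with a pretrained transformer and append the embedding dimensions as extra features for the tabular CLV regressor above:

```python
# pip install sentence-transformers
import pandas as pd
from sentence_transformers import SentenceTransformer

# df_text: hypothetical frame with ['user_id', 'text'] (e.g., concatenated support tickets)
encoder = SentenceTransformer("all-MiniLM-L6-v2")
emb = encoder.encode(df_text["text"].tolist(), show_progress_bar=False)

emb_cols = [f"txt_emb_{i}" for i in range(emb.shape[1])]
text_feats = pd.DataFrame(emb, columns=emb_cols, index=df_text["user_id"])

# Join onto the per-user feature frame before training the LightGBM model above
df = df.merge(text_feats, left_on="user_id", right_index=True, how="left").fillna(0)
```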
So, what's the takeaway? Implementing predictive CLV models isn't just a tech upgrade—it’s handing your business the ultimate cheat code for understanding customer potential.
By hooking into data analytics and predictive algorithms, you don't just guess; you know who your most valuable customers are. This power lets you hyper-personalize customer experiences, radically boost retention efforts, and tailor marketing campaigns with sniper-like precision. The result? You allocate resources more efficiently and maximize your ROI.
But it gets better. Predictive CLV doesn't just impact marketing. It’s a sustainable growth engine. It delivers the insights needed for optimized pricing strategies, allows for informed financial planning, and powers smarter, strategic decision-making across the board.


