THE PHANTOM UNIT
Your Unit Economics Are Fine. Your Unit Is Wrong.
Here’s an uncomfortable truth about African venture performance. Most startups collapse while measuring economics against units that don’t exist.
The user who downloaded the app but will never pay. The farmer who took the subsidised input but won’t return at market price. The transaction that happened once, under promotion, and will never repeat. These aren’t edge cases. In market after market, they form the denominator—the foundation on which entire fundraising decks are built.
When investors diagnose “unit economics problems” in African startups, they’re often being imprecise. The deeper failure is unit identification. The economics look broken because the unit itself was misspecified from the start. You can’t optimise your way out of measuring the wrong thing.
This explains a pattern that frustrates founders and confuses observers: companies with apparently reasonable metrics that collapse when growth capital demands proof of repeatability. The Series A cliff marks the moment when phantom units get stress-tested—and vanish.
This pattern surfaces repeatedly in diligence, in portfolio reviews, in post-mortems. The metrics looked reasonable. The unit was wrong.
The consequences run deeper than individual startup failure. Unit misidentification creates a systematic valuation trap that distorts capital allocation across the ecosystem.
The error propagates predictably. A founder builds a model using population-derived TAM: 200 million Nigerians, 5% smartphone penetration in the target segment, 2% conversion assumption. The math produces a $400 million addressable market. Investors see the number, discount it for execution risk, and underwrite at a $10 million pre-money valuation.
The denominator was wrong. Those 200 million people aren’t potential transacting users. They’re demographic entries. The subset who will actually pay, repeatedly, at a margin-positive price point, might be 2 million. Or 200,000. The founder doesn’t know, because the model never asked.
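The sensitivity of the headline number to the denominator is easy to show. A minimal sketch, using only the figures from the example above (the per-unit value is simply the deck's $400 million spread over its claimed 200 million denominator; the smaller denominators are the bankable subsets suggested above):

```python
# Pitch-deck TAM from the example above: $400M claimed over 200M demographic entries.
DECK_TAM = 400_000_000
DECK_DENOMINATOR = 200_000_000

# Implied value per "unit" the deck is actually claiming.
value_per_unit = DECK_TAM / DECK_DENOMINATOR  # $2.00

# Re-run the same model over plausible bankable denominators.
for bankable in (2_000_000, 200_000):
    print(f"{bankable:>12,} bankable units -> ${bankable * value_per_unit:,.0f} TAM")
```

The model structure is untouched; only the denominator is corrected, and the addressable market shrinks by two to three orders of magnitude, from $400 million to $4 million or $400,000.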
Eighteen months later, the company has burned through its seed round acquiring users who looked like the target demographic but weren’t bankable demand. Cohort retention is dismal—not because the product failed, but because the “cohort” was never a real unit. The Series A doesn’t happen. The company either dies or zombifies into grant dependency.
This is where the so-called “African discount” originates. Foreign investors demanding 30-40% IRR hurdles for African deals versus 20-25% for equivalent Southeast Asian companies aren’t applying prejudice. They’re pricing unit identification risk. They’ve seen enough pitch decks built on phantom denominators.
The tragedy: this discount hits founders who have correctly identified their units, because the market can’t easily distinguish rigorous unit identification from population-based fantasy until deep diligence. The penalty is collective.
For LPs, there’s another layer of opacity. When African fund managers report portfolio metrics, how much is built on verified bankable demand versus inherited unit misidentification? The question rarely surfaces at the GP-LP interface, but it explains performance variance that sector and timing alone cannot explain.
Unit misidentification degrades pricing efficiency, misallocates capital, and generates the data that perpetuates scepticism about African venture as an asset class.
The solution sits upstream of unit economics analysis: a diagnostic layer for determining whether the unit you’re measuring is real.
I call this the Unit Identification Protocol—four tests, each targeting a failure mode endemic to African market conditions.
Test 1: Existence
Does this unit actually transact, or does it only exist demographically?
Population-based TAM analysis assumes that people fitting a demographic profile are potential customers. Demographic existence doesn’t equal economic existence. A 28-year-old in Lagos with a smartphone and a bank account exists demographically. Whether she’ll transact on your platform—at your price point, for your use case, without subsidy—requires separate proof.
The population fallacy seduces because the numbers are large and defensible. Nigeria has 220 million people; that is a fact. “Addressable market” demands an additional step: proving that some subset will convert into transacting units under commercial conditions.
When a pitch deck moves directly from census data to market size without intermediate demand validation, existence hasn’t been tested.
Test 2: Repeatability
Will this unit return without equivalent re-acquisition cost?
Many ventures generate real transactions but can’t demonstrate repeatability. The user transacts once, often under promotional conditions, and disappears. This signals that the “unit” was a one-time opportunist responding to a subsidy, never a repeating customer.
Consumer fintech and e-commerce suffer particularly here. First-transaction incentives—cashback, zero fees, referral bonuses—generate impressive MAU figures that collapse when the incentives are withdrawn. Per-transaction economics may look reasonable. But if each transaction requires fresh acquisition spend, you’re buying revenue rather than building a business.
In African markets, where price sensitivity runs high and switching costs stay low, trial behaviour predicts retention poorly. When cohort curves flatten only because new acquisition masks old churn, repeatability remains unvalidated.
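One way to operationalise the test: strip out subsidised transactions first, then count units with a second full-price purchase. A minimal sketch over a hypothetical transaction log (the records and field names are illustrative assumptions, not a real schema):

```python
from collections import Counter

# Hypothetical log: (user_id, was_promotional)
transactions = [
    ("u1", True), ("u1", False), ("u1", False),  # repeats at full price
    ("u2", True),                                # promo-only opportunist
    ("u3", False), ("u3", False),                # organic repeater
    ("u4", True), ("u4", True),                  # "active", but only under subsidy
]

# Count only unsubsidised transactions per user.
full_price = Counter(u for u, promo in transactions if not promo)

# A repeating unit has at least two full-price transactions.
repeating = {u for u, n in full_price.items() if n >= 2}
print(sorted(repeating))
```

On this toy log, four "monthly active users" reduce to two repeating units; the promo-only accounts drop out entirely once the subsidy filter is applied.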
Test 3: Monetisability
Can you capture margin from this unit, or just gross transaction value?
Some units exist and repeat but yield no viable margin. You facilitate $10 million in GMV, but actual revenue is $200,000—a 2% take rate that must cover acquisition, operations, and platform costs. The unit is real, but it isn’t yours. You’re a pass-through.
Monetisability failures stem from competitive pressure (platforms subsidising take rates), value chain positioning (capturing volume without margin), or price elasticity (any take-rate increase triggers defection).
Agritech is particularly exposed. Platforms facilitate impressive volumes of input distribution or produce aggregation, but margin capture happens elsewhere—at the input manufacturer or export buyer level. The platform’s “unit” is high-volume, low-margin, and often negative after operational costs.
When revenue scales linearly with volume but margin percentage stagnates or worsens, monetisability is compromised.
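The pass-through trap in the $10 million GMV example above is a one-line calculation; what matters is the residual after the cost stack. A sketch with the GMV and revenue figures from above and an illustrative, assumed cost breakdown:

```python
gmv = 10_000_000
revenue = 200_000
take_rate = revenue / gmv  # 0.02 -> the 2% take rate above

# Illustrative cost stack as a share of GMV (assumed figures, not benchmarks).
acquisition = 0.008
operations = 0.007
platform = 0.006

net_margin = take_rate - (acquisition + operations + platform)
print(f"take rate {take_rate:.1%}, net margin {net_margin:.2%}")
```

On these assumptions the platform is roughly $10,000 underwater per $10 million facilitated, despite a take rate that looks defensible in a deck.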
Test 4: Measurability
Can you reliably track this unit through your system?
If you can’t measure a unit—track its behaviour across time, attribute transactions, distinguish it from other units—you can’t optimise around it. In African markets, measurability proves harder than it appears.
Informal economy dynamics mean users transact partially on-platform and partially off. SIM-swap behaviour destabilises phone-number-based identity. Cash-in/cash-out patterns cause digital records to undercount actual activity. Agent networks introduce intermediation that obscures end-user behaviour.
Your analytics show 50,000 monthly active users, but 15,000 are duplicate identities, 10,000 are agents transacting on behalf of multiple people, and 5,000 are churned users appearing “active” due to measurement lag. Your real cohort is 20,000—and decisions rest on the wrong number.
When on-the-ground transaction patterns diverge from platform analytics, measurability is broken.
Integrating UIP with CAMEL
These four tests form the foundation of venture resilience. Each CAMEL element depends on correct unit identification.
Capital Efficiency calculations against inflated denominators produce fictional ratios. Your CAC looks artificially low; your efficiency metrics mislead.
Adaptability requires knowing which units survive shocks. Pivoting to a different phantom unit isn’t adaptation—it shifts the delusion.
Margin Architecture must denominate in retained revenue from real transactions. Gross margin on non-repeating units isn’t margin. It’s one-time arbitrage.
Efficiency metrics like LTV/CAC collapse when “lifetime” is misidentified. Projected value from users who won’t retain is projection error.
Liquidity Runway should measure burn rate per real unit acquired and retained. Phantom units consume cash without generating the replenishment that extends runway.
A CAMEL-grade venture starts with unit identification. Resilience architecture only functions on a verified foundation.
B2B Calibration
The protocol applies equally to B2B ventures, though the failure modes shift weight. Existence failures are rarer—enterprise customers who sign contracts generally exist commercially. But repeatability and monetisability failures are common. The pilot customer who never converts to paid. The enterprise contract with 90-day termination clauses that functionally behaves like a rolling monthly agreement. The large-logo partnership that delivers volume without margin because procurement squeezed pricing to cost-plus. B2B founders often assume that landing a contract validates the unit. It doesn’t. A contract validates existence. Repeatability and monetisability require separate proof: renewal rates, expansion revenue, gross margin after implementation costs.
Addressable market in B2B demands its own rigour. The relevant denominator isn’t “number of companies in the sector.” It’s the subset that meets threshold criteria: budget authority for your price point, operational infrastructure to implement your solution, procurement cycles short enough to fit your runway, and payment reliability sufficient to book revenue confidently. In African enterprise markets, each filter cuts hard. Many companies that should buy your product lack the budget discretion. Others have it but operate procurement timelines measured in years. Others pay in 180-day cycles that destroy your working capital. The addressable unit in B2B is a company that passes all four filters—not a logo on a target list. Founders who size their market by counting companies in a sector, then applying a conversion percentage, repeat the same error as B2C founders who start with population. The unit must be qualified before it’s counted.
The abstraction becomes concrete with numbers. Two fintechs—call them Phantom Co and Real Co—operating in the same market, reporting similar seed-stage metrics.
Phantom Co’s pitch:
100,000 registered users
40,000 MAU
$2 million monthly transaction volume
2.5% take rate → $50,000 monthly revenue
$600,000 annualised
$8 CAC → $800,000 total acquisition spend
Implied LTV/CAC of 3.2x
Looks fundable. A $4-5 million raise at $15-20 million post-money wouldn’t raise eyebrows.
Run the protocol.
Existence: 70,000 users registered during a zero-fee campaign. Only 30,000 ever transacted at standard pricing.
Repeatability: Of 40,000 MAU, 25,000 made just one transaction in the past 30 days. 18,000 of those have never made a second. True repeating users: roughly 12,000.
Monetisability: After payment processing, fraud, and support costs, net margin is 0.8%. Monthly profit contribution: $16,000.
Measurability: 8,000-10,000 MAU are duplicates or agent accounts. Verified unique repeating users: 7,000-8,000.
Phantom Co’s real position:
7,500 verified repeating users
$192,000 annualised margin
Effective CAC per real unit: $107
Actual LTV at realistic retention: $80-100
LTV/CAC: 0.8x
Underwater. Every unit acquired destroys value.
Real Co’s pitch (same market):
25,000 registered users
15,000 MAU
$1.2 million monthly volume
3% take rate → $36,000 monthly revenue
$432,000 annualised
Smaller numbers. Less impressive at first glance.
Protocol results:
Existence: 22,000 of 25,000 transacted at commercial terms.
Repeatability: 11,000 have transacted three or more times. 90-day retention: 65%.
Monetisability: Net margin after costs: 1.8%. Monthly profit contribution: $21,600.
Measurability: Verified account system with transaction-level identity controls. Duplicate rate under 5%. This infrastructure costs more upfront—roughly 8-10% above a frictionless onboarding flow. But the trade-off is margin-protective: every dollar spent on verification eliminates downstream waste on phantom units that consume support, distort analytics, and inflate CAC calculations. Real Co’s higher upfront cost per account is precisely why its LTV/CAC holds.
Real Co’s actual position:
11,000 verified repeating users
$259,000 annualised margin
Effective CAC: $16.40 per verified unit
LTV at 65% retention: ~$140
LTV/CAC: 8.5x
Same market. Superficially weaker top-line. Real Co builds equity value with every unit acquired. Phantom Co burns capital.
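The two positions reduce to a few lines of arithmetic, using only the figures quoted above (Real Co's total acquisition spend is backed out from its stated per-unit CAC, and Phantom Co's LTV is taken at the $90 midpoint of its $80-100 range):

```python
def ltv_cac(spend, verified_units, ltv):
    cac = spend / verified_units
    return cac, ltv / cac

# Phantom Co: $800k acquisition spend, 7,500 verified repeating users, ~$90 LTV.
cac_p, ratio_p = ltv_cac(800_000, 7_500, 90)

# Real Co: stated CAC of $16.40 over 11,000 verified users, $140 LTV.
cac_r, ratio_r = ltv_cac(16.40 * 11_000, 11_000, 140)

print(f"Phantom Co: CAC ${cac_p:.0f}, LTV/CAC {ratio_p:.1f}x")
print(f"Real Co:    CAC ${cac_r:.2f}, LTV/CAC {ratio_r:.1f}x")
```

The spread between 0.8x and 8.5x is not a product-quality gap. It is the gap between spending against a phantom denominator and spending against a verified one.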
Valuation implications are severe. A sophisticated investor prices Phantom Co at 3-4x, recognising unit identification risk. Real Co commands 12-15x, recognising compounding potential on verified demand. The “African discount” applies selectively—driven by unit validity, not geography.
Standard advice to struggling African startups: fix your unit economics. Improve CAC efficiency. Increase LTV. Optimise the funnel.
Incomplete advice. Applied to a venture suffering from unit misidentification, it actively misleads.
You cannot optimise economics on a unit that doesn’t exist. You cannot improve retention on users who were never going to repeat. You cannot capture margin from transactions that were always pass-through. The spreadsheet keeps producing numbers. The numbers don’t correspond to anything in the market.
Before asking “what are your unit economics,” ask “what is your unit?”
This requires honest answers to the four protocol tests. Does your unit exist under commercial conditions? Will it return without re-acquisition spend? Can you capture margin? Can you measure it reliably?
If any answer is “no” or “unknown,” unit economics analysis is premature. The prior step is unit validation—finding, verifying, or pivoting to a unit that passes all four tests.
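The gate logic is simple enough to write down. A sketch of the decision rule (test names from the protocol; how you gather the evidence behind each boolean is the hard part, and is not modelled here):

```python
from typing import Optional

TESTS = ("existence", "repeatability", "monetisability", "measurability")

def unit_is_valid(results: dict[str, Optional[bool]]) -> bool:
    # A unit passes only if every test is an explicit True.
    # "Unknown" (None) fails exactly like "no": unvalidated is invalid.
    return all(results.get(t) is True for t in TESTS)

# Example: existence and monetisability proven, repeatability still unknown.
print(unit_is_valid({
    "existence": True,
    "repeatability": None,   # unknown -> unit economics analysis is premature
    "monetisability": True,
    "measurability": True,
}))
```

Treating "unknown" as a failure is the point of the rule: the burden of proof sits on the unit, not on the sceptic.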
For founders: Resist scaling before unit validity is confirmed. Growth capital expects proof of repeatability. Validate at pre-seed and seed, not when Series A investors stress-test your cohorts. Run your current metrics through the four tests. Be honest about what emerges.
For investors: Add unit identification to due diligence—as prerequisite, not afterthought. A company with weaker top-line numbers but verified units is almost always a better position than impressive MAUs built on phantom demand. “How did you validate that these are real, repeating, monetisable users?” should precede any discussion of multiples.
For the ecosystem: If African ventures can demonstrate rigorous unit identification, the systematic discount starts to erode. Capital allocators aren’t hostile to the continent. They’re hostile to unpriced unit risk. Remove the risk, and pricing normalises.
Building analytical infrastructure for African venture means creating diagnostic tools that distinguish real opportunity from well-presented fantasy. Unit identification is one such tool. It won’t transform bad markets into good ones—but it stops good operators from building on false foundations.
The L.U.M.I. Brief exists to develop and distribute these tools. If you’re a founder, run the protocol on your own metrics this week—before someone else does it in diligence. If you’re an investor, add the four questions to your next screening call. And if this framework was useful, the next layer goes deeper: the UIP Diagnostic Template, a structured tool for stress-testing existence, repeatability, monetisability, and measurability across your user base or portfolio. Subscribers get first access.