# Bottom-up Jevons / token-demand model

This is a **scenario model**, not a measured market-size claim.

The point is not to guess the exact number of tokens the industry will buy. The point is to show how reasoning demand can compound once a workflow becomes cheap enough to instrument continuously.

## Formula

For each workflow family:

**Annual token demand ≈ loops per year × scenarios per loop × tokens per scenario × review / revision passes**

That is the per-unit model. Total demand then scales with the number of:
- assets
- companies
- jurisdictions
- desks
- projects
- portfolios
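The per-unit formula and the scaling step can be written out directly. A minimal sketch follows; every input number is a placeholder assumption chosen for illustration, not a measured rate.

```python
def annual_token_demand(loops_per_year: float,
                        scenarios_per_loop: float,
                        tokens_per_scenario: float,
                        review_passes: float) -> float:
    """Annual token demand for one workflow family in one operating unit."""
    return loops_per_year * scenarios_per_loop * tokens_per_scenario * review_passes


def scaled_demand(per_unit_demand: float, n_units: int) -> float:
    """Scale per-unit demand across assets, companies, desks, projects, etc."""
    return per_unit_demand * n_units


# Hypothetical inputs: a weekly loop, 10 scenarios per loop,
# 50k tokens per scenario, 2 review / revision passes.
per_unit = annual_token_demand(52, 10, 50_000, 2)  # 52,000,000 tokens/year
fleet = scaled_demand(per_unit, 200)               # across 200 operating units
print(f"{per_unit:,.0f} tokens/unit/year, {fleet:,.0f} fleet-wide")
```

The multiplication is trivial on purpose: the interesting part is which multiplier a deployment moves first, since each factor compounds against the others.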

## Why this matters

The weak framing is:
- one assistant replaces one analyst

The stronger framing is:
- every serious loop starts to accumulate a standing synthetic analyst
- more scenarios get run
- more exceptions get reviewed
- more packets get refreshed
- more evidence gets kept live

## What the scenarios represent

### Base
A credible first wedge with bounded adoption and limited model retries.

### Aggressive
A strong deployment where the organization stops rationing many routine analytical passes.

### Extreme diffusion
The loop becomes persistent, multi-pass, and highly instrumented. The workflow is no longer an occasional assistant. It is a standing reasoning layer.

## How to interpret the table

The scenario table is expressed **per operating unit**:
- per borrower
- per utility / jurisdiction
- per non-op platform
- per LNG desk / terminal
- per refinery + control stack
- per megaproject delivery organization

That makes it easier for a real buyer or frontier lab to scale from their own footprint.
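As a sketch of how a buyer might plug their own footprint into a per-unit table, the scenario tiers below are illustrative parameter sets fed through the same unit formula. All parameter values and the unit count are invented for the example, not taken from the scenario table itself.

```python
# Illustrative scenario tiers; every number is a placeholder, not a measurement.
SCENARIOS = {
    "base":       dict(loops_per_year=12,  scenarios_per_loop=3,
                       tokens_per_scenario=20_000, review_passes=1),
    "aggressive": dict(loops_per_year=52,  scenarios_per_loop=10,
                       tokens_per_scenario=50_000, review_passes=2),
    "extreme":    dict(loops_per_year=365, scenarios_per_loop=25,
                       tokens_per_scenario=80_000, review_passes=3),
}

def per_unit_demand(p: dict) -> int:
    """Annual tokens for one operating unit under one scenario tier."""
    return (p["loops_per_year"] * p["scenarios_per_loop"]
            * p["tokens_per_scenario"] * p["review_passes"])

# A buyer scales by their own footprint, e.g. a hypothetical 40 desks:
n_units = 40
for name, params in SCENARIOS.items():
    print(f"{name:>10}: {per_unit_demand(params) * n_units:,} tokens/year")
```

Note how the tiers differ mainly in cadence and pass counts rather than in token size per scenario: that is where the compounding lives.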

## The important insight

The biggest growth in model usage does not come from replacing one junior seat. It comes from increasing:
- cadence
- branch count
- monitoring coverage
- version count
- evidence refresh
- exception review

That is the Jevons move in this market.

## Where the model should be refined next

This scenario model becomes much more credible once fieldwork adds:
- true artifact counts
- page and workbook sizes
- retry / review rates
- monitoring cadence
- human signoff frequency
- actual task-level latency and cost data
