FMP
Jan 24, 2026
Cross-cap analysis usually fails before any valuation model is applied. The problem starts with the choice of inputs, not with how those inputs are processed.
Most comparisons rely on absolute figures such as revenue, EBITDA, or net income. These numbers look objective, but they scale automatically with company size. A larger balance sheet produces larger outputs even when operational quality stays average. Scale alone explains much of what analysts later interpret as strength.
This creates a structural mismatch. Large companies dominate headline metrics because they have accumulated capital over time. Smaller companies, even when operating efficiently or reinvesting aggressively, appear weaker simply because their numbers start from a smaller base. The comparison is already biased before interpretation begins.
The issue becomes harder to spot because the analysis still looks disciplined. Financial statements line up cleanly. Ratios get calculated. Tables look precise. But the conclusions remain driven by magnitude rather than performance.
This failure shows up repeatedly in practice. Smaller firms get labeled as expensive because profits look thin in absolute terms. Larger firms receive implicit validation because their scale produces reassuring totals. Neither judgment reflects how well the business actually operates.
Nothing has gone wrong mathematically at this stage. The mistake is conceptual. Size has not been separated from quality. Until that separation happens, every downstream metric inherits the same distortion.
That is the problem the framework needs to solve. This is also why a consistent fundamentals dataset matters in cross-cap work, especially when you pull profiles, ratios, and trailing metrics from Financial Modeling Prep (FMP).
Before choosing valuation multiples or performance metrics, one decision matters more than all others. You must decide what does not belong in the comparison.
Size is the first thing that needs to go.
Financial statements mix two very different signals. One reflects how large a business is. The other reflects how well it operates. Absolute numbers blur that distinction. Normalization exists to separate them.
The rule is simple in theory: compare companies using measures that do not grow automatically with scale. In practice, this means shifting away from totals and toward ratios that express relationships rather than magnitudes.
A useful way to think about it is dimensionality. Revenue, EBITDA, and net income are measured in currency. They expand as capital expands. Margins, returns, and multiples are dimensionless. They describe efficiency, pricing, and valuation without embedding size.
Normalization does not mean adjusting numbers to look similar. It means choosing metrics that remain stable when company size changes. If doubling the size of a business mechanically doubles a metric, that metric does not belong in a cross-cap comparison.
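A quick sketch makes the test concrete. The figures below are made up for illustration: doubling the business doubles every currency-denominated total, while the margin, a dimensionless ratio, does not move.

```python
# Hypothetical firm; all figures are illustrative.
revenue, operating_income = 1_000.0, 150.0

# Double the business: currency totals scale mechanically with size...
revenue_2x, operating_income_2x = revenue * 2, operating_income * 2

# ...but the operating margin, a ratio, is unchanged.
print(operating_income / revenue)        # 0.15
print(operating_income_2x / revenue_2x)  # 0.15
```

Revenue fails the test; operating margin passes it.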
This rule also applies to time. Mixing trailing twelve months for one company with full-year numbers for another quietly reintroduces distortion. Normalization requires consistency across both scale and measurement window.
Once scale is removed, differences start to mean something. Valuation reflects what the market pays for a unit of business. Performance reflects how much value that unit creates. Only after this separation does comparison become informative rather than cosmetic.
Everything that follows builds on this rule. If scale is not stripped out first, no amount of downstream analysis can fix the comparison.
Market capitalization often becomes the default reference point in company comparisons because it is simple, visible, and widely quoted. At a glance, it appears to offer a clean way to line companies up by size. The problem is that market cap measures only one narrow dimension of a business, while cross-cap analysis implicitly tries to compare something much broader.
To understand why market cap fails as a normalization anchor, it helps to separate what it actually measures from what analysts often assume it represents.
Market capitalization measures the value of a company's equity. Nothing more. It reflects how the market prices the ownership claim, not how large or complex the underlying business is.
That distinction matters because operating scale and equity value are not the same thing. Two companies can show identical market caps while running businesses with very different economic footprints.
Market cap ignores how a company finances itself. Debt does not disappear just because it sits below equity in the capital structure.
A leveraged company can look comparable to an unleveraged one in market cap terms, even though it operates a much larger and riskier enterprise. Treating those two businesses as equivalent assumes capital structure has no impact on valuation or performance. In practice, it often dominates both.
Cash creates the reverse problem. Companies that accumulate large cash balances inflate market cap without expanding their operating business.
In those cases, market cap overstates operating scale. Two firms with similar market caps may run businesses of very different sizes once excess cash is stripped out. Market cap alone cannot make that distinction.
Cross-cap analysis tries to compare operating businesses, not equity claims in isolation. Market cap mixes operating performance with financing decisions and balance sheet choices.
As a result, comparisons anchored on market cap blur three separate effects:

- operating performance, or how well the underlying business runs
- financing decisions, or how much leverage sits beneath the equity
- balance sheet choices, such as how much cash is held outside operations
Once those effects are blended together, valuation signals lose clarity. If you want a deeper breakdown of where market cap stops working and EV becomes the cleaner anchor, FMP's guide on Enterprise Value vs. Market Capitalization expands on the difference with examples.
Market capitalization is not meaningless. It provides context. It tells you how the market values the equity portion of a company.
But it is not a reliable base for comparing businesses across different sizes. For that, the anchor needs to reflect the full economic footprint of the firm, not just the equity slice.
That limitation is why market cap works poorly as a normalization tool in cross-cap analysis.
Once market capitalization is stripped down to what it actually measures, the need for a different anchor becomes clear. Cross-cap analysis requires a reference point that reflects the size of the operating business itself, not just the equity claim attached to it. That anchor must account for how the business is financed while isolating the economic scale that produces operating results.
Enterprise Value fills that role by design.
Enterprise Value represents the value of the operating business as a whole. It starts from equity value, adds total debt, and subtracts cash on the balance sheet. The result is a measure that reflects how much capital is tied up in running the business, regardless of how that capital is financed.
Unlike market cap, EV aligns more closely with the economic footprint of the firm. It answers a more relevant question for comparison: how large is the business that produces the operating results? FMP's primer on Enterprise Value lays out the definition and why it maps more closely to operating footprint than equity value alone.
EV removes two distortions that market cap cannot handle. It neutralizes leverage by incorporating debt, and it adjusts for excess cash that does not contribute to operations.
This makes EV comparable across companies with different financing strategies. A debt-heavy firm and a cash-rich firm no longer appear similar simply because their equity values align. The comparison shifts from ownership claims to business scale.
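A minimal sketch with hypothetical numbers makes the contrast explicit, using the standard formula of equity value plus total debt, minus cash:

```python
def enterprise_value(market_cap: float, total_debt: float, cash: float) -> float:
    """Standard EV formula: equity value plus total debt, minus cash."""
    return market_cap + total_debt - cash

# Two hypothetical firms with identical $10B market caps.
leveraged = enterprise_value(market_cap=10e9, total_debt=8e9, cash=1e9)  # 17e9
cash_rich = enterprise_value(market_cap=10e9, total_debt=0.0, cash=4e9)  # 6e9

# Equity values match, but the operating footprints differ by almost 3x.
print(f"leveraged: {leveraged/1e9:.0f}B vs cash-rich: {cash_rich/1e9:.0f}B")
```

On market cap alone the two firms look identical. On EV, one runs a business almost three times the size of the other.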
Using EV does not imply anything about whether a company is cheap or expensive. It only establishes a consistent base.
EV is a starting point, not a judgment. It ensures that when valuation multiples or performance ratios are applied, they operate on a comparable economic foundation. Without this step, downstream metrics inherit distortions from capital structure.
EV is not universally applicable. Financial institutions operate with balance sheets that make debt a core input rather than a financing choice. In those cases, EV-based analysis loses meaning.
Outside those exceptions, EV remains the most reliable anchor for cross-cap comparisons. It allows valuation and performance metrics to be interpreted without confusing size, leverage, and liquidity effects.
Anchoring valuation to Enterprise Value removes the most obvious distortions caused by capital structure and excess cash. The next decision is more subtle. Even with a consistent base, the choice of valuation multiple can quietly reintroduce scale effects and undo the normalization step.
Some multiples remain stable as business size changes. Others embed assumptions about maturity, volatility, or reinvestment that bias comparisons across market caps. Understanding that distinction is essential before any relative valuation is interpreted.
Not all valuation multiples behave the same way across market caps. Some embed scale effects so deeply that they reintroduce bias even after anchoring to Enterprise Value.
Multiples tied directly to equity earnings tend to exaggerate this problem. Smaller companies show higher earnings volatility. Larger companies benefit from stability, accounting smoothing, and slower reinvestment cycles. When those differences are ignored, the multiple ends up reflecting maturity rather than value.
EV-based multiples tend to hold up better in cross-cap analysis because they price the operating business before financing effects.
EV to EBITDA is the most commonly used example. It allows businesses with different leverage profiles to be evaluated on operating performance rather than capital structure. When EBITDA is a meaningful proxy for operating cash flow, this multiple provides a reasonable baseline for comparison.
That does not make it universally correct. It makes it directionally safer than equity-only multiples.
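A short worked example, with hypothetical figures, shows why. Two firms run identical operations; one is all-equity, the other carries debt. EV/EBITDA prices them the same, while P/E diverges purely because of capital structure:

```python
ebitda, dep_amort, tax_rate = 100.0, 20.0, 0.25

# Same 1,000 operating business; the second firm swaps 400 of equity for 5% debt.
for name, debt, rate in [("all-equity", 0.0, 0.0), ("leveraged", 400.0, 0.05)]:
    ev = 1_000.0                 # identical business, identical EV
    equity = ev - debt           # market cap is the residual claim
    net_income = (ebitda - dep_amort - debt * rate) * (1 - tax_rate)
    print(f"{name}: EV/EBITDA = {ev/ebitda:.1f}, P/E = {equity/net_income:.1f}")

# all-equity: EV/EBITDA = 10.0, P/E = 16.7
# leveraged:  EV/EBITDA = 10.0, P/E = 13.3
```

The leveraged firm screens as cheaper on P/E even though the business is identical. The equity multiple is reporting financing, not value.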
Some businesses are still building margins. Others intentionally suppress earnings through reinvestment. In those cases, earnings-based multiples punish growth rather than risk.
Revenue multiples offer a way around that problem. They compare valuation at the top line, where business activity is already visible. However, revenue alone does not signal quality. Without pairing it with margins or cash flow, price-to-sales can reward low-quality growth.
This is why revenue multiples work best as part of a paired view, not as standalone indicators.
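The arithmetic behind the pairing is simple: price-to-sales divided by free cash flow margin is just price relative to free cash flow. A brief sketch with hypothetical figures:

```python
# Two hypothetical firms trading at the same price-to-sales multiple.
for name, ps, fcf_margin in [("high-quality", 5.0, 0.25), ("low-quality", 5.0, 0.02)]:
    # (Price / Sales) / (FCF / Sales) = Price / FCF
    print(f"{name}: P/S = {ps:.1f}, implied P/FCF = {ps / fcf_margin:.0f}x")

# high-quality: P/S = 5.0, implied P/FCF = 20x
# low-quality:  P/S = 5.0, implied P/FCF = 250x
```

The same headline multiple hides a more than tenfold difference in what the buyer actually pays per dollar of cash flow.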
Price-to-earnings ratios are easy to interpret, which makes them easy to misuse. In cross-cap comparisons, they often favor large, mature companies and penalize firms that reinvest aggressively.
P/E can still add context once profitability stabilizes and cash conversion is established. It should confirm conclusions drawn from other metrics, not drive them.
Valuation improves when the choice of multiple follows business reality rather than habit.
If operating cash flow is stable, EV-based multiples usually belong first.
If earnings are unstable but revenue quality is visible, revenue multiples help frame valuation.
If profitability is mature and reinvestment is moderate, P/E becomes informative.
Using multiples this way preserves comparability without letting scale creep back into the analysis.
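Expressed as code, that ordering might look like the illustrative helper below. The function and its boolean inputs are assumptions made for the sketch, not part of any FMP API:

```python
def primary_multiple(cash_flow_stable: bool,
                     revenue_quality_visible: bool,
                     profitability_mature: bool) -> str:
    """Illustrative ordering of the rules above; real screens need more nuance."""
    if cash_flow_stable:
        return "EV/EBITDA"                  # price the operating business first
    if revenue_quality_visible:
        return "P/S (paired with margins)"  # frame valuation at the top line
    if profitability_mature:
        return "P/E"                        # informative once earnings stabilize
    return "no single multiple; combine several views"

print(primary_multiple(False, True, False))  # P/S (paired with margins)
```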
Even once valuation has been normalized, the analysis remains incomplete. Valuation explains how the market prices a business. It does not explain how the business actually performs. Cross-cap comparison only becomes informative when pricing is paired with measures that describe operating quality in a way that does not scale mechanically with company size.
This is where size-neutral performance metrics enter the framework. They shift the focus from market perception to business behavior and help distinguish between valuation differences driven by efficiency and those driven by structure or maturity.
Valuation multiples explain how the market prices a business. They do not explain how the business actually performs. Two companies can trade at similar multiples and deliver very different economic outcomes over time.
Cross-cap comparison improves only when valuation is paired with measures of operating quality. Those measures must remain stable as company size changes. Absolute profits fail that test. Efficiency metrics do not.
Return on invested capital focuses on how effectively a company uses the capital required to run its business. It connects operating profit to the full pool of capital employed, not just equity.
This makes ROIC particularly useful across market caps. A small company and a large company can be compared on how much value they generate per unit of capital, regardless of scale. High ROIC signals strong unit economics or disciplined reinvestment. Low ROIC often points to structural inefficiencies or capital-heavy growth.
Accounting earnings can look healthy while cash generation lags. Free cash flow margin addresses that gap by testing whether reported performance turns into usable cash after reinvestment.
This matters in cross-cap analysis because growth-stage and mature companies convert earnings into cash very differently. FCF margin cuts through that difference. It highlights whether margins are durable or supported by temporary accounting effects.
A business that shows strong ROIC but weak cash conversion deserves scrutiny. A business that combines both usually earns its valuation.
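Both metrics reduce to simple ratios. Here is a minimal sketch, assuming one common definition of ROIC (after-tax operating profit over invested capital) and illustrative inputs:

```python
def roic(ebit: float, tax_rate: float, invested_capital: float) -> float:
    """One common definition: after-tax operating profit (NOPAT) / invested capital."""
    return ebit * (1 - tax_rate) / invested_capital

def fcf_margin(operating_cash_flow: float, capex: float, revenue: float) -> float:
    """Free cash flow (OCF minus capex) per dollar of revenue."""
    return (operating_cash_flow - capex) / revenue

# Hypothetical firm: decent capital efficiency, weak cash conversion.
print(f"ROIC: {roic(ebit=300, tax_rate=0.25, invested_capital=1_000):.1%}")                # 22.5%
print(f"FCF margin: {fcf_margin(operating_cash_flow=180, capex=150, revenue=1_000):.1%}")  # 3.0%
```

A profile like this one, solid returns on capital but thin cash conversion, is exactly the combination that deserves scrutiny.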
ROIC and FCF margin express relationships, not magnitudes. They do not automatically grow as balance sheets expand. That is what makes them comparable.
Large companies often benefit from scale but suffer diminishing returns on incremental capital. Smaller companies may operate with higher efficiency but limited absolute output. These metrics surface that contrast without favoring either side.
At this point, size has been neutralized. What remains is a clearer view of operating quality. That is the foundation needed before building any comparative table.
With the analytical framework in place, the focus now shifts from reasoning to implementation. The objective is to translate the normalization principles into a comparison structure that preserves their intent. If the table design drifts from those principles, the analysis quietly breaks even if the calculations appear correct.
This section defines what the comparison table must include—and just as importantly, what it must exclude—to keep size from reentering the analysis.
At this stage, the framework is defined. The remaining task is execution. The comparison table must reflect the same normalization logic used throughout the analysis. If the inputs drift, the conclusions drift with them.
Each row represents one company. Each column represents a size-neutral signal. Nothing in the table should scale automatically with company size.
The goal is not to rank companies. The goal is to make differences interpretable.
The table pulls from a small, controlled set of metrics:

- Market capitalization, kept for context only
- EV/EBITDA (TTM) as the primary valuation anchor
- P/E (TTM) as a secondary, confirming multiple
- ROIC (TTM) for capital efficiency
- Free cash flow margin (TTM) for cash conversion
All metrics must come from the same time window. Mixing trailing twelve months with fiscal-year numbers breaks comparability quietly. The table only works if every column reflects the same reporting basis.
Missing values should be handled explicitly. Silent fallbacks are worse than empty cells because they change the definition of the metric without warning.
This is the point where the data layer matters, because Financial Modeling Prep standardizes the same fundamentals and trailing metrics across companies and reporting windows.
The table relies on:

- the profile and quote endpoints for market capitalization and share price
- the key-metrics-ttm and ratios-ttm endpoints for EV/EBITDA, ROIC, and P/E inputs
- the latest income-statement and cash-flow-statement entries for revenue and free cash flow
Each metric is sourced once and reused consistently. No manual normalization. No cross-provider stitching. The only practical constraint is call volume and dataset access, which differ across FMP pricing plans.
This is also where Python enters the picture.
The table pulls every metric from FMP Stable endpoints to keep the time window and definitions consistent across companies:
```python
import math

import requests
import pandas as pd

API_KEY = "YOUR_FMP_API_KEY"
BASE_URL = "https://financialmodelingprep.com/stable"
SYMBOLS = ["AAPL", "AMD", "NVDA"]  # replace with your tickers

session = requests.Session()
session.headers.update({"User-Agent": "fmp-cross-cap/1.0"})


def is_number(x):
    return isinstance(x, (int, float)) and not (isinstance(x, float) and math.isnan(x))


def first_item(x):
    # FMP endpoints return either a list of records or a single object.
    if isinstance(x, list):
        return x[0] if x else {}
    return x or {}


def pick(d, *keys):
    # Return the first non-null value among candidate field names.
    if not isinstance(d, dict):
        return None
    for k in keys:
        v = d.get(k)
        if v is not None:
            return v
    return None


def safe_div(a, b):
    if not is_number(a) or not is_number(b) or b == 0:
        return None
    return a / b


def fmp_get(path, params):
    params = dict(params)
    params["apikey"] = API_KEY
    r = session.get(f"{BASE_URL}/{path}", params=params, timeout=30)
    r.raise_for_status()
    return r.json()


def compact_money(x):
    if not is_number(x):
        return None
    absx = abs(x)
    if absx >= 1e12:
        return f"{x/1e12:.2f}T"
    if absx >= 1e9:
        return f"{x/1e9:.2f}B"
    if absx >= 1e6:
        return f"{x/1e6:.2f}M"
    if absx >= 1e3:
        return f"{x/1e3:.2f}K"
    return f"{x:.2f}"


def to_pct(x):
    if not is_number(x):
        return None
    return round(100 * x, 2)


def to_num(x, nd=2):
    if not is_number(x):
        return None
    return round(x, nd)


def compute_pe_ttm(ratios_ttm, km_ttm, quote, income_latest):
    # Prefer provider-computed TTM P/E; fall back to price / TTM EPS.
    pe = pick(ratios_ttm, "priceEarningsRatioTTM", "peRatioTTM")
    if is_number(pe):
        return pe
    pe = pick(km_ttm, "peRatioTTM", "peTTM", "priceEarningsRatioTTM")
    if is_number(pe):
        return pe
    price = pick(quote, "price")
    eps = pick(km_ttm, "netIncomePerShareTTM", "epsTTM", "earningsPerShareTTM")
    if not is_number(eps):
        eps = pick(income_latest, "eps")  # fallback if present in income-statement response
    return safe_div(price, eps)


def compute_fcf_margin_ttm(ratios_ttm, income_latest, cashflow_latest):
    # Prefer provider-computed TTM FCF margin; otherwise derive FCF / revenue.
    fcf_margin = pick(ratios_ttm, "freeCashFlowMarginTTM")
    if is_number(fcf_margin):
        return fcf_margin
    revenue = pick(income_latest, "revenue")
    fcf = pick(cashflow_latest, "freeCashFlow")
    if not is_number(fcf):
        ocf = pick(cashflow_latest, "operatingCashFlow")
        capex = pick(cashflow_latest, "capitalExpenditure")
        if is_number(ocf) and is_number(capex):
            fcf = ocf - capex
    return safe_div(fcf, revenue)


rows = []
for sym in SYMBOLS:
    try:
        profile = first_item(fmp_get("profile", {"symbol": sym}))
        quote = first_item(fmp_get("quote", {"symbol": sym}))
        km_ttm = first_item(fmp_get("key-metrics-ttm", {"symbol": sym}))
        ratios_ttm = first_item(fmp_get("ratios-ttm", {"symbol": sym}))
        income_latest = first_item(fmp_get("income-statement", {"symbol": sym, "limit": 1}))
        cashflow_latest = first_item(fmp_get("cash-flow-statement", {"symbol": sym, "limit": 1}))

        market_cap = pick(profile, "marketCap", "mktCap")
        ev_ebitda = pick(km_ttm, "evToEBITDATTM")
        roic = pick(km_ttm, "roicTTM", "returnOnInvestedCapitalTTM")
        pe = compute_pe_ttm(ratios_ttm, km_ttm, quote, income_latest)
        fcf_margin = compute_fcf_margin_ttm(ratios_ttm, income_latest, cashflow_latest)

        rows.append({
            "Symbol": sym,
            "Market Cap": market_cap,
            "EV/EBITDA (TTM)": ev_ebitda,
            "P/E (TTM)": pe,
            "ROIC (TTM)": roic,
            "FCF Margin (TTM)": fcf_margin,
        })
    except Exception as e:
        # Keep failures visible instead of silently dropping a company.
        rows.append({"Symbol": sym, "Error": str(e)})

df = pd.DataFrame(rows)

# presentation layer for readability in the article
out = df.copy()
if "Market Cap" in out.columns:
    out["Market Cap"] = out["Market Cap"].apply(compact_money)
if "EV/EBITDA (TTM)" in out.columns:
    out["EV/EBITDA (TTM)"] = out["EV/EBITDA (TTM)"].apply(lambda x: to_num(x, 2))
if "P/E (TTM)" in out.columns:
    out["P/E (TTM)"] = out["P/E (TTM)"].apply(lambda x: to_num(x, 2))
if "ROIC (TTM)" in out.columns:
    out["ROIC (TTM)"] = out["ROIC (TTM)"].apply(to_pct)
if "FCF Margin (TTM)" in out.columns:
    out["FCF Margin (TTM)"] = out["FCF Margin (TTM)"].apply(to_pct)

print(out.to_string(index=False))
```
The resulting table shows the same companies through two different lenses at the same time. Market cap provides context, but it no longer drives the comparison. The remaining columns focus on how the market prices operating output and how efficiently each business converts capital into cash and returns.
Start with valuation. Apple and Nvidia both sit above the trillion-dollar mark, while AMD is significantly smaller. If size dictated valuation, Apple and Nvidia would cluster together. Instead, the EV/EBITDA column shows a different picture. Apple trades at 26.56x, Nvidia at 38.07x, and AMD at 61.54x. The smallest company in the group carries the highest operating multiple. Size no longer explains price.
The next step is to connect valuation with operating quality. Apple shows strong capital efficiency, with ROIC near 52 percent and a free cash flow margin above 23 percent. That combination reflects a mature business that converts capital into cash reliably. The market prices that stability at the lowest EV/EBITDA multiple in the table.
Nvidia sits in a different regime. ROIC approaches 69 percent and free cash flow margin exceeds 46 percent. Those figures indicate exceptional capital productivity and cash conversion. The higher EV/EBITDA multiple reflects that operating reality. The valuation premium is not driven by size. It is supported by efficiency.
AMD provides the contrast that makes the framework useful. Despite being much smaller, it trades at the highest EV/EBITDA and an extremely elevated P/E. At the same time, ROIC remains low and free cash flow margin is materially weaker than the other two. This does not mean AMD is mispriced by definition. It means the current valuation is not explained by present-day operating efficiency or cash generation. The table surfaces that gap clearly.
This is the point of a size-neutral comparison. Absolute numbers disappear from the decision process. Market cap stops acting as a proxy for quality. Valuation is interpreted alongside capital efficiency and cash conversion, not in isolation.
Once the data is normalized this way, the analysis stops asking which company is bigger and starts asking which company earns its valuation. That shift is exactly what cross-cap comparison is meant to achieve.
Comparing companies across market caps only works when size is removed from the analysis. Absolute figures reward scale, not performance, and lead to conclusions that feel analytical but rest on arithmetic rather than economics.
Once valuation is anchored to the operating business and paired with size-neutral performance metrics, the comparison changes. Companies no longer cluster by market cap. They separate by capital efficiency, cash conversion, and how the market prices those attributes. Premium valuations become explainable. Cheap valuations stop looking cheap by default.
The framework does not rank winners or predict outcomes. It does something more important. It makes differences visible without letting size decide the result. That is the baseline required for any serious cross-cap analysis.