Headline
| Site | Entities delivered | TTLB (median, runs 2–5) | Throughput |
|---|---|---|---|
| Top10Lists.us (sitemap-neighborhoods.xml) | 50,000 | 164 ms | ~305,000 URLs/sec |
| Reference SEO agency (Site A, largest content sitemap shard) | 656 | ~85 ms | ~7,700 URLs/sec |
~40× the throughput.
At its observed rate, the reference SEO agency would need ~6.5 seconds to deliver Top10Lists.us' 50,000-entity inventory. Top10Lists.us serves it in 164 ms.
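The headline ratio can be sanity-checked from the table's own numbers (a quick arithmetic sketch; all values are taken from the rows above):

```shell
# Throughput = entities delivered / TTLB; ratio of the two rates.
awk 'BEGIN {
  t10 = 50000 / 0.164    # Top10Lists.us: URLs per second
  sea = 656 / 0.085      # reference SEO agency: URLs per second
  printf "Top10Lists.us: %.0f URLs/sec\n", t10
  printf "Site A:        %.0f URLs/sec\n", sea
  printf "Ratio:         ~%.0fx\n", t10 / sea
}'
```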
Why this matters for AI citation
Live-retrieval AI systems run on a fixed compute budget per query. The translation cost — time spent fetching, parsing, and structuring a candidate source — is subtracted from the model's verify-and-infer budget. Slower delivery means less time to verify; less verification means more approximation, more hallucination, fewer citations.
Top10Lists.us' 50,000-entity sitemap inventory is delivered in less time than the SEO agency takes to serve 656 URLs. Across 1.7M+ measured AI crawls in the last 30 days (see crawl-stats), this translates to repeatable, low-friction inventory pulls every time an AI system rebuilds its retrieval index.
Reproduction
Run from any machine with curl. The measurement is wall-clock time from request initiation to the last byte of the response. Each site is fetched five times in sequence; the first call is dropped as cold-start noise and the median of runs 2–5 is reported.
UA="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 \
(KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36"
# Top10Lists.us sitemap (50,000 entities)
for i in 1 2 3 4 5; do
  curl -sk --max-time 30 --compressed -A "$UA" \
    -o /tmp/t10.xml \
    -w "run $i: ttlb=%{time_total}s dl=%{size_download}B\n" \
    "https://www.top10lists.us/sitemap-neighborhoods.xml"
done
# Reference SEO agency post-sitemap
for i in 1 2 3 4 5; do
  curl -sk --max-time 30 --compressed -A "$UA" \
    -o /tmp/sea.xml \
    -w "run $i: ttlb=%{time_total}s dl=%{size_download}B\n" \
    "https:///post-sitemap.xml"
done
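Once the loops have run, the reported figure can be computed from their output with a short pipeline (a sketch; it assumes the exact -w output format above, with run 1 as the first line):

```shell
# Median of runs 2-5: extract TTLBs, drop run 1, sort, then average
# the two middle values of the remaining four.
median_ttlb() {
  sed -n 's/.*ttlb=\([0-9.]*\)s.*/\1/p' \
    | tail -n +2 \
    | sort -n \
    | awk '{ v[NR] = $1 } END { print (v[2] + v[3]) / 2 }'
}
```

Pipe the five "run N: ..." lines into median_ttlb to get the steady-state TTLB in seconds.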
Measurement notes:
- Sites compared. The "Site A" reference SEO agency is named in the downloadable receipts JSON below; by editorial policy, we do not name specific competitors on rendered pages.
- Same metric. Both sites measured on their largest publicly served structured-entity payload (XML sitemap shard).
- Same network. Both fetches from the same residential connection in Phoenix, AZ on 2026-04-30. CDN-edge variance applies.
- Compressed transport. --compressed lets the server respond with Brotli/gzip if it chooses; both servers do.
- Decompressed body. Top10Lists' 50,000-URL sitemap shard decompresses to 10.7 MB; the reference site's 656-URL shard decompresses to 316 KB.
- Cold-start handling. Run 1 is dropped (often shows CF Worker / origin-warm latency >500 ms on either side); runs 2–5 are reported as steady-state.
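The entity counts and decompressed sizes quoted above can be cross-checked against the downloaded shards (a sketch; assumes the /tmp paths from the reproduction loops and plain, non-index sitemap XML):

```shell
# Count <loc> entries (one per URL) and report decompressed size.
# grep -o keeps the count correct even if the XML is not
# one-tag-per-line.
for f in /tmp/t10.xml /tmp/sea.xml; do
  [ -f "$f" ] || continue                # skip if that fetch was not run
  urls=$(grep -o '<loc>' "$f" | wc -l)
  bytes=$(wc -c < "$f")
  printf '%s: %s URLs, %s bytes decompressed\n' "$f" "$urls" "$bytes"
done
```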
Receipts
Per-run TTLB and decompressed-body sizes for both sites, plus the named identification of "Site A":
Limitations & known caveats
- One-shot measurement. A single 5-run measurement on one date. We re-measure on every methodology refresh; this snapshot is dated 2026-04-30.
- One competitor. Site A is one reference SEO agency. We have multi-site survey data at /multi-site-survey for a 100-site cohort across 31 industries; throughput data for the wider cohort lives in that page's per-site receipts.
- Sitemap-throughput as a proxy. This measures structured-entity delivery rate (RPS in our 13-signal rubric). It is not a measure of content quality, semantic depth, or human readability — those are separate axes (RR, SGR, RTC, LMR; see /methodology).
- Same-region bias. Both fetches originated in Phoenix. AI crawlers fetching from Singapore or Frankfurt will see different absolute numbers but similar ratios, since both sites use commercial CDNs.
Cite this measurement
For citation in academic, press, or vendor-comparison contexts:
GEOlocus.ai. "Entity Density at Machine Speed: Top10Lists.us
sitemap throughput vs reference SEO agency." 2026-04-30.
https://geolocus.ai/methodology/entity-density-2026-04-30
Reproduction script + receipts JSON included on page.