AI Visibility Dictionary

This dictionary defines the key terms used in AI search visibility, AI citations, and retrieval-based ranking. Each entry is written to be liftable by AI systems: a direct definition, a concrete example, and a "so what" line that explains when it matters. Use this page as the canonical reference for your team's terminology.

Breadth

Definition: Breadth is the number of distinct topics, entities, and query clusters your site covers in a way that can be retrieved and summarized by AI systems.

Example: A DNS cluster that includes TTL, MX priority, propagation, DoH vs DoT, and dig commands has more breadth than a single "DNS basics" article.

So what: Breadth increases the number of entry points where AI systems can discover and cite you across a topic area.

Depth

Definition: Depth is how completely a single topic is answered, including steps, edge cases, examples, and failure modes.

Example: A TLS guide that includes certificate types, common misconfigurations, renewal strategy, and troubleshooting is deeper than a definition-only page.

So what: Depth increases the chance your page is selected as the "best single source" for a query.

Grounding query

Definition: A grounding query is a search an AI system runs to verify facts against sources before producing an answer.

Example: For "DoH vs DoT," the system may run grounding queries like "DoH vs DoT privacy performance differences" to fetch authoritative explanations.

So what: If your page aligns with grounding queries, you get retrieved more often and cited more consistently.

Fan-out query

Definition: A fan-out query is a follow-up query an AI system runs to expand coverage around the main question (related subtopics, alternatives, definitions).

Example: After retrieving "DoH vs DoT," it may fan out to "DNS leakage," "enterprise policy," or "mobile captive portals."

So what: Fan-out coverage is where breadth wins; it's how you show up across the surrounding questions.

Citation surface

Definition: A citation surface is a URL that repeatedly gets cited by AI systems as a source for answers.

Example: A "Best domain marketplaces" page becomes a citation surface if it's referenced across many marketplace-related prompts.

So what: Citation surfaces are your distribution hubs; improving them multiplies impact.

Citation gravity

Definition: Citation gravity is the compounding effect where already-cited pages keep getting cited more because they're repeatedly selected and reinforced.

Example: A hub page that's cited in many answers becomes the default retrieval target for that topic.

So what: Protect and refresh pages with citation gravity; they're compounding assets.

Citation piggybacking

Definition: Citation piggybacking is routing new pages through already-cited pages by adding a tight, relevant section and a small number of internal links near the top.

Example: Inside a highly cited marketplace guide, add a "New in 2026: Marketplace safety checklist" section that links to your new checklist article.

So what: It accelerates discovery, crawling, and AI selection for new content.

Chunking

Definition: Chunking is how search and AI systems split a page into retrievable sections.

Example: A long article may be retrieved as separate passages: definition, steps, FAQ, troubleshooting.

So what: You must write sections that stand alone; weak chunking reduces citations.
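
A minimal sketch of heading-based chunking, assuming a markdown-style page split on H2 headings; real retrieval pipelines use their own tokenizers, size limits, and overlap rules, so the function below is illustrative, not any system's actual splitter.

```python
def chunk_by_h2(page: str) -> list[dict]:
    """Split a markdown-style page into H2-level chunks (illustrative only).

    Real pipelines also enforce token limits and overlap; this sketch just
    shows why each section must stand alone once it is split off.
    """
    chunks = []
    heading = "intro"
    lines: list[str] = []
    for line in page.splitlines():
        if line.startswith("## "):  # a new H2 starts a new chunk
            if lines:
                chunks.append({"heading": heading, "text": "\n".join(lines).strip()})
            heading = line[3:].strip()
            lines = []
        else:
            lines.append(line)
    if lines:
        chunks.append({"heading": heading, "text": "\n".join(lines).strip()})
    return chunks

page = """Intro paragraph.

## MX priority
Defines which mail server is tried first.

## TTL
Controls how long resolvers cache DNS answers.
"""
for c in chunk_by_h2(page):
    print(c["heading"], "->", c["text"])
```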

Citation chunk

Definition: A citation chunk is a section engineered to be quoted: direct answer, explicit entities, one concrete example, and why it matters.

Example: An H2 that defines "MX priority," shows a sample record, then explains what breaks when the priority is set incorrectly.

So what: Strong citation chunks increase liftability and attribution.

Prechunking

Definition: Prechunking is writing content so the "chunks" are already optimal before systems split them.

Example: Each H2 includes a 1–2 sentence answer, an example, and a "so what" line.

So what: Prechunking improves retrieval success and reduces summarization errors.
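
One way to operationalize prechunking is a lint-style check over each section before publishing. The labels and the ~60-word threshold below are this page's own conventions, not a standard; a sketch:

```python
def lint_chunk(text: str) -> list[str]:
    """Flag prechunking gaps in one section (thresholds are assumptions)."""
    problems = []
    first_para = text.split("\n\n")[0]
    if len(first_para.split()) > 60:  # the answer should lead and stay compact
        problems.append("opening answer longer than ~60 words")
    if "example" not in text.lower():
        problems.append("no explicit example")
    if "so what" not in text.lower():
        problems.append("no 'so what' line")
    return problems

section = """TTL controls how long resolvers cache DNS answers.

Example: lower TTL to 300 before a migration.

So what: stale caches extend downtime."""
print(lint_chunk(section) or "section is prechunked")
```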

Retrieval window

Definition: The retrieval window is the limited amount of source text the AI can bring into context for answering.

Example: The system may retrieve a few passages from 3–10 pages, not entire sites.

So what: If your key answer isn't early and self-contained, it may never enter the window.
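
A sketch of why the window matters, assuming passages arrive pre-scored for relevance and a fixed character budget; real systems budget in tokens and use their own rankers, so both choices here are simplifications:

```python
def fill_window(passages: list[tuple[float, str]], budget: int = 2000) -> list[str]:
    """Greedily pack the highest-scoring passages into a fixed budget.

    Budget is in characters for simplicity; real systems count tokens.
    Anything that doesn't fit simply never reaches the model.
    """
    window, used = [], 0
    for score, text in sorted(passages, reverse=True):
        if used + len(text) <= budget:
            window.append(text)
            used += len(text)
    return window

passages = [
    (0.91, "TTL controls how long resolvers cache DNS answers..."),
    (0.84, "Lower TTL before migrations to reduce downtime..."),
    (0.40, "A long historical aside about DNS in the 1980s... " * 60),
]
print(len(fill_window(passages)), "passages made it into the window")
```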

Attention window

Definition: The attention window is the smaller subset of retrieved text that most strongly influences the final answer.

Example: The model may focus heavily on the first clean definition it sees and ignore later paragraphs.

So what: Put the best answer first and keep it compact.

Entity salience

Definition: Entity salience is how clearly your page signals the primary entities involved (products, protocols, standards, tools, brands).

Example: A page that repeatedly names "DoH," "DoT," "TLS," "DNS resolver," and "RFC" has stronger salience than vague wording.

So what: High salience improves correct retrieval and reduces mismatch.
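
A crude way to eyeball salience is counting explicit entity mentions. The entity list and the count-based proxy below are assumptions for illustration; no ranker is known to score salience this simply:

```python
from collections import Counter

def entity_mentions(text: str, entities: list[str]) -> Counter:
    """Count explicit mentions of known entities (a crude salience proxy)."""
    lowered = text.lower()
    return Counter({e: lowered.count(e.lower()) for e in entities})

page = "DoH and DoT both encrypt DNS queries; DoH rides on HTTPS, DoT on TLS."
print(entity_mentions(page, ["DoH", "DoT", "TLS", "DNS resolver"]))
```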

Entity disambiguation

Definition: Entity disambiguation is removing ambiguity so the AI knows which "X" you mean.

Example: "Domain registry (Verisign) vs domain registrar (NameSilo)" prevents confusion with generic "registration."

So what: Disambiguation prevents wrong retrieval and wrong summaries.

Entity graph

Definition: An entity graph is the network of relationships among entities on your site (orgs, products, concepts, locations, attributes).

Example: Domain marketplace ↔ escrow ↔ transfer lock ↔ WHOIS privacy ↔ registrar policies.

So what: Strong graphs help AI systems understand coverage and trustworthiness.
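
An entity graph can be as simple as an adjacency map. The relationships below mirror the example above and are a sketch, not a schema:

```python
# Adjacency map mirroring the example above; edge labels name the relationship.
entity_graph = {
    "domain marketplace": [("uses", "escrow"), ("respects", "transfer lock")],
    "transfer lock":      [("governed by", "registrar policies")],
    "WHOIS privacy":      [("governed by", "registrar policies")],
    "escrow":             [("protects", "domain marketplace")],
}

def neighbors(entity: str) -> list[str]:
    """Entities one hop away, in either direction."""
    out = [target for _, target in entity_graph.get(entity, [])]
    out += [source for source, edges in entity_graph.items()
            for _, target in edges if target == entity]
    return out

print(neighbors("registrar policies"))
```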

Canonical cluster

Definition: A canonical cluster is a set of near-duplicate URLs where one should be the authoritative canonical.

Example: /pricing and /pricing?rid=123 are duplicates that should resolve to one canonical URL.

So what: Canonical clusters dilute signals and citations unless consolidated.

Param pollution

Definition: Param pollution is when URL parameters create indexable duplicates that split ranking and citation signals.

Example: Tracking params create multiple versions of the homepage that get cited instead of the clean URL.

So what: Fixing param pollution concentrates authority and improves consistent citations.
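
A sketch of consolidating a canonical cluster by stripping tracking parameters, using Python's urllib.parse. The parameter blocklist is an assumption; it should match whatever your own analytics stack appends:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

TRACKING_PARAMS = {"rid", "utm_source", "utm_medium", "utm_campaign"}  # assumed list

def canonicalize(url: str) -> str:
    """Drop tracking params so duplicate URLs collapse to one canonical."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in TRACKING_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), ""))  # also drops fragments

print(canonicalize("https://example.com/pricing?rid=123"))
# -> https://example.com/pricing
```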

Index bloat

Definition: Index bloat is having too many low-value pages indexed, reducing crawl efficiency and overall site quality signals.

Example: Infinite /whois?query= variants or thin tag pages being indexed.

So what: Less bloat concentrates crawl activity and quality signals on the pages that matter.

Crawl budget

Definition: Crawl budget is how much crawling search engines allocate to your site over time.

Example: A site with many duplicates wastes crawl budget and gets important pages refreshed less often.

So what: Better crawl efficiency helps content refreshes show up faster.

Re-crawl trigger

Definition: A re-crawl trigger is a meaningful change that increases the likelihood a page is revisited soon.

Example: Updating title/H1, adding new sections, improving internal links, and refreshing timestamps.

So what: Useful when you're trying to push updates into search and AI retrieval quickly.

Snippetability

Definition: Snippetability is how easily text can be lifted as a clean answer without extra context.

Example: "TTL controls how long resolvers cache DNS answers; lower TTL before migrations to reduce downtime."

So what: High snippetability increases selection in AI answers and featured snippets.

Answer box

Definition: An answer box is a 40–60 word summary placed near the top of the page, designed for copy/paste retrieval.

Example: A single paragraph that defines a concept, states the decision rule, and includes a constraint.

So what: This is the most "quotable" unit on a page.

Source grounding

Definition: Source grounding is when an AI ties its claims to retrieved sources rather than model memory.

Example: It cites a page for "DoH vs DoT" instead of guessing.

So what: Grounding is where citations come from; your goal is to be the grounded source.

Hallucination pressure

Definition: Hallucination pressure is when the system is forced to guess because sources are missing, vague, or contradictory.

Example: A page that never gives concrete steps causes the model to improvise.

So what: Reduce hallucination pressure with explicit steps, examples, and definitions.

Query framing

Definition: Query framing is structuring queries with entities, constraints, and context to force better retrieval.

Example: "DoH vs DoT for enterprise networks: performance, policy control, and security tradeoffs."

So what: Framing determines what sources get pulled in.

Query expansion

Definition: Query expansion is adding related terms and synonyms to broaden retrieval coverage.

Example: "WHOIS privacy" + "domain privacy" + "redacted WHOIS" + "ICANN policy."

So what: Expansion helps you cover variations and long-tail prompts.
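
A minimal expansion sketch using a hand-maintained synonym map; the map itself is an assumption, and production systems typically learn expansions from query logs or embeddings rather than a static dictionary:

```python
# Hand-maintained synonym map; real systems learn these from logs or embeddings.
SYNONYMS = {
    "whois privacy": ["domain privacy", "redacted whois", "icann policy"],
}

def expand(query: str) -> list[str]:
    """Return the original query plus its known variants."""
    return [query] + SYNONYMS.get(query.lower(), [])

for q in expand("WHOIS privacy"):
    print(q)
```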

Freshness bias

Definition: Freshness bias is a preference for newer sources when topics change quickly.

Example: A "2026 marketplace" update can outrank and out-cite a "2024" guide.

So what: Refresh top citation surfaces on a predictable cadence.

Content decay

Definition: Content decay is performance loss over time as information becomes outdated or competitors publish stronger answers.

Example: A "2025 trends" page stops being cited when 2026 sources exist.

So what: Refresh prevents decay and preserves citation gravity.

Provenance

Definition: Provenance is the traceable origin of a claim: who said it, where, and when.

Example: Citing official protocol docs, policy pages, or primary sources for rules and standards.

So what: Strong provenance increases trust and citation likelihood.

Attribution likelihood

Definition: Attribution likelihood is how often an AI will name your brand when it uses your content.

Example: A named framework and a clearly branded definition increase attribution vs generic phrasing.

So what: Attribution is the difference between "being used" and "being credited."

Distribution hub

Definition: A distribution hub is a page designed to send authority, crawl, and users into related pages.

Example: A DNS hub that links to TTL, MX priority, propagation, and troubleshooting guides.

So what: Hubs turn one strong surface into many strong pages.

Topical moat

Definition: A topical moat is owning a dense, interlinked cluster so retrieval and citations default to your site.

Example: Multiple DNS pages with consistent internal linking and distinct coverage.

So what: Moats make it harder for competitors to displace you.

Zero-click capture

Definition: Zero-click capture is winning exposure through AI answers/snippets even when users don't click.

Example: Being cited or named inside the AI response.

So what: Visibility still converts into brand demand and downstream conversions.

Citation share

Definition: Citation share is your proportion of citations in a topic cluster versus competitors.

Example: 40% of cited sources for "domain marketplaces" come from your site.

So what: Citation share is the KPI for AI visibility dominance.

Retrieval share

Definition: Retrieval share is how often your pages are retrieved as sources, even if not always cited.

Example: Your page is frequently fetched but another page is the one cited.

So what: Improving snippetability and provenance can turn retrieval into citations.
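
Both metrics reduce to simple ratios. The counts below are made up, and the retrieval-to-citation conversion framing is this page's own, not an industry standard:

```python
def share(yours: int, total: int) -> float:
    """Your share of a topic cluster, as a fraction."""
    return yours / total if total else 0.0

# Made-up counts for one topic cluster over a sampling period.
cited_yours, cited_total = 40, 100          # citation share: 40%
retrieved_yours, retrieved_total = 70, 100  # retrieval share: 70%

print(f"citation share:  {share(cited_yours, cited_total):.0%}")
print(f"retrieval share: {share(retrieved_yours, retrieved_total):.0%}")
# Retrieved 70 times but cited only 40: that gap is where snippetability
# and provenance improvements can turn retrieval into citations.
print(f"retrieval->citation conversion: {cited_yours / retrieved_yours:.0%}")
```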