We Didn't Wait. Here's What 18 Months of Actually Optimising for Generative Search Taught Us

While most of the industry was debating whether generative search would matter, we were already running live client programmes inside it. We've built our own tooling, developed a methodology, and seen what works — and what doesn't. This is a field report, not a forecast.

What We Started Testing — and Why

The Question We Asked

When ChatGPT changed how people find information, we didn't wait for consensus. We started tracking client visibility across ChatGPT, Gemini, Perplexity, and Claude — structured observation of topic scores, citation patterns, and share of voice inside actual LLM responses.

We wanted to understand not just whether our clients appeared in those responses, but why they appeared — and more importantly, why they didn't.

What We Tracked

➡️ Topic Scores

How prominently each client featured across key subject areas

➡️ Citation Patterns

Which content types earned model citations and which were ignored

➡️ Share of Voice

Competitive presence inside real LLM-generated answers
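The share-of-voice metric above can be approximated with a simple mention count. This is a minimal sketch, not our production tooling: the `share_of_voice` function, the brand names, and the sample answers are all hypothetical, and a real pipeline would also handle aliases, product names, and fuzzy matches.

```python
import re
from collections import Counter

def share_of_voice(answers, brands):
    """Count whole-word brand mentions across a set of LLM answers
    and return each brand's share of total mentions."""
    counts = Counter({b: 0 for b in brands})
    for text in answers:
        for brand in brands:
            # case-insensitive, whole-word match
            counts[brand] += len(
                re.findall(rf"\b{re.escape(brand)}\b", text, re.IGNORECASE)
            )
    total = sum(counts.values())
    return {b: (counts[b] / total if total else 0.0) for b in brands}

# Hypothetical LLM answers collected from tracked prompts
answers = [
    "Acme and Globex both offer this; Acme's docs are clearer.",
    "Globex is a common choice here.",
]
print(share_of_voice(answers, ["Acme", "Globex"]))
# → {'Acme': 0.5, 'Globex': 0.5}
```

The same counts, bucketed by topic, give a rough version of the topic scores described above.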

The Patterns Nobody Talks About

Definition Content Beats Landing Pages

A well-structured page that clearly defines a technology — how it works, where it fits — gets cited far more reliably than polished product marketing. LLMs aren't looking for sales copy. They're looking for content they can **reason from**.

Vendor Docs Outrank Marketing

Technical docs, integration guides, and release notes get cited at a rate that surprises most marketing teams. Documentation is specific, structured, and consistently updated — models treat it as a reliable source of ground truth.

Entity Clarity Over Keywords

Pages that clearly establish what something is — its category, relationships to adjacent concepts, key attributes — consistently outperform keyword-optimised pages. The model is building a coherent picture. Help it do that.

Third-Party Citations > Backlinks

In generative search, what matters is whether you're being mentioned, referenced, and discussed across sources models trust. A well-placed mention in an industry publication or analyst report does more for LLM visibility than a backlink from the same source. The model reads for signal, not links.

Structured Data Has a New Job

Schema markup still matters — but not primarily for rich snippets. It gives models a reliable, machine-readable version of what content is about. Think of it less as decoration and more as translation: helping the model understand your content on its own terms, independent of how a human reader might interpret it.
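As an illustration of "translation" rather than decoration, here is a hedged sketch of the kind of JSON-LD a definition-style page might carry. The page title, term, and dates are invented for the example; the `@type` values (`TechArticle`, `DefinedTerm`) are standard Schema.org types.

```python
import json

# Hypothetical JSON-LD for a definition page: explicit type, subject
# term, and description give a model a machine-readable summary of
# what the page is about, independent of the surrounding prose.
page_schema = {
    "@context": "https://schema.org",
    "@type": "TechArticle",
    "headline": "What Is Vector Search?",  # example title, not a real page
    "about": {
        "@type": "DefinedTerm",
        "name": "Vector search",
        "description": "Retrieval technique that ranks items by embedding similarity.",
    },
    "dateModified": "2025-01-15",
}

# Serialise for embedding in a <script type="application/ld+json"> tag
print(json.dumps(page_schema, indent=2))
```

The point is the explicitness: category, subject, and freshness are stated directly rather than left for the model to infer.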

The Four-Layer Visibility Stack

Optimising for generative search isn't a single problem — it's four overlapping problems, each operating at a different layer of how AI models surface information.

Most companies are only thinking about the **Retrieval Layer** — because it's the most visible. The other three are where the real leverage is.