How to feature?

Last week, I watched the Vryse pitch on Shark Tank. It is a search engine optimisation (SEO) and generative engine optimisation (GEO) platform.

SEO concerns itself with reverse engineering Google’s search algorithm–optimising keywords, backlinks, page structure, and so on to rank higher in search results. GEO is the natural corollary–optimising to be surfaced in LLM responses.

Fundamentally, LLM-search has two layers:

The first is pre-trained knowledge, which refers to what the model learned during training. This is a snapshot of the internet at a fixed point in time, baked into the model’s weights. If you’re a brand trying to influence this, you’re essentially trying to retroactively change history. Short of becoming Wikipedia-level ubiquitous, there’s not much you can do here.

The second is RAG–Retrieval Augmented Generation. When you ask Claude or ChatGPT a question that requires current information, the model calls a search API (Google, Bing), retrieves the top results, reads them, and synthesises an answer. This is where optimisation is actually possible.
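The RAG loop described above can be sketched in a few lines. Everything here is a toy stand-in: `search_web` fakes a search API with naive keyword overlap, and `build_context` just stuffs results into a prompt string–a real system would call an actual search API and pass the context to an LLM.

```python
# Toy sketch of the RAG flow: search -> retrieve top-k -> build context.
# search_web is a hypothetical stand-in for a Google/Bing API call.

def search_web(query: str, corpus: list[str], top_k: int = 10) -> list[str]:
    """Rank pages by naive keyword overlap with the query; keep the top_k."""
    terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda page: -len(terms & set(page.lower().split())))
    return ranked[:top_k]

def build_context(results: list[str]) -> str:
    """Concatenate retrieved results into the context the model will read."""
    return "\n\n".join(f"[{i + 1}] {r}" for i, r in enumerate(results))

corpus = [
    "Acme sells blue widgets with free shipping",
    "Widget care guide: clean your widget weekly",
    "Unrelated page about gardening tips",
]
results = search_web("where to buy blue widgets", corpus, top_k=2)
context = build_context(results)
print(context)
```

The point of the sketch is the cutoff: anything outside `top_k` never reaches `build_context`, which is exactly why pages off the first page of search results are invisible to the model.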

The RAG layer still runs on the existing search infrastructure. The OpenAI web search API documentation shows that models rely on the top 10 or 20 results from traditional search engines. If you aren’t on that first page, you don’t exist in the model’s context window.

This got me thinking: if GEO for pre-trained knowledge is a black box, and GEO for RAG mostly reduces to “do good SEO,” what is the actual value-add of a dedicated GEO platform?

There is not a lot of research on GEO yet, but a few hypotheses are emerging:

  1. Reasoning models can go deeper into fewer pages, not wider across more pages. If your website ranks among the top few search results, the quality and structure of your content matter more than ever.
  2. GEO rewards meaning density and semantic proximity (related concepts clustered together).
  3. While the average Google search query is around 4 words, the average generative query is 23 words. With LLMs, there is a behavioural shift from searching to consulting. GEO implies optimising to answer complex, conversational questions.
  4. Authority is verified through presence across key signals–communities (Reddit, Quora), YouTube transcripts, and citation-worthy journals/publications. Absence from the participatory web is treated as low-trust.
  5. Attribution is hard. Google’s ad-supported model incentivised moving users to third-party sites to sell more ads (measurable through click-through rate). LLMs are subscription-driven. They have no financial incentive to send users away (at the time of writing this); their value lies in providing a frictionless, all-in-one answer.

With GEO, the technical requirements of search are merging with the structural requirements of good writing. It would be interesting to see how this evolves and if we are able to deterministically optimise LLM responses just as we did with search.
