Is Your Brand Showing Up in AI Search Results?

Most businesses find out their brand is missing from AI answers by accident: a colleague mentions it, a client asks why a competitor keeps coming up in ChatGPT, or someone on the team runs a quick test and gets an uncomfortable result. The problem with that approach is that a single prompt test tells you almost nothing useful. You do not know if the result is typical or an outlier, which query types are the problem, how your position compares to competitors, or whether things are getting better or worse over time.
What you actually need is a quick diagnostic that gives you a repeatable baseline: enough to understand the scale of the problem and where to start. This article walks through exactly how to build one, what to measure and what your results mean once you have them.
Building your prompt set
The diagnostic starts with a set of 10 to 15 prompts written the way real buyers phrase questions to AI tools. These are not keyword strings; they are conversational queries, and the distinction matters because AI platforms respond to intent and phrasing in ways that keyword-based SEO tools cannot capture.
Your prompts need to cover five categories, because each one tests a different aspect of how AI models represent your brand.
Brand queries ask about your company directly. Examples: "What is [brand name]?" or "Tell me about [brand name]." These test whether AI can produce a confident, accurate description of your business. If the model hesitates, gives a vague answer or confuses you with another brand, you have a description availability problem regardless of how well your site ranks.
Comparison queries put your brand next to a specific competitor. Examples: "[Brand] vs [Competitor], which is better for [use case]?" or "How does [brand] compare to [competitor]?" These test whether AI has enough information about your brand to evaluate it side by side with another. Brands with thin third-party presence tend to get dismissed or ignored entirely in comparison answers.
Alternatives queries do not mention your brand at all. Examples: "What are the best alternatives to [competitor]?" or "What should I use instead of [competitor] for [specific need]?" These are the highest-stakes query type because they reflect genuine purchase intent and the user has no brand preference going in. If your brand does not appear here but competitors do, you are missing the moment buyers are most open to switching.
Feature queries ask about a specific capability relevant to your category. Examples: "Which [category] tool is best for [specific feature or workflow]?" These test whether AI associates your brand with the specific strengths you claim. A brand that markets itself on a particular capability should appear when that capability is the search criterion.
Use-case queries describe a problem or context without naming a category. Examples: "I need to [specific task], what do you recommend?" or "What tool would help a [job title] who needs to [goal]?" These test the depth of AI's understanding of your brand and whether it can match you to real buyer scenarios.
Build at least two prompts per category, keep them specific rather than generic and write them in the natural phrasing your actual customers use, not in marketing language.
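If you want to script the prompt generation so the set stays consistent between audit runs, a minimal sketch in Python (the brand, competitor, category and use-case names below are placeholders, and the templates are just examples of the phrasing described above):

```python
# Sketch: generate a diagnostic prompt set from per-category templates.
# All names below are illustrative placeholders, not recommendations.

BRAND = "Acme Analytics"
COMPETITOR = "ExampleCo"
CATEGORY = "product analytics"
USE_CASE = "tracking feature adoption"

# Two templates per category, mirroring the five categories above.
TEMPLATES = {
    "brand": [
        "What is {brand}?",
        "Tell me about {brand}.",
    ],
    "comparison": [
        "{brand} vs {competitor}, which is better for {use_case}?",
        "How does {brand} compare to {competitor}?",
    ],
    "alternatives": [
        "What are the best alternatives to {competitor}?",
        "What should I use instead of {competitor} for {use_case}?",
    ],
    "feature": [
        "Which {category} tool is best for {use_case}?",
        "What {category} tool has the strongest reporting features?",
    ],
    "use_case": [
        "I need help {use_case}, what do you recommend?",
        "What tool would help a product manager who needs {use_case}?",
    ],
}

def build_prompt_set(brand, competitor, category, use_case):
    """Fill every template and return (category, prompt) pairs."""
    prompts = []
    for cat, templates in TEMPLATES.items():
        for template in templates:
            prompts.append((cat, template.format(
                brand=brand, competitor=competitor,
                category=category, use_case=use_case)))
    return prompts

prompts = build_prompt_set(BRAND, COMPETITOR, CATEGORY, USE_CASE)
for cat, text in prompts:
    print(f"[{cat}] {text}")
```

Keeping the templates in one place makes it easy to re-run the identical set every cycle, which matters later when you compare results over time.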
Running the test
Test every prompt across ChatGPT, Claude and Gemini in the same session, on the same day, and record the full response for each. Do not rely on memory or notes summarising what you saw, because the details matter when you go back to interpret the results.
A few practical notes on how to run this cleanly. Use the standard conversational interface for each platform rather than API access, because that reflects the experience your potential customers actually have. Start a fresh conversation for each prompt so earlier answers do not influence later ones. If a platform offers web browsing or real-time search as an optional mode, enable it, because that is the mode most buyers use when researching products.
For each response, record:
whether your brand was mentioned at all;
whether it was mentioned with a clear, accurate description or just as a name in a list;
whether a specific competitor appeared in your brand's place;
the exact wording the platform used to describe your brand if it did appear.
A simple spreadsheet works well here: prompts down the rows, platforms across the columns, and your four data points per cell. Keep the raw response text in a separate tab so you can refer back to it.
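If you prefer to capture the results programmatically rather than by hand, one possible row shape for the spreadsheet is sketched below. The field names and example data are illustrative, not a prescribed schema:

```python
# Sketch: record each prompt/platform response as a structured CSV row.
# Field names and example values are made up for illustration.
import csv
import io

FIELDS = ["prompt", "platform", "brand_mentioned",
          "description_quality", "competitor_shown", "brand_wording"]

rows = [
    {"prompt": "What are the best alternatives to ExampleCo?",
     "platform": "ChatGPT", "brand_mentioned": True,
     "description_quality": "vague", "competitor_shown": "",
     "brand_wording": "a product analytics tool"},
    {"prompt": "What are the best alternatives to ExampleCo?",
     "platform": "Claude", "brand_mentioned": False,
     "description_quality": "", "competitor_shown": "OtherCo",
     "brand_wording": ""},
]

# Write to an in-memory buffer; swap in open("audit.csv", "w") for a file.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

One row per prompt-platform pair keeps the data easy to aggregate when you calculate the metrics in the next section.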
What to measure
Once you have your results documented, three metrics give you a meaningful picture of where you stand.
Mention rate is the percentage of prompts where your brand appears at all across all three platforms. If you run 15 prompts across ChatGPT, Claude and Gemini, that is 45 total opportunities. Count how many times your brand appears in any form and divide by 45. A brand with no AI visibility optimisation typically scores between 10% and 30% on this metric. A well-optimised brand in a competitive category can reach 60% to 80%.
Description quality looks at the prompts where your brand did appear and assesses how accurately and completely AI described it. Score each instance on a simple three-point scale: accurate and specific (the model correctly named your category, your differentiation and your primary audience), vague or generic (the model mentioned your brand but could not say much about it), or inaccurate (the model got something factually wrong). This score tells you whether the problem is visibility or representation. A brand can have decent mention rate but poor description quality, which means AI knows you exist but cannot make a case for you.
Missed prompts are the queries where a direct competitor appeared but your brand did not. Pull these out separately and list which competitor filled the gap on each prompt. This is the most actionable output of the audit because each missed prompt is a specific scenario where a buyer in your market is being directed away from you. According to a Microsoft Clarity study of 1,200+ publisher websites, visitors from AI platforms converted to sign-ups at 1.66%, compared to 0.15% from search and 0.13% from direct traffic. Those significantly higher conversion rates suggest AI platforms send more qualified, higher-intent visitors than traditional channels.
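All three metrics fall out of the same audit rows. A minimal sketch of the arithmetic, using made-up data in the shape of the spreadsheet described above:

```python
# Sketch: compute mention rate, description quality mix and missed
# prompts from audit rows. All data below is invented for illustration.

# (prompt, platform, brand_mentioned, quality, competitor_shown)
rows = [
    ("alternatives query", "ChatGPT", True,  "accurate", ""),
    ("alternatives query", "Claude",  False, "",         "OtherCo"),
    ("alternatives query", "Gemini",  False, "",         "OtherCo"),
    ("brand query",        "ChatGPT", True,  "vague",    ""),
    ("brand query",        "Claude",  True,  "vague",    ""),
    ("brand query",        "Gemini",  True,  "accurate", ""),
]

# Mention rate: mentions divided by total opportunities
# (number of prompts x number of platforms).
mention_rate = sum(r[2] for r in rows) / len(rows)

# Description quality: tally the three-point scale, counting only
# the rows where the brand actually appeared.
quality_counts = {}
for r in rows:
    if r[2]:
        quality_counts[r[3]] = quality_counts.get(r[3], 0) + 1

# Missed prompts: competitor appeared, brand did not.
missed = [(r[0], r[1], r[4]) for r in rows if not r[2] and r[4]]

print(f"Mention rate: {mention_rate:.0%}")
print(f"Quality mix:  {quality_counts}")
print(f"Missed prompts: {missed}")
```

With 15 prompts across three platforms the denominator becomes 45, exactly as described above; the example uses six rows only to keep it short.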
How to interpret your baseline
Your results will fall into one of three broad patterns, and each one points to a different priority.
Low mention rate across most prompt types. If your brand appears in fewer than 20% of prompts, the problem is foundational. AI platforms do not have enough external evidence to recommend you confidently. This is usually a combination of thin third-party presence and unclear category positioning. The priority fix is not content structure or technical SEO but building the external surface area that AI draws from: getting reviewed on G2 or Capterra, earning mentions in comparison articles and trade publications and making sure your brand appears in the same contexts as your competitors across trusted sources.
Mid mention rate with poor description quality. If your brand appears in 30% to 50% of prompts but the description quality score is mostly vague or generic, AI knows you exist but cannot represent you well enough to recommend you confidently. This is a content problem. Your site likely lacks a clear, extractable description of what you do and who you serve. The priority fix here is writing a description-ready block for your homepage and About page: under 80 words, covering your category, your differentiation, your primary audience and one honest constraint. This gives AI something to work with.
Reasonable mention rate but specific prompt types consistently missing. If you appear on brand queries and some comparison queries but are absent from alternatives and use-case prompts, your AI visibility is shallow. AI recognises your brand when it is named but cannot surface you unprompted in buying scenarios. This is the most common pattern for brands with decent SEO and some third-party presence. The priority fix is building the content formats that AI retrieves for intent-driven queries: alternatives pages, use-case specific articles and comparison pages written with factual, decision-criteria-first structure rather than promotional framing.
One important note on interpreting competitor gaps. If the same one or two competitors appear across multiple missed prompts and across multiple platforms, that is not coincidence. Those brands have established strong co-occurrence in the sources AI trusts most. Understanding why they appear, which surfaces they are mentioned on and how they are described, will tell you more about what to build than any generic AI visibility checklist.
Turning the baseline into a repeatable scorecard
A one-time audit is useful but limited. AI platforms update their retrieval behaviour, training data and citation patterns regularly, and a snapshot from one week may look different four weeks later. The audit process described here is designed to be repeatable because the value compounds when you run it consistently.
Google's Search Central documentation on how AI Overviews select sources notes that content eligibility is reassessed on an ongoing basis as pages are recrawled and updated. The same logic applies to other AI platforms: your position in AI answers is not fixed and can improve relatively quickly when you address the right signals.
Run the same prompt set on the same platforms every two to four weeks. Track your mention rate, description quality score and number of missed prompts over time. When you make a change, whether that is adding a comparison page, updating your About page or earning a review on a third-party platform, note the date and watch whether the relevant metrics move in the following cycle.
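Tracking cycle-over-cycle movement can be as simple as comparing the latest run against the baseline. A sketch, with invented dates and numbers:

```python
# Sketch: track audit metrics across cycles and report movement.
# Dates and figures are illustrative, not real results.

cycles = [
    {"date": "2024-01-05", "mention_rate": 0.22, "missed": 11},
    {"date": "2024-02-02", "mention_rate": 0.29, "missed": 9},
    {"date": "2024-03-01", "mention_rate": 0.36, "missed": 7},
]

baseline, latest = cycles[0], cycles[-1]
delta = latest["mention_rate"] - baseline["mention_rate"]

print(f"Mention rate: {baseline['mention_rate']:.0%} -> "
      f"{latest['mention_rate']:.0%} ({delta:+.0%})")
print(f"Missed prompts: {baseline['missed']} -> {latest['missed']}")
```

Append one dictionary per audit cycle and annotate the date of each content change you make, so you can see which interventions preceded which movements.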
It is worth being clear about what this diagnostic is and what it is not. Ten to fifteen prompts across five categories will tell you whether you have a problem and roughly where it sits. A full AI visibility audit starts at a minimum of 50 prompts (we recommend 100), covers deep buyer behaviour analysis across your specific category, maps competitor presence across every query type and produces a prioritised action plan tied to real purchase scenarios. If your baseline results point to a significant gap and you want to understand the full picture, we can help.
Questions we get asked
How many prompts do I need for this diagnostic to be meaningful?
10-15 prompts across five categories is enough to get a directional read: you will see whether a problem exists and roughly where it sits. Think of it as a first look, not a full picture. If your results point to a real gap, a proper audit starts at a minimum of 50 prompts (we recommend 100) and goes significantly deeper into buyer behaviour and competitor patterns across your category.
Should I test the same prompts every time I run the audit?
Yes, consistency in the prompt set is what makes week-over-week or month-over-month comparison meaningful. If you change the prompts, you change what you are measuring. Add new prompts only when you have a specific reason, for instance when you launch a new product or enter a new market, and keep the original set running in parallel so you maintain a continuous baseline.
What does it mean if my brand appears on ChatGPT but not on Claude or Gemini?
It means the sources that ChatGPT draws from for your category are covering your brand, but the sources Claude and Gemini use are not. Claude places high weight on structured, recently updated content retrieved via its search integration. Gemini relies heavily on Google's knowledge graph and entity signals. If you are missing from one or both, check whether your Organisation schema is complete, whether your brand appears on Google Business Profile and whether your content has clear publication and update dates.
What if my brand does not appear on any prompts at all?
That result is actually the clearest baseline you can have because it tells you the problem is external presence rather than content structure or technical access. Before investing in content changes, focus on building third-party mentions: get listed and reviewed on the platforms relevant to your category, reach out to publications that run comparison roundups in your space and make sure your brand name appears in the same contexts as your direct competitors across sources AI trusts.

