AI Visibility Case Study: Prompt-Level Before–After Results
An anonymised, evidence-led case study showing how a global FMCG brand increased its AI visibility rate from 12% to 38%, and its citation rate from 4% to 22%, across a 240-prompt purchase-intent sample using prompt-level monitoring, answer-ready content, and feed normalisation.
How a Consumer Brand Improved Visibility in AI Answers
A global consumer brand in the fast-moving consumer goods sector closed a major AI visibility gap and significantly increased its presence in purchase-stage answers.
Across a 240-prompt purchase-intent sample, the brand’s AI visibility rate increased from 12% to 38% within eight weeks of optimisation. Its citation rate rose from 4% to 22%, while competitor presence fell by 18 percentage points. The result was stronger brand prominence in AI-driven product discovery and a shorter path from recommendation to action in AI-assisted shopping journeys.
Why the Visibility Gap Mattered
The brand was underrepresented in high-intent prompts where consumers were actively looking for buying guidance, such as:
“best laundry detergent for sensitive skin, family of five, under $20”
In these moments, competitors were more likely to appear in AI-generated recommendations or be cited directly in answers. That meant the brand had fewer opportunities to influence purchasing decisions at the exact moment consumers were ready to compare products and act.
For the commercial team, the impact was clear:
- Fewer product-discovery opportunities in AI-driven recommendation flows
- Lower referral potential when AI answers did not cite brand-owned pages
- Weaker attribution for conversions influenced by AI-assisted journeys
Client Profile
- Sector: Consumer packaged goods (FMCG)
- Region: Global, with primary focus on the United States, United Kingdom, Australia, and Canada
- Business context: The brand depends on strong product discovery at the point of consideration. When AI answers excluded the brand or favoured competitors, consideration shifted quickly elsewhere.
What Was Measured
To make the findings transparent, the team used a simple, prompt-level measurement framework.
Core Metrics
- Visibility rate: The percentage of prompts where the brand appeared in the primary AI-generated answer
- Citation rate: The percentage of prompts where the answer included a citation or link to brand-owned content
- Share of brand mentions: The brand’s share of all tracked brand mentions across the prompt set
- Competitor mention rate: The percentage of responses where a named competitor appeared in the top answer
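The four core metrics can be computed directly from prompt-level samples. The sketch below is a minimal illustration, not the team's actual tooling; the `PromptResult` fields are assumed names for what each sampled answer would need to record.

```python
from dataclasses import dataclass

@dataclass
class PromptResult:
    """One sampled AI answer for one tracked prompt (field names are illustrative)."""
    brand_appeared: bool       # [BRAND] appeared in the primary answer
    brand_cited: bool          # answer cited or linked brand-owned content
    brand_mentions: int        # count of [BRAND] mentions in the answer
    total_mentions: int        # count of all tracked brand mentions (brand + competitors)
    competitor_appeared: bool  # a named competitor appeared in the top answer

def visibility_rate(results):
    return sum(r.brand_appeared for r in results) / len(results)

def citation_rate(results):
    return sum(r.brand_cited for r in results) / len(results)

def share_of_mentions(results):
    total = sum(r.total_mentions for r in results)
    return sum(r.brand_mentions for r in results) / total if total else 0.0

def competitor_mention_rate(results):
    return sum(r.competitor_appeared for r in results) / len(results)
```

Run against the full 240-prompt sample in each window, these four functions produce the baseline and post-optimisation figures reported below.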
Sample and Time Windows
- Prompt sample: 240 purchase-intent prompts
- Prompt sources: Site search logs, cart-abandon triggers, and top-converting long-tail queries
- Geographies: United States, United Kingdom, Australia, Canada
- Surfaces tested: Public LLM-driven search and assistant experiences, including GPT-style chat interfaces, Google AI Overviews/SGE-style results, Microsoft Copilot-style commerce flows, and retrieval-focused assistants
- Baseline window: Jan 1–Jan 31, 2026
- Optimisation period: Feb 1–Mar 15, 2026
- Post-optimisation tracking: Mar 16–Apr 6, 2026
To protect confidentiality, the brand is shown as [BRAND], and competitors are redacted as [COMP-A], [COMP-B], and [COMP-C].
Control Rules and Known Limits
No paid promotions or media-targeting changes were introduced during the optimisation period. The only intentional changes were to product feeds and on-site content.
As with any AI-focused measurement, some limitations remained:
- Model outputs vary by platform and over time
- Rankings in assistant environments can shift quickly
- Provider-side filters may suppress product or brand mentions in certain cases
Before and After: The Visibility Shift
The most important movements were concentrated in purchase-intent prompts, where product attributes, price sensitivity, and comparative language strongly influence recommendations.
| Metric | Baseline (Jan 1–31) | Post (Mar 16–Apr 6) | Change (pp) | Why it matters |
|---|---|---|---|---|
| Visibility rate | 12% (29 / 240) | 38% (91 / 240) | +26 pp | Greater brand presence in AI answers improves the chance of being considered |
| Citation rate | 4% (10 / 240) | 22% (53 / 240) | +18 pp | Citations create stronger paths to traffic, trust, and attribution |
| Share of brand mentions | 9% | 42% | +33 pp | Higher share reduces competitor-first outcomes |
| Competitor mention rate | 54% | 36% | -18 pp | Fewer competitor mentions mean fewer lost moments of consideration |
What Changed in AI Responses
Two representative prompts show how the shift appeared in practice.
Prompt A
- Prompt: “best gentle laundry detergent for large families under $20”
- Baseline excerpt: “Popular options include [COMP-A] and [COMP-B]. Consider scent-free formulas for sensitive skin.”
- Post-optimisation excerpt: “For large families seeking gentle, cost-effective detergent, [BRAND] offers a concentrated formula with recommended dosing; see the manufacturer product page for pack sizes and retailer links.”
Prompt B
- Prompt: “detergent safe for cloth diapers and baby skin”
- Baseline excerpt: “Many users recommend [COMP-C] or fragrance-free store brands. Check labels for enzymes.”
- Post-optimisation excerpt: “[BRAND] and [COMP-C] are commonly recommended; [BRAND] is noted for its enzyme-free, dermatologist-tested formula and has retailer pages that list hypoallergenic certification.”
The improvement was not about one isolated mention. It was about making the brand more legible, relevant, and citeable to AI systems across the moments that matter most.
What Drove the Lift
The optimisation work focused on practical changes that made product information easier for AI systems to retrieve, interpret, and cite.
1. Structured, Answer-Ready Content
Product and support pages were updated with concise, high-value summaries near the top of the page. These short sections explained:
- Primary use case
- Key benefits
- Important product attributes
- Clear path to purchase
These summaries were designed to be easily extracted into AI-generated answers.
2. Stronger Machine-Readable Signals
Schema.org markup was added or standardised across product and help content, including:
- Product
- FAQPage
- HowTo
This gave retrieval systems clearer, more consistent signals about product facts and use cases.
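As a sketch of what that markup can look like, the snippet below builds a minimal schema.org `Product` payload and serialises it to JSON-LD for embedding in a `<script type="application/ld+json">` tag. Every field value here is invented for illustration; the GTIN is a generic example number, not a real SKU.

```python
import json

# Hypothetical product facts; keys follow the schema.org/Product vocabulary.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "[BRAND] Sensitive Skin Laundry Detergent",
    "brand": {"@type": "Brand", "name": "[BRAND]"},
    "gtin13": "4006381333931",  # illustrative GTIN-13, not a real product code
    "description": "Enzyme-free, dermatologist-tested detergent for sensitive skin.",
    "offers": {
        "@type": "Offer",
        "price": "18.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Emit the JSON-LD payload for the product page template.
print(json.dumps(product_jsonld, indent=2))
```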
3. Prompt-Level Prioritisation
Rather than updating every page at once, the team focused first on the prompts with the highest commercial value. The top 60 prompts were prioritised based on historical conversion potential and competitor visibility.
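One way to operationalise that prioritisation is a simple scoring pass over the prompt library: rank highest the prompts where conversion potential is strong and competitors are visible but the brand is not. The weights and field names below are assumptions for illustration, not the team's actual model.

```python
def priority_score(prompt: dict) -> float:
    """Score a prompt higher when conversion potential is high and a
    competitor-visibility gap exists. Weights (0.7/0.3) are illustrative."""
    gap = prompt["competitor_visibility"] * (1.0 - prompt["brand_visibility"])
    return 0.7 * prompt["conversion_potential"] + 0.3 * gap

prompts = [
    {"text": "best gentle detergent under $20", "conversion_potential": 0.9,
     "brand_visibility": 0.1, "competitor_visibility": 0.8},
    {"text": "detergent safe for cloth diapers", "conversion_potential": 0.6,
     "brand_visibility": 0.0, "competitor_visibility": 0.5},
    {"text": "how laundry detergent is made", "conversion_potential": 0.1,
     "brand_visibility": 0.2, "competitor_visibility": 0.2},
]

# Keep only the top 60 prompts for the first optimisation pass.
top = sorted(prompts, key=priority_score, reverse=True)[:60]
```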
4. Product Feed Normalisation
Core feed fields such as gtin, mpn, brand, and shortDescription were cleaned and standardised. This reduced mismatch errors in systems that rely on structured product data for retrieval and attribution.
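A feed-normalisation pass of this kind can be sketched as follows. The field names mirror those mentioned above; the cleaning rules (whitespace collapse, uppercased MPN, a 160-character description cap) are assumptions for illustration, though the GTIN-13 check-digit validation follows the standard GS1 algorithm.

```python
def gtin13_is_valid(gtin: str) -> bool:
    """Standard GS1 check-digit validation for a 13-digit GTIN."""
    if len(gtin) != 13 or not gtin.isdigit():
        return False
    digits = [int(c) for c in gtin]
    # Weights alternate 1, 3 from the left across the first 12 digits.
    checksum = sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits[:12]))
    return (10 - checksum % 10) % 10 == digits[12]

def normalise_row(row: dict) -> dict:
    """Clean the core feed fields so downstream retrieval sees consistent values."""
    out = dict(row)
    out["gtin"] = row.get("gtin", "").strip().replace(" ", "")
    out["mpn"] = row.get("mpn", "").strip().upper()
    out["brand"] = row.get("brand", "").strip()
    # Collapse internal whitespace and cap length (160 chars is an assumption).
    out["shortDescription"] = " ".join(row.get("shortDescription", "").split())[:160]
    out["gtin_valid"] = gtin13_is_valid(out["gtin"])
    return out
```

Flagging invalid GTINs rather than silently dropping rows lets the team fix mismatches at the source, which is what reduces retrieval and attribution errors downstream.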
5. Better Citation Targets
Brand-owned articles and retailer pages were aligned through cleaner canonical signals and more consistent landing page markup. This improved the chances that AI systems would cite the right destination.
These changes were deliberately simple and practical: clear answers near the top of the page, better structured data, cleaner product feeds, and more reliable citation destinations.
How Quadrant Contributed
Quadrant’s role was to support the process with continuous monitoring, prompt-level visibility analysis, and prioritised recommendations.
Its contribution included:
- Building a reproducible prompt library from first-party search and conversion data
- Running cross-surface sampling to identify where the brand was absent or weakly cited
- Ranking actions by likely commercial impact
- Supplying dashboards that helped content, product, and engineering teams focus on the highest-return changes first
Rather than claiming broad causality, the work focused on evidence, measurement, and execution discipline.
What This Means for Retail, FMCG, and E-Commerce Teams
This case offers a useful model for brands that want to improve their presence in AI-assisted product discovery.
Measure at Prompt Level
High-value buying moments are often hidden inside long-tail prompts. Aggregate ranking metrics can miss them. A prompt set built from first-party behaviour is far more actionable.
Prioritise Answer-Ready Content
Small, high-quality updates near the top of key pages often move faster in AI-driven environments than large-scale content rewrites.
Make Citation Easy
If you want AI systems to cite your brand, give them clean, canonical, machine-readable destinations.
Treat AI Visibility as Cross-Functional
Success depends on content, product feeds, technical SEO, and site engineering working together. AI visibility is not owned by one team alone.
Sample Repeatedly
AI outputs fluctuate. Baseline and post-change windows should run long enough to smooth short-term variation and reveal real movement.
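In practice, that means re-running the same prompt set several times within each window and reporting the smoothed figure rather than any single day's reading. A minimal sketch of that smoothing, with invented run data:

```python
from statistics import mean

def smoothed_visibility(daily_runs):
    """Average visibility across repeated runs of the same prompt set,
    so one volatile day does not drive the headline number."""
    return mean(sum(run) / len(run) for run in daily_runs)

# Three illustrative runs of the same 5-prompt set (True = brand appeared).
runs = [
    [True, True, True, False, False],   # 0.6 on a strong day
    [True, True, False, False, False],  # 0.4
    [True, False, False, False, False], # 0.2 on a weak day
]

smoothed = smoothed_visibility(runs)  # single-day readings swing 0.2-0.6
```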
Limitations and Credibility Notes
The results are strong, but they should be interpreted realistically.
- AI-generated surfaces change frequently
- Some providers apply policy filters that affect brand visibility
- Citation growth does not automatically translate into proportional traffic growth
- Retailer experience and platform UX still shape click-through and conversion behaviour
Even with those caveats, the measurement framework was controlled enough to show a meaningful, prompt-level shift in brand visibility.
Final Takeaway
This case shows that AI visibility can be improved through focused, measurable changes to content and product data.
By identifying where the brand was missing, prioritising high-intent prompts, and making on-site and feed content easier for AI systems to understand and cite, the brand materially increased its presence in AI-generated product discovery.
The results are not a guarantee for every brand or every category. But they do offer a practical model: treat AI answers as a discoverability channel, measure performance at the prompt level, and optimise the assets that AI systems actually use.