Scrunch recommends tracking brand presence, citations, referral traffic, AI agent traffic, and share of voice versus competitors as key performance indicators.

For example, a Scrunch user establishing initial benchmarks and baselines should measure:
Brand presence
Brand mentions and citations in AI responses across platforms, as well as by persona, country, topic, individual platform, funnel stage, or custom tag.
Citations
How often their brand is cited in AI responses to business-relevant prompts, as well as which sources are most frequently cited for the prompts they care about.
Referral traffic
Website traffic referred from AI responses to target prompts.
AI agent traffic
How often LLMs send agents to access content on their website for training, indexing, and retrieval purposes.
Share of voice vs. competitors
Brand presence and citations versus competitors for key prompts.
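As an illustration of the share-of-voice metric above, the sketch below computes each brand's share from raw citation counts. The brand names and counts are hypothetical, not Scrunch data, and this is a simplified calculation rather than Scrunch's exact method.

```python
# Hypothetical citation counts for AI responses to a set of key prompts.
# Brand names and numbers are illustrative only.
citation_counts = {
    "YourBrand": 42,
    "CompetitorA": 67,
    "CompetitorB": 31,
}

# Share of voice: each brand's citations as a fraction of all citations
# observed across the tracked prompt set.
total = sum(citation_counts.values())
share_of_voice = {brand: count / total for brand, count in citation_counts.items()}

for brand, share in sorted(share_of_voice.items(), key=lambda kv: -kv[1]):
    print(f"{brand}: {share:.1%}")
```

Tracking this ratio over time, rather than raw citation counts alone, shows whether gains are outpacing competitors or merely riding overall growth in AI search volume.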
There aren’t universal benchmarks for AI search, so to set realistic targets, Scrunch recommends:
Starting with the current baseline
Measure existing performance across business-relevant prompts and aim for gradual improvement.
Comparing to competitors
Identify top competitors that consistently appear in AI responses to target prompts and try to gain share of voice.
Focusing on high-priority topics
Select a small number of key areas to improve performance in and track improvement over multiple weeks.
Scrunch recommends monitoring AI search trend data, such as brand mentions and citations, consistently over 2-3 week periods to distinguish real trends from one-off fluctuations.
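One simple way to separate a sustained trend from a one-off spike is to smooth daily counts with a rolling mean. The sketch below assumes a hypothetical two weeks of daily brand-mention counts and a 7-day window; both are illustrative choices, not Scrunch's internal method.

```python
# Hypothetical daily brand-mention counts over two weeks.
# Day 5 contains a one-off spike (40) that a rolling mean dampens.
daily_mentions = [12, 14, 13, 15, 40, 14, 16, 18, 19, 21, 22, 24, 25, 27]

window = 7  # assumed smoothing window, in days
rolling_means = [
    sum(daily_mentions[i - window + 1 : i + 1]) / window
    for i in range(window - 1, len(daily_mentions))
]

# A sustained rise in the smoothed series suggests a real trend;
# the day-5 spike alone barely moves the rolling mean.
print(f"first smoothed value: {rolling_means[0]:.1f}")
print(f"last smoothed value:  {rolling_means[-1]:.1f}")
```

Comparing the smoothed series at the start and end of the monitoring period gives a more trustworthy read than reacting to any single day's number.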
Scrunch recommends estimating prompt tracking needs using the following formula: X [# of core topics] x Y [5-8 questions related to each topic] = Z [# of AI search prompts to track]. The primary goal is to get a representative sampling of data across all customer journey stages via a mix of branded and non-branded prompts.
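The X x Y = Z sizing formula above can be sketched in a few lines. The topic list and the choice of 6 questions per topic (within the suggested 5-8 range) are illustrative assumptions.

```python
# Sketch of the prompt-sizing formula: X core topics x Y questions each
# = Z prompts to track. Topics and the per-topic count are hypothetical.
core_topics = ["pricing", "integrations", "security", "onboarding"]  # X = 4
questions_per_topic = 6  # Y, chosen from the suggested 5-8 range

prompts_to_track = len(core_topics) * questions_per_topic  # Z
print(f"{len(core_topics)} topics x {questions_per_topic} questions "
      f"= {prompts_to_track} prompts to track")
```

In practice, those Z prompts would then be split between branded and non-branded phrasings across customer journey stages so the sample stays representative.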