How to measure AI share of voice

Use this guide if you need to measure share of voice in AI search results, track competitive presence in LLMs, benchmark competitors across multiple answer engines, evaluate share of voice within a topic, or identify where to improve AI search performance.

Quick answer

Track AI share of voice in Scrunch across three places:
- Home for aggregate competitive presence and top brands
- Prompts Monitoring for topic- and prompt-level presence by platform
- Citations for source-level share of voice and influence

AI share of voice is your competitive presence: the percentage of AI responses that explicitly mention your brand compared to competitors across tracked prompts. Scrunch calculates this using real AI answers, not estimates.
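To make the calculation concrete, here is a minimal sketch of the metric as described above: the percentage of tracked AI responses that explicitly mention each brand. The brand names and response data are hypothetical, and this is an illustration of the definition, not Scrunch's internal implementation.

```python
from collections import Counter

# Hypothetical tracked AI responses: for each prompt/platform run,
# the set of brands explicitly mentioned in the answer.
responses = [
    {"Acme", "Globex"},
    {"Acme"},
    {"Globex", "Initech"},
    {"Acme", "Initech"},
    {"Globex"},
]

def share_of_voice(responses):
    """Percent of responses that mention each brand at least once."""
    mentions = Counter()
    for brands in responses:
        mentions.update(brands)  # sets, so each brand counts once per response
    total = len(responses)
    return {brand: round(100 * count / total, 1)
            for brand, count in mentions.items()}

print(share_of_voice(responses))
# Acme appears in 3 of 5 responses -> 60.0
```

Because the metric is computed over real AI answers, the denominator is the number of tracked responses, not an estimated search volume.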

Start here

Why measure AI share of voice: LLMs decide which brands to include in answers. Share of voice shows how often you’re chosen versus competitors, and where to improve across prompts, topics, and platforms.

Before you begin:
- Log in to Scrunch or start a 7‑day free trial.
- Add your brand and domains in the AI Context tab, and customize brand context including alt names, competitors, personas, and key topics.
- Add prompts to track, or let Scrunch generate them from your brand or SEO keywords.

How Scrunch measures AI share of voice

Step 1: Track aggregate AI share of voice in the Home tab

Pro tip: Prioritize high‑intent, non‑branded prompts aligned to core product categories. Identify and fill content gaps by outperforming what AI engines currently cite.

Step 2: Measure share of voice by topic and prompt

Pro tip: Start with ChatGPT and Google AI Overviews for reach, then expand to all supported platforms for complete coverage.

Step 3: Track citation‑level share of voice in the Citations tab

Why it matters: Citations influence what LLMs say and who they mention. Increasing your citation footprint improves brand presence and answer accuracy.

Measure the full impact of AI on brand visibility

Beyond share of voice, measure the metrics that matter in AI search:
- Brand presence: how often you’re mentioned across tracked prompts and platforms, including placement (top/middle/bottom) in answers
- Citations: which sources LLMs use to answer, your citation share, and where to earn influence
- LLM referral traffic: sessions and conversions from AI engines like ChatGPT, Perplexity, and Copilot; connect Scrunch’s referrals tool to your Google Analytics property to track AI referral traffic
- Agent traffic: how often AI bots like GPTBot, ClaudeBot, and PerplexityBot crawl your site, an early signal of visibility. Learn more in Track AI traffic on your site with Scrunch
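As a rough illustration of the agent-traffic signal above, you can spot AI crawler visits in your own server access logs by matching the user-agent tokens these bots send. This is a minimal sketch, independent of Scrunch: the log lines are hypothetical, and a production setup would parse the user-agent field properly rather than substring-matching whole lines.

```python
from collections import Counter

# Tokens that appear in the user-agent headers of common AI crawlers.
AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot"]

def count_ai_crawls(log_lines):
    """Tally hits per AI bot from raw access-log lines
    (any format that includes the user-agent string)."""
    hits = Counter()
    for line in log_lines:
        for bot in AI_BOTS:
            if bot in line:
                hits[bot] += 1
    return hits

# Hypothetical access-log excerpt
sample = [
    '1.2.3.4 - - [01/Jan/2025] "GET / HTTP/1.1" 200 "-" "Mozilla/5.0; GPTBot/1.0"',
    '5.6.7.8 - - [01/Jan/2025] "GET /blog HTTP/1.1" 200 "-" "PerplexityBot/1.0"',
    '9.9.9.9 - - [01/Jan/2025] "GET / HTTP/1.1" 200 "-" "Mozilla/5.0 (browser)"',
]
print(count_ai_crawls(sample))
# GPTBot and PerplexityBot each crawled once; no ClaudeBot visits
```

Rising counts here are an early visibility signal: crawls precede your content being used in training, indexing, or retrieval.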

How Scrunch provides AI visibility analytics

Scrunch is built to measure, compare, and improve brand visibility in AI search:
- Multi‑LLM coverage across eight major AI platforms (ChatGPT, Claude, Gemini, Perplexity, Google AI Mode, Google AI Overviews, Meta AI, Microsoft Copilot)
- Competitive benchmarking that shows your presence versus competitors by platform, topic, prompt, and time
- Placement visibility to see whether your brand appears at the top, middle, or bottom of AI responses
- Citation analytics with ownership breakdown, top cited domains, and topic mapping to identify where to win or replace sources
- Agent traffic monitoring to identify training, indexing, and retrieval visits from AI models
- LLM referral tracking by connecting to your analytics to quantify downstream sessions and conversions
- Filtering across dimensions like persona, funnel stage, geography, branded vs. non‑branded, topics, and tags

How Scrunch compares for competitive benchmarking in AI search

What’s next

Start improving your AI share of voice:
- Compare to competitors and target categories where they consistently appear
- Run a Site Audit to fix access, delivery, or content issues
- Focus on citations first; they’re the strongest lever for visibility. See How to track citations in AI search
- Update or create content to outperform currently cited sources with better structure, data, and intent alignment

Not using Scrunch yet? Start a free trial or book a demo to see competitive benchmarking and AI share‑of‑voice tracking in action.

FAQs

What benchmarks or baselines are useful?
Track brand presence, citations, AI referral traffic, AI agent traffic, and share of voice versus competitors. Start from your current baseline and aim for steady improvement. See the benchmarks FAQ.

How can I see if visibility is improving?
Monitor brand mentions and citations over consistent 2–3 week periods to identify real trends. See the trends FAQ.

How many prompts should I track?
Multiply your core topics by 5–8 questions each to get a representative prompt count across the journey; for example, 10 core topics × 6 questions = 60 prompts. See the prompts FAQ.

Which AI platforms can Scrunch track?
ChatGPT, Claude, Gemini, Perplexity, Google AI Mode, Google AI Overviews, Meta AI, and Microsoft Copilot. Support for Grok is coming soon. See the platforms FAQ.

Does Scrunch do competitive benchmarking across multiple answer engines?
Yes. Compare presence across platforms, prompts, and citations, with full responses for review. See the benchmarking FAQ.