How to tell if your AI search visibility is improving (or slipping)
Scrunch recommends tracking a consistent set of AI search metrics over time and comparing like-for-like periods (for example, Q2 vs. Q1). To avoid noise from one-off fluctuations, monitor trends in 2–3 week windows before calling a change a true signal.
What to track and how often
Focus on AI-native indicators that reflect how often and how well you show up in LLM answers:
Brand presence: Mentions and citations in AI responses across platforms, with the ability to slice by persona, country, topic, individual platform, funnel stage, or custom tag.
Citations: Frequency and source mix for the URLs AI models cite for your target prompts.
Share of voice vs. competitors: Your presence and citations compared to competitors on the same prompts.
Referral traffic: Traffic from AI responses to your site.
AI agent traffic: Visits from LLM user agents that crawl your content for training, indexing, and retrieval.
Track these metrics weekly and evaluate trends over 2–3 week periods to separate real movement from variability. For a deeper setup, see the how-to guide on tracking brand presence in AI search.
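As a rough illustration of the AI agent traffic metric above (outside of Scrunch itself), LLM crawler visits can often be spotted in server access logs by their user-agent strings. The agent names and the combined-log format below are assumptions for this sketch; real user agents vary by platform and change over time.

```python
import re
from collections import Counter

# Hypothetical list of LLM user-agent substrings; actual agent
# names vary by platform and change over time.
AI_AGENTS = ["GPTBot", "ChatGPT-User", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def count_ai_agent_hits(log_lines):
    """Count visits per AI agent in combined-format access log lines."""
    counts = Counter()
    for line in log_lines:
        # The user agent is the last quoted field in the combined log format.
        quoted = re.findall(r'"([^"]*)"', line)
        if not quoted:
            continue
        user_agent = quoted[-1]
        for agent in AI_AGENTS:
            if agent in user_agent:
                counts[agent] += 1
    return counts

sample = [
    '1.2.3.4 - - [01/May/2025:10:00:00 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; GPTBot/1.0)"',
    '5.6.7.8 - - [01/May/2025:10:01:00 +0000] "GET /blog HTTP/1.1" 200 2048 "-" "PerplexityBot/1.0"',
]
print(count_ai_agent_hits(sample))  # e.g. Counter({'GPTBot': 1, 'PerplexityBot': 1})
```

Aggregating these counts weekly gives a trend line you can compare against the 2–3 week windows described above.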
How to compare periods in Scrunch
Choose a representative prompt set that covers your core topics and journey stages (mix of branded and non‑branded). If you’re unsure how many to use, see how many prompts to track.
In Monitoring & Insights, view brand presence, competitive presence, prompt placement, sentiment, and citations.
Set your date range to the current period (for example, Q2), then switch to the comparison period (Q1) and compare the same views and filters.
Use filters (persona, market, platform, topic) to isolate where performance truly changed and where it held steady.
Check Insights for issues: Scrunch flags problems tied to content relevance, metadata inconsistencies, and other potential blockers.
Identify citation gaps: Review which third-party or competitor pages AI cites for your key prompts to surface content gaps and opportunities.
Prioritize fixes: Update or create content aligned to the sources models already trust. Start with one or two high-priority topics.
Verify technical access: Ensure AI user agents can crawl your site (robots.txt, no unexpected blocks) and that pages load correctly for automated visitors.
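To illustrate the robots.txt part of the last step, Python's standard-library urllib.robotparser can verify whether a given AI user agent is allowed to fetch a page. The agent names and rules below are hypothetical examples for the sketch, not a recommended policy; in practice you would fetch your live robots.txt.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content; in practice, fetch the live file
# from https://yoursite.com/robots.txt and test the agents you care about.
robots_txt = """
User-agent: GPTBot
Allow: /

User-agent: BadBot
Disallow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

# Check whether specific AI user agents may fetch a page.
print(parser.can_fetch("GPTBot", "/pricing"))  # True
print(parser.can_fetch("BadBot", "/pricing"))  # False
```

Running a check like this for each AI user agent you expect catches accidental blocks before they show up as missing citations.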
How Scrunch measures impact versus competitors
Scrunch provides competitive analytics designed for AI search:
Track brand presence and citations for your prompts side-by-side with competitors to quantify share of voice.
Compare performance across AI platforms in aggregate, then filter by individual model/platform to see where visibility diverges.
Break down results by persona, market, topic, and funnel stage to pinpoint strengths and weaknesses relative to peers.
This gives you a clear picture of where you’re winning (or losing) attention in LLM answers—not just whether you appear, but how often and with which sources.
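One simple way to think about the share-of-voice number described above is appearances divided by total prompt runs. This is a sketch under that assumption, not Scrunch's actual formula, which isn't specified here; the brand names and counts are made up.

```python
def share_of_voice(presence_counts, total_runs):
    """Percent of prompt runs in which each brand appeared.

    presence_counts: {brand: runs where the brand was mentioned}
    total_runs: number of prompt runs evaluated in the period
    """
    return {brand: round(100 * hits / total_runs, 1)
            for brand, hits in presence_counts.items()}

# Hypothetical numbers: 200 prompt runs across platforms in a period.
print(share_of_voice({"YourBrand": 84, "CompetitorA": 120, "CompetitorB": 46}, 200))
# {'YourBrand': 42.0, 'CompetitorA': 60.0, 'CompetitorB': 23.0}
```

Computing this per filter slice (persona, market, platform) shows where the gap to competitors is widest.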
How Scrunch differs from traditional keyword tools
Traditional SEO tools focus on search volume and rankings tied to web search queries. AI search is different:
There’s no universal “search volume” for prompts. Instead, Scrunch measures response-level outcomes: presence, citations, placement, sentiment, and trends over time across AI platforms.
Rather than estimating demand, Scrunch observes how models actually answer your representative prompts and which sources they trust.
Beyond traffic from humans, Scrunch also tracks AI agent traffic—how often LLMs access your content for training, indexing, and retrieval.
In short, you’re optimizing for answer inclusion and authority signals in AI systems, not just keyword rankings and SERP impressions.
Choosing an AI search/AEO platform—and where Scrunch focuses
When comparing AEO tools, evaluate both product and vendor factors. Scrunch recommends looking at:
Model and platform coverage
Data filtering and segmentation (persona, market, topic, funnel stage)
Competitive benchmarking and share of voice reporting
Citation tracking and source analysis
Ease of use and workflow fit
Vendor experience, customer reviews, and pricing
Scrunch centers on end-to-end visibility: monitoring brand presence and citations across AI platforms, benchmarking against competitors, analyzing AI agent traffic, and providing insights to guide fixes. For more on evaluation criteria, see the FAQ on comparing different AEO tools.
For agencies: reporting citations and share of voice across clients
Agencies need consistent, comparable reporting across multiple brands. A practical approach with Scrunch:
Build client-specific prompt sets that mirror each brand’s topics and funnel stages. Use a consistent structure (5–8 questions per topic) to standardize reporting across accounts. See how many prompts to track.
Monitor brand presence, citations, and share of voice per client, with competitor sets tailored to each market.
Segment by persona, geography, and platform to deliver localized, audience-specific insights.
Report changes over 2–3 week periods to highlight true momentum and the impact of optimization work.
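The consistent prompt-set structure suggested above (5–8 questions per topic) can be kept comparable across clients by representing it as simple data and checking the band automatically. The layout, client name, and sample prompts below are illustrative assumptions, not a Scrunch format:

```python
# Illustrative prompt-set structure; not a Scrunch file format.
prompt_set = {
    "client": "ExampleCo",  # hypothetical client name
    "topics": {
        "payroll software": [
            "What is the best payroll software for small businesses?",
            "How does payroll software handle multi-state taxes?",
            "Which payroll tools integrate with accounting software?",
            "Is ExampleCo good for payroll?",  # branded prompt
            "How do payroll software prices compare?",
        ],
    },
}

# Sanity check: keep every topic within the 5-8 question band
# so reporting stays structurally comparable across accounts.
for topic, prompts in prompt_set["topics"].items():
    assert 5 <= len(prompts) <= 8, f"{topic}: {len(prompts)} prompts"
```

The same validation run across every client's prompt set makes cross-account share-of-voice reports like-for-like.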
For a broader primer on planning and measurement, check out the Guide to AI search.