How you’re actually showing up across AI platforms—and how to prove it.
Note: Beginning in this chapter, we’ll reference Scrunch to show how AI search performance can be tracked and optimized. You can apply much of this across other products, though results will depend on their capabilities.
Traditional search metrics
- Impressions: How often your site shows up in SERPs
- Keyword rankings: Where your site ranks for specific keywords
- Backlinks: How often other sites link to you
- Organic traffic: How many people click from SERPs

AI search metrics
- Brand presence: How often your brand is mentioned in LLM answers
- Prompt position: Where your brand appears in answers (top, middle, bottom)
- Citations: How often your site is referenced as a source in answers
- Referral traffic: Human clicks from AI answers to your site
In AI search, the platform is both gateway and retriever. Because models don’t reveal all ranking factors and many interactions are “zero-click,” the way to win is to baseline your current visibility and run experiments. That requires reliable AI-specific analytics.
Scrunch is built to quantify visibility across AI platforms and show whether you’re gaining ground.
What you can measure
- Brand presence and share of voice: Track how often you’re mentioned versus competitors across prompts, platforms, personas, topics, and regions.
- Prompt position: See if you’re appearing at the top, middle, or bottom of answers.
- Citations and sources: Identify when your site is referenced in answers and which third-party sources LLMs prefer in your category.
- Trends over time: Monitor changes across 2–3 week windows to separate real movement from noise.
- Model-level performance: Compare the same prompts across multiple AI platforms, then filter by model to see where performance diverges.
- AI agent traffic: Validate that LLMs are training on, indexing, and retrieving your content in real time.
- Human referral traffic from LLMs: Attribute clicks from AI platforms as a downstream signal of impact.

How this answers the big questions
- Are we showing up for the prompts and topics that matter?
- Are we gaining share of voice versus competitors?
- Which models and sources are shaping category answers?
- Where should we invest content, technical, or PR effort next?
If you’re establishing initial baselines, Scrunch recommends tracking brand presence, citations, referral traffic, AI agent traffic, and share of voice versus competitors as core KPIs. Learn more about useful benchmarks.
Traditional SEO analytics weren’t built for LLM experiences. Scrunch fills the gaps:
- Zero‑click brand presence in LLM answers
- Prompt‑level share of voice versus competitors
- Model‑level visibility comparisons across platforms
- Citation frequency and the specific sources/models using them
- Prompt position and sentiment within answers
- AI agent traffic by intent type: training, indexing, and retrieval
- Topic‑level AI search volume/trends to size demand in AI interfaces
Scrunch also lets you export data to CSV or connect via API to your warehouse or BI tools to extend reporting. See the API details in Scrunch’s early access Query API documentation.
Many AEO/GEO tools promise “LLM visibility,” but their depth varies. Scrunch recommends evaluating any vendor (including Profound and Peec) on:
- Model coverage: Can you run the same prompts across multiple AI platforms and compare performance cleanly?
- Competitive benchmarking: Can you define robust competitor sets (including aliases) and measure share of voice and citations across personas, topics, and regions?
- Prompt management: Can you self‑serve and bulk‑manage prompts, tags, and clusters without vendor lift?
- Data filtering: Can you slice by persona, funnel stage, platform, topic, and region to get actionable views?
- Agent analytics: Do they track AI training, indexing, and retrieval visits to your site?
- Trend reliability: Can they show trustworthy trends over 2–3 weeks (not just snapshots)?
- Exports and API: Can you move the data into your warehouse or BI stack?
- Security and scale: For enterprise buyers, verify SOC 2, SSO, RBAC, and multi‑brand support.
How Scrunch stacks up against those criteria is detailed throughout this chapter. For a market overview, see Scrunch’s comparison of leading AEO/GEO tools and the AEO/GEO buyer’s guide.
If your priority is tracking “Do LLMs mention us, where, and how often?” use this checklist.
What to verify in any tool
- Prompt‑level brand presence and position across multiple models
- Competitor share of voice and trends by persona, topic, and region
- Citation frequency and source diagnostics for your category
- Model‑level rollups to compare platforms side‑by‑side
- Bulk prompt upload and editing, tagging, and clustering
- Trend views over 2–3 weeks to confirm durable movement
- Data export and/or API access

How Scrunch addresses it
- Presence, position, share of voice, and citations across prompts and models
- Robust filters by personas, regions, topics, funnel stages, and tags
- Competitive benchmarking and share of voice built in
- Visualization of changes over time for exec‑ready and practitioner views
- Self‑serve prompt generation (from your domain, SEO keywords, or CSV)
- CSV exports and API for data teams
Use the buyer’s guide questions during a live demo to validate any vendor’s claims. See: How should I compare AEO tools?
Use this as a quick evaluation grid.
LLM coverage
- What to ask others: Which platforms are supported today? How often are snapshots refreshed? Can you compare identical prompts across models?
- Scrunch today: Monitors prompts across multiple AI platforms with model‑level filtering and daily snapshots, so you can see where performance diverges.

Ease of setup
- What to ask others: Can I self‑serve brand setup, competitors, personas, topics, and prompt libraries without vendor services?
- Scrunch today: Point Scrunch at your domain to auto‑suggest personas, competitors, and topics. Bulk‑load or generate prompts from SEO keywords or CSV. Start benchmarking the same day.

Integrations
- What to ask others: Do you support data exports, APIs, and agent traffic integrations with my stack?
- Scrunch today: CSV export and Query API. Agent traffic monitoring via Akamai, Cloudflare, Vercel, and WordPress connectors. GA4/analytics can be used in parallel for human referral attribution.

Speed to impact
- What to ask others: How fast to first trustworthy insights and trend validation?
- Scrunch today: Establish baselines in hours. Validate real trend shifts over 2–3 weeks. Move from findings to action with competitive, model, and prompt‑level drill‑downs.
You can hack together attribution with:
- Self‑reported attribution on forms for “heard about us from AI”
- GA4 or marketing analytics for referral traffic from AI domains
These are useful but lagging and blind to zero‑click visibility and AI agent activity. At scale, a purpose‑built AI search product is the practical path.
If you buy, prioritize self‑serve flexibility so you can add, adjust, and archive prompts and dimensions easily as your strategy evolves.
There are four core KPIs for a complete AI search picture:
1) Brand presence
- How often your brand appears in LLM answers
- Includes share of voice vs. competitors and prompt position
- Answers: Where do we show up? How prominently? Which prompts drive mentions?

2) Citations
- How often your site is referenced in answers
- Answers: Are we being cited? Which sources/models lead? How do we earn more?

3) Referral traffic
- Human clicks from AI answers; lower volume but higher intent
- Answers: Which prompts and citations drive visits? Do they convert better than organic?

4) AI agent traffic
- Visits from training, indexing, and retrieval agents
- Answers: How often is our content being considered? Which models and pages are visited?
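The brand-presence KPI can be approximated directly from a batch of monitored answers. A minimal sketch, assuming you have answer texts and a competitor set with aliases; all brand names here are hypothetical:

```python
# Minimal sketch: compute brand presence / share of voice from a batch
# of monitored LLM answers. Brands and answer texts are hypothetical.
from collections import Counter

def share_of_voice(answers: list[str], brands: dict[str, list[str]]) -> dict[str, float]:
    """brands maps a canonical name to its aliases; returns the percent of
    answers mentioning each brand (an answer can count for several brands)."""
    mentions = Counter()
    for text in answers:
        lowered = text.lower()
        for brand, aliases in brands.items():
            if any(a.lower() in lowered for a in [brand] + aliases):
                mentions[brand] += 1
    total = len(answers) or 1
    return {b: round(100 * mentions[b] / total, 1) for b in brands}

brands = {"Acme": ["Acme Corp"], "Rivalco": []}  # hypothetical competitor set
answers = [
    "Top options include Acme and Rivalco ...",
    "Many teams choose Acme Corp for ...",
    "Rivalco leads in this category ...",
]
print(share_of_voice(answers, brands))  # -> {'Acme': 66.7, 'Rivalco': 66.7}
```

Substring matching is crude (aliases with common words will over-count); production tools use entity matching, which is one reason alias configuration matters.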
Set up your monitoring with the context LLMs and analysts need:
- Brand context: Names, aliases, products, services
- Location: Regions, cities, zips as needed
- Industry: For benchmarking and topic sizing
- Personas: Job titles, segments, pains, priorities
- Competitors: Names and aliases for true share of voice
- Topics: Business‑relevant subjects you must win
Scrunch auto‑suggests much of this from your domain. Review and refine to match your strategy.
Group and tag prompts so you can answer questions quickly:
- Prompt clusters: Competitors and alternatives; product use cases; outcomes and ROI; campaigns
- Funnel stage: Awareness, Advice, Evaluation, Comparison
- Persona: Who’s asking
- Tags: Campaigns, features, pricing, integrations
- Region: Country, state, city, zip
Grab a copy of Scrunch’s prompt framework template to get started.
Fast ways to build your initial set:
- Use Scrunch’s free Prompt Generator from your domain
- Convert SEO topics/keywords into prompts
- Pull from paid search terms and “Search term” reports
- Mine call transcripts and support tickets
- Scan Reddit, Quora, and communities for real questions
- Generate in AI in batches of ~20 with rich context
- Write by hand, then bulk‑paste into your tool
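The keyword-conversion step can be mechanized with templates. A minimal sketch; the templates are illustrative and should be tailored to persona and funnel stage:

```python
# Minimal sketch: turn SEO keywords into natural-language prompts.
# Templates are illustrative; vary them by persona and funnel stage.
TEMPLATES = [
    "What is the best {kw} for a mid-sized company?",
    "How do I choose a {kw}?",
    "Compare the top {kw} options.",
]

def keywords_to_prompts(keywords: list[str]) -> list[str]:
    return [t.format(kw=kw) for kw in keywords for t in TEMPLATES]

generated = keywords_to_prompts(["crm platform"])
print(generated[0])  # -> What is the best crm platform for a mid-sized company?
```

The resulting list can be bulk-pasted or uploaded as CSV into whichever tool you use.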
What to analyze first
- Presence, position, citations, sentiment at prompt level
- Trend visualizations over time
- Rollups for execs, drill‑downs for practitioners
- Model‑level breakout to spot platform‑specific wins and losses

Reporting levels to support your stakeholders
- High‑level summary for execs across all prompts and models
- Detail drill‑downs for leaders and practitioners by topic, funnel stage, and platform
- Prompt‑level and model‑level insights to identify what to fix or scale next
Exports and API
- Export any view to CSV or sync via API to your warehouse or BI tools. See Scrunch’s Query API documentation.
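Once data is exported, trend checks are straightforward. A minimal sketch, assuming a CSV export with hypothetical columns (`date`, `prompt`, `brand_mentioned`), comparing presence across two windows to confirm durable movement:

```python
# Minimal sketch: read an exported CSV (hypothetical columns: date, prompt,
# brand_mentioned) and compare brand presence across two time windows.
import csv
from datetime import date
from io import StringIO

EXPORT = StringIO("""date,prompt,brand_mentioned
2024-05-01,best crm,1
2024-05-02,best crm,0
2024-05-20,best crm,1
2024-05-21,best crm,1
""")

def presence_rate(rows, start: date, end: date) -> float:
    """Fraction of prompt runs in [start, end] where the brand was mentioned."""
    window = [r for r in rows if start <= date.fromisoformat(r["date"]) <= end]
    if not window:
        return 0.0
    return sum(int(r["brand_mentioned"]) for r in window) / len(window)

rows = list(csv.DictReader(EXPORT))
before = presence_rate(rows, date(2024, 5, 1), date(2024, 5, 14))
after = presence_rate(rows, date(2024, 5, 15), date(2024, 5, 31))
print(before, after)  # -> 0.5 1.0
```

Comparing multi-week windows rather than single days is what separates real movement from the run-to-run noise of LLM answers.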
Your best new “visitors” are AI agents acting on behalf of users. Track:
- Volume: Total agent visits over time
- LLM type: Which platforms are hitting your site most
- Page popularity: Which URLs agents consume most
- Intent: Training vs. indexing vs. retrieval traffic

Connect Scrunch agent analytics via:
- Akamai
- Cloudflare option one and option two
- Vercel
- WordPress
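If you only have raw server logs, the intent breakdown can be roughed out from user-agent strings. A minimal sketch; the bot names and intent mapping are illustrative, so verify each platform's current crawler documentation before relying on them:

```python
# Minimal sketch: bucket server-log user agents by AI intent type.
# The bot-name-to-intent map is illustrative; check each platform's docs.
AGENT_INTENTS = {
    "GPTBot": "training",
    "OAI-SearchBot": "indexing",
    "ChatGPT-User": "retrieval",
    "PerplexityBot": "indexing",
    "Perplexity-User": "retrieval",
    "ClaudeBot": "training",
}

def classify_agent(user_agent: str) -> str:
    """Return the intent bucket for a user-agent string, or 'other'."""
    for bot, intent in AGENT_INTENTS.items():
        if bot.lower() in user_agent.lower():
            return intent
    return "other"

print(classify_agent("Mozilla/5.0 ... GPTBot/1.1"))        # -> training
print(classify_agent("Mozilla/5.0 ... ChatGPT-User/1.0"))  # -> retrieval
```

User-agent strings can be spoofed; CDN-level connectors are more reliable because they can also verify source IP ranges.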
Tip: Unless your content is monetized and blocking makes business sense, let AI agents crawl. The more models understand your brand, the more accurately they can represent you.
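If you do manage crawler access explicitly, the standard mechanism is robots.txt. A minimal sketch that allows some common AI crawlers; the bot names are examples, and each platform's documentation lists its current user agents:

```
# robots.txt sketch: explicitly allow example AI crawlers
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /
```

The same file is where you would add `Disallow` rules for monetized sections if blocking makes business sense.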
Only AI platforms know full query volume. Like SEO tools, AI search products rely on panel data for directional trends.
Use topic‑level trends to answer
- How popular are different topics in AI search?
- How does our brand perform on the most active topics?
- How do competitors perform on those topics?
- What are people actually asking? See real prompt and answer examples.
Scrunch starts at the topic level for reliability and transparency, then connects topic popularity to your brand presence and competitive share of voice.
Up next: How to turn monitoring into action. Read Chapter 3: Insights 👀