AI search shifts measurement from clicks and rankings to visibility and influence. The baseline most teams use centers on whether AIs can find, trust, and recommend your brand—and what that means for downstream traffic and conversions. If you’re just getting started, ground your program in the core KPIs below, then trend them over time and versus competitors. For a deeper dive, see our take on the new metrics that matter in AI search.
Brand mentions
Track: Share of voice across prompts, and where your brand appears in the answer (top/middle/bottom).
Citations
Track: Citation share and total citation count across business-relevant prompts.
LLM referral traffic
Track: Sessions and conversions from AI domains.
AI agent traffic
Track: Bot visit count, bot diversity, and the pages bots visit most.
Share of voice vs. competitors
Track: How often AI answers mention your brand relative to named competitors across the same prompt set.
Optional but useful: Sentiment of AI answers that mention your brand to catch tone shifts and misrepresentation early.
Track a representative prompt set across multiple AI platforms (e.g., ChatGPT, Claude, Perplexity, Bing Copilot, Gemini), then measure mentions, prompt position, and citations by prompt, topic, persona, region, and stage. Scrunch automates this multi-model tracking and trend analysis; see the Monitoring chapter of our AI search guide.
LLM referral traffic
Filter GA4 session sources with a case-insensitive regex such as:

(?i)(.*gpt.*|.*chatgpt.*|.*openai.*|.*claude.*|.*gemini.*|.*google.*|.*perplexity.*)

Report sessions and conversions from these sources and tie top landing pages back to likely prompts. Note that broad tokens such as "google" also match ordinary Google Search referrals, so review matched sources before reporting.
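Before wiring the pattern into GA4, you can sanity-check it over a sample of session sources in Python (the source names below are illustrative, not real GA4 data):

```python
import re

# Same case-insensitive pattern as the GA4 filter above.
AI_SOURCE_RE = re.compile(
    r"(?i)(.*gpt.*|.*chatgpt.*|.*openai.*|.*claude.*|.*gemini.*|.*google.*|.*perplexity.*)"
)

def is_ai_referral(source: str) -> bool:
    """Return True when a session source matches the AI-referral pattern."""
    return AI_SOURCE_RE.fullmatch(source) is not None

# Illustrative session sources; real GA4 source names vary.
sources = ["chatgpt.com", "perplexity.ai", "claude.ai", "duckduckgo.com", "google.com"]
ai_sources = [s for s in sources if is_ai_referral(s)]
```

Running this shows that "google.com" matches alongside the AI sources, which is exactly the overmatch worth reviewing before you report the numbers.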
AI agent traffic
Monitor bot visits via infrastructure logs or integrations. We’ve outlined step-by-step setups for Akamai, Cloudflare, Vercel, and WordPress. Learn more about what to track in our guide to AI agent traffic.
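If you want a quick look before setting up one of those integrations, a minimal sketch that counts AI-agent hits per page from standard access logs might look like this (the agent tokens are publicly documented crawler names, but vendor lists change; the log lines are fabricated examples):

```python
import re
from collections import Counter

# Publicly documented AI crawler names; vendor lists change, so treat this
# as a starting point rather than an exhaustive registry.
AI_AGENTS = [
    "GPTBot", "OAI-SearchBot", "ChatGPT-User",
    "ClaudeBot", "PerplexityBot", "Google-Extended", "CCBot",
]

def ai_agent_hits(log_lines):
    """Count hits per (agent, page) from combined-format access log lines."""
    hits = Counter()
    for line in log_lines:
        # The request path is the second token of the first quoted section,
        # e.g. "GET /pricing HTTP/1.1".
        m = re.search(r'"[A-Z]+ (\S+) [^"]*"', line)
        if not m:
            continue
        for agent in AI_AGENTS:
            if agent in line:
                hits[(agent, m.group(1))] += 1
    return hits

# Fabricated example lines in combined log format.
logs = [
    '1.2.3.4 - - [01/Jan/2025:00:00:00 +0000] "GET /pricing HTTP/1.1" 200 512 '
    '"-" "Mozilla/5.0 (compatible; GPTBot/1.2; +https://openai.com/gptbot)"',
    '5.6.7.8 - - [01/Jan/2025:00:01:00 +0000] "GET /blog/ai HTTP/1.1" 200 2048 '
    '"-" "Mozilla/5.0 (compatible; PerplexityBot/1.0; +https://perplexity.ai/perplexitybot)"',
]
hits = ai_agent_hits(logs)
```

This gives you bot visit count, bot diversity (distinct agents seen), and the pages each agent hits most, straight from logs you already have.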
Share of voice vs. competitors
Use: X [number of core topics] × Y [5–8 questions per topic] = Z [prompts to track]. Aim for coverage across awareness, advice, evaluation, and comparison stages. If you want a head start, try our free Prompt Generator.
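The X × Y = Z sizing above can be sketched as a small prompt-set generator; the topics and question templates here are hypothetical placeholders, not recommendations:

```python
# Hypothetical inputs: X = 3 core topics, Y = 6 questions per topic (within
# the 5-8 range), giving Z = 18 prompts to track.
topics = ["crm software", "email deliverability", "sales automation"]
stage_questions = {
    "awareness": ["what is {t}?", "why does {t} matter?"],
    "advice": ["how do I choose {t}?"],
    "evaluation": ["what are the best {t} tools?", "top {t} tools for small teams"],
    "comparison": ["how does {t} from vendor A compare to vendor B?"],
}

# Expand topics x templates into tagged prompts, keyed by topic and funnel stage.
prompts = [
    {"topic": t, "stage": stage, "prompt": q.format(t=t)}
    for t in topics
    for stage, qs in stage_questions.items()
    for q in qs
]
```

Tagging each prompt with its topic and stage up front is what lets you slice share of voice by funnel stage later instead of only reporting a single blended number.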
Track across models
Run the same prompts in ChatGPT, Claude, Perplexity, Bing Copilot, and Gemini to see aggregate performance and model-by-model differences.
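A sketch of that per-model comparison, assuming you record one mentioned/not-mentioned result per prompt-model run (the data below is illustrative):

```python
from collections import defaultdict

# Illustrative runs: (model, brand_mentioned) for each prompt execution.
runs = [
    ("chatgpt", True), ("chatgpt", True), ("chatgpt", False),
    ("claude", True), ("claude", False),
    ("perplexity", False),
]

def mention_rate_by_model(runs):
    """Per-model mention rate, plus an aggregate across all runs."""
    by_model = defaultdict(list)
    for model, mentioned in runs:
        by_model[model].append(mentioned)
    rates = {m: sum(v) / len(v) for m, v in by_model.items()}
    rates["aggregate"] = sum(m for _, m in runs) / len(runs)
    return rates

rates = mention_rate_by_model(runs)
```

The aggregate number hides the spread: in this toy data the brand shows up in two-thirds of ChatGPT runs but never in Perplexity, which is the kind of gap the model-by-model view exists to surface.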
Measure core KPIs
For each prompt, record brand presence, position, citations (owned and 3rd-party), and competitor presence. Trend weekly snapshots for 2–3 weeks to smooth out one-off variance.
Add traffic signals
Configure GA4 filters for AI referrals and set up AI agent monitoring via your infra stack to connect visibility with downstream behavior.
Establish targets
Set a baseline from your first snapshots, then set quarterly targets for mention rate, citation share, and AI referral conversions.
When evaluating monitoring tools, look for:
Multi-model coverage
Daily snapshots across leading platforms (ChatGPT, Claude, Perplexity, Bing Copilot, Gemini), with prompt-level answers, positions, citations, and sentiment.
Prompt management at scale
Easy import/creation, tagging, clustering, and funnel/region/persona mapping; flexible filtering and comparisons.
Competitive analysis
Share of voice vs. customizable competitor sets at topic, prompt, and model levels.
Agent traffic analytics
Reliable bot detection, agent diversity, page-level bot engagement, and training/indexing/retrieval breakdowns.
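A sketch of that training/indexing/retrieval breakdown, assuming a purpose mapping for common agents (OpenAI documents GPTBot for training, OAI-SearchBot for search indexing, and ChatGPT-User for user-triggered fetches; the other mappings are assumptions based on published crawler docs and may change):

```python
# Purpose mapping for common AI agents. OpenAI documents its three roles
# explicitly; other vendors' mappings here are assumptions.
AGENT_PURPOSE = {
    "GPTBot": "training",
    "OAI-SearchBot": "indexing",
    "ChatGPT-User": "retrieval",
    "ClaudeBot": "training",
    "Claude-User": "retrieval",
    "PerplexityBot": "indexing",
    "Perplexity-User": "retrieval",
}

def purpose_breakdown(agent_hits):
    """Aggregate per-agent hit counts into training/indexing/retrieval totals."""
    totals = {"training": 0, "indexing": 0, "retrieval": 0, "unknown": 0}
    for agent, count in agent_hits.items():
        totals[AGENT_PURPOSE.get(agent, "unknown")] += count
    return totals

# Illustrative hit counts per agent.
breakdown = purpose_breakdown(
    {"GPTBot": 120, "ChatGPT-User": 45, "PerplexityBot": 30, "MysteryBot": 5}
)
```

Retrieval hits are the ones most directly tied to a live user question, so a rising retrieval share is usually the most interesting signal in this breakdown.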
Referral attribution
Clean identification of AI-sourced sessions and conversions; landing-page insights to infer prompt intent.
Time-series trends and alerts
Change-over-time views with anomaly detection to spot wins and drops quickly.
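As a minimal version of that anomaly detection, you can flag points that deviate sharply from a trailing window using a rolling z-score (the window, threshold, and data below are illustrative):

```python
from statistics import mean, stdev

def flag_anomalies(series, window=4, threshold=2.0):
    """Flag indices more than `threshold` std devs from the trailing-window mean."""
    flags = []
    for i in range(window, len(series)):
        trailing = series[i - window:i]
        mu, sigma = mean(trailing), stdev(trailing)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flags.append(i)
    return flags

# Illustrative weekly citation share: steady around 0.30, then a sharp drop.
weekly_citation_share = [0.30, 0.31, 0.29, 0.30, 0.31, 0.30, 0.12]
anomalies = flag_anomalies(weekly_citation_share)
```

The normal week-to-week wobble stays under the threshold, while the drop to 0.12 is flagged immediately, which is the "spot wins and drops quickly" behavior described above in its simplest form.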
Data portability and governance
Exports and APIs for your warehouse/BI tools, plus role-based access and reproducible reporting.
Ease of use
Tools like Scrunch provide end-to-end monitoring across models (presence, position, citations, sentiment), competitive views, agent traffic analytics, and integrations. See how to instrument KPIs in the Monitoring guide.
Web analytics
GA4/HubSpot for LLM referral sessions and conversions (configure regex filtering as above).
Infrastructure logs
Akamai/Cloudflare/Vercel/WordPress integrations for AI agent traffic visibility at the page and agent level.
AI platforms themselves