What methods does Scrunch use to collect data from AI platforms?
Scrunch uses multiple methodologies to collect real responses from AI platforms—such as ChatGPT, Perplexity, Google AI Overviews, and others—combining browser automation with official platform APIs where available. Each platform is handled with a fit-for-purpose approach, and all collected data is validated against a large, continually updated dataset of responses gathered directly from inside AI platforms to ensure accuracy.
How the technology works
Indirect collection (simulated user interactions): Scrunch programmatically “asks” questions on AI platforms and records the full responses, citations, and context—replicating what your customers would see.
Direct AI inference for analysis: Scrunch uses AI models to evaluate text for sentiment, perform topic classification, extract named entities, compare the factual content of two texts, and more. This powers consistent, comparable metrics across platforms.
Built for privacy and control: Direct model usage runs within Scrunch’s production environment or via commercial inference providers (e.g., OpenAI, Google Cloud Vertex AI, Together) under configurations that prohibit training or fine-tuning on your data. Scrunch is an AI search product—not a chatbot or content generator—and your data isn’t used to train external AI models.
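To make the two mechanisms above concrete, here is a minimal sketch in Python: a record type for a collected response and a function that builds an analysis request for an inference provider. The field names and the chat-completion-style payload are illustrative assumptions, not Scrunch's actual schema or API.

```python
from dataclasses import dataclass, field

@dataclass
class CollectedResponse:
    """One AI-platform answer, as a simulated user would see it (illustrative schema)."""
    platform: str
    prompt: str
    text: str
    citations: list = field(default_factory=list)

def build_analysis_request(response: CollectedResponse) -> dict:
    """Build an inference request asking a model to score sentiment and extract
    entities from a collected response. Generic chat-completion shape, not a
    specific provider's API."""
    return {
        "messages": [
            {"role": "system",
             "content": "Classify sentiment (positive/negative/neutral) and "
                        "list named entities in the user's text. Reply as JSON."},
            {"role": "user", "content": response.text},
        ],
        # Hypothetical flag standing in for the no-training configurations
        # described above; real providers expose this via account settings.
        "metadata": {"no_training": True},
    }

resp = CollectedResponse(
    platform="Perplexity",
    prompt="Best CRM for startups?",
    text="Acme CRM is a popular choice for startups.",
    citations=["https://example.com/review"],
)
request = build_analysis_request(resp)
```

The same record can then feed every downstream metric, which is what keeps results comparable across platforms.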
What Scrunch tracks and analyzes
Scrunch currently supports eight major AI platforms: ChatGPT, Claude, Gemini, Perplexity, Google AI Mode, Google AI Overviews, Meta AI, and Microsoft Copilot. Support for Grok is coming soon.
Across these platforms, Scrunch normalizes and reports:
Aggregate metrics:
Competitive presence (your brand and competitors), as a percentage over a set time period
Brand presence in AI responses, as a percentage over a set time period
Citations in AI responses (your site, third parties, and competitors), as a percentage over a set time period
Individual response details:
Presence: whether your brand is mentioned
Position: where your brand appears in the response
Sentiment: positive, negative, or neutral
Citation: whether your website is cited
Competitors: which and how many are mentioned
Together, these analytics quantify how AI impacts brand visibility and how you stack up against competitors across AI platforms.
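As a rough illustration, the aggregate percentages can be derived from the individual response details like so. The dictionary fields are hypothetical stand-ins for whatever Scrunch stores internally.

```python
def presence_rate(responses: list[dict], brand: str) -> float:
    """Percentage of responses that mention the brand (naive substring check
    for illustration; see the pattern-matching note below for a stricter test)."""
    if not responses:
        return 0.0
    hits = sum(1 for r in responses if brand.lower() in r["text"].lower())
    return 100.0 * hits / len(responses)

def citation_rate(responses: list[dict], domain: str) -> float:
    """Percentage of responses that cite a given domain."""
    if not responses:
        return 0.0
    hits = sum(1 for r in responses
               if any(domain in url for url in r.get("citations", [])))
    return 100.0 * hits / len(responses)

responses = [
    {"text": "Acme leads the market.", "citations": ["https://acme.com/pricing"]},
    {"text": "Consider Globex or Initech.", "citations": []},
    {"text": "Acme and Globex are both solid.", "citations": ["https://globex.com"]},
]
print(presence_rate(responses, "Acme"))      # mentioned in 2 of 3 responses
print(citation_rate(responses, "acme.com"))  # cited in 1 of 3 responses
```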
Accuracy and validation
Scrunch applies the appropriate collection method per platform (browser automation and/or official APIs) and verifies surfaced results against a large, continuously updated reference dataset of AI responses. Explicit brand mentions are detected using pattern matching and validated to reduce false positives and maintain consistency over time.
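Pattern matching with word boundaries is one straightforward way to detect explicit mentions while avoiding substring false positives; the sketch below illustrates the general technique, not Scrunch's actual validator.

```python
import re

def mentions_brand(text: str, brand: str) -> bool:
    """True if the brand appears as a whole word or phrase, case-insensitively.
    Word boundaries prevent 'Acme' from matching inside 'Acmeville'."""
    pattern = r"\b" + re.escape(brand) + r"\b"
    return re.search(pattern, text, flags=re.IGNORECASE) is not None

print(mentions_brand("We recommend ACME for small teams.", "Acme"))  # True
print(mentions_brand("Visit Acmeville for details.", "Acme"))        # False
```

`re.escape` keeps brand names containing punctuation (e.g., "AT&T") from being misread as regex syntax.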
Example workflow
Imagine you track the prompt, “How can I optimize my brand for AI search engines like ChatGPT?” Scrunch will:
1) Collect responses for that prompt across supported AI platforms.
2) Apply machine learning and natural language processing to extract presence, sentiment, citations, and competitor mentions.
3) Roll up platform-level and cross-platform metrics so you can compare brand and competitive visibility over time.
4) Let you drill into any single response, including the full text and all citations.
Collection frequency and latency
New prompts: Collected daily for the first 14 days to establish a baseline.
Ongoing cadence: Default 72-hour refresh, with on-demand “New Response” refresh any time.
Time to insights: Typically 20–60 minutes for smaller organizations and 1–12 hours for larger ones, depending on prompt volume and platforms monitored.
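The cadence above (daily during the 14-day baseline, then every 72 hours) can be sketched as a simple scheduling rule; how Scrunch actually computes this internally is an assumption.

```python
from datetime import datetime, timedelta

BASELINE_DAYS = 14
DEFAULT_REFRESH = timedelta(hours=72)

def next_collection(prompt_created: datetime, last_run: datetime) -> datetime:
    """Daily refresh while a prompt is within its 14-day baseline window,
    then the default 72-hour cadence afterwards."""
    in_baseline = last_run - prompt_created < timedelta(days=BASELINE_DAYS)
    return last_run + (timedelta(days=1) if in_baseline else DEFAULT_REFRESH)

created = datetime(2024, 6, 1)
print(next_collection(created, datetime(2024, 6, 3)))   # baseline: advances one day
print(next_collection(created, datetime(2024, 6, 20)))  # post-baseline: advances 72 hours
```

An on-demand "New Response" refresh would simply bypass this schedule and trigger a collection immediately.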
Fast setup
Many customers—including large enterprises—are deployed and collecting data within one day. Typical steps include:
Enter your website; Scrunch auto-generates competitors, personas, and topics to get you started.
Add prompts by pasting keywords or importing a CSV; Scrunch converts them into trackable prompts.
Data collection begins automatically, with results available shortly after.
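The keyword-to-prompt step could work roughly as follows; the one-column CSV layout and the question template are illustrative assumptions, not Scrunch's actual conversion logic.

```python
import csv
import io

# Hypothetical template for turning a keyword into a trackable prompt.
PROMPT_TEMPLATE = "What are the best options for {keyword}?"

def keywords_to_prompts(csv_text: str) -> list[str]:
    """Read a one-column CSV of keywords and turn each into a trackable prompt,
    skipping blank rows."""
    reader = csv.reader(io.StringIO(csv_text))
    return [PROMPT_TEMPLATE.format(keyword=row[0].strip())
            for row in reader if row and row[0].strip()]

csv_text = "ai search optimization\nbrand monitoring tools\n"
for prompt in keywords_to_prompts(csv_text):
    print(prompt)
```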