How Prompt Responses Work
While aggregated metrics provide a high-level view of performance, it’s often valuable to dig into individual prompt responses. Search Party makes this possible through the Prompt Responses view.
To explore individual responses:
Navigate to the Prompts page
Click on a specific prompt
Select Recent Responses to see the latest outputs
When you click on a response, a right-hand pane opens that includes:
Full model response: The exact text returned by the LLM, stored in the database
Mentions: Any brand mentions detected in the response
Citations (Sources): The underlying URLs and domains the LLM cited in generating that response
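One way to think about the pane is as a single stored record with those three parts. The sketch below is illustrative only; the class and field names (PromptResponse, response_text, mentions, citations) are assumptions for clarity, not Search Party's actual schema:

```python
from dataclasses import dataclass, field

# Illustrative record shape only -- these names are assumptions,
# not Search Party's real data model.
@dataclass
class PromptResponse:
    response_text: str                                  # full model response as stored
    mentions: list[str] = field(default_factory=list)   # detected brand mentions
    citations: list[str] = field(default_factory=list)  # cited URLs / source domains

r = PromptResponse(
    response_text="Acme Corp is a popular choice for project tracking...",
    mentions=["Acme Corp"],
    citations=["https://example.com/review"],
)
print(r.mentions)  # → ['Acme Corp']
```

Each item in the right-hand pane maps to one of these fields: the full text, the mentions extracted from it, and the sources cited alongside it.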
This structure lets you audit, in real time, how Search Party parsed and categorized the information in each response.
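To make that audit concrete, here is a minimal sketch of a spot-check: given the raw response text, a list of tracked brands, and the cited URLs, it re-derives the mentions and source domains so you can compare them against what the pane shows. The helper name and logic are hypothetical; Search Party's actual parsing is not exposed in the source.

```python
from urllib.parse import urlparse

# Hypothetical spot-check helper -- a simple case-insensitive substring
# match for brands and domain extraction from cited URLs. This is an
# assumption for illustration, not Search Party's parsing logic.
def audit_response(text: str, tracked_brands: list[str], cited_urls: list[str]):
    mentions = [b for b in tracked_brands if b.lower() in text.lower()]
    domains = sorted({urlparse(u).netloc for u in cited_urls})
    return mentions, domains

mentions, domains = audit_response(
    "Acme Corp ranks highly in recent reviews.",
    tracked_brands=["Acme Corp", "Globex"],
    cited_urls=["https://example.com/review", "https://example.com/faq"],
)
# mentions == ['Acme Corp'], domains == ['example.com']
```

Running a check like this against a handful of responses is a quick way to confirm that the mentions and citations shown in the pane match the underlying text.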
Key Benefits
Transparency into exactly how LLMs responded to your tracked prompts
Ability to spot-check parsing logic, mentions, and citations
Confidence that aggregated reporting reflects accurate underlying data
Common Use Cases
Auditing: Confirm that brand mentions and sources are being captured correctly
Troubleshooting: Identify unexpected or off-target responses from providers
Training: Help teams understand how LLMs construct responses from sources and prompts
Why It Matters
AI visibility is only as strong as the responses driving it. Prompt responses give you a transparent, ground-level view of what models are saying, the sources they cite, and how Search Party processes that information. This visibility makes your aggregated metrics trustworthy and actionable.