Quick Answer
AI compresses survey analysis from weeks to hours. Run the quantitative scores through SPSS or Excel; hand the qualitative open-text to Claude, Dovetail, or Thematic, which can categorize thousands of free-text responses in minutes.
- Open-text: AI clusters themes, scores sentiment, extracts quotes
- NPS: auto-tag Promoter/Passive/Detractor reasons
- Always validate AI output on a 50-response sample first
What You'll Need
- Clean survey CSV (response ID, demographics, answers)
- 500+ responses for meaningful clustering
- Claude 3.5 (200K context) or Thematic
- Excel or Google Sheets for quant
- A clear "so what" question before you start
Steps
- Clean the data. Drop blanks, spam responses, partial completes. De-duplicate.
- Run quant first. NPS score, score distribution by segment, significance testing. (Steps 1 and 2 are sketched in code after this list.)
- Batch open-text. Group by question. Paste up to 150K characters into Claude.
- Prompt for themes. Use the prompt below.
- Validate. Read 50 random responses manually. Do AI themes match? If not, adjust prompt.
- Cross-tab themes by segment. Does Theme A show up more in SMB vs Enterprise?
- Extract verbatim quotes. 2-3 per theme for the insights deck.
- Ship a 1-page summary with 5 themes, segment breakdowns, and recommended actions.
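A minimal pandas sketch of steps 1 and 2, assuming the survey CSV has columns response_id, segment, nps_score, and open_text (the column names are assumptions; match them to your export):

```python
# Minimal sketch of steps 1-2. Column names are assumptions.
import pandas as pd

df = pd.read_csv("survey.csv")

# Step 1: clean. Drop blank open-text, crude low-effort filter, de-duplicate.
df = df.dropna(subset=["open_text"])
df = df[df["open_text"].str.strip().str.len() > 2]
df = df.drop_duplicates(subset=["response_id"])

# Step 2: quant first. NPS = % promoters (9-10) minus % detractors (0-6).
def nps(scores: pd.Series) -> float:
    return round(((scores >= 9).mean() - (scores <= 6).mean()) * 100, 1)

print("Overall NPS:", nps(df["nps_score"]))
print(df.groupby("segment")["nps_score"].apply(nps))  # NPS by segment
```

The cleaned open_text column is what you batch into Claude in step 3.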
Theme Extraction Prompt
You are a qualitative research analyst.
Task: Cluster the following survey responses into 5-8 themes.
For each theme output:
- Theme name (4 words max)
- Description (1 sentence)
- Frequency (% of responses)
- 3 representative verbatim quotes (include response IDs)
- Related themes
Responses (one per line, prefixed with ID):
{{paste CSV column}}
Output as JSON.
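If you would rather run this prompt programmatically than paste into the Claude UI, here is a minimal sketch using the Anthropic Python SDK; the model name and the two inline sample responses are assumptions, so substitute your own:

```python
# Minimal sketch of sending the theme-extraction prompt via the Anthropic SDK.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

responses = ["R001: Love the product, hate the invoicing",
             "R002: Support takes days to reply"]

prompt = (
    "You are a qualitative research analyst.\n"
    "Task: Cluster the following survey responses into 5-8 themes.\n"
    "For each theme output: theme name (4 words max), description "
    "(1 sentence), frequency (% of responses), 3 representative verbatim "
    "quotes with response IDs, and related themes.\n"
    "Responses (one per line, prefixed with ID):\n"
    + "\n".join(responses)
    + "\nOutput as JSON."
)

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumption: use whichever model you have
    max_tokens=4096,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)  # the JSON theme list
```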
NPS Auto-Categorization Prompt
You analyze NPS responses.
For each response:
- Category: Promoter / Passive / Detractor
- Primary reason (pick from: product quality, pricing, support, onboarding, feature gap, other)
- Sentiment score (-1 to +1)
- Action signal: churn risk / upsell opportunity / advocacy opportunity / none
Input:
{{score, open_text}}
Output JSON array.
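Because a 10,000-row survey will overflow the context window (see Common Mistakes below), run this prompt in batches and merge the JSON arrays. A minimal sketch, where the batch size, model name, and condensed prompt string are all assumptions, and a production run should validate the JSON before trusting it:

```python
# Minimal sketch of batched NPS categorization with merged JSON output.
import json
import anthropic

client = anthropic.Anthropic()

NPS_PROMPT = (
    "You analyze NPS responses. For each response output: category "
    "(Promoter/Passive/Detractor), primary_reason, sentiment_score (-1 to +1), "
    "and action_signal. Output a JSON array only, no prose."
)

rows = ["9, Great support team", "3, Onboarding was confusing"]  # score, open_text

results = []
for i in range(0, len(rows), 500):  # 500 rows per batch, an assumed size
    batch = "\n".join(rows[i:i + 500])
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumption
        max_tokens=8192,
        messages=[{"role": "user", "content": f"{NPS_PROMPT}\nInput:\n{batch}"}],
    )
    results.extend(json.loads(message.content[0].text))

churn = [r for r in results if r.get("action_signal") == "churn risk"]
print(f"{len(churn)} churn-risk responses flagged")
```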
Common Mistakes
- No research question — AI outputs noise without direction
- Pasting 10,000 rows at once — hit context limit, lose fidelity
- Trusting AI themes without reading raw data
- Ignoring "I don't know" / blank responses (often signal itself)
- Presenting quant-only when qual has the real gold
Top Tools
| Tool | Best For | Pricing |
| --- | --- | --- |
| Thematic | Automated theme extraction at scale | $500+/mo |
| Dovetail | Survey + interview repository | $39/user/mo |
| Claude 3.5 (200K context) | Custom analysis via prompts | $20/mo |
| SurveyMonkey AI | Built-in analysis for SurveyMonkey users | $39/mo |
| Qualtrics iQ | Enterprise survey programs | Custom |
FAQs
How accurate is AI sentiment analysis? 85-92% agreement with human coders for English text (Thematic 2025 benchmark). Accuracy drops for sarcasm and multilingual responses.
Can I trust AI themes? For exploratory — yes. For board-level decisions — validate with a human-coded 200-response subsample.
What about surveys in multiple languages? Claude and GPT-4o handle 30+ languages natively. Translate after theme extraction to preserve nuance.
How do I segment analysis? Pre-tag each response with its segment (role, company size), then ask the AI to break themes down by segment. A minimal cross-tab sketch follows these FAQs.
What about bias? AI inherits training data bias. Diverse samples + human review catch it.
What about spam responses? AI can flag them: "Classify each response as legitimate, spam, or low-effort."
Is open-text better than multiple choice? For discovery — yes. For tracking — no. Use both.
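The segmentation FAQ above in code: a minimal pandas cross-tab, assuming each response has already been tagged with a theme (by the model) and a segment (from survey metadata). The DataFrame here is toy data and the column names are hypothetical:

```python
# Minimal sketch of a theme-by-segment cross-tab on pre-tagged responses.
import pandas as pd

tagged = pd.DataFrame({
    "theme":   ["pricing", "pricing", "onboarding", "feature gap"],
    "segment": ["SMB", "Enterprise", "SMB", "Enterprise"],
})

# Theme frequency within each segment, as percentages.
xtab = pd.crosstab(tagged["theme"], tagged["segment"], normalize="columns") * 100
print(xtab.round(1))  # does a theme show up more in SMB than Enterprise?
```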
Conclusion + CTA
Surveys die when analysis takes 4 weeks. By then, the moment is gone. AI turns 10,000 responses into a clear action list in a morning.
Dig up your last survey that never got analyzed. Run the prompts above. Ship insights this week — stakeholders will notice.