10 Viral Claude Prompts — Scored, Diagnosed, and Fixed

An infographic titled “10 Claude Opus 4.6 Prompts” is making the rounds. The prompts have catchy names. They sound impressive. Not one broke 55/100.

April 2026 · BumpMyPrompt

How We Score Prompts

Every prompt is scored 0–25 on four dimensions. Total: 0–100.

| Dimension | What It Measures |
|---|---|
| Clarity | How unambiguous is the instruction? Can it be misinterpreted? |
| Completeness | Does it provide enough context? Are edge cases handled? |
| Specificity | Are there precise constraints? Will it produce consistent results? |
| Structure | Is it well-organized with clear sections? |

Most prompts land between 50 and 75. Exceptional prompts score 85+. These ten averaged 45.7.
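The rubric is simple arithmetic: four dimensions, each scored 0–25, summed to a 0–100 total. A minimal sketch of that sum (illustrative only; the dimension names come from the table above, but the function is not the site's actual scorer):

```python
# Sum four 0-25 dimension scores into a 0-100 total.
# Illustrative sketch -- not BumpMyPrompt's actual scoring code.

DIMENSIONS = ("clarity", "completeness", "specificity", "structure")

def total_score(scores: dict[str, int]) -> int:
    """Validate each dimension is in 0-25, then sum to a 0-100 total."""
    for dim in DIMENSIONS:
        if not 0 <= scores[dim] <= 25:
            raise ValueError(f"{dim} must be between 0 and 25")
    return sum(scores[dim] for dim in DIMENSIONS)

# The Feynman Decoder original: 15 + 10 + 12 + 10
print(total_score({"clarity": 15, "completeness": 10,
                   "specificity": 12, "structure": 10}))  # → 47
```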

The pattern across all ten is the same: they tell the AI what format to use, but never what context to work with. It’s like ordering a custom suit by describing the number of buttons you want — without mentioning your measurements.

1. The Feynman Decoder

Original (Score: 47/100)


Explain [complex topic] to me as if I'm a curious 12-year-old who asks "why?" a lot. Start simple, then progressively increase the complexity across 3 levels — beginner, intermediate, and expert — ending with quick comprehension questions at each level.
| Clarity | Completeness | Specificity | Structure |
|---|---|---|---|
| 15 | 10 | 12 | 10 |

Diagnosis: “Curious 12-year-old” is a persona cliché that tells the AI nothing useful. What matters is your actual baseline knowledge. The prompt prescribes three levels but gives zero criteria for what makes each level “beginner” vs “expert.” And “quick comprehension questions” — how many? What format?

Improved (Score: 88/100)


## Context
I'm a [your role/background] learning about [complex topic] for the first time. My goal is to [why you need this — e.g., explain it in a presentation / use it in my work / pass an exam]. I'm already familiar with [related concepts you understand].

## Instructions
Teach me this topic in three progressive layers:

Layer 1 — Core Intuition (no jargon): Explain the fundamental concept using an analogy from everyday life. A non-technical person should be able to follow this. End with 2 true/false questions to check my understanding.

Layer 2 — Working Knowledge: Introduce the key terminology and mechanics. Connect each new term back to the Layer 1 analogy so I can anchor it. End with 2 short-answer questions that require me to apply the concept.

Layer 3 — Expert Nuance: Cover the edge cases, common misconceptions, and the "it depends" caveats that practitioners deal with. End with 1 scenario-based question where I have to make a judgment call.

## Format
Use headers to separate each layer. Bold all key terms on first use. Keep each layer under 300 words.
| Clarity | Completeness | Specificity | Structure |
|---|---|---|---|
| 23 | 21 | 22 | 22 |

What changed: Context about who you are and why you need this. Each layer has a defined purpose, a specific teaching method, and a defined question format. Word limits prevent bloat. The AI now knows what “beginner” and “expert” actually mean for you.

2. The 80/20 Skill Accelerator

Original (Score: 45/100)


I want to learn [skill/subject]. Identify the 20% of concepts that will give me 80% of the practical understanding. Create a focused 7-day learning plan with specific exercises, real-world applications, and concepts to study by day 7 to solidify my knowledge.
| Clarity | Completeness | Specificity | Structure |
|---|---|---|---|
| 14 | 10 | 13 | 8 |

Diagnosis: “80% of practical understanding” is unmeasurable. The AI has no idea what your current level is, how many hours per day you have, what “practical” means in your context, or what resources you have access to. The 80/20 framing sounds smart but gives the model zero actionable constraints — it’s a vibes instruction.

Improved (Score: 89/100)


## Context
I want to learn [skill/subject]. I'm currently at [beginner/intermediate — describe what you already know]. I have roughly [X] hours per day for the next 7 days. My goal is to be able to [specific capability — e.g., "build a basic REST API" / "have a confident conversation about X" / "pass the certification exam"].

## Instructions
1. Identify the critical 5-7 concepts I must understand to reach my goal. For each, write one sentence on why it matters and what it unlocks.
2. Create a 7-day plan with this structure for each day:
   - Focus concept (1-2 per day max)
   - Study activity (read/watch — link a specific free resource if possible)
   - Practice exercise (a hands-on task that takes 30-60 min)
   - Checkpoint (one question I should be able to answer by end of day)
3. Day 7 should be a capstone project that combines everything into a single deliverable I can show someone.

## Constraints
- Prioritize free resources (documentation, YouTube, official tutorials)
- Each day should be completable in [X] hours
- Skip theory that doesn't directly serve my stated goal
| Clarity | Completeness | Specificity | Structure |
|---|---|---|---|
| 23 | 22 | 22 | 22 |

What changed: “Practical understanding” is replaced with a concrete goal. Time constraints are explicit. Each day has a defined structure instead of vague “exercises.” The capstone project gives a measurable endpoint.

3. The Mental Model Builder

Original (Score: 48/100)


I'm studying [topic]. Give me 5 powerful mental models or frameworks that experts use to find solutions in this field. For each, give me a real-world example, a common mistake beginners make, and a practice scenario I can work through.
| Clarity | Completeness | Specificity | Structure |
|---|---|---|---|
| 14 | 10 | 14 | 10 |

Diagnosis: “Powerful” is a meaningless qualifier — every list from an AI will claim its items are “powerful.” No context about what problems you’re trying to solve, so the AI picks generic frameworks. “Practice scenario I can work through” — how? On paper? In code? The prompt also assumes exactly 5 is the right number, which is arbitrary format-stuffing.

Improved (Score: 87/100)


## Context
I'm studying [topic] because I need to [specific goal — e.g., make better investment decisions / debug distributed systems / lead product strategy]. My background is in [related field/experience]. I learn best when I can immediately apply concepts to real situations.

## Instructions
Identify the mental models that practitioners in this field rely on most frequently. For each model:

1. Name and one-sentence definition
2. When to reach for it — the specific type of problem or decision where this model applies
3. Worked example — walk through a real scenario step-by-step showing the model in action
4. The beginner trap — the most common way people misapply this model and what goes wrong
5. My turn — give me a scenario relevant to [my specific goal above] and let me try applying the model. Include your suggested approach in a collapsed/spoiler section so I can check my work.

## Constraints
- Only include models you'd recommend to someone in my specific situation — no padding
- If fewer than 5 models are genuinely essential, give fewer. If more are needed, give more. Don't force a round number.
- Prioritize models I can use this week, not theoretical frameworks
| Clarity | Completeness | Specificity | Structure |
|---|---|---|---|
| 22 | 22 | 21 | 22 |

What changed: Context drives model selection. “Powerful” is replaced with “most frequently relied on.” The practice scenario is tied to your actual goal. And the arbitrary “5” is replaced with “as many as are genuinely essential” — because forcing a number forces filler.

4. The Second Brain Architect

Original (Score: 41/100)


I just consumed [book/course/video about X]. Extract the key insights and organize them using: What I Already Know, Action Items, What I Can Implement. Summary I could teach someone in 60 seconds.
| Clarity | Completeness | Specificity | Structure |
|---|---|---|---|
| 12 | 8 | 11 | 10 |

Diagnosis: This prompt has a fundamental problem: the AI hasn’t consumed the content. Unless you paste the full text, the AI generates generic knowledge about the topic, not insights from your specific source. “What I Already Know” is particularly odd — the AI doesn’t know what you already know. The categories also overlap: “Action Items” and “What I Can Implement” are essentially the same thing.

Improved (Score: 86/100)


## Context
I just finished [book/course/video] by [author]. I'm a [your role] and I'm interested in this because [why it matters to your work].

## Source Material
[Paste your highlights, notes, or key passages here. If you can't paste the full text, paste at least your top 10 highlights or takeaways.]

## Instructions
Based on the source material above, create a structured knowledge note:

1. Core Thesis (2-3 sentences): What is the author's main argument or insight?
2. Key Insights (3-5 bullets): The ideas that were new or surprising to me. For each, note whether it confirms, challenges, or extends something I likely already believe given my role as [your role].
3. Action Items (2-3 bullets): Specific things I could do in the next 7 days to apply these insights. Make them concrete — "Review my onboarding flow using the framework from chapter 3" not "Think about improving onboarding."
4. Connections: How does this connect to [other book/framework you know]? Where do the authors agree or disagree?
5. Teach-back Summary (under 100 words): A summary I could deliver in 60 seconds to a colleague who hasn't read this. Focus on "why should they care" not "what the book covers."

## Format
Use markdown headers. Keep the total note under 500 words — this is a reference document, not a book report.
| Clarity | Completeness | Specificity | Structure |
|---|---|---|---|
| 22 | 21 | 21 | 22 |

What changed: Requires actual source material instead of asking the AI to guess. Overlapping categories are replaced with distinct ones. “60-second summary” is reframed as a teach-back with a specific focus. Word limits prevent bloat.

5. The Decision Matrix Engine

Original (Score: 42/100)


I need to make a decision about [situation]. Act as a strategic analyst, evaluate all factors, then create a weighted decision matrix comparing my options across financial impact, skill development, potential, risk level. Give clear comparison, then give me your top recommendation.
| Clarity | Completeness | Specificity | Structure |
|---|---|---|---|
| 13 | 9 | 12 | 8 |

Diagnosis: “Evaluate all factors” is infinitely scoped. The prescribed dimensions (financial impact, skill development, potential, risk) are generic career-decision criteria — useless if you’re deciding between marketing channels or tech stacks. “Potential” for what? And asking for a “top recommendation” without context about your risk tolerance, timeline, or values is asking for a coin flip dressed up as analysis.

Improved (Score: 88/100)


## Decision Context
I need to decide between: [Option A], [Option B], and [Option C — if applicable].

Background: [2-3 sentences on the situation — what led to this decision point]
Timeline: I need to decide by [date]. The decision will play out over [timeframe].
What matters most to me: [Your top 1-2 priorities — e.g., "minimizing financial risk" or "fastest path to revenue" or "learning opportunity"]
Constraints: [Budget, team size, skills available, dependencies]

## Instructions
1. Before building the matrix, list any factors I may not have considered. Ask me 3 clarifying questions if my context above is insufficient — don't assume.
2. Build a weighted decision matrix using these criteria (adjust weights based on my stated priorities):
   - [Criterion 1 relevant to your decision]
   - [Criterion 2]
   - [Criterion 3]
   - Add any criteria you think I'm missing
3. Score each option 1-5 on each criterion with a one-sentence justification per score.
4. Calculate weighted totals and show your work.
5. Recommendation: State your recommendation, then argue against it — what's the strongest case for a different choice? Under what conditions would your recommendation be wrong?

## Format
Use a markdown table for the matrix. Keep justifications brief.
| Clarity | Completeness | Specificity | Structure |
|---|---|---|---|
| 23 | 22 | 21 | 22 |

What changed: Generic dimensions are replaced with user-defined criteria. The AI is told to push back and ask questions. The recommendation includes a counter-argument, which is far more useful than false certainty. Weights are tied to stated priorities instead of arbitrary.
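The weighted-total step the improved prompt asks for (steps 2–4) is easy to make concrete. A sketch, where the criteria, weights, and 1–5 scores are all invented for illustration; in practice the weights come from your stated priorities:

```python
# Weighted decision matrix: each option scored 1-5 per criterion,
# weights reflect stated priorities and sum to 1.0.
# Criteria and numbers are invented for this example.

weights = {"time_to_revenue": 0.5, "financial_risk": 0.3, "learning": 0.2}

options = {
    "Option A": {"time_to_revenue": 4, "financial_risk": 3, "learning": 2},
    "Option B": {"time_to_revenue": 3, "financial_risk": 4, "learning": 5},
}

def weighted_total(scores: dict[str, int]) -> float:
    # Show the work: sum of (weight x score) across criteria.
    return sum(weights[c] * scores[c] for c in weights)

for name, scores in options.items():
    # Option A: 0.5*4 + 0.3*3 + 0.2*2 = 3.3
    # Option B: 0.5*3 + 0.3*4 + 0.2*5 = 3.7
    print(f"{name}: {weighted_total(scores):.2f}")
```

Note that the arithmetic only ranks options; the "argue against it" step in the prompt is what keeps a 3.7-vs-3.3 margin from being mistaken for certainty.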

6. The Socratic Deep Dive

Original (Score: 50/100)


I think I understand [topic], but I want to stress-test my knowledge. Ask me 10 progressively harder questions about it. After each of my answers, tell me what I got right, what I missed, and fill in the gaps. Tell me exactly what to study next.
| Clarity | Completeness | Specificity | Structure |
|---|---|---|---|
| 16 | 11 | 13 | 10 |

Diagnosis: This is the strongest of the ten — the interactive format is genuinely useful. But “progressively harder” is uncalibrated. The AI doesn’t know if you’re a first-year student or a PhD candidate. And “10 questions” in a single response will produce a wall of text. The real power of Socratic questioning is the back-and-forth, which this prompt accidentally blocks by requesting all 10 upfront.

Improved (Score: 90/100)


## Context
I've been studying [topic] and I think I have a solid grasp, but I want to find my blind spots. My background: [your level — e.g., "I've read two books on it" / "I use it daily at work" / "I took a course last month"]. I'm preparing for [why — exam, job interview, teaching it to others, applying it to a project].

## Instructions
We're going to do a Socratic knowledge audit. Here's how it works:

1. Start with one question at a level you'd expect someone with my background to answer confidently.
2. Wait for my response before continuing. Do not list multiple questions at once.
3. After each response, give me:
   - What I got right (be specific)
   - What I missed or got partially right
   - The complete, accurate answer (so I learn immediately)
   - A difficulty rating for the question (1-10)
4. Calibrate the next question based on my performance:
   - If I nailed it: increase difficulty
   - If I struggled: explore that gap deeper before moving on
   - If I got it completely wrong: back up and teach the prerequisite
5. After 8-10 exchanges, give me a summary: my strongest areas, my weakest areas, and a prioritized study list of exactly what to review (with specific subtopics, not vague categories).

Start with Question 1.
| Clarity | Completeness | Specificity | Structure |
|---|---|---|---|
| 24 | 22 | 22 | 22 |

What changed: One question at a time enables real Socratic dialogue. Adaptive difficulty replaces fixed “progressive” escalation. The feedback format is defined. The summary at the end produces actionable study guidance instead of vague direction.

7. The Weekly System Designer

Original (Score: 50/100)


Here are my current goals: [list 3-5 goals]. Design a realistic weekly system that makes progress on all of them without burnout. Include high-energy tasks, low-energy tasks, review periods, and buffer time. Give me a specific template I can complete in 10 minutes every Sunday.
| Clarity | Completeness | Specificity | Structure |
|---|---|---|---|
| 15 | 11 | 14 | 10 |

Diagnosis: “Realistic” and “without burnout” are subjective and depend entirely on your schedule, energy patterns, and commitments — none of which are provided. The AI doesn’t know if you have 2 free hours a day or 8. The “10-minute Sunday template” is a good idea buried in a bad prompt.

Improved (Score: 87/100)


## My Situation
Goals (in priority order):
1. [Goal 1 — include why it matters and any deadline]
2. [Goal 2]
3. [Goal 3]

Available time: [e.g., "~3 hours on weekday evenings, ~6 hours on weekends"]
Energy patterns: [e.g., "I'm sharpest in the morning, low energy after 3pm, second wind around 8pm"]
Non-negotiable commitments: [e.g., "Full-time job 9-5, gym 3x/week, family dinner every night"]
Current struggle: [e.g., "I keep overcommitting on Goal 1 and Goal 3 gets nothing"]

## Instructions
1. Allocate weekly time blocks across my goals based on priority and available hours. Show the math — how many hours per goal per week.
2. Map tasks to energy levels: high-focus work to my peak hours, administrative/routine tasks to low-energy slots.
3. Build a weekly template as a simple table (Mon-Sun) with time blocks. Include:
   - 1 review/planning block (the Sunday session)
   - Buffer time (at least 15% of total hours unscheduled)
   - One full rest day or equivalent
4. Sunday Planning Checklist: A bullet-point checklist I can complete in 10 minutes to set up my week. Include prompts like "What's the #1 thing that would make this week a win?" and "What blocked me last week?"

## Constraints
- Don't schedule more than 2 goals per day
- Assume I'll actually follow through on about 70% of what's planned (build for reality, not ambition)
- If my goals exceed my available time, tell me — don't just cram everything in
| Clarity | Completeness | Specificity | Structure |
|---|---|---|---|
| 22 | 22 | 21 | 22 |

What changed: Energy patterns and available time replace vague “realistic.” The 70% follow-through rate builds in honesty. The constraint about telling you when you’re overcommitted is critical — the original prompt would produce a beautiful system you’d abandon by Wednesday.
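The overcommitment constraint can be made concrete as a capacity check. A sketch, assuming the 15% buffer is reserved first and the ~70% follow-through rate then applies to what remains; all hours are illustrative:

```python
# Capacity check behind "build for reality, not ambition".
# Assumption: reserve the 15% buffer first, then discount the
# remainder by the ~70% follow-through rate. Hours are illustrative.

available = 3 * 5 + 6            # ~3h weekday evenings + ~6h weekend
buffer = 0.15 * available        # at least 15% left unscheduled
realistic = (available - buffer) * 0.70

goal_hours = {"Goal 1": 6, "Goal 2": 4, "Goal 3": 3}
planned = sum(goal_hours.values())

if planned > realistic:
    print(f"Overcommitted: {planned}h planned vs ~{realistic:.1f}h realistic")
else:
    print(f"Fits: {planned}h planned vs ~{realistic:.1f}h realistic")
```

With these numbers the check fires: 13 planned hours against roughly 12.5 realistic ones, which is exactly the "tell me, don't cram" case the constraint asks for.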

8. The Analogy Bridge

Original (Score: 42/100)


I already understand [familiar topic] well. Use that as my foundation to teach me [new topic] by drawing clear parallels. Highlight where I already know something, and where the parallels break down to deepen the knowledge gap.
| Clarity | Completeness | Specificity | Structure |
|---|---|---|---|
| 15 | 9 | 10 | 8 |

Diagnosis: The shortest and vaguest prompt in the set. No depth specification, no format guidance, no learning objective. “Understand well” could mean “I’ve used it professionally for 10 years” or “I watched a YouTube video.” The prompt also doesn’t tell the AI what depth of knowledge you want about the new topic.

Improved (Score: 87/100)


## Context
What I know well: [familiar topic]. Specifically, I'm comfortable with [list 3-4 specific concepts within that topic — e.g., "React component lifecycle, state management with hooks, and JSX rendering"].

What I want to learn: [new topic]. My goal is to understand it well enough to [specific outcome — e.g., "build a basic app" / "make informed decisions about using it" / "discuss it confidently with practitioners"].

## Instructions
Teach me [new topic] by mapping it onto my existing knowledge of [familiar topic]:

1. The Bridge Table: Create a two-column comparison table. Left column: concepts I already know from [familiar topic]. Right column: the equivalent concept in [new topic]. Include 6-8 key mappings.

2. Guided Walkthrough: For each mapping, explain:
   - How the concepts are similar (what transfers directly)
   - Where the analogy breaks down (what's genuinely different and why)
   - The "aha" insight — the one thing about [new topic] that my [familiar topic] background makes easier to grasp than most beginners

3. False Friends: List 2-3 concepts that seem equivalent between the two domains but actually work differently. These are the traps I'll fall into if I rely too heavily on my existing mental model.

4. What's Completely New: Identify concepts in [new topic] that have no analogy in [familiar topic] — things I'll have to learn from scratch.

## Format
Use the bridge table as the anchor, then expand below it. Keep the total response under 1500 words.
| Clarity | Completeness | Specificity | Structure |
|---|---|---|---|
| 22 | 22 | 21 | 22 |

What changed: Specific concepts replace vague “I understand it well.” The bridge table gives structure. “False friends” is the most valuable addition — it catches the exact mistakes that analogy-based learning creates. “What’s completely new” prevents dangerous overconfidence.

9. The Content-to-Curriculum Converter

Original (Score: 42/100)


I just consumed a [podcast/video series/article] about [topic]. Transform it into: key terms, 5 flashcard-style Q&As for retention, a summary.
| Clarity | Completeness | Specificity | Structure |
|---|---|---|---|
| 13 | 8 | 12 | 9 |

Diagnosis: Same fatal flaw as #4 — the AI hasn’t consumed the content. Without the actual material, it will generate generic flashcards about the topic, not the specific content you consumed. “Key terms” — how many? Defined how? “A summary” — how long? For what purpose? This prompt asks for three deliverables in one sentence with almost no specifications for any of them.

Improved (Score: 88/100)


## Source Material
Source: [Title] by [Author/Creator] — [podcast/video/article]
Link: [URL if available]

My notes/highlights from this content:
[Paste your notes, timestamps with key points, or highlighted passages. The more you provide, the better the curriculum will be. If you can paste the full transcript, even better.]

## Context
I'm learning about this because [why — e.g., "I need to apply this to my marketing strategy" / "I'm preparing a presentation on this"]. My audience/use case: [who you'll share this with or how you'll use it].

## Instructions
Transform my notes into a structured learning packet:

1. Key Concepts Glossary (5-8 terms): Each term with a one-sentence definition as the author uses it (not a textbook definition). Include page/timestamp reference if available in my notes.

2. Core Arguments (3-5 bullets): The author's main claims or frameworks, stated as testable propositions (e.g., "The author argues that X leads to Y because Z" — not just "The author discusses X").

3. Retention Flashcards (8-10 Q&A pairs):
   - Mix of factual recall ("What does the author define as...?") and application ("How would you apply the author's framework to...?")
   - Answers should be 1-2 sentences max
   - Tag each as [Recall] or [Application]

4. One-Paragraph Summary (100 words max): Written for [your audience] explaining why this content matters and the single most actionable insight.

5. What to Explore Next: 2-3 questions this content raised but didn't fully answer — my rabbit holes for further learning.
| Clarity | Completeness | Specificity | Structure |
|---|---|---|---|
| 23 | 22 | 22 | 21 |

What changed: Requires actual source material. Flashcards are typed (recall vs. application) and doubled in quantity. “Key terms” becomes a glossary with author-specific definitions. The summary has a word limit and audience. “What to explore next” turns passive consumption into active inquiry.

10. The Bottleneck Finder

Original (Score: 50/100)


I'm trying to achieve [goal] but feel stuck. Analyze my process by asking me 5 targeted diagnostic questions. Then identify my top 3 bottlenecks, give 3 options to automate a bottleneck, and surface any invisible assumption that might be holding me back.
| Clarity | Completeness | Specificity | Structure |
|---|---|---|---|
| 15 | 11 | 14 | 10 |

Diagnosis: This is tied for the highest-scoring original — the interactive diagnostic approach is genuinely useful. But it front-loads the answer (“identify my top 3 bottlenecks”) before the diagnostic is complete. The AI will generate questions and answers in a single response, which defeats the purpose. “Invisible assumptions” is a great concept but the AI has no data to identify them without conversation.

Improved (Score: 89/100)


## Context
My goal: [specific goal — e.g., "grow my newsletter from 500 to 5,000 subscribers in 6 months"]
Where I'm stuck: [describe the specific feeling — e.g., "I publish weekly but growth has plateaued at ~10 new subs/week for 3 months"]
What I've already tried: [list 2-3 things you've attempted]
Resources available: [time per week, budget, tools, team]

## Instructions
Act as an operational diagnostician. We'll work through this in two phases:

Phase 1 — Diagnostic (interactive):
Ask me 5 diagnostic questions, ONE AT A TIME. Wait for my answer before asking the next question. Focus on:
- Where my time actually goes vs. where I think it goes
- What I'm measuring (or not measuring)
- What happens right before and right after the point where I feel stuck
- What I've ruled out and why
- What would need to be true for my current approach to work

Phase 2 — Analysis (after all 5 answers):
Based on my responses, deliver:
1. Top 3 bottlenecks, ranked by impact. For each: what it is, evidence from my answers, and estimated impact if resolved.
2. For each bottleneck, one specific action I can take this week (not "consider" or "think about" — an action with a verb and a deliverable).
3. The assumption audit: Identify 1-2 beliefs I seem to be operating under that may not be true. Frame these as testable hypotheses — "You seem to assume X. You could test this by Y."

Start with Diagnostic Question 1.
| Clarity | Completeness | Specificity | Structure |
|---|---|---|---|
| 23 | 22 | 22 | 22 |

What changed: Two-phase structure separates diagnosis from prescription. One question at a time enables real conversation. “Invisible assumptions” becomes a structured “assumption audit” with testable hypotheses. Actions are required to be specific and immediate, not vague suggestions.

The Pattern: Why All 10 Failed the Same Way

1. No Context About You

Not one prompt asks who you are, what you already know, or why you need this. The AI is flying blind. A prompt about learning mental models will produce completely different (and better) results if it knows you’re a product manager vs. a medical student.

2. Format Over Thinking

“Give me 5 mental models.” “Ask me 10 questions.” “Create a 7-day plan.” These prompts obsess over the shape of the output while ignoring the substance. They’re ordering the container without describing what goes inside. The AI dutifully fills the slots — even if 3 of the 5 mental models are filler.

3. One-Shot Fantasy

Eight of the ten prompts expect a single, complete output. But the best AI interactions are conversations. The two originals that implied back-and-forth (#6 and #10) tied for the highest scores, and even they didn’t commit to it properly.

4. No Success Criteria

How do you know if the output is good? None of these prompts define what “good” looks like. Without evaluation criteria, you can’t tell whether the AI gave you insight or just gave you volume.

The Scoreboard

| # | Prompt Name | Before | After | Gain |
|---|---|---|---|---|
| 1 | The Feynman Decoder | 47 | 88 | +41 |
| 2 | The 80/20 Skill Accelerator | 45 | 89 | +44 |
| 3 | The Mental Model Builder | 48 | 87 | +39 |
| 4 | The Second Brain Architect | 41 | 86 | +45 |
| 5 | The Decision Matrix Engine | 42 | 88 | +46 |
| 6 | The Socratic Deep Dive | 50 | 90 | +40 |
| 7 | The Weekly System Designer | 50 | 87 | +37 |
| 8 | The Analogy Bridge | 42 | 87 | +45 |
| 9 | Content-to-Curriculum Converter | 42 | 88 | +46 |
| 10 | The Bottleneck Finder | 50 | 89 | +39 |
| | Average | 45.7 | 87.9 | +42.2 |

On average, scores nearly doubled, from 45.7 to 87.9, for an average gain of +42 points.

The catchy-named prompts circulating on social media aren’t useless — they contain good ideas. The Feynman technique is genuinely powerful. Socratic questioning works. Decision matrices help. But a good idea in a bad prompt produces mediocre output.

The difference between a 45-point prompt and an 88-point prompt isn’t cleverness — it’s context, specificity, and structure.

Your prompts aren’t magic spells. They’re briefs. Write them like you’re briefing a brilliant colleague who just walked into the room — not like you’re feeding keywords into a search engine.

Try it yourself

Paste any prompt. Get a score in seconds. See exactly what’s weak and how to fix it.

Score my prompt