How Promptvexity Works
A competitive platform for prompt engineers. Solve real problems, get scored, earn ranks, and climb the global leaderboard.
The Core Loop
Pick a Problem
Browse the problem library. Each challenge targets a specific real-world use case: code generation, data extraction, creative writing, structured analysis.
Write Your Prompt
Craft something that solves it as effectively as possible. Think about structure, specificity, edge cases, and output formatting.
Get Ranked
Your submission is scored across multiple dimensions. Points accumulate, your rank advances, and you climb the global leaderboard.
The Scoring System
Every prompt receives a Quality Score out of 100, composed of three independent layers, each designed to capture a different aspect of prompt quality.
Structural Analysis
Calculated instantly on submission: a deterministic check of your prompt's architecture.
- Defines a role or persona
- Sets explicit output constraints
- Includes examples or edge cases
- Decomposes the task into steps
- Defines the output format explicitly
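The structural layer can be sketched as a set of deterministic pattern checks, one per item above. The specific regexes and equal weighting here are illustrative assumptions, not Promptvexity's actual rules.

```python
import re

# Hypothetical heuristics for each structural feature; the real platform's
# checks and weights are not public.
CHECKS = {
    "role": re.compile(r"\byou are\b|\bact as\b", re.IGNORECASE),
    "constraints": re.compile(r"\b(must|only|exactly|at most|no more than)\b", re.IGNORECASE),
    "examples": re.compile(r"\b(for example|e\.g\.|example:)\b", re.IGNORECASE),
    "steps": re.compile(r"(?m)^\s*\d+[.)]\s"),
    "format": re.compile(r"\b(JSON|markdown|table|bullet|numbered)\b", re.IGNORECASE),
}

def structural_score(prompt: str) -> int:
    """Award equal weight for each structural feature detected."""
    hits = sum(1 for pattern in CHECKS.values() if pattern.search(prompt))
    return round(100 * hits / len(CHECKS))
```

Because the check is deterministic, the same prompt always receives the same structural score, which is what makes instant scoring on submission possible.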
Model Evaluation
After submission, a specialized model reads your prompt holistically and grades it in the background.
- Clarity of intent and instruction
- Contextual grounding
- Specification precision
- Edge case awareness
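The grading pass above can be sketched as a rubric sent to a judge model plus a parser for its response. The rubric wording, dimension keys, and averaging are assumptions for illustration; the judge call itself is omitted, since the model and API Promptvexity uses are not public.

```python
import json

# Dimension keys mirror the rubric list above (names are illustrative).
DIMENSIONS = ["clarity", "grounding", "precision", "edge_case_awareness"]

def build_rubric(prompt: str) -> str:
    """Ask a judge model to score each dimension 0-100 as JSON."""
    return (
        "Grade the following prompt on these dimensions, each 0-100: "
        + ", ".join(DIMENSIONS)
        + '. Respond with JSON only, e.g. {"clarity": 80}.\n\n'
        + f"PROMPT:\n{prompt}"
    )

def parse_grades(response: str) -> float:
    """Average the per-dimension scores returned by the judge."""
    grades = json.loads(response)
    return sum(grades[d] for d in DIMENSIONS) / len(DIMENSIONS)
```

Running the judge in the background keeps submission instant while still allowing a slower, holistic read of the prompt.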
Community Consensus
Uses a Wilson score confidence interval to separate real consensus from noise, the same math Reddit and Hacker News use for ranking.
- Upvote/downvote ratio
- "Works" vs "Fails" confirmations
- Statistical confidence weighting
- Resistant to vote manipulation
Ranking Tiers
Your accumulated points determine your global tier. Each tier earns a distinct badge displayed on your profile and beside every prompt you publish.
Novice
0+ pts. Just getting started. Submit your first prompt and begin climbing.
Contributor
500+ pts. Consistent participation. You understand the fundamentals of structured prompting.
Expert
2,000+ pts. Your prompts demonstrate technical depth, clear structure, and reliable outputs.
Master
5,000+ pts. Elite-tier prompt engineering. Your work is studied by other users on the platform.
Grandmaster
Top 1%. The highest distinction. Reserved for the top percentile of prompt engineers globally.
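The point-based tiers above amount to a simple threshold lookup. Grandmaster is a top-1% percentile cut rather than a fixed point threshold, so it is deliberately left out of this sketch.

```python
# Point thresholds taken from the tier table above, highest first.
TIERS = [(5000, "Master"), (2000, "Expert"), (500, "Contributor"), (0, "Novice")]

def tier_for(points: int) -> str:
    """Return the highest tier whose threshold the point total meets."""
    for threshold, name in TIERS:
        if points >= threshold:
            return name
    return "Novice"
```

Note that tiers are cumulative: at 4,999 points you are still an Expert, and the badge only changes once the next threshold is crossed.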
What Makes a Great Prompt
The difference between a 40 and a 95 often comes down to a few structural choices.
Do
- Define a clear persona or role ("You are a senior data analyst...")
- State the exact output format (JSON, markdown table, numbered steps)
- Include edge cases or boundary conditions
- Break complex tasks into numbered sub-tasks
- Use delimiters to separate instructions from data
- Provide at least one concrete example of expected output
Avoid
- Vague instructions like "write something good about X"
- Missing context about the audience or use case
- Leaving the output format entirely up to the model
- Overly long prompts that bury the actual task
- Omitting negative constraints ("do not include...")
- Assuming the model knows your domain-specific jargon
Ready to start?
Browse top-rated submissions to study what works, then pick a problem and write your own.