Lab Test:
Prompt Playground: One Prompt, 6 Outcomes
What We Were Testing
We wanted to understand how different prompt structures impact the quality, tone, and usefulness of GPT’s responses—specifically in the context of marketing copy and brand alignment.
Why We Did It
Clients often ask, “Can you give me the perfect prompt?”—but the reality is, effective prompting depends on the goal, context, and tone. This test helps show why there’s no one-size-fits-all answer.
How We Ran the Test
What We Did
We took one request: “Write a LinkedIn post about our AI Chatbot product for SMBs.” Then we ran it through six prompt types (a sketch of the variants follows this list):
Directive
Gives GPT a clear, direct instruction to write exactly what’s needed.
Conversational
Prompts GPT to write in a casual, human-like tone—like talking to a colleague.
Persona-Based
Frames the output through the lens of a specific audience or character (e.g., a startup founder).
Framework (PAS)
Uses proven copywriting formulas like Problem–Agitate–Solution for structure and persuasion.
Constraint-Based
Adds rules like “keep it under 100 words” or “avoid buzzwords” to sharpen the output.
GPT-as-Copywriter
Instructs GPT to act like a professional marketer, applying best practices automatically.
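To make the six framings concrete, here is a minimal sketch of how the variants could be run side by side. The exact prompt wordings, the model name, and the use of the OpenAI Python SDK are illustrative assumptions, not the exact setup behind this test.

```python
# Minimal sketch: one request, six prompt framings, run side by side.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

REQUEST = "Write a LinkedIn post about our AI Chatbot product for SMBs."

# Illustrative phrasings for each of the six prompt types.
PROMPTS = {
    "directive": REQUEST,
    "conversational": f"{REQUEST} Keep it casual, like you're telling a colleague about it.",
    "persona_based": f"You are a startup founder posting to your network. {REQUEST}",
    "framework_pas": f"{REQUEST} Structure it as Problem, Agitate, Solution.",
    "constraint_based": f"{REQUEST} Keep it under 100 words and avoid buzzwords.",
    "gpt_as_copywriter": f"Act as a senior B2B copywriter. {REQUEST} Apply LinkedIn best practices.",
}

for style, prompt in PROMPTS.items():
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {style} ---")
    print(response.choices[0].message.content)
```

Running the same request through all six framings in one pass makes it easy to compare tone, structure, and brand fit line by line.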
The Takeaways
What We Learned
Different prompt styles shape not just what GPT writes, but how well the copy resonates and performs.
Structure Meets Style
Combining the PAS framework with a persona gave the best mix of tone and structure (see the combined prompt sketch after this list).
Casual Connects
Conversational tone was best for engagement.
On-Brand by Design
“GPT as…” prompts created the most brand-aligned copy.
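For reference, a combined Framework + Persona prompt, the pairing that performed best in this test, could look like the sketch below. The exact wording is an illustrative assumption rather than the prompt we used.

```python
# Illustrative combination of the persona-based and PAS framework styles.
COMBO_PROMPT = (
    "You are a startup founder posting to your LinkedIn network. "
    "Write a post about our AI Chatbot product for SMBs, structured as "
    "Problem, Agitate, Solution."
)
```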
What’s Next
We’ll create a downloadable “Prompt Pack” clients can plug into their own GPTs.