Lab Test:
GPT vs. Real Copywriter: Facebook Ad Test
What We Were Testing
We set out to see whether a GPT model trained on a client's past campaigns and unique tone of voice could write ads that outperform ads written entirely by humans.
Specifically, we tracked key performance metrics like click-through rate (CTR) and engagement to determine whether AI could not just replicate, but exceed, the results of traditional ad copywriting.
Why We Did It
Clients often ask us, “Can AI really write ads that perform?” It’s a great question—and one we wanted to answer with real data, not just assumptions.
So we ran a test: AI-written ads versus human-written ads, head-to-head in a live campaign. The goal? To see if AI could create not just copy, but effective, results-driven messaging that holds its own in the real world.
Inside the Experiment
What We Did
A real-world test to see how AI-generated ads stack up against human copy—using the same budget, audience, and goals.
Campaign Intelligence, Captured
Trained a GPT on the client’s last 15 campaigns.
Creative, Multiplied
Built 3 ad variations using different prompt styles (see the sketch after this list).
Human vs. AI, Tested
Ran a Facebook A/B test (GPT ads vs. human ads).
Budget, Evenly Split
Split the test budget evenly: $500 per ad set.
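For readers curious about the mechanics, here's a minimal sketch of the "3 variations, 3 prompt styles" step. It assumes the OpenAI Chat Completions API; the model name, tone-of-voice brief, prompt styles, and product details are illustrative placeholders, not the client's actual setup (which used a GPT trained on their last 15 campaigns).

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical tone-of-voice brief distilled from past campaigns.
TONE_BRIEF = "Confident, plain-spoken, benefit-first. Short sentences. No jargon."

# Three illustrative prompt styles (benefit-led, emotional, urgency-led).
PROMPT_STYLES = {
    "benefit": "Lead with the single biggest benefit for the reader.",
    "emotional": "Open with a relatable frustration, then offer relief.",
    "urgency": "Create gentle urgency without sounding pushy.",
}

def write_ad_variations(product: str, audience: str) -> dict[str, str]:
    """Generate one Facebook ad draft per prompt style."""
    drafts = {}
    for name, style in PROMPT_STYLES.items():
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": f"You write Facebook ad copy. Tone of voice: {TONE_BRIEF}"},
                {"role": "user",
                 "content": (f"Product: {product}\nAudience: {audience}\n"
                             f"Style instruction: {style}\n"
                             "Write primary text (max 125 characters) and a headline.")},
            ],
        )
        drafts[name] = response.choices[0].message.content
    return drafts

if __name__ == "__main__":
    for style, copy in write_ad_variations("legal intake software", "small law firms").items():
        print(f"--- {style} ---\n{copy}\n")
```

In the live test, each draft produced this way would then be paired against a human-written ad in Facebook's A/B testing tool with matching budget, audience, and objective.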
The Takeaways
What We Learned
The right prompts—and the right tone—can make AI copy not just competitive, but top-performing.
Higher Engagement
GPT copy had a 26% higher CTR.
Human Touch Still Matters
Human copy drew slightly better comment engagement (it felt "more real").
Emotion Wins
The GPT ad with an emotional tone outperformed the others.
Prompts Make the Difference
Prompt phrasing mattered a lot: slight tweaks produced major lifts.
What’s Next?
Next Up: Search Ads
We’re testing GPT-led ads for Google Search next.
Smarter by Persona
We're also planning to train GPTs by persona (e.g., "GPT for Lawyers").