Quit Guessing: Your Guide to Running Better Experiments
What if a cold email campaign with a 6% reply rate is actually worse than one with a 4% reply rate? Here are two cold email campaigns:
Campaign 1 is “good.” Its initial reply rate was 6%.
Campaign 2 is “great.” But, its initial reply rate was only 4%.
If the “good” campaign worked better, how can the other campaign be great?
In this blog, we’re going to cover:
- The right mindset for running experiments
- Rules for an effective test
- Things you can test in your next campaign
- Building a "testing" culture

Now, let’s dig deeper on this hypothetical.
The “good” campaign ran a single variant of messaging to the same persona. It got a 6% reply rate. Pretty solid to get 6 replies for every 100 people contacted.
But, the “great” campaign ran multiple variants of messaging to the same persona. So while the overall reply rate was 4%:
- Segment A saw a 1.5% reply rate
- Segment B saw a 3.5% reply rate
- Segment C saw a 7% reply rate
From those first two segments? You learned what to avoid.
From the third? Something new to iterate on.
Or maybe you take a new approach with Segment B.
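To make the math concrete, here’s a quick sketch of how a blended reply rate hides segment-level winners. The send counts are hypothetical (the example above only gives the rates), with each segment the same size:

```python
# Blending three equal-size segments at 1.5%, 3.5%, and 7% reply rates
# produces the 4% overall number. Send counts are hypothetical.
segments = {"A": (3, 200), "B": (7, 200), "C": (14, 200)}  # (replies, sends)

total_replies = sum(replies for replies, _ in segments.values())
total_sends = sum(sends for _, sends in segments.values())
print(f"Overall: {total_replies / total_sends:.1%}")  # Overall: 4.0%

for name, (replies, sends) in segments.items():
    print(f"Segment {name}: {replies / sends:.1%}")
# Segment A: 1.5%, Segment B: 3.5%, Segment C: 7.0%
# The 7% winner disappears inside the blended average.
```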
The problem is that most teams lack a foundation for running experiments. They might land a 6% reply rate. But, when results start to fade, they’re back to guesswork trying to crack the next high-performing campaign.
The “Mindset” for running effective tests
Here are the simple principles that should guide your next test:
1. You should always be looking to learn something.
2. You should always know (directionally) what your next experiment will be, based on the results.
That "good" campaign is only as good as the folks who respond to that one blast. All you know about the other 94% is that they didn’t respond. There’s nothing to compare your campaign against.
Eventually? Results will fall off and you’ll be back to guessing and hoping you can get lucky again.
If you follow this mindset, you’re better off than 99.99% of sales teams. But, let’s break down some more rules.
More Rules for Running a Great Experiment
Rule 1: Only test one thing at a time
A good heuristic:
You can change the words, but only change one idea.
A truly effective test is one where you’re only testing a single element. Going back to middle school science: an effective test requires a hypothesis, a variable you’re testing, and a control. If you test multiple variables at once, you lose the ability to tell which variable created the change in results.
This rule can be flexible. Don’t let rigid thinking leave you feeling like you can only change one word at a time.
For example, I can test two value propositions against each other. The wording might be “different,” but I know what I’m testing in the message.
This can expand to changing an entire paragraph or even the flow of information.
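Here’s a minimal sketch of what that looks like in practice, with the value prop as the one variable under test. All field names and copy below are illustrative placeholders, not a prescribed template:

```python
# A one-variable test: the value prop is the only idea that changes.
# Everything else (persona, opener, CTA) is held constant as the control.
control = {
    "persona": "VP Sales",
    "opener": "Saw your team doubled SDR headcount last quarter.",
    "value_prop": "Most reps lose the reply in their first sentence.",  # problem-first
    "cta": "Worth a look?",
}

variant = {
    **control,
    "value_prop": "Teams that rewrote their openers saw replies climb.",  # proof-based
}

# Only one key differs, so any change in reply rate maps to one idea.
changed = [key for key in control if control[key] != variant[key]]
assert changed == ["value_prop"]
```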
Rule 2: Get enough data to have a conclusive result
I’m going to go out on a limb and guess that you didn’t get into sales to test your math skills. But, it’s really easy to look at a small data set and think you have the answer.
At Lavender, we don’t start shaping recommendations in our email coach until we’ve seen 100 replies with a variable.
You don’t have to follow this much rigidity. But, don’t send 100 emails and think the results are conclusive. The results are directional, and that’s okay if you’re going to continue experimenting.
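If you want a rough gut-check on whether a difference is real, a two-proportion z-test is a simple option. Here’s a minimal sketch in plain Python; all of the campaign numbers are hypothetical:

```python
# Two-proportion z-test: is the gap between two reply rates real,
# or just noise? |z| above ~1.96 suggests roughly 95% confidence.
from math import sqrt

def reply_rate_z(replies_a, sends_a, replies_b, sends_b):
    """z-statistic for the difference between two reply rates."""
    p_a, p_b = replies_a / sends_a, replies_b / sends_b
    pooled = (replies_a + replies_b) / (sends_a + sends_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    return (p_a - p_b) / se

# 6 replies out of 100 vs 4 out of 100: |z| is about 0.65, well under
# 1.96, so the "better" campaign is nowhere near conclusive.
print(reply_rate_z(6, 100, 4, 100))      # ~0.65

# The same rates at 1,000 sends each cross the threshold.
print(reply_rate_z(60, 1000, 40, 1000))  # ~2.05
```

The takeaway: the same 6% vs 4% split that means nothing at 100 sends starts to mean something at 1,000.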
Don’t get discouraged by bad results
Most teams will look at a bad outcome and think “failure.” But you didn’t waste time. You learned what to avoid going forward.
Every test is a step closer to finding what’s optimal.
Things you can test in your next campaign
Targeting
You can test the same pain point and framework against different personas to understand who feels the pain more.
Personalization
You can test different personalization elements to understand which prospect situations signal that it’s a good time to reach out.
Try different triggers:
- Funding
- Hiring
- Tech change
- Recent content
- Exec quotes
- Competitor behavior
Value Proposition
You can test different ways to position the problems your prospects face, and different ways you provide value against those problems.
Test:
- A problem-first value prop
- A proof-based value prop
- A curiosity-based value prop
- An insight-based value prop
Call to Action
You can test different asks in your email. You can test the number of asks you make. Testing things like an “offer” (something in return for responding) can be a great way to see if there’s a lower friction opportunity to start a conversation.
Test:
- A question
- A soft ask
- A micro-ask
- An offer (“I’ll send the findings once the study is complete…”)
- A favor-based CTA
Framework
You can try different ways of framing the information. There are countless ways to say the same thing. Maybe you start with the ask. Maybe you finish with your personalization. Maybe you move the problem statement to the front, follow it with the personalization that made you think it was a problem, and finish with an offer.
Try versions like:
- Personalization → Problem → POV → CTA
- Problem → Personalization → Insight → CTA
- Pattern break → POV → CTA
- CTA → Problem → Proof → CTA
You can get some inspiration here.
Other Surrounding Steps
A campaign shouldn’t just be one email. Maybe you experiment with sending a few more emails. Maybe you space out the emails at a different time interval. Maybe you try a totally different type of follow up email. Maybe you try to stack some other channel interactions around your emails.
The only thing to be careful with here? If you’re testing surrounding steps alongside an experiment, you might taint your learnings. That’s not to say you shouldn’t do it. But, as long as you’re aware, the results can still guide your next experiment.
Building a culture of testing
Having worked with thousands of sales organizations, I know that testing and experimentation are not the norm.
At an org level, it’s important to establish a “quarterback” behind these experiments: someone who knows what tests are running, what’s being learned, and what the next steps will be. Ensure they keep a running document of what’s working.
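That running document can be as lightweight as a shared spreadsheet. Here’s a sketch of the same idea as a CSV log; the filename, fields, and example values are all assumptions, not a prescribed format:

```python
# A minimal experiment log: one row per test, appended as results land.
# Filename, fields, and values are illustrative assumptions.
import csv

FIELDS = ["date", "hypothesis", "variable", "result", "next_step"]

with open("experiment_log.csv", "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    if f.tell() == 0:  # fresh file: write the header first
        writer.writeheader()
    writer.writerow({
        "date": "2024-05-01",
        "hypothesis": "Problem-first value prop lifts replies",
        "variable": "value_prop",
        "result": "7% vs 4% control (600 sends, directional)",
        "next_step": "Re-run at higher volume before rolling out",
    })
```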
Individuals adopting testing can be a great start. It can also be a great way to bring about organizational change. The only organizational risk is that if this becomes widespread without coordination, you can find your team inefficiently testing things that have already been proven to fail.
If you’re in disagreement with your team about the way it’s “always been done,” don’t put yourself in a position to be fired. Instead, run an experiment alongside your current work.
Build the case for change with results.
Don’t be an organization that doesn’t test
Most teams are doing cold email like this:
Guess → Blast → Hope → Guess Again
That’s not a process.
That’s not scalable.
And it’s definitely not repeatable.
Great teams do this instead:
Hypothesis → Test → Learn → Improve → Compound → Win
Iteration beats brilliance. Every time.
If you follow this process, you won’t just run better campaigns — you’ll build a system that improves every single week.
And that’s how you separate yourself from the countless teams guessing their way through outbound.
And yes, if you want structured testing without tripling the effort, Ora can generate controlled variants and track results automatically.