
Methodology · April 2026

Why most CRO programs fail in the first 90 days

Most CRO programs die quietly. Not with a single blow-up, but with a slow drift: a few inconclusive tests, a couple of false positives, a marketing team that loses interest, and twelve months later everyone agrees it "didn't really work."

That's almost never a talent problem. It's a structure problem. There are four patterns I see over and over again. If you can see them coming, you can avoid most of them.

1. Running tests before you've done the research

The temptation, especially when you've just started paying someone to optimise your site, is to start running tests on day one. Button colour. Hero headline. Free shipping bar. You feel productive. You're not.

Tests without research are guesses with a fancy jacket on. Your win rate sits at 10–15%, your effect sizes are tiny, and the insights you generate don't compound. Each test is answering a different made-up question, not the same real one from different angles.

The fix is boring: spend the first two to four weeks understanding what's happening on the site. Surveys. Session recordings. Funnel analysis. Review mining. You want a ranked list of the top five to ten friction points, written in the voice of the customers themselves, not your marketing team's guesses.

2. Calling wins that aren't wins

The second failure mode is shipping "wins" that aren't real.

I've seen this happen three ways. Stopping a test the second it crosses 90% confidence. Peeking at segments until one of them is positive. Running a test for two weeks and declaring victory because revenue-per-visitor is up 8%.

The discipline that fixes all three is the same: pick a sample size target before you start the test, commit to running for at least two full business cycles, and look at the primary metric on the full audience first. Segments after. Confidence at the end, not the start.

If you can't resist peeking, at least agree the stopping rule in writing before the test goes live.
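
To make that concrete, here's a minimal sketch of what "pick a sample size target before you start" can look like in practice. The baseline conversion rate, the minimum detectable lift, and the alpha/power defaults below are illustrative assumptions, not a recommendation for your site; the point is that the number exists in writing before the test goes live.

```python
# A minimal sketch of the "decide the stopping rule before launch" discipline.
# The baseline rate and minimum lift are hypothetical placeholders.
from statistics import NormalDist

def required_sample_per_arm(baseline_rate: float,
                            min_detectable_lift: float,
                            alpha: float = 0.05,
                            power: float = 0.80) -> int:
    """Two-proportion z-test sample size, per variation."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_detectable_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2 * variance) / ((p2 - p1) ** 2)
    return int(n) + 1

# Example: 3% baseline conversion, and we only care about relative lifts of 10%+.
# Write this number down before launch -- it is the stopping rule.
print(required_sample_per_arm(0.03, 0.10))   # roughly 53,000 visitors per arm
```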

3. Testing things that don't matter

Even if your research is good and your stats are clean, you can still waste a year testing the wrong stuff.

The question to ask isn't "what could we improve?" Almost anything on a site could be improved. The question is "what's touching the most revenue with the highest confidence that it's actually hurting us?" If you're a D2C ecommerce brand doing $10M a year, a 3% lift on the PDP, the page nearly every order passes through, is worth ten wins on a blog post that almost no buyers ever see.

I use ICE-L (Impact, Confidence, Effort, Lifetime value) as the filter. If a test doesn't clear the bar on at least three of the four, it doesn't get a slot. Slots are precious. You only get about 10–15 of them per quarter.
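
The gate itself can be as mechanical as the sketch below. The 1–5 scales, the threshold of 4, the effort inversion and the example ideas are all assumptions made up for illustration; the ICE-L rubric as described above only fixes the four dimensions and the three-out-of-four rule.

```python
# A hedged sketch of the ICE-L gate. Scales and thresholds are assumptions,
# not the exact rubric -- the point is that the gate is mechanical, not a debate.
from dataclasses import dataclass

@dataclass
class TestIdea:
    name: str
    impact: int       # 1-5: how much revenue does the page or flow touch?
    confidence: int   # 1-5: how strong is the research evidence?
    effort: int       # 1-5: build cost, where 5 = very expensive
    lifetime: int     # 1-5: does the win compound (LTV) or fade?

def clears_the_bar(idea: TestIdea, threshold: int = 4) -> bool:
    """A test gets a slot only if at least 3 of the 4 dimensions clear the bar.
    Effort is inverted: low effort is what counts as clearing."""
    checks = [
        idea.impact >= threshold,
        idea.confidence >= threshold,
        (6 - idea.effort) >= threshold,   # effort of 1-2 clears the bar
        idea.lifetime >= threshold,
    ]
    return sum(checks) >= 3

backlog = [
    TestIdea("PDP trust badges near add-to-cart", impact=5, confidence=4, effort=2, lifetime=3),
    TestIdea("Blog post sidebar CTA copy", impact=1, confidence=2, effort=1, lifetime=1),
]
for idea in backlog:
    print(idea.name, "->", "slot" if clears_the_bar(idea) else "skip")
```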

4. No one owns the process on the client side

The last one isn't about methodology at all. It's organisational.

CRO programs work when there's one person on the client side who owns the backlog, attends the sessions, shepherds the development tickets through, and evangelises the wins internally. They don't need to write the hypotheses or run the stats. That's the agency's job. But they need to care. They need to be the person who flags when a dev ticket is stuck and the person who presents the test summary to the exec team.

Without that, even a perfect program drifts. The agency ships work, the client team doesn't integrate it, and after a quarter or two the value conversation becomes impossible.

The short version

Research before tests. Commit to your stats. Test the things touching the most revenue. Have someone in the building who cares.

None of it is novel. Most of it is just refusing to take the shortcut in the moment when you most want to.


If any of this sounds like the last CRO engagement you had, the "how we work" page walks through how we structure programs to avoid these four specifically. Or book a 15-minute intro call and we can talk through where your current program is drifting.