Next month marks the 10-year anniversary of my first A/B test. How things have changed since those early days!
The A/B testing hustle was real: we partnered with Optimizely and its incredible founding team, under the innovative leadership of Dan Siroker and Pete Koomen.
We were all just figuring things out.
We knew that A/B testing had the power to do something incredible in the market (in fact, it helped support Obama’s re-election success in 2012).
At Roboboogie, we were excited that we could add a new layer to our UX strategy and design, with measurable results. Real-time feedback based on how actual users behaved? Incredible!
Focus groups and user testing had been helpful tools for us, but they only provided academic feedback. A/B testing, on the other hand, unleashed a new approach: an interactive, scientific methodology that tied design decisions to measurable outcomes.
Growing up with a biology professor and a chemistry teacher running our household, I viewed the world through a scientific lens. Developing hypotheses was how I figured out the world, and my place in it.
Combining science with marketing in my career was a no-brainer. Bring on A/B testing!
Fast forward to June 2013.
Roboboogie engaged our first A/B testing client: a highly innovative, fast-paced e-commerce company willing to take (smart) risks, with a ‘go-fast-and-break-things’ mentality. “Let’s embrace experimentation. Test fast and iterate.”
We were scrappy back then. I remember singlehandedly doing test ideation, sketching UX test variation concepts, hopping on a call with clients for alignment, then jumping into the WYSIWYG editor to build it, set up analytics, QA it, and launch it – sometimes all in the same day.
At our peak, we were launching 12 tests per month (all possible due to their high site traffic and quick purchase cycles).
And we saw big success. Experiments were increasing revenue, unlocking new customer segments, and helping inform product development and positioning.
But the go-go-go approach wasn’t without missteps.
I still cringe to this day about one test in particular.
We tested dynamic pricing for a set of luggage products. Each variation presented the pricing differently: the retail price, the promotional price, and an additional discount layer. Regardless of the variation, the product was priced at $74.99.
The test strategy and architecture were sound. The test was built to spec and seamlessly passed QA.
But when we launched the test, the results were staggering. There were massive lifts in product engagement, product add-to-carts, initiated checkouts, user time on site, total page views per session, and… site-wide add-to-carts?
That’s when I got the phone call from our client partner.
“Umm, we have a shopper on the line talking to customer service who is pretty upset that they can’t buy our featured trip to Costa Rica for $74.99. What is going on?! We need to halt all testing immediately.”
Oops.
Instead of applying only to the backpack products we were piloting the experiment on, my targeting parameters had been applied site-wide: to clothing, snowboards, bikes, kayaks, and even trips. The test was running perfectly for our intended products, but also everywhere else.
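Looking back, what was missing was a simple scoping guard: the price variation should only have fired on the pilot products. Here is a minimal TypeScript sketch of that idea; the SKU list, data attribute, and function names are hypothetical illustrations, not our original setup or any vendor’s API.

```typescript
// Hypothetical sketch: keep a pricing variation scoped to pilot products
// instead of letting it activate site-wide. All names here are illustrative.

const PILOT_SKUS = new Set(["BACKPACK-30L", "BACKPACK-45L"]); // pilot products only

function getProductSku(): string | null {
  // Assumes the product page exposes its SKU via a data attribute.
  const el = document.querySelector<HTMLElement>("[data-product-sku]");
  return el?.dataset.productSku ?? null;
}

function maybeApplyPricingVariation(applyPricingVariation: () => void): void {
  const sku = getProductSku();
  // Bail out unless we are on a product page for one of the pilot SKUs.
  if (sku === null || !PILOT_SKUS.has(sku)) {
    return;
  }
  applyPricingVariation();
}
```

A guard like this costs a few lines in the variation code and makes the experiment fail safe: if the page isn’t one of the intended products, nothing changes.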
The go-fast-and-break-things approach had … broken things. While we had massive wins, we now also had a sobering misstep. The experiment was only live for about 25 minutes, but it left us doing damage control with several customers. Luckily, they were understanding, and with some store credit exchanged, everything was smoothed over fairly quickly.
That mistake, however, significantly shaped my professional approach and our testing methodology at Roboboogie. The experience has proved invaluable time and time again, because it is out of failure that our best growth and maturity come.
My attention to detail has never been the same since, and that rigor is now built into our testing process. We take a fully immersive, multi-disciplinary approach to each step: thoughtful strategy, smart UX, tight UI, pixel-perfect development, methodical data engineering, and double-and-triple-check QA before launch. Each team member keeps an eye on the bigger-picture goal: launching smart tests, free from errors, with the right balance of speed, strategy, and attention to detail.
For us, our only “failed” tests are the ones that are launched broken.
We embrace the mantra of “go-fast-and-BUILD-things” now. We believe our clients and end-users deserve better than broken tests.
Not every first test we launch results in an immediate net-positive CRO impact (it’s experimentation, after all). But we work to ensure that every test we launch is a winner, whether by driving revenue or leads, elevating the brand experience, unearthing user insights, or unlocking new user segments.
And we’re incredibly proud of our ability to do so.
Would we be where we are today without that mistake? I’m not sure. But I do believe it catapulted us forward. Looking back at that mistake 10 years ago, almost accidentally selling a $3,500 trip to Costa Rica for $75, I dare say the mistake may have been worth it.
That failure resulted in a decade of better tests – for dozens of clients and millions of tested visitors since.
- Jedidiah Fugle, COO @ Roboboogie