How to Use Qualitative UX Research to Identify Conversion Roadblocks

As the world of e-commerce continues to expand, businesses must prioritize their user experience (UX) in order to remain competitive. A key component of creating a successful UX is identifying and removing conversion roadblocks – issues that prevent users from completing a desired action, such as making a purchase. One way to identify these roadblocks is through qualitative UX research. In this post, we’ll explore how to use qualitative UX research to identify conversion roadblocks, and why this approach is essential for creating a successful e-commerce website.

What is Qualitative UX Research and Why is it Important?

Before diving into how to use qualitative UX research to identify conversion roadblocks, let’s define what it is and why it’s important. Qualitative UX research involves gathering feedback from users in the form of interviews, surveys, and observation, with the goal of understanding their experiences and behaviors on a website or app. By analyzing this data, businesses can gain insights into how users interact with their website, what frustrates them, and what motivates them to take action.

So why is qualitative UX research important? First, it allows businesses to better understand their users and create a more user-friendly experience. By identifying roadblocks and pain points, businesses can make necessary changes to improve the user experience and increase conversions. Second, qualitative UX research provides valuable insights that can inform design decisions and help prioritize future improvements. By understanding what users want and need, businesses can make informed decisions that lead to a better overall experience.

How to Conduct Qualitative UX Research to Identify Conversion Roadblocks

Now that we understand the importance of qualitative UX research, let’s dive into how to conduct it to identify conversion roadblocks. There are several methods businesses can use to gather qualitative data, including:

  • User Interviews: Conducting interviews with users can provide valuable insights into their experiences and behaviors on a website. Businesses can ask questions about what users like and dislike about the website, what they find confusing or frustrating, and what motivates them to take action.
  • Surveys: Surveys are a quick and efficient way to gather feedback from a large number of users. Businesses can use surveys to ask specific questions about the user experience, such as how easy it was to find a product or complete a checkout.
  • Observations: Observing users as they navigate a website can provide valuable insights into their behavior and frustrations. By watching how users interact with a website, businesses can identify areas where users get stuck or confused. (We use FullStory to help us with this!)

Once businesses have gathered qualitative data, they can use it to identify conversion roadblocks. Some common roadblocks include:

  • Confusing Navigation: If users have a hard time finding what they’re looking for on a website, they’re more likely to abandon their purchase. Businesses should ensure that their website is easy to navigate, with clear labels and a logical hierarchy.
  • Complicated Checkout Process: A complicated checkout process is a major conversion roadblock. Businesses should strive to make the checkout process as simple and streamlined as possible, with clear calls to action and minimal steps.
  • Lack of Trust: If users don’t trust a website, they’re unlikely to make a purchase. Businesses should ensure that their website is secure and that they have clear policies in place for things like returns and refunds.

The Importance of Using Qualitative UX Research to Improve Conversion Rates

In conclusion, qualitative UX research is an essential tool for identifying conversion roadblocks and creating a successful e-commerce website. By gathering feedback from users and analyzing their experiences and behaviors, businesses can identify pain points and make necessary changes to improve the user experience. This, in turn, can lead to increased conversions and a more successful online business.

When conducting qualitative UX research, it’s important to use a variety of methods, including user interviews, surveys, and observations. By leveraging qualitative data pre- and post-optimization, organizations can track their impact and make data-driven decisions about future investments. Roboboogie is the one-stop shop for uncovering user pain points, gathering feedback, and measuring the success of digital transformation efforts. Our expertise in Analytics, User Research, Design, and Technology provides actionable and measurable insights, with flexible services to meet you where you’re at in your optimization efforts.

Navigating the Path to Conversion Success: Let Conversion Compass Guide You

In an ever-evolving landscape of digital optimization, prioritizing what and where to test can often feel disorienting.

With countless variables like testing runtimes, page selection, and scope for design and development, it’s easy to feel directionless. But fear not: our Conversion Compass can point your optimization efforts in the right direction.

Imagine having a personalized testing plan for your e-commerce site, designed just for your brand. We create this plan using two key ingredients. First, we dive into your customer behavior data to gather valuable insights. Second, we leverage proven optimization techniques that work specifically for your industry.

That’s the magic of our Conversion Compass program. It’s a strategic tool that reveals the clearest path to success and gives you the confidence to take measurable steps towards increasing conversions. With Conversion Compass, you’ll have a roadmap to guide your testing efforts, ensuring that each experiment brings you closer to your goals.

But roadmapping isn’t just a list of random tests; it’s a strategic process involving stakeholder collaboration, objective setting, data analysis, and prioritization based on impact and alignment with business goals.

In this blog, we break down five pro tips for each of the key roadmapping phases.

Account Onboarding & Business Objectives Distillation

  1. Ensure alignment: Collaborate closely with key stakeholders across departments to ensure that the defined business objectives align with the overall organizational goals. Make sure to break down the WHY behind the objectives.
  2. Focus on measurability: Clearly define measurable goals and key performance indicators that will be used to track the success of the A/B testing roadmap.
  3. Prioritize objectives: Identify and prioritize business objectives based on their potential impact on revenue, customer satisfaction, or other relevant metrics. Build a few ROI models to best understand what your biggest levers are (see the ROI sketch after this list).
  4. Set realistic timelines: Establish realistic timelines for achieving each objective, taking into account resource availability and potential dependencies. Break them into phases and milestones, enabling you to track progress effectively and determine whether adjustments to the overall goal or strategy are necessary.
  5. Document thoroughly: Document all discussions, decisions, and action items from the onboarding process to ensure clarity and accountability throughout the roadmapping journey.
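
To make tip 3 concrete, here is a minimal ROI-model sketch in Python. Every input is a hypothetical placeholder: swap in numbers from your own analytics to see which levers move revenue the most.

```python
# Back-of-napkin ROI model for a testing program.
# All inputs are hypothetical placeholders.
monthly_sessions = 200_000    # traffic to the tested flow
baseline_cvr = 0.025          # current conversion rate
avg_order_value = 85.00       # revenue per conversion
expected_lift = 0.05          # relative lift, e.g. a 5% CVR improvement
annual_program_cost = 60_000  # platform license + team time

added_conversions_per_month = monthly_sessions * baseline_cvr * expected_lift
added_annual_revenue = added_conversions_per_month * avg_order_value * 12
roi = (added_annual_revenue - annual_program_cost) / annual_program_cost

print(f"Added annual revenue: ${added_annual_revenue:,.0f}")  # $255,000
print(f"Program ROI: {roi:.0%}")                              # 325%
```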

Data & Insights Analysis

  1. Gather comprehensive data: Collect data from various sources, including website analytics, historic tests, customer feedback, and market research, to build a holistic picture of the user experience.
  2. Transform data into insights: Data isn’t useful by itself; the insights and applications derived from it are what create value. Focus on extracting meaningful insights that make their significance and potential impact clear.
  3. Rank your insights: Make note of your favorite insights; these can be the ones that offer the biggest ROI potential, spark the most design ideas, or reveal follow-up questions that can be answered through testing.
  4. Define 3-4 optimization themes: Identify patterns, trends, and correlations within the data that directly connect to the business objectives. Define these patterns as optimization themes by attaching a strategy that can create a better experience for users. Every test should be connected to an optimization theme.
  5. Bucket insights into themes: With optimization themes defined, begin organizing your insights by bucketing them within your themes; this will create clear direction and guardrails for future ideation sessions (a minimal sketch follows this list).
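
As a lightweight illustration of tip 5, bucketing can be as simple as grouping insights under theme keys. The themes and insights below are hypothetical examples.

```python
# Hypothetical optimization themes with insights bucketed beneath them
themes = {
    "Reduce checkout friction": [
        "38% of mobile users drop off at the shipping step",
        "Guest checkout is hidden below the fold",
    ],
    "Build purchase trust": [
        "Exit surveys cite an unclear return policy",
    ],
    "Clarify navigation": [
        "Session replays show repeated back-and-forth between category pages",
    ],
}

# Each future ideation session works through one theme's bucket at a time
for theme, insights in themes.items():
    print(theme)
    for insight in insights:
        print(f"  - {insight}")
```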

Ideation Session & Generation of Backlog

  1. Foster creativity: Create a collaborative and supportive environment that encourages team members to think outside the box and share innovative ideas.
  2. Set clear guardrails and structure: Focus on one optimization theme at a time. Make sure ideas fit the theme and are based on data insights. Look at specific parts of your site, and move on when you’ve explored all the ideas for that area.
  3. Prioritize quality over quantity: Focus on generating high-quality test ideas that are based on data insights, user research, and best practices rather than aiming for sheer volume.
  4. Diversify your hypotheses: Include a mix of hypothesis types, such as usability improvements, feature enhancements, messaging variations, and design changes, to cover a wide range of potential tests.
  5. Document and organize: Document all generated test ideas in a centralized backlog, categorizing them based on themes, potential impact, and alignment with business objectives for easy reference and prioritization.

Prioritization & Roadmap Phase

  1. Rank Your Tests: Identify key factors that matter to your business. Score each test idea based on its potential impact on these factors. Give each idea a numeric score to easily compare and rank them (see the scoring sketch after this list).
  2. Balance Effort and Impact: Look at how much work each test takes to run and put into action. Compare this to the expected impact of the test. Prioritize tests that offer a big impact for a reasonable amount of effort. This helps you get the best bang for your buck in your optimization work.
  3. Iterative vs Innovative Approach: Create a roadmap that balances two types of tests: innovative and iterative. Innovative tests are big, complex changes that can have a major impact, either good or bad. Iterative tests are smaller, focused experiments that help you learn and improve gradually. Aim for about 75% iterative and 25% innovative tests. This mix helps you test quickly, boost revenue, and gain clear insights.
  4. Stakeholder Alignment: Involve key stakeholders when prioritizing tests. This ensures your testing plan aligns with your organization’s goals and priorities. Work to get everyone’s buy-in on the final roadmap.
  5. Capture, Monitor and Review: Once everyone agrees on the roadmap, take a screenshot for your records. Regularly check on running tests and review finished ones. Make sure you’re meeting your testing goals and deadlines. After each test, think about what worked, what didn’t, and what you learned. Use these insights to make your testing process even better.
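
Here is the scoring sketch referenced in tip 1: a minimal impact/confidence/effort model, similar in spirit to ICE scoring. The factors, scales, and test ideas are hypothetical; rank on whatever factors matter to your business.

```python
# Score each backlog idea on a 1-5 scale per factor (hypothetical values).
# Higher impact and confidence raise the score; higher effort lowers it.
backlog = [
    {"test": "Simplify checkout steps", "impact": 5, "confidence": 4, "effort": 3},
    {"test": "Add PDP trust badges",    "impact": 3, "confidence": 4, "effort": 1},
    {"test": "Mega-nav redesign",       "impact": 4, "confidence": 2, "effort": 5},
]

for idea in backlog:
    idea["score"] = idea["impact"] * idea["confidence"] / idea["effort"]

# Highest-scoring ideas go to the top of the roadmap
for idea in sorted(backlog, key=lambda i: i["score"], reverse=True):
    print(f'{idea["score"]:5.1f}  {idea["test"]}')
```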

Conversion Compass is a strategic solution designed to simplify the complexities of digital optimization. Our team of experts will work closely with you to develop a custom roadmap tailored to your site’s unique needs and goals. The program combines the power of data-driven insights with proven optimization strategies to help you identify and prioritize the tests and changes that will have the greatest impact on your bottom line. Contact our team today to schedule a consultation and discover how Conversion Compass can help you chart a course to conversion success.

Don’t let the sun set on your Digital Optimization Program

Google is dropping its A/B testing product, Optimize, in September, and if you don’t have a plan to replace it yet, you might be left in the dark.

  1. The stakes: Budgets for innovation are lean, and this is a hard time to get buy-in for a big tech investment with a recurring cost. You could lose your whole program budget if you can’t find a replacement, but the hard part is selecting a tool that ensures the future of the program and getting the budget for it.

  2. The pitch: You have a plan that will align your optimization strategy with revenue and cost savings and show positive ROI on the program quickly. You just need to get the budget approved to get this tool.

  3. The choice: Now you have to make sure you choose the right tool that will future-proof your program and activate your plan. You need to get it up and running quickly so you can start gaining momentum and getting the return.

  4. The promise: You need to get test velocity and wins, so that you are making good on the promise of this investment.

  5. The proof: You need to show ROI in a short period of time and get the program in the black; this requires a measurement model and reporting you can share with leadership to ensure your program is seen as a profit center.

In today’s digital landscape, brands rely heavily on data-driven decision making and a carefully selected suite of technology to manage and optimize their websites, enhance user experience, and drive revenue growth. A critical tool in A/B testing programs has been the free version of Google Optimize, which allows businesses to experiment with different variations of their website and measure the impact on user behavior and conversions. In September 2023, Google is sunsetting this product and referring customers to other products, most of which are not free.

What’s at Stake

With the discontinuation of Google Optimize, many brands are being forced to rethink their suite of technology and make new plans for the future of optimization. This shift will require more investment in infrastructure, training, and staff. In the current economic climate, where organizations are actively seeking cost-cutting measures amid declining innovation investments, it is difficult to secure budget for recurring software licenses and the training to use them. For brands to keep their testing programs, they will have to find the budget to replace Google Optimize, but there are ways to recoup these costs and add momentum to the program by strategically selecting the right A/B testing platform.

The Pitch

If you are about to embark on finding a Google Optimize replacement and the budget to pay for it, first make sure you have a good read on where your optimization program is today, where it could be with the additional power of a new platform, and what it would take to grow your practice to where you want it to be. Consider this migration from Google Optimize a critical piece of your future growth strategy; it may be your big chance to ask for the budget you really need to secure the future of the program.

The Choice

Selecting a reliable A/B testing platform is crucial as it directly impacts the accuracy, scalability, and flexibility of experiments. By choosing the right platform, brands can gather actionable insights to inform their optimization efforts and achieve long-term success. A/B testing platforms all have different strengths and weaknesses, and a wide variety of costs.

When selecting an A/B testing platform to replace Google Optimize, brands should evaluate several factors. These include ease of use, scalability, integration capabilities, statistical rigor, support for personalization, and advanced targeting options. By carefully assessing the features and functionalities offered by different platforms, brands can make an informed decision that aligns with their optimization goals and long-term vision.

Key features that really make the difference

When considering each of these features as they relate to choosing the best A/B testing tool for your company, here’s what you should keep in mind:

  • WYSIWYG vs. Code-Based Editing: WYSIWYG (What You See Is What You Get) refers to an interface that allows non-technical users to make changes visually without writing code, while the code-based approach requires coding knowledge to make modifications. Consider the technical expertise of your team. If you have non-technical members who need to make changes, a WYSIWYG editor would be more user-friendly. If your team is comfortable with coding, a code-based tool may provide more flexibility and customization options.

  • Statistics Engine: The statistics engine of an A/B testing tool calculates the statistical significance and confidence levels of the experiment results. Look for a tool that uses robust statistical methods to ensure accurate and reliable results. It should provide features like p-values, confidence intervals, and sample size calculations to help you interpret the data effectively (see the sketch after this list).

  • Targeting and Personalization Rules: Targeting and personalization rules allow you to define specific audience segments for your experiments and deliver tailored experiences. Consider the sophistication of targeting options. Look for a tool that offers flexible targeting criteria based on user attributes, behavior, demographics, or any other relevant data. Advanced personalization capabilities can help you create more targeted and impactful experiments.

  • Martech and Analytics Integration: Martech (Marketing Technology) and analytics integration enable seamless data exchange between your A/B testing tool and other marketing or analytics platforms. Check if the A/B testing tool integrates with your existing marketing technology stack, such as customer relationship management (CRM) systems, analytics tools, email marketing platforms, etc. Integration allows you to leverage existing data and streamline your workflows.

  • Experiment Management: Experiment management features help you organize and track your A/B tests efficiently. Look for a tool that offers a user-friendly interface to create, schedule, and monitor experiments. It should provide options for segmenting experiments, setting experiment goals, tracking progress, and generating reports. Collaboration features like role-based access control and annotations can also help teams work together.

Remember that the specific requirements of your company might vary based on your team’s skill set, goals, and budget. It’s important to assess these features in the context of your unique needs and select a tool that aligns with your business objectives.
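
To build intuition for what a statistics engine computes, here is a minimal Python sketch of a two-sided two-proportion z-test plus a rule-of-thumb sample-size estimate (95% confidence, 80% power). Real platforms use more sophisticated methods, such as sequential or Bayesian analysis, so treat this as illustration rather than implementation.

```python
import math

def ab_significance(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided two-proportion z-test for an A/B test."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

def sample_size_per_variant(baseline_rate, absolute_mde, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant at 95% confidence / 80% power."""
    p = baseline_rate
    return math.ceil((z_alpha + z_beta) ** 2 * 2 * p * (1 - p) / absolute_mde ** 2)

# Hypothetical results: 2.5% vs 2.8% conversion, 10,000 visitors per variant
z, p = ab_significance(250, 10_000, 280, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")            # p ~ 0.19: not significant yet
# Visitors per variant needed to reliably detect a +0.3pp absolute lift
print(sample_size_per_variant(0.025, 0.003))  # ~42,500
```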

The Promise

Developing a strategic optimization plan is key to maximizing the benefits of A/B testing, and a vital step toward a high return on testing. This roadmap should highlight clear optimization objectives around unlocking trapped revenue: revenue lost to abandonment or inefficient ad spend, or costs that could be reduced by shifting users toward self-service and other online behavior. By aligning tests with revenue and cost goals, brands can prioritize high-impact experiments, streamline the testing process, and ensure a return on their software investment.

An efficient A/B testing platform enables e-commerce brands to achieve high test velocity, allowing for rapid iteration and optimization. Test velocity refers to the speed at which experiments can be conceived, executed, and analyzed. With faster experiments, brands can uncover winning variations sooner, optimize conversion funnels, and continuously improve their website to drive revenue growth.

By identifying and implementing winning variations quickly, brands can maximize the time they benefit from increased conversion rates, average order values, and customer lifetime value, resulting in a net revenue gain for the organization. The effect of a winning test doesn’t last forever; revisit your winners regularly and see if they need a fresh experiment.

The Proof

Investing in an A/B testing platform is a strategic decision that requires careful consideration of cost-effectiveness and return on investment (ROI). While there may be upfront costs associated with the software, a well-executed optimization plan can help recoup the investment within a short period. Be upfront with budget owners about the cost of ownership of an A/B testing tool, but also show them the ROI model you use to make your business case for testing.

Consider that testing not only helps you incrementally increase your site’s revenue; it also helps you avoid costly unvalidated changes. There are both cost-savings and revenue-generation values in testing. CRO is one specific strategy that works well for same-visit purchases, but also think about the value you get from a “do no harm” test, and calculate that into your program ROI (a worked example follows).
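
As a hypothetical worked example, here is how counting do-no-harm value changes the program ROI math; all figures are illustrative placeholders.

```python
# Annualized program ROI, counting revenue wins AND losses avoided
revenue_from_wins = 180_000  # lift attributed to shipped winning variations
avoided_losses = 45_000      # revenue a losing change would have cost
                             # had it shipped unvalidated
program_cost = 75_000        # platform license + services + team time

roi = (revenue_from_wins + avoided_losses - program_cost) / program_cost
print(f"Program ROI including do-no-harm value: {roi:.0%}")  # 200%
```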

In conclusion, having a clear vision for how A/B testing and optimization will impact your bottom line, and a strategy for turning your platform investment into actualized business value, is the key to moving forward with a Google Optimize replacement that will set you up for the future.

With these tips, we hope you are able to create that vision, put together a solid case, and find the A/B testing tool that works best for your business. If you are looking for a partner to help you build your case, create your strategy, or implement your new A/B testing platform, Roboboogie is here to be your guide.

10 Years of A/B Testing: Embracing Failure for Better Tests 

Next month marks the 10-year anniversary of my first A/B test. How things have changed since those early days!

The A/B testing hustle was real – partnering with Optimizely and the incredible founding team there, under the innovative leadership of Dan Siroker and Pete Koomen.

We were all just figuring things out.

We knew that A/B testing had the power to do something incredible in the market (in fact, it helped support Obama’s re-election success in 2012).

At Roboboogie, we were excited that we could add a new layer to our UX strategy and design, with measurable results. Real-time feedback from actual users based on how they performed? Incredible!

Focus groups and user testing had been helpful tools for us, but they only provided academic feedback. A/B testing, on the other hand, unleashed a new approach – an interactive, scientific methodology that could validate outcomes with real user behavior.

Growing up with a biology professor and a chemistry teacher running our household, I viewed the world through a scientific lens. Developing hypotheses was how I figured out the world, and my place in it.

Combining science with marketing in my career was a no-brainer. Bring on A/B testing!

Fast forward to June 2013.

Roboboogie engaged our first A/B testing client. A highly innovative, fast-paced e-commerce client willing to take (smart) risks, with the ‘go-fast-and-break-things’ mentality. “Let’s embrace the experimentation. Test fast, and iterate.”

We were scrappy back then. I remember singlehandedly doing test ideation, sketching UX test variation concepts, hopping on a call with clients for alignment, then jumping into the WYSIWYG editor to build it, set up analytics, QA it, and launch it – sometimes all in the same day.

At our peak, we were launching 12 tests per month (all possible due to their high site traffic and quick purchase cycles).

And we saw big success. Experiments were increasing revenue, unlocking new customer segments, and helping inform product development and positioning.

But the go-go-go approach wasn’t without missteps.

I still cringe to this day about one test in particular.

We tested dynamic pricing for a certain set of luggage products. Depending on the variation, we were showing pricing differently, presenting the retail price, promotional price, and an additional discount layer. Regardless of the variation, the product was priced at $74.99.

The test strategy and architecture were sound. The test was built to spec and seamlessly passed QA.

But when we launched the test, the results were staggering. There were massive lifts in product engagement, product add-to-carts, initiated check-outs, user time on site, total page views per session, and… site-wide add-to-carts?

That’s when I got the phone call from our client partner.

“Umm, we have a shopper on the line talking to customer service who is pretty upset that they can’t buy our featured trip to Costa Rica for $74.99. What is going on?! We need to halt all testing immediately.”

Oops.

Instead of my testing parameters applying only to the backpack products we were piloting our experiment on, they had been applied site-wide. To clothing, snowboards, bikes, kayaks, and even trips. The test was running perfectly for our intended products – but also everywhere else.

The go-fast-and-break-things approach had … broken things. While we had massive wins, we also now had a sobering misstep. The experiment was only live for about 25 minutes, but it created the need for damage control with several customers. Luckily, customers were understanding, and with some store credit exchanged, everything was smoothed over fairly quickly.

That mistake, however, significantly shaped my professional approach and how we approach our testing methodology at Roboboogie. The experience has proved invaluable time and time again. For it is out of failure that our best growth and maturity come.

My attention to detail has never been the same since – an approach we now incorporate into our testing process. We have a fully immersive, multi-disciplinary approach to each step of the testing process – involving thoughtful strategy, smart UX, tight UI, pixel-perfect development, methodical data engineering, and double-and-triple-check QA before launch. Each team member has an eye on the bigger picture goal – launching smart tests, free from errors, with the right balance of speed, strategy, and attention to detail.

For us, our only “failed” tests are the ones that are launched broken.

We embrace the mantra of “go-fast-and-BUILD-things” now. We believe our clients and end-users deserve better than broken tests.

Not every first test we launch results in an immediate net-positive CRO impact (it’s experimentation, after all). But we work to ensure that every test we launch is a winner – driving revenue or leads, elevating the brand experience, unearthing user insights, or unlocking new user segments.

And we’re incredibly proud of our ability to do so.

Would we be where we are today without that mistake? I’m not sure. But I do believe it catapulted us forward. Looking back at that mistake 10 years ago – almost accidentally selling a $3,500 trip to Costa Rica for $75 – I dare say the mistake may have been worth it.

That failure resulted in a decade of better tests – for dozens of clients and millions of tested visitors since.

– Jedidiah Fugle, COO @ Roboboogie