Testing Into a Feature Rollout

Who doesn’t love the idea of adding a new feature to their website to enhance customers’ experiences and make more money?

It could be a new survey that helps users find the right product. A pop-up modal or banner that connects users with promotions or offers. It could be new payment options like Apple Pay, payment plans, or an extended warranty or coverage plan like those offered by Clyde.

All of those sound like features designed to make your customers happy and to make you more money. Perfect! 

Introducing new elements or features to your digital experience can be a fantastic way to address customers’ wants and needs. New features can improve customer satisfaction and drive the bottom-line revenue and conversion goals of your business. 

Sounds great, right? Create the feature, roll it out, and count the money and high fives from happy customers. 

But before you flip the switch on the new feature, let’s stop and consider a strategic approach – testing into new feature rollouts. 

While new features start with the best intentions of providing an improved customer experience and helping businesses achieve their goals, the way the changes fit into the existing user experience can be more complicated than expected, and in some cases can have drastically negative consequences. 

1. Why Should I Test Into a Feature Rollout?

Let’s start with the why. Changes to your website or digital experience start with an assumption – or in optimization parlance, a hypothesis. The hypothesis for a new feature can often be as simple as, “By rolling out feature Y (a pop-up modal with recommended products), we will see an increase in add-to-cart clicks, leading to an increase in average order value.”

Illustration showing a popup causing user frustration

It sounds great, but how do we validate that our assumption or hypothesis is correct? And how do we ensure that the new feature doesn’t create a negative experience for users or negatively impact the metric we are trying to improve? 

For example, suppose users can’t figure out how to close the pop-up, and a large percentage of those users abandon the shopping experience without purchasing. Yikes! 

While this would be a very undesirable outcome of your initial test, this data is invaluable in improving the user experience. And because the experience is only presented to a small number of visitors, you haven’t negatively impacted ALL of your potential buyers. 

Testing into a feature rollout allows you to strategically introduce your new feature to a small number of users and compare the impact of the new experience to your existing experience. This approach allows you to launch your feature with confidence, and with the data to validate your decision to roll out. 

Let’s take a look at what this approach looks like in practice. 

2. Identify the Feature and Define Your Measurement of Success

Along with establishing your hypothesis, you should also establish your measures of success. 

Defining success removes ambiguity in the feature’s impact and allows you to objectively evaluate the performance of your new feature. These metrics for success can be anything that you can use to measure the effectiveness of your changes. In eCommerce, we often look at increases in add-to-cart rate, average order value, or alternatively, a decrease in cart abandonment or a decrease in exit rate in the shopping cart. 

We recommend taking note of all of the metrics you consider and sharing them with your analyst as they are all potential KPIs for your testing plan.

Illustration showing conversion rates chart

Let’s look at an example where we are trying to address low conversion rates on our eCommerce website. We design and develop a new way to serve promotions to users on our site: a pop-up with an enticing, personalized offer. In this case, our main measure of success is the conversion rate for users who saw the pop-up compared to the conversion rate for users who did not.
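To make that comparison concrete, here is a minimal sketch (with hypothetical traffic numbers, not real client data) of how conversion rate and relative lift are calculated for the pop-up group versus the control group:

```javascript
// Hypothetical numbers: compare conversion rate for users who saw the
// pop-up (variant) against users who did not (control).
function conversionRate(conversions, visitors) {
  return conversions / visitors;
}

const control = { visitors: 10000, conversions: 250 }; // 2.5% conversion rate
const variant = { visitors: 10000, conversions: 300 }; // 3.0% conversion rate

// Relative lift: how much better the variant converts, as a fraction of
// the control's conversion rate.
const lift =
  (conversionRate(variant.conversions, variant.visitors) -
    conversionRate(control.conversions, control.visitors)) /
  conversionRate(control.conversions, control.visitors);

console.log((lift * 100).toFixed(1) + '% relative lift'); // → "20.0% relative lift"
```

Note that a half-point absolute change (2.5% to 3.0%) reads as a 20% relative lift – be clear with your analyst about which framing your success metric uses.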

Features are not always focused on conversion rate. We can also look at improvement in clickthrough rate, increased promotional redemptions, increased revenue attributed to marketing, increased add-to-carts for promoted products, decreased cart abandonment, total revenue lift, or something else entirely depending on your unique factors. 

The key here is that knowing your measurement of success before you start will remove ambiguity in evaluating a test, regardless of how successful it is.

3. Identify Your Testing Variables

Once you have success defined, it’s time to evaluate the variables in your current experience or in your new feature that will likely contribute to a successful rollout.  

Most often, these variables are going to fall into the categories of design and development. This is where having a team of experts who understand testing and testing strategy is vital. We recommend including an optimization strategist, UX/UI designer, and experience engineer or developer in the conversation as you identify your testing variables and evaluate the potential impact on the user experience, as well as the impact on scope and timeline.

When evaluating the variables with your experts, you can think about the visual UI design of the feature, when and where in the user experience the feature is presented to users, and how the feature impacts a user’s journey through your site with their non-linear, real-world behavior. 

Mockup of a user journey
Visualizing where in the user journey your new feature will go

From a visual perspective, we also recommend evaluating how the presentation of a new feature impacts and reflects your brand. Does the new feature look on-brand? Is the messaging consistent with what your customers expect? Does the type of feature resonate with your customer base? 

As you evaluate the UX and UI options and start crafting visual solutions, we recommend communicating with the engineer who will be building and launching your test, to ensure the experience can be built the way you envision it without causing technical issues. Experienced engineers can also provide strategic input from a technical perspective and identify potential technical hurdles that could hurt the user experience or make for an ineffective test. 

4. Design, Build, and Run Your Test

With your hypothesis in place, your success metrics defined, and your variables isolated, it’s time to design, build, and run your test. 

If A/B testing and conversion rate optimization are everyday practices at your organization, this process is likely in place. If you’re newer to testing and optimizing and diving in with a feature rollout – congratulations! There are a few common pieces of technology that you’ll need in place to easily run and monitor your tests. 

You need a testing platform – this is the tool that will serve your new feature to a subset of users. Tools like A/B Tasty, Convert, Google Optimize, Kibo, and many others allow you to run your tests and determine the number of users who will see your new feature and the number who will see the previous experience. 

These tools integrate with almost every website. If you aren’t sure where to start, we can help!
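Under the hood, most testing platforms split traffic with deterministic bucketing. The sketch below is our own illustration of the idea (not any specific platform’s API): hash a stable user ID into a 0–99 range so each visitor always lands in the same variation, and only your chosen percentage sees the new feature.

```javascript
// Deterministic bucketing sketch (hypothetical, not a platform API):
// a stable user ID always hashes to the same bucket, so a returning
// visitor never flips between variations mid-test.
function hashToPercent(userId) {
  let hash = 0;
  for (let i = 0; i < userId.length; i++) {
    hash = (hash * 31 + userId.charCodeAt(i)) >>> 0; // keep it unsigned 32-bit
  }
  return hash % 100; // a value in [0, 100)
}

function assignVariation(userId, rolloutPercent) {
  return hashToPercent(userId) < rolloutPercent ? 'feature' : 'control';
}

// Roll the new feature out to roughly 10% of visitors:
assignVariation('user-1234', 10);
```

The key property is consistency: the same ID always gets the same answer, which is what lets you compare the two groups cleanly over the life of the test.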

Illustration of graphs and charts

You need an analytics tool or tools. Accurate measurement of your tests is critical. The good news is that there are many great options for reliable analytics, including Google Analytics, as well as the reporting tools included with some of the testing platforms we’ve mentioned above. We also recommend a digital experience analytics tool with session replay like FullStory, which allows you to quickly identify user pain points and frustrations. 

With your design, build, and testing platform in place, it’s time to run and monitor the test. Your optimization strategist should be the one to determine how long a test should run to achieve statistical significance and the percentage of users who should see the test variation. 
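For context on what “statistical significance” means here, the standard check for comparing two conversion rates is a two-proportion z-test. This is a sketch with hypothetical numbers – in practice your testing platform or optimization strategist handles this for you.

```javascript
// Two-proportion z-test sketch: is the variant's conversion rate
// different from the control's by more than chance would explain?
function zScore(convA, nA, convB, nB) {
  const pA = convA / nA; // control conversion rate
  const pB = convB / nB; // variant conversion rate
  const pooled = (convA + convB) / (nA + nB); // combined rate under "no difference"
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB)); // standard error
  return (pB - pA) / se;
}

// Hypothetical: 250/10,000 control conversions vs 300/10,000 variant.
// |z| > 1.96 corresponds to 95% confidence (two-tailed).
const z = zScore(250, 10000, 300, 10000);
console.log(z.toFixed(2), Math.abs(z) > 1.96 ? 'significant' : 'keep running');
```

This is also why the run length matters: with too few visitors, even a real improvement won’t clear the significance bar.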

5. Analyze and Iterate

As tests conclude, it may be tempting to simply look at your success metric and move on. However, it’s important to dig into and analyze as much of the data as possible to uncover insights that may lead to valuable iterations.  

I know we said “test” (singular), but there is a lot of opportunity to iterate and run multiple tests as you refine the experience surrounding your new feature and maximize its impact on your goals and your customer experience. 

Alternatively, you may find only a small impact, a statistically insignificant impact, or a negative impact when you run a test. That doesn’t mean it’s time to throw the feature out with the bathwater. That’s when it’s time to dive into deeper analysis to understand what did and didn’t positively connect with your visitors.

Illustration of Roboboogie's process for testing and optimization
Continuous test-and-optimize track

While this process can take time and it’s certainly not a magic wand, we’ve found incredible success in sticking to the test-and-optimize methodology with our clients. The fact is, by testing into a new feature you mitigate the risks of rolling out a new feature to your entire audience and give yourself the opportunity to roll out an optimized version, backed by data!

6. Implement with Confidence

Congratulations, you have developed a new feature that you know performs to your expectations and your users’ expectations! You have the data to back it up, and a baseline for further optimization if you want to ideate and improve the feature further. 

Additionally, because you have been testing with your real users, many of them have already been exposed to the new feature, resulting in easier adoption and stronger, more consistent performance.

While this testing methodology can take time and planning, it is a way to confidently roll out new features, functionality changes, and user experience changes rooted in data. Taking a test-and-optimize approach to your digital experience as a whole creates near limitless opportunities for digital transformation and optimization. 

If you aren’t sure how to get started with testing or want to consult with the best in the business – send us an email and we’ll set up a meeting with you!

Introducing Daniel and Our Expanded Internship Approach

Meet Daniel Adeyemi!

Daniel moved to Oregon from Russia four years ago. He’s been a university chemistry instructor, a P.E. teacher, and currently works as an NCAA basketball referee. And recently, he finished a three-month development internship with our team at Roboboogie.

Daniel decided to learn to code after working on his own refereeing website. When he applied to Epicodus, he checked a box for scholarships for underrepresented groups. Not long after, he found out he had gotten a scholarship for the Epicodus training program from Blacks in Technology and the paid internship with Roboboogie.

“My main goal,” he said, “was just don’t quit.”

So far, he’s stuck to that goal and gone above and beyond “just don’t quit.”

In his time with us, he’s worked on building A/B tests for clients, as well as putting the work of his fellow developers through the QA process to ensure their work looks great and functions on a wide range of computers, tablets, and mobile devices. In his words, he’s “learning how to do the job of a developer”, and we think he’s done a fantastic job! 

“The biggest thing I learned was how to work on code that was in production,” Daniel said. 

observeDOM(document.querySelector('body'), queryString, function (el) {
  // Inject +/- quantity controls into the minicart if they aren't there yet
  if (!$('.minicart-wrapper .update-item-count').length) {
    $('.minicart-items-wrapper + .actions .primary .subtotal').remove();
    $('.minicart-items .item-qty').before('<span class="update-item-count" data-new-count="-1">-</span>');
    $('.minicart-items .item-qty').after('<span class="update-item-count" data-new-count="1">+</span>');
    if ($('#cloned-cart').length >= 1 || $('#minicart').length == 0) {
      $('#minicart-icon').clone().attr('id', 'cloned-cart').prependTo('.items-total');
    }
  }
  if (el.matches('.minicart-wrapper .update-item-count')) {
    // (handler body trimmed from this excerpt)
  }
  if (el.matches('.minicart-wrapper .subtitle.empty')) {
    // (handler body trimmed from this excerpt)
  }
  // When the subtotal updates, offer a monthly financing price for carts
  // between $399 and $5,000
  if (el.matches('#bottom-subtotal')) {
    var payOverTimeOption = parseFloat($('.t-16 .block-minicart #bottom-subtotal .price').text().substring(1).replace(/,/g, ''));
    if (payOverTimeOption < 5000 && payOverTimeOption > 399) {
      payOverTimeOption = Math.ceil(payOverTimeOption / 12);
      $('.t-16 .block-minicart .subtotal .amount .price-wrapper .price').append(' or ' + '<a href="/en-US/financing" id="price-in-cart">' + '$' + payOverTimeOption + '/mo</a>');
      $('.t-16 .block-minicart .subtotal .amount .price-wrapper .price').append('<img id="subscription-service-subtotal-rbg" src="...hidden" alt="subscription service" />');
    }
  }
});
Code written by Daniel as part of his internship for a shopping cart test.

He said he appreciated the pressure and challenges of working on live projects. He also loved the collaboration of working with a team on websites for real-world clients.

Jeremy Sell, director of technology, said Daniel has been an ideal intern. 

“He’s great. He’s fantastic to work with. His code is now being seen by millions of people, which is pretty cool!”

Now, Daniel turns his attention to finding the next step in his development journey while he also continues his work as a referee. Finding a company with a culture to match his experience at Roboboogie is high on his list. As is finding a place that is diverse and supportive, with interesting and challenging projects to work on. We were incredibly impressed by Daniel and are excited to see what his next chapter looks like.

Building an internship partnership

Daniel was the first Roboboogie intern that came to the company as part of a partnership the company formed with the nonprofit Blacks in Technology and coding school Epicodus.

Tech, like many industries, has a history and perception of being white and male. Roboboogie is part of that industry — as with all companies, it is run by human beings with implicit biases who can sometimes have trouble separating out what those biases even are.

Roboboogie decided we wanted to do better. How could we make the company a more inclusive workplace and what would it take to actually remove implicit bias from the equation?

We looked at our hiring processes and could see that we needed to do better when it came to hiring historically excluded people.

That’s when we moved forward with a partnership with two groups – Epicodus and Blacks in Technology – to create a paid internship for a student from Epicodus, selected by Blacks in Technology. We realized our own biases would come into play if we picked the intern candidate, so we turned that process over to Blacks in Technology.

“They were on board,” Jeremy said. “They asked us some hard questions about ourselves, which I was excited about.” 

Daniel won’t be the last intern to come through a program like this, and we’re excited about helping ourselves and other companies come up with ways to remove implicit bias from hiring.

“We have visions for this to be something that we do more often,” Jeremy said. “Maybe we can make this happen for more people and with more companies, and make it easier for companies to bring on people from all groups and to change the way tech looks and make it more representative of what our communities actually look like.”

We’d like to again thank Daniel for joining us for his internship and Blacks in Technology and Epicodus for their partnership.

Blacks In Technology is a global platform for Black people in technology. We are “Stomping the Divide” by establishing standards for world-class technical excellence. BIT serves members through community, media, and mentorship. We provide resources, guidance and challenge members to establish new standards of innovation.

Epicodus’s mission is to help people learn the skills they need to get great jobs. For us, “great jobs” means jobs in growing industries that pay well and provide rewarding work. Beyond the particular skills needed to get these jobs, we aim to help our students become confident self-teachers who can adapt to changing job markets, and great communicators who will work well in teams. We focus on serving people who, by birth or circumstance, don’t have easy access to learning the skills they need to get great jobs.