Why practice A/A testing?


What's A/A Testing All About?




You may be familiar with A/B testing, where you split traffic between two distinct pages – the original (control) and a variation (a modified version of the original) – to determine which one yields more conversions.


Well, A/A testing takes a slightly different approach. Here, you are looking at two identical pages competing against each other. The purpose of running an A/A test is to spot any problems with your testing setup or platform.


If your experiment is set up correctly, it should end with an inconclusive result and no "winner".



Why Do Some Organizations Do A/A Testing?

In an A/A test, you compare a webpage with an exact copy of it. Since both pages are identical, there should be no 'winner'. If your A/A test shows one version is better, something is wrong. This could be because:


  • The tool was configured incorrectly.
  • The test was not run correctly.
  • The testing tool itself is unreliable.


Organizations typically run A/A tests when they are integrating a new A/B testing tool. There are several reasons to run an A/A test first:


1. Confirming Accuracy: 

Running an A/A test allows an organization to verify the accuracy of the new A/B testing tool's results. If the test displays a clear 'winner' between two identical pages, there may be a problem with the tool's functionality or setup.


2. Setting Baseline Conversion Rate: 

It's crucial for organizations to understand their average or expected conversion rate before they start making changes. By running an A/A test, they can establish a benchmark conversion rate that can be used for comparison in future A/B tests.


Imagine running an A/A test, where out of 15,000 visitors, 500 people converted on your original webpage (Variation A). Now, on Variation B (which is exactly like A), 505 out of 15,000 visitors converted.


So, Variation A converted about 3.33% of visitors while B converted about 3.37%. Since A and B are identical and these conversion rates are so close together, we can say our baseline conversion rate is around 3.33%-3.37%. Later, if you conduct an A/B test and the conversion rate falls within this range, it could indicate that the outcome isn't statistically significant or noteworthy.
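To make that check concrete, here is a minimal Python sketch (standard library only) that reproduces the arithmetic above and runs a simple two-proportion z-test on the A/A numbers. The visitor and conversion counts come from the example; the test itself is a generic illustration, not something Mida runs for you.

```python
from math import sqrt
from statistics import NormalDist

# A/A test numbers from the example above
visitors_a, conversions_a = 15_000, 500
visitors_b, conversions_b = 15_000, 505

rate_a = conversions_a / visitors_a   # ~3.33%
rate_b = conversions_b / visitors_b   # ~3.37%

# Two-proportion z-test: is the difference larger than random noise?
pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
z = (rate_b - rate_a) / std_err
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

print(f"Variation A: {rate_a:.2%}, Variation B: {rate_b:.2%}")
print(f"z = {z:.2f}, p-value = {p_value:.2f}")  # p ≈ 0.87 -> no winner, as expected
```

A large p-value like this is exactly what a healthy A/A test should produce; a very small one would point back to the setup issues listed earlier.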


3. Deciding on Sample Size: 

When conducting any kind of test, it’s critical to determine how many participants (or in this case, website visitors) are needed for the test results to be statistically significant. With an A/A test, organizations can figure out an adequate minimum sample size, ensuring the validity and reliability of their future tests.
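As a rough illustration of that calculation, the sketch below uses the standard two-proportion sample-size formula at 95% confidence and 80% power. The 3.33% baseline comes from the earlier example, while the 10% relative lift is an assumed minimum detectable effect, not a Mida default.

```python
from statistics import NormalDist

def sample_size_per_variation(baseline_rate, relative_lift,
                              alpha=0.05, power=0.80):
    """Approximate visitors needed per variation to detect the given lift."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int(((z_alpha + z_beta) ** 2 * variance) / (p2 - p1) ** 2) + 1

# Assumed inputs: 3.33% baseline (from the A/A test), 10% relative lift
print(sample_size_per_variation(0.0333, 0.10))  # roughly 48,000 visitors per variation
```

The smaller the lift you want to detect, or the lower your baseline conversion rate, the more visitors each variation will need.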



How do I create an A/A test in Mida?


1. Navigate to 'Experiments', then click on 'Add New Test'. Select the A/A test option and click 'Start a new test'.



2. Assign a name to your test campaign, input the URL of the page where you intend to run the A/A test, and then click 'Next'.



3. Assuming you're aiming to target all your site visitors, you can proceed by clicking 'Next' on the 'Targeting' tab. When you reach the 'Goal' tab, establish a goal for your test to define what constitutes a successful conversion. In the following example, we're defining a successful conversion as a visitor who lands on the URL 'https://app.mida.so/sign-up'.





4. Next, in 'Configuration', the default settings should work fine. You can click 'Publish Test' directly to keep your A/A test straightforward and uncomplicated. That's it: your A/A test is live!




