What is Sample Ratio Mismatch (SRM) in A/B Testing?
Sample Ratio Mismatch (SRM) is a critical concept in A/B testing that occurs when the observed traffic split between control and variant groups doesn't align with the expected split set for the experiment. This discrepancy can significantly impact the validity of test results and lead to incorrect business decisions.
The SRM Indicator in Mida:
Mida's Sample Ratio Mismatch (SRM) indicator is a powerful tool that:
- Automatically checks if the ratio of users in control and variant groups matches the expected allocation.
- Helps identify potential issues affecting test validity.
- Provides early warning signs of problems in your experimentation setup.
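Under the hood, an SRM check like this boils down to a chi-square goodness-of-fit test: compare the number of users observed in each group against the counts the configured split would predict. The sketch below illustrates the math for a two-group test; it is a minimal illustration of the statistic, not Mida's actual implementation.

```javascript
// Chi-square goodness-of-fit check for a two-group experiment.
// Minimal illustration of the statistic behind SRM detection --
// not Mida's actual implementation.
function hasSRM(controlCount, variantCount, expectedControlRatio = 0.5) {
  const total = controlCount + variantCount;
  const expectedControl = total * expectedControlRatio;
  const expectedVariant = total - expectedControl;
  const chiSquare =
    (controlCount - expectedControl) ** 2 / expectedControl +
    (variantCount - expectedVariant) ** 2 / expectedVariant;
  // 3.841 is the p < 0.05 critical value for 1 degree of freedom:
  // a larger statistic means the observed split is very unlikely
  // under the configured allocation.
  return chiSquare > 3.841;
}

console.log(hasSRM(5000, 5010)); // false -- small deviation, normal noise
console.log(hasSRM(5000, 5400)); // true  -- flag this test for investigation
```

Note that even a seemingly modest imbalance (5,000 vs 5,400) gets flagged: at that traffic volume, random noise alone cannot plausibly explain the gap.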
Common causes of SRM and how to address it:
1. Incorrect Script Installation
Suppose we're running a Split URL test where we want to split traffic 50/50 between the Control (A) and Variation (B) pages. Here's how incorrect script installation could lead to SRM:
- Control Page (A):
- The correct Mida project script is properly installed.
- It's functioning as intended, assigning 50% of visitors to the Control group.
- Variation Page (B):
- The script is either missing entirely or an incorrect version is installed.
- As a result, visitors to this page are not being properly tracked or assigned to the Variation group.
To fix this, ensure that the CORRECT Mida project script is properly installed on both the Control and Variation pages:
- Verify the script is installed on both the Control and Variation pages.
- Double-check that the correct Mida project key is used, especially on all variation pages.
- Make sure to add the script to every page where you want the experiment to run.
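One quick way to verify the installation is to check for the script in the browser's DevTools console on each page. The snippet below is a hypothetical sketch: adjust the "mida" substring to whatever your actual project snippet's src contains.

```javascript
// Run in the DevTools console on both the Control and Variation pages.
// Hypothetical check -- adjust the "mida" substring to match the src of
// your actual project snippet.
const midaScript = document.querySelector('script[src*="mida"]');
if (!midaScript) {
  console.warn("No Mida script found on this page");
} else {
  // Compare this URL (including any embedded project key) across pages.
  console.log("Mida script found:", midaScript.src);
}
```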
2. Incorrect Page Targeting Configuration
SRM can occur when the test targeting is set up incorrectly. Here are some examples to illustrate how this misconfiguration can occur.
Scenario A: Incorrect URL targeting
Suppose the test is set up to target the URL "https://www.mida.so/features" but the actual URL structure includes additional parameters or subcategories:
Targeted URL: https://www.mida.so/features
Actual URLs:
- https://www.mida.so/features
- https://www.mida.so/features/web-personalization
- https://www.mida.so/features?utm_source=google
In this case, visitors to the subcategory pages or pages with UTM parameters would be selected for the test but never actually see the variation, causing an SRM.
How to fix this:
Use a wildcard pattern: Instead of targeting the exact URL, use a pattern that captures all relevant pages: https://www.mida.so/features*
This wildcard pattern will match:
- https://www.mida.so/features
- https://www.mida.so/features/web-personalization
- https://www.mida.so/features?utm_source=google
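Conceptually, a trailing "*" means "this prefix, then anything". The small sketch below imitates that behavior so you can test candidate patterns against your real URLs; it's an illustration of the matching logic, not Mida's internal matcher.

```javascript
// Imitation of trailing-wildcard matching for testing candidate patterns.
// Illustrative only -- not Mida's internal matcher.
function matchesWildcard(pattern, url) {
  if (pattern.endsWith("*")) {
    // A trailing "*" matches anything after the prefix, including
    // extra path segments and query parameters.
    return url.startsWith(pattern.slice(0, -1));
  }
  return url === pattern; // no wildcard: exact match only
}

const pattern = "https://www.mida.so/features*";
[
  "https://www.mida.so/features",
  "https://www.mida.so/features/web-personalization",
  "https://www.mida.so/features?utm_source=google",
].forEach((url) => console.log(url, matchesWildcard(pattern, url))); // all true
```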
Scenario B: Inconsistent page structure
If the mida.so website has inconsistent URL structures across different pages, it could lead to targeting issues:
Targeted URL pattern: https://www.mida.so/*/
Actual URLs:
- https://www.mida.so/pricing (matches)
- https://www.mida.so/about-us (matches)
- https://www.mida.so/blog/article-1 (doesn't match)
Visitors to the blog pages wouldn't be included in the test, potentially causing an SRM.
Why it doesn't match:
a) Trailing slash: The targeted pattern "https://www.mida.so/*/" ends with a slash, which means it's looking for URLs with exactly one segment after "mida.so", followed by a slash boundary (matchers commonly treat that final slash as optional, which is why "/pricing" and "/about-us" still match). The blog URL has two segments ("blog" and "article-1").
b) Wildcard limitation: Between slashes, the "*" wildcard matches only a single URL segment. Unlike the trailing "*" in Scenario A, it doesn't capture multiple segments.
To fix this and include blog pages, you could modify the targeting pattern:
- Remove the trailing slash: "https://www.mida.so/*". With the wildcard at the end of the pattern, it matches everything after the domain (just as "features*" did in Scenario A), including multi-segment blog URLs.
- Use a regex pattern: "^https://www\.mida\.so/.*$" This regex will match any URL that starts with "https://www.mida.so/" regardless of how many segments follow.
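Before saving a regex targeting rule, it's worth sanity-checking it against your real URLs, for example in the browser console:

```javascript
// Sanity-check the regex targeting rule against the real URLs.
const rule = /^https:\/\/www\.mida\.so\/.*$/;
[
  "https://www.mida.so/pricing",
  "https://www.mida.so/about-us",
  "https://www.mida.so/blog/article-1",
].forEach((url) => console.log(url, rule.test(url))); // all three: true
```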
Scenario C: Dynamic content loading
If mida.so uses dynamic content loading (e.g., single-page application), the URL might not change when navigating between sections:
Initial URL: https://www.mida.so/
After navigation: https://www.mida.so/ (URL doesn't change, but content does)
If the test is set up to target specific content sections based on URL, it might fail to capture all relevant traffic.
To address SPA issues in experiment execution, consider implementing DOM change listeners to capture dynamic content updates or adding custom JavaScript triggers at important points in your application. This gives you precise control over when and how tests are initiated.
You can find these options on the 'CONFIGURATION' tab when setting up your test.
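As a concrete illustration of the DOM change listener approach, here is a minimal sketch using a MutationObserver. The "#pricing-section" selector and the activateExperiment() function are hypothetical placeholders for your own trigger condition and Mida activation code.

```javascript
// Minimal DOM change listener for an SPA where the URL doesn't change.
// "#pricing-section" and activateExperiment() are hypothetical placeholders.
let activated = false;

function activateExperiment() {
  if (activated) return; // guard: only activate once per page load
  activated = true;
  // ... your Mida trigger code goes here ...
}

// Fire whenever the SPA swaps new content into the page without a reload.
const observer = new MutationObserver(() => {
  if (document.querySelector("#pricing-section")) {
    activateExperiment();
    observer.disconnect(); // stop observing once the experiment is active
  }
});
observer.observe(document.body, { childList: true, subtree: true });
```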
3. Manual Triggering in SPAs (Single-Page Applications)
Imagine a React-based e-commerce site where users can browse products without page reloads.
When using manual triggering (Execute by Javascript) for SPAs, there's an increased risk of Sample Ratio Mismatch, which can skew your test results.
This could be caused by:
- Timing issues: The experiment code fires before or after the manual trigger, so some users are never entered into the test.
- Conditional triggering: The trigger code only runs under certain conditions, excluding some users.
- Multiple triggers: Accidentally triggering the experiment multiple times for some users.
To Avoid SRMs:
- Consistent triggering: Ensure the manual trigger fires at the same point in the user journey for all users.
- Single trigger point: Avoid multiple trigger points that could fire the experiment more than once.
- Error handling: Implement proper error handling to prevent partial executions.
- Monitoring: Regularly check your experiment results for signs of SRM and investigate any discrepancies promptly.
By following these practices and being aware of the potential issues with manual triggering, you can ensure more accurate and reliable experiment results in your Single Page Applications and other scenarios requiring manual experiment activation.
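Putting these practices together, the sketch below shows one way to trigger an experiment consistently in a React-style SPA. The "/products" path and the body of triggerExperiment() are hypothetical placeholders; substitute your own route condition and the Mida activation call from your 'Execute by Javascript' setup.

```javascript
// Consistent, single-fire manual triggering for an SPA.
// The "/products" path and the activation call are hypothetical placeholders.
let triggered = false;

function triggerExperiment() {
  if (triggered) return; // single trigger point: never fire twice
  triggered = true;
  try {
    // ... your Mida "Execute by Javascript" activation call goes here ...
  } catch (err) {
    // Error handling prevents partial executions from skewing assignment.
    console.error("Experiment trigger failed:", err);
  }
}

function onRouteChange() {
  // Consistent triggering: the same condition runs for every user on
  // every navigation, so all product-page visitors are eligible.
  if (window.location.pathname.startsWith("/products")) {
    triggerExperiment();
  }
}

// Cover both programmatic navigation and the back/forward buttons.
const originalPushState = history.pushState.bind(history);
history.pushState = (...args) => {
  originalPushState(...args);
  onRouteChange();
};
window.addEventListener("popstate", onRouteChange);
onRouteChange(); // also evaluate on the initial page load
```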
4. User Intervention
Imagine an online retailer conducted an A/B test on their product page layout during the holiday season. The test progressed as follows:
- Weeks 1-2: The test ran smoothly, with equal traffic to both versions.
- Week 3: Variation B was accidentally paused for a day during a major sale.
- Week 4: The test continued as normal.
Results showed Variation B performing significantly better. However, these results could be misleading due to:
- Variation B missing a day of potentially lower conversions during the sale (when people might browse more but buy less).
- The original version receiving more exposure during a high-traffic period.
To address these issues, the following actions are recommended:
- Exclude the affected date range: Remove the day when Variation B was paused from the analysis.
- Reset the experiment data: Clear all existing data and restart the experiment.
Additionally, it's crucial to:
- Review the test's change history to ensure no traffic reallocation, targeting changes, or variation number alterations occurred after the experiment began.
- Utilize the graph view on the results page to check visit numbers for each variation, which can reveal if discrepancies occur only on specific dates.
By taking these steps, you can ensure the integrity of your A/B test results and make more informed decisions based on accurate data.
5. Bot-Induced SRM
SRM can sometimes be caused by non-human traffic, particularly bots. Mida actively identifies and excludes these bots to protect your test results. If you notice unusual traffic patterns or suspect bot interference in your experiments, don't hesitate to contact Mida support: we'll investigate using the logs and user agent data we maintain, and update our bot exclusion list if necessary, ensuring the integrity of your experiment data.
Why is SRM Detection important?
By prioritizing SRM detection, you safeguard the validity of your A/B tests and enhance the quality of your data-driven decision-making:
Data Integrity: SRM checks ensure your test results are reliable. Without them, you risk basing decisions on skewed or inaccurate data.
Resource Optimization: By identifying invalid tests early, SRM detection prevents wasted time and resources on flawed experiments.
Technical Insights: SRM can reveal underlying issues in your testing infrastructure, data collection, or experiment setup.
Fairness Assurance: It helps maintain equal treatment of users across test groups, preventing unintended bias in your experiments.
Credibility Building: Regular SRM checks demonstrate rigor in your testing process, building trust with stakeholders and team members.
Continuous Improvement: Addressing SRM encourages ongoing refinement of your experimentation methods, leading to more accurate insights over time.