
Three A/B Testing Mistakes I Keep Seeing (And How to Avoid Them)

Over the past few years, I have observed many common errors people make when designing A/B tests and performing post-analysis. In this article, I want to highlight three of these mistakes and explain how they can be avoided.

Using Mann–Whitney to compare medians

The first mistake is the incorrect use of the Mann–Whitney test. This method is widely misunderstood and frequently misused, as many people treat it as a non-parametric “t-test” for medians. In fact, the Mann–Whitney test is designed to determine whether there is a shift between two distributions.


When applying the Mann–Whitney test, the hypotheses are defined as follows:

H0: the two distributions are identical.
H1: one distribution is a shifted copy of the other (a location shift θ ≠ 0).

We must always consider the assumptions of the test. There are only two:

  • Observations are i.i.d.
  • The distributions have the same shape

How to compute the Mann–Whitney statistic:

  1. Sort all observations from both samples together by magnitude.
  2. Assign ranks to all observations (ties receive the average rank).
  3. Compute the U statistics for both samples from the rank sums R1 and R2:

     U1 = R1 − n1(n1 + 1)/2,  U2 = R2 − n2(n2 + 1)/2

  4. Choose the minimum of these two values: U = min(U1, U2).
  5. Use statistical tables for the Mann–Whitney U test to find the probability of observing this value of U or lower.
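The steps above can be sketched in Python. The data here are illustrative draws from two normal distributions; SciPy's `mannwhitneyu` serves as a cross-check, since it reports exactly the U statistic of the first sample:

```python
import numpy as np
from scipy.stats import mannwhitneyu, rankdata

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 30)
b = rng.normal(0.5, 1.0, 30)  # shifted by 0.5

# Steps 1-2: rank all observations together (ties get the average rank)
ranks = rankdata(np.concatenate([a, b]))
r1 = ranks[: len(a)].sum()
r2 = ranks[len(a):].sum()

# Step 3: U statistics for both samples
u1 = r1 - len(a) * (len(a) + 1) / 2
u2 = r2 - len(b) * (len(b) + 1) / 2

# Steps 4-5: take the minimum and look up its p-value
u = min(u1, u2)

res = mannwhitneyu(a, b, alternative="two-sided")
print(u1, res.statistic, res.pvalue)
```

A useful sanity check: U1 + U2 always equals n1 · n2, the total number of cross-sample pairs.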

Since we now know that this test should not be used to compare medians, what should we use instead?

Fortunately, in 1945 the statistician Frank Wilcoxon introduced the signed-rank test, now known as the Wilcoxon signed-rank test. Note that it is a paired test: it works on the differences within matched pairs of observations.

The hypotheses for this test match what we originally expected:

H0: the median of the paired differences is zero.
H1: the median of the paired differences is not zero.

How to calculate the Wilcoxon Signed Rank test statistic:

  1. For each paired observation, calculate the difference, keeping both its absolute value and sign.

  2. Sort the absolute differences from smallest to largest and assign ranks.

  3. Compute the test statistic, the sum of the signed ranks:

     W = Σ sgn(d_i) · R_i

  4. The statistic W follows a known distribution. When n is larger than roughly 20, it is approximately normally distributed. This allows us to compute the probability of observing W under the null hypothesis and determine statistical significance.

    Some intuition behind the formula: under the null hypothesis, positive and negative differences are equally likely at every rank, so the signed ranks largely cancel and W stays close to zero. A large positive or negative W signals a systematic shift in one direction.
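The procedure can be sketched with illustrative paired data; SciPy's `wilcoxon` serves as a comparison, keeping in mind that SciPy reports the smaller of the positive and negative rank sums rather than the signed sum W itself:

```python
import numpy as np
from scipy.stats import wilcoxon, rankdata

rng = np.random.default_rng(1)
before = rng.normal(100, 10, 25)
after = before + rng.normal(2, 5, 25)  # paired measurements with a small shift

# Step 1: signed differences
d = after - before

# Step 2: rank the absolute differences from smallest to largest
ranks = rankdata(np.abs(d))

# Step 3: W = sum of the signed ranks
w = float(np.sum(np.sign(d) * ranks))

# SciPy reports min(T+, T-), the smaller one-sided rank sum, not W
res = wilcoxon(after, before)
print(w, res.statistic, res.pvalue)
```

The two statistics carry the same information: with T+ the sum of ranks of positive differences and n pairs, W = 2·T+ − n(n + 1)/2.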

Using bootstrapping everywhere and for every dataset

The second mistake is applying bootstrapping all the time. I’ve often seen people bootstrap every dataset without first verifying whether bootstrapping is appropriate in that context.

The key assumption behind bootstrapping is:

==The sample must be representative of the population from which it was drawn.==

If the sample is biased and poorly represents the population, the bootstrapped statistics will also be biased. That’s why it’s crucial to examine proportions across different cohorts and segments.

For example, if your sample contains only women, while your overall customer base has an equal gender split, bootstrapping is not appropriate.
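A minimal simulation of this failure mode, with made-up numbers for the two groups: the women-only sample produces a tight bootstrap confidence interval that confidently misses the true population mean.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical population: 50/50 gender split, different mean spend per group
women = rng.normal(50, 5, 50_000)
men = rng.normal(70, 5, 50_000)
pop = np.concatenate([women, men])

representative = rng.choice(pop, 1_000, replace=False)
biased = rng.choice(women, 1_000, replace=False)  # women only

def boot_mean_ci(sample, n_boot=2_000):
    """95% percentile-bootstrap confidence interval for the mean."""
    means = [rng.choice(sample, len(sample), replace=True).mean()
             for _ in range(n_boot)]
    return np.percentile(means, [2.5, 97.5])

ci_rep = boot_mean_ci(representative)
ci_bias = boot_mean_ci(biased)
print("population mean:", round(pop.mean(), 2))  # ~60
print("representative sample CI:", ci_rep)
print("women-only sample CI:", ci_bias)          # centered near 50, misses 60
```

No amount of resampling can fix the bias: the bootstrap only sees the sample it is given.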

Always using default Type I and Type II error values

Last but not least is the habit of blindly using default experiment parameters. The vast majority of analysts and data scientists stick with the defaults: a 5% Type I error rate and a 20% Type II error rate (i.e., 80% power).

Let’s start with a natural question: why not just set both Type I and Type II error rates to 0%?

==Because doing so would require an infinite sample size, meaning the experiment would never end.==

Clearly, that’s not practical. We must strike a balance between the number of samples we can collect and acceptable error rates.
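A quick way to see this is to plug shrinking error rates into the standard per-group sample-size formula for a two-sample z-test (a sketch, assuming known variance σ and a minimum detectable effect of 0.1σ):

```python
from scipy.stats import norm

def n_per_group(alpha, beta, sigma=1.0, mde=0.1):
    """Per-group sample size for a two-sample z-test with equal variances."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(1 - beta)
    return 2 * (z * sigma / mde) ** 2

for err in [0.05, 0.01, 1e-4, 1e-8, 1e-16]:
    print(f"alpha = beta = {err:g}  ->  n per group ~ {n_per_group(err, err):,.0f}")
```

As both rates approach zero, the normal quantiles diverge, and the required sample size grows without bound.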

I encourage people to consider all relevant product constraints.

The most convenient way to do this is to build a table of candidate parameters (error rates, MDE, and the resulting sample size) and discuss it with product managers and the people responsible for the product.

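A sketch of such a table, generated from the same z-test sample-size formula; the parameter grid and the metric standard deviation are made up for illustration:

```python
from itertools import product
from scipy.stats import norm

def sample_size_per_group(alpha, power, sigma, mde):
    """Per-group n for a two-sample z-test with known variance."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(2 * (z * sigma / mde) ** 2) + 1

sigma = 1.0  # assumed (illustrative) standard deviation of the metric
print(f"{'alpha':>7} {'power':>6} {'MDE':>6} {'n per group':>12}")
for alpha, power, mde in product([0.05, 0.01], [0.8, 0.9], [0.1, 0.05]):
    n = sample_size_per_group(alpha, power, sigma, mde)
    print(f"{alpha:>7} {power:>6} {mde:>6} {n:>12,}")
```

Dividing each n by your weekly traffic turns the last column into an experiment runtime, which is usually the number product managers actually care about.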

For a company like Netflix, even a 1% MDE can translate into substantial profit. For a small startup, that’s not true. Google, on the other hand, can easily run experiments involving tens of millions of users, making it reasonable to set the Type I error rate as low as 0.1% to gain higher confidence in the results.



Our path to excellence is paved with mistakes. Let’s make them!
