A/B Test Size and Duration Calculator

Test Calculator

Choose this to measure ‘success/failure’ metrics such as conversions, clicks, or sign-ups.


For example: to detect a change in conversion rate from 10% to a target of 12%, your campaign would need to run for approximately 56 days with your current traffic and statistical settings.
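Under the hood, calculators like this typically use the standard two-proportion sample-size formula. Here is a minimal sketch in Python, assuming a two-sided test at 95% confidence and 80% power (this calculator's exact defaults may differ):

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variation(p1, p2, alpha=0.05, power=0.80):
    """Visitors needed per variation for a two-sided two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 at 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # 0.84 at 80% power
    p_bar = (p1 + p2) / 2
    top = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(top / (p2 - p1) ** 2)

# Detecting a lift from 10% to 12% (a 20% relative improvement):
print(sample_size_per_variation(0.10, 0.12))  # ~3,841 per variation, ~7,700 total
```

Dividing the total sample across both variations by your eligible daily traffic then yields the recommended duration, which is how a figure like 56 days arises.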

How to Use

  • Enter your current conversion rate (e.g., signups, purchases).

  • Select the minimum detectable effect (MDE): the smallest % improvement you want to be able to measure.

  • Choose your test type.

  • Enter your average daily traffic.

  • Get results instantly → sample size per variation + recommended test duration (the duration arithmetic is sketched below this list).
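The duration step is simple division, rounded up to whole days. A minimal sketch, using hypothetical sample and traffic figures:

```python
from math import ceil

def test_duration_days(sample_per_variation, variations, daily_traffic):
    """Days needed: total required sample divided by eligible daily traffic."""
    return ceil(sample_per_variation * variations / daily_traffic)

# Hypothetical inputs: 3,841 visitors per variation, 2 variations, 500 visitors/day
print(test_duration_days(3841, 2, 500))  # 16 days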

When to Use

  • Before starting an A/B test (to plan feasibility)

  • When presenting an experiment plan to stakeholders

  • To avoid running under-powered tests that waste time

  • To check if your site/app has sufficient traffic for valid experiments

Sample Use Cases

Here are a few ways teams like yours use this calculator:

 

E-commerce store

We get around 80,000 visits per month. If our current checkout conversion is 2%, how long will it take to detect a 10% improvement?
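A rough back-of-the-envelope for this case, assuming the common defaults of 95% confidence and 80% power: a 10% relative improvement moves checkout conversion from 2% to 2.2%, which needs roughly 80,000 visitors per variation, or about 160,000 in total. At 80,000 visits a month, that is on the order of two months.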

 

SaaS product

We want to test a new onboarding flow. With 5,000 free-to-paid signups monthly, how many weeks do we need to validate a 15% uplift?

 

Mobile app

Our daily active users are 25,000. How long will it take to measure the impact of a new feature adoption goal?

 

Media & Content site

We want to test different headline styles. With a 5% click-through rate on articles, what’s the sample size required to detect a 5% change?

Frequently Asked Questions (FAQs)

Why is my required sample size so large?

Likely because you chose a very small MDE. Smaller changes need more data.

Can I use this calculator for metrics other than website conversions?

Yes, as long as you know your baseline conversion rate and traffic numbers.

Do Multi-Armed Bandit tests need a fixed sample size or duration?

Multi-Armed Bandit tests do not require pre-defined sample sizes or a fixed duration. Instead, they adjust dynamically, shifting traffic towards better-performing variations in real time.
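As an illustration, here is a minimal Thompson-sampling sketch (one common bandit strategy; the counts and the strategy itself are assumptions, not necessarily what any given tool uses):

```python
import random

# Hypothetical running counts for two variations: conversions and non-conversions
successes = [20, 28]
failures = [180, 172]

def pick_variation():
    """Thompson sampling: draw a plausible conversion rate for each variation
    from its Beta posterior, then route the next visitor to the highest draw."""
    draws = [random.betavariate(1 + s, 1 + f)
             for s, f in zip(successes, failures)]
    return draws.index(max(draws))

print(pick_variation())  # usually 1, since variation B (14%) is ahead of A (10%)
```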

What margin of error should I use?

For CRO experiments, a 5% margin of error (corresponding to a 95% confidence level) is standard. However:

  • Use 10% when you want a quick read on direction or are testing early ideas.
  • Use 1 to 2% when the decision is big, like pricing or checkout changes.
  • Lowering your margin of error will increase your required sample size and duration, as the worked numbers below this list show.
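For instance, plugging the 10% → 12% example into the sample-size sketch above gives roughly 3,000 visitors per variation at 90% confidence, about 3,850 at 95%, and about 5,700 at 99% (all at 80% power).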
Why do different calculators give different answers?

  • Frequentist testing sets the sample size first, then checks how often the observed winner would still win across repeated runs.
  • Bayesian testing updates confidence with every new visitor, so you can stop once the winner looks clear.

Each approach defines confidence in its own way, which is why calculators built on different approaches behave differently.
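To make the Bayesian bullet concrete, here is a minimal Monte-Carlo sketch with hypothetical conversion counts, estimating the probability that variation B beats variation A from their Beta posteriors:

```python
import random

# Hypothetical results: A converted 200/2,000 (10%), B converted 240/2,000 (12%)
a_conv, a_n = 200, 2000
b_conv, b_n = 240, 2000

# Sample each variation's Beta posterior and count how often B beats A
trials = 100_000
wins = sum(
    random.betavariate(1 + b_conv, 1 + b_n - b_conv)
    > random.betavariate(1 + a_conv, 1 + a_n - a_conv)
    for _ in range(trials)
)
print(wins / trials)  # ~0.98: B is very likely the better variation
```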