Sometimes it’s just not possible to set up an experiment. Here are a few reasons why A/B tests won’t work in every situation:
- Lack of tooling. For example, if your code can’t be modified in certain parts of the product.
- Lack of time to implement the experiment.
- Ethical concerns (for example, at an e-commerce company like Shopify, randomly leaving some merchants out of a new feature that could help their business is sometimes not an option).
- Just plain oversight (for example, a request to study the data from a launch that happened in the past).
Fortunately, if you find yourself in one of the above situations, there are methods that can still give you causal estimates.
A quasi-experiment is an experiment in which your treatment and control groups are divided by a natural process that isn’t truly random, but is considered close enough to compute estimates. Quasi-experiments frequently occur in product companies: for example, when a feature rollout happens on different dates in different countries, or when eligibility for a new feature depends on the behaviour of other features (as in the case of a deprecation). To compute causal estimates when the control group is divided by a non-random criterion, you’ll use different methods, each corresponding to different assumptions about how “close” you are to the truly random situation.
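To make this concrete, here’s a minimal sketch of one such method, difference-in-differences, applied to a staggered country rollout like the one described above. All the numbers are made up for illustration: we pretend one country received the feature and another did not, and use the untreated country to subtract the shared time trend.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: weekly revenue per merchant, before and after the
# rollout date. Country A got the feature; country B did not.
n = 500
pre_a = rng.normal(100, 10, n)   # country A, before rollout
post_a = rng.normal(112, 10, n)  # country A, after (shared trend +5, feature +7)
pre_b = rng.normal(90, 10, n)    # country B, before rollout
post_b = rng.normal(95, 10, n)   # country B, after (shared trend +5 only)

# Difference-in-differences: the change in the untreated country estimates
# the trend that would have happened anyway; subtracting it from the treated
# country's change isolates the feature's effect (about +7 here).
did = (post_a.mean() - pre_a.mean()) - (post_b.mean() - pre_b.mean())
print(round(did, 1))
```

The key assumption doing the work here is “parallel trends”: absent the feature, both countries would have moved the same way over time. That assumption is exactly the kind of “closeness to random” that each quasi-experimental method makes precise in its own way.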