About A/B Testing

What is A/B Testing

A/B testing—and its broader family of experimental designs—lets you compare ideas with hard data, so you act on what truly helps the business, not just what feels right.

Even when a classic A/B test isn't feasible, alternative methods (geo-splits, holdouts, diff-in-diff, and more) still give you the evidence you need for confident, data-driven decisions.

  1. Pick one change – test a new price, redesigned page, or marketing campaign.
  2. Split into groups – randomly show users version A or B.
  3. Measure KPIs – track conversion, retention, and revenue per user.
  4. Statistical analysis – determine real impact vs. random variation.
  5. Ship the winner – roll out changes that deliver results.
  6. See impact – understand the performance improvement.

Why Do You Need A/B Testing?

For Companies

ROI Maximization • Competitive Advantage • Risk Mitigation • Data-Driven Culture • Product Validation • Learning Loop
  • Maximise ROI – focus engineering and design budget on proven winners; cut losses early on bad ideas.
  • Stay ahead of competitors – data-driven roadmaps out-iterate rivals that still rely on gut feel.
  • De-risk big bets – test pricing, UX, and feature shifts on a slice of traffic before a full rollout.
  • Validate product changes – release only what measurably improves revenue, conversion, or retention.
  • Build an experiment-driven culture – replace opinion battles with evidence; speed up decision cycles.
  • Systematic learning loop – each test feeds a knowledge base that compounds over time.

For Employees

Career Growth • Hiring Advantage • Higher Compensation • Decision Ownership • Performance Reviews • Future-Proof Skills
  • Accelerate career growth – hands-on experimentation skill is a must-have for senior analyst, PM, and growth roles.
  • Stand out in the hiring funnel – A/B expertise boosts HR screening pass-rates and impresses hiring managers.
  • Higher compensation – companies pay a premium for talent that can prove uplift, not just report metrics.
  • Own product decisions – design, run, and interpret tests without waiting for a separate analytics team.
  • Shine at performance reviews – show direct business impact, not vanity metrics.
  • Future-proof your skill set – experimentation is core at Amazon, Booking.com, Airbnb, and every product-led company.

Who is this course for?

This course is for you

Data Professional

Data Analyst • Product Analyst • Marketing Analyst • Data Scientist • ML Engineer

Turn everyday data into trustworthy insights, master the nuances of A/B testing, avoid common pitfalls, and be confident defending your results.

Decision-Maker

Product Manager • Growth Manager • Marketing Manager • Operations Manager

Make roadmap calls with clear experiment outcomes, a workable grasp of the stats, and data you can trust—without waiting on a committee.

Engineer

Backend Developer • Platform Engineer • MLOps

Embed experimentation into the systems you build and understand how it works under the hood—from assignment and metrics to analysis and rollout.

Data Career Starter

Student • Recent Graduate • Career-Switcher

Get hands-on projects, job-ready experimentation skills, and a solid statistical foundation to launch your analytics career.

Reach Company Goals

Enhance your company's experimentation capabilities and drive measurable business outcomes

Raise the bar & improve processes in your company

Process Improvement • Team Velocity • Methodology • Automation
  • Increase velocity and the number of experiments.
  • Raise the bar for experimentation across teams.
  • Improve methodology, boost metric sensitivity, and increase experiment power.
  • Open new use-cases for testing in marketing, pricing, UX, and other domains.
  • Automate, standardise, and speed up experiment analysis.

Make statistically justified product decisions on your own

Statistical Analysis • Independence • Decision Making • Communication
  • Read p-values, confidence intervals, and Bayes factors with confidence.
  • Size and run tests without waiting for an analyst.
  • Understand the reasoning behind analysts' recommendations.
  • Don't wait for analytics resources—make your own justified decisions.
  • Present results in plain language that guides product direction.

Deal with experimentation constraints

Legal Compliance • Data Integrity • Risk Management
  • Handle legal constraints in pricing and other sensitive experiments.
  • Manage user contamination (overlap, interference, holdback hygiene) to keep results trustworthy.

Achieve Personal Goals

Advance your career with cutting-edge A/B testing expertise and industry recognition

Move to a more advanced analytical team & company

Career Growth • Team Leadership • Advanced Skills • Industry Network
  • Join top-tier analytics teams at Booking.com, Netflix, Uber, Airbnb, and other leading companies.
  • Get recognized for advanced experimentation expertise and statistical rigor.
  • Lead A/B testing initiatives and guide product strategy with data-driven insights.

Become an absolute expert

Expert Level • Leadership • Advanced Methods • Team Lead
  • Get hands-on experience and become an expert in complex experimentation methodologies such as variance reduction, holdouts, sequential tests, Monte-Carlo simulation, advanced designs (switchbacks, cross-overs, factorial, interleaving, AAB experiments, etc.), and Bayesian Contextual Multi-Armed Bandits.
  • Become a tech / team lead of a data-science experimentation team.

Grow from beginner to professional

Beginner Friendly • Guided Learning • Portfolio Project • Career Ready
  • Start with the basics—metrics, hypotheses, stats foundations.
  • Follow guided labs that mirror real company setups.
  • Finish with a capstone project you can demo to employers.

Ready to Advance Your Career?

Join data professionals who've already transformed their careers with our A/B testing expertise.

What will you gain

What You'll Be Able to Do

Master the best-practice experimentation skills employed at leading companies such as Booking.com, Wolt, DoorDash, Delivery Hero, and Netflix—and turn them into measurable gains for your business and your career.

For Companies

  • Lift ROI across teams – turn product, engineering, and analytics spend into measurable revenue gains.
  • Ship features faster – shorten experiment cycles and reach release decisions sooner.
  • Operate under constraints – run valid experiments despite legal, traffic, or platform limits.
  • Uncover new use cases – extend experimentation from pricing and funnels to ML models, marketing, supply-chain logic, and more.

For Professionals

  • Raise the analytical bar – apply industry-leading methodologies to every test you run.
  • Demonstrate best-practice expertise – deliver rigorous experiment designs and precise, variance-reduced results.
  • Convert interviews into offers – answer the hiring-manager A/B questions that filter top candidates.
  • Stand out at performance reviews – demonstrate clear, measured gains from your initiatives and earn higher ratings and compensation.

Course Contents

A/B Testing & Online Experimentation — Comprehensive Course Outline

1. Overview of Experimentation

Build a foundational understanding of experimentation culture, goals, challenges, and constraints.

Goals, Challenges & Impact Framing

Turn problems into testable hypotheses tied to business outcomes and clear decision rules.

Experiment Types

Match user‑, time‑, or geo‑level designs to traffic, interference risk, and operational constraints. Include patterns such as AA/AAB tests; holdout & long‑term holdback; crossover & Latin square; switchback/geo‑lift; interleaving for search/ranking; full & fractional factorial; synthetic control; diff‑in‑diff, propensity‑score matching, clustering; and synthetic diff‑in‑diff.

Constraints & Risks

Account for seasonality, upstream dependencies, pricing/cannibalisation effects, legal/ethical/privacy limits, and potential impact if something goes wrong; set ramps, holdbacks, and stop‑losses accordingly.

Building an Experimentation Culture

Operationalise a scalable workflow—idea intake → sizing → design review → launch → monitoring → analysis → decision log—so more high‑quality tests ship faster.

Experiment Lifecycle Overview

Define artefacts (hypothesis brief, exposure spec, metrics plan), exit criteria, and a pre‑launch checklist & pilots so ideas move predictably from design to decision.

Client‑ vs. server‑side tracking

Use a hybrid: client events capture UI intent but face lag/loss (ad‑blockers, privacy, app backgrounding), while server events are precise and ordered; deduplicate and resolve identities once.
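
As a toy illustration of that dedup step, here is a minimal sketch assuming both the client SDK and the server emit the same event with a shared event_id (all column names and values are hypothetical): keep one row per (user_id, event_id) and prefer the server copy.

```python
import pandas as pd

# Toy event log: the same purchase arrives from both the client SDK and the
# server, so we keep one row per (user_id, event_id) and prefer the server copy.
events = pd.DataFrame({
    "user_id":  ["u1", "u1", "u2", "u2"],
    "event_id": ["e1", "e1", "e7", "e8"],
    "source":   ["client", "server", "server", "client"],
    "revenue":  [9.99, 9.99, 4.50, 15.00],
})

deduped = (
    events
    .sort_values("source", ascending=False)   # "server" sorts ahead of "client" this way
    .drop_duplicates(subset=["user_id", "event_id"], keep="first")
)
print(deduped)
```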

2. Metrics & Decision Frameworks

Tie KPIs to revenue and risk with explicit thresholds and windows so decisions are binary, fast, and comparable across tests.

Metric Taxonomy

Separate primary, guardrail, and diagnostic metrics to keep goals focused and trade‑offs explicit.

Metric Windows & Time‑to‑event metrics

Choose attribution and survival windows that reflect true latency to impact.

Novelty, Learning & Resistance Effects

Distinguish transient behaviour changes from durable lift with ramps and time trends.

Metric Trees & North‑Star Metrics (NSM)

Decompose NSM into drivers so mechanisms and side‑effects are visible.

Sensitivity, Directionality & Elasticity

Prioritise ideas by detectable effect size, expected sign, and business elasticity.

Guardrails & Stop‑loss Rules

Pre‑define critical thresholds that automatically pause harmful tests.

3. Statistical Foundations

Size tests correctly and interpret results rigorously so you stop shipping false wins and missing real lift.

Frequentist Foundations

Build intuition for estimators, variance, and intervals so results are properly calibrated.

Hypothesis Testing & p‑values

Pre‑specify tests and interpret p‑values and CIs correctly to prevent overclaiming.

Statistical Power, MDE & Sample‑Size Calculators

Size experiments using historical variance and traffic to balance speed and risk.
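
To make the sizing step concrete, here is a minimal sketch of a per-group sample-size calculation for a conversion metric, using the normal approximation for a two-sided two-proportion test; the 5% baseline rate and 0.5-percentage-point MDE are made-up illustration values.

```python
from scipy.stats import norm

def sample_size_per_group(p_baseline, mde_abs, alpha=0.05, power=0.8):
    """Approximate n per group for a two-sided, two-proportion z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    p_treat = p_baseline + mde_abs
    variance = p_baseline * (1 - p_baseline) + p_treat * (1 - p_treat)
    return int((z_alpha + z_beta) ** 2 * variance / mde_abs ** 2) + 1

# Example: 5% baseline conversion, detect an absolute lift of 0.5 percentage points.
print(sample_size_per_group(0.05, 0.005))   # roughly 31k users per group
```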

Bootstrapping & Non‑parametric Methods

Use resampling when assumptions fail to obtain robust intervals and tests.

Poisson Bootstrap

Apply Poisson(1) weights for scalable uncertainty estimates on streaming or distributed logs.
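
A minimal sketch of the idea on simulated per-user revenue: each unit gets an independent Poisson(1) weight, so the resample can be computed in a single pass and aggregated across workers.

```python
import numpy as np

rng = np.random.default_rng(42)
revenue = rng.exponential(scale=20.0, size=10_000)   # simulated per-user revenue

n_boot = 2_000
means = np.empty(n_boot)
for b in range(n_boot):
    # One independent Poisson(1) weight per user approximates multinomial
    # resampling but needs no global coordination, so it streams/distributes well.
    weights = rng.poisson(1.0, size=revenue.size)
    means[b] = (weights * revenue).sum() / weights.sum()

lo, hi = np.percentile(means, [2.5, 97.5])
print(f"mean = {revenue.mean():.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```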

Introduction to Bayesian Inference

Combine priors with data to report credible intervals and decision probabilities.
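
As an illustration of the Bayesian framing, a small Beta-Binomial sketch with made-up conversion counts and flat Beta(1, 1) priors, reporting the probability that B beats A and a credible interval for the lift.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up conversion counts with flat Beta(1, 1) priors on each variant's rate.
conv_a, n_a = 480, 10_000
conv_b, n_b = 540, 10_000

post_a = rng.beta(1 + conv_a, 1 + n_a - conv_a, size=100_000)
post_b = rng.beta(1 + conv_b, 1 + n_b - conv_b, size=100_000)

lift = post_b - post_a
print("P(B > A) =", (lift > 0).mean())
print("95% credible interval for the lift:", np.percentile(lift, [2.5, 97.5]))
```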

4. Advanced Statistical Techniques

Handle many metrics and variants—and stop early—without inflating error, using simulations and multiplicity‑aware methods to decide sooner.

Monte‑Carlo & Resampling Simulations

Simulate traffic and effects to tune ramps, power, and guardrails before launch.
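
For example, a quick simulation of repeated A/A tests can confirm that the false-positive rate of your chosen test matches the nominal alpha before you rely on it; the distributions below are arbitrary illustration values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
alpha, n_sims, n_per_group = 0.05, 2_000, 1_000

false_positives = 0
for _ in range(n_sims):
    # Both groups come from the same distribution, so every "significant"
    # result is, by construction, a false positive.
    a = rng.normal(100, 30, n_per_group)
    b = rng.normal(100, 30, n_per_group)
    _, p_value = stats.ttest_ind(a, b, equal_var=False)
    false_positives += p_value < alpha

print("empirical false-positive rate:", false_positives / n_sims)   # should sit near 0.05
```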

Multiple Comparisons & False‑Discovery Rate (FDR)

Control family‑wise error across many metrics or variants with principled adjustments.
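
A minimal sketch of the Benjamini-Hochberg step-up procedure on a toy list of p-values; the numbers are invented purely to show which hypotheses survive at a 5% false-discovery rate.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Boolean mask of hypotheses rejected at FDR level q (BH step-up)."""
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    thresholds = q * np.arange(1, m + 1) / m
    passed = pvals[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.nonzero(passed)[0].max()      # largest i with p_(i) <= q * i / m
        reject[order[: k + 1]] = True
    return reject

metric_pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.300, 0.740]
print(benjamini_hochberg(metric_pvals))      # only the two smallest p-values survive here
```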

Sequential & Adaptive Testing

Monitor accumulating data with alpha‑spending or Bayesian rules to stop earlier safely.

Ratio Metrics

Use stable estimators that handle variable denominators without biasing conclusions.

Linearization

Approximate ratios as additive effects to enable simple, higher‑power tests.

Delta method

Estimate variances of transformed metrics via Taylor expansion for quick, reliable CIs.
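
A sketch of the delta-method standard error for a ratio metric such as revenue per session, assuming the randomisation unit is the user and (revenue, sessions) are per-user totals; the simulated data are for illustration only.

```python
import numpy as np

def ratio_mean_and_se(x, y):
    """Delta-method standard error for the ratio metric sum(x) / sum(y),
    where (x_i, y_i) are totals per randomisation unit (e.g., per user)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    mean_x, mean_y = x.mean(), y.mean()
    var_x, var_y = x.var(ddof=1), y.var(ddof=1)
    cov_xy = np.cov(x, y, ddof=1)[0, 1]
    ratio = mean_x / mean_y
    var_ratio = (var_x / mean_y**2
                 - 2 * mean_x * cov_xy / mean_y**3
                 + mean_x**2 * var_y / mean_y**4) / n
    return ratio, np.sqrt(var_ratio)

rng = np.random.default_rng(1)
sessions = rng.poisson(5, 5_000) + 1                # per-user session counts
revenue = rng.gamma(2.0, 3.0, 5_000) * sessions     # per-user revenue
r, se = ratio_mean_and_se(revenue, sessions)
print(f"revenue per session = {r:.3f} +/- {1.96 * se:.3f}")
```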

Fieller's theorem

Construct valid ratio intervals when denominators are noisy or near zero.

Bucketization & Cluster‑Robust Inference

Correct for within‑cluster correlation so significance is not overstated.

5. Variance Reduction & Sensitivity Boosters

Recover signal with pre‑period data and covariates to get the same power with less traffic and shorter run times.

Stratification / Blocking

Balance key covariates at assignment to reduce variance from the outset.

CUPED & CUPAC

Subtract predictable noise using pre‑period or covariate information to gain sensitivity.
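
A minimal CUPED sketch, assuming the covariate is the same metric measured in the pre-experiment period; theta is estimated from the data at hand, and the numbers are simulated purely to show the variance drop.

```python
import numpy as np

def cuped_adjust(metric, pre_metric):
    """CUPED: y - theta * (x_pre - mean(x_pre)), with theta = cov(y, x_pre) / var(x_pre)."""
    theta = np.cov(metric, pre_metric, ddof=1)[0, 1] / np.var(pre_metric, ddof=1)
    return metric - theta * (pre_metric - pre_metric.mean())

rng = np.random.default_rng(3)
pre = rng.gamma(2.0, 10.0, 20_000)                   # pre-period spend per user
post = 0.7 * pre + rng.normal(0, 10, 20_000) + 5     # correlated in-experiment spend

adjusted = cuped_adjust(post, pre)
print("variance before:", round(post.var(), 1), "after CUPED:", round(adjusted.var(), 1))
```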

Covariate‑Adjusted Regression (ANCOVA)

Improve precision by modelling outcomes with treatment and predictive features.
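
The same idea expressed as a regression: a sketch on a hypothetical simulated dataset where the outcome is modelled on a treatment indicator plus a predictive pre-period covariate, so the treatment coefficient comes with a smaller standard error than a plain difference in means.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 10_000
pre = rng.gamma(2.0, 10.0, n)                         # pre-period covariate
treatment = rng.integers(0, 2, n)                     # random 50/50 assignment
y = 0.6 * pre + 2.0 * treatment + rng.normal(0, 8, n)

df = pd.DataFrame({"y": y, "treatment": treatment, "pre": pre})

# Adjusting for the predictive covariate shrinks the standard error on the
# treatment coefficient compared with an unadjusted difference in means.
fit = smf.ols("y ~ treatment + pre", data=df).fit()
print(fit.params["treatment"], fit.bse["treatment"])
```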

Weighted Variance Estimators & Control Variates

Reweight observations or use auxiliary signals to tighten intervals.

Shrinkage & Hierarchical Models

Partially pool segments to stabilise estimates where data are sparse.

6. Experiment Design Patterns

Match design to constraints (traffic, geo, carryover) so causal claims hold even when simple user‑level A/B is impossible.

AA, A/A/B & Smoke Tests

Prove assignment integrity and variance assumptions before high‑stakes launches.

Holdout & Long‑Term Holdback

Preserve a stable control to measure delayed or indirect effects credibly.

Crossover & Latin Square

Counterbalance period and order effects when units can be their own controls.

Switchback / Geo‑Lift

Randomise by time or geo to avoid contamination in marketplaces and offline settings.

Interleaving & Online Ranking Experiments

Mix results at the item level to compare rankers with higher sensitivity.

Full & Fractional Factorial Designs

Estimate interactions efficiently under traffic limits with planned aliasing.

Synthetic‑Control Methods

Build weighted counterfactuals when randomisation is constrained or impossible.

Diff‑in‑Diff, Propensity‑Score Matching, Clustering

Use quasi‑experiments with diagnostics that verify core assumptions.

Synthetic Diff‑in‑Diff

Relax parallel‑trend assumptions by combining DiD with synthetic controls.

7. Special Topics

Choose randomisation units and interference‑aware estimators so measured lift reflects reality, not spillover or leakage.

Randomisation Units & Bias–Variance Trade‑off

Choose user, session, household, or geo units to balance power against interference.

Network & Interference Effects

Detect and mitigate spillovers with cluster designs, graph cuts, or saturation tests.

9. Advanced Experiment Design & Analysis

Execute end‑to‑end studies from scoping to decision, emphasising mechanisms, validation checks, and stakeholder‑ready communication.

Search & Recommendations

Design interleaving or A/B tests that quantify ranking quality, engagement, and revenue trade‑offs. Control click‑bias and measure both short‑term lift and longer‑run relevance.

Dynamic Pricing

Estimate elasticity and revenue impact while respecting fairness, churn, and margin guardrails. Compare local vs. global pricing policies and seasonality interactions.

Marketing Campaigns

Measure true incrementality via user‑ or geo‑level randomisation and align outcomes to MMM/attribution. Diagnose spillovers, saturation, and halo effects across channels.

Logistics & Route Optimisation

Test dispatch, batching, or routing policies with minimal interference to marketplace dynamics. Balance operational KPIs with customer experience and cost.

Offline Venues / Stores

Design geo cells, detect interference, and adjust for local seasonality to isolate causal impact. Validate with pre‑period fit and placebo tests.

10. Contextual Multi-Armed Bandits

Operationalise personalisation that pays for itself: measure regret and profit, choose the right policy, and integrate pipelines that make learning safe and auditable.

Success Metrics & ROI

Define the business objective in measurable terms—incremental revenue, margin, or retention—and track both cumulative reward and regret against a holdout or baseline. Report speed‑to‑lift and expected profit per 1,000 decisions so stakeholders see payback, not just accuracy.

Data & Pipelines

Log context, action, propensity, and reward with deterministic exposure and idempotent updates. Build offline evaluation datasets (IPS/DR with clipping) and daily policy snapshots so changes are auditable and reproducible.

Policy Choice & Tuning

Select policies by economics and data shape: ε‑greedy for cold start, UCB for confidence‑driven exploration, Thompson/LinTS for probabilistic or linear‑context problems. Tune exploration budgets, add eligibility rules, caps, and fairness constraints to respect business guardrails.
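
To make the policy menu concrete, here is a minimal Thompson-sampling sketch for a plain (non-contextual) Bernoulli bandit with three arms; the true conversion rates are made up, and a contextual LinTS variant would replace the Beta posteriors with a per-arm linear model.

```python
import numpy as np

rng = np.random.default_rng(11)
true_rates = np.array([0.040, 0.050, 0.046])   # unknown to the policy; illustration only
alpha = np.ones(3)                             # Beta(1, 1) prior per arm
beta = np.ones(3)

conversions = 0
for _ in range(50_000):
    samples = rng.beta(alpha, beta)            # one posterior draw per arm
    arm = int(np.argmax(samples))              # play the arm that currently looks best
    reward = rng.random() < true_rates[arm]
    alpha[arm] += reward                       # posterior update keeps exploration adaptive
    beta[arm] += 1 - reward
    conversions += reward

print("plays per arm:", (alpha + beta - 2).astype(int))
print("total conversions:", conversions)
```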

A/B vs Bandits: Evaluation & Profit

Run shadow evaluation or a small A/B holdout to compare bandit profit versus static policies. Use counterfactual estimates to forecast ROI, then ramp traffic in stages (e.g., 5% → 20% → 50%) with pre‑defined stop/roll‑back criteria.

Rollout & Governance

Define ownership, change‑management, and decision logs; monitor guardrails (latency, error rates, adverse outcomes) in real time. Provide a kill‑switch and weekly review of regret and profit so exploration remains safe and value‑creating.

11. Methodology Research

Replicate, benchmark, and improve methods that raise sensitivity or causal validity, with publishable artefacts.

Improving Metric Sensitivity with ML

Train models to predict outcomes or variance and use them for reweighting/adjustment. Demonstrate reduced runtime or MDE on historical experiments.

Developing Proxy Metrics Using ML

Design early indicators with validated linkage to long‑run goals (e.g., retention, revenue). Monitor calibration drift and refresh models on schedule.

Your Own Exploration

Pick a question, set success criteria, and deliver a short paper with code and reproducible notebooks. Include ablations and limitations for credibility.

12. Experiment Analysis Automation

Automate metrics, inference, and reporting so decisions are consistent, fast, and audit‑ready.

Metrics Configuration Management

Store metric definitions, windows, joins, and filters as versioned configs (e.g., YAML/DB) with approvals and change history. Provide a self‑serve UI so owners can propose changes without editing code while keeping lineage intact.

Query Configuration & Templates

Offer parameterised SQL/templates and macros for common joins (exposure → events → outcomes), segmentations, and windows. Enforce query linting, cost guards, and dry‑run previews so analyses are correct and affordable.

Metrics & Inference Pipelines

Build reusable jobs that compute metrics, confidence intervals, and variance‑reduction adjustments. Parameterise by exposure set, window, and segmentation; emit artefacts for reproducibility.

Alerting & Monitoring

Implement SRM and guardrail alarms with links to diagnostics and runbooks. Log resolutions and false‑positives to refine thresholds and reduce alert fatigue.
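
A minimal SRM-alarm sketch: a chi-square goodness-of-fit test of observed assignment counts against the intended 50/50 split, with a deliberately strict alert threshold (the counts and threshold are illustrative, not prescriptive).

```python
from scipy.stats import chisquare

observed = [50_912, 49_088]               # users actually assigned to A and B
total = sum(observed)
expected = [total * 0.5, total * 0.5]     # intended 50/50 split

stat, p_value = chisquare(observed, f_exp=expected)
if p_value < 0.001:                       # strict threshold keeps alert noise low
    print(f"SRM alert (p = {p_value:.2e}): check assignment and exposure logging before reading results")
else:
    print(f"no SRM detected (p = {p_value:.3f})")
```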

Report Templates & Decision Logs

Generate decision‑ready summaries with effect sizes, risk, and recommended action. Persist signed‑off decisions and inputs for compliance and future learning.

Scaling Across the Company

Define ownership, SLAs, and onboarding; track adoption (experiments/week, time‑to‑decision, guardrail breach rate). Provide enablement materials and office hours to drive self‑serve usage while maintaining standards.

CI/CD & Testing

Add unit/integration tests for metric definitions and exposure joins, plus backfills and load checks. Ship safely via staged releases with automated schema/version compatibility checks.

13. A/B‑Platform Development & Deployment

Ship a minimal, reliable experimentation platform that enables self‑serve testing at scale.

Assignment & Eligibility Service

Provide deterministic bucketing and eligibility logic with conflict detection. Expose APIs so services can request assignments safely.
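
A sketch of deterministic bucketing, assuming a string user ID and an experiment-specific salt: hashing the pair yields a stable point in [0, 1) that maps to a variant without any assignment storage (the experiment name and weights here are hypothetical).

```python
import hashlib

def assign_variant(user_id: str, experiment: str, weights: dict) -> str:
    """Deterministic bucketing: same user + experiment always yields the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    point = int(digest[:15], 16) / 16**15          # stable pseudo-uniform value in [0, 1)
    cumulative = 0.0
    for variant, weight in weights.items():
        cumulative += weight
        if point < cumulative:
            return variant
    return variant                                  # guard against floating-point rounding

print(assign_variant("user-42", "checkout_redesign_v2", {"control": 0.5, "treatment": 0.5}))
```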

Exposure Logging & Identity

Record exposure once with stable identities across devices and sessions. Prevent double counting with idempotency and late‑arriving data handling.

Metrics Service & Definitions

Centralise metric formulas and windows as versioned, testable code. Offer on‑demand recomputation and immutable snapshots for audits.

Analyst UI & Self‑Serve

Deliver a simple interface to create, monitor, and analyse experiments without engineering tickets. Include pre‑flight checks and guided templates.

Reliability, Backfills & Load

Validate correctness with backfills, replay tests, and stress/load runs. Track SLOs for assignment latency, exposure completeness, and report freshness.

Projects you'll complete

Design, build, and deliver five portfolio-ready projects that demonstrate practical experimentation skills

Advanced Experiment Design and Analysis

End-to-end, best-practice analysis—from selecting your North Star, key, and guardrail metrics, estimating sample size, and choosing an effective rollout strategy, to experiment analysis, variance reduction, and decision making.

Experiment Setup and Validation
  • Validate the setup with A/A and A/A/B experiments to control the false-positive rate (FPR) and ensure statistical power.
  • In case of disproportionate traffic allocation (SRM), adjust the sample size to maintain valid FPR and power.
  • Adjust the sample size based on the most effective variance-reduction technique for the chosen metrics—e.g., CUPED, CUPAC, weighted variance estimators, and pre-/post-experiment stratification.
  • Adjust the sample size to account for variance bias in ratio metrics using linearisation or dedicated variance-correction techniques.
  • Adjust the sample size to account for dimensionality reduction when applying bucketisation to high-cardinality data.
  • Adjust the sample size based on the choice of the most appropriate statistical test for each metric and distribution, considering the desired FPR and power: Student's t-test, Welch's t-test, Mann–Whitney U, z-test, proportion tests, bootstrap, or Poisson bootstrap.
  • Make additional sample-size corrections to account for multiple comparisons using Benjamini–Hochberg, Bonferroni, Holm–Bonferroni, or other False Discovery Rate (FDR) procedures.
  • Estimate the optimal holdout share to balance learning speed and revenue protection.
  • Model possible metric distributions and the bias-variance trade-off to refine sample-size estimates.
Implementation & Monitoring
  • Choosing effective roll-out strategies and minimising risks.
  • Making decisions based on MDE, sequential testing, and SRM checks, while avoiding any statistically significant degradation in metrics.
Advanced Analysis
  • Detect and correct for Sample Ratio Mismatch (SRM) using statistical tests and diagnostic procedures.
  • Apply variance-reduction techniques post-experiment (CUPED, CUPAC) to increase statistical power and reduce confidence intervals.
  • Handle multiple-metrics analysis with appropriate corrections for multiple comparisons and family-wise error-rate control.
  • Calculate practical significance alongside statistical significance using effect sizes, confidence intervals, and business-impact thresholds.
  • Perform robustness checks through sensitivity analysis, outlier detection, and alternative statistical approaches.
  • Run segment analyses to understand heterogeneous treatment effects across user groups, time periods, and other dimensions.
  • Estimate long-term effects using holdout cohorts or synthetic-control methods.
Business Decision Making
  • Generate actionable business recommendations with confidence levels, risk assessments, and implementation guidance.

Experimentation Methodology Research

Research which statistical tests work best for different metrics. Compare fixed-horizon, sequential, and Bayesian approaches.

Experiment Analysis Automation

Build an end-to-end pipeline that ingests data, applies statistical methods, and publishes results automatically.

Contextual Multi-Armed Bandits

Implement bandit algorithms, run simulations, and compare performance against traditional A/B testing.

A/B Platform Development

Create a feature-flag service that serves variants, logs events, and provides analytics endpoints for real-time experiment monitoring.

Questions You'll Be Ready to Answer

You'll be able to solve some of the most challenging industry questions—problems that even teams at top-tier companies struggle to address.

Experiment Design

Sample Size • Multiple Testing • FDR Procedures • Long-term Effects • Ratio Metrics • Variance Reduction • Factorial • Switchback • Crossover • Interleaving • Holdout Share • Contamination • Early Stopping
  • How do we choose the optimal sample-size determination method for a given metric, and what are its pros, cons, and trade-offs?
  • How can we seamlessly incorporate multiple-testing adjustments (e.g., FDR procedures) into our power and sample-size calculations?
  • How do we include the possibility of estimating long-term effects in the sample-size calculation—balancing FPR, power, and confidence in long-range predictions?
  • How do we model novelty- and resistance-curves and choose metric windows that keep FPR and power valid throughout the test?
  • How do we fold ratio-metric transformations into sample-size calculations while applying variance-reduction techniques, maintaining FPR, power, and an optimal bias-variance trade-off?
  • How can we expand our experimentation toolkit with complex designs—factorial, switchback, crossover, interleaving—and what constraints, benefits, and trade-offs do they introduce?
  • How do we choose the optimal holdout share to meet target FPR and power while avoiding unnecessary loss of experimentation capacity?
  • How do we minimise holdout-group contamination, and what practical strategies reduce exposure leakage or interference?
  • How can we monitor holdout drift in real time and correct it without restarting the entire experiment?
  • How do we set decision boundaries that balance speed with statistical power—and communicate early-stop calls to stakeholders?

Network Effects & Randomisation

Spill-overs • Network Effects • Geo Boundaries • Assignment Logic • Randomisation Unit
  • How can we detect and correct for spill-overs and network effects when parallel tests interact?
  • What strategies—such as trimming geo boundaries or adjusting assignment logic—best mitigate contamination of entities exposed to multiple treatments?
  • How do we select the optimal randomisation unit—user, session, product, order, geo, device, timeslot, or geo-timeslot—while balancing false-positive risk, power, bias-variance trade-offs, and interference?

Constraints & Workarounds

Legal Constraints • Pricing Constraints • Data Privacy • Manual Splits • Feature Flags
  • How do we design compliant experiments under legal or pricing constraints and strict data-privacy rules?
  • What workarounds enable high-quality testing with only a partial experimentation platform, or none at all: manual splits, basic feature flags, or spreadsheet workflows?

Variance Reduction

CUPED • CUPAC • Stratification • Re-weighting • Prior Experiments
  • How do we leverage results from prior experiments to tighten current estimates without inflating false-positive rates?
  • How can we maximise variance reduction for our key metrics—selecting and tuning CUPED, CUPAC, stratification, or weighted variance estimators for the greatest precision gain?

Long-Term Impact & Causal Inference

Long-term Forecasts • 6-12 Month Impact • Causal Inference • Ripple Effects • Validation
  • How do we convert short-term improvements into credible 6- or 12-month impact estimates that stakeholders will trust?
  • How do we use causal-inference toolkits to validate experiment results when pure randomisation breaks down or ripple effects surface months later?

Business Impact

ROI Measurement • Metric Selection • Leadership Communication • Decision Making • Culture Building • Stakeholder Alignment • Quality Frameworks
  • How do we foster an experimentation culture and raise the bar across different teams and departments?
  • How do we quantify and communicate the business value of experimentation programs?
  • How do we balance stakeholder urgency and business needs with statistical rigour?
  • How can we boost the number and speed of experiments without sacrificing validity?

What Does the Course Look Like?

Experience a comprehensive learning journey that combines theory, practice, and real-world application.

Theory

Interactive lessons with tailored explanations that adapt to your learning pace and background. Master the fundamentals of A/B testing with engaging content designed for practical understanding.

Interactive Theory Lessons

Practice

Hands-on coding exercises, quizzes, and direct work in an A/B-testing platform build practical skills. Apply what you learn immediately with real tools and scenarios.

Hands-on Practice Platform

Simulator

Real-world case studies drawn from our own industry projects, giving you authentic experience with actual business challenges and decision-making scenarios.

Real-world Case Simulator

Projects

End-to-end projects you'll complete and can add directly to your portfolio as real-world experience.

  • Advanced experiment end-to-end analysis
  • Experimentation methodology improvement research
  • Experiment analysis automation
  • Contextual multi-armed bandits project
  • A/B platform
Ready-to-use Projects

How Learning Works

Our curriculum is designed to give you both theoretical knowledge and practical experience that employers value.

Project-based curriculum

Apply your learning immediately to real-world scenarios and build a portfolio as you go.

Expert mentorship

Get guidance from industry professionals who have built A/B testing systems at top companies.

Practical exercises

Work with real datasets and tools used by data scientists and analysts in the field.

Powerful learning platform

Built on a robust LMS with practical exercises, simulations & progress tracking.

Learning Management System Preview

Optional 1-on-1 Consultation Pack

Get personalized guidance and accelerate your learning with direct access to industry experts.

What's Included

  • Unlimited chat support - Message your mentor any time for quick answers and guidance
  • 1-1 private calls - Schedule deep-dive sessions whenever you need them
  • Practical tips & shortcuts - Apply proven industry tricks immediately
  • Guided learning path - Customised check-ins and hand-picked exercises
  • Ongoing mentorship - Career advice, project feedback, interview prep

What Makes It Special

  • Industry professionals - Learn from mentors who've raised the bar, improved processes and methodologies, and built A/B testing systems at top companies
  • Personalized approach - Your learning path is customized based on your background, goals, and career aspirations
  • Fast-track your career - Get insider knowledge and shortcuts that typically take years to learn on your own

Ready to Start?

Join thousands of professionals who have transformed their careers with our A/B testing expertise.

Transform Your Career Today

Stop guessing. Start testing. Build the skills that top companies value most.

Start Learning Book a Call with Us

Why is it worth it

Why This Course Stands Out

Four reasons data-driven teams choose this program.

Complete End-to-End Projects

Work through the full experimentation cycle—from the optimal choice of metrics and design to rollout, analysis, decision-making, and ROI reporting—mirroring workflows used at Booking.com, Microsoft, Netflix, DoorDash, Wolt, Miro, and other top companies.

Methodologies With Proven ROI

Use the same experimentation methods that have already generated millions in additional revenue for top companies, then showcase these case studies on your CV or implement them directly at work.

Solutions to Industry Challenges

Master sample sizing under low traffic, balance bias–variance and FPR–power trade-offs, account for novelty effects, choose the right statistical test for any metric, and forecast long-term impact—issues most teams still struggle to solve.

Hands-On Best-Practice Training

Design optimal experiments using advanced modelling, apply variance reduction, run sequential monitoring, and present board-ready results—skills you can use immediately, not just theory.

Knowledge and materials from Top Companies

Learn from methodologies and insights developed at industry-leading organizations.

Upskill Your Team, Unlock Compounding ROI

Big-tech and independent studies show that experimentation increases KPIs

Booking.com
Conversion rates 2–3× higher than industry average
A company-wide testing culture delivers conversion rates 2–3× higher than the travel-industry average.
Microsoft
Hundreds of millions of dollars in new revenue annually
Its internal experimentation platform adds hundreds of millions of dollars in new revenue annually.
Uber
1,000+ live experiments at any moment
Keeps 1,000+ live experiments running at any time, protected by a universal holdout layer.
DoorDash
Tripled test velocity
Tripled test velocity, now running thousands of experiments each month.
Harvard Business Review
Well-designed experiments 10× more likely to produce business value
Harvard Business Review reports that well-designed experiments are 10× more likely to produce business value.
Market Research
A/B-testing tools: 14% annual growth → US $850M in 2024
Spend on A/B-testing tools is climbing 14% per year and will reach US $850 million in 2024, showing how fast companies are investing.

Upskill even one analyst and the gains cascade across every release, campaign, and pricing change.

Questions and Answers

Choose who pays

Flexible payment options for individuals, teams, and companies

Individual

Secure your spot with any major card or bank transfer.

Company

We send an L&D invoice to your employer—full amount or instalments.

Shared cost

Split the fee; your employer covers an agreed portion, you pay the rest.

Got Questions?

Everything you need to know about the course and billing

How long does it take to complete the course?

  • Most students complete the course in 6-8 weeks when dedicating 5-8 hours per week. The self-paced format allows you to go faster or slower based on your schedule and prior experience.

What level of statistics background do I need?

  • Basic understanding of statistics (mean, standard deviation, hypothesis testing) is recommended but not required. We provide refresher materials and build up from fundamentals to advanced concepts.

Can I get a refund if I'm not satisfied?

  • Yes! We offer a 30-day money-back guarantee. If you're not completely satisfied with the course within the first 30 days, we'll provide a full refund.

Do you provide certificates?

  • Yes, you'll receive a certificate of completion that you can add to your LinkedIn profile and resume. Our certificates are recognized by leading companies in the data science industry.

Is there ongoing support after the course?

  • Team and Enterprise plans include ongoing community access. Individual plans have 6 months of access to all materials and updates. You can always upgrade to get extended support.

When do I get access?

  • This program follows a weekly release ("drip") model. You’ll receive access to the first module as soon as your cohort begins. Each following module unlocks automatically once per week, starting from your cohort start date.

Still have questions?

Our team is here to help you choose the right plan and answer any questions you might have.

Start Learning Book a Call with Us