Nihar V. Patel Is Rethinking How Marketplaces Measure What Actually Works

Marketplaces are not clean laboratories. On any large platform, a change rolled out to one group of users ripples across the entire system, affecting sellers who never saw the test, buyers on the other side of the city, and pricing algorithms that respond in real time. Standard experiments were never built for that kind of chaos. Nihar V. Patel has spent his career building methods that were.

Patel works as a product data scientist at a major two-sided marketplace, where he uses data and experimentation to guide product decisions across both the buyer and seller experience. His work spans A/B testing, causal analysis, and the construction of measurement frameworks for products with multiple downstream effects. What makes his position unusual is not just the scale of the platforms he has worked on; it is the body of research he has produced alongside that work. Patel has written six papers on causal inference, fairness, and experimentation in two-sided marketplaces: two have already appeared in Scopus-indexed journals, and four are in the pipeline.

His path to this work is worth noting. Patel began as a petroleum engineering student before pursuing a Master’s in Engineering Management with a concentration in product and data science. He later completed a nano-degree in self-driving car AI engineering. The arc is not as unusual as it sounds. What petroleum engineering and marketplace data science share is a preoccupation with systems: how pressure changes in one part of a network travel unpredictably through the whole. That intuition has proved useful.

When A/B Tests Lie

The most dangerous moment in a product launch is when the A/B test returns a win that is wrong. Standard A/B testing assumes that the treatment and control groups remain cleanly separated. On a two-sided marketplace, that assumption collapses almost immediately. A promotion offered to buyers in one city can pull supply away from buyers in another. A seller incentive can flood the platform with extra drivers or listings, depressing returns for everyone. The “control” is no longer a control; it has been contaminated by spillover effects that the experiment never accounted for.

Patel’s research directly addresses this problem. His paper on propensity frameworks for marketplace selection bias redefines the unit of analysis. Rather than treating individual buyers or sellers as recipients of an intervention, it models exposure at the level of the buyer-seller interaction: the edge on the network graph.

The paper proposes a two-stage propensity score matching method that separately estimates buyer intent and seller reach, then enforces overlap across both dimensions before drawing any causal conclusions. The result is a method that can distinguish genuine causal lift from selection artifacts in ranking changes, targeting rules, and premium placements, without requiring a randomized experiment that might harm revenue or user experience during testing.
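
The article does not reproduce the estimator itself, but the shape of the idea can be sketched. In the sketch below, inverse-propensity weighting stands in for the paper’s matching step, and every name (the "exposed" flag, the "converted" outcome, the feature lists) is a hypothetical placeholder:

```python
# Minimal sketch of edge-level causal lift with two separate propensity
# models, one per market side. Weighting stands in for the paper's
# matching step; all column names here are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def edge_level_lift(edges: pd.DataFrame,
                    buyer_cols: list[str],
                    seller_cols: list[str]) -> float:
    """edges: one row per buyer-seller interaction, with a binary
    'exposed' flag and a binary 'converted' outcome."""
    exposed = edges["exposed"].to_numpy()

    # Stage 1: buyer-intent propensity. Stage 2: seller-reach propensity.
    p_buyer = LogisticRegression(max_iter=1000).fit(
        edges[buyer_cols], exposed).predict_proba(edges[buyer_cols])[:, 1]
    p_seller = LogisticRegression(max_iter=1000).fit(
        edges[seller_cols], exposed).predict_proba(edges[seller_cols])[:, 1]

    # Enforce overlap on BOTH dimensions: keep only edges inside the
    # common support of exposed and unexposed propensities.
    keep = np.ones(len(edges), dtype=bool)
    for p in (p_buyer, p_seller):
        lo = max(p[exposed == 1].min(), p[exposed == 0].min())
        hi = min(p[exposed == 1].max(), p[exposed == 0].max())
        keep &= (p >= lo) & (p <= hi)

    # Treat the product of the two scores as the edge's exposure
    # probability (a simplifying assumption), then weight outcomes.
    p_edge = (p_buyer * p_seller)[keep]
    t = exposed[keep].astype(bool)
    y = edges["converted"].to_numpy()[keep]
    w = np.where(t, 1.0 / p_edge, 1.0 / (1.0 - p_edge))
    return np.average(y[t], weights=w[t]) - np.average(y[~t], weights=w[~t])
```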

A separate paper on pricing interventions in two-sided marketplaces goes further. Drawing on survey data from roughly 300 buyers and sellers, public pricing information, and time-based proxies, the research finds that a 10% price increase correlates with approximately a 6% drop in orders and a 5-percentage-point increase in switching to a competing app. Neighboring areas experiencing simultaneous price surges amplify those losses by an additional 2%. The paper gives pricing and operations teams a practical method for measuring these spillover effects when randomized tests are either impractical or incomplete.
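
Read as rough elasticities, those estimates are straightforward to apply. A worked example on an invented baseline, treating the additional 2% as two more percentage points of lost orders (the 10,000-order volume is an assumption for illustration, not a figure from the study):

```python
# Worked example of the paper's headline estimates on an invented
# 10,000-order baseline; the percentages are the article's, the volume is not.
baseline_orders = 10_000
drop_isolated = 0.06          # ~6% order loss for a 10% price increase
extra_from_neighbors = 0.02   # spillover when neighboring areas also surge
switch_increase_pts = 0.05    # 5-point rise in switching to a competitor

print(baseline_orders * (1 - drop_isolated))                         # 9400.0
print(baseline_orders * (1 - drop_isolated - extra_from_neighbors))  # 9200.0
```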

Fairness As An Operational Problem

Algorithmic bias in marketplace systems is often discussed as a values question. Patel treats it as a measurement and governance problem, which is where it actually has to be solved in practice. His life-cycle framework for algorithmic bias covers eight stages of a marketplace product’s existence, from initial data collection through model training, ranking, and long-term governance. The paper argues that fairness cannot be assessed at a single checkpoint; it accumulates and warps across the entire system over time. 

A recommendation algorithm that appears balanced at launch can drift toward amplifying already-popular sellers simply because the feedback loop rewards early signals. The paper proposes counterfactual log replay for safe offline testing, group-aware ranking models, and fairness service-level objectives that product teams can monitor alongside revenue metrics.
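
A fairness service-level objective can be as mechanical as an alert on exposure share. A minimal sketch, assuming hypothetical segment labels and a 15% floor (neither value comes from the paper):

```python
# Sketch of a fairness SLO check: flag any seller segment whose share of
# served impressions falls below a floor. Segment labels and the 15%
# floor are illustrative assumptions, not values from Patel's paper.
from collections import Counter

FLOOR = 0.15  # minimum acceptable impression share per tracked segment

def check_exposure_slo(impression_segments: list[str]) -> dict[str, float]:
    """impression_segments: one segment label per ranked impression served."""
    counts = Counter(impression_segments)
    total = sum(counts.values())
    shares = {seg: n / total for seg, n in counts.items()}
    for seg, share in sorted(shares.items()):
        if share < FLOOR:
            print(f"SLO breach: {seg} at {share:.1%} (floor {FLOOR:.0%})")
    return shares

# Seven impressions to established sellers, one to a new seller: the new
# segment sits at 12.5% and trips the alert.
check_exposure_slo(["established"] * 7 + ["new_seller"])
```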

The impact of this work shows up in measurable outcomes. Through structured experimentation and financial modeling across marketplace roles, Patel has identified more than $60 million in business growth opportunities. Seller and buyer engagement on products he worked on climbed from 8% to 19%. On the advertising side of a separate marketplace platform, similar methods produced a 20% improvement in return on ad spend. The numbers are large, but the mechanism behind them is precise: find where the measurement is wrong, fix the measurement, and the right decision tends to follow.

Patel’s sixth paper, on adaptive experimentation, addresses what comes after fixed A/B tests are replaced by multi-armed bandit systems that learn in real time. The challenge is that learning systems can quietly become unfair, optimizing for high-converting user segments while starving others of traffic. The paper proposes ways to wire fairness constraints and causal validity directly into the bandit architecture, so that the speed of adaptive learning does not come at the cost of interpretability or equity.
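
The article does not detail that architecture, but the core constraint, guaranteeing every segment a minimum share of traffic, can be illustrated with an epsilon-greedy allocator. The floor, exploration rate, and arm names below are invented for the sketch:

```python
# Illustrative epsilon-greedy bandit with a per-arm exposure floor, so a
# low-converting arm is never starved of traffic entirely. The floor and
# epsilon values are invented; the paper's actual design may differ.
import random

class FairBandit:
    def __init__(self, arms: list[str], floor: float = 0.05, epsilon: float = 0.1):
        self.arms = list(arms)
        self.floor = floor        # minimum share of pulls per arm
        self.epsilon = epsilon    # exploration rate
        self.pulls = {a: 0 for a in self.arms}
        self.rewards = {a: 0.0 for a in self.arms}

    def select(self) -> str:
        total = sum(self.pulls.values()) or 1
        # Fairness constraint first: serve any arm below its exposure floor.
        starved = [a for a in self.arms if self.pulls[a] / total < self.floor]
        if starved:
            return random.choice(starved)
        if random.random() < self.epsilon:  # ordinary exploration
            return random.choice(self.arms)
        # Exploit: every arm has pulls > 0 here, so the mean is defined.
        return max(self.arms, key=lambda a: self.rewards[a] / self.pulls[a])

    def update(self, arm: str, reward: float) -> None:
        self.pulls[arm] += 1
        self.rewards[arm] += reward
```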

Across all six papers, the throughline is the same: marketplaces are dynamic, interdependent systems, and the methods used to study them need to account for that. Patel is methodically building those methods, one framework at a time.
