About Me
I study the human supply chain of Artificial Intelligence.
Hello! I am a fifth-year PhD Candidate at the UCLA Anderson School of Management, advised by Charles Corbett and Auyon Siddiq.
My research focuses on the human infrastructure behind AI. Artificial intelligence relies on invisible labor to label images, annotate text, and teach algorithms right from wrong. I study crowdwork platforms, the digital assembly lines where this work happens, using econometric methods to understand fairness, task estimation, and worker retention in these marketplaces.
Before joining UCLA, I was the Head of Advanced Analytics at ACHS in Chile and a management consultant at Oliver Wyman. I hold an MBA from MIT Sloan. I expect to graduate in June 2027.
Beyond the research and the data, my life is centered on my family. I am incredibly fortunate to walk this path with my wife, Andrea, and our four children: Carlota, Martín, Joaquín, and Juan Pablo. They are my greatest joy, my daily chaos, and my most important responsibility.
Recent Activity
- May 2026 Scheduled to present "First Impressions Matter" at the POMS Annual Conference in Reno, NV.
- Nov 2025 Submitted "Searching for Serendipity" to Strategic Management Journal. Preprint available.
- Nov 2025 Submitted "The Impact of Information Systems on Experts' Decisions" to American Economic Journal: Applied Economics.
- Jan 2025 Published "Fairness in Crowdwork" in Business Horizons.
Research
First Impressions Matter: Task Frictions and Retention
Crowdwork platforms are central to the data supply chains that power AI systems and sustain large-scale academic and commercial research. Yet worker engagement dynamics remain poorly understood. We study how early procedural frictions shape retention using approximately 64 million task submissions from Prolific. Focusing on new workers’ first fourteen days, we examine two frictions: task returns, reflecting study design failures, and duration underestimation, reflecting inaccurate platform information. Using instrumental variables to address endogeneity, we find that a 10 percentage point increase in returned tasks reduces retention by 4.4 percentage points, while the same increase in underestimated tasks reduces retention by 15.4 percentage points. Both frictions operate primarily through attrition rather than demotivation among surviving workers. These findings suggest that process reliability and information accuracy are critical levers for sustaining the engaged workforce that platforms need to function.
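The instrumental-variables strategy mentioned above can be illustrated with a minimal two-stage least squares sketch on simulated data. The variable names, coefficients, and data below are hypothetical, chosen only to show why naive OLS is biased when an unobserved confounder drives both the friction rate and retention, and how an instrument recovers the causal effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical setup: an instrument z shifts the friction rate x,
# which is also driven by an unobserved confounder u that affects retention y.
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.5 * z + 0.8 * u + rng.normal(size=n)   # endogenous regressor
y = -0.4 * x + 1.2 * u + rng.normal(size=n)  # true causal effect: -0.4

# Naive OLS is biased because u enters both x and y.
X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# 2SLS: first stage regresses x on z; second stage uses fitted values.
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
X_hat = np.column_stack([np.ones(n), x_hat])
beta_2sls = np.linalg.lstsq(X_hat, y, rcond=None)[0]

print(f"OLS slope:  {beta_ols[1]:.2f}")   # biased upward by the confounder
print(f"2SLS slope: {beta_2sls[1]:.2f}")  # close to the true -0.4
```

The paper's actual specification (instruments, controls, and sample construction) is richer than this toy example; the sketch only illustrates the identification logic.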
Fairness in Crowdwork: Making the Human AI Supply Chain More Humane
The vast quantities of data required to build artificial intelligence (AI) technologies are often annotated and processed manually, making human labor a critical component of the AI supply chain. The workers who input this data are sourced through digital labor (crowdwork) platforms that are often unregulated and offer low wages, raising concerns about labor standards in AI development. Using the results of a survey, this article sheds light on the experiences and perceptions of fair treatment among workers in the AI supply chain. The study reveals significant variability in workers’ experiences, identifies potential drivers of fairness, and highlights how design choices by labor platforms can significantly affect worker welfare. Drawing on lessons from physical supply chains, this article offers practical guidance to managers on how to enhance worker welfare within the AI supply chain and how to ensure that AI technologies are responsibly sourced.
Searching for Serendipity
Serendipity would seem to preclude purposeful search. To understand the relationship between search behavior and the probability of a serendipitous discovery, we propose a formal modeling framework based on the NK model. Our simulations suggest that searchers with a clear hypothesis convert good fortune into fitness or value far more effectively. Searchers who pursue their theories through incremental steps rather than long-distance jumps have the highest rates of serendipitous success. Our model also provides insight into the roles of bias, inaccurate beliefs, and the ruggedness of the search terrain in determining the rate of serendipitous discovery. Our results demonstrate the distinct roles of intent and information in finding novel, valuable discoveries.
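The NK framework underlying this model can be sketched in a few lines. The landscape below contrasts incremental one-bit search with long-distance jumps; the parameter values and search rules are illustrative assumptions, not the paper's exact specification:

```python
import random

# Toy NK landscape: N loci, each contributing fitness that depends on its
# own state and K neighbors (parameters are illustrative, not the paper's).
N, K = 10, 2
random.seed(0)

contrib = {}  # lazily cached fitness contributions per (locus, local state)

def fitness(genome):
    total = 0.0
    for i in range(N):
        key = (i, tuple(genome[(i + j) % N] for j in range(K + 1)))
        if key not in contrib:
            contrib[key] = random.random()
        total += contrib[key]
    return total / N

def local_search(genome, steps=200):
    """Incremental search: flip one bit at a time, keep improvements."""
    best = fitness(genome)
    for _ in range(steps):
        cand = genome.copy()
        cand[random.randrange(N)] ^= 1
        f = fitness(cand)
        if f > best:
            genome, best = cand, f
    return best

def long_jump_search(genome, steps=200):
    """Distant search: draw a fresh random genome each step."""
    best = fitness(genome)
    for _ in range(steps):
        cand = [random.randint(0, 1) for _ in range(N)]
        f = fitness(cand)
        if f > best:
            best = f
    return best

start = [random.randint(0, 1) for _ in range(N)]
print("local search:", round(local_search(start.copy()), 3))
print("long jumps:  ", round(long_jump_search(start.copy()), 3))
```

Raising K makes the landscape more rugged, which is the lever the model uses to study how terrain shapes the odds of a serendipitous find.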
The Impact of Information Systems on Experts' Decisions
How do professionals respond to computerized, data-driven guidance in practice? We analyze a workers’ compensation insurance program where physicians make coverage and diagnosis decisions. We study the introduction of an automated system that flagged diagnoses with historically low coverage. We develop a model that yields testable predictions to distinguish between informational and persuasive effects. Consistent with persuasion, physicians granted coverage less often when confronted with alerts, but they also avoided alerts by recoding diagnoses. Data from secondary reviews show that the system aligned physicians’ decisions with management’s preferences. These findings provide lessons for the design of information systems for decision-makers.