David Bruns-Smith’s Journey Through High-Speed Code, Data Science, Causal Inference, and Economic Insight
At first glance, it might seem like a straight path: a computer whiz discovers a love for fast algorithms, studies computer science, and dives headfirst into high-performance computing. But for data science postdoctoral fellow David Bruns-Smith, the path took an unexpected—and ultimately transformative—turn from speed-focused code to impact-driven economic research.
David’s story is about discovering purpose through data. It's about learning when to follow performance benchmarks and when to ask, “What does this actually mean for people’s lives?” And at its core, it's about how interdisciplinary science, when rooted in curiosity and courage, can change not just research directions, but entire fields.
Early Curiosity Meets Applied Data
“I was pretty sure I wanted to be a computer scientist pretty early on,” David recalls. “I was always playing with computers.”
That early passion led him to a computer science degree, but from the start his work was surprisingly applied: he dove into data-intensive projects, including a computational neuroscience lab that exposed him to large-scale data analysis and modeling.
David’s first job after college, at a small research-focused company, was his first real encounter with data science, though the term wasn’t even used at the time. He was given a choice of research domains: network security or Flickr’s social network data. “They were writing fast implementations of new algorithms,” he says. “And I was supposed to show they did something.”
It was high-performance computing with a purpose. But soon, he made what he now calls “a horrible mistake.”
The Fast Lane That Fell Flat
Energized by the challenge of optimizing algorithms, he decided to pursue a PhD in computer architecture, focusing on accelerating gene sequencing. “It was really fun to make the computer run fast,” he says. “But I wasn’t getting to do any genetics. I was solving puzzles without meaning.”
That disconnect between technical mastery and meaningful application led to a painful realization: he was deeply unfulfilled.
It wasn’t until he pivoted from implementation-focused research to mission-driven inquiry that things began to change. “I realized I wanted to start with a problem I cared about, and then figure out what statistical tools I needed to solve it.”
Leap of Faith into Economics
That realization led him to leave his PhD lab and his funding. He took a chance on a fellowship in labor economics and machine learning, made possible by pure serendipity: he happened to be taking a poverty and inequality class taught by someone connected to a special interdisciplinary fund. “It was pure luck,” he admits. “They really wanted a computer scientist.”
That single connection opened the door to a new world. He sent cold emails to economists looking for mentorship—most went unanswered, but one reply changed David’s trajectory again. “She [Emi Nakamura at UC Berkeley] was the most amazing advisor I could have imagined,” he says. “I basically learned economics from her and her collaborator [Jon Steinsson], and they took me under their wing.”
“Emi is just amazing,” reflects David. “I can't say enough great things about her. In many ways, I won the lottery twice, because by attending the causal inference research group, I also met my second advisor, Avi Feller (also at UC Berkeley), who taught me causal inference.”
Discovering Causal Inference
David’s work began focusing on causal inference, a field that sits at the crossroads of statistics, policy, and machine learning. “Machine learning is great at prediction,” he explains. “But public policy isn’t about prediction. It’s about intervention. You want to know what happens if you change something.”
That’s where causal inference comes in. It's the study of cause and effect, trying to understand the impact of actions in the real world. Whether it’s the Federal Reserve changing interest rates or a household receiving a stimulus check, prediction alone isn’t enough. “We need to ask, if I do X, what will happen to Y? And that’s really hard, because people don’t randomly get assigned policies.”
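The gap between prediction and intervention can be made concrete with a toy simulation. The scenario below is entirely hypothetical: an unobserved confounder drives both a “policy” variable and an outcome, so the naive correlation between them overstates the true causal effect.

```python
import numpy as np

# Hypothetical illustration: a confounder drives both the policy X and the
# outcome Y, so the observed association between X and Y is not causal.
rng = np.random.default_rng(0)
n = 100_000

confounder = rng.normal(size=n)
x = 0.8 * confounder + rng.normal(size=n)             # policy responds to conditions
y = 0.5 * x + 1.0 * confounder + rng.normal(size=n)   # true causal effect of x is 0.5

# Naive regression slope (pure prediction): biased upward by the confounder.
naive_slope = np.cov(x, y)[0, 1] / np.var(x)

# Adjusting for the confounder recovers something close to the causal 0.5.
X = np.column_stack([x, confounder, np.ones(n)])
adjusted_slope = np.linalg.lstsq(X, y, rcond=None)[0][0]

print(f"naive: {naive_slope:.2f}, adjusted: {adjusted_slope:.2f}")
```

A predictive model would happily use the naive slope, which is fine for forecasting but misleading for deciding whether to change X. In real policy settings the confounder is usually unobserved, which is exactly why specialized causal methods are needed.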
His work centers on understanding these kinds of systems using statistical techniques and adapting machine learning to help when randomized experiments aren’t feasible.
Building Tools That Matter
David’s main research at Stanford has focused on making causal inference methods work better with machine learning. One powerful approach, instrumental variables, is commonly used to estimate causal effects when experiments aren’t possible.
A famous example involves school enrollment deadlines: two students born just days apart can end up with a year’s difference in schooling because of cutoff dates. Since birth timing is essentially random, comparing their later incomes helps isolate the causal effect of that extra year of education.
But existing approaches can break in surprising ways when classical statistical models are replaced with machine learning. David’s breakthrough: a simple, elegant mathematical trick that makes it possible to use your favorite ML algorithms within this complex framework. “I just wanted a method that I could run myself,” he says. “And now I can, and it works.”
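For intuition, here is the classical linear version of the two-stage idea, known as two-stage least squares (2SLS), on simulated data. All numbers and variable roles are hypothetical; David’s paper is about generalizing this kind of two-stage recipe so that nonparametric ML models can be used safely in each stage, which plain 2SLS does not do.

```python
import numpy as np

# Toy two-stage least squares (2SLS) on simulated data.
rng = np.random.default_rng(1)
n = 50_000

z = rng.normal(size=n)                 # instrument (e.g., a birth-date cutoff)
u = rng.normal(size=n)                 # unobserved confounder
x = 1.0 * z + u + rng.normal(size=n)   # treatment (e.g., years of schooling)
y = 2.0 * x + u + rng.normal(size=n)   # outcome; true causal effect is 2.0

# Stage 1: regress the treatment on the instrument, keep fitted values.
Z = np.column_stack([z, np.ones(n)])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Stage 2: regress the outcome on the fitted treatment values.
X_hat = np.column_stack([x_hat, np.ones(n)])
causal_effect = np.linalg.lstsq(X_hat, y, rcond=None)[0][0]

print(f"2SLS estimate: {causal_effect:.2f}")
```

The first stage strips out the confounded variation in the treatment, keeping only the part driven by the instrument; the second stage then measures how the outcome responds to that clean variation, recovering an estimate near the true effect of 2.0.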
His technique is the focus of his latest paper: Two-Stage Machine Learning for Nonparametric Instrumental Variable Regression.
From Fragrance Chemistry to Economic Policy
When David is not conducting in-depth statistical research, he experiments with fragrances in his home lab. “I love making fragrances,” he says. It’s part chemistry, part puzzle. He even built a Jupyter notebook that helps reverse-engineer the natural components in perfumes using machine learning and gas chromatography data.
“Limonene shows up in orange, lemon, bergamot... so how do you know which was used in a perfume?” he explains with a grin. “You look at the trace chemicals. Maybe there’s an aldehyde that only shows up in grapefruit.”
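The trace-chemical reasoning can be sketched as a simple nearest-profile match. Everything below is a hypothetical illustration, not real gas chromatography data or David’s actual notebook: made-up component fractions for a few citrus oils, and a sample whose minor markers point to one source.

```python
# Toy sketch of the trace-chemical idea: limonene alone can't distinguish
# citrus oils, but minor marker compounds can. All profiles and numbers
# here are hypothetical, not real GC measurements.
reference_oils = {
    "orange":     {"limonene": 0.95, "octanal": 0.003},
    "lemon":      {"limonene": 0.70, "citral": 0.02},
    "grapefruit": {"limonene": 0.90, "nootkatone": 0.004, "octanal": 0.001},
}

def best_match(sample, references):
    """Return the reference oil with the smallest squared distance
    to the sample over the union of measured chemicals."""
    def distance(profile):
        keys = set(sample) | set(profile)
        return sum((sample.get(k, 0.0) - profile.get(k, 0.0)) ** 2 for k in keys)
    return min(references, key=lambda name: distance(references[name]))

# A sample dominated by limonene but carrying a trace of nootkatone
# matches grapefruit rather than orange or lemon.
sample = {"limonene": 0.91, "nootkatone": 0.0035}
print(best_match(sample, reference_oils))  # → grapefruit
```

A real analysis would work from full chromatograms with many more compounds and noise, but the principle is the same: the major component narrows the family, and the trace markers identify the member.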
David’s idea of fun is blending creativity, chemistry, and machine learning. And somehow, it all fits.
Advice for Aspiring Data Scientists
His advice to anyone pursuing data science, especially across fields, is simple but bold: “You have to be an expert in both fields.”
“It's very powerful to be a computer scientist collaborating with economists if you deeply understand their world, and vice versa. This leads to uncovering surprising, deep connections between the fields.”
And as he prepares for the next stage, applying for faculty positions, he dreams of building his own lab to explore the intersection of machine learning and macroeconomic policy. “It’s about being bigger than myself,” he says. “Working in a group that can push the science forward, and mentoring the next generation.”
From building lightning-fast algorithms to creating tools that help us understand economic inequality, David’s journey shows how data science can be both deeply technical and profoundly human.
Key Links & Recent Papers
- David’s website: https://brunssmith.com/
- Two-Stage Machine Learning for Nonparametric Instrumental Variable Regression
- Augmented Balancing Weights as Linear Regression, Journal of the Royal Statistical Society Series B: Statistical Methodology, 2025. Presented as a keynote at the RSS Annual Conference in Edinburgh (September 2025).
- Multi-accurate Estimators Can Be Simultaneously Robust and Efficient, to appear at NeurIPS 2025.