We just released another belated episode of our Data Spanners podcast (with Courtney Paulson). In this episode, we host the inimitable Sean Taylor and talk about matching (and re-matching), causal inference, and challenges in modeling different types of data (including “sequence data”). It’s an episode we had a lot of fun recording, and I bet you’ll enjoy listening to it (Spotify only).
We touch on big data, optimization, the continued value of theory, System 1 and System 2 loops, and modeling decisions in high-stakes vs. low-stakes problems. We also tackle tough questions like "What are the most important inputs to modeling data: the data itself, creativity, domain expertise, or algorithms?" I think we even mention AI at some point (pretty sure Sean brings it up!).
On a related note (though unrelated to the people involved in making this podcast episode), I'll soon be posting some updates on our concept-in-progress, "data centricity," and how assumptions play a critical but underappreciated role in modeling data and making models work. Stay tuned.