In this latest post at Data Duets, we discuss SHAP and SAGE, two explainable AI methods. I focus on what these methods actually do and how they work intuitively, their lesser-known but serious limitations, and how and why they relate to counterfactual thinking and causality.
If you’d rather skip the reading and listen to the article as a podcast discussion instead, here’s the link. But don’t skip the reading, because:
“Outside of a dog, a book is a man’s best friend. Inside of a dog, it’s too dark to read.” —Groucho Marx