You should not add 1 before log-transforming zeros. If you don’t believe me, listen to these two experts on how to make better decisions using log-transformed data.
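One concrete reason to avoid the log(x + 1) fix: unlike a plain log, it is not invariant to the unit of measurement, so your results depend on an arbitrary scaling choice. Here is a minimal sketch with made-up numbers (they are illustrative, not from the article): the same two revenues expressed in thousands of dollars versus dollars.

```python
import math

# Hypothetical data: the same two firms' revenues, in two different units.
small_k, large_k = 0.5, 10.0       # thousands of dollars
small, large = 500.0, 10_000.0     # dollars -- only the unit changed

# Plain log: the gap between the two firms is unit-free...
gap_log_k = math.log(large_k) - math.log(small_k)
gap_log = math.log(large) - math.log(small)

# ...but log(x + 1) makes the gap depend on the unit of measurement,
# because the "+ 1" means different things in different units.
gap_log1p_k = math.log1p(large_k) - math.log1p(small_k)
gap_log1p = math.log1p(large) - math.log1p(small)

print(round(gap_log_k, 3), round(gap_log, 3))      # 2.996 2.996 -- identical
print(round(gap_log1p_k, 3), round(gap_log1p, 3))  # 1.992 2.994 -- different
```

The same instability carries over to regression coefficients estimated on log(x + 1) outcomes: rescale the variable and the estimates move, which is one of the arguments the conversation digs into.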
This conversation was produced by NotebookLM based on our discussion about the Log of Zero problem at Data Duets. Duygu Dagli and I have now added a podcast-style conversation to each of our articles. All audio is raw/unedited.
The conversations are usually fun (sometimes for odd reasons). The model adds (1) examples that aren’t in our original content and (2) light banter and a few jokes. The examples are hit or miss.
So, besides the usual deep learning and reinforcement learning backend, what does NotebookLM actually do? (Based on Steven Johnson’s description on the Vergecast:)
- Start with a draft and revise it
- Generate a detailed script of the podcast
- Critique the script and create a revised version
- Add disfluencies (um, uh, like, you know, c-c-can, sssssee…) to sound convincingly human
- Apply Google’s latest text-to-speech Gemini model to add intonation, emphasis, and pacing
Have fun, and don’t add 1 to your variables before applying the log transformation.