What an A/B test is not

[Click title for image]

The founder of this Shark Tank-backed company (thinks he) ran an A/B test on the impact of tariffs on customer behavior (demand for a showerhead): “Made in USA” vs. “Made in Asia”.

There’s so much wrong here that I’m just going to share it without comment. But one thing is clear: Outside of tech and other companies that are invested in data science, we’re still in the early days of business analytics education. When it comes to causal modeling, inference, and experimental design, we seem to be just getting started.
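For contrast, here is a minimal sketch of what analyzing a properly randomized label test might look like. The numbers are made up for illustration, and the function is a textbook two-proportion z-test, not anything from the post.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates
    between two randomly assigned groups."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: purchases out of visitors per label variant
z, p = two_proportion_ztest(conv_a=120, n_a=2000, conv_b=90, n_b=2000)
```

The key ingredient the showerhead “test” lacks is randomization: without it, no test statistic turns an observed difference into a causal effect.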

Source

AI as a disguised customer service agent

[Click title for image]

This is ironic and offers a valuable lesson.

Cursor, an AI-powered integrated development environment (IDE), started kicking users out when they logged in from multiple machines.

I use Cursor daily, and I know how frustrating and disruptive this limitation can be for users.

Many Cursor users rushed to email the support team to ask if this was a new policy. In response, a support agent named “Sam” explained that this was “expected behavior” as part of a new security feature.

But in reality, there was no support team: “Sam” is a bot designed to “mimic human responses.” That answer, completely made up by the bot, quickly went viral, and users started canceling their subscriptions.

By the time Cursor’s “real humans” stepped in, the damage was done. Here on Reddit, Cursor is doing damage control.

Pretty remarkable that an AI company got hit by its own AI, and no one noticed until users canceled their subscriptions in droves.

And this could have been largely avoided if Cursor had disclosed that Sam was a bot.

Agent2Agent Protocol for LLMs

Google has just announced the Agent2Agent Protocol (A2A). A2A is open source and aims to enable AI agents to work together seamlessly, potentially multiplying productivity gains in end-to-end business processes.

As I understand it, A2A is to agent communication what MCP is to tool use. At the time, I saw MCP as an opportunity to reduce friction in agent deployment while maintaining a level of security (see here), and it has taken off since then. Google’s A2A seems to take it to the next level, providing more security in the cloud for multiple agents to communicate and collaborate:

A2A focuses on enabling agents to collaborate in their natural, unstructured modalities, even when they don’t share memory, tools and context. We are enabling true multi-agent scenarios without limiting an agent to a “tool.”
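To make the idea concrete, here is an illustrative sketch of one agent handing a task to another over a JSON-RPC-style channel. The field names and method string below are assumptions for the sketch, not the official A2A schema; see the linked documentation for the real spec.

```python
import json

# Hypothetical A2A-style task request: field names are illustrative
# assumptions, not taken from the official specification.
task_request = {
    "jsonrpc": "2.0",
    "id": "task-001",
    "method": "tasks/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{
                "type": "text",
                "text": "Summarize open invoices and hand off to the payments agent.",
            }],
        },
    },
}

payload = json.dumps(task_request)  # what one agent would POST to another
echoed = json.loads(payload)        # the receiving agent parses it back
```

The point of a shared protocol is exactly this round trip: both agents agree on the envelope, so neither has to wrap the other as a “tool.”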

Source · Documentation

Collapse of trust in digitized evidence

[Click title for image]

How much longer will we have non-zero trust in what we see on a computer screen?

Generative models are eroding trust in the digital world at an astonishing rate with each new model released. Soon, pictures and videos of events will no longer be accepted as evidence.

Insurance companies won’t accept pictures and videos of damage after accidents, and accounting departments will no longer accept pictures of receipts. This may be an easier problem to solve. We’ll likely develop more ways to authenticate digital files. More algorithms will verify authenticity, and companies may simply ask customers to use dedicated apps.

But the shift in public trust in digital files is less easily repairable and may even be permanent. We may be leaving behind “pics or it didn’t happen” for “I only believe what I physically see.”

No-code as a cure for understanding

[Click title for image]

Some tasks require understanding, not just knowing how to do them. Tools can’t fill the gaps in understanding. For these tasks, time is better spent learning and understanding. No-code development is useful for building without understanding, but understanding is most critical when things fail. And things fail while building products, be they data products or otherwise.

Here the user switches from Cursor (automated coding) to Bubble (a no-code tool) to address the lack of understanding, not realizing that switching tools is solving the wrong problem.

We often make the same mistake in data science, especially in predictive modeling, where a new off-the-shelf library or method is treated as a prophet (pun intended), only to find out later that it was solving the wrong problem.

Source

Coding vs. understanding the code

[Click title for image]

Doing is not understanding. Even LLMs seem to know the difference.

I’ve written and spoken a lot about this (link to the talk). Naturally, the exchange here was too good not to share. Here is Claude in Cursor lecturing a user on the difference between having something coded by an LLM vs. coding it yourself so you learn and understand.

The better we separate things we need to understand from things we just need to do, the more effectively we will benefit from LLMs. We certainly can’t understand everything (nor do we need to), but it’s a good idea to avoid the illusion of understanding just because we can do it.

To paraphrase Feynman, we can only understand the code we can create.

Sources of technological progress

[Click title for image]

If you woke up this morning running to the coffee pot even more aggressively because of the start of Daylight Saving Time, just remember that you’re not alone, and that’s how innovation and technological progress begin.

The world’s first webcam was invented in 1991 to monitor a coffee pot in a computer lab at the University of Cambridge, England:

To save people working in the building the disappointment of finding the coffee machine empty after making the trip to the room, a camera was set up providing a live picture of the coffee pot to all desktop computers on the office network. After the camera was connected to the Internet a few years later, the coffee pot gained international renown as a feature of the fledgling World Wide Web, until being retired in 2001.

See the Wiki here.

Deep, Deeper, Deepest Research

[Click title for image]

You must be Platinum, Diamond, or Elite Plus somewhere. Maybe Premier Plus?

Since LLM developers discovered the idea of using multiple models (or agents?) that interact with each other to produce richer output, we have seen another round of semantic reduction by overusing “deep” and “research” (as we did with “intelligence”, “thinking”, and “reasoning”).

In this post “The Differences between Deep Research, Deep Research, and Deep Research”, Han Lee tries to make sense of the deep research mania and offers a quadrant to classify different models.

Is the “depth” of research just the number of iterations in the search for information? That’s another story.

AI as a substitute or complement

This is a much-needed perspective on the new generation of tools in language modeling, object recognition, robotics, and others. The time and effort spent pitting algorithms against human intelligence is truly mind-boggling, when algorithms have been complementing us in so many tasks for decades. The new generation of tools simply offers more opportunities.

In data science, for example, humans excel at conceptual modeling of causal problems because they are creative and imaginative, and algorithms excel at complementing effect identification by collecting, structuring, computing, and optimizing high-dimensional, high-volume data in nonlinear, nonparametric space. Maybe we just need to get over the obsession with benchmarks that pit machine against human and create tests of complementarity.

Causal inference is not about methods

The price elasticity of demand doesn’t magically become causal by using DoubleML instead of regression. Similarly, we can’t estimate the causal effect of a treatment if a condition is always treated or never treated. We need to treat sometimes and not treat other times.
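The “treat sometimes, not others” requirement is the positivity (overlap) condition, and it can be checked in the data before any method is chosen. A minimal sketch, with made-up records and a hypothetical `segment` covariate:

```python
# Positivity (overlap) check: within each covariate stratum we need both
# treated and untreated units, or no estimator can identify the effect.
# The records below are invented for illustration.
records = [
    {"segment": "new",       "treated": 1},
    {"segment": "new",       "treated": 0},
    {"segment": "returning", "treated": 1},
    {"segment": "returning", "treated": 1},  # never untreated: no overlap
]

def overlap_by_stratum(rows):
    """Map each stratum to True iff it contains both treatment arms."""
    seen = {}
    for r in rows:
        seen.setdefault(r["segment"], set()).add(r["treated"])
    return {seg: arms == {0, 1} for seg, arms in seen.items()}

overlap = overlap_by_stratum(records)
```

No amount of DoubleML sophistication rescues the "returning" stratum here; the variation has to exist in the data first.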

Causal modeling starts with bespoke data and continues with assumptions. The methods follow the data and assumptions and are useful only if the right data and assumptions are available. This is different from predictive modeling, where brute force bias reduction using the most complex method can be successful.

We offer a reminder in this solo piece at Data Duets. You can read or listen (just scroll to the end).