Podcast Episode 3: Matching and Causal Inference

We just released another belated episode of our Data Spanners podcast (with Courtney Paulson). In this episode, we host the inimitable Sean Taylor and talk about matching (and re-matching), causal inference, and challenges in modeling different types of data (including “sequence data”). It’s an episode we had a lot of fun recording, and I bet you’ll enjoy listening to it (Spotify only).

We touch on big data, optimization, the continued value of theory, System 1 and 2 loops, and modeling decisions in high-stakes vs. low-stakes problems. We also tackle tough questions like “What are the most important inputs to modeling data: the data itself, creativity, domain expertise, or algorithms?” I think we even mention AI at some point (pretty sure Sean brings it up!).

On a related note, but unrelated to the people involved in the making of this podcast episode, I’ll be posting some updates soon on our concept-in-progress “data centricity” and how assumptions play a critical but underappreciated role in modeling data and making models work. Stay tuned.

Source

Performance of large language models on counterfactual tasks

I came across a post by Melanie Mitchell summarizing their recent research on understanding the capabilities of large language models (GPT in particular). LLMs seem to do relatively well at basic analogy (zero-generalization) problems, performing about 20% worse than humans in their replication study. However, the latest and supposedly best LLMs continue to fail at counterfactual tasks (which require reasoning beyond the content available in the training set), performing about 50% worse than humans. This is another study showing that a fundamental prerequisite for causal understanding is missing from these language models:

When tested on our counterfactual tasks, the accuracy of humans stays relatively high, while the accuracy of the LLMs drops substantially. The plot above shows the average accuracy of humans (blue dots, with error bars) and the accuracies of the LLMs, on problems using alphabets with different numbers of permuted letters, and on symbol alphabets (“Symb”). While LLMs do relatively well on problems with the alphabet seen in their training data, their abilities decrease dramatically on problems that use a new “fictional” alphabet. Humans, however, are able to adapt their concepts to these novel situations. Another research study, by University of Washington’s Damian Hodel and Jevin West, found similar results.

Their paper concludes: “These results imply that GPT models are still lacking the kind of abstract reasoning needed for human-like fluid intelligence.”

The post also refers to studies with contradictory findings, but I agree with Mitchell’s comment about what counterfactual (abstract) thinking means, which is also why the results above make sense:

I disagree that the letter-string problems with permuted alphabets “require that letters be converted into the corresponding indices.” I don’t believe that’s how humans solve them—you don’t have to figure out that, say, m is the 5th letter and p is the 16th letter to solve the problem I gave as an example above. You just have to understand general abstract concepts such as successorship and predecessorship, and what these mean in the context of the permuted alphabet. Indeed, this was the point of the counterfactual tasks—to test this general abstract understanding.
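If you haven’t seen these letter-string problems before, here is a minimal Python sketch of what the counterfactual version looks like. This is my own illustration, not code from the study, and the permuted alphabet below is an arbitrary ordering I made up; the point is that the same abstract successorship rule applies whether the alphabet is the familiar one or a fictional one.

```python
# Hypothetical sketch of a counterfactual letter-string analogy task
# (not the evaluation code from the paper): the "rule" is abstract
# successorship, and it carries over unchanged to a permuted alphabet.

NORMAL = list("abcdefghijklmnopqrstuvwxyz")
# A fictional, permuted alphabet (ordering chosen arbitrarily for illustration).
PERMUTED = list("dwnqhymzlbcxjvfgkepsotairu")

def successor(letter: str, alphabet: list[str]) -> str:
    """Return the next letter in the given alphabet's ordering."""
    return alphabet[alphabet.index(letter) + 1]

def apply_rule(source: str, alphabet: list[str]) -> str:
    """The classic 'abc -> abd' rule: replace the last letter with its successor."""
    *head, last = source.split()
    return " ".join(head + [successor(last, alphabet)])

# Same abstract rule, two different "worlds":
print(apply_rule("i j k", NORMAL))    # i j l
print(apply_rule("l b c", PERMUTED))  # l b x  (the successor of c in the permuted order)
```

Solving the second case doesn’t require knowing where c sits in the permuted ordering as an index; it only requires grasping “the letter that comes next,” which is exactly the abstraction the counterfactual tasks probe.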

Source

Business models for Gen AI

You’ll get the semicolon joke only if you’ve coded in Java or the C family.

Here’s the part that’s not a joke:

Venture capital firm Sequoia estimates that in 2023, the AI industry spent $50 billion on the Nvidia chips used to train generative AI models but generated only $3 billion in revenue. My understanding is that the spending figure covers only the chips and doesn’t even include the rest of the costs.

It will take a serious creative leap to find a business model that closes the gap between cost and value. Until business use cases move beyond better search and answer generation, the gap will keep widening: the willingness to pay for existing services appears nowhere near the cost of development, and user-base growth for LLMs has already stalled.

Credit for the Venn goes to Forrest Brazeal.

The AI revolution is already losing steam

These models work by digesting huge volumes of text, and it’s undeniable that up to now, simply adding more has led to better capabilities. But a major barrier to continuing down this path is that companies have already trained their AIs on more or less the entire internet, and are running out of additional data to hoover up. There aren’t 10 more internets’ worth of human-generated content for today’s AIs to inhale.

We need a difference in kind, not in degree. Existing language models are incapable of learning cause-effect relationships, and adding more data won’t change that.

Source

Reducing “understanding” to the ability to create a map of associations

Reducing “understanding” to the ability to create a map of associations (even a highly successful map) is not helpful for business use cases. This leads to the illusion that existing large language models can “understand”.

The first image is an excerpt from the latest Anthropic article claiming that LLMs can understand (an otherwise very useful article, here). OpenAI also often refers to AGI or strong AI in its product releases.

The following screenshots from Reddit offer one of many illustrations of why such a reductionist approach is neither accurate nor helpful. Without the ability to map causal relationships, knowledge doesn’t translate into understanding.

We will have the best business use cases for LLMs only if we define the capabilities of these models correctly. Let’s say a business analyst wants to take a quick look at some sales numbers in an exploratory analysis. They would interact with an LLM very differently if they were told that the model understands versus just knows more (and potentially better).

Student’s t-test during a study leave at Guinness

You may or may not know that the Student’s t-test was named after William Sealy Gosset, head experimental brewer at Guinness, who published under the pseudonym “Student”. This is because Guinness preferred its employees to use pseudonyms when publishing scientific papers.

The part I didn’t know is that Guinness had a policy of granting study leave to technical staff, and Gosset took advantage of this during the first two terms of the 1906-1907 academic year. This sounds like a great idea to encourage boundary spanning.

This article is a very nice account of the story, with great visuals (which will definitely make it into the beer preference example in my predictive analytics course).

Source

Mind the AI Gap: Understanding vs. Knowing

Next week I will be speaking at the 2024 International Business Pedagogy Workshop on the use of AI in education. This gave me the opportunity to put together some ideas about using LLMs as learning tools.

Some key points:
– Knowing is not the same as understanding, but the two are easily confused.
– Understanding requires the ability to reason and identify causality (using counterfactual thinking).
– This is completely lacking in LLMs at the moment.
– Framing LLMs as magical thinking machines or creative minds is not helpful because it can easily mislead us into lending them our cognition.
– The best way to benefit from LLMs is to recognize them for what they are: Master memorization models.
– Their strength lies in sheer processing power and capacity, which can make them excellent learning companions when used properly.
– How can LLMs best be used as learning companions? That’ll be part of our discussion.

Source

Are the two images the same?

To humans, the answer is undoubtedly yes. To algorithms, they could easily be two completely different images, or at least images with very different characteristics. The image on the right is the “glazed” version of the original image on the left.

Glaze is a product of the SAND Lab at the University of Chicago that helps artists protect their art from generative AI companies. Glaze adds noise to artwork that is invisible to the human eye but misleading to the algorithm.

Glaze is free to use, but understandably not open source, so as not to give art thieves an advantage in adaptive responses in this cat-and-mouse game.

The idea is similar to the adversarial attack famously illustrated in Goodfellow et al. (2015), where an image classified as a panda with modest confidence becomes, to the algorithm, a gibbon with near-certainty after adversarial noise is added to the image.
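For the technically curious, here is a minimal sketch of that FGSM-style perturbation in Python/PyTorch, using a toy classifier and a random image as stand-ins (Glaze’s actual method is different and not public; this only illustrates the underlying idea of nudging pixels along the loss gradient):

```python
# Minimal FGSM-style sketch (after Goodfellow et al., 2015), using a toy
# classifier and a random "image". Not Glaze's method; illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for an image classifier (the famous example used a large ImageNet model).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in "panda"
true_label = torch.tensor([0])

# Gradient of the loss with respect to the input pixels.
loss = loss_fn(model(image), true_label)
loss.backward()

# FGSM: step each pixel slightly in the direction that increases the loss.
epsilon = 0.01  # small enough to be (nearly) invisible to humans
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

With this toy model the prediction may or may not flip, but against a real image classifier the same kind of imperceptible, gradient-guided noise is what turns the panda into a “sure” gibbon.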

I heard about this cool and useful project a while ago and have been meaning to help spread the word. In the words of the researchers:

Glaze is a system designed to protect human artists by disrupting style mimicry. At a high level, Glaze works by understanding the AI models that are training on human art, and using machine learning algorithms, computing a set of minimal changes to artworks, such that it appears unchanged to human eyes, but appears to AI models like a dramatically different art style. For example, human eyes might find a glazed charcoal portrait with a realism style to be unchanged, but an AI model might see the glazed version as a modern abstract style, a la Jackson Pollock. So when someone then prompts the model to generate art mimicking the charcoal artist, they will get something quite different from what they expected.

The sample artwork is by Jingna Zhang.

Source

Hardest problem in Computer Science: Centering things

This is a must-read/see article full of joy (and pain) for visually obsessed people. It’s a tribute to symmetry and a rebuke to non-random, unexplained errors in achieving it.

Centering is more than a computer science problem. We struggle with centering all the time, from hanging frames on the wall to landscaping. In another world, centering is also central to data science, as in standardized scores and other rescaling operations. Centering gives us a baseline against which to compare everything else. Our brains love this symmetry (as explained here and elsewhere).
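To make the data-science sense of centering concrete, here is a tiny Python sketch with made-up numbers: subtracting the mean puts every value on the same baseline, and dividing by the standard deviation turns them into standardized scores.

```python
# A tiny illustration of centering in the data-science sense: standardized
# (z-) scores subtract the mean so every value is compared to the same baseline.
import numpy as np

values = np.array([120.0, 95.0, 130.0, 160.0, 110.0])  # made-up numbers

centered = values - values.mean()          # the mean becomes the zero point
z_scores = centered / values.std(ddof=0)   # rescale to unit variance

print(centered)   # sums to (approximately) zero
print(z_scores)   # each value expressed in standard deviations from the mean
```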

Source