“Medium is for human storytelling, not AI-generated writing.”

Medium appears to be the first major publishing platform to adopt a policy banning the monetization of articles written by AI, effective May 1, 2024.

Enforcing this policy will be a real challenge, and will likely require human moderators to win what would otherwise be a cat-and-mouse game. This is another area where AI may, ironically, create jobs to clean up the mess it has made.

Source

Why do people use LLMs?

Apparently for anything and everything, including advice of all kinds (medical, career, business), therapy, and Dungeons & Dragons (to create storylines, characters, and quests for players).

The list is based on a crawl of the web (Quora, Reddit, etc.).

Source

How do language models represent relations between entities?

This work shows that the complex nonlinear computation of LLMs for attribute extraction can be well-approximated with a simple linear function…

and more importantly, without a conceptual model.

The study has two main findings:
1. Some of the implicit knowledge is represented in a simple, interpretable, and structured format.
2. This representation is not universally used; superficially similar facts can be encoded and extracted in very different ways.

This is an interesting study that highlights the simplistic, associative nature of language models and the resulting inconsistency in their outputs.
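The core claim, that relation decoding behaves roughly like a linear (affine) map on entity representations, can be sketched on toy vectors. Everything below is invented for illustration: the 2-D "embeddings", the hidden affine map, and the noise level. It only shows that a simple affine function, fit by plain gradient descent, can recover such a relation:

```python
import random

random.seed(0)

# Hypothetical setup: "subject" vectors s map to "object" vectors o through
# a hidden affine relation o = W s + b, plus a little noise.
W_true = [[0.8, -0.3], [0.5, 1.1]]
b_true = [0.2, -0.1]

def affine(W, b, s):
    return [sum(W[i][j] * s[j] for j in range(2)) + b[i] for i in range(2)]

data = []
for _ in range(50):
    s = [random.uniform(-1, 1), random.uniform(-1, 1)]
    o = [v + random.gauss(0, 0.01) for v in affine(W_true, b_true, s)]
    data.append((s, o))

# Fit an affine map by gradient descent on the mean squared error.
W = [[0.0, 0.0], [0.0, 0.0]]
b = [0.0, 0.0]
lr = 0.1
for _ in range(2000):
    gW = [[0.0, 0.0], [0.0, 0.0]]
    gb = [0.0, 0.0]
    for s, o in data:
        p = affine(W, b, s)
        for i in range(2):
            e = p[i] - o[i]
            gb[i] += e
            for j in range(2):
                gW[i][j] += e * s[j]
    for i in range(2):
        b[i] -= lr * gb[i] / len(data)
        for j in range(2):
            W[i][j] -= lr * gW[i][j] / len(data)

# Residual error ends up near the noise floor: the relation is "linear enough".
err = sum((affine(W, b, s)[i] - o[i]) ** 2
          for s, o in data for i in range(2)) / len(data)
print(round(err, 4))
```

The interesting part of the paper is of course the second finding: in a real LLM, not every relation admits such a fit.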

Source

Google’s new PDF parser

In less sensational but more useful AI news, I’ve just discovered Google’s release of a new PDF parser.

The product was pushed by the Google Scholar team as a Chrome extension, but once installed, it parses any PDF opened in Chrome (it doesn’t have to be an academic article). It creates an interactive table of contents and shows in-text references, tables, and figures on the spot, without having to jump back and forth through the paper. It also has rich citation features.

I love it, but my natural reaction was, why didn’t we have this already?

Source

World’s first fully autonomous AI engineer?

Meet Devin, the world’s first fully autonomous AI software engineer.

In their own words:

“We are an applied AI lab focused on reasoning.

We’re building AI teammates with capabilities far beyond today’s existing AI tools. By solving reasoning, we can unlock new possibilities in a wide range of disciplines—code is just the beginning.”

Cognition Labs makes some big claims. The demos are impressive, but it is not clear what they mean by “solving reasoning”. There is good reasoning and there is bad reasoning. The latter may be easier to solve. Let’s see what’s left after the smoke clears.

At least they do not claim that Devin is a creative thinker.

Source

When do neural nets outperform boosted trees on tabular data?

Mostly, they don’t: tree ensembles continue to outperform neural networks. The decision tree in the figure shows the winner among the top five methods.

Now, the background:

I explored the why of this question before, but didn’t get very far. This may be expected, given the black-box and data-driven nature of these methods.

This is another study, this time testing larger tabular datasets. By comparing 19 methods on 176 datasets, this paper shows that for a large number of datasets, either a simple baseline method performs as well as any other method, or basic hyperparameter tuning on a tree-based ensemble method improves performance more than choosing the best algorithm.
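That finding can be illustrated, very loosely, with a toy experiment: a single decision stump stands in for the “simple baseline”, and the number of boosting rounds in a stump ensemble plays the role of the tuned hyperparameter. The data, the stump code, and the AdaBoost sketch below are all invented for illustration; none of it comes from the paper’s testbed:

```python
import math
import random

random.seed(0)

# Invented toy tabular data: label is +1 when x0 + x1 > 1, else -1.
X = [[random.random(), random.random()] for _ in range(400)]
y = [1 if a + b > 1 else -1 for a, b in X]

def stump_fit(X, y, w):
    """Best weighted decision stump as (error, feature, threshold, sign)."""
    best = None
    for f in range(2):
        for t in [i / 20 for i in range(1, 20)]:
            for s in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (s if xi[f] > t else -s) != yi)
                if best is None or err < best[0]:
                    best = (err, f, t, s)
    return best

def stump_predict(p, x):
    _, f, t, s = p
    return s if x[f] > t else -s

def adaboost(X, y, rounds):
    """Plain AdaBoost over decision stumps."""
    n = len(X)
    w = [1 / n] * n
    ensemble = []
    for _ in range(rounds):
        p = stump_fit(X, y, w)
        err = max(p[0], 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, p))
        # Reweight: misclassified points gain weight for the next round.
        w = [wi * math.exp(-alpha * yi * stump_predict(p, xi))
             for wi, xi, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def accuracy(ensemble, X, y):
    def predict(x):
        return 1 if sum(a * stump_predict(p, x) for a, p in ensemble) > 0 else -1
    return sum(predict(x) == yi for x, yi in zip(X, y)) / len(X)

# One round is the "baseline"; more rounds is the tuned ensemble.
for rounds in (1, 10, 50):
    print(rounds, round(accuracy(adaboost(X, y, rounds), X, y), 3))
```

On this toy diagonal boundary, tuning the one hyperparameter (rounds) lifts accuracy well above the single-stump baseline, which is the spirit of the paper’s claim.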

This project also comes with a great resource: a ready-to-use codebase and testbed alongside the paper.

Source

Why do tree-based models outperform deep learning on tabular data?

“The man who knows how will always have a job. The man who knows why will always be his boss.” – Ralph Waldo Emerson

The study shows that tree-based methods consistently outperform neural networks on tabular data with about 10K observations, both in prediction error and computational efficiency, with and without hyperparameter tuning. The benchmark covers 45 datasets from different domains.

The paper then goes on to explain why. The “why” part offers some experiments but remains quite empirically driven, so I can’t say I’m convinced there. The Hugging Face repo for the paper, with datasets, code, and a detailed description, is a great resource though.

Source

Project Euler and the SQL Murder Mystery

If you’re like me and love coding, but your daily work can go long stretches without it, you’ll like Project Euler, where you can solve math problems using any programming language you like (as a long-time user, I go with Python, since my data-modeling work is more often in R).

The project now has nearly 900 problems, with a new one added about once a week. The problems vary in difficulty, but each can be solved in less than a minute of CPU time using an efficient algorithm on an average computer.
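As a taste of the format, here is the classic warm-up (Problem 1: the sum of all multiples of 3 or 5 below 1000), solved both by brute force and by the constant-time inclusion-exclusion route that the one-minute rule nudges you toward:

```python
# Project Euler, Problem 1: sum of all multiples of 3 or 5 below 1000.

def brute(n):
    # O(n): just scan every integer below n.
    return sum(k for k in range(n) if k % 3 == 0 or k % 5 == 0)

def closed_form(n):
    # O(1): inclusion-exclusion with the arithmetic-series formula.
    # Multiples of 15 are counted twice, so subtract them once.
    def series(m, d):
        q = (m - 1) // d  # number of multiples of d below m
        return d * q * (q + 1) // 2
    return series(n, 3) + series(n, 5) - series(n, 15)

print(brute(1000), closed_form(1000))  # both print 233168
```

Both routes agree here, but only the closed form stays fast when the limit grows by many orders of magnitude, which is exactly the kind of trade-off the harder problems force on you.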

Also, my recommendation engine says that if you like Project Euler, you might also like this SQL Murder Mystery I just discovered. This one is not really that difficult, but it does require you to pay close attention to the clues and prompts.
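To give a flavor of the clue-following style without spoiling the game, here is a made-up miniature using Python’s built-in sqlite3. The schema, tables, and rows are entirely hypothetical and are not the game’s actual data:

```python
import sqlite3

# A hypothetical two-table "mystery": witnesses and their interview
# transcripts. The game works the same way, just with many more tables.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE witness (id INTEGER, name TEXT, street TEXT);
CREATE TABLE interview (witness_id INTEGER, transcript TEXT);
INSERT INTO witness VALUES (1, 'Ada', 'Elm St'), (2, 'Grace', 'Oak Ave');
INSERT INTO interview VALUES
  (1, 'I saw a tall man near the bakery'),
  (2, 'The car had a plate starting with H42');
""")

# Follow a clue: what did the witness on Elm St say?
row = con.execute("""
    SELECT i.transcript
    FROM witness w JOIN INTERVIEW i ON i.witness_id = w.id
    WHERE w.street = 'Elm St'
""").fetchone()
print(row[0])  # prints "I saw a tall man near the bakery"
```

Each answer becomes the WHERE clause of the next query, which is the whole charm of the exercise.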

Unexpected spillover effect of the AI boom

Anguilla will generate over 10% of its GDP from .ai domain sales this year. With a population of 15,899, that works out to a net gain of over $8K per year for a family of four, on an island with a GDP per capita of $20K.
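The back-of-the-envelope math checks out, taking the article’s figures at face value:

```python
# Sanity-check the arithmetic with the figures as quoted:
# population 15,899, GDP per capita ~$20K, >10% of GDP from .ai domains.
population = 15_899
gdp = population * 20_000            # ~ $318M total GDP
ai_revenue = 0.10 * gdp              # the 10%-of-GDP floor
per_person = ai_revenue / population # = $2,000 per person per year
family_of_four = 4 * per_person      # = $8,000, matching the claim
print(round(family_of_four))         # prints 8000
```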

“And it’s just part of the general budget; the government can use it however they want. But I’ve noticed that they’ve paid down some of their debt, which is pretty unusual. They’ve eliminated property taxes on residential buildings. So we’re doing well, I would say.”

So AI stands for Asset Increase in Anguilla.

Source

Environmental costs of the AI boom

This is a bit personal. As a technologist, there’s probably never been a better time to be alive. As an environmentalist, it’s probably just the opposite.

As usual, we largely ignore the environmental impact and sustainability of large language models compared to the use cases and value they create. This whitepaper uses some descriptive data to provide a contrarian yet realistic view. TL;DR – It’s not a crisis per se yet, but it could be soon.

The comparisons need to be refined though. For example, the trend is more important than the snapshot (there is no kettle boom). We also probably need to use the kettle and the oven more than we need language models to “write a biblical verse in the style of the King James Bible explaining how to remove a peanut butter sandwich from a VCR” (from the article).

The article goes on to offer another positive: Responsible AI can spur efforts toward environmental sustainability, “from optimizing model-training efficiency to sourcing cleaner energy and beyond.” We will see about that.

Source