Simplified calls for LLM APIs

For a new project, I’ve been exploring options for a backend that queries multiple large language models, and I just came across a great solution.

It’s an open source project called LiteLLM, and it provides a unified interface for calling 100+ LLMs with the same input and output format, including OpenAI, Anthropic, Azure, and models on Hugging Face.

It also offers cost tracking and rate limiting, and, to make things easier, even a user interface. What I found most useful is how easy it makes comparing and benchmarking LLMs. Kudos to the developer team.
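To make the "same input and output format" point concrete, here is a minimal sketch of how a call through LiteLLM looks. The model names are illustrative, and the snippet assumes `pip install litellm` plus the relevant provider API keys in the environment.

```python
# Minimal sketch of LiteLLM's unified calling convention.
# Model names below are illustrative examples, not an endorsement of specifics.

def build_request(model: str, prompt: str) -> dict:
    """Same OpenAI-style payload, regardless of the provider behind `model`."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(model: str, prompt: str) -> str:
    # Imported lazily so the payload helper above works without the package.
    from litellm import completion  # one entry point for 100+ providers
    resp = completion(**build_request(model, prompt))
    # Responses follow the OpenAI output schema across providers as well.
    return resp.choices[0].message.content

# Usage (requires API keys; models are identified by provider-specific names):
#   for m in ["gpt-3.5-turbo", "claude-2"]:
#       print(m, ask(m, "Summarize LiteLLM in one sentence."))
```

Because every provider is reached through the same payload and response shape, benchmarking becomes a loop over model names rather than a pile of per-provider client code.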

I can see so many business use cases for integrations like this: rapid prototyping and experimentation, performance benchmarking and optimization, cost control…

Source

Creative process and LLMs

Beyond the analogy of LLMs being a lossy compression of the Web, the point about the creative process is spot on in this article. The more we relegate the creative process to the tools of efficiency, the more we risk the output being mediocre.

Will letting a large language model handle the boilerplate allow writers to focus their attention on the really creative parts?

Obviously, no one can speak for all writers, but let me make the argument that starting with a blurry copy of unoriginal work isn’t a good way to create original work. If you’re a writer, you will write a lot of unoriginal work before you write something original. And the time and effort expended on that unoriginal work isn’t wasted; on the contrary, I would suggest that it is precisely what enables you to eventually create something original. The hours spent choosing the right word and rearranging sentences to better follow one another are what teach you how meaning is conveyed by prose.

Sometimes it’s only in the process of writing that you discover your original ideas.

Source

Yet another generative tool without safe and fair use discussion

Google has just revealed its latest text-to-video diffusion model, Lumiere, as the debate over fake images and videos heats up, with the following note:

๐˜š๐˜ฐ๐˜ค๐˜ช๐˜ฆ๐˜ต๐˜ข๐˜ญ ๐˜๐˜ฎ๐˜ฑ๐˜ข๐˜ค๐˜ต
๐˜–๐˜ถ๐˜ณ ๐˜ฑ๐˜ณ๐˜ช๐˜ฎ๐˜ข๐˜ณ๐˜บ ๐˜จ๐˜ฐ๐˜ข๐˜ญ ๐˜ช๐˜ฏ ๐˜ต๐˜ฉ๐˜ช๐˜ด ๐˜ธ๐˜ฐ๐˜ณ๐˜ฌ ๐˜ช๐˜ด ๐˜ต๐˜ฐ ๐˜ฆ๐˜ฏ๐˜ข๐˜ฃ๐˜ญ๐˜ฆ ๐˜ฏ๐˜ฐ๐˜ท๐˜ช๐˜ค๐˜ฆ ๐˜ถ๐˜ด๐˜ฆ๐˜ณ๐˜ด ๐˜ต๐˜ฐ ๐˜จ๐˜ฆ๐˜ฏ๐˜ฆ๐˜ณ๐˜ข๐˜ต๐˜ฆ ๐˜ท๐˜ช๐˜ด๐˜ถ๐˜ข๐˜ญ ๐˜ค๐˜ฐ๐˜ฏ๐˜ต๐˜ฆ๐˜ฏ๐˜ต ๐˜ช๐˜ฏ ๐˜ข๐˜ฏ ๐˜ค๐˜ณ๐˜ฆ๐˜ข๐˜ต๐˜ช๐˜ท๐˜ฆ ๐˜ข๐˜ฏ๐˜ฅ ๐˜ง๐˜ญ๐˜ฆ๐˜น๐˜ช๐˜ฃ๐˜ญ๐˜ฆ ๐˜ธ๐˜ข๐˜บ. ๐˜๐˜ฐ๐˜ธ๐˜ฆ๐˜ท๐˜ฆ๐˜ณ, ๐˜ต๐˜ฉ๐˜ฆ๐˜ณ๐˜ฆ ๐˜ช๐˜ด ๐˜ข ๐˜ณ๐˜ช๐˜ด๐˜ฌ ๐˜ฐ๐˜ง ๐˜ฎ๐˜ช๐˜ด๐˜ถ๐˜ด๐˜ฆ ๐˜ง๐˜ฐ๐˜ณ ๐˜ค๐˜ณ๐˜ฆ๐˜ข๐˜ต๐˜ช๐˜ฏ๐˜จ ๐˜ง๐˜ข๐˜ฌ๐˜ฆ ๐˜ฐ๐˜ณ ๐˜ฉ๐˜ข๐˜ณ๐˜ฎ๐˜ง๐˜ถ๐˜ญ ๐˜ค๐˜ฐ๐˜ฏ๐˜ต๐˜ฆ๐˜ฏ๐˜ต ๐˜ธ๐˜ช๐˜ต๐˜ฉ ๐˜ฐ๐˜ถ๐˜ณ ๐˜ต๐˜ฆ๐˜ค๐˜ฉ๐˜ฏ๐˜ฐ๐˜ญ๐˜ฐ๐˜จ๐˜บ, ๐˜ข๐˜ฏ๐˜ฅ ๐˜ธ๐˜ฆ ๐˜ฃ๐˜ฆ๐˜ญ๐˜ช๐˜ฆ๐˜ท๐˜ฆ ๐˜ต๐˜ฉ๐˜ข๐˜ต ๐˜ช๐˜ต ๐˜ช๐˜ด ๐˜ค๐˜ณ๐˜ถ๐˜ค๐˜ช๐˜ข๐˜ญ ๐˜ต๐˜ฐ ๐˜ฅ๐˜ฆ๐˜ท๐˜ฆ๐˜ญ๐˜ฐ๐˜ฑ ๐˜ข๐˜ฏ๐˜ฅ ๐˜ข๐˜ฑ๐˜ฑ๐˜ญ๐˜บ ๐˜ต๐˜ฐ๐˜ฐ๐˜ญ๐˜ด ๐˜ง๐˜ฐ๐˜ณ ๐˜ฅ๐˜ฆ๐˜ต๐˜ฆ๐˜ค๐˜ต๐˜ช๐˜ฏ๐˜จ ๐˜ฃ๐˜ช๐˜ข๐˜ด๐˜ฆ๐˜ด ๐˜ข๐˜ฏ๐˜ฅ ๐˜ฎ๐˜ข๐˜ญ๐˜ช๐˜ค๐˜ช๐˜ฐ๐˜ถ๐˜ด ๐˜ถ๐˜ด๐˜ฆ ๐˜ค๐˜ข๐˜ด๐˜ฆ๐˜ด ๐˜ช๐˜ฏ ๐˜ฐ๐˜ณ๐˜ฅ๐˜ฆ๐˜ณ ๐˜ต๐˜ฐ ๐˜ฆ๐˜ฏ๐˜ด๐˜ถ๐˜ณ๐˜ฆ ๐˜ข ๐˜ด๐˜ข๐˜ง๐˜ฆ ๐˜ข๐˜ฏ๐˜ฅ ๐˜ง๐˜ข๐˜ช๐˜ณ ๐˜ถ๐˜ด๐˜ฆ.

This is the only paragraph in the paper on safe and fair use. The model output certainly looks impressive, but, without a concrete discussion of ideas and guardrails for safe and fair use, this reads like nothing more than a checkbox to avoid bad publicity from the likely consequences.

Source

Garbage in, garbage out?

In a sample of 6.4 billion sentences in 90 languages from the Web, this study finds that 57.1% of the content is low-quality machine translation. Moreover, it is low-quality content produced in English (to generate ad revenue) that is translated en masse into other languages (again, to generate ad revenue).

The study discusses the negative implications for the training of large language models (garbage in, garbage out), but the increasingly poor quality of public web content is concerning nevertheless.

Source

Excel =? LLM

In this Q&A about Walmart’s custom-trained, proprietary “My Assistant” language model, I saw an excerpt from another article in which Walmart’s Head of People Product uses Excel as an analogy for generative models.

โ€œ๐˜ˆ๐˜ค๐˜ค๐˜ฐ๐˜ณ๐˜ฅ๐˜ช๐˜ฏ๐˜จ ๐˜ต๐˜ฐ ๐˜—๐˜ฆ๐˜ต๐˜ฆ๐˜ณ๐˜ด๐˜ฐ๐˜ฏ, ๐˜ข๐˜ฏ๐˜บ ๐˜จ๐˜ฆ๐˜ฏ๐˜ฆ๐˜ณ๐˜ข๐˜ต๐˜ช๐˜ท๐˜ฆ ๐˜ˆ๐˜ ๐˜ณ๐˜ฐ๐˜ญ๐˜ญ๐˜ฐ๐˜ถ๐˜ต ๐˜ช๐˜ด ๐˜จ๐˜ฐ๐˜ช๐˜ฏ๐˜จ ๐˜ต๐˜ฐ ๐˜ฆ๐˜ฏ๐˜ค๐˜ฐ๐˜ถ๐˜ฏ๐˜ต๐˜ฆ๐˜ณ ๐˜ข ๐˜ค๐˜ฉ๐˜ข๐˜ฏ๐˜จ๐˜ฆ ๐˜ค๐˜ถ๐˜ณ๐˜ท๐˜ฆ ๐˜ฏ๐˜ฐ๐˜ต ๐˜ถ๐˜ฏ๐˜ญ๐˜ช๐˜ฌ๐˜ฆ ๐˜ธ๐˜ฉ๐˜ข๐˜ต ๐˜”๐˜ช๐˜ค๐˜ณ๐˜ฐ๐˜ด๐˜ฐ๐˜ง๐˜ต ๐˜Œ๐˜น๐˜ค๐˜ฆ๐˜ญ ๐˜ฆ๐˜น๐˜ฑ๐˜ฆ๐˜ณ๐˜ช๐˜ฆ๐˜ฏ๐˜ค๐˜ฆ๐˜ฅ ๐˜ช๐˜ฏ ๐˜ต๐˜ฉ๐˜ฆ 1980๐˜ด ๐˜ฃ๐˜ฆ๐˜ง๐˜ฐ๐˜ณ๐˜ฆ ๐˜ฃ๐˜ฆ๐˜ช๐˜ฏ๐˜จ ๐˜ข๐˜ค๐˜ค๐˜ฆ๐˜ฑ๐˜ต๐˜ฆ๐˜ฅ ๐˜ข๐˜ด ๐˜ค๐˜ฐ๐˜ณ๐˜ฑ๐˜ฐ๐˜ณ๐˜ข๐˜ต๐˜ฆ ๐˜จ๐˜ฐ๐˜ด๐˜ฑ๐˜ฆ๐˜ญ. ๐˜š๐˜ช๐˜ฎ๐˜ช๐˜ญ๐˜ข๐˜ณ ๐˜ต๐˜ฐ ๐˜ฉ๐˜ฐ๐˜ธ ๐˜ฆ๐˜ข๐˜ณ๐˜ญ๐˜บ ๐˜ถ๐˜ด๐˜ฆ๐˜ณ๐˜ด ๐˜ฐ๐˜ง ๐˜”๐˜ช๐˜ค๐˜ณ๐˜ฐ๐˜ด๐˜ฐ๐˜ง๐˜ต ๐˜Œ๐˜น๐˜ค๐˜ฆ๐˜ญ ๐˜ฉ๐˜ข๐˜ฅ ๐˜ต๐˜ฐ ๐˜ฃ๐˜ฆ ๐˜ต๐˜ณ๐˜ข๐˜ช๐˜ฏ๐˜ฆ๐˜ฅ ๐˜ต๐˜ฐ ๐˜ถ๐˜ฏ๐˜ฅ๐˜ฆ๐˜ณ๐˜ด๐˜ต๐˜ข๐˜ฏ๐˜ฅ ๐˜ฉ๐˜ฐ๐˜ธ ๐˜ต๐˜ฐ ๐˜ฉ๐˜ข๐˜ณ๐˜ฏ๐˜ฆ๐˜ด๐˜ด ๐˜ต๐˜ฉ๐˜ฆ ๐˜ฑ๐˜ฐ๐˜ธ๐˜ฆ๐˜ณ ๐˜ฐ๐˜ง ๐˜ข ๐˜—๐˜ช๐˜ท๐˜ฐ๐˜ต๐˜›๐˜ข๐˜ฃ๐˜ญ๐˜ฆ ๐˜ข๐˜ฏ๐˜ฅ ๐˜๐˜“๐˜–๐˜–๐˜’๐˜œ๐˜— ๐˜ง๐˜ฐ๐˜ณ๐˜ฎ๐˜ถ๐˜ญ๐˜ข๐˜ด, ๐˜จ๐˜ฆ๐˜ฏ๐˜ฆ๐˜ณ๐˜ข๐˜ต๐˜ช๐˜ท๐˜ฆ ๐˜ˆ๐˜ ๐˜ถ๐˜ด๐˜ฆ๐˜ณ๐˜ด ๐˜ฉ๐˜ข๐˜ท๐˜ฆ ๐˜ต๐˜ฐ ๐˜ถ๐˜ฏ๐˜ฅ๐˜ฆ๐˜ณ๐˜ด๐˜ต๐˜ข๐˜ฏ๐˜ฅ ๐˜ฑ๐˜ณ๐˜ฐ๐˜ฎ๐˜ฑ๐˜ต๐˜ช๐˜ฏ๐˜จ ๐˜ข๐˜ฏ๐˜ฅ ๐˜ฉ๐˜ช๐˜จ๐˜ฉ-๐˜ช๐˜ฎ๐˜ฑ๐˜ข๐˜ค๐˜ต ๐˜ถ๐˜ด๐˜ฆ ๐˜ค๐˜ข๐˜ด๐˜ฆ๐˜ด ๐˜ต๐˜ฐ ๐˜ต๐˜ณ๐˜ถ๐˜ญ๐˜บ ๐˜ฉ๐˜ข๐˜ณ๐˜ฏ๐˜ฆ๐˜ด๐˜ด ๐˜ช๐˜ต๐˜ด ๐˜ฑ๐˜ฐ๐˜ธ๐˜ฆ๐˜ณ.โ€

2024 will be the year that more companies adopt generative models as an aid to their employees. But it is interesting to use an analogy to deterministic functions like PivotTable and VLOOKUP to drive the adoption of a black box model with probabilistic outputs. Let’s see how that plays out for Walmart.

Source

From explainable to predictive, and to causal

The use of AI algorithms for drug discovery is one of the most promising areas in terms of societal value. Historically, most deep learning approaches in this area have used black box models, providing little insight into discoveries.

A recent study published in Nature uses explainable graph neural networks to address the urgent need for new antibiotics due to the ongoing antibiotic resistance crisis.

The study begins with the testing and labeling of 39,312 compounds,

– which become training data for four ensembles of graph neural networks,
– which make predictions for a total of 12,076,365 compounds in the test set (hits vs. non-hits based on antibiotic activity and cytotoxicity),
– of which 3,646 compounds are selected based on the probability that they will act as antibiotics without being toxic to humans,
– which are then reduced to 283 compounds by a series of empirical steps,
– and to 4 compounds by experimental testing,
– and of the 4, two “drug-like” compounds are tested in mice,

and one of the two is found to be effective against MRSA infections in this controlled experiment, thus closing the causal loop.
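The core selection step in that funnel can be sketched as a simple dual-threshold filter over ensemble predictions. The compound names, scores, and thresholds below are made up for illustration; the study's actual models are graph neural networks over molecular structures.

```python
# Toy sketch of the screening step: average an ensemble's predicted
# probabilities per compound, then keep compounds that look both
# antibiotically active and non-cytotoxic. All numbers are illustrative.

def select_hits(scores, activity_cut=0.5, toxicity_cut=0.2):
    """scores: {compound: [(p_active, p_cytotoxic), ...]}, one pair per ensemble member."""
    hits = []
    for compound, preds in scores.items():
        p_act = sum(p for p, _ in preds) / len(preds)  # ensemble mean activity
        p_tox = sum(t for _, t in preds) / len(preds)  # ensemble mean cytotoxicity
        if p_act >= activity_cut and p_tox <= toxicity_cut:
            hits.append(compound)
    return hits

example = {
    "cmpd_A": [(0.90, 0.10), (0.80, 0.05)],  # active and safe -> kept
    "cmpd_B": [(0.90, 0.60), (0.95, 0.50)],  # active but cytotoxic -> dropped
    "cmpd_C": [(0.20, 0.10), (0.10, 0.00)],  # inactive -> dropped
}
print(select_hits(example))  # -> ['cmpd_A']
```

The point of the two-sided filter is that a compound must clear both criteria at once, which is how 12 million candidates get cut down to a few thousand before any wet-lab work starts.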

This is a great application of combining explainable predictive models with causal identification, and demonstrates that machine learning models used in high-stakes areas can be explainable without compromising performance.

Source

Dose-response analysis using difference-in-differences

The dose-response work of Brantly Callaway, Andrew Goodman-Bacon, and Pedro Sant’Anna seems to be coming along nicely. If you haven’t had enough of the parallel trends assumption, get ready for the “strong” parallel trends assumption!

“In this paper, we discuss an alternative but typically stronger assumption, which we call strong parallel trends. Strong parallel trends often restricts treatment effect heterogeneity and justifies comparing dose groups. Intuitively, to be a good counterfactual, lower-dose units must reflect how higher-dose units’ outcomes would have changed without treatment and at the lower level of the treatment.
We show that when one only imposes the “standard” parallel trends assumption, comparisons across treatment dosages are “contaminated” with selection bias related to treatment effect heterogeneity. Thus, without additional structure, comparison across dosages may not identify causal effects. The plausibility of strong parallel trends depends on the empirical context of the analysis, and we discuss some falsification strategies that can be used to assess it.”

Source

Human Creator v. Gen AI

2024 will be the year of lawsuits against generative AI companies. We’ve already had the GitHub Copilot case over assisted coding (https://lnkd.in/eH8Ap-eJ) and the Anthropic case over AI lyrics (https://lnkd.in/eY6UF9Cn). Now the Times joins the fray (https://lnkd.in/e8wHHMzx), and more are likely to follow.

So far, Gen AI companies have defended themselves by arguing fair use and transformative use: that their models create something substantially new and serve a different purpose than the original (and thus don’t substitute for it, as in Google Books). But recent Supreme Court decisions such as Warhol v. Goldsmith have made clear that transformative use claims face a high bar.

What might come next?
– New business models for content licensing
– Restrictions on public access to some internal models
– Calls for updated copyright laws and content use regulations
– Technical solutions like attribution, data provenance, and content tagging
– What else?