Open Pretrained Transformer

Meta AI’s release of Open Pretrained Transformer (OPT-175B), which is on par with OpenAI’s GPT-3 at 175 billion parameters, emphasizes responsible compute and claims to have been developed with one-seventh the carbon footprint of GPT-3. The pretrained model weights are free to download (link in the comments). This is good news for open collaboration and even better news for the environment.

Source

When reverse causation is more profitable

You may have heard of ESG (Environmental, Social, and Governance) investing. It’s also called “socially responsible investing” when ethics is added to the picture. Public companies are assigned an ESG score, a quantification of their social impact. What social impact, though? You would probably expect ESG ratings to quantify the societal impact of (not on) a company, right? Well, you’ll be disappointed. “Socially responsible investing” is a misnomer when it is based on ESG ratings, at least those reported by MSCI, a leading global provider of ESG ratings.

MSCI essentially quantifies the impact of environmental, social, and governance risks on a company’s operations (not the other way around!). In other words, if we rely on ESG ratings when making investment decisions, we may not be doing any social good. We are merely ensuring that our investments are protected from environmental, social, and other risks such as climate change. After all, why would we care about our investments’ carbon footprint as long as profits are good?

MSCI’s plot offers some takeaways on how to generate data and model it. Apparently, measuring reverse causation and packaging it to look as if cause and effect are in the right place can be quite profitable. To be fair, MSCI is explicit that its data generation and modeling process resides on the dark side.

Source

On the proof-of-concept to production gap

A valuable insight from Andrew Ng on the proof-of-concept-to-production gap in computer vision, one that once again underscores the importance of context:

“It turns out,” Ng said, “that when we collect data from Stanford Hospital, then we train and test on data from the same hospital, indeed, we can publish papers showing [the algorithms] are comparable to human radiologists in spotting certain conditions.”

But, he said, “It turns out [that when] you take that same model, that same AI system, to an older hospital down the street, with an older machine, and the technician uses a slightly different imaging protocol, that data drifts to cause the performance of [the] AI system to degrade significantly. In contrast, any human radiologist can walk down the street to the older hospital and do just fine.”
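A minimal sketch of the drift Ng describes, using synthetic numbers in place of real imaging data (the hospitals, features, and shift sizes below are made up for illustration): a classifier that looks strong on data from its own “hospital” degrades once the measurement protocol changes, even though the underlying condition it predicts has not.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_hospital_data(n, gain=1.0, offset=0.0):
    """Synthetic stand-in for imaging features: the true condition depends on a
    latent signal z, while gain/offset mimic a different machine or protocol."""
    z = rng.normal(0, 1, size=(n, 3))
    y = (z[:, 0] + 0.5 * z[:, 1] > 0).astype(int)         # the condition itself
    X = gain * z + offset + rng.normal(0, 0.2, (n, 3))    # what the machine records
    return X, y

# Train and test on data from the same (hypothetical) hospital
X_tr, y_tr = make_hospital_data(5000)
X_te, y_te = make_hospital_data(1000)

# "Older hospital down the street": same condition, different machine and protocol
X_old, y_old = make_hospital_data(1000, gain=1.4, offset=0.6)

model = LogisticRegression().fit(X_tr, y_tr)
print("same hospital  :", accuracy_score(y_te, model.predict(X_te)))
print("older hospital :", accuracy_score(y_old, model.predict(X_old)))
```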

Source

99/1 is the new 80/20

An obvious but often neglected problem is the overemphasis on accuracy as a performance metric. In a two-class problem where 99% of the cases belong to class 0 (not spam), achieving 99% accuracy is as easy as classifying every email as safe. Sensitivity, specificity, and other metrics exist for a reason.
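A quick illustration with made-up numbers: on a 99/1 split, a “classifier” that labels everything as safe scores 99% accuracy while catching zero spam.

```python
import numpy as np

rng = np.random.default_rng(42)

# 99% of emails are legitimate (class 0), 1% are spam (class 1)
y_true = (rng.random(10_000) < 0.01).astype(int)

# A "classifier" that simply marks every email as safe
y_pred = np.zeros_like(y_true)

accuracy = (y_pred == y_true).mean()
sensitivity = y_pred[y_true == 1].mean()          # share of actual spam caught
specificity = (y_pred[y_true == 0] == 0).mean()   # share of legitimate email kept

print(f"accuracy:    {accuracy:.3f}")     # ~0.99
print(f"sensitivity: {sensitivity:.3f}")  # 0.000 -- not a single spam email caught
print(f"specificity: {specificity:.3f}")  # 1.000
```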

The story of Waymo, Google’s self-driving car project, illustrates the value of solving the remaining 1% of the problem, where conventional machine learning gets stuck due to the limitations of training data. If that last 1% of error becomes a make-or-break point, one needs to get creative. On a long tail that extends to infinity, walking faster or even running probably does not help as much as a leap of imagination.

I must note that it’s not fair to expect an autonomous car to be “error-free,” given that we do not expect human drivers to be error-free on driver’s license exams and road tests. The two will just make different errors.

When to normalize / apply weights

To me, this story is interesting not because of the lack of transparency in the methodology but because of the potential reason for the rankings to be wrong.

I want to believe that this is a mistake, not fraud, but really? Applying the weights before normalizing the scores? And a Bloomberg Businessweek spokesperson says “the magazine’s methodology was vetted by multiple data scientists.”

I have created a quick scenario as a reminder to my former (and current) students (posted in the comments, as LinkedIn doesn’t allow it here). In the example, the scores are standardized across the five items (which are randomly generated and assigned weights). In the Businessweek rankings, standardization is supposed to be across institutions, so that the weights proportionately affect each institution’s score on the corresponding item. Nevertheless, the source of the error is the same: if the weights are applied before normalizing the data, the scores are adjusted by the weights disproportionately, and the ranking changes accordingly.
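The original scenario lives in that LinkedIn comment, so here is my own minimal reconstruction with made-up institutions, items, and weights; it only illustrates the order-of-operations issue, not Businessweek’s actual data. With z-score standardization, weighting the raw scores first effectively cancels the weights, so the resulting ranking reflects equal weighting rather than the intended one.

```python
import numpy as np
from scipy.stats import zscore

rng = np.random.default_rng(7)

n_schools, n_items = 10, 5
scores = rng.uniform(1, 10, size=(n_schools, n_items))  # raw item scores (made up)
weights = np.array([0.35, 0.25, 0.20, 0.15, 0.05])       # intended item weights

# Correct order: standardize each item across institutions, THEN apply the weights
right = zscore(scores, axis=0) @ weights

# Suspected error: weight the raw scores, THEN standardize each item
# (z-scores are scale-invariant, so the weights are effectively wiped out)
wrong = zscore(scores * weights, axis=0).sum(axis=1)

def ranking(total):
    """Rank 1 = highest composite score."""
    return np.argsort(np.argsort(-total)) + 1

print("standardize, then weight:", ranking(right))
print("weight, then standardize:", ranking(wrong))
```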

Algorithmic fashioning

For years, Zara has been my go-to case for discussing data centricity in fashion retail. Zara is a staple example of how a focus on data and analytics, combined with the right complementary business processes, can work wonders even in a market with high demand uncertainty due to the hedonic nature of consumption.

Shein seems to be emerging as a contender, moving further into data-driven (not merely data-informed) fast fashion. Its operation has even been called real-time fashion rather than fast fashion. Shein doesn’t own a single physical store and ships all of its products directly from China.

Bloomberg reports that “Shein has developed proprietary technology that harvests customers’ search data from the app and shares it with suppliers, to help guide decisions about design, capacity and production. It generates recommendations for raw materials and where to buy them, and gives suppliers access to a deep database of designs for inspiration.”

Shein has reduced the design-to-customer turnaround to 10 days, a record compared to the already-fast Zara’s two- to three-week lead time. It’s not a niche operation either, given reports of $10 billion in annual sales and a potential $30 billion valuation.

I’ve found the whole story interesting. It all sounds impressive but also dangerous. The article already mentions some of the “accidents” its algorithm-driven fashion has caused, along with sustainability concerns.

“But it would be naïve to predict that unpredictable events won’t happen in the future.”

“Zillow Quits Home-Flipping Business, Cites Inability to Forecast Prices,” WSJ reports.* I try to avoid passing along news stories, but it’s not every day that I receive a predictive analytics story as breaking news.

I wonder whether the reason is really “an inability to forecast the prices” or rather relying too much on an ability to forecast the prices, for a venture that was debuted as a “$20 billion a year” business.

Zillow announced plans for this data-driven venture in 2018, citing consumers who “expect magic to happen with a simple push of a button.” In a statement yesterday, Zillow seems to have realized that magic is not happening: “But it would be naïve to predict that unpredictable events won’t happen in the future.”

Maybe it is never a good idea to build a whole business model on forecasts that grossly underestimate how much the error (both reducible and irreducible) can change due to potential bifurcations in market forces.
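A minimal sketch of that point with a toy price series (all numbers invented, nothing to do with Zillow’s actual models): a trend model fit in a calm regime looks accurate on the data it has seen, but after a regime shift both its bias (reducible error) and the noise itself (irreducible error) blow up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy home-price index: a calm regime followed by a "bifurcation" at t = 200,
# where the trend reverses and the irreducible noise grows.
t = np.arange(250)
trend = np.where(t < 200, 100 + 0.05 * t, 110 - 0.20 * (t - 200))
noise = np.where(t < 200, 1.0, 3.0)
prices = trend + rng.normal(0, noise)

# Fit a simple trend model on the calm regime only, then forecast forward
slope, intercept = np.polyfit(t[:200], prices[:200], 1)
forecast = intercept + slope * t

rmse_fit = np.sqrt(np.mean((prices[:200] - forecast[:200]) ** 2))   # ~noise level
rmse_post = np.sqrt(np.mean((prices[200:] - forecast[200:]) ** 2))  # bias + bigger noise
print(f"RMSE in the regime the model saw: {rmse_fit:.2f}")
print(f"RMSE after the bifurcation:       {rmse_post:.2f}")
```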

Source

If tech is everything, then it is nothing

What do #Facebook, #Tesla, #DoorDash, #Nvidia, and #GM* have in common? They are all “tech” companies.

Alex Webb of Bloomberg offers a linguistic explanation for why “technology” has ceased to be meaningful:

“English lacked an equivalent to the French technique and German Technik. The English word “technique” hadn’t caught up with the innovations of the Industrial Revolution, and it still applied solely to the way in which an artist or artisan performed a skill.”

He contrasts technique as in “artistic technique” in English with Technik as in “Lufthansa Technik” in German, and argues that “technology” emerged in the early 20th century for lack of a better alternative.

Whether the reason is linguistic, sheer overhype, or semantic satiation, we may be better off dropping the “tech company” label at this point unless it is elaborated further. For companies that are more tech than your average tech, a good alternative may be “deep tech.”

Data-driven paralysis

Data-driven decision making can lead to paralysis. Last week, the FDA and CDC committees couldn’t make a decision about the booster shots because (complete) data was not available. Well, making decisions in the absence of complete data is a process of imagination and deep thinking, one that puts hypothesis development at the center, and one in which humans continue to prevail over machines.

To avoid such paralysis, more focus can be put on developing and rethinking hypotheses and their likelihoods. In emergent problems, an in-depth discussion of hypotheses and their likelihoods is probably more helpful than an obsession with accessing complete data. Otherwise, by defining complete data as a prerequisite, as data-driven decision making does, we will continue to be paralyzed while looking into the future.

If we turn to data-informed decision making, however, hypotheses would take more control (not gut feeling, but properly developed hypotheses*). We could then make decisions now and improve them as more data becomes available, without being paralyzed in the present. Rather than seeking the truth, we would seek probable truths (as in Bayesian thinking).
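A minimal sketch of what “probable truths” can look like in practice, using a generic Beta-Binomial update with entirely made-up numbers (this is not the FDA/CDC data): state a hypothesis, attach a prior, and refine the probability as partial evidence arrives, rather than waiting for complete data before deciding anything.

```python
from scipy.stats import beta

# Hypothesis: "the intervention works more than 60% of the time."
# Start from a weak prior and update it as partial evidence trickles in.
a, b = 2, 2                                 # weak Beta prior centered near 50%
batches = [(18, 25), (40, 55), (71, 90)]    # (successes, trials), made-up batches

seen = 0
for successes, trials in batches:
    a += successes
    b += trials - successes
    seen += trials
    posterior = beta(a, b)
    print(f"after {seen:>3} observations: "
          f"P(effectiveness > 0.6) = {posterior.sf(0.6):.2f}")
```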

While we may be able to remain strictly data-driven for some problems and decisions, we should be comfortable proceeding informed (not driven) by data for others.

* This post made me think of a book I enjoyed reading last Fall: Defense of the Scientific Hypothesis: From Reproducibility Crisis to Big Data