Overcorrecting the overcorrected

Jeremy Siegel almost loses it here, and for good reason. The Federal Reserve Board called inflation transitory when prices were skyrocketing last Fall. When the same Fed (except for a couple of members) now argues that inflation is not yet slowing at a reasonable pace despite the price contractions in recent data, questions arise. The Fed gives the impression that it is now overcorrecting its earlier overcorrection of loosening monetary policy too much and for too long.

In case after case, our data modeling and inference practices are tested against lags in the data. Lacking high predictive power, we resort to overcorrection. Overcorrecting means doing more than enough (vs. not doing enough), and it sounds better than falling short. But then the pendulum swings back a little harder.

What do we learn from such swings? One rather obvious takeaway is to put more emphasis on correctly understanding and modeling lags in time series. Another is to be content with falling short occasionally, especially when the cost of overcorrecting is much higher than the cost of falling short.
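On the first takeaway, here is a minimal sketch of what making the lag an explicit modeling choice can look like. The series, the 3-period delay, and all numbers below are made up purely for illustration:

```python
# A minimal sketch: regress an outcome on lagged versions of a predictor so the
# lag structure is estimated rather than assumed. Illustrative, made-up data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 200
policy = rng.normal(size=n)                       # e.g., a policy/rate series
# The outcome responds to the predictor with a 3-period delay (plus noise)
outcome = 0.8 * np.roll(policy, 3) + rng.normal(scale=0.5, size=n)

df = pd.DataFrame({"policy": policy, "outcome": outcome}).iloc[3:]
# Build lagged features and let the regression reveal which lag matters
for lag in range(0, 6):
    df[f"policy_lag{lag}"] = df["policy"].shift(lag)
df = df.dropna()

X = df[[f"policy_lag{lag}" for lag in range(0, 6)]].to_numpy()
y = df["outcome"].to_numpy()
coefs, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(y)), X]), y, rcond=None)
print(dict(zip(["intercept"] + [f"lag{k}" for k in range(6)], coefs.round(2))))
# The lag-3 coefficient should dominate, recovering the true delay.
```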

Source

Killing fish the right way using computer vision

The device in the picture, which looks like a commercial refrigerator or shelf, is an ikejime machine. Ikejime is considered the fastest and most humane method of killing fish. The method also yields the best-tasting fish, because the fish are killed instantly before their bodies go into distress and release lactic acid and ammonia into their muscles. Ikejime involves quickly inserting a spike directly into the hindbrain, causing immediate brain death.

That is, if fishermen know exactly where the hindbrain is for each species, can insert the spike quickly and precisely within minutes of catching a fish, and have time to do so repeatedly. Well, that’s what robots are for.

Shinkei Systems’ machine is a combination of hardware and an edge-detection algorithm (a building block of object recognition with convolutional neural networks). Challenges abound. The machine operates on a fishing boat that tilts around even at zero speed. Apparently, “even in the same species, even with the same contour, the brain can be in a different location” as well.
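Classical edge detection itself is simple to play with. Here is a rough, hypothetical illustration with OpenCV; this is not Shinkei’s actual pipeline, and "fish.jpg" is a made-up input:

```python
# Rough illustration of classical edge detection (NOT Shinkei's actual pipeline).
# Assumes OpenCV is installed and "fish.jpg" is a hypothetical input image.
import cv2

img = cv2.imread("fish.jpg", cv2.IMREAD_GRAYSCALE)
img = cv2.GaussianBlur(img, (5, 5), 0)            # denoise before edge detection
edges = cv2.Canny(img, threshold1=50, threshold2=150)

# Contours extracted from the edge map could feed a downstream model that
# estimates where the hindbrain sits for a given species.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"Found {len(contours)} candidate contours")
```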

Working with fishermen in Maine, New Hampshire, and Cape Cod, Shinkei Systems seems to be accomplishing the task on fresh-caught fish at a rate of one every 10-15 seconds. Moving forward, accuracy should increase and the time to complete the task should decrease, leading to further opportunities.

Source

BLOOM, the first truly open-science, open-access, and multilingual large language model

“We wanted to make sure people with proximity to the data, their country, the language they speak, had a hand in choosing what language came into the model’s training,” says Jernite.

BLOOM, the first truly open-science, open-access, and multilingual (46 languages) large language model, with 176B parameters (slightly larger than GPT-3), will soon be released as a complete pretrained model. Behind the project is BigScience, a wide-scale collaboration of over 1,000 researchers.

The project is quite impressive overall, both for the extent of collaboration and outcome. It’s also an engineering delight to watch. The model has been trained using 384 A100 GPUs (with 80 GB of memory each) since March 11, 2022.

BigScience provides updates on training every day (having hit its initial target earlier than planned, the model is currently being trained for “a few more days”). See the links in the comments to follow the updates and download the model. The full model will be released on HuggingFace (also a partner of the project).
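Once the weights are up, using the model should look roughly like any other causal language model on the Hugging Face Hub. A minimal sketch, with the caveat that the full 176B model is far too large for most machines, so the smaller checkpoint name below is an assumption used to keep the example runnable:

```python
# A minimal sketch of loading a BLOOM checkpoint from the Hugging Face Hub.
# The full 176B model is expected at "bigscience/bloom"; the smaller checkpoint
# name below is an assumption so the example can run on a single machine.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigscience/bloom-560m"   # assumed smaller sibling of the full model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Data centricity means", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```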

This is a significant step forward for at least two reasons: the way the training data was collected and the core values behind the initiative. BigScience seems to have prioritized data quality by hand-crafting the training data. In a world of models that favor kitchen-sink approaches (because they can!), this is progress. More obviously, BLOOM paves the way for true democratization by removing the strings that have been attached to the use of such models by OpenAI, Google, and Facebook (apply for API access, accredited researchers only, etc.).

Source

Ordinary lasso vs. fancy lasso

While attending the Symposium on Data Science & Statistics two weeks ago to present our study in the Improving Algorithms for Big Data session, I learned about useful new methods (and met the great people behind them).

One of my favorites is the Sparsity Ranked Lasso (SRL) by Ryan Peterson. The paper mainly focuses on the lasso, but the idea is also extended to other regularization approaches such as the elastic net.

Takeaway: Use SRL over the ordinary lasso, especially if your model has interaction or polynomial terms. On average, SRL’s predictive performance is better than the lasso’s on the 112 datasets from the Penn Machine Learning Benchmark database. Ryan also goes on to show that SRL outperforms a random forest (RF) in a case study, in both accuracy and efficiency. Even if SRL only performed on par with an RF, why not use SRL, as it is both interpretable and explainable?

The part I loved about SRL is its simple yet important challenge to what the authors call “covariate equipoise”: the prior belief that all covariates are equally likely to enter a model. Basically, a model’s simplicity is usually defined by its parsimony, i.e., the number of parameters, regardless of whether a parameter is an interaction (or a polynomial form) of other terms in the model. This is problematic for obvious reasons, and SRL solves it by penalizing covariate groups differently based on their type.
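The mechanics can be mimicked with any lasso that accepts per-feature penalty weights. Here is a conceptual Python sketch, not the sparseR implementation; the 2x weight on non-main effects is made up purely for illustration:

```python
# Conceptual sketch of sparsity-ranked penalties (NOT the sparseR package):
# interaction/polynomial terms get a heavier lasso penalty than main effects.
# A per-feature penalty w_j can be imposed with a standard lasso by dividing
# column j by w_j and fitting as usual (then beta_j = beta_tilde_j / w_j).
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = 2 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=300)   # main effects only

poly = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly.fit_transform(X)                            # mains + interactions + squares
n_main = X.shape[1]
# Made-up weighting: non-main effects penalized twice as hard as main effects
weights = np.ones(X_poly.shape[1])
weights[n_main:] = 2.0

lasso = Lasso(alpha=0.1).fit(X_poly / weights, y)
beta = lasso.coef_ / weights                              # map back to the original scale
for name, b in zip(poly.get_feature_names_out(), beta):
    if abs(b) > 1e-8:
        print(f"{name}: {b:.2f}")
```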

And yes, there is a package for that: sparseR. Links to the R package and the nicely written paper are in the comments.

R package – Paper

Where have all the Uber drivers gone?

A seemingly persistent effect of the pandemic on Uber is a 50% decrease in the mobility segment’s share of revenue (from roughly an 80% share to less than a 40% share of total revenue coming from rides). Based on revenue, Uber is now more a delivery company than a mobility company.

This is data centricity extrapolated: a shift from carrying people to carrying objects while solving pretty much the same data-driven optimization problem. The article is from last year but the effect persists as of Q1 2022: Only 37% of the revenue is from carrying people.

Source

Google Imagen

Now that object detection is almost a solved problem, work on the next frontier, text-to-image generation, has begun to thrive. Google Research’s most recent work on generative models, Imagen, uses text embeddings from a large language model called T5 (similar to GPT-3 and OPT-175B) to encode text for image synthesis.

Interestingly, the study finds that increasing the size of the language model improves performance more than increasing the size of the image diffusion model. Imagen achieves exceptional similarity between real and synthetic images, with an FID score of 7.27 on the COCO dataset. Human raters confirm the performance of the model.

The paper is nicely written with a much-needed ethics discussion at the end, and full of colorful images. Apparently, Imagen does not perform as well when generating images that portray humans.

Synthetic data generation and image restoration are two common use cases of GANs. I will post a link to one such study on medical images in the comments. Arts and crafts is obvious. I can also think of use cases for fashion and potentially personalization of products in retail. What are some other business use cases?

Source

How does the brain learn mental models?

An interesting read and perspective on modeling learning in the hippocampus, and on potentially applying the model structure to the design and development of algorithms. The clone-structured cognitive graph (CSCG) uses Markov chains and dynamic Markov compression, so CSCGs form a probabilistic sequence model.

Source

NeuralProphet puts Facebook’s Prophet on steroids using neural networks

Models remain interpretable to the extent that the components of the original Prophet model are retained. The authors claim 55% to 92% improvements in accuracy for short- to medium-term forecasts, which is impressive if generalizable. Model training time increases 4-fold, but prediction time improves 14-fold. It is developed on PyTorch, so it can be parallelized and deployed on GPUs, potentially reducing training time. It has been ported to R, but via a Python environment.
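For the curious, here is a rough sketch of what the Python API looks like, as I understand it; the input follows Prophet’s ds/y column convention and the data below is made up:

```python
# A rough sketch of NeuralProphet usage (illustrative; the series is made up).
import numpy as np
import pandas as pd
from neuralprophet import NeuralProphet

# Prophet-style input: a "ds" timestamp column and a "y" outcome column
dates = pd.date_range("2021-01-01", periods=365, freq="D")
y = 10 + 0.02 * np.arange(365) + np.sin(np.arange(365) / 7) + np.random.randn(365) * 0.3
df = pd.DataFrame({"ds": dates, "y": y})

m = NeuralProphet()                  # defaults: trend + seasonality components
metrics = m.fit(df, freq="D")
future = m.make_future_dataframe(df, periods=30)
forecast = m.predict(future)
print(forecast.tail())
```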

Looks promising especially for “AI on the edge” type mobile applications.

Source

Open Pretrained Transformer

Meta AI’s release of Open Pretrained Transformer (OPT-175B), which is on par with OpenAI’s GPT-3 at 175 billion parameters/weights, emphasizes responsible compute and claims one-seventh the computational cost in terms of carbon footprint. Pretrained model weights are free to download (link in the comments). This is good news for open collaboration and better news for the environment.

Source

When reverse causation is more profitable

You may have heard of ESG (Environmental, Social, and Governance) investing. It’s also called “socially responsible investing” when ethics is added to the picture. Public companies are assigned an ESG score, which is a quantification of the social impact. What social impact though? You would probably expect ESG ratings to quantify the societal impact of (not on) a company, right? Well, you’ll be disappointed. “Socially responsible investing” is a misnomer when associated with the ESG ratings, at least those reported by MSCI, a leading provider of the ESG ratings globally.

MSCI basically quantifies the impact of environmental, social, and governance risks on a company’s operations (not the other way around!). In other words, if we rely on ESG ratings while making investment decisions, we may not be doing any social good. We are essentially ensuring that our investments are protected from the environmental, social, and other risks such as climate change. After all, why would we care about the carbon footprint of our investments on the environment as long as profits are good?

MSCI’s plot offers some takeaways on how to generate data and model it. Apparently, measuring reverse causation and packaging it to look like the cause and the effect are in the right places can be quite profitable. To be fair, MSCI is explicit about its data generation and modeling process residing on the dark side.

Source

On the proof-of-concept to production gap

A valuable insight on the proof-of-concept to production gap in computer vision that once again underlines the importance of context:

“It turns out,” Ng said, “that when we collect data from Stanford Hospital, then we train and test on data from the same hospital, indeed, we can publish papers showing [the algorithms] are comparable to human radiologists in spotting certain conditions.”

But, he said, “It turns out [that when] you take that same model, that same AI system, to an older hospital down the street, with an older machine, and the technician uses a slightly different imaging protocol, that data drifts to cause the performance of AI system to degrade significantly. In contrast, any human radiologist can walk down the street to the older hospital and do just fine.”

Source

99/1 is the new 80/20

An obvious but often neglected fact is that accuracy is overemphasized as a performance metric. In a two-class problem where 99% of the cases belong to class 0 (not a spam email), achieving an accuracy of 99% is as easy as classifying all emails as safe. Sensitivity, specificity, and other metrics exist for a reason.
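A quick, hypothetical sanity check makes the point:

```python
# Hypothetical spam example: 99% accuracy while catching zero spam.
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

y_true = np.array([0] * 990 + [1] * 10)   # 1% of emails are spam
y_pred = np.zeros_like(y_true)            # classify everything as "not spam"

print(accuracy_score(y_true, y_pred))     # 0.99 -- looks great
print(recall_score(y_true, y_pred))       # 0.0  -- catches no spam at all
```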

The story of Waymo, Google’s self-driving car, illustrates the value of solving the remaining 1% of a problem, where conventional machine learning gets stuck due to the limitations of training data. If that last 1% of error becomes a make-or-break point, one needs to get creative. On a long tail that extends to infinity, walking faster or running probably does not help as much as a leap of imagination.

I must note that it’s not fair to expect an autonomous car to be “error-free,” given that we do not expect human drivers to perform error-free on driver’s license exams and road tests. The two will just make different errors.

When to normalize / apply weights

To me, this is interesting not because of the lack of transparency in the methodology, but because of the potential reason for the rankings to be wrong.

I want to believe that this is a mistake, not fraud, but really? Applying the weights before normalizing the scores? And the Bloomberg Businessweek spokesperson says “the magazine’s methodology was vetted by multiple data scientists.”

I have created a quick scenario as a reminder to my former (and current) students (posted in the comments, as LinkedIn doesn’t allow it here). In the example, the scores are standardized across the five items (which are randomly generated and assigned weights). In the Businessweek rankings, standardization is supposed to be across institutions, so that the weights proportionately affect each institution’s score on the corresponding item. Nevertheless, the source of the error is the same: if the weights are applied before normalizing the data, the scores are adjusted by the weights disproportionately, and the ranking changes accordingly.
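A tiny numerical example of the same mechanism (made-up scores, not the Businessweek data): when each item is standardized across schools after weighting, the weights effectively cancel out, and the ranking shifts:

```python
# Made-up example: five schools scored on two items, item A weighted 80% and
# item B 20%. Standardizing per item (z-scores across schools) AFTER weighting
# cancels the weights, because a z-score is invariant to scaling a column.
import numpy as np
import pandas as pd

scores = pd.DataFrame(
    {"item_A": [70, 72, 74, 76, 78],      # small spread
     "item_B": [95, 60, 80, 40, 70]},     # large spread
    index=["S1", "S2", "S3", "S4", "S5"],
)
weights = np.array([0.8, 0.2])

zscore = lambda col: (col - col.mean()) / col.std()

# Correct order: normalize each item across schools, then apply the weights
right = (scores.apply(zscore) * weights).sum(axis=1)

# Wrong order: apply the weights first, then normalize -- the weights vanish
wrong = (scores * weights).apply(zscore).sum(axis=1)

print(pd.DataFrame({"right_rank": right.rank(ascending=False),
                    "wrong_rank": wrong.rank(ascending=False)}))
```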

Algorithmic fashioning

For years, Zara has been my go-to case to discuss data centricity in fashion retail. Zara is a staple example of how a focus on data and analytics combined with the right, complementary business processes can create wonders even in a market with high degrees of demand uncertainty due to the hedonic nature of consumption.

Shein seems to be emerging as a contender, moving further into data-driven (not only data-informed) fast fashion. Its operation is also called real-time fashion rather than fast fashion. Shein doesn’t own any physical stores (none at all) and ships all of its products directly from China.

Bloomberg reports that “Shein has developed proprietary technology that harvests customers’ search data from the app and shares it with suppliers, to help guide decisions about design, capacity and production. It generates recommendations for raw materials and where to buy them, and gives suppliers access to a deep database of designs for inspiration.”

Shein reduces the design-to-customer turnaround to 10 days, a record compared with the already-fast Zara’s two- to three-week lead time. It’s not a niche operation either, given reports of $10 billion in annual sales and a potential $30 billion valuation.

I’ve found the whole story interesting. It all sounds impressive but also dangerous. The article already mentions some of the “accidents” its algorithm-driven fashion caused along with sustainability concerns.

“But it would be naïve to predict that unpredictable events won’t happen in the future.”

“Zillow Quits Home-Flipping Business, Cites Inability to Forecast Prices,” WSJ reports.* I try to avoid passing along news stories, but it’s not every day I receive a predictive analytics story as breaking news.

I wonder whether the reason is really “an inability to forecast the prices” or “relying too much on an ability to forecast the prices” for a venture billed at “$20 billion a year” when it debuted.

Zillow announced plans for this data-driven venture in 2018 by citing consumers who “expect magic to happen with a simple push of a button.” In a statement yesterday, Zillow seems to have realized magic is not happening: “But it would be naïve to predict that unpredictable events won’t happen in the future.”

Maybe it is never a good idea to develop a whole business model that grossly underestimates the changes in error (both reducible and irreducible) due to potential bifurcations in market forces.

Source

If tech is everything, then it is nothing

What do #Facebook, #Tesla, #DoorDash, #Nvidia, and #GM* have in common? They are all “tech” companies.

Alex Webb of Bloomberg offers a linguistic explanation for why technology ceased to be meaningful:

“English lacked an equivalent to the French technique and German Technik. The English word “technique” hadn’t caught up with the innovations of the Industrial Revolution, and it still applied solely to the way in which an artist or artisan performed a skill.”

He contrasts technique as in “artistic technique” in English with Technik as in “Lufthansa Technik” in German and argues that “technology” emerged in the early 20th century for lack of a better alternative.

Whether the reason is linguistic, sheer overhype, or semantic satiation, we may be better off dropping the “tech company” reference at this point unless it is elaborated further. For the companies that are more tech than your average tech, a good alternative may be “deep tech.”

Data-driven paralysis

Data-driven decision making can lead to paralysis. Last week, the FDA and CDC committees couldn’t make a decision about the booster shots because (complete) data was not available. Well, making decisions in the absence of complete data is a process of imagination and deep thinking, one that puts hypothesis development at the center, and one in which humans continue to prevail over machines.

To avoid such paralysis, more focus can be put on developing and rethinking hypotheses and their likelihoods. In emergent problems, an in-depth discussion of hypotheses and likelihoods is probably more helpful than an obsession with accessing complete data. Otherwise, by defining complete data as a prerequisite, as it would be under data-driven decision making, we will continue to be paralyzed looking into the future.

If we turn to data-informed decision making, however, hypotheses would take more control (not gut feeling, but properly developed hypotheses*). We could then make decisions that are improved as more data becomes available, without being paralyzed in the present. Rather than seeking the truth, we would seek probable truths (as in Bayesian thinking).
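As a toy illustration of that posture (all numbers made up), a Beta-Binomial update lets a decision rest on the current posterior and be revised as partial data arrives:

```python
# Toy Beta-Binomial updating (made-up numbers): act on the current posterior
# and revise it as more data arrives, instead of waiting for "complete" data.
from scipy import stats

alpha, beta = 2, 2                          # weakly informative prior belief
batches = [(8, 12), (30, 45), (120, 160)]   # (successes, trials) arriving over time

for successes, trials in batches:
    alpha += successes
    beta += trials - successes
    posterior = stats.beta(alpha, beta)
    print(f"after {trials} new trials: mean={posterior.mean():.2f}, "
          f"95% interval=({posterior.ppf(0.025):.2f}, {posterior.ppf(0.975):.2f})")
```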

While we may be able to remain strictly data-driven for some problems and decisions, we should be comfortable proceeding informed (not driven) by data for others.

* This post made me think of a book I enjoyed reading last Fall: Defense of the Scientific Hypothesis: From Reproducibility Crisis to Big Data

To log or how to log

I avoid posting technical notes here. This is an exception because I have an agenda.

Log transformation is widely used in modeling data for several reasons: making data “behave,” calculating elasticities, etc.

When an outcome variable naturally has zeros, however, log transformation is tricky. Many data modelers (including seasoned researchers) instinctively add a positive constant to each value of the outcome variable. One popular idea is to add 1 to the variable, so that raw zeros remain zeros after the log transformation. Another is to add a very small constant, especially when the scale of the outcome variable is small.

Well, the bad news is that these are arbitrary choices, and the resulting estimates may be biased. To me, if an analysis is correlational (as most are), a small bias may not be a big concern. If it is causal, and, for example, an estimated elasticity will be used to take action (with an intention to change an outcome), that’s trouble waiting to happen. This is a problem of data centricity.
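A quick simulation (with a made-up data-generating process) shows how much the estimated slope can move with the arbitrary constant:

```python
# Quick simulation: the slope from regressing log(y + c) on x depends on the
# arbitrary constant c when y contains zeros. Made-up data-generating process.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)
y = rng.poisson(np.exp(0.5 + 1.0 * x))    # true slope of 1.0 on the log scale
print("share of zeros:", (y == 0).mean())

for c in (1.0, 0.1, 0.01):
    slope = np.polyfit(x, np.log(y + c), 1)[0]
    print(f"c = {c:>4}: estimated slope = {slope:.2f}")
# The estimate drifts as c changes -- the choice of constant is not innocent.
```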

What is a solution (other than deserting to Poisson etc.)? A recent study by Christophe Bellégo and his coauthors offers a solution called iOLS (iterated OLS). To avoid bias, the iOLS algorithm adds an observation-specific value to the outcome variable. Voila! I haven’t tested it yet but I like the idea. Read their nicely written paper here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3444996

My (not so hidden) agenda is regarding the implementation. The authors offer a Stata implementation (https://github.com/ldpape/iOLS). I would love to see it in R (or Python). Hence this is a call for action.

In defense of Amazon (Trends)

#WSJ continues to report on #Amazon’s shady practices. An earlier article said Amazon used sales data on third-party sellers to offer copycat, private-label products (like AmazonBasics). It was a coherent story, but it made hasty generalizations. Another piece showed how Amazon manipulates product search ads to favor its own products. Both articles (linked within) underlined a data access problem: Amazon has access to data on its rivals and exploits it for competitive advantage.

This latest article is not as coherent and a bit all over the place, but Amazon’s response is not helping either. Amazon says “Offering products inspired by the trends to which customers are responding is a common practice across the retail industry.” Amazon needs to nurture trust in its ecosystem but seems to be doing the opposite.

I don’t actually see any rampant issues except for access to product search data. Amazon is the dominant leader of the product search market (ahead of Google and others). As a sign of good faith in building trust, Amazon could make (aggregated, anonymous) search data available and offer “Amazon Trends,” like Google Trends. Needless to say, third-party sellers could be offered more in-depth access.