This is based on a recent Nature study. It's useful, but with a caveat that may make the findings and visuals less striking than they look:
“Nature searched for articles, reviews and conference papers in Scopus, with titles, abstracts, or keywords mentioning the terms ‘machine learning’, ‘neural net*’, ‘deep learning’, ‘random forest’, ‘support vector machine’, ‘artificial intelligence’, ‘dimensionality reduction’, ‘gaussian processes’, ‘naïve bayes’, ‘large language models’, ‘nlp*’, ‘chatbot’, ‘gaussian mixture models’, ‘ensemble methods’.”
So SVMs, naive Bayes, random forests, and ensemble methods are all counted as AI (not untrue, but…). Gaussian processes? Any paper with a GP regression counts. Dimensionality reduction? Papers that describe their PCA or LDA that way count too.
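To make the caveat concrete, here's a minimal sketch (in Python, with hypothetical paper metadata; mapping the trailing wildcards to `\w*` is my assumption) of the kind of keyword matching the quoted methodology describes: any paper whose title, abstract, or keywords literally mention one of the terms gets counted as AI, whether it uses a transformer or a decades-old classifier.

```python
import re

# Term list from the quoted methodology; interpreting the trailing
# wildcards ("net*", "nlp*") as \w* is an assumption.
AI_TERMS = [
    r"machine learning", r"neural net\w*", r"deep learning",
    r"random forest", r"support vector machine",
    r"artificial intelligence", r"dimensionality reduction",
    r"gaussian processes", r"na[iï]ve bayes",
    r"large language models", r"nlp\w*", r"chatbot",
    r"gaussian mixture models", r"ensemble methods",
]
PATTERN = re.compile(r"\b(?:" + "|".join(AI_TERMS) + r")", re.IGNORECASE)

def counts_as_ai(title: str, abstract: str, keywords: list[str]) -> bool:
    """True if any search term appears in the title, abstract, or keywords."""
    text = " ".join([title, abstract, *keywords])
    return PATTERN.search(text) is not None

# A hypothetical classical-statistics paper: no neural nets anywhere,
# but the literal phrase "dimensionality reduction" makes it count as AI.
print(counts_as_ai(
    title="Dimensionality reduction of spectral data via PCA",
    abstract="We apply principal component analysis before least-squares regression.",
    keywords=["PCA", "chemometrics"],
))  # -> True
```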
Unfortunately, this feeds the trend of using “AI” as an umbrella term.