Which LLMs can you run locally?

This project helps you find out which models your machine can handle.

If you’re a data scientist experimenting with local models, getting a quick sense of what your machine can run beats wasting time setting up models that are too large. The auto-detection isn’t perfect, and the list is missing some hardware combinations, but the convenience still makes it useful.
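The core check behind a tool like this can be approximated with a back-of-the-envelope calculation: a model’s weights need roughly parameter count × bytes per parameter of memory, plus headroom for the KV cache and runtime. A minimal sketch of that idea (the 20% overhead factor is my assumption, not taken from the tool):

```python
def fits_in_memory(params_billion: float, bits_per_param: int,
                   available_gb: float, overhead: float = 1.2) -> bool:
    """Rough check: weight size = params * bytes/param, padded by a runtime overhead factor."""
    weight_gb = params_billion * (bits_per_param / 8)  # e.g. 7B at 4-bit ~ 3.5 GB
    return weight_gb * overhead <= available_gb

# A 7B model quantized to 4 bits needs ~3.5 GB of weights, so it fits in 8 GB.
print(fits_in_memory(7, 4, 8))    # → True
# A 70B model at 16-bit needs ~140 GB of weights and clearly does not.
print(fits_in_memory(70, 16, 8))  # → False
```

Real estimators also account for context length, GPU vs. CPU split, and quantization format, which is why a dedicated tool beats mental arithmetic.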

This is also useful if you’re running a workshop or demo. In my courses, such as AI Applications, we experiment with LLMs in Docker containers, but performance varies greatly by hardware.

This new web tool is a nice, fast alternative for selecting models. For more accurate results you can use the CLI tool llmfit, but that requires an install or a Docker pull and run.