menu-bar app · macOS

every LLM,
on your menu bar.

discover, download, and manage LLMs from huggingface, lm studio, ollama, mlx, and llama.cpp — without ever leaving your menu bar.

requires macOS 26+ · apple silicon · ~4 MB

managing LLMs is messy. you've got models scattered across LM Studio, the huggingface cache, maybe a few random folders — and finding, downloading, or removing one means juggling a browser, two terminals, and tools that don't talk to each other.

ModelHub brings it all into one place.

—— plays well with ——

ollama · lm studio · llama.cpp · mlx · transformers · huggingface · vllm · candle

downloads land in the standard huggingface cache, exactly where every other tool already looks. nothing locked in. nothing to migrate.

for the developers

zero lock-in. cache compatible.

every download replicates the official huggingface cache layout — blobs, snapshots, refs, the lot. a model fetched through ModelHub is byte-identical to one fetched with huggingface-cli. drop into any pipeline. uninstall anytime. no migration.

~/.cache/huggingface/hub/
└─ models--meta-llama--Llama-3.2-3B/
   ├─ blobs/
   │  └─ a3f9c… [1.8 GB]
   ├─ snapshots/
   │  └─ 8b0c…/ → ../blobs/a3f9c…
   └─ refs/main
reads & writes the standard layout.
ollama, mlx-lm, transformers, llama.cpp.
uninstall ModelHub — keep your models.
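the blob names above aren't arbitrary: large files in the hub cache are typically named by their content hash. under the assumption that a blob is sha256-named (true for LFS-tracked weights; small files use a different etag), verifying one is just hashing it and comparing — a sketch, with a helper name of our own choosing:

```python
import hashlib
from pathlib import Path

def blob_matches_name(blob: Path) -> bool:
    """Check a cache blob's contents against its filename, assuming
    the blob is named by the sha256 hex digest of its bytes."""
    digest = hashlib.sha256(blob.read_bytes()).hexdigest()
    return digest == blob.name
```

the same check works on a blob fetched by any tool, which is the whole point: one cache, one naming scheme, no migration.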

—— ready when you are ——

one menu bar.
every model.

   Download ModelHub.dmg

v1.2.0 · macOS 26+ · apple silicon

01
open .dmg
drag ModelHub into Applications.
02
launch it
look for the dot in your menu bar.
03
that's it
browse, download, and manage your LLMs.