Chameleon
Run any LLM on demand — zero idle VRAM.
Artificial Intelligence
Developer Tools
GitHub
Open Source
Chameleon is a stateless AI runtime that becomes any LLM on demand. Instead of keeping models loaded, it routes each request to the best model, loads it just-in-time, executes, and fully unloads — resulting in zero idle VRAM usage. Run multiple models efficiently with one runtime, without wasting memory or restarting systems.
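The route → just-in-time load → execute → fully unload cycle described above can be sketched in a few lines. This is a minimal illustration of the idea, not Chameleon's actual API; all names (`MODEL_REGISTRY`, `route`, `run_stateless`, `DummyModel`) are hypothetical, and the dummy class stands in for a real model whose weights would occupy VRAM.

```python
import gc

class DummyModel:
    """Stand-in for a real LLM; a real runtime would load weights into VRAM here."""
    def __init__(self, name):
        self.name = name

    def generate(self, prompt):
        return f"[{self.name}] response to: {prompt}"

# Hypothetical registry mapping a task type to a model loader.
# Loaders are callables, so nothing is loaded until a request arrives.
MODEL_REGISTRY = {
    "code": lambda: DummyModel("code-model"),
    "chat": lambda: DummyModel("chat-model"),
}

def route(prompt):
    """Pick the best model for the request (trivial keyword heuristic here)."""
    return "code" if "code" in prompt or "def " in prompt else "chat"

def run_stateless(prompt):
    """Load just-in-time, execute, then fully unload so no VRAM stays idle."""
    model = MODEL_REGISTRY[route(prompt)]()  # just-in-time load
    try:
        return model.generate(prompt)
    finally:
        del model        # drop the only reference -> fully unload
        gc.collect()     # reclaim memory immediately, not at GC's leisure

print(run_stateless("write code for quicksort"))
```

Because each request loads and unloads its own model, two consecutive requests can use entirely different models through the same runtime, with no memory held between them.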