
September 18th, 2025

Jan v0.6.10: Auto Optimize, custom backends, and vision model imports

Highlights 🎉

  • Auto Optimize: One-click hardware-aware performance tuning for llama.cpp.
  • Custom Backend Support: Import and manage your preferred llama.cpp versions.
  • Import Vision Models: Seamlessly import and use vision-capable models.

🚀 Auto Optimize (Experimental)

Intelligent performance tuning — Jan can now apply the best llama.cpp settings for your specific hardware:

  • Hardware analysis: Automatically detects your CPU, GPU, and memory configuration
  • One-click optimization: Applies optimal parameters with a single click in model settings

Auto Optimize is currently experimental and will be refined based on user feedback. It analyzes your system specs and applies proven configurations for optimal llama.cpp performance.
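Jan doesn't publish the exact heuristics Auto Optimize uses, but hardware-aware tuning of this kind typically maps detected resources to llama.cpp parameters such as GPU layer offload and thread count. A minimal illustrative sketch (the function name and heuristics below are hypothetical, not Jan's actual code):

```python
import os

def suggest_llamacpp_settings(vram_gb: float, model_size_gb: float, n_layers: int) -> dict:
    """Hypothetical hardware-aware parameter picker for llama.cpp."""
    if vram_gb <= 0:
        # No usable GPU memory detected: run fully on CPU.
        gpu_layers = 0
    else:
        # Offload as many layers as fit in VRAM; the rest stay on the CPU.
        fit_ratio = min(1.0, vram_gb / model_size_gb)
        gpu_layers = int(n_layers * fit_ratio)
    # Leave some cores free for the OS and other processes.
    threads = max(1, (os.cpu_count() or 4) // 2)
    return {"n_gpu_layers": gpu_layers, "threads": threads}
```

With these assumptions, a model that fits entirely in VRAM gets every layer offloaded, while a machine with no GPU keeps everything on the CPU.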

👁️ Vision Model Imports

Vision Model Import Demo

Enhanced multimodal support — Import and use vision models seamlessly:

  • Direct vision model import: Import vision-capable models from any source
  • Improved compatibility: Better handling of multimodal model formats

🔧 Custom Backend Support

Import your preferred llama.cpp version — Full control over your AI backend:

  • Custom llama.cpp versions: Import and use any llama.cpp build you prefer
  • Version flexibility: Use bleeding-edge builds or stable releases
  • Backup CDN: New CDN fallback when GitHub downloads fail
  • User confirmation: Prompts before auto-updating llama.cpp

Update Jan or download the latest version.

For the complete list of changes, see the GitHub release notes.