This document shows the complete alignment of models across all components of OllamaMax.
- qwen2.5-coder:7b - Python, JS, Bash expert
- deepseek-coder:6.7b - Best for coding help
- deepseek-r1 - DeepSeek R1 reasoning model
- glm-4.6 - GLM 4.6 model
- deepseek-v3.1 - DeepSeek v3.1

- llama3.1:8b - Like ChatGPT (DEFAULT MODEL) ✓
- qwen3:7b - Qwen3 general purpose
- gemma2:9b - Fast & smart
- mistral:7b - Lightweight & private

- llava:7b - Read images, PDFs
- moondream:1.8b - Tiny vision model
- qwen3-vl - Qwen3 Vision-Language

- phi3:mini - Super fast on CPU
- tinyllama:1.1b - Runs anywhere

- nomic-embed-text - Local RAG / search
✅ Updated - Contains all 15 models with proper tags and installation status indicators
✅ Updated - AVAILABLE_MODELS array contains all 15 models
✅ Updated - Default model set to llama3.1:8b
✅ Updated - Menu shows all 15 models (options A-O)
✅ Updated - Case statement handles all 15 models
✅ Updated - Proper quantization applied based on hardware detection
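As an illustration of the AVAILABLE_MODELS array mentioned above, here is a hedged sketch in shell. The array name comes from the status notes; the DEFAULT_MODEL variable name is an assumption for illustration, not necessarily what the script uses.

```shell
#!/usr/bin/env bash
# Sketch of the shell script's model registry. The 15 entries mirror the
# model list above; DEFAULT_MODEL is a hypothetical variable name.
AVAILABLE_MODELS=(
  qwen2.5-coder:7b deepseek-coder:6.7b deepseek-r1 glm-4.6 deepseek-v3.1
  llama3.1:8b qwen3:7b gemma2:9b mistral:7b
  llava:7b moondream:1.8b qwen3-vl
  phi3:mini tinyllama:1.1b
  nomic-embed-text
)
DEFAULT_MODEL="llama3.1:8b"

echo "models: ${#AVAILABLE_MODELS[@]}, default: $DEFAULT_MODEL"
```

Running the snippet prints `models: 15, default: llama3.1:8b`, a quick sanity check that the array really holds all 15 models.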
- All models use the exact Ollama naming convention with tags
- Examples: llama3.1:8b, qwen2.5-coder:7b, deepseek-coder:6.7b
- Some models don't have size tags: deepseek-r1, glm-4.6, qwen3-vl
- Quantization is appended automatically by the shell script based on hardware
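A minimal sketch of how the script might append a quantization suffix based on detected hardware. The select_quant and tag_with_quant helpers, the RAM thresholds, and the resulting tag format are all illustrative assumptions, not the script's actual logic.

```shell
#!/usr/bin/env bash
# Hypothetical helper: choose a quantization tag from available RAM in GB.
# Thresholds are illustrative assumptions.
select_quant() {
  local ram_gb="$1"
  if   [ "$ram_gb" -ge 32 ]; then echo "q8_0"
  elif [ "$ram_gb" -ge 16 ]; then echo "q5_K_M"
  else                            echo "q4_K_M"
  fi
}

# Append the chosen suffix to a base tag,
# e.g. tag_with_quant llama3.1:8b 8 -> llama3.1:8b-q4_K_M
tag_with_quant() {
  echo "$1-$(select_quant "$2")"
}
```

The point of the sketch is the shape of the logic: the user never types a quantization tag; the script derives it from hardware and appends it before pulling.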
The HTML dropdown shows installation status:
- ✓ = Installed and ready
- (Not Installed) = Needs to be pulled from Ollama
The Go backend will automatically offer to pull models when selected if they're not installed.
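The pull-if-missing behavior can be approximated in shell like this. The helper names are hypothetical and the real check lives in the Go backend; this sketch only illustrates the idea of comparing the requested model against `ollama list` output.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the "pull if not installed" check.
# model_installed NAME LIST: succeeds if NAME appears as a line in LIST.
model_installed() {
  printf '%s\n' "$2" | grep -Fxq "$1"
}

# ensure_model NAME: pull the model only when `ollama list` lacks it.
ensure_model() {
  local installed
  installed=$(ollama list | awk 'NR > 1 { print $1 }')  # skip header row
  model_installed "$1" "$installed" || ollama pull "$1"
}
```

Splitting the string comparison out of ensure_model keeps the part that needs a running Ollama daemon separate from the logic that can be tested in isolation.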
llama3.1:8b is set as the default model across all components because it is:
- Most likely to be pre-installed
- Best general-purpose balance
- Similar to ChatGPT in capabilities