🔧 Troubleshooting: Quick Fixes for Common Issues
Having trouble with Libre WebUI? Don't worry! Most issues have simple solutions. Let's get you back to chatting with AI quickly.
90% of issues are solved by checking these three things: Ollama running, models downloaded, and backend/frontend started.
🚨 Most Common Issue: "Can't Create New Chat"
This usually means one of three things is missing. Let's check them in order:
✅ Quick Fix: The One-Command Solution
If you have the start script, try this first:
cd /path/to/libre-webui   # your Libre WebUI project directory
./start.sh
This should start everything automatically! If it works, you're done! 🎉
🔍 Step-by-Step Diagnosis
If the quick fix didn't work, let's figure out what's wrong:
- 1️⃣ Ollama Running?
- 2️⃣ Models Downloaded?
- 3️⃣ Backend Running?
- 4️⃣ Frontend Running?
1️⃣ Ollama Running?
The Problem: Ollama is the AI engine. Without it, there's no AI to chat with.
Check if installed:
ollama --version
If you see "command not found":
- 📥 Install Ollama: Go to ollama.ai and download for your system
- 💻 Restart your terminal after installation
If you see a version number, start Ollama:
ollama serve
Keep this terminal open!
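To confirm Ollama is actually answering (not just installed), you can query its local API from another terminal; this is the same endpoint used in the Docker checks later in this guide:
# Should return a small JSON blob with Ollama's version if the server is up
curl http://localhost:11434/api/version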
2️⃣ Models Downloaded?
The Problem: Ollama is running but has no AI "brains" to use.
Check available models:
ollama list
If the list is empty or shows an error:
Download a recommended model:
# Best balance of quality and speed (recommended, ~5GB)
ollama pull llama3.1:8b
# Ultra-fast for slower computers (~2GB)
ollama pull llama3.2:3b
# For 16GB+ VRAM systems (~9GB)
ollama pull qwen2.5:14b
Wait for the download to finish (2-50GB depending on the model).
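Once the download finishes, a quick one-off prompt is an easy way to confirm the model loads and responds; swap in whichever model you pulled:
# Ask the model for a one-line reply, then return to the shell
ollama run llama3.2:3b "Say hello in one sentence."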
3️⃣ Backend Running?
The Problem: The backend connects your browser to Ollama.
Start the backend:
cd backend
npm install # Only needed the first time
npm run dev
You should see: Server running on port 3001 or similar.
Keep this terminal open!
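From another terminal, you can confirm the backend is reachable using the same health endpoint listed under "Check Everything Is Working" below:
# Should return a JSON health response rather than a connection error
curl http://localhost:3001/api/ollama/health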
4️⃣ Frontend Running?
The Problem: The frontend is the beautiful interface you see in your browser.
Start the frontend:
cd frontend
npm install # Only needed the first time
npm run dev
You should see: A message with a local URL like http://localhost:5173
Keep this terminal open!
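If you'd rather verify from the terminal than the browser, a quick status-code check against the dev server URL is enough:
# Should print 200 if the dev server is answering
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:5173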
🎯 Visual Troubleshooting
In Your Browser (http://localhost:5173):
✅ Good Signs:
- You see the Libre WebUI interface
- There's a model name shown in the header or sidebar
- The "New Chat" button is clickable
- Settings menu shows available models
❌ Warning Signs:
- Yellow banner saying "No models available"
- "New Chat" button is grayed out
- Error messages about connection
- Blank page or loading forever
Quick Browser Fixes:
- Hard refresh: Hold Shift and click refresh
- Bypass the cache: Press F12 → Network tab → check "Disable cache", then reload
- Check console: Press F12 → Console tab (look for red errors)
🛠️ Common Error Messages & Solutions
"Cannot connect to Ollama"
Solution: Start Ollama: ollama serve
"No models found"
Solution: Download a model: ollama pull llama3.1:8b
"Failed to fetch" or "Network Error"
Solution: Start the backend: cd backend && npm run dev
"This site can't be reached"
Solution: Start the frontend: cd frontend && npm run dev
"Port already in use"
Solution: Something else is using the port. Find and stop it:
# Check what's using port 3001 (backend)
lsof -i :3001
# Check what's using port 5173 (frontend)
lsof -i :5173
# Kill the process (replace XXXX with the PID number)
kill -9 XXXX
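If you prefer a one-liner (assuming lsof is installed), you can pass the PID straight to kill; adjust the port number as needed:
# Kill whatever is holding port 3001 in one step
kill -9 $(lsof -ti :3001)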
⚡ Performance Issues
AI Responses Are Very Slow
Solutions:
- Check if the model fits in VRAM: Run ollama ps to see memory usage (see the GPU check below this list)
- Use a smaller model: ollama pull llama3.2:3b (~2GB VRAM)
- Use Q4 quantization: Models ending in :latest use Q4 by default
- Close GPU-intensive apps (games, video editors)
- CPU-only inference: Expect 5-15 tokens/sec (this is normal without a GPU)
"Timeout of 30000ms exceeded" Errors
Problem: Large models on multiple GPUs need more time to load into memory.
Solutions:
- Quick Fix - Environment Variables:
# Backend (.env file or environment)
OLLAMA_TIMEOUT=300000                 # 5 minutes for regular operations
OLLAMA_LONG_OPERATION_TIMEOUT=900000  # 15 minutes for model loading
# Frontend (.env file or environment)
VITE_API_TIMEOUT=300000               # 5 minutes for API calls
- For Large Models (like CodeLlama 70B, Llama 70B+):
# Increase to 30 minutes for very large models
OLLAMA_LONG_OPERATION_TIMEOUT=1800000
VITE_API_TIMEOUT=1800000
- Restart the services after changing environment variables (see the example below)
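As a concrete example, here is one way to apply the longer timeouts from the project root; note that frontend/.env is an assumption here, since the frontend can also read VITE_API_TIMEOUT from the environment:
# Add the longer timeouts to the env files (create the files if they don't exist;
# frontend/.env is assumed, the value can also be set in the environment)
echo "OLLAMA_LONG_OPERATION_TIMEOUT=1800000" >> backend/.env
echo "VITE_API_TIMEOUT=1800000" >> frontend/.env
# Then restart each service in its own terminal:
#   Terminal 1: cd backend && npm run dev
#   Terminal 2: cd frontend && npm run dev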
Interface Is Laggy
Solutions:
- Hard refresh your browser (Shift + Refresh)
- Close other browser tabs
- Try a different browser (Chrome, Firefox, Safari)
Models Won't Download
Solutions:
- Check internet connection
- Free up disk space (models can be 1-32GB each; see the quick check below)
- Try a smaller model first:
ollama pull llama3.2:1b
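To check the disk-space point above before pulling a large model:
# Ollama stores models under ~/.ollama by default; check free space on that drive
df -h ~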
🚀 Advanced Troubleshooting
Multiple Terminal Management
You need 3 things running simultaneously:
Terminal 1 (Ollama):
ollama serve
# Keep this running
Terminal 2 (Backend):
cd backend
npm run dev
# Keep this running
Terminal 3 (Frontend):
cd frontend
npm run dev
# Keep this running
Check Everything Is Working
Run these commands to verify each part:
# Check Ollama
curl http://localhost:11434/api/tags
# Check Backend
curl http://localhost:3001/api/ollama/health
# Check Frontend
# Open http://localhost:5173 in your browser
Each should return data, not errors.
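If you want a single pass over all three, this minimal sketch loops over the same URLs and prints an HTTP status code for each (200 means the service is answering):
# Print an HTTP status code for each service
for url in \
  http://localhost:11434/api/tags \
  http://localhost:3001/api/ollama/health \
  http://localhost:5173; do
  echo "$url -> $(curl -s -o /dev/null -w '%{http_code}' "$url")"
done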
🆘 Still Stuck?
Before Asking for Help:
- ✅ Try the quick fix at the top of this guide
- ✅ Check all three services are running (Ollama, backend, frontend)
- ✅ Download at least one model (ollama pull llama3.2:3b)
- ✅ Restart everything and try again
When Reporting Issues:
Please include:
- Operating system (Windows, Mac, Linux)
- Error messages (exact text)
- Browser console errors (press F12 → Console)
- Terminal output from backend/frontend
Get Help:
- 🐛 Report bugs: GitHub Issues
- 💬 Ask questions: GitHub Discussions
- 📚 Read more: Check other guides in the docs folder
🎯 Prevention Tips
For Smooth Operation:
- Keep terminals open while using Libre WebUI
- Don't close Ollama - it needs to stay running
- Download models when you have good internet
- Monitor disk space - AI models are large files
- Restart everything occasionally to clear memory
System Requirements Reminder:
- Minimum: 8GB RAM, 10GB free disk space
- Recommended: 16GB RAM, 8GB VRAM GPU, 50GB+ disk space
- Power User: 32GB RAM, 16-24GB VRAM GPU, 100GB+ disk space
- Professional: 64GB+ RAM, 48GB+ VRAM, 200GB+ SSD
See the Hardware Requirements Guide for detailed GPU recommendations.
🎉 Most issues are solved by ensuring all three services are running!
Remember: Ollama (AI engine) + Backend (API) + Frontend (interface) = Working Libre WebUI
Still having trouble? The Quick Start Guide has step-by-step setup instructions.
🐳 Docker Issues
Container Won't Start
# Check container logs
docker-compose logs libre-webui
# Check if ports are in use
lsof -i :8080
lsof -i :11434
Can't Connect to Ollama in Docker
# For bundled Ollama, check both containers
docker-compose logs ollama
# For external Ollama, verify host connection
curl http://localhost:11434/api/version
# Make sure OLLAMA_BASE_URL is correct in docker-compose
# Bundled: http://ollama:11434
# External: http://host.docker.internal:11434 (Mac/Windows)
# External: http://172.17.0.1:11434 (Linux)
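For the bundled setup, you can also test the connection from inside the app container itself; this assumes the service is named libre-webui as in the logs command above and that curl is available in the image:
# Ask Ollama for its version from inside the app container
docker-compose exec libre-webui curl http://ollama:11434/api/version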
Data Not Persisting
# Check volumes exist
docker volume ls | grep libre
# Inspect volume
docker volume inspect libre_webui_data
GPU Not Working in Docker
# Verify NVIDIA Container Toolkit
nvidia-smi
docker run --rm --gpus all nvidia/cuda:12.0-base nvidia-smi
# Use GPU compose file
docker-compose -f docker-compose.gpu.yml up -d
📦 NPX Installation Issues
"Cannot find module" Errors
# Clear npm cache and reinstall
npm cache clean --force
npx libre-webui@latest
Data Location
When using npx libre-webui, data is stored in:
- Linux/macOS: ~/.libre-webui/
- Windows: %USERPROFILE%\.libre-webui\
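On Linux/macOS you can confirm the directory exists and see what has been stored so far:
# List the data directory created by the npx install
ls -la ~/.libre-webui/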
Port Already in Use
# Default port is 8080, use different port
PORT=9000 npx libre-webui
First-Time Setup Not Showing Encryption Key
This was fixed in v0.2.7. Update to latest:
npx libre-webui@latest
🔌 Plugin Issues
Can't Connect to External AI Services
The Problem: You have API keys but external services (OpenAI, Anthropic, etc.) aren't working.
Common Solutions:
- Check API Key Format:
# Set API keys in backend/.env
OPENAI_API_KEY=your_openai_key_here
ANTHROPIC_API_KEY=your_anthropic_key_here
GROQ_API_KEY=your_groq_key_here
GEMINI_API_KEY=your_gemini_key_here
MISTRAL_API_KEY=your_mistral_key_here
GITHUB_API_KEY=your_github_token_here
- Verify API Keys Are Valid:
# Test OpenAI
curl -H "Authorization: Bearer $OPENAI_API_KEY" \
  https://api.openai.com/v1/models
# Test Anthropic (its models endpoint also needs a version header)
curl -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  https://api.anthropic.com/v1/models
- Update Plugin Models:
# Update all providers
./scripts/update-all-models.sh
# Or update specific providers
./scripts/update-openai-models.sh
./scripts/update-anthropic-models.sh
./scripts/update-groq-models.sh
./scripts/update-gemini-models.sh
./scripts/update-mistral-models.sh
./scripts/update-github-models.sh
Plugin Update Scripts Failing
The Problem: Model update scripts are reporting errors.
Common Solutions:
- Check API Keys:
# Verify environment variables are set
echo $OPENAI_API_KEY
echo $ANTHROPIC_API_KEY
echo $GROQ_API_KEY
echo $GEMINI_API_KEY
echo $MISTRAL_API_KEY
echo $GITHUB_API_KEY
- Check Script Permissions:
# Make scripts executable
chmod +x scripts/update-*.sh
- Run Individual Scripts with Debug:
# Run with verbose output
bash -x ./scripts/update-openai-models.sh
Models Not Showing in UI
The Problem: Plugin models aren't appearing in the model selector.
Solutions:
- Restart Backend:
# Stop backend (Ctrl+C) and restart
cd backend
npm run dev
- Check Plugin Status:
- Go to Settings → Plugins
- Verify plugins are enabled
- Check for any error messages
- Manual Plugin Refresh:
# Update all plugins
./scripts/update-all-models.sh
# Restart backend to reload models
cd backend && npm run dev