[EN-C036-090] Self-Hosted AI Healthcheck and Auto-Repair
Category
Advanced Ops (20%)
Overview
Monitor the operational status of an Ollama or other local LLM server. On detecting an anomaly, run an automatic restart, and if repair fails, temporarily switch OpenClaw's Gateway settings to a cloud-based model.
Implementation
- Periodically run `curl localhost:11434/api/tags` using the `exec` skill.
- If there is no response or severe latency, attempt commands such as `openclaw gateway stop` / `openclaw gateway start`.
- If repair is difficult, use `gateway config.patch` to switch the default model to Gemini Flash to ensure continuity.
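The steps above can be sketched as a small healthcheck-and-repair routine. This is a minimal sketch, not a definitive implementation: the `heal` decision logic takes injected `healthy`/`run_cmd` callables so it can be scheduled or tested independently, and the exact `config.patch` argument syntax is an assumption to be adapted to your setup.

```python
import urllib.request
import urllib.error

# Ollama's model-list endpoint, used here as a cheap liveness probe.
OLLAMA_TAGS_URL = "http://localhost:11434/api/tags"

def ollama_healthy(url: str = OLLAMA_TAGS_URL, timeout: float = 5.0) -> bool:
    """Return True if the Ollama server answers within `timeout` seconds.

    A timeout doubles as the "severe latency" check from the steps above.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def heal(healthy, run_cmd) -> str:
    """Run the check/restart/fallback sequence; returns the action taken.

    `healthy` is a zero-arg callable (e.g. ollama_healthy) and `run_cmd`
    executes a shell command string (e.g. via subprocess.run); both are
    injected so the logic is testable without a live server.
    """
    if healthy():
        return "ok"
    # Step 2: attempt a restart via the OpenClaw gateway commands.
    run_cmd("openclaw gateway stop")
    run_cmd("openclaw gateway start")
    if healthy():
        return "restarted"
    # Step 3: switch the default model to a cloud fallback for continuity.
    # (Hypothetical config.patch invocation; adjust to your configuration.)
    run_cmd("openclaw gateway config.patch default_model=gemini-flash")
    return "fallback"
```

In a real deployment this routine would be invoked periodically (cron, a systemd timer, or the `exec` skill) with `run_cmd` bound to `subprocess.run`.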
Benefits
- Minimizes service interruptions caused by AI server downtime overnight or while away.
- Keeps the butler service available at all times through the cloud fallback mechanism.