
[EN-C036-090] Self-Hosted AI Healthcheck and Auto-Repair

Category

Advanced Ops (20%)

Overview

Monitor the operational status of an Ollama or other local LLM server. When an anomaly is detected, run an automatic restart procedure and, if that fails, temporarily switch OpenClaw's Gateway settings to a cloud-based model.

Implementation

  1. Periodically probe the server by running curl localhost:11434/api/tags with the exec skill.
  2. If there is no response, or latency is severe, attempt a restart with commands such as openclaw gateway stop and openclaw gateway start.
  3. If the restart does not restore service, use gateway config.patch to switch the default model to Gemini Flash so the service stays available.
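The steps above can be sketched as a small check-restart-fallback loop. This is a minimal illustration, not OpenClaw's actual implementation: the probe uses the endpoint from step 1, while the restart and fallback actions are passed in as callables because the exact openclaw invocations depend on your setup. The timeout value and all function names here are assumptions.

```python
import subprocess

# Health endpoint from step 1 of the recipe.
OLLAMA_TAGS_URL = "http://localhost:11434/api/tags"
CURL_TIMEOUT_S = 5  # assumed latency budget; tune for your hardware


def probe() -> bool:
    """Return True if the Ollama API answers within the timeout (step 1)."""
    try:
        result = subprocess.run(
            ["curl", "-sf", "--max-time", str(CURL_TIMEOUT_S), OLLAMA_TAGS_URL],
            capture_output=True,
            timeout=CURL_TIMEOUT_S + 1,
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False


def heal(healthy, restart, fallback) -> str:
    """Decision logic for steps 2-3: restart once, then fall back to cloud.

    healthy, restart, and fallback are callables so the policy can be
    tested without touching a real gateway. In practice restart would run
    something like `openclaw gateway stop` / `openclaw gateway start`, and
    fallback would apply `gateway config.patch` to select Gemini Flash.
    """
    if healthy():
        return "ok"
    restart()
    if healthy():
        return "restarted"
    fallback()
    return "fallback"
```

Keeping the policy (`heal`) separate from the probes and shell commands makes the restart/fallback ordering easy to unit-test before wiring it to a scheduler.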

Benefits

  • Minimizes service interruptions caused by AI server downtime overnight or while away from home.
  • Keeps the butler service available at all times through the cloud fallback mechanism.