Fresh start - excluded large ROM JSON files
skills/ollama-memory-embeddings/README.md (new file, 113 lines)
# ollama-memory-embeddings

Installable OpenClaw skill to use **Ollama as the embeddings server** for
memory search (OpenAI-compatible `/v1/embeddings`).

> **Embeddings only** — chat/completions routing is not affected.

This skill is available on [GitHub](https://github.com/vidarbrekke/OpenClaw-Ollama-Memory-Embeddings) under the MIT license.

## Features

- Interactive embedding model selection:
  - `embeddinggemma` (default — closest to OpenClaw built-in)
  - `nomic-embed-text` (strong quality, efficient)
  - `all-minilm` (smallest/fastest)
  - `mxbai-embed-large` (highest quality, larger)
- Optional import of a local embedding GGUF into Ollama (`ollama create`)
  - Detects embeddinggemma, nomic-embed, all-minilm, and mxbai-embed GGUFs
- Model name normalization (handles the `:latest` tag automatically)
- Surgical OpenClaw config update (`agents.defaults.memorySearch`)
- Post-write config sanity check
- Smart gateway restart (detects the available restart method)
- Two-step verification: model existence + endpoint response
- Non-interactive mode for automation (GGUF import is opt-in)
- Optional memory reindex during install (`--reindex-memory auto|yes|no`)
- Idempotent drift enforcement (`enforce.sh`)
- Optional auto-heal watchdog (`watchdog.sh`, launchd on macOS)
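For reference, a successful install writes a config block shaped like the following sketch (shown with the default `embeddinggemma` model and the local Ollama endpoint; other keys in the config are left untouched):

```json
{
  "agents": {
    "defaults": {
      "memorySearch": {
        "provider": "openai",
        "model": "embeddinggemma:latest",
        "remote": {
          "baseUrl": "http://127.0.0.1:11434/v1/",
          "apiKey": "ollama"
        }
      }
    }
  }
}
```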

## Install

```bash
bash ~/.openclaw/skills/ollama-memory-embeddings/install.sh
```

Bulletproof install (enforce + watchdog):

```bash
bash ~/.openclaw/skills/ollama-memory-embeddings/install.sh \
  --non-interactive \
  --model embeddinggemma \
  --reindex-memory auto \
  --install-watchdog \
  --watchdog-interval 60
```

From the repo checkout:

```bash
bash skills/ollama-memory-embeddings/install.sh
```

## Non-interactive example

```bash
bash ~/.openclaw/skills/ollama-memory-embeddings/install.sh \
  --non-interactive \
  --model embeddinggemma \
  --reindex-memory auto \
  --import-local-gguf yes   # explicit opt-in; "auto" = "no" in non-interactive
```

## Verify

```bash
~/.openclaw/skills/ollama-memory-embeddings/verify.sh
~/.openclaw/skills/ollama-memory-embeddings/verify.sh --verbose   # dump raw response on failure
```

## Drift guard and self-heal

One-time check/heal:

```bash
~/.openclaw/skills/ollama-memory-embeddings/watchdog.sh --once --model embeddinggemma
```

Manual enforce (idempotent):

```bash
~/.openclaw/skills/ollama-memory-embeddings/enforce.sh --model embeddinggemma
```

Install launchd watchdog (macOS):

```bash
~/.openclaw/skills/ollama-memory-embeddings/watchdog.sh \
  --install-launchd \
  --model embeddinggemma \
  --interval-sec 60
```

Remove launchd watchdog:

```bash
~/.openclaw/skills/ollama-memory-embeddings/watchdog.sh --uninstall-launchd
```

## Important: re-embed when changing model

If you switch embedding models, existing vectors may be incompatible with the new
model's vector space. Rebuild/re-embed your memory index after a model change to
avoid retrieval-quality regressions.

Installer behavior:

- `--reindex-memory auto` (default): reindex only when the embedding fingerprint changed (`provider`, `model`, `baseUrl`, apiKey presence).
- `--reindex-memory yes`: always run `openclaw memory index --force --verbose`.
- `--reindex-memory no`: never reindex automatically.
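The `auto` decision can be pictured as comparing a fingerprint string before and after the config write. The sketch below is illustrative (the `fingerprint` helper is hypothetical, not part of the skill's scripts); note that only the *presence* of the apiKey is fingerprinted, never its value:

```shell
# Hypothetical sketch of the auto-reindex decision.
fingerprint() {
  local provider="$1" model="$2" base_url="$3" api_key="$4"
  local key_presence="absent"
  # apiKey contributes only presence, not its value
  if [ -n "$api_key" ]; then key_presence="present"; fi
  printf '%s|%s|%s|%s' "$provider" "$model" "$base_url" "$key_presence"
}

old="$(fingerprint openai embeddinggemma:latest http://127.0.0.1:11434/v1/ ollama)"
new="$(fingerprint openai nomic-embed-text:latest http://127.0.0.1:11434/v1/ ollama)"

if [ "$old" != "$new" ]; then
  # model changed, so the installer would reindex here
  echo "fingerprint changed: reindex"
fi
```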

Notes:

- `enforce.sh --check-only` treats apiKey drift as a **missing apiKey** (empty), not as strict equality to `"ollama"`.
- Backups are created only when config changes are actually written.
- Legacy config fallback is supported: if the canonical `agents.defaults.memorySearch` is missing,
  scripts read known legacy paths and mirror updates for compatibility.
skills/ollama-memory-embeddings/SKILL.md (new file, 173 lines)
---
slug: ollama-memory-embeddings
display_name: Ollama Memory Embeddings
displayName: Ollama Memory Embeddings
name: ollama-memory-embeddings
description: >
  Configure OpenClaw memory search to use Ollama as the embeddings server
  (OpenAI-compatible /v1/embeddings) instead of the built-in node-llama-cpp
  local GGUF loading. Includes interactive model selection and optional import
  of an existing local embedding GGUF into Ollama.
---

# Ollama Memory Embeddings

This skill configures OpenClaw memory search to use Ollama as the **embeddings
server** via its OpenAI-compatible `/v1/embeddings` endpoint.

> **Embeddings only.** This skill does not affect chat/completions routing —
> it only changes how memory-search embedding vectors are generated.

## What it does

- Installs this skill under `~/.openclaw/skills/ollama-memory-embeddings`
- Verifies Ollama is installed and reachable
- Lets the user choose an embedding model:
  - `embeddinggemma` (default — closest to OpenClaw built-in)
  - `nomic-embed-text` (strong quality, efficient)
  - `all-minilm` (smallest/fastest)
  - `mxbai-embed-large` (highest quality, larger)
- Optionally imports an existing local embedding GGUF into Ollama via
  `ollama create` (currently detects embeddinggemma, nomic-embed, all-minilm,
  and mxbai-embed GGUFs in known cache directories)
- Normalizes model names (handles the `:latest` tag automatically)
- Updates `agents.defaults.memorySearch` in the OpenClaw config (surgical — only
  touches keys this skill owns):
  - `provider = "openai"`
  - `model = <selected model>:latest`
  - `remote.baseUrl = "http://127.0.0.1:11434/v1/"`
  - `remote.apiKey = "ollama"` (required by the client, ignored by Ollama)
- Performs a post-write config sanity check (reads back and validates JSON)
- Optionally restarts the OpenClaw gateway (with detection of available
  restart methods: `openclaw gateway restart`, systemd, launchd)
- Optional memory reindex during install (`openclaw memory index --force --verbose`)
- Runs a two-step verification:
  1. Checks that the model exists in `ollama list`
  2. Calls the embeddings endpoint and validates the response
- Adds an idempotent drift-enforcement command (`enforce.sh`)
- Adds an optional config-drift auto-healing watchdog (`watchdog.sh`)
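The model-name normalization mentioned above is the simplest of these steps: append `:latest` only when the name carries no explicit tag. A standalone sketch mirroring the scripts' behavior:

```shell
normalize_model() {
  local m="$1"
  # Append ":latest" only if the name has no explicit tag.
  if [[ "$m" != *:* ]]; then
    echo "${m}:latest"
  else
    echo "$m"
  fi
}

normalize_model embeddinggemma          # prints: embeddinggemma:latest
normalize_model nomic-embed-text:v1.5   # prints: nomic-embed-text:v1.5
```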

## Install

```bash
bash ~/.openclaw/skills/ollama-memory-embeddings/install.sh
```

From this repository:

```bash
bash skills/ollama-memory-embeddings/install.sh
```

## Non-interactive usage

```bash
bash ~/.openclaw/skills/ollama-memory-embeddings/install.sh \
  --non-interactive \
  --model embeddinggemma \
  --reindex-memory auto
```

Bulletproof setup (install watchdog):

```bash
bash ~/.openclaw/skills/ollama-memory-embeddings/install.sh \
  --non-interactive \
  --model embeddinggemma \
  --reindex-memory auto \
  --install-watchdog \
  --watchdog-interval 60
```

> **Note:** In non-interactive mode, `--import-local-gguf auto` is treated as
> `no` (safe default). Use `--import-local-gguf yes` to explicitly opt in.

Options:

- `--model <id>`: one of `embeddinggemma`, `nomic-embed-text`, `all-minilm`, `mxbai-embed-large`
- `--import-local-gguf <auto|yes|no>`: default `auto` (interactive: prompts; non-interactive: `no`)
- `--import-model-name <name>`: default `embeddinggemma-local`
- `--skip-restart`: do not restart the gateway
- `--openclaw-config <path>`: config file path override
- `--install-watchdog`: install the launchd drift auto-heal watchdog (macOS)
- `--watchdog-interval <sec>`: watchdog interval (default 60)
- `--reindex-memory <auto|yes|no>`: memory rebuild mode (default `auto`)

## Verify

```bash
~/.openclaw/skills/ollama-memory-embeddings/verify.sh
```

Use `--verbose` to dump the raw API response on failure:

```bash
~/.openclaw/skills/ollama-memory-embeddings/verify.sh --verbose
```

## Drift enforcement and auto-heal

Manually enforce the desired state (safe to run repeatedly):

```bash
~/.openclaw/skills/ollama-memory-embeddings/enforce.sh \
  --model embeddinggemma \
  --openclaw-config ~/.openclaw/openclaw.json
```

Check for drift only:

```bash
~/.openclaw/skills/ollama-memory-embeddings/enforce.sh \
  --check-only \
  --model embeddinggemma
```
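Conceptually, the drift check reduces to four comparisons. A standalone sketch of the predicate (the `matches_desired_state` helper is hypothetical, with the desired values inlined; the real script reads them from the config via node):

```shell
# Hypothetical sketch: returns success (0) when the config is NOT drifted.
matches_desired_state() {
  local provider="$1" model="$2" base_url="$3" api_key="$4"
  [ "$provider" = "openai" ] &&
    [ "$model" = "embeddinggemma:latest" ] &&
    [ "$base_url" = "http://127.0.0.1:11434/v1/" ] &&
    [ -n "$api_key" ]   # presence only; any non-empty apiKey is accepted
}

if matches_desired_state openai embeddinggemma:latest http://127.0.0.1:11434/v1/ my-custom-key; then
  echo "no drift"
fi
```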

Run the watchdog once (check + heal):

```bash
~/.openclaw/skills/ollama-memory-embeddings/watchdog.sh \
  --once \
  --model embeddinggemma
```

Install the watchdog via launchd (macOS):

```bash
~/.openclaw/skills/ollama-memory-embeddings/watchdog.sh \
  --install-launchd \
  --model embeddinggemma \
  --interval-sec 60
```

## GGUF detection scope

The installer searches for embedding GGUFs matching these patterns in known
cache directories (`~/.node-llama-cpp/models`, `~/.cache/node-llama-cpp/models`,
`~/.cache/openclaw/models`):

- `*embeddinggemma*.gguf`
- `*nomic-embed*.gguf`
- `*all-minilm*.gguf`
- `*mxbai-embed*.gguf`

Other embedding GGUFs are not auto-detected. You can always import manually:

```bash
ollama create my-model -f /path/to/Modelfile
```
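For the embedding GGUFs this skill targets, the Modelfile can be as small as a single `FROM` line pointing at the weights file, which mirrors what the installer generates when importing (the path below is illustrative):

```
FROM "/absolute/path/to/embeddinggemma.gguf"
```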

## Notes

- This skill does not modify OpenClaw package code; it only updates user config.
- A timestamped backup of the config is written before changes.
- If no local GGUF exists, the install proceeds by pulling the selected model from Ollama.
- Model names are normalized with the `:latest` tag for consistent interaction with Ollama.
- If the embedding model changes, rebuild/re-embed existing memory vectors to avoid
  retrieval mismatch across incompatible vector spaces.
- With `--reindex-memory auto`, the installer reindexes only when the effective
  embedding fingerprint changed (`provider`, `model`, `baseUrl`, apiKey presence).
- Drift checks require a non-empty apiKey but do not require the literal value `"ollama"`.
- Config backups are created only when a write is needed.
- Legacy schema fallback is supported: if `agents.defaults.memorySearch` is absent,
  the enforcer reads known legacy paths and mirrors writes to preserve compatibility.
skills/ollama-memory-embeddings/_meta.json (new file, 6 lines)
{
  "ownerId": "kn77fr145swszdg45szek5hzw580e233",
  "slug": "ollama-memory-embeddings",
  "version": "1.0.1",
  "publishedAt": 1770747527476
}
skills/ollama-memory-embeddings/enforce.sh (new file, 372 lines)
#!/usr/bin/env bash
# Enforce OpenClaw memorySearch to use Ollama embeddings settings.
# Idempotent: safe to run repeatedly.
set -euo pipefail

MODEL=""
BASE_URL="http://127.0.0.1:11434/v1/"
CONFIG_PATH="${OPENCLAW_CONFIG_PATH:-${HOME}/.openclaw/openclaw.json}"
CHECK_ONLY=0
QUIET=0
RESTART_ON_CHANGE=0
API_KEY_VALUE="ollama"

usage() {
  cat <<'EOF'
Usage:
  enforce.sh [options]

Options:
  --model <id>              embedding model id (required unless already in config)
  --base-url <url>          Ollama OpenAI-compatible base URL (default: http://127.0.0.1:11434/v1/)
  --openclaw-config <path>  OpenClaw config path (default: ~/.openclaw/openclaw.json)
  --api-key-value <value>   apiKey to set if missing (default: ollama)
  --check-only              exit non-zero if drift is detected, do not modify config
  --restart-on-change       restart gateway if config was changed
  --quiet                   suppress non-error output
  --help                    show help

Exit codes:
  0   success (no drift or drift healed)
  10  drift detected in --check-only mode
  1   error
EOF
}

while [ $# -gt 0 ]; do
  case "$1" in
    --model) MODEL="$2"; shift 2 ;;
    --base-url) BASE_URL="$2"; shift 2 ;;
    --openclaw-config) CONFIG_PATH="$2"; shift 2 ;;
    --api-key-value) API_KEY_VALUE="$2"; shift 2 ;;
    --check-only) CHECK_ONLY=1; shift ;;
    --restart-on-change) RESTART_ON_CHANGE=1; shift ;;
    --quiet) QUIET=1; shift ;;
    --help|-h) usage; exit 0 ;;
    *) echo "Unknown option: $1"; usage; exit 1 ;;
  esac
done

log() {
  [ "$QUIET" -eq 1 ] || echo "$@"
}

normalize_model() {
  local m="$1"
  if [[ "$m" != *:* ]]; then
    echo "${m}:latest"
  else
    echo "$m"
  fi
}

normalize_base_url() {
  local u="${1%/}"
  if [[ "$u" != */v1 ]]; then
    u="${u}/v1"
  fi
  echo "${u}/"
}
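# Examples (illustrative): all three inputs normalize to the same URL.
#   normalize_base_url "http://127.0.0.1:11434"     -> http://127.0.0.1:11434/v1/
#   normalize_base_url "http://127.0.0.1:11434/v1"  -> http://127.0.0.1:11434/v1/
#   normalize_base_url "http://127.0.0.1:11434/v1/" -> http://127.0.0.1:11434/v1/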

restart_gateway() {
  if ! command -v openclaw >/dev/null 2>&1; then
    log "NOTE: openclaw CLI not found; skip restart."
    return 0
  fi
  if openclaw gateway restart 2>/dev/null; then
    log "Gateway restarted."
    return 0
  fi
  log "WARNING: openclaw gateway restart failed; restart manually."
  return 1
}

require_cmd() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "ERROR: '$1' not found in PATH."
    exit 1
  }
}

require_cmd node

mkdir -p "$(dirname "$CONFIG_PATH")"
[ -f "$CONFIG_PATH" ] || echo "{}" > "$CONFIG_PATH"

BASE_URL_NORM="$(normalize_base_url "$BASE_URL")"

# If model omitted, try current config model; otherwise enforce requires explicit model.
if [ -z "$MODEL" ]; then
  MODEL="$(node -e '
    const fs = require("fs");
    const p = process.argv[1];
    const CANDIDATES = [
      { label: "agents.defaults.memorySearch", path: ["agents", "defaults", "memorySearch"] },
      { label: "memorySearch", path: ["memorySearch"] },
      { label: "agents.memorySearch", path: ["agents", "memorySearch"] },
      { label: "agents.defaults.memory.search", path: ["agents", "defaults", "memory", "search"] },
      { label: "memory.search", path: ["memory", "search"] },
    ];
    function getAt(obj, path) {
      let cur = obj;
      for (const k of path) {
        if (!cur || typeof cur !== "object" || !(k in cur)) return undefined;
        cur = cur[k];
      }
      return cur;
    }
    function resolveActive(cfg) {
      const canonical = CANDIDATES[0];
      const canonicalVal = getAt(cfg, canonical.path);
      if (canonicalVal && typeof canonicalVal === "object" && !Array.isArray(canonicalVal)) return canonical;
      for (const c of CANDIDATES.slice(1)) {
        const v = getAt(cfg, c.path);
        if (v && typeof v === "object" && !Array.isArray(v)) return c;
      }
      return canonical;
    }
    try {
      const cfg = JSON.parse(fs.readFileSync(p, "utf8"));
      const active = resolveActive(cfg);
      const ms = getAt(cfg, active.path) || {};
      const m = ms.model || "";
      process.stdout.write(m);
    } catch (_) {}
  ' "$CONFIG_PATH")"
fi

if [ -z "$MODEL" ]; then
  echo "ERROR: no model provided and no existing memorySearch.model in config."
  exit 1
fi
MODEL_NORM="$(normalize_model "$MODEL")"
if [ -z "$API_KEY_VALUE" ]; then
  echo "ERROR: --api-key-value must be non-empty."
  exit 1
fi

export CONFIG_PATH MODEL_NORM BASE_URL_NORM API_KEY_VALUE
if [ "$CHECK_ONLY" -eq 1 ]; then
  set +e
  node <<'EOF'
const fs = require("fs");
const p = process.env.CONFIG_PATH;
const model = process.env.MODEL_NORM;
const base = process.env.BASE_URL_NORM;
const CANDIDATES = [
  { label: "agents.defaults.memorySearch", path: ["agents", "defaults", "memorySearch"] },
  { label: "memorySearch", path: ["memorySearch"] },
  { label: "agents.memorySearch", path: ["agents", "memorySearch"] },
  { label: "agents.defaults.memory.search", path: ["agents", "defaults", "memory", "search"] },
  { label: "memory.search", path: ["memory", "search"] },
];
function getAt(obj, path) {
  let cur = obj;
  for (const k of path) {
    if (!cur || typeof cur !== "object" || !(k in cur)) return undefined;
    cur = cur[k];
  }
  return cur;
}
function resolveActive(cfg) {
  const canonical = CANDIDATES[0];
  const canonicalVal = getAt(cfg, canonical.path);
  if (canonicalVal && typeof canonicalVal === "object" && !Array.isArray(canonicalVal)) return canonical;
  for (const c of CANDIDATES.slice(1)) {
    const v = getAt(cfg, c.path);
    if (v && typeof v === "object" && !Array.isArray(v)) return c;
  }
  return canonical;
}
let cfg = {};
try { cfg = JSON.parse(fs.readFileSync(p, "utf8")); } catch { process.exit(10); }
const active = resolveActive(cfg);
const ms = getAt(cfg, active.path) || {};
const apiKey = ms?.remote?.apiKey || "";
const drift =
  ms.provider !== "openai" ||
  (ms.model || "") !== model ||
  (ms?.remote?.baseUrl || "") !== base ||
  apiKey === "";
process.exit(drift ? 10 : 0);
EOF
  status=$?
  set -e
  if [ "$status" -eq 0 ]; then
    log "No drift detected."
    exit 0
  elif [ "$status" -eq 10 ]; then
    log "Drift detected."
    exit 10
  else
    echo "ERROR: drift check failed."
    exit 1
  fi
fi

set +e
PLAN_OUT="$(node <<'EOF'
const fs = require("fs");
const path = process.env.CONFIG_PATH;
const model = process.env.MODEL_NORM;
const base = process.env.BASE_URL_NORM;
const desiredApiKey = process.env.API_KEY_VALUE;
const CANDIDATES = [
  { label: "agents.defaults.memorySearch", path: ["agents", "defaults", "memorySearch"] },
  { label: "memorySearch", path: ["memorySearch"] },
  { label: "agents.memorySearch", path: ["agents", "memorySearch"] },
  { label: "agents.defaults.memory.search", path: ["agents", "defaults", "memory", "search"] },
  { label: "memory.search", path: ["memory", "search"] },
];
function getAt(obj, path) {
  let cur = obj;
  for (const k of path) {
    if (!cur || typeof cur !== "object" || !(k in cur)) return undefined;
    cur = cur[k];
  }
  return cur;
}
function ensureAt(obj, path) {
  let cur = obj;
  for (const k of path) {
    if (!cur[k] || typeof cur[k] !== "object" || Array.isArray(cur[k])) cur[k] = {};
    cur = cur[k];
  }
  return cur;
}
function resolveActive(cfg) {
  const canonical = CANDIDATES[0];
  const canonicalVal = getAt(cfg, canonical.path);
  if (canonicalVal && typeof canonicalVal === "object" && !Array.isArray(canonicalVal)) return canonical;
  for (const c of CANDIDATES.slice(1)) {
    const v = getAt(cfg, c.path);
    if (v && typeof v === "object" && !Array.isArray(v)) return c;
  }
  return canonical;
}
let cfg = {};
try { cfg = JSON.parse(fs.readFileSync(path, "utf8")); } catch (_) { cfg = {}; }
const before = JSON.stringify(cfg);
const canonical = CANDIDATES[0];
const active = resolveActive(cfg);
const targets = [canonical];
if (active.label !== canonical.label) targets.push(active);
for (const t of targets) {
  const ms = ensureAt(cfg, t.path);
  ms.provider = "openai";
  ms.model = model;
  ms.remote = ms.remote || {};
  ms.remote.baseUrl = base;
  // Preserve existing non-empty apiKey to avoid false drift with custom conventions.
  if (!ms.remote.apiKey) ms.remote.apiKey = desiredApiKey;
}
const afterObj = getAt(cfg, canonical.path) || {};
const changed = before !== JSON.stringify(cfg);
console.log(changed ? "changed" : "unchanged");
console.log(afterObj.provider || "");
console.log(afterObj.model || "");
console.log((afterObj.remote && afterObj.remote.baseUrl) || "");
console.log((afterObj.remote && afterObj.remote.apiKey) ? "(set)" : "(missing)");
console.log(active.label);
console.log(active.label !== canonical.label ? "yes" : "no");
EOF
)"
status=$?
set -e

if [ "$status" -ne 0 ]; then
  echo "ERROR: failed to plan memorySearch enforcement."
  exit 1
fi

CHANGED="$(printf "%s\n" "$PLAN_OUT" | sed -n '1p')"
PROVIDER_NOW="$(printf "%s\n" "$PLAN_OUT" | sed -n '2p')"
MODEL_NOW="$(printf "%s\n" "$PLAN_OUT" | sed -n '3p')"
BASE_NOW="$(printf "%s\n" "$PLAN_OUT" | sed -n '4p')"
APIKEY_NOW="$(printf "%s\n" "$PLAN_OUT" | sed -n '5p')"
ACTIVE_PATH="$(printf "%s\n" "$PLAN_OUT" | sed -n '6p')"
MIRRORING_LEGACY="$(printf "%s\n" "$PLAN_OUT" | sed -n '7p')"

if [ "$CHANGED" = "changed" ]; then
  BACKUP_PATH="${CONFIG_PATH}.bak.$(date -u +%Y-%m-%dT%H-%M-%SZ)"
  cp "$CONFIG_PATH" "$BACKUP_PATH"
  node <<'EOF'
const fs = require("fs");
const path = process.env.CONFIG_PATH;
const model = process.env.MODEL_NORM;
const base = process.env.BASE_URL_NORM;
const desiredApiKey = process.env.API_KEY_VALUE;
const CANDIDATES = [
  { label: "agents.defaults.memorySearch", path: ["agents", "defaults", "memorySearch"] },
  { label: "memorySearch", path: ["memorySearch"] },
  { label: "agents.memorySearch", path: ["agents", "memorySearch"] },
  { label: "agents.defaults.memory.search", path: ["agents", "defaults", "memory", "search"] },
  { label: "memory.search", path: ["memory", "search"] },
];
function getAt(obj, path) {
  let cur = obj;
  for (const k of path) {
    if (!cur || typeof cur !== "object" || !(k in cur)) return undefined;
    cur = cur[k];
  }
  return cur;
}
function ensureAt(obj, path) {
  let cur = obj;
  for (const k of path) {
    if (!cur[k] || typeof cur[k] !== "object" || Array.isArray(cur[k])) cur[k] = {};
    cur = cur[k];
  }
  return cur;
}
function resolveActive(cfg) {
  const canonical = CANDIDATES[0];
  const canonicalVal = getAt(cfg, canonical.path);
  if (canonicalVal && typeof canonicalVal === "object" && !Array.isArray(canonicalVal)) return canonical;
  for (const c of CANDIDATES.slice(1)) {
    const v = getAt(cfg, c.path);
    if (v && typeof v === "object" && !Array.isArray(v)) return c;
  }
  return canonical;
}
let cfg = {};
try { cfg = JSON.parse(fs.readFileSync(path, "utf8")); } catch (_) { cfg = {}; }
const canonical = CANDIDATES[0];
const active = resolveActive(cfg);
const targets = [canonical];
if (active.label !== canonical.label) targets.push(active);
for (const t of targets) {
  const ms = ensureAt(cfg, t.path);
  ms.provider = "openai";
  ms.model = model;
  ms.remote = ms.remote || {};
  ms.remote.baseUrl = base;
  if (!ms.remote.apiKey) ms.remote.apiKey = desiredApiKey;
}
fs.writeFileSync(path, JSON.stringify(cfg, null, 2));
EOF
fi

log "Config: ${CONFIG_PATH}"
if [ "$CHANGED" = "changed" ]; then
  log "Backup: ${BACKUP_PATH}"
fi
log "provider=${PROVIDER_NOW}"
log "model=${MODEL_NOW}"
log "baseUrl=${BASE_NOW}"
log "apiKey=${APIKEY_NOW}"
if [ -n "$ACTIVE_PATH" ] && [ "$ACTIVE_PATH" != "agents.defaults.memorySearch" ]; then
  log "legacyPathDetected=${ACTIVE_PATH}"
fi
if [ "$MIRRORING_LEGACY" = "yes" ]; then
  log "legacyMirrored=yes"
fi

if [ "$CHANGED" = "changed" ]; then
  log "Drift healed: memorySearch settings updated."
  if [ "$RESTART_ON_CHANGE" -eq 1 ]; then
    restart_gateway || true
  fi
else
  log "No changes required."
fi
skills/ollama-memory-embeddings/install.sh (new file, 520 lines)
|
||||
#!/usr/bin/env bash
|
||||
# Install and configure OpenClaw memory embeddings via Ollama.
|
||||
# Can run from repo path or from ~/.openclaw/skills/ollama-memory-embeddings.
|
||||
set -euo pipefail
|
||||
|
||||
SKILL_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
OPENCLAW_DIR="${HOME}/.openclaw"
|
||||
SKILLS_DIR="${OPENCLAW_DIR}/skills/ollama-memory-embeddings"
|
||||
|
||||
MODEL="${EMBEDDING_MODEL:-embeddinggemma}"
|
||||
IMPORT_LOCAL_GGUF="${IMPORT_LOCAL_GGUF:-auto}" # auto|yes|no
|
||||
IMPORT_MODEL_NAME="${IMPORT_MODEL_NAME:-embeddinggemma-local}"
|
||||
NON_INTERACTIVE=0
|
||||
SKIP_RESTART=0
|
||||
INSTALL_WATCHDOG=0
|
||||
WATCHDOG_INTERVAL=60
|
||||
REINDEX_MEMORY="auto" # auto|yes|no
|
||||
CONFIG_PATH="${OPENCLAW_CONFIG_PATH:-${OPENCLAW_DIR}/openclaw.json}"
|
||||
|
||||
usage() {
|
||||
cat <<'EOF'
|
||||
Usage:
|
||||
install.sh [options]
|
||||
|
||||
Configures OpenClaw memory search to use Ollama as the embeddings server
|
||||
(OpenAI-compatible /v1/embeddings). This is embeddings only — chat/completions
|
||||
routing is not affected.
|
||||
|
||||
Options:
|
||||
--model <id> embeddinggemma | nomic-embed-text | all-minilm | mxbai-embed-large
|
||||
--import-local-gguf <mode> auto | yes | no (default: auto)
|
||||
In non-interactive mode, auto is treated as "no"
|
||||
unless explicitly set to "yes".
|
||||
--import-model-name <name> model name to create in Ollama (default: embeddinggemma-local)
|
||||
--openclaw-config <path> OpenClaw config path (default: ~/.openclaw/openclaw.json)
|
||||
--non-interactive do not prompt; use supplied/default values
|
||||
--skip-restart do not restart OpenClaw gateway
|
||||
--install-watchdog install drift auto-heal watchdog via launchd (macOS)
|
||||
--watchdog-interval <sec> watchdog check interval in seconds (default: 60)
|
||||
--reindex-memory <mode> auto | yes | no (default: auto)
|
||||
auto: reindex only if embedding fingerprint changed
|
||||
--help show help
|
||||
EOF
|
||||
}
|
||||
|
||||
while [ $# -gt 0 ]; do
|
||||
case "$1" in
|
||||
--model) MODEL="$2"; shift 2 ;;
|
||||
--import-local-gguf) IMPORT_LOCAL_GGUF="$2"; shift 2 ;;
|
||||
--import-model-name) IMPORT_MODEL_NAME="$2"; shift 2 ;;
|
||||
--openclaw-config) CONFIG_PATH="$2"; shift 2 ;;
|
||||
--non-interactive) NON_INTERACTIVE=1; shift ;;
|
||||
--skip-restart) SKIP_RESTART=1; shift ;;
|
||||
--install-watchdog) INSTALL_WATCHDOG=1; shift ;;
|
||||
--watchdog-interval) WATCHDOG_INTERVAL="$2"; shift 2 ;;
|
||||
--reindex-memory) REINDEX_MEMORY="$2"; shift 2 ;;
|
||||
--help|-h) usage; exit 0 ;;
|
||||
*) echo "Unknown option: $1"; usage; exit 1 ;;
|
||||
esac
|
||||
done
|
||||
|
||||
# ── Helpers ──────────────────────────────────────────────────────────────────
|
||||
|
||||
require_cmd() {
|
||||
command -v "$1" >/dev/null 2>&1 || {
|
||||
echo "ERROR: '$1' not found in PATH."
|
||||
exit 1
|
||||
}
|
||||
}
|
||||
|
||||
validate_model() {
|
||||
case "$1" in
|
||||
embeddinggemma|nomic-embed-text|all-minilm|mxbai-embed-large) return 0 ;;
|
||||
*) echo "ERROR: invalid model '$1'"; exit 1 ;;
|
||||
esac
|
||||
}
|
||||
|
||||
validate_import_mode() {
|
||||
case "$1" in
|
||||
auto|yes|no) return 0 ;;
|
||||
*) echo "ERROR: invalid import mode '$1' (expected auto|yes|no)"; exit 1 ;;
|
||||
esac
|
||||
}
|
||||
|
||||
validate_reindex_mode() {
|
||||
case "$1" in
|
||||
auto|yes|no) return 0 ;;
|
||||
*) echo "ERROR: invalid reindex mode '$1' (expected auto|yes|no)"; exit 1 ;;
|
||||
esac
|
||||
}
|
||||
|
||||
# Normalize model name: add :latest if no tag present.
|
||||
# Ollama list prints "model:tag"; the embeddings API also expects the tagged form.
|
||||
normalize_model() {
|
||||
local m="$1"
|
||||
if [[ "$m" != *:* ]]; then
|
||||
echo "${m}:latest"
|
||||
else
|
||||
echo "$m"
|
||||
fi
|
||||
}
|
||||
|
||||
ollama_running() {
|
||||
curl -fsS "http://127.0.0.1:11434/api/tags" >/dev/null 2>&1
|
||||
}
|
||||
|
||||
# Search for any embedding GGUF (embeddinggemma, nomic-embed, all-minilm, mxbai-embed).
|
||||
# Currently scoped to known node-llama-cpp / OpenClaw cache directories.
|
||||
find_local_embedding_gguf() {
|
||||
local dirs=(
|
||||
"$HOME/.node-llama-cpp/models"
|
||||
"$HOME/.cache/node-llama-cpp/models"
|
||||
"$HOME/.cache/openclaw/models"
|
||||
)
|
||||
local d
|
||||
for d in "${dirs[@]}"; do
|
||||
[ -d "$d" ] || continue
|
||||
while IFS= read -r -d '' file; do
|
||||
echo "$file"
|
||||
return 0
|
||||
done < <(find "$d" -type f \( \
|
||||
-name "*embeddinggemma*.gguf" -o \
|
||||
-name "*nomic-embed*.gguf" -o \
|
||||
-name "*all-minilm*.gguf" -o \
|
||||
-name "*mxbai-embed*.gguf" \
|
||||
\) -print0 2>/dev/null)
|
||||
done
|
||||
return 1
|
||||
}
|
||||
|
||||
gguf_matches_selected_model() {
|
||||
local gguf="$1"
|
||||
local model="$2"
|
||||
case "$model" in
|
||||
embeddinggemma) [[ "$gguf" == *embeddinggemma* ]] ;;
|
||||
nomic-embed-text) [[ "$gguf" == *nomic-embed* ]] ;;
|
||||
all-minilm) [[ "$gguf" == *all-minilm* ]] ;;
|
||||
mxbai-embed-large) [[ "$gguf" == *mxbai-embed* ]] ;;
|
||||
*) return 1 ;;
|
||||
esac
|
||||
}
|
||||
|
||||
guess_model_from_gguf() {
|
||||
local gguf="$1"
|
||||
if [[ "$gguf" == *embeddinggemma* ]]; then
|
||||
echo "embeddinggemma"
|
||||
elif [[ "$gguf" == *nomic-embed* ]]; then
|
||||
echo "nomic-embed-text"
|
||||
elif [[ "$gguf" == *all-minilm* ]]; then
|
||||
echo "all-minilm"
|
||||
elif [[ "$gguf" == *mxbai-embed* ]]; then
|
||||
echo "mxbai-embed-large"
|
||||
else
|
||||
echo "unknown"
|
||||
fi
|
||||
}
|
||||
|
||||
prompt_model_if_needed() {
|
||||
if [ "$NON_INTERACTIVE" -eq 1 ]; then
|
||||
return 0
|
||||
fi
|
||||
echo "Choose embedding model for Ollama:"
|
||||
echo " 1) embeddinggemma (default — closest to OpenClaw built-in)"
|
||||
echo " 2) nomic-embed-text (strong quality, efficient)"
|
||||
echo " 3) all-minilm (smallest/fastest)"
|
||||
echo " 4) mxbai-embed-large (highest quality, larger)"
|
||||
printf "Selection [1-4, default 1]: "
|
||||
read -r pick
|
||||
case "${pick:-1}" in
|
||||
1) MODEL="embeddinggemma" ;;
|
||||
2) MODEL="nomic-embed-text" ;;
|
||||
3) MODEL="all-minilm" ;;
|
||||
4) MODEL="mxbai-embed-large" ;;
|
||||
*) echo "Invalid selection."; exit 1 ;;
|
||||
esac
|
||||
}
|
||||
|
||||
# Check if model exists in Ollama (handles :latest normalization).
model_exists_in_ollama() {
  local model_tagged
  model_tagged="$(normalize_model "$1")"
  # Also check the untagged form: 'ollama list' may show either.
  ollama list 2>/dev/null | awk 'NR>1{print $1}' | grep -qE "^(${1}|${model_tagged})$" 2>/dev/null
}

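# Tag-normalization examples this check depends on (illustrative; the
# normalize_model helper is defined earlier in this script):
#   normalize_model "embeddinggemma"      -> "embeddinggemma:latest"
#   normalize_model "nomic-embed-text:v1" -> "nomic-embed-text:v1"  (already tagged)
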
import_gguf_to_ollama() {
  local gguf="$1"
  local name="$2"
  local tmp
  tmp="$(mktemp)"
  echo "FROM \"$gguf\"" > "$tmp"
  set +e
  ollama create "$name" -f "$tmp"
  local status=$?
  set -e
  rm -f "$tmp"
  return "$status"
}

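# Manual equivalent of the import above, if you prefer to run it by hand
# (path and model name are illustrative):
#   printf 'FROM "/path/to/embeddinggemma.Q8_0.gguf"\n' > Modelfile
#   ollama create embeddinggemma-local -f Modelfile
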
# Detect an available gateway restart method.
restart_gateway() {
  # Try 'openclaw gateway restart' first.
  if openclaw gateway restart 2>/dev/null; then
    return 0
  fi

  echo ""
  echo "WARNING: 'openclaw gateway restart' did not succeed."
  echo "Please restart the gateway manually using one of:"
  echo ""
  # macOS launchd
  if [ "$(uname)" = "Darwin" ] && launchctl list 2>/dev/null | grep -q openclaw 2>/dev/null; then
    local uid
    uid="$(id -u)"
    echo "  macOS (launchd): launchctl kickstart -k gui/${uid}/bot.molt.gateway"
  fi
  # Linux systemd
  if command -v systemctl >/dev/null 2>&1 && systemctl --user is-enabled openclaw-gateway >/dev/null 2>&1; then
    echo "  Linux (systemd): systemctl --user restart openclaw-gateway"
  fi
  echo "  Manual:          stop and re-run 'openclaw gateway'"
  echo ""
  return 1
}

# ── Main ─────────────────────────────────────────────────────────────────────

echo "Installing ollama-memory-embeddings..."
echo ""
echo "This configures OpenClaw memory search to use Ollama as the embeddings"
echo "server (OpenAI-compatible /v1/embeddings). Chat routing is not affected."
echo ""

require_cmd node
require_cmd curl
require_cmd ollama

# The openclaw CLI is optional (needed only for the gateway restart step).
if ! command -v openclaw >/dev/null 2>&1; then
  echo "NOTE: 'openclaw' CLI not found. Gateway restart will be skipped."
  SKIP_RESTART=1
fi

validate_model "$MODEL"
validate_import_mode "$IMPORT_LOCAL_GGUF"
validate_reindex_mode "$REINDEX_MEMORY"

if ! ollama_running; then
  echo "ERROR: Ollama is not reachable at http://127.0.0.1:11434"
  echo "Start Ollama first, then re-run the installer."
  exit 1
fi

prompt_model_if_needed
validate_model "$MODEL"

# ── GGUF detection and optional import ───────────────────────────────────────

LOCAL_GGUF=""
LOCAL_GGUF_MATCHES_MODEL=0
if LOCAL_GGUF="$(find_local_embedding_gguf)"; then
  echo "Detected local embedding GGUF:"
  echo "  $LOCAL_GGUF"
  if gguf_matches_selected_model "$LOCAL_GGUF" "$MODEL"; then
    LOCAL_GGUF_MATCHES_MODEL=1
  else
    DETECTED_GGUF_MODEL="$(guess_model_from_gguf "$LOCAL_GGUF")"
    echo "Local GGUF does not match selected model '${MODEL}' (detected: ${DETECTED_GGUF_MODEL}); skipping GGUF import prompt."
  fi
fi

MODEL_TO_USE="$MODEL"
WILL_IMPORT="no"

if [ -n "$LOCAL_GGUF" ] && [ "$LOCAL_GGUF_MATCHES_MODEL" -eq 1 ]; then
  case "$IMPORT_LOCAL_GGUF" in
    yes) WILL_IMPORT="yes" ;;
    no)  WILL_IMPORT="no" ;;
    auto)
      if [ "$NON_INTERACTIVE" -eq 1 ]; then
        # In non-interactive mode, auto = no (safe default; use --import-local-gguf yes to opt in).
        WILL_IMPORT="no"
        echo "Non-interactive mode: skipping GGUF import (use --import-local-gguf yes to enable)."
      else
        printf "Import local GGUF into Ollama as '%s'? [Y/n]: " "$IMPORT_MODEL_NAME"
        read -r yn
        case "${yn:-Y}" in
          Y|y|yes|YES) WILL_IMPORT="yes" ;;
          *) WILL_IMPORT="no" ;;
        esac
      fi
      ;;
  esac
elif [ -n "$LOCAL_GGUF" ] && [ "$IMPORT_LOCAL_GGUF" = "yes" ]; then
  echo "WARNING: --import-local-gguf yes ignored because the detected GGUF does not match selected model '${MODEL}'."
fi

if [ "$WILL_IMPORT" = "yes" ]; then
  echo "Importing local GGUF into Ollama as: ${IMPORT_MODEL_NAME}"
  if import_gguf_to_ollama "$LOCAL_GGUF" "$IMPORT_MODEL_NAME"; then
    MODEL_TO_USE="$IMPORT_MODEL_NAME"
    echo "Import succeeded."
  else
    echo "WARNING: import failed. Falling back to pulling '${MODEL}'."
  fi
fi

# ── Ensure model is available in Ollama ──────────────────────────────────────

if ! model_exists_in_ollama "$MODEL_TO_USE"; then
  echo "Pulling Ollama model: ${MODEL_TO_USE}"
  ollama pull "$MODEL_TO_USE"
fi

# Normalize for config and API calls
MODEL_TO_USE_CANON="$(normalize_model "$MODEL_TO_USE")"
echo "Using model: ${MODEL_TO_USE_CANON}"

# ── Install skill files ──────────────────────────────────────────────────────

echo ""
echo "1. Skill files -> ${SKILLS_DIR}/"
mkdir -p "$SKILLS_DIR"
for f in SKILL.md README.md install.sh verify.sh enforce.sh watchdog.sh; do
  if [ -f "${SKILL_DIR}/${f}" ]; then
    # Avoid copying a file onto itself when running from the installed skill path.
    if [ "${SKILL_DIR}/${f}" != "${SKILLS_DIR}/${f}" ]; then
      cp "${SKILL_DIR}/${f}" "${SKILLS_DIR}/"
    fi
  fi
done
chmod +x "${SKILLS_DIR}/install.sh" "${SKILLS_DIR}/verify.sh" "${SKILLS_DIR}/enforce.sh" "${SKILLS_DIR}/watchdog.sh" 2>/dev/null || true

# ── Config backup ────────────────────────────────────────────────────────────

mkdir -p "$(dirname "$CONFIG_PATH")"
if [ -f "$CONFIG_PATH" ]; then
  BACKUP_PATH="${CONFIG_PATH}.bak.$(date -u +%Y-%m-%dT%H-%M-%SZ)"
  cp "$CONFIG_PATH" "$BACKUP_PATH"
  echo "2. Config backup -> ${BACKUP_PATH}"
else
  echo "{}" > "$CONFIG_PATH"
  echo "2. Config created -> ${CONFIG_PATH}"
fi

# Capture the pre-change embedding fingerprint to decide later whether a reindex is needed.
PRE_MS="$(node -e '
const fs=require("fs");
const p=process.argv[1];
let cfg={};
try { cfg=JSON.parse(fs.readFileSync(p,"utf8")); } catch (_) {}
const ms=cfg?.agents?.defaults?.memorySearch || {};
const fp={
  provider: ms.provider || "",
  model: ms.model || "",
  baseUrl: ms?.remote?.baseUrl || "",
  apiKeySet: !!(ms?.remote?.apiKey || ""),
};
process.stdout.write(JSON.stringify(fp));
' "$CONFIG_PATH")"

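# The fingerprint is a deterministic JSON string, e.g. (illustrative values):
#   {"provider":"openai","model":"embeddinggemma:latest","baseUrl":"http://127.0.0.1:11434/v1/","apiKeySet":true}
# Embeddings produced by different models or endpoints are not comparable,
# so any change in this fingerprint later drives the reindex decision.
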
# ── Enforce config (single source of truth) ──────────────────────────────────

"${SKILLS_DIR}/enforce.sh" \
  --model "${MODEL_TO_USE_CANON}" \
  --openclaw-config "${CONFIG_PATH}" \
  --base-url "http://127.0.0.1:11434/v1/" >/dev/null

# ── Post-write sanity check ──────────────────────────────────────────────────

echo "3. Config updated -> ${CONFIG_PATH}"
echo ""
echo "   Verifying config write..."
export CONFIG_PATH
SANITY="$(node -e '
const fs = require("fs");
const p = process.env.CONFIG_PATH;
try {
  const cfg = JSON.parse(fs.readFileSync(p, "utf8"));
  const ms = cfg?.agents?.defaults?.memorySearch || {};
  console.log("   provider: " + (ms.provider || "(missing)"));
  console.log("   model:    " + (ms.model || "(missing)"));
  console.log("   baseUrl:  " + (ms?.remote?.baseUrl || "(missing)"));
  console.log("   apiKey:   " + (ms?.remote?.apiKey ? "(set)" : "(missing)"));
  if (ms.provider !== "openai") { console.log("   WARNING: provider is not openai"); process.exit(1); }
  if (!ms.model) { console.log("   WARNING: model is empty"); process.exit(1); }
  if (!ms?.remote?.baseUrl) { console.log("   WARNING: baseUrl is empty"); process.exit(1); }
} catch (e) {
  console.error("   ERROR: config is not valid JSON: " + e.message);
  process.exit(1);
}
')"
echo "$SANITY"

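# After enforcement the config contains a block like this (illustrative values;
# surrounding keys depend on your existing openclaw.json):
#   "agents": { "defaults": { "memorySearch": {
#     "provider": "openai",
#     "model":    "embeddinggemma:latest",
#     "remote":   { "baseUrl": "http://127.0.0.1:11434/v1/", "apiKey": "..." }
#   } } }
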
if [ "$INSTALL_WATCHDOG" -eq 1 ]; then
  echo ""
  echo "   Installing drift auto-heal watchdog..."
  if [ "$(uname)" = "Darwin" ]; then
    "${SKILLS_DIR}/watchdog.sh" \
      --install-launchd \
      --model "${MODEL_TO_USE_CANON}" \
      --openclaw-config "${CONFIG_PATH}" \
      --interval-sec "${WATCHDOG_INTERVAL}" >/dev/null
    echo "   Watchdog installed (launchd, ${WATCHDOG_INTERVAL}s interval)."
  else
    echo "   WARNING: --install-watchdog currently supports macOS launchd only."
    echo "   Run the watchdog manually: ${SKILLS_DIR}/watchdog.sh --once --model ${MODEL_TO_USE_CANON}"
  fi
fi

# ── Gateway restart ──────────────────────────────────────────────────────────

if [ "$SKIP_RESTART" -eq 1 ]; then
  echo ""
  echo "4. Skipping gateway restart (--skip-restart, or 'openclaw' CLI unavailable)"
else
  echo ""
  echo "4. Restarting OpenClaw gateway..."
  restart_gateway || true
fi

# ── Verify embeddings endpoint ───────────────────────────────────────────────

echo ""
echo "5. Verifying Ollama embeddings endpoint..."
if "${SKILLS_DIR}/verify.sh" --model "$MODEL_TO_USE_CANON" --base-url "http://127.0.0.1:11434/v1/"; then
  echo "   Verification passed."
else
  echo "   WARNING: Verification failed. Check the Ollama model and gateway logs."
fi

# ── Optional memory reindex ──────────────────────────────────────────────────

POST_MS="$(node -e '
const fs=require("fs");
const p=process.argv[1];
let cfg={};
try { cfg=JSON.parse(fs.readFileSync(p,"utf8")); } catch (_) {}
const ms=cfg?.agents?.defaults?.memorySearch || {};
const fp={
  provider: ms.provider || "",
  model: ms.model || "",
  baseUrl: ms?.remote?.baseUrl || "",
  apiKeySet: !!(ms?.remote?.apiKey || ""),
};
process.stdout.write(JSON.stringify(fp));
' "$CONFIG_PATH")"

NEEDS_REINDEX=1
if [ "$PRE_MS" = "$POST_MS" ]; then
  NEEDS_REINDEX=0
fi

RUN_REINDEX=0
case "$REINDEX_MEMORY" in
  yes) RUN_REINDEX=1 ;;
  no)  RUN_REINDEX=0 ;;
  auto)
    if [ "$NEEDS_REINDEX" -eq 1 ]; then
      if [ "$NON_INTERACTIVE" -eq 1 ]; then
        RUN_REINDEX=1
      else
        echo ""
        echo "6. Embedding fingerprint changed."
        printf "   Rebuild memory index now (recommended)? [Y/n]: "
        read -r yn
        case "${yn:-Y}" in
          Y|y|yes|YES) RUN_REINDEX=1 ;;
          *) RUN_REINDEX=0 ;;
        esac
      fi
    else
      RUN_REINDEX=0
    fi
    ;;
esac

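# Reindex decision summary (derived from the logic above):
#   --reindex-memory yes  -> always reindex
#   --reindex-memory no   -> never reindex
#   --reindex-memory auto -> reindex only if the fingerprint changed;
#                            prompts when interactive, proceeds when non-interactive
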
if [ "$NEEDS_REINDEX" -eq 0 ]; then
  echo ""
  echo "6. Memory reindex not needed (embedding fingerprint unchanged)."
elif [ "$RUN_REINDEX" -eq 1 ]; then
  echo ""
  echo "6. Rebuilding memory index..."
  if command -v openclaw >/dev/null 2>&1; then
    if openclaw memory index --force --verbose; then
      echo "   Memory reindex completed."
    else
      echo "   WARNING: Memory reindex failed. Run manually:"
      echo "     openclaw memory index --force --verbose"
    fi
  else
    echo "   WARNING: openclaw CLI not found; cannot run reindex automatically."
    echo "   Run manually on a host with the OpenClaw CLI:"
    echo "     openclaw memory index --force --verbose"
  fi
else
  echo ""
  echo "6. Skipping memory reindex."
  if [ "$NEEDS_REINDEX" -eq 1 ]; then
    echo "   Recommended command:"
    echo "     openclaw memory index --force --verbose"
  fi
fi

echo ""
echo "Done. OpenClaw memory embeddings now use Ollama."
echo ""
echo "  Model:   ${MODEL_TO_USE_CANON}"
echo "  Config:  ${CONFIG_PATH}"
echo "  Enforce: ${SKILLS_DIR}/enforce.sh"
echo "  Guard:   ${SKILLS_DIR}/watchdog.sh"
echo "  Verify:  ${SKILLS_DIR}/verify.sh"
echo ""
222
skills/ollama-memory-embeddings/verify.sh
Normal file
@@ -0,0 +1,222 @@
#!/usr/bin/env bash
# Verify the Ollama embeddings endpoint with the selected model.
# Checks: model exists in Ollama → endpoint reachable → valid embedding response.
set -euo pipefail

MODEL=""
BASE_URL=""
CONFIG_PATH="${OPENCLAW_CONFIG_PATH:-${HOME}/.openclaw/openclaw.json}"
VERBOSE=0

usage() {
  cat <<'EOF'
Usage:
  verify.sh [--model <id>] [--base-url <url>] [--openclaw-config <path>] [--verbose]

Verifies that the configured Ollama embeddings endpoint is working correctly.

Behavior:
  - If --model is omitted, reads memorySearch.model from the OpenClaw config.
  - If --base-url is omitted, reads memorySearch.remote.baseUrl from the config,
    then defaults to http://127.0.0.1:11434/v1/
  - Checks: (1) model exists in Ollama, (2) endpoint returns a valid embedding.
  - Use --verbose to dump the raw API response on failure.
EOF
}

while [ $# -gt 0 ]; do
  case "$1" in
    --model) MODEL="$2"; shift 2 ;;
    --base-url) BASE_URL="$2"; shift 2 ;;
    --openclaw-config) CONFIG_PATH="$2"; shift 2 ;;
    --verbose) VERBOSE=1; shift ;;
    --help|-h) usage; exit 0 ;;
    *) echo "Unknown option: $1"; usage; exit 1 ;;
  esac
done

require_cmd() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "ERROR: '$1' not found in PATH."
    exit 1
  }
}

# Normalize model name: add :latest if no tag is present.
normalize_model() {
  local m="$1"
  if [[ "$m" != *:* ]]; then
    echo "${m}:latest"
  else
    echo "$m"
  fi
}

require_cmd node
require_cmd curl

# ── Read config if needed ────────────────────────────────────────────────────

if [ -z "$MODEL" ] || [ -z "$BASE_URL" ]; then
  export CONFIG_PATH
  MAP_OUTPUT="$(node -e '
  const fs = require("fs");
  const p = process.env.CONFIG_PATH;
  const CANDIDATES = [
    ["agents","defaults","memorySearch"],
    ["memorySearch"],
    ["agents","memorySearch"],
    ["agents","defaults","memory","search"],
    ["memory","search"],
  ];
  function getAt(obj, path) {
    let cur = obj;
    for (const k of path) {
      if (!cur || typeof cur !== "object" || !(k in cur)) return undefined;
      cur = cur[k];
    }
    return cur;
  }
  function resolveMs(cfg) {
    const canonical = getAt(cfg, CANDIDATES[0]);
    if (canonical && typeof canonical === "object" && !Array.isArray(canonical)) return canonical;
    for (const p of CANDIDATES.slice(1)) {
      const v = getAt(cfg, p);
      if (v && typeof v === "object" && !Array.isArray(v)) return v;
    }
    return {};
  }
  let cfg = {};
  try { cfg = JSON.parse(fs.readFileSync(p, "utf8")); } catch (_) {}
  const ms = resolveMs(cfg);
  const model = ms.model || "";
  const base = (ms?.remote?.baseUrl || "http://127.0.0.1:11434/v1/").trim();
  console.log(model);
  console.log(base);
  ')"
  CFG_MODEL="$(printf "%s\n" "$MAP_OUTPUT" | sed -n '1p')"
  CFG_BASE_URL="$(printf "%s\n" "$MAP_OUTPUT" | sed -n '2p')"
  [ -z "$MODEL" ] && MODEL="$CFG_MODEL"
  [ -z "$BASE_URL" ] && BASE_URL="$CFG_BASE_URL"
fi

if [ -z "$MODEL" ]; then
  echo "ERROR: Could not determine embedding model."
  echo "  Provide --model <id> or configure memorySearch.model in ${CONFIG_PATH}"
  exit 1
fi

# Normalize model tag
MODEL="$(normalize_model "$MODEL")"

# Normalize URL to .../v1 and call /embeddings
BASE_URL="${BASE_URL%/}"
if [[ "$BASE_URL" != */v1 ]]; then
  BASE_URL="${BASE_URL}/v1"
fi
EMBED_URL="${BASE_URL}/embeddings"

echo "Checking Ollama embeddings:"
echo "  URL:   ${EMBED_URL}"
echo "  Model: ${MODEL}"

# ── Step 1: Check model exists in Ollama ─────────────────────────────────────

echo ""
echo "  [1/2] Checking model availability in Ollama..."
if command -v ollama >/dev/null 2>&1; then
  if ! ollama list 2>/dev/null | awk 'NR>1{print $1}' | grep -qE "^(${MODEL}|${MODEL%%:*})$" 2>/dev/null; then
    echo "  WARNING: model '${MODEL}' not found in 'ollama list'."
    echo "  The model may not be pulled. Try: ollama pull ${MODEL%%:*}"
    echo ""
    echo "  Continuing with endpoint check anyway..."
  else
    echo "  Model '${MODEL}' found in Ollama."
  fi
else
  echo "  NOTE: 'ollama' CLI not in PATH; skipping model existence check."
fi

# ── Step 2: Call embeddings endpoint ─────────────────────────────────────────

echo "  [2/2] Calling embeddings endpoint..."

PAYLOAD=$(cat <<EOF
{"model":"${MODEL}","input":"openclaw memory embeddings health check"}
EOF
)

HTTP_CODE=""
RESP=""

# Capture the HTTP status and response body without mixing stderr into the status code.
TMP_BODY="$(mktemp)"
TMP_ERR="$(mktemp)"
set +e
HTTP_CODE="$(curl -sS -o "$TMP_BODY" -w "%{http_code}" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" \
  "$EMBED_URL" 2>"$TMP_ERR")"
CURL_STATUS=$?
set -e

RESP="$(cat "$TMP_BODY")"
CURL_ERR="$(cat "$TMP_ERR")"
rm -f "$TMP_BODY" "$TMP_ERR"

if [ "$CURL_STATUS" -ne 0 ]; then
  echo "  ERROR: curl failed to reach ${EMBED_URL}"
  echo "  Is Ollama running? Check: curl http://127.0.0.1:11434/api/tags"
  if [ "$VERBOSE" -eq 1 ] && [ -n "$CURL_ERR" ]; then
    echo "  curl error: ${CURL_ERR}"
  fi
  exit 1
fi

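# Manual equivalent of the request above, for debugging by hand
# (model name illustrative):
#   curl -s http://127.0.0.1:11434/v1/embeddings \
#     -H "Content-Type: application/json" \
#     -d '{"model":"embeddinggemma:latest","input":"health check"}'
# A healthy response has the OpenAI-compatible shape:
#   {"object":"list","data":[{"object":"embedding","embedding":[...],"index":0}],"model":"..."}
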
if [ "$HTTP_CODE" != "200" ]; then
  echo "  ERROR: embeddings endpoint returned HTTP ${HTTP_CODE}"
  if [ "$VERBOSE" -eq 1 ] && [ -n "$RESP" ]; then
    echo ""
    echo "  Raw response (first 2000 chars):"
    echo "  ${RESP:0:2000}"
  fi
  # Try to extract an error message from the response body.
  if [ -n "$RESP" ]; then
    ERR_MSG="$(echo "$RESP" | node -e '
    let d=""; process.stdin.on("data",c=>d+=c); process.stdin.on("end",()=>{
      try { const j=JSON.parse(d); console.log(j.error||j.message||""); } catch { console.log(""); }
    });
    ' 2>/dev/null)" || true
    if [ -n "$ERR_MSG" ]; then
      echo "  Server error: ${ERR_MSG}"
    fi
  fi
  exit 1
fi

export RESP VERBOSE
node <<'NODEOF'
const raw = process.env.RESP || "";
const verbose = process.env.VERBOSE === "1";
let body;
try { body = JSON.parse(raw); } catch {
  console.error("  ERROR: embeddings endpoint did not return valid JSON.");
  if (verbose) {
    console.error("  Raw response (first 2000 chars):");
    console.error("  " + raw.slice(0, 2000));
  }
  process.exit(1);
}

const arr = body?.data?.[0]?.embedding;
if (!Array.isArray(arr) || arr.length === 0) {
  console.error("  ERROR: embeddings response missing data[0].embedding.");
  console.error("  Top-level keys: " + Object.keys(body).join(", "));
  if (verbose) {
    console.error("  Raw response (first 2000 chars):");
    console.error("  " + raw.slice(0, 2000));
  }
  process.exit(1);
}
console.log(`  OK: received embedding vector (dims=${arr.length})`);
NODEOF
217
skills/ollama-memory-embeddings/watchdog.sh
Normal file
@@ -0,0 +1,217 @@
#!/usr/bin/env bash
# Drift watchdog for the OpenClaw memorySearch embeddings config.
set -euo pipefail

SKILL_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ENFORCE_SH="${SKILL_DIR}/enforce.sh"
CONFIG_PATH="${OPENCLAW_CONFIG_PATH:-${HOME}/.openclaw/openclaw.json}"
MODEL=""
BASE_URL="http://127.0.0.1:11434/v1/"
INTERVAL_SEC=60
ONCE=0
RESTART_ON_HEAL=0
INSTALL_LAUNCHD=0
UNINSTALL_LAUNCHD=0
QUIET=0

PLIST_NAME="bot.molt.openclaw.embedding-guard"
PLIST_PATH="${HOME}/Library/LaunchAgents/${PLIST_NAME}.plist"
LOG_DIR="${HOME}/.openclaw/logs"
STDOUT_LOG="${LOG_DIR}/embedding-guard.out.log"
STDERR_LOG="${LOG_DIR}/embedding-guard.err.log"

usage() {
  cat <<'EOF'
Usage:
  watchdog.sh [options]

Modes:
  --once                 run one check/heal cycle, then exit
  (default)              run continuously and check every --interval-sec
  --install-launchd      install + load launchd job (macOS)
  --uninstall-launchd    unload + remove launchd job (macOS)

Other OS guidance:
  Linux:   run --once via cron/systemd timer
  Windows: not supported (bash script)

Linux cron example (every 5 min):
  */5 * * * * /bin/bash ~/.openclaw/skills/ollama-memory-embeddings/watchdog.sh --once --model embeddinggemma >/dev/null 2>&1

Options:
  --model <id>              model to enforce (required for new installs)
  --base-url <url>          base URL to enforce (default: http://127.0.0.1:11434/v1/)
  --openclaw-config <path>  config path (default: ~/.openclaw/openclaw.json)
  --interval-sec <n>        check interval (default: 60)
  --restart-on-heal         restart gateway after drift heal
  --quiet                   suppress non-error output
  --help                    show help
EOF
}

while [ $# -gt 0 ]; do
  case "$1" in
    --model) MODEL="$2"; shift 2 ;;
    --base-url) BASE_URL="$2"; shift 2 ;;
    --openclaw-config) CONFIG_PATH="$2"; shift 2 ;;
    --interval-sec) INTERVAL_SEC="$2"; shift 2 ;;
    --once) ONCE=1; shift ;;
    --restart-on-heal) RESTART_ON_HEAL=1; shift ;;
    --install-launchd) INSTALL_LAUNCHD=1; shift ;;
    --uninstall-launchd) UNINSTALL_LAUNCHD=1; shift ;;
    --quiet) QUIET=1; shift ;;
    --help|-h) usage; exit 0 ;;
    *) echo "Unknown option: $1"; usage; exit 1 ;;
  esac
done

log() {
  [ "$QUIET" -eq 1 ] || echo "$@"
}

require_cmd() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "ERROR: '$1' not found in PATH."
    exit 1
  }
}

resolve_model_if_missing() {
  if [ -n "$MODEL" ]; then
    return 0
  fi
  MODEL="$(node -e '
  const fs=require("fs");
  const p=process.argv[1];
  try {
    const cfg=JSON.parse(fs.readFileSync(p,"utf8"));
    process.stdout.write(cfg?.agents?.defaults?.memorySearch?.model || "");
  } catch (_) {}
  ' "$CONFIG_PATH")"
  if [ -z "$MODEL" ]; then
    echo "ERROR: --model is required (or set memorySearch.model first)."
    exit 1
  fi
}

run_cycle() {
  local status
  set +e
  "$ENFORCE_SH" \
    --check-only \
    --model "$MODEL" \
    --base-url "$BASE_URL" \
    --openclaw-config "$CONFIG_PATH" \
    --quiet
  status=$?
  set -e

  if [ "$status" -eq 0 ]; then
    log "[$(date -u +%Y-%m-%dT%H:%M:%SZ)] OK: no drift"
    return 0
  fi
  # enforce.sh --check-only signals drift with exit code 10; anything else is an error.
  if [ "$status" -ne 10 ]; then
    log "[$(date -u +%Y-%m-%dT%H:%M:%SZ)] ERROR: drift check failed (status $status)"
    return 1
  fi

  log "[$(date -u +%Y-%m-%dT%H:%M:%SZ)] DRIFT: healing..."
  if [ "$RESTART_ON_HEAL" -eq 1 ]; then
    "$ENFORCE_SH" --model "$MODEL" --base-url "$BASE_URL" --openclaw-config "$CONFIG_PATH" --restart-on-change --quiet
  else
    "$ENFORCE_SH" --model "$MODEL" --base-url "$BASE_URL" --openclaw-config "$CONFIG_PATH" --quiet
  fi
  log "[$(date -u +%Y-%m-%dT%H:%M:%SZ)] HEALED"
}

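# One-off manual check (model name illustrative):
#   ~/.openclaw/skills/ollama-memory-embeddings/watchdog.sh --once --model embeddinggemma
# A cycle logs "OK" when the check returns 0, heals on exit code 10 (drift),
# and reports an error for any other status.
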
install_launchd() {
  if [ "$(uname)" != "Darwin" ]; then
    echo "ERROR: --install-launchd is macOS only."
    echo "Linux recommendation:"
    echo "  Use cron or a systemd timer to run:"
    echo "  /bin/bash ${SKILL_DIR}/watchdog.sh --once --model <model>"
    echo "Windows: not supported (bash script)."
    exit 1
  fi
  require_cmd launchctl
  require_cmd node
  resolve_model_if_missing
  mkdir -p "$(dirname "$PLIST_PATH")" "$LOG_DIR"
  local shell_bin
  shell_bin="$(command -v bash)"
  cat > "$PLIST_PATH" <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>${PLIST_NAME}</string>
  <key>ProgramArguments</key>
  <array>
    <string>${shell_bin}</string>
    <string>${SKILL_DIR}/watchdog.sh</string>
    <string>--once</string>
    <string>--model</string>
    <string>${MODEL}</string>
    <string>--base-url</string>
    <string>${BASE_URL}</string>
    <string>--openclaw-config</string>
    <string>${CONFIG_PATH}</string>
$( [ "$RESTART_ON_HEAL" -eq 1 ] && echo "    <string>--restart-on-heal</string>" || true )
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>StartInterval</key>
  <integer>${INTERVAL_SEC}</integer>
  <key>StandardOutPath</key>
  <string>${STDOUT_LOG}</string>
  <key>StandardErrorPath</key>
  <string>${STDERR_LOG}</string>
</dict>
</plist>
EOF

  launchctl bootout "gui/$(id -u)/${PLIST_NAME}" >/dev/null 2>&1 || true
  launchctl bootstrap "gui/$(id -u)" "$PLIST_PATH"
  launchctl kickstart -k "gui/$(id -u)/${PLIST_NAME}"
  log "Installed launchd watchdog: ${PLIST_PATH}"
}

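# Useful launchd inspection commands once the job is installed (macOS):
#   launchctl print "gui/$(id -u)/bot.molt.openclaw.embedding-guard"   # job status
#   tail -f ~/.openclaw/logs/embedding-guard.out.log                   # watchdog output
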
uninstall_launchd() {
  if [ "$(uname)" != "Darwin" ]; then
    echo "ERROR: --uninstall-launchd is macOS only."
    echo "Windows: not supported (bash script)."
    exit 1
  fi
  require_cmd launchctl
  launchctl bootout "gui/$(id -u)/${PLIST_NAME}" >/dev/null 2>&1 || true
  rm -f "$PLIST_PATH"
  log "Removed launchd watchdog: ${PLIST_PATH}"
}

if [ "$INSTALL_LAUNCHD" -eq 1 ] && [ "$UNINSTALL_LAUNCHD" -eq 1 ]; then
  echo "ERROR: choose only one of --install-launchd or --uninstall-launchd."
  exit 1
fi

if [ "$INSTALL_LAUNCHD" -eq 1 ]; then
  install_launchd
  exit 0
fi

if [ "$UNINSTALL_LAUNCHD" -eq 1 ]; then
  uninstall_launchd
  exit 0
fi

require_cmd node
resolve_model_if_missing

if [ "$ONCE" -eq 1 ]; then
  run_cycle
  exit 0
fi

while true; do
  run_cycle || true
  sleep "$INTERVAL_SEC"
done