database-toolset-general
npx machina-cli add skill filmicgaze/MiChat-desktop/database-toolset-general --openclaw
When to use this skill
Use this skill whenever you’re about to do real work with MiChat’s database toolset (create/inspect/migrate/query/update/backup).
Default stance: one DB per profile (multiple tables). Create a second DB only for scratch/import/staging workflows or a clear separation reason.
Practical workflow (the safe default)
- Discover / pick DB
- If unsure what exists, list DBs.
- Otherwise default to a single main DB name (e.g. `personal`).
- Ensure it exists
- If a DB might not exist yet, ensure it with create enabled.
- Inspect before changing
- Use schema inspection to understand the current tables/columns.
- Back up before risky changes
- Before DDL or destructive edits, take a backup with a short label.
- Migrate/seed in a batch
- Prefer batched execution in a transaction for migrations and small seed data.
- Keep batches short and readable.
- Verify
- Re-check schema and run small verification SELECTs.
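The workflow above can be sketched with plain sqlite3. This is a hypothetical stand-in, not the MiChat toolset itself: the file path, `backup` helper, and `notes` table are illustrative, and the real toolset resolves DB locations per profile.

```python
import sqlite3
import tempfile
from pathlib import Path

# Hypothetical file layout -- the real toolset resolves DB paths per profile.
db_path = str(Path(tempfile.mkdtemp()) / "personal.db")

def backup(path: str, label: str) -> str:
    """Copy the DB aside with a short label before risky changes."""
    dest = f"{path}.{label}.bak"
    src, dst = sqlite3.connect(path), sqlite3.connect(dest)
    with dst:
        src.backup(dst)  # consistent online copy of all pages
    src.close()
    dst.close()
    return dest

# 1-2. Discover/ensure: connecting creates the file if it is missing.
conn = sqlite3.connect(db_path)

# 3. Inspect before changing.
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]

# 4. Back up before DDL.
backup(db_path, "pre-notes")

# 5. Migrate/seed in one short batch; `with conn` commits at the end.
with conn:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS notes ("
        "  id INTEGER PRIMARY KEY,"
        "  body TEXT NOT NULL,"
        "  created_at TEXT NOT NULL DEFAULT (datetime('now')))")
    conn.execute("INSERT INTO notes (body) VALUES (?)", ("hello",))

# 6. Verify with a small SELECT.
note_count = conn.execute("SELECT COUNT(*) FROM notes").fetchone()[0]
```

The backup-before-DDL step is cheap insurance: SQLite's online backup API copies a consistent snapshot even if the source is in use.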
Tips and gotchas
1) SELECT vs non-SELECT routing
- SELECT/CTE statements belong in the query tool; using the execute tool for a SELECT will be rejected.
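As a rough illustration of this routing rule, a heuristic classifier (a sketch, not the toolset's actual router) only has to look at the first keyword after leading whitespace and `--` comments:

```python
import re

def is_read_query(sql: str) -> bool:
    """Route SELECT/CTE statements to the query tool, everything else
    to the execute tool. Heuristic sketch, not the real router."""
    # Strip leading whitespace and "--" line comments before the keyword.
    head = re.sub(r"\A(\s|--[^\n]*\n)+", "", sql)
    first = head.split(None, 1)[0].upper() if head else ""
    return first in ("SELECT", "WITH")
```

A CTE (`WITH ... SELECT`) counts as a read here; a `WITH ... INSERT` would be misrouted by this toy check, which is one reason real routers parse rather than pattern-match.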
2) Paging and hard caps
- Query results are paged with an `offset:<n>` cursor.
- Even if you request a huge `max_rows`, results are hard-capped (practically: 200 rows/page). Use `next_cursor` to continue.
- For stable paging, include an ORDER BY.
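The paging behavior can be mimicked with plain SQLite. This is a hypothetical helper: `page_size` mirrors the ~200-row cap, the LIMIT/OFFSET pair stands in for the `offset:<n>` cursor, and the caller is expected to supply the ORDER BY.

```python
import sqlite3

def paged_select(conn, sql, params=(), page_size=200):
    """Yield all rows of a SELECT, one capped page at a time.
    The SQL should include ORDER BY so pages are stable."""
    offset = 0
    while True:
        rows = conn.execute(
            f"{sql} LIMIT ? OFFSET ?", (*params, page_size, offset)
        ).fetchall()
        yield from rows
        if len(rows) < page_size:   # short page == last page
            return
        offset += page_size

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO t (id) VALUES (?)",
                 [(i,) for i in range(450)])
ids = [r[0] for r in paged_select(conn, "SELECT id FROM t ORDER BY id")]
```

Without the ORDER BY, SQLite is free to return rows in any order, so consecutive pages can overlap or skip rows.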
3) db_execute_many failure semantics (choose intentionally)
- Atomic default: `transaction=true, stop_on_error=true`.
  - If any statement fails, you get an error envelope only (no partial results list).
  - Error details include `failed_index`, `sql_preview`, and SQLite error codes/names.
- If you need partial progress / diagnostics, use `transaction=false`:
  - `stop_on_error=true` stops at the first error and returns results up to the failure.
  - `stop_on_error=false` continues and returns per-statement errors; overall `ok=false` if any failed.
- Rule: when `transaction=true`, `stop_on_error` must be true.
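These semantics can be emulated with plain sqlite3. This is a sketch of the documented behavior, not the real tool: it assumes a connection opened with `isolation_level=None` (autocommit) so the explicit BEGIN/COMMIT/ROLLBACK below are the only transaction control, and the `missing` table is deliberately absent to force a failure.

```python
import sqlite3

def execute_many(conn, statements, transaction=True, stop_on_error=True):
    """Emulate the documented db_execute_many failure semantics.
    Requires a conn opened with isolation_level=None."""
    if transaction and not stop_on_error:
        raise ValueError("when transaction=true, stop_on_error must be true")
    if transaction:
        conn.execute("BEGIN")
        try:
            for i, sql in enumerate(statements):
                conn.execute(sql)
        except sqlite3.Error as exc:
            conn.execute("ROLLBACK")  # error envelope only, no partial results
            return {"ok": False, "failed_index": i, "sql_preview": sql[:80],
                    "error": getattr(exc, "sqlite_errorname", str(exc))}
        conn.execute("COMMIT")
        return {"ok": True}
    results = []
    for i, sql in enumerate(statements):
        try:
            conn.execute(sql)
            results.append({"index": i, "ok": True})
        except sqlite3.Error as exc:
            results.append({"index": i, "ok": False, "error": str(exc)})
            if stop_on_error:
                break  # return results up to the failure
    return {"ok": all(r["ok"] for r in results), "results": results}

conn = sqlite3.connect(":memory:", isolation_level=None)
outcome = execute_many(conn, [
    "CREATE TABLE a (x INTEGER)",
    "INSERT INTO missing VALUES (1)",  # fails: no such table
])
```

Because SQLite DDL is transactional, the rollback in the atomic path undoes the `CREATE TABLE` as well, matching the "no partial results" contract.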
4) RETURNING is for small results
- Statements with `RETURNING` cap returned rows (e.g. 50 per statement in batched execution).
- If you need lots of rows back, prefer:
  - write changes without relying on a large RETURNING payload, then
  - fetch what you need with a paged SELECT.
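Both patterns side by side in a plain-sqlite3 sketch (the `notes` table and row counts are illustrative; `RETURNING` needs SQLite 3.35+, hence the version guard):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")

# Small result: RETURNING is fine for a handful of rows.
if sqlite3.sqlite_version_info >= (3, 35, 0):
    new_id = conn.execute(
        "INSERT INTO notes (body) VALUES (?) RETURNING id", ("first",)
    ).fetchone()[0]
else:
    conn.execute("INSERT INTO notes (body) VALUES (?)", ("first",))
    new_id = conn.execute("SELECT last_insert_rowid()").fetchone()[0]

# Large result: write without RETURNING, then page the rows back.
conn.executemany("INSERT INTO notes (body) VALUES (?)",
                 [(f"note {i}",) for i in range(500)])
conn.commit()
page = conn.execute(
    "SELECT id, body FROM notes ORDER BY id LIMIT 200 OFFSET 0").fetchall()
```

The 500-row seed would blow past a per-statement RETURNING cap, so the rows come back through an ordered, paged SELECT instead.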
5) Debugging a failed batch quickly
- Use the batch error’s `failed_index` + `sql_preview` to pinpoint the failing statement.
- If the failure looks like a schema mismatch, re-run schema inspection (full detail if available).
Recommended conventions (lightweight)
- Use params (positional/named) instead of interpolating values into SQL.
- Prefer idempotent DDL where possible (e.g., `IF NOT EXISTS`).
- Use consistent timestamps (e.g., `datetime('now')`).
- Start with minimal indexes; add them when queries become slow or frequent.
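The conventions combine naturally in a single migration pass. A plain-sqlite3 sketch (the `events` table and index name are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Idempotent DDL: safe to re-run on every migration pass.
conn.execute("""
    CREATE TABLE IF NOT EXISTS events (
        id         INTEGER PRIMARY KEY,
        kind       TEXT NOT NULL,
        created_at TEXT NOT NULL DEFAULT (datetime('now'))
    )
""")
conn.execute("CREATE INDEX IF NOT EXISTS idx_events_kind ON events(kind)")

# Parameterized inserts: values are bound, never interpolated into SQL.
conn.execute("INSERT INTO events (kind) VALUES (?)", ("login",))       # positional
conn.execute("INSERT INTO events (kind) VALUES (:kind)", {"kind": "logout"})  # named
conn.commit()

kinds = [r[0] for r in conn.execute("SELECT kind FROM events ORDER BY id")]
```

Binding parameters sidesteps SQL injection and quoting bugs in one move, and the `datetime('now')` default keeps timestamps uniform without every caller remembering to supply one.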
Optional: scratch/import staging DB (second DB)
Use a second DB only when it makes your workflow simpler:
- messy imports
- dedup/normalization staging
- experimental schema work
Then copy cleaned data into the main DB and discard/ignore the scratch DB.
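A hypothetical staging flow in plain sqlite3: load a messy import (duplicates included) into a disposable scratch DB, then copy deduplicated rows into the main DB via ATTACH and discard the scratch file. All table names and paths here are illustrative.

```python
import os
import sqlite3
import tempfile

main = sqlite3.connect(":memory:")
main.execute("CREATE TABLE contacts (email TEXT PRIMARY KEY, name TEXT)")

# Disposable scratch DB holding the messy import.
scratch_path = os.path.join(tempfile.mkdtemp(), "scratch_import.db")
scratch = sqlite3.connect(scratch_path)
scratch.execute("CREATE TABLE raw (email TEXT, name TEXT)")
scratch.executemany("INSERT INTO raw VALUES (?, ?)", [
    ("ann@example.com", "Ann"),
    ("ann@example.com", "Ann"),   # duplicate row from the import
    ("bob@example.com", "Bob"),
])
scratch.commit()
scratch.close()

# Copy cleaned data into the main DB, then discard the scratch file.
main.execute("ATTACH DATABASE ? AS scratch", (scratch_path,))
with main:
    main.execute(
        "INSERT OR IGNORE INTO contacts (email, name) "
        "SELECT DISTINCT email, name FROM scratch.raw")
main.execute("DETACH DATABASE scratch")
os.remove(scratch_path)

contacts = main.execute(
    "SELECT email, name FROM contacts ORDER BY email").fetchall()
```

Keeping the raw import in its own file means a botched cleanup never touches the main DB: you just delete the scratch file and start over.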
Troubleshooting checklist
- not found: ensure the DB exists (create enabled)
- paging confusion: add ORDER BY; follow `next_cursor`
- batch failure: check `failed_index` and the SQLite error; consider rerunning with `transaction=false` for more visibility
Source
https://github.com/filmicgaze/MiChat-desktop/blob/main/profiles/_global/skills/database-toolset-general/SKILL.md
Overview
This skill helps you manage MiChat’s profile-scoped SQLite databases—creating, inspecting, migrating, querying, updating, and backing up. It follows a conservative default: one DB per profile (with an optional scratch DB for staging or testing) to minimize risk. Use the practical workflow to discover, ensure, inspect, back up, migrate, and verify changes safely.
How This Skill Works
Operate the MiChat database toolset to select or create the main profile DB, inspect its schema, and perform batched migrations or seeds within a transaction. Before risky changes, back up with a short label; after changes, re-check the schema and run small verification SELECTs. Note: SELECTs must use the query tool; results are paged (limited to ~200 rows per page) and should include ORDER BY for stable paging.
When to Use It
- Creating or selecting a profile-scoped DB (default main 'personal' DB)
- Ensuring the target DB exists before making changes
- Inspecting the current schema to understand tables and columns
- Backing up before DDL or destructive edits
- Running migrations/seeds in batches within a transaction and verifying results
Quick Start
- Step 1: Discover/pick the main profile DB (default to personal)
- Step 2: Ensure the DB exists (create if missing)
- Step 3: Inspect the schema, back up, then run batched migrations/seeds within a transaction and verify
Best Practices
- Use parameterized queries; avoid interpolating values directly into SQL
- Prefer idempotent DDL (e.g., IF NOT EXISTS) where possible
- Keep migrations/seed batches short and clear; use transaction=true
- RETURNING payloads are small; for large results, fetch with paged SELECT
- Enable stable paging with ORDER BY and follow next_cursor; limit max_rows to avoid surprises
Example Use Cases
- Discover and list DBs, then select the main profile DB (e.g., personal) for work
- Inspect the schema of the current profile DB to understand tables/columns
- Back up the profile DB with a short label before a risky DDL change
- Migrate data in a batch inside a transaction to ensure atomicity
- Use a scratch DB for messy imports or staging, then copy clean data into the main DB