update-llm-models

npx machina-cli add skill yu-iskw/skill-inspector/update-llm-models --openclaw
Update LLM Models

Purpose

This skill maintains the src/core/llm.ts file by ensuring that the default model identifiers for each provider are the latest available "lightweight" versions (e.g., Flash, Haiku, Mini).

Workflow

1. Research Latest Models

Use web_search to identify the most recent lightweight model identifiers for the following providers:

  • OpenAI: Look for "mini" or "nano" variants of the latest GPT model (e.g., GPT-5.2).
  • Anthropic: Look for "haiku" variants (e.g., Claude 4.5 Haiku).
  • Google: Look for "flash" variants (e.g., Gemini 3 Flash).
  • Mistral: Look for "small" or "mini" variants (e.g., Mistral Small 3.1).
  • Groq: Look for the most efficient models available on Groq (usually Llama 8B or 70B variants).
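The research targets above can be captured as a simple lookup the skill iterates over. This is only an illustrative sketch (the `researchHints` name and the query strings are assumptions, not part of the skill):

```typescript
// Illustrative web_search query hints per provider; refine the
// queries as providers rename their lightweight tiers.
const researchHints: Record<string, string> = {
  openai: "latest OpenAI GPT mini or nano model identifier",
  anthropic: "latest Claude Haiku model identifier",
  google: "latest Gemini Flash model identifier",
  mistral: "latest Mistral Small model identifier",
  groq: "fastest lightweight Llama model available on Groq",
};
```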

2. Identify Target Function

Locate the getDefaultModel function in [src/core/llm.ts](src/core/llm.ts).

3. Apply Updates

Update the return values in the switch statement for each provider. Ensure the model identifiers match the exact strings found during research.
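A minimal sketch of what the updated function might look like. The exact shape of `getDefaultModel`, the `LLMProvider` union, and the model IDs below are assumptions for illustration; always match the real code in src/core/llm.ts and the exact identifiers found during research:

```typescript
// Hypothetical shape of getDefaultModel in src/core/llm.ts.
// Model IDs below are examples of lightweight variants, not
// guaranteed to be current -- replace with researched strings.
type LLMProvider = "openai" | "anthropic" | "google" | "mistral" | "groq";

function getDefaultModel(provider: LLMProvider): string {
  switch (provider) {
    case "openai":
      return "gpt-4o-mini"; // "mini" variant
    case "anthropic":
      return "claude-3-5-haiku-latest"; // "haiku" variant
    case "google":
      return "gemini-2.5-flash"; // "flash" variant
    case "mistral":
      return "mistral-small-latest"; // "small" variant
    case "groq":
      return "llama-3.1-8b-instant"; // efficient 8B variant
  }
}
```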

4. Verification

  • Run pnpm lint to ensure no syntax errors or linting violations.
  • Run pnpm build to confirm the project still compiles.

Guidelines

  • Prefer Efficiency: Always choose the "lighter" or "faster" version if multiple variants exist (e.g., prefer gemini-2.5-flash over gemini-2.5-pro).
  • Exact Identifiers: Use the precise model identifier string required by the provider's API.
  • Provider Coverage: Ensure all providers in the LLMProvider type are addressed if they have a known lightweight default.
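One way to enforce provider coverage is an exhaustiveness check with `never`, so that `pnpm build` fails if a provider in the union lacks a case. A sketch under the assumption that `LLMProvider` is a string-literal union (the union members and model IDs here are illustrative):

```typescript
type LLMProvider = "openai" | "anthropic" | "google";

function getDefaultModel(provider: LLMProvider): string {
  switch (provider) {
    case "openai":
      return "gpt-4o-mini";
    case "anthropic":
      return "claude-3-5-haiku-latest";
    case "google":
      return "gemini-2.5-flash";
    default: {
      // If a member is added to LLMProvider without a case above,
      // this assignment no longer type-checks, surfacing the gap
      // at build time rather than at runtime.
      const unhandled: never = provider;
      throw new Error(`No default model for provider: ${unhandled}`);
    }
  }
}
```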

Examples

Before

case "google":
  return "gemini-2.5-flash";

After (Hypothetical Jan 2026)

case "google":
  return "gemini-3-flash";

Resources

Source

https://github.com/yu-iskw/skill-inspector/blob/main/.claude/skills/update-llm-models/SKILL.md

Overview

This skill automatically researches and updates the default lightweight LLM identifiers used by the project, ensuring the latest efficient models are selected per provider. It targets the getDefaultModel function in src/core/llm.ts and enforces exact provider strings.

How This Skill Works

It uses web_search to locate current lightweight variants for OpenAI, Anthropic, Google, Mistral, and Groq, then updates the getDefaultModel switch cases with the precise strings. After applying changes, it runs lint and build to verify syntax, types, and compilation.

When to Use It

  • When you want to ensure the app uses the most current lightweight models for efficiency.
  • After a provider releases a new lightweight variant.
  • During CI when updating model identifiers as part of dependency health checks.
  • When adding support for a new provider that has lightweight options.
  • When performance budgets require selecting the lightest available model variant.

Quick Start

  1. Run web_search to identify the latest lightweight model IDs for each provider.
  2. Update the getDefaultModel switch cases in src/core/llm.ts with the exact IDs.
  3. Run pnpm lint and pnpm build to verify, then commit.

Best Practices

  • Prefer lighter variants (e.g., mini, haiku, flash) when they exist.
  • Use exact model identifiers from the research; avoid hard-coded guesses.
  • Validate changes with pnpm lint and pnpm build.
  • Comment the chosen identifiers in code to aid future maintainers.
  • Ensure all providers in the LLMProvider type are covered in the switch.

Example Use Cases

  • OpenAI case updated to the latest identified mini variant, such as gpt-5.2-mini.
  • Anthropic updated to claude-4.5-haiku.
  • Google updated to gemini-3-flash.
  • Mistral updated to mistral-small-3.1.
  • Groq updated to llama-8b (a lightweight efficient variant).
