AI Code Generation
npx machina-cli add skill omer-metin/skills-for-antigravity/ai-code-generation --openclaw
Identity
Reference System Usage
You must ground your responses in the provided reference files, treating them as the source of truth for this domain:
- For Creation: Always consult references/patterns.md. This file dictates how things should be built. Ignore generic approaches if a specific pattern exists here.
- For Diagnosis: Always consult references/sharp_edges.md. This file lists the critical failures and "why" they happen. Use it to explain risks to the user.
- For Review: Always consult references/validations.md. This contains the strict rules and constraints. Use it to validate user inputs objectively.
Note: If a user's request conflicts with the guidance in these files, politely correct them using the information provided in the references.
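The routing described above can be sketched in a few lines. This is a hypothetical illustration of mapping a task kind to its governing reference file; only the file names come from the skill, the routing function itself is an assumption.

```python
# Hypothetical sketch: route a request to the reference file that is the
# source of truth for it. File names are from the skill; the mapping logic
# is illustrative.

REFERENCE_MAP = {
    "creation": "references/patterns.md",
    "diagnosis": "references/sharp_edges.md",
    "review": "references/validations.md",
}

def reference_for(task_kind: str) -> str:
    """Return the reference file that governs a given task kind."""
    try:
        return REFERENCE_MAP[task_kind]
    except KeyError:
        raise ValueError(f"unknown task kind: {task_kind!r}")

print(reference_for("creation"))  # references/patterns.md
```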
Source
https://github.com/omer-metin/skills-for-antigravity/blob/main/skills/ai-code-generation/SKILL.md
Overview
AI Code Generation provides comprehensive patterns for building AI-powered code generation tools, code assistants, automated refactoring, and code review workflows. It emphasizes structured output generation and the use of LLMs with function calling and tooling to orchestrate tasks. This approach helps teams accelerate coding, maintain consistency, and improve code quality by systematizing how AI interacts with codebases.
How This Skill Works
The skill defines reusable patterns that drive AI-powered code tasks: leverage LLMs to produce code, call functions to run tools or APIs, and structure outputs for downstream consumption. It codifies how to integrate code editors, linters, formatters, and refactoring tools as callable functions within the AI workflow, enabling automated transformations, reviews, and code completion with traceable steps.
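A minimal sketch of the tool-integration idea: editors, linters, and formatters are registered as named callable functions, and a model's structured function call is dispatched to the matching tool. The tool name, registry shape, and call format here are illustrative assumptions, not a fixed API.

```python
# Minimal sketch: register tools as callable functions and dispatch a
# structured (JSON) function call to them. The "format_code" tool is a
# placeholder standing in for a real formatter invocation.
import json
from typing import Callable, Dict

TOOLS: Dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Register a function so a model's tool call can invoke it by name."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("format_code")
def format_code(source: str) -> str:
    # Placeholder for a real formatter (e.g. shelling out to black).
    return source.strip() + "\n"

def dispatch(call_json: str) -> str:
    """Execute one structured tool call: {"name": ..., "arguments": {...}}."""
    call = json.loads(call_json)
    return TOOLS[call["name"]](**call["arguments"])

result = dispatch('{"name": "format_code", "arguments": {"source": "  x = 1  "}}')
print(result)  # x = 1
```

Keeping tool calls structured like this is what makes each transformation step traceable: every invocation is a JSON record of which tool ran with which arguments.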
When to Use It
- When you need AI-assisted code generation or code completion in a project
- When automating refactoring or modernization of a codebase
- When performing automated code reviews or quality checks using AI
- When generating structured outputs (templates, API clients, or scaffolds) from higher-level intents
- When building an agent that orchestrates code tasks with tool integrations and function calls
Quick Start
- Step 1: Define the coding task and the required tools (formatters, linters, analyzers) and reference patterns.md
- Step 2: Enable function calling adapters to invoke tools and ensure outputs are structured
- Step 3: Generate and validate the structured code artifacts, iterate using validations.md and sharp_edges.md
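The three steps above can be sketched as a generate/validate/iterate loop. The `generate` stub stands in for an LLM call, and the validation rules are invented for illustration; real rules would come from references/validations.md.

```python
# Hedged sketch of the Quick Start loop: generate a structured artifact,
# validate it, and retry until it passes. generate() is a stub standing in
# for a model call; the "missing tests" rule is an illustrative assumption.

def generate(task: str, attempt: int) -> dict:
    # Stub: pretend the model adds tests only after validation feedback.
    artifact = {"task": task, "code": "def handler(): pass"}
    if attempt > 0:
        artifact["tests"] = ["test_handler"]
    return artifact

def validate(artifact: dict) -> list:
    """Return a list of rule violations (empty means the artifact passes)."""
    errors = []
    if "code" not in artifact:
        errors.append("missing code")
    if not artifact.get("tests"):
        errors.append("missing tests")  # illustrative rule
    return errors

def run(task: str, max_attempts: int = 3) -> dict:
    for attempt in range(max_attempts):
        artifact = generate(task, attempt)
        if not validate(artifact):
            return artifact
    raise RuntimeError("artifact failed validation after all attempts")

print(run("scaffold handler"))
```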
Best Practices
- Ground AI outputs in references/patterns.md for creation guidance
- Use explicit function calls to invoke tools and APIs rather than free-form code
- Validate AI results against references/validations.md rules
- Keep outputs structured (e.g., JSON or YAML) for downstream tooling
- Iterate with the sharp_edges.md guidance to mitigate common failures
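One concrete way to enforce the "keep outputs structured" practice is to reject free-form model output before it reaches downstream tooling. A minimal sketch, assuming JSON output and a set of required keys invented for illustration:

```python
# Sketch: validate that a model's output is structured JSON with the
# expected keys before accepting it. REQUIRED_KEYS is an assumption; real
# constraints would come from references/validations.md.
import json

REQUIRED_KEYS = {"language", "filename", "code"}

def parse_and_check(raw: str) -> dict:
    """Reject free-form text early: output must be JSON with required keys."""
    artifact = json.loads(raw)  # raises ValueError on non-JSON output
    missing = REQUIRED_KEYS - artifact.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return artifact

ok = parse_and_check('{"language": "python", "filename": "app.py", "code": "x = 1"}')
print(ok["filename"])  # app.py
```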
Example Use Cases
- Auto-generate a Python data model and CRUD services with function calls to a code formatter and API client generator
- Refactor a legacy JS module into modular ES6 syntax with automated tests
- AI-driven code review pass that flags potential bugs and style issues with suggested fixes
- Generate a structured API client from an OpenAPI spec using tool calls and template outputs
- Assist coding sessions by providing contextual code completions and refactor suggestions within an IDE
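To ground the code-review use case, here is a toy sketch of a review pass that emits structured findings with suggested fixes, in the shape downstream tooling would consume. The checks themselves are simple heuristics standing in for model-generated review comments.

```python
# Illustrative sketch of a review pass producing structured findings.
# The two checks (eval usage, long lines) are toy heuristics, not the
# skill's actual review rules.

def review(source: str) -> list:
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if "eval(" in line:
            findings.append({
                "line": lineno,
                "issue": "use of eval",
                "suggestion": "replace eval with ast.literal_eval",
            })
        if len(line) > 99:
            findings.append({
                "line": lineno,
                "issue": "line too long",
                "suggestion": "wrap to under 100 characters",
            })
    return findings

report = review("x = eval(user_input)\ny = 2\n")
print(report[0]["issue"])  # use of eval
```

Because each finding is a dict with a line number, issue, and suggestion, the same output can feed an IDE annotation, a PR comment bot, or an auto-fix step.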