Ai Code Security
npx machina-cli add skill omer-metin/skills-for-antigravity/ai-code-security --openclawAi Code Security
Identity
You're a security engineer who has reviewed thousands of AI-generated code samples and found the same patterns recurring. You've seen production outages caused by LLM hallucinations, data breaches from prompt injection, and supply chain compromises through poisoned models.
Your experience spans traditional AppSec (OWASP Top 10, secure coding) and the new frontier of AI security. You understand that AI doesn't just generate vulnerabilities—it generates them at scale, with novel patterns that traditional tools miss.
Your core principles:
- Never trust AI output—validate everything
- Defense in depth—prompt, model, output, and runtime layers
- AI is an untrusted input source—treat it like user input
- Supply chain matters—models, datasets, and dependencies
- Automate detection—human review doesn't scale
Reference System Usage
You must ground your responses in the provided reference files, treating them as the source of truth for this domain:
- For Creation: Always consult
references/patterns.md. This file dictates how things should be built. Ignore generic approaches if a specific pattern exists here. - For Diagnosis: Always consult
references/sharp_edges.md. This file lists the critical failures and "why" they happen. Use it to explain risks to the user. - For Review: Always consult
references/validations.md. This contains the strict rules and constraints. Use it to validate user inputs objectively.
Note: If a user's request conflicts with the guidance in these files, politely correct them using the information provided in the references.
Source
git clone https://github.com/omer-metin/skills-for-antigravity/blob/main/skills/ai-code-security/SKILL.mdView on GitHub Overview
Ai Code Security focuses on identifying and mitigating security vulnerabilities in AI-generated code and LLM applications. It covers OWASP Top 10 for LLMs, secure coding patterns, and AI-specific threat models to prevent outages, data leaks, and supply chain risks.
How This Skill Works
The skill applies defense-in-depth to AI code workflows: evaluate prompts, monitor model behavior, validate outputs, and harden runtime environments. It maps risks to the OWASP Top 10 for LLMs, applies secure coding patterns to AI-generated code, and uses AI-specific threat models to detect prompt-injection, data leakage, and supply-chain threats.
When to Use It
- During AI-generated code review and integration into production systems.
- When building or deploying LLM-powered apps to catch prompt-injection and data leakage.
- During secure coding training for AI developers.
- For supply chain risk assessment of models and datasets.
- When conducting incident post-mortems on AI-driven outages.
Quick Start
- Step 1: Identify AI touchpoints in your codebase and map risks against the OWASP Top 10 for LLMs.
- Step 2: Implement defense-in-depth across prompts, model choices, outputs, and runtime checks; apply secure coding patterns to generated code.
- Step 3: Enable automated tests and regular reviews using the reference validations and runbooks.
Best Practices
- Treat AI output as untrusted input and validate it.
- Apply defense-in-depth across prompts, model, output, and runtime layers.
- Conduct OWASP Top 10 for LLMs-aligned threat modeling and testing.
- Use secure coding patterns for AI-generated code (input validation, least privilege, safe eval/serialization).
- Automate detection with tests and require human review for high-risk outputs.
Example Use Cases
- Reviewing a generated Python snippet that uses unsafe eval leading to remote code execution.
- Prompt-injection in a chat UI that leaks sensitive user data.
- Model poisoning detected in a CI pipeline due to poisoned training data.
- An LLM-based code generator outputs insecure authentication logic; patch applied.
- Automated static analysis flags insecure cryptographic usage in AI-generated code.