# ai-context

AI Context File Generator

Install: `npx machina-cli add skill littlebearapps/pitchdocs/ai-context --openclaw`
## Philosophy
AI coding assistants work better when they understand a project's conventions, architecture, and constraints. Context files tell AI tools how to work with the codebase — coding standards, test patterns, import conventions, key file paths, and deployment workflows.
This skill generates context files for multiple AI tools from a single codebase analysis. The same scan produces output for Claude Code, Codex CLI, Cursor, GitHub Copilot, Windsurf, Cline, and Gemini CLI.
## Supported Context Files
| File | AI Tool | Purpose | Adoption |
|---|---|---|---|
| `AGENTS.md` | Codex CLI, Cursor, Gemini CLI, Claude Code | Cross-tool agent context — identity, capabilities, conventions | 40,000+ repos |
| `CLAUDE.md` | Claude Code | Project-specific instructions loaded on every session | Native to Claude Code |
| `.cursorrules` | Cursor | Editor-specific rules for code generation | Native to Cursor |
| `.github/copilot-instructions.md` | GitHub Copilot | Repository-level instructions for Copilot suggestions | Native to Copilot |
| `.windsurfrules` | Windsurf | Project-specific rules for Windsurf's Cascade AI | Native to Windsurf |
| `.clinerules` | Cline (VS Code extension) | Project context for autonomous Cline tasks | Native to Cline |
| `GEMINI.md` | Gemini CLI | Project context loaded at the start of every Gemini CLI session | Native to Gemini CLI |
## Codebase Analysis Workflow

### Step 1: Detect Project Profile
```bash
# Language and runtime
ls package.json pyproject.toml Cargo.toml go.mod pom.xml build.gradle 2>/dev/null

# Framework detection
cat package.json 2>/dev/null | grep -E '"(react|next|express|fastify|hono|astro|svelte)"'
cat pyproject.toml 2>/dev/null | grep -E '(fastapi|django|flask|starlette)'

# Test runner
ls jest.config* vitest.config* pytest.ini pyproject.toml .mocharc* 2>/dev/null
grep -l "test" package.json 2>/dev/null

# Linter/formatter
ls .eslintrc* eslint.config* .prettierrc* biome.json ruff.toml .flake8 2>/dev/null

# TypeScript configuration
ls tsconfig*.json 2>/dev/null

# Monorepo detection
ls pnpm-workspace.yaml lerna.json nx.json turbo.json 2>/dev/null

# CI/CD
ls .github/workflows/*.yml 2>/dev/null
```
### Step 2: Extract Conventions
From the codebase analysis, extract:
- Language version — Node.js from `.nvmrc`/`engines`, Python from `requires-python`, Go from `go.mod`
- Import conventions — ESM vs CommonJS, absolute vs relative imports, path aliases (`@/`)
- Naming patterns — camelCase/snake_case for variables, PascalCase for types, file naming
- Directory structure — where source, tests, config, and docs live
- Test patterns — test file location (`__tests__/`, `*.test.ts`, `tests/`), test runner, assertion style
- Build/deploy — build command, deploy target (Cloudflare, Vercel, AWS, etc.)
- Error handling — custom error classes, Result types, try-catch patterns
- Security rules — .gitignore patterns, secret management, input validation
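As a rough sketch of how a couple of these signals can be read mechanically (the sample files and grep patterns below are illustrative, not the skill's actual implementation):

```shell
# Sketch: detect module system and Node version pin.
# Builds a throwaway sample project so the snippet is self-contained.
dir=$(mktemp -d)
cd "$dir" || exit 1
printf '{ "type": "module", "engines": { "node": ">=20" } }\n' > package.json
echo "20" > .nvmrc

# "type": "module" in package.json means ESM; otherwise assume CommonJS.
if grep -q '"type"[[:space:]]*:[[:space:]]*"module"' package.json 2>/dev/null; then
  module="ESM"
else
  module="CommonJS"
fi

# Prefer the .nvmrc pin; in real projects, engines.node is the fallback.
node_version=$(cat .nvmrc 2>/dev/null)

echo "imports: $module"
echo "node: $node_version"
```

A real analysis pass would apply the same pattern across all the convention categories above and merge the results into a single project profile.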
### Step 3: Generate Context Files

#### AGENTS.md Structure
# AGENTS.md
## Identity
[Project name] is a [brief description]. Built with [language/framework].
## Project Structure
[Key directories and their purpose]
## Coding Conventions
- [Language]: [version]
- Style: [naming conventions, import order]
- Types: [strict mode, no any, explicit returns — if TypeScript]
- Tests: [runner, location, naming pattern]
- Commits: [conventional commits, branch naming]
## Key Commands
```bash
[install command]
[test command]
[build command]
[lint command]
[deploy command]
```

## Architecture
[2-3 sentences on architecture: patterns used, key abstractions, data flow]

## Important Files
- [key config file] — [purpose]
- [main entry point] — [purpose]
- [key module] — [purpose]

## Rules
- [Critical rule 1 — e.g., never commit secrets]
- [Critical rule 2 — e.g., all public functions need tests]
- [Critical rule 3 — e.g., use direnv exec for deploy commands]
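Filled in for a hypothetical project (every detail below is invented purely to show the shape of the output), the generated file might read:

```markdown
# AGENTS.md

## Identity
widget-api is a small REST API for managing widgets. Built with TypeScript and Hono.

## Coding Conventions
- TypeScript 5.x, strict mode, no `any`
- ESM imports, `@/` path alias for `src/`
- Tests: Vitest, colocated `*.test.ts` files
- Commits: Conventional Commits

## Key Commands
- `npm install`
- `npm test`
- `npm run build`

## Rules
- Never commit secrets
- All public functions need tests
```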
#### CLAUDE.md Structure
Claude Code loads this file at the start of every session. Keep it concise — under 200 lines.
```markdown
# [Project Name]
## Quick Reference
- **Language**: [X] with [framework]
- **Test**: `[test command]`
- **Build**: `[build command]`
- **Deploy**: `[deploy command]`
## Coding Standards
[3-5 bullet points on the most important conventions]
## Architecture
[Key patterns, file organisation, data flow — 3-5 bullet points]
## Key Paths
| Path | Purpose |
|------|---------|
| `src/` | Source code |
| `tests/` | Test files |
| ... | ... |
## Rules
[Critical do/don't rules that prevent common mistakes]
```
#### .cursorrules Structure
Cursor rules are plain text, loaded when editing files in the project.
```
You are working on [project name], a [description].

Language: [X] with [framework]
Style: [conventions]

Key rules:
- [Rule 1]
- [Rule 2]
- [Rule 3]

When writing code:
- [Pattern 1]
- [Pattern 2]

When writing tests:
- [Test pattern 1]
- [Test pattern 2]

File structure:
- src/ — source code
- tests/ — test files
```
#### .github/copilot-instructions.md Structure
```markdown
# Copilot Instructions

## Project Context
This is a [description] built with [language/framework].

## Coding Standards
- [Convention 1]
- [Convention 2]
- [Convention 3]

## Patterns to Follow
- [Pattern 1]
- [Pattern 2]

## Patterns to Avoid
- [Anti-pattern 1]
- [Anti-pattern 2]
```
#### .windsurfrules Structure
Windsurf's Cascade AI reads .windsurfrules from the project root. Format is plain text — similar to .cursorrules. Windsurf supports both global (~/.codeium/windsurf/memories/global_rules.md) and project-level rules.
```markdown
# [Project Name] — Windsurf Rules

## Project Context
[Project name] is a [description]. Built with [language/framework].

## Coding Standards
- [Convention 1]
- [Convention 2]
- [Convention 3]

## Key Files
- [main entry] — [purpose]
- [config file] — [purpose]

## Commands
[test command]
[build command]
[deploy command]

## Rules
- [Critical rule 1]
- [Critical rule 2]
```
#### .clinerules Structure

Cline reads .clinerules from the project root. It supports a richer format than .cursorrules, including task checklists.
```markdown
# [Project Name]

## Project Overview
[1-2 sentence description of what the project is and does]

## Tech Stack
- **Language**: [X]
- **Framework**: [Y]
- **Test runner**: [Z]
- **Linter**: [W]

## Coding Standards
- [Rule 1]
- [Rule 2]
- [Rule 3]

## Important Paths
- `[path]` — [purpose]

## Before Committing
- [ ] Tests pass (`[test command]`)
- [ ] Linting passes (`[lint command]`)
- [ ] No secrets or credentials in changed files
```
#### GEMINI.md Structure
Gemini CLI reads GEMINI.md from the project root (or .gemini/GEMINI.md). Keep it concise — Gemini CLI's context window handling differs from Claude Code.
```markdown
# [Project Name]

[One sentence: what is this project and who is it for]

## Tech Stack
[Language], [Framework], [Key dependencies]

## Commands
[test command]
[build command]
[deploy command]

## Conventions
- [Convention 1]
- [Convention 2]
- [Convention 3]

## Key Paths
- `[path]`: [purpose]
```
## Staleness Audit
When running in audit mode, check existing context files for drift:
- Version mismatch — Does the context file reference the correct language/framework version?
- Missing commands — Are test/build/deploy commands still accurate? Run them to verify.
- Stale paths — Do referenced file paths still exist?
- New conventions — Has the project adopted new patterns (e.g., added ESLint, switched to Vitest) that aren't reflected?
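The stale-path check can be sketched as follows (the sample AGENTS.md and the path regex are illustrative; a real audit would scan every context file and a broader set of directories):

```shell
# Sketch: flag paths mentioned in a context file that no longer exist on disk.
# Builds a tiny sample project so the snippet is self-contained.
dir=$(mktemp -d)
cd "$dir" || exit 1
mkdir -p src
touch src/main.ts
printf '%s\n' '- src/main.ts (entry point)' '- src/index.ts (old entry)' > AGENTS.md

# Pull out path-like tokens and test each one for existence.
stale=""
for p in $(grep -oE '(src|tests)/[A-Za-z0-9_./-]+' AGENTS.md); do
  [ -e "$p" ] || stale="$stale $p"
done
echo "stale paths:$stale"
```

Here `src/main.ts` exists, so only `src/index.ts` is reported as stale.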
Report format:
```
AI Context Audit:
✓ AGENTS.md — up to date (last modified: 2 days ago)
⚠ CLAUDE.md — references jest but vitest.config.ts detected
✗ .cursorrules — references src/index.ts but file moved to src/main.ts
· .github/copilot-instructions.md — not present
· .windsurfrules — not present (recommend generating)
· .clinerules — not present (recommend generating)
· GEMINI.md — not present (recommend generating)
```
## AGENTS.md Spec Version Tracking
The AGENTS.md format is defined by the agents.md spec. PitchDocs tracks the pinned version in upstream-versions.json.
### Current Stable: v1.0
The v1.0 spec defines these standard sections:
| Section | Purpose |
|---|---|
| Identity | Project name, description, what it does |
| Project Structure | Key directories and their purposes |
| Conventions | Coding standards, naming, commit conventions |
| Commands | Test, build, deploy, lint commands |
| Architecture | System design, data flow, key abstractions |
| Files | Important files and their roles |
| Rules | Hard constraints (security, compatibility) |
### Proposed v1.1 Features (Draft — Do Not Implement)
These features are under discussion and may change before stabilisation:
| Feature | Status | Notes |
|---|---|---|
| Sub-agents section | Draft | Nested agent definitions within AGENTS.md |
| Tool permissions | Proposed | Declaring which tools an agent can use |
| `.agent/` directory | Proposed | Directory-based agent definitions (alternative to single file) |
| `when:` frontmatter | Draft | Trigger conditions for agent activation |
Guidance: Do not generate these proposed sections until they reach stable status. Monitor the agents.md releases for v1.1 announcement. The check-upstream GitHub Action will flag when a new version is available. When v1.1 reaches stable, update the pinned version in upstream-versions.json and add the new sections to the generation templates above.
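The pin might look something like the following entry in upstream-versions.json (the field names here are guesses for illustration; match the file's actual schema):

```json
{
  "agents-md": {
    "pinned": "1.0",
    "source": "https://agents.md"
  }
}
```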
## Anti-Patterns
- Don't dump entire codebase — context files should be concise summaries, not file listings
- Don't include secrets — no API keys, tokens, or credentials in context files
- Don't repeat framework docs — reference framework conventions, don't reproduce them
- Don't over-constrain — provide patterns, not rigid rules that prevent creative problem-solving
- Don't include session-specific state — context files should be durable across sessions
## Source

Skill definition: https://github.com/littlebearapps/pitchdocs/blob/main/.claude/skills/ai-context/SKILL.md

## Overview
ai-context analyzes a repository to produce project-specific AI tool context files (AGENTS.md, CLAUDE.md, .cursorrules, copilot-instructions.md, and more). These files encode conventions, architecture, and workflows so AI coding assistants operate reliably within the codebase.
## How This Skill Works
The tool scans the codebase to detect language versions, imports, naming patterns, directory structure, tests, and deployment signals. It then formats and exports context content tailored for Claude Code, Codex CLI, Cursor, Copilot, and Gemini CLI, so AI tools understand project conventions and workflows from startup.
## When to Use It
- When setting up AI tool context for a new repository
- When standardizing AI assistants across a monorepo
- Before onboarding a new AI coding assistant to a project
- After major refactors that change conventions or architecture
- When migrating to additional AI tools (e.g., Copilot to Claude Code)
## Quick Start
- Step 1: Run the codebase analysis to detect project profile and conventions
- Step 2: Review and, if needed, tweak the generated AGENTS.md, CLAUDE.md, .cursorrules, and copilot-instructions.md
- Step 3: Commit the generated files to the repository and configure AI tools to load them
## Best Practices
- Run the analyzer after the initial repo scan and after significant refactors
- Keep AGENTS.md, CLAUDE.md, and .cursorrules in sync with code changes
- Review extracted conventions for accuracy before distribution
- Exclude secrets; ensure security rules are reflected in context files
- Document any non-standard workflows or CI/CD steps within the context files
## Example Use Cases
- A React + Next.js frontend with a Python FastAPI backend, using npm and pyproject.toml in a monorepo
- A Go microservice cluster with Docker, GitHub Actions, and a multi-module go.mod workspace
- A Python data science project with pytest tests and Poetry-managed dependencies
- A Node CLI tool written in TypeScript with ESLint/Prettier, deployed via CI
- An enterprise Rails app with PostgreSQL, RSpec tests, and CircleCI deployments