test-integration-docker
npx machina-cli add skill yu-iskw/coding-agent-fabric/test-integration-docker --openclaw

Integration Testing with Docker & Real Skills
This skill executes integration tests in a containerized environment using actual skill repositories from the community. It uses a Makefile to manage the build and test lifecycle.
Workflow Checklist
- Step 1: Environment Readiness
  - Verify Docker is running.
- Step 2: Execute Tests
  - Run `make -C integration_tests test` in the project root.
- Step 3: Lifecycle Management
  - Use `make -C integration_tests clean` if needed to prune images.
Detailed Instructions
1. Run Integration Tests
Ensure the Docker daemon is active, then execute the following command from the project root to build the image and run the full test suite:
```shell
make -C integration_tests test-verbose
```
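If you prefer the daemon check scripted rather than manual, a small preflight sketch (assuming you run it from the repository root) could look like:

```shell
# Preflight: run the suite only when the Docker daemon is reachable.
if docker info >/dev/null 2>&1; then
  make -C integration_tests test-verbose
else
  echo "Docker daemon is not running; start it and retry" >&2
fi
```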
2. Standard Makefile Targets
You can also use the Makefile directly for granular control:
```shell
# Build the image only
make -C integration_tests build

# Clean up images
make -C integration_tests clean
```
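The Makefile itself is not reproduced here; as a rough orientation, a minimal `integration_tests/Makefile` wiring these targets together might look like the sketch below (the image name, Dockerfile location, and `VERBOSE` variable are assumptions, not the repository's actual contents):

```make
IMAGE := caf-integration-tests

.PHONY: build test test-verbose clean

build:
	docker build -t $(IMAGE) .

test: build
	docker run --rm $(IMAGE)

test-verbose: build
	docker run --rm -e VERBOSE=1 $(IMAGE)

clean:
	docker rmi -f $(IMAGE) || true
```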
Success Criteria
- Docker image builds successfully with `git` and `make` installed.
- `caf` CLI is linked and available in the container.
- Real-world skills are cloned from GitHub (Vercel, Anthropic).
- `caf skills add` successfully installs these real skills.
- All integration scenarios pass.
Scenarios Covered
- CLI Health: Version and help checks.
- Real-World Skills (Vercel): Clones `vercel-labs/agent-skills` and installs skills like `web-design-guidelines`.
- Real-World Skills (Anthropic): Clones `anthropics/skills` and installs document skills.
- Subagent Verification: Lists available subagents in the clean container.
- Multi-Agent Matrix: Tests are executed for both Claude Code and Cursor agents.
- Scope Isolation: Tests are executed for both `project` and `global` scopes, verifying that global installations do not leak into the project's `package.json`.
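The scope-isolation scenario can be sketched as a before/after checksum comparison on the project manifest. In this sketch the global-scope install step is a placeholder (`true`), since the exact `caf` invocation is not shown here; the real suite would run the CLI at that point:

```shell
# Sketch: detect whether an install step mutated the project's package.json.
set -eu
workdir=$(mktemp -d)
printf '{"name":"demo","dependencies":{}}\n' > "$workdir/package.json"

before=$(cksum < "$workdir/package.json")

# Placeholder for a global-scope install; the real suite runs the caf CLI here.
true

after=$(cksum < "$workdir/package.json")
if [ "$before" = "$after" ]; then
  echo "scope isolation OK: package.json untouched"
else
  echo "scope isolation FAILED: package.json changed" >&2
fi
rm -rf "$workdir"
```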
Source
https://github.com/yu-iskw/coding-agent-fabric/blob/main/.claude/skills/test-integration-docker/SKILL.md

Overview
This skill runs end-to-end integration tests inside a Docker container against real community skills (Vercel, Anthropic, Expo). It uses a Makefile to manage the build, test, and cleanup lifecycle for reproducible results.
How This Skill Works
Tests are executed inside a container built by the integration_tests Makefile. Run the verbose test target to exercise all scenarios, and use build or clean targets to manage images and artifacts.
When to Use It
- Need reproducible end-to-end validation of real skills inside a containerized environment.
- Test real-world skills from Vercel (vercel-labs/agent-skills) and install examples like web-design-guidelines.
- Validate Anthropic skills and document-based skills using actual GitHub repos.
- Verify subagent listing and multi-agent coordination within a clean container.
- Check scope isolation between project and global installations to prevent leakage into package.json.
Quick Start
- Step 1: Ensure Docker is running, then run `make -C integration_tests test-verbose`.
- Step 2: Build the image only: `make -C integration_tests build`
- Step 3: Clean up: `make -C integration_tests clean`
Best Practices
- Ensure Docker is running before starting tests.
- Use `make -C integration_tests test-verbose` for full coverage.
- Leverage Makefile targets (`build`, `test`, `clean`) for granular control.
- Verify the `caf` CLI is linked and available inside the container.
- Prune images and artifacts after runs to reclaim disk space.
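The pruning advice above can be scripted defensively; a sketch that skips quietly when the tools or the expected repository layout are absent (run from the repository root):

```shell
# Reclaim disk space after a run; skip when docker or the test dir is missing.
if command -v docker >/dev/null 2>&1 && [ -d integration_tests ]; then
  make -C integration_tests clean   # remove the test image via the Makefile target
  docker image prune -f             # drop dangling layers left behind by rebuilds
else
  echo "skipping cleanup: docker or integration_tests not found" >&2
fi
```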
Example Use Cases
- Build and test Vercel-based skills by cloning vercel-labs/agent-skills and installing skills like web-design-guidelines.
- Clone anthropics/skills and run tests for document-based skills within the container.
- Run the subagent verification scenario to list available subagents in a clean container.
- Execute multi-agent matrix tests for Claude Code and Cursor agents.
- Validate scope isolation by testing both project and global scopes and ensuring no leakage into package.json.