ai-ml
npx machina-cli add skill bcastelino/agent-skills-kit/ai-ml --openclaw
AI/ML Workflow Bundle
Overview
Comprehensive AI/ML workflow for building LLM applications, implementing RAG systems, creating AI agents, and developing machine learning pipelines. This bundle orchestrates skills for production AI development.
When to Use This Workflow
Use this workflow when:
- Building LLM-powered applications
- Implementing RAG (Retrieval-Augmented Generation)
- Creating AI agents
- Developing ML pipelines
- Adding AI features to applications
- Setting up AI observability
Workflow Phases
Phase 1: AI Application Design
Skills to Invoke
- ai-product - AI product development
- ai-engineer - AI engineering
- ai-agents-architect - Agent architecture
- llm-app-patterns - LLM patterns
Actions
- Define AI use cases
- Choose appropriate models
- Design system architecture
- Plan data flows
- Define success metrics
Copy-Paste Prompts
Use @ai-product to design AI-powered features
Use @ai-agents-architect to design multi-agent system
Phase 2: LLM Integration
Skills to Invoke
- llm-application-dev-ai-assistant - AI assistant development
- llm-application-dev-langchain-agent - LangChain agents
- llm-application-dev-prompt-optimize - Prompt engineering
- gemini-api-dev - Gemini API
Actions
- Select LLM provider
- Set up API access
- Implement prompt templates
- Configure model parameters
- Add streaming support
- Implement error handling
Copy-Paste Prompts
Use @llm-application-dev-ai-assistant to build conversational AI
Use @llm-application-dev-langchain-agent to create LangChain agents
Use @llm-application-dev-prompt-optimize to optimize prompts
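The streaming and error-handling actions above can be sketched provider-agnostically. This is a minimal illustration, not any provider's SDK: `call_with_retries` and `collect_stream` are hypothetical helpers, and `TimeoutError` stands in for whatever transient error type your provider raises.

```python
import time
from typing import Callable, Iterator

def call_with_retries(call: Callable[[], str], max_retries: int = 3,
                      backoff: float = 0.5) -> str:
    """Invoke an LLM call, retrying transient failures with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return call()
        except TimeoutError:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the caller
            time.sleep(backoff * (2 ** attempt))
    raise RuntimeError("unreachable")

def collect_stream(chunks: Iterator[str]) -> str:
    """Assemble a streamed response; a real app would flush each chunk to the client."""
    parts = []
    for chunk in chunks:
        parts.append(chunk)
    return "".join(parts)
```

The same retry wrapper applies regardless of which provider is selected in this phase; only the exception types and the chunk iterator differ.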
Phase 3: RAG Implementation
Skills to Invoke
- rag-engineer - RAG engineering
- rag-implementation - RAG implementation
- embedding-strategies - Embedding selection
- vector-database-engineer - Vector databases
- similarity-search-patterns - Similarity search
- hybrid-search-implementation - Hybrid search
Actions
- Design data pipeline
- Choose embedding model
- Set up vector database
- Implement chunking strategy
- Configure retrieval
- Add reranking
- Implement caching
Copy-Paste Prompts
Use @rag-engineer to design RAG pipeline
Use @vector-database-engineer to set up vector search
Use @embedding-strategies to select optimal embeddings
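The chunking and retrieval steps above can be sketched without a vector database. This is an in-memory illustration with hypothetical helpers (`chunk_text`, `retrieve`); in production, a vector store replaces the linear scan, and a real embedding model supplies the vectors.

```python
from math import sqrt
from typing import List, Tuple

def chunk_text(text: str, size: int = 200, overlap: int = 50) -> List[str]:
    """Split text into fixed-size chunks with overlap so context isn't lost at boundaries."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec: List[float],
             index: List[Tuple[str, List[float]]], k: int = 3) -> List[str]:
    """Return the top-k chunks by cosine similarity (linear scan for illustration)."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]
```

Reranking and caching, listed in the actions above, would wrap `retrieve`: rerank its top-k output with a cross-encoder, and memoize results keyed on the query.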
Phase 4: AI Agent Development
Skills to Invoke
- autonomous-agents - Autonomous agent patterns
- autonomous-agent-patterns - Agent patterns
- crewai - CrewAI framework
- langgraph - LangGraph
- multi-agent-patterns - Multi-agent systems
- computer-use-agents - Computer use agents
Actions
- Design agent architecture
- Define agent roles
- Implement tool integration
- Set up memory systems
- Configure orchestration
- Add human-in-the-loop
Copy-Paste Prompts
Use @crewai to build role-based multi-agent system
Use @langgraph to create stateful AI workflows
Use @autonomous-agents to design autonomous agent
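The agent loop behind these actions (tool integration, memory, orchestration) can be sketched in a few lines. `ToolAgent` and its `planner` are hypothetical stand-ins, not a framework API: the planner plays the role an LLM fills in CrewAI or LangGraph, and the step budget is the guard that orchestration frameworks provide for you.

```python
from typing import Any, Callable, Dict

class ToolAgent:
    """Minimal agent loop: a planner (stub for the LLM) proposes a tool call,
    the agent executes it and feeds the result back until a final answer."""

    def __init__(self, tools: Dict[str, Callable[..., Any]], planner: Callable):
        self.tools = tools
        self.planner = planner        # returns (tool_name, kwargs) or ("final", answer)
        self.memory: list = []        # running transcript shared across steps

    def run(self, task: str, max_steps: int = 5) -> Any:
        self.memory.append(("task", task))
        for _ in range(max_steps):
            action, payload = self.planner(self.memory)
            if action == "final":
                return payload
            result = self.tools[action](**payload)
            self.memory.append((action, result))  # observation fed back to the planner
        raise RuntimeError("agent exceeded step budget")  # guard against loops
```

Human-in-the-loop, listed above, slots in naturally: pause before `self.tools[action](**payload)` and ask for approval on sensitive tool calls.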
Phase 5: ML Pipeline Development
Skills to Invoke
- ml-engineer - ML engineering
- mlops-engineer - MLOps
- machine-learning-ops-ml-pipeline - ML pipelines
- ml-pipeline-workflow - ML workflows
- data-engineer - Data engineering
Actions
- Design ML pipeline
- Set up data processing
- Implement model training
- Configure evaluation
- Set up model registry
- Deploy models
Copy-Paste Prompts
Use @ml-engineer to build machine learning pipeline
Use @mlops-engineer to set up MLOps infrastructure
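The pipeline stages above (data processing, training, evaluation, deployment) share one pattern: each stage consumes the previous stage's output. A minimal sketch with a hypothetical `Pipeline` class, not any framework's API:

```python
from typing import Any, Callable, List, Tuple

class Pipeline:
    """Chain named stages (load -> preprocess -> train -> evaluate); each stage
    receives the previous stage's output, so steps stay swappable and testable."""

    def __init__(self):
        self.stages: List[Tuple[str, Callable[[Any], Any]]] = []

    def stage(self, name: str, fn: Callable[[Any], Any]) -> "Pipeline":
        self.stages.append((name, fn))
        return self  # fluent style: pipeline.stage(...).stage(...)

    def run(self, data: Any) -> Any:
        for name, fn in self.stages:
            data = fn(data)  # a real pipeline would also log/checkpoint per stage
        return data
```

Orchestrators covered by the MLOps skills add scheduling, retries, and artifact tracking around this same stage-chaining idea.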
Phase 6: AI Observability
Skills to Invoke
- langfuse - Langfuse observability
- manifest - Manifest telemetry
- evaluation - AI evaluation
- llm-evaluation - LLM evaluation
Actions
- Set up tracing
- Configure logging
- Implement evaluation
- Monitor performance
- Track costs
- Set up alerts
Copy-Paste Prompts
Use @langfuse to set up LLM observability
Use @evaluation to create evaluation framework
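Tracing latency and token usage, as listed in the actions above, can be sketched with a decorator. This is a toy stand-in for a real backend such as Langfuse; the token counts here are crude whitespace estimates, not tokenizer output.

```python
import time
from functools import wraps

TRACES = []  # in production these records would ship to an observability backend

def traced(name: str):
    """Record latency and rough token counts for each wrapped LLM call."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(prompt: str, *args, **kwargs):
            start = time.perf_counter()
            output = fn(prompt, *args, **kwargs)
            TRACES.append({
                "name": name,
                "latency_s": time.perf_counter() - start,
                "prompt_tokens": len(prompt.split()),      # crude estimate
                "completion_tokens": len(output.split()),  # crude estimate
            })
            return output
        return wrapper
    return decorator
```

Cost tracking follows directly: multiply the recorded token counts by the provider's per-token price, and alert when the running total crosses a budget.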
Phase 7: AI Security
Skills to Invoke
- prompt-engineering - Prompt security
- security-scanning-security-sast - Security scanning
Actions
- Implement input validation
- Add output filtering
- Configure rate limiting
- Set up access controls
- Monitor for abuse
- Implement audit logging
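The input-validation and rate-limiting actions can be sketched as follows. The injection patterns are illustrative assumptions only, not a vetted denylist, and `RateLimiter` is a hypothetical in-process helper; production systems need a maintained filter or classifier and a shared (e.g. Redis-backed) limiter.

```python
import re
import time
from collections import deque
from typing import Optional

# Illustrative patterns only -- not a complete prompt-injection defense.
INJECTION_PATTERNS = [r"ignore (all )?previous instructions", r"reveal.*system prompt"]

def validate_input(text: str, max_len: int = 4000) -> bool:
    """Reject oversized inputs and obvious injection phrases before the model sees them."""
    if len(text) > max_len:
        return False
    return not any(re.search(p, text, re.IGNORECASE) for p in INJECTION_PATTERNS)

class RateLimiter:
    """Sliding-window limiter: allow at most `limit` calls per `window` seconds."""

    def __init__(self, limit: int, window: float):
        self.limit, self.window = limit, window
        self.calls: deque = deque()  # timestamps of accepted calls

    def allow(self, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        while self.calls and now - self.calls[0] > self.window:
            self.calls.popleft()  # drop timestamps outside the window
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False
```

Rejected inputs and throttled calls are exactly the events the audit-logging action above should record.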
AI Development Checklist
LLM Integration
- API keys secured
- Rate limiting configured
- Error handling implemented
- Streaming enabled
- Token usage tracked
RAG System
- Data pipeline working
- Embeddings generated
- Vector search optimized
- Retrieval accuracy tested
- Caching implemented
AI Agents
- Agent roles defined
- Tools integrated
- Memory working
- Orchestration tested
- Error handling robust
Observability
- Tracing enabled
- Metrics collected
- Evaluation running
- Alerts configured
- Dashboards created
Quality Gates
- All AI features tested
- Performance benchmarks met
- Security measures in place
- Observability configured
- Documentation complete
Related Workflow Bundles
- development - Application development
- database - Data management
- cloud-devops - Infrastructure
- testing-qa - AI testing
Source
git clone https://github.com/bcastelino/agent-skills-kit.git
View on GitHub: https://github.com/bcastelino/agent-skills-kit/blob/main/skills/ai-ml/SKILL.md
Overview
The AI/ML Workflow Bundle provides a production-ready path for building LLM-powered applications, implementing retrieval-augmented generation (RAG), designing AI agents, and developing end-to-end ML pipelines. It organizes related skills into clear phases to accelerate AI system delivery from design to observability.
How This Skill Works
It defines seven phases, each listing the specific skills to invoke and concrete actions such as designing data flows, selecting models, setting up embeddings and vector databases, building agents, and enabling observability. Practitioners move from AI application design through LLM integration, RAG, agent development, ML pipelines, observability, and AI security, with phase-specific prompts and workflows to guide implementation.
When to Use It
- Building LLM-powered applications
- Implementing Retrieval-Augmented Generation (RAG)
- Creating AI agents
- Developing end-to-end ML pipelines
- Adding AI features and observability to existing apps
Quick Start
- Step 1: Define AI use cases and architecture (select models, plan data flows, assign roles).
- Step 2: Implement LLM integration and RAG setup (embedding model choice, vector database, prompt templates).
- Step 3: Develop ML pipelines and enable observability (training, registry, deployment, telemetry).
Best Practices
- Start with clear use cases and success metrics before design.
- Select LLM providers and embedding models aligned to data and latency goals.
- Design modular, reusable components per phase (design, integration, RAG, agents, pipelines).
- Prioritize observability early with telemetry, monitoring, and tracing.
- Iterate in small, testable steps with robust error handling and rollback plans.
Example Use Cases
- An LLM-powered customer support assistant using RAG over a product knowledge base.
- A LangChain-driven multi-agent system coordinating tools and tasks.
- RAG-based document search for an enterprise knowledge repository.
- A complete ML pipeline with data processing, training, model registry, and deployment.
- An AI feature in a consumer app with end-to-end observability using Langfuse and telemetry.