langchain-chains
LangChain Chains Skill
Capabilities
- Compose LangChain chains using LCEL (LangChain Expression Language)
- Implement sequential chain patterns
- Design router chains for conditional logic
- Create parallel execution chains
- Handle chain fallbacks and retries
- Implement streaming chains
Target Processes
- dialogue-flow-design
- chatbot-design-implementation
Implementation Details
Chain Patterns
- LCEL Pipelines: Modern composition with | operator
- SequentialChain: Linear chain execution (legacy)
- RouterChain: Conditional routing based on input
- RunnableParallel: Parallel execution branches
- RunnableBranch: Conditional branching
Configuration Options
- Input/output key mapping
- Error handling strategies
- Retry configuration
- Streaming settings
- Batch processing options
Best Practices
- Use LCEL for new implementations
- Implement proper input/output schemas
- Add fallback chains for resilience
- Use streaming for long operations
Dependencies
- langchain-core
- langchain
Source
https://github.com/a5c-ai/babysitter/blob/main/plugins/babysitter/skills/babysit/process/specializations/ai-agents-conversational/skills/langchain-chains/SKILL.md
Overview
This skill enables composing LangChain chains using LCEL, including sequential, router, and parallel patterns. It covers streaming capabilities, fallbacks, and retries to build resilient conversational pipelines.
How This Skill Works
Leverage LCEL Pipelines for modern chain composition with the | operator, while also supporting SequentialChain for legacy linear flows and RouterChain for conditional routing. Use RunnableParallel and RunnableBranch to execute branches in parallel or conditionally, with configurable input/output mappings, error handling, and streaming settings.
When to Use It
- Designing a dialogue flow that requires LCEL-based pipelines for flexible routing
- Implementing conditional logic to route user input to different handlers
- Fetching data from multiple sources in parallel to speed up responses
- Adding fallback chains and retry policies for unreliable services
- Streaming long-running operations to the user as results are produced
Quick Start
- Step 1: Install and import the LangChain dependencies (langchain-core, langchain) and set up API keys for your model provider.
- Step 2: Define an LCEL pipeline or a combination of SequentialChain and RouterChain to model your flow.
- Step 3: Run the chain, map input/output keys, and enable streaming or retries as needed.
Best Practices
- Use LCEL for new implementations to leverage modern piping semantics
- Define clear and consistent input/output schemas for all chains
- Add fallback chains to improve resilience against failures
- Enable streaming for long operations to improve user experience
- Test chains with both LCEL Pipelines and legacy patterns to ensure compatibility
Example Use Cases
- A customer-support chatbot that routes to specialist chains based on user intent
- A purchase flow implemented as a SequentialChain with ordered steps
- A data-aggregation bot that runs multiple API calls in parallel via RunnableParallel
- A failure-tolerant workflow that includes RouterChain-based fallbacks
- A streaming chat that progressively returns results from a long-running task