
langchain

npx machina-cli add skill G1Joshi/Agent-Skills/langchain --openclaw

LangChain

LangChain is the standard framework for chaining LLM components. In 2025, the focus shifted to LangGraph for building stateful, cyclic agents.

When to Use

  • Orchestration: Chaining "Prompt -> LLM -> Parser".
  • Agents: Using LangGraph to build agents that can loop, retry, and keep state.
  • Integrations: 1000+ connectors for vector DBs, APIs, and tools.

Core Concepts

LangGraph

The successor to AgentExecutor. A graph-based way to define agent flows with cycles (loops).

LCEL (LangChain Expression Language)

The declarative pipe syntax: prompt | llm | output_parser.

LangSmith

Observability platform to trace and debug complex chains.
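Tracing is typically switched on through environment variables rather than code changes; a sketch (the project name is illustrative):

```python
import os

# With these set, LangChain runs are sent to LangSmith automatically;
# the chain code itself does not change.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
os.environ["LANGCHAIN_PROJECT"] = "my-agent"
```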

Best Practices (2025)

Do:

  • Use LangGraph: For any non-trivial agent. AgentExecutor is legacy.
  • Use LCEL: It enables streaming and async out of the box.
  • Trace everything: Connect to LangSmith to see why your agent failed.

Don't:

  • Don't over-abstract: If a simple Python function works, don't wrap it in a Chain.

References

Source

git clone https://github.com/G1Joshi/Agent-Skills.git

The skill file lives at skills/ai-ml/langchain/SKILL.md (view on GitHub: https://github.com/G1Joshi/Agent-Skills/blob/main/skills/ai-ml/langchain/SKILL.md).

Overview

LangChain is the standard framework for chaining LLM components. In 2025, the focus shifted to LangGraph for building stateful, cyclic agents. It enables orchestration with prompt–LLM–parser pipelines, thousands of integrations, and observability through LangSmith.

How This Skill Works

Define pipelines with LCEL by connecting prompt, llm, and output_parser (prompt | llm | output_parser). For complex workflows, use LangGraph to create graph-based flows with loops and state, while LangSmith provides tracing to diagnose issues. Leverage 1000+ connectors to integrate vector stores, APIs, and tools.

When to Use It

  • Orchestrating a prompt → LLM → parser pipeline for structured results.
  • Building looping, stateful agents using LangGraph that can retry and remember context.
  • Integrating with vector DBs, APIs, and tools via 1000+ connectors.
  • Observing and debugging complex chains with LangSmith.
  • Migrating non-trivial workflows from AgentExecutor to LangGraph.

Quick Start

  1. Install LangChain and import the required modules.
  2. Build a simple chain using LCEL: prompt | llm | output_parser.
  3. For advanced scenarios, migrate to LangGraph for stateful agents and enable tracing with LangSmith.

Best Practices

  • Use LangGraph for any non-trivial agent; AgentExecutor is legacy.
  • Use LCEL to enable streaming and async execution out of the box.
  • Trace everything by connecting to LangSmith to diagnose failures.
  • Don't over-abstract: if a simple Python function works, don't wrap it in a Chain.
  • Leverage the wide connector ecosystem for rapid integrations with data stores, APIs, and tools.

Example Use Cases

  • A simple Chain: Prompt -> LLM -> OutputParser to extract structured data.
  • A LangGraph-based agent that loops and retries until a condition is met.
  • A data-driven assistant that queries a vector DB and calls tools via connectors.
  • Streaming responses with LCEL to show real-time updates in an interactive workflow.
  • An end-to-end observability workflow using LangSmith to debug complex chains.

