LangChain & LangGraph
Build sophisticated LLM applications with composable chains and agent graphs.
Quick Start
```bash
pip install langchain langchain-openai langchain-anthropic langgraph
```

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate

# Simple chain: a prompt piped into a chat model
llm = ChatAnthropic(model="claude-3-sonnet-20240229")
prompt = ChatPromptTemplate.from_template("Explain {topic} in simple terms.")
chain = prompt | llm
response = chain.invoke({"topic": "quantum computing"})
```
LCEL (LangChain Expression Language)
Compose chains with the pipe operator:
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

# Chain with output parsing
chain = (
    {"topic": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
result = chain.invoke("machine learning")
```
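The pipe operator is essentially function composition: each runnable's output becomes the next one's input. A stdlib-only sketch of the idea (this is a conceptual toy, not LangChain's actual `Runnable` implementation):

```python
# Minimal re-creation of the LCEL pipe idea: `|` composes stages,
# and invoke() threads a value through them in order.
class Step:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # Compose: run self first, then feed the result to other
        return Step(lambda value: other.invoke(self.invoke(value)))

prompt = Step(lambda topic: f"Explain {topic} in simple terms.")
fake_llm = Step(lambda text: f"LLM answer to: {text}")
parse = Step(lambda msg: msg.strip())

chain = prompt | fake_llm | parse
print(chain.invoke("quantum computing"))
# -> LLM answer to: Explain quantum computing in simple terms.
```

Real LCEL adds batching, streaming, and async on top of this composition pattern.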
RAG Pipeline
```python
from langchain_community.vectorstores import Chroma
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import OpenAIEmbeddings

# Create vector store ("documents" is a list of Document objects prepared earlier)
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(documents, embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

# RAG prompt
prompt = ChatPromptTemplate.from_template("""
Answer based on the following context:
{context}
Question: {question}
""")

# Join retrieved documents into a single context string
def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

# RAG chain
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
answer = rag_chain.invoke("What is the refund policy?")
```
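Under the hood, a retriever simply maps a query to its most relevant documents. A stdlib-only toy version that ranks by word overlap (real vector stores rank by embedding similarity instead):

```python
import re

# Toy retriever: rank documents by word overlap with the query.
# Real retrievers use embedding similarity; this only illustrates the shape.
def tokenize(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    q = tokenize(query)
    scored = [(len(q & tokenize(doc)), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

docs = [
    "Refunds are issued within 30 days of purchase.",
    "Shipping takes 5-7 business days.",
    "Our refund policy covers unopened items only.",
]
print(retrieve("What is the refund policy?", docs))
```

Whatever the ranking method, the retriever's contract is the same: query in, top-k documents out, which is why it can slot directly into the `{"context": ...}` position of the chain.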
LangGraph Agent
```python
from typing import Annotated, TypedDict
import operator

from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langgraph.graph import StateGraph, END
from langgraph.prebuilt import ToolNode

# Define state: messages accumulate across steps
class AgentState(TypedDict):
    messages: Annotated[list, operator.add]

# Define tools
@tool
def search(query: str) -> str:
    """Search the web."""
    return f"Results for: {query}"

@tool
def calculator(expression: str) -> str:
    """Calculate a mathematical expression."""
    # NOTE: eval on untrusted input is unsafe; restrict or sandbox in production
    return str(eval(expression))

tools = [search, calculator]
llm_with_tools = llm.bind_tools(tools)  # llm from the Quick Start section

# Agent node: call the model with the accumulated messages
def call_model(state: AgentState):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

# Router: keep looping while the model requests tool calls
def should_continue(state: AgentState):
    return "continue" if state["messages"][-1].tool_calls else "end"

# Create graph
graph = StateGraph(AgentState)

# Add nodes
graph.add_node("agent", call_model)
graph.add_node("tools", ToolNode(tools))

# Add edges
graph.set_entry_point("agent")
graph.add_conditional_edges(
    "agent",
    should_continue,
    {"continue": "tools", "end": END},
)
graph.add_edge("tools", "agent")

# Compile and run
app = graph.compile()
result = app.invoke({"messages": [HumanMessage(content="What is 25 * 4?")]})
```
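The `calculator` tool above uses `eval`, which executes arbitrary code and is unsafe on untrusted input. A stdlib-only sketch of a safer arithmetic evaluator built on `ast`, restricted to a whitelist of operators:

```python
import ast
import operator as op

# Whitelist of allowed arithmetic operators
_OPS = {
    ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
    ast.Div: op.truediv, ast.Pow: op.pow, ast.USub: op.neg,
}

def safe_eval(expression: str) -> float:
    """Evaluate basic arithmetic without eval's code-execution risk."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"Unsupported expression: {expression!r}")
    return walk(ast.parse(expression, mode="eval").body)

print(safe_eval("25 * 4"))  # 100
```

Anything outside the whitelist (function calls, attribute access, imports) raises `ValueError` instead of executing.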
Structured Output
```python
from pydantic import BaseModel, Field

class Person(BaseModel):
    name: str = Field(description="Person's name")
    age: int = Field(description="Person's age")
    occupation: str = Field(description="Person's job")

# Structured LLM: responses are validated against the schema
structured_llm = llm.with_structured_output(Person)
result = structured_llm.invoke("John is a 30 year old engineer")
# Person(name='John', age=30, occupation='engineer')
```
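`with_structured_output` works by steering the model to emit data matching the schema (typically via tool calling or JSON mode) and then validating it. A stdlib-only sketch of the validation half, using a dataclass in place of Pydantic:

```python
import json
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    age: int
    occupation: str

def parse_person(raw: str) -> Person:
    """Validate model output (assumed to be JSON) against the schema."""
    data = json.loads(raw)
    return Person(name=str(data["name"]), age=int(data["age"]),
                  occupation=str(data["occupation"]))

# Simulated model output
model_output = '{"name": "John", "age": 30, "occupation": "engineer"}'
print(parse_person(model_output))
```

Pydantic adds richer coercion and error reporting on top of this pattern, which is why LangChain uses it for schema validation.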
Memory
```python
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory

# The prompt needs a slot where prior messages are injected
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{input}"),
])
chain = prompt | llm

# In-memory session store (use a persistent store in production)
store = {}

def get_session_history(session_id: str):
    if session_id not in store:
        store[session_id] = ChatMessageHistory()
    return store[session_id]

# Chain with memory
with_memory = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="input",
    history_messages_key="history",
)

# Use with a session ID
response = with_memory.invoke(
    {"input": "My name is Alice"},
    config={"configurable": {"session_id": "user123"}},
)
```
Streaming
```python
# Stream tokens (a prompt | llm chain yields message chunks with .content)
async for chunk in chain.astream({"topic": "AI"}):
    print(chunk.content, end="", flush=True)

# Stream granular events (useful for debugging)
async for event in chain.astream_events({"topic": "AI"}, version="v2"):
    print(event)
```
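Conceptually, `astream` hands you an async generator that yields chunks as the model produces them. A stdlib `asyncio` sketch of that consumption pattern, with a fake token source standing in for the model:

```python
import asyncio

# Fake token stream: an async generator yielding chunks, like astream does
async def fake_astream(text: str):
    for token in text.split():
        await asyncio.sleep(0)  # simulate waiting on the network
        yield token + " "

async def main():
    collected = []
    async for chunk in fake_astream("streaming happens token by token"):
        collected.append(chunk)  # in a UI you would render each chunk here
    return "".join(collected)

print(asyncio.run(main()))
```

The `async for` loop is identical in shape to the real `chain.astream(...)` usage above; only the source of chunks differs.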
LangSmith Tracing
```python
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "your-api-key"
os.environ["LANGCHAIN_PROJECT"] = "my-project"

# All chains are now traced automatically
chain.invoke({"topic": "AI"})
```
Resources
- LangChain Docs: https://python.langchain.com/docs/introduction/
- LangGraph Docs: https://langchain-ai.github.io/langgraph/
- LangSmith: https://smith.langchain.com/
- LangChain Hub: https://smith.langchain.com/hub
- LangChain Templates: https://github.com/langchain-ai/langchain/tree/master/templates
Source
git clone https://github.com/Makiya1202/ai-agents-skills
(The skill file is at skills/langchain/SKILL.md.)
Overview
LangChain and LangGraph enable sophisticated LLM applications through composable chains and agent graphs. Use them to build RAG pipelines, agent workflows, and complex orchestration with features like memory and structured outputs.
How This Skill Works
Developers install the LangChain packages, define prompts and LLMs, and compose them into runnables with the LCEL pipe operator. LangGraph adds stateful agent graphs with tools, while RAG pipelines connect embeddings, retrievers, and prompts to bring relevant context into the model's answer.
When to Use It
- Create a RAG-enabled QA system that retrieves context before answering.
- Orchestrate multi-step LLM tasks with LCEL-based pipelines.
- Build agent workflows that invoke tools like web search or calculators.
- Implement memory-enabled chats that persist user history across sessions.
- Design systems that produce structured outputs for downstream parsing.
Quick Start
- Step 1: pip install langchain langchain-openai langchain-anthropic langgraph
- Step 2: Create a simple chain by piping a prompt into an LLM: chain = prompt | llm
- Step 3: Run a test: response = chain.invoke({"topic": "quantum computing"})
Best Practices
- Leverage LCEL to compose clean, readable chains using the pipe style.
- Use structured output schemas to produce machine-friendly results.
- Combine embeddings and retrievers in RAG pipelines with context-aware prompts.
- Incorporate memory components (e.g., RunnableWithMessageHistory) for session continuity.
- Test prompts, LLM choices, and tool integration incrementally before full graphs.
Example Use Cases
- Simple chain: prompt -> llm to explain a topic (e.g., explain {topic} in simple terms).
- RAG pipeline: embed documents, build a vector store, create a retriever, and chain context with a prompt and llm.
- LangGraph Agent: define tools with @tool, assemble a StateGraph, compile, and run with a messages input.
- Structured Output: parse LLM results into a typed structure (e.g., a Pydantic model) for downstream use.
- Memory-enabled chat: persist conversation history across sessions using a memory wrapper.