Using claw.zip with LangChain
LangChain chains and agents can generate large prompts. claw.zip compresses them automatically, reducing OpenClaw API costs across your entire pipeline. This is especially valuable for OpenClaw users running multi-step agents or RAG pipelines, since each intermediate prompt is compressed too.
Before You Start
Prerequisites
- Python 3.9+
- LangChain installed
- An OpenClaw API key
- A claw.zip account and API key
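Before wiring anything up, it can help to fail fast if a key is missing. Here is a minimal sketch; the environment variable names OPENCLAW_API_KEY and CLAWZIP_API_KEY are assumptions, so adjust them to match your own setup.

```python
import os

def require_keys(*names: str) -> dict:
    """Return the requested environment variables, raising if any is unset."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {n: os.environ[n] for n in names}

# Hypothetical variable names -- use whatever your deployment defines:
# keys = require_keys("OPENCLAW_API_KEY", "CLAWZIP_API_KEY")
```

Running this check once at startup surfaces configuration problems immediately, rather than as an authentication error deep inside a chain.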
Step-by-Step
Integration Steps
Install dependencies
Install LangChain with the Anthropic integration.
pip install langchain langchain-anthropic
Configure ChatAnthropic with claw.zip
Pass the claw.zip base URL and API key to the ChatAnthropic model.
from langchain_anthropic import ChatAnthropic
llm = ChatAnthropic(
    model="claude-sonnet-4-20250514",
    anthropic_api_key="your_openclaw_key",
    anthropic_api_url="https://api.claw.zip",
    default_headers={
        "x-api-key": "your_clawzip_key",
    },
)
Use in a chain
Use the configured LLM in any LangChain chain. Compression is transparent.
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that answers concisely."),
    ("human", "{input}"),
])
chain = prompt | llm
response = chain.invoke({
    "input": "What are the benefits of prompt compression?"
})
print(response.content)
Use with agents
LangChain agents work the same way. claw.zip compresses the prompts that agents generate internally.
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool

# The agent needs at least one tool; this search stub is illustrative.
@tool
def search(query: str) -> str:
    """Search the web for the given query."""
    return f"Results for: {query}"

tools = [search]
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)
result = executor.invoke({"input": "Search for AI cost optimization"})
Done
You Are All Set
All LangChain calls now go through claw.zip. This is especially valuable for OpenClaw users running agents and complex chains that generate many intermediate prompts, each of which is compressed automatically.
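One last tip: rather than hardcoding keys as in the snippets above, you may prefer to centralize the claw.zip wiring in one helper and read keys from the environment. A minimal sketch; the helper name and the OPENCLAW_API_KEY / CLAWZIP_API_KEY variable names are illustrative, not part of either API.

```python
import os
from typing import Optional

CLAWZIP_BASE_URL = "https://api.claw.zip"

def clawzip_kwargs(openclaw_key: Optional[str] = None,
                   clawzip_key: Optional[str] = None) -> dict:
    """Build the keyword arguments that route ChatAnthropic through claw.zip."""
    return {
        "anthropic_api_key": openclaw_key or os.environ.get("OPENCLAW_API_KEY", ""),
        "anthropic_api_url": CLAWZIP_BASE_URL,
        "default_headers": {
            "x-api-key": clawzip_key or os.environ.get("CLAWZIP_API_KEY", ""),
        },
    }

# Usage sketch:
# llm = ChatAnthropic(model="claude-sonnet-4-20250514", **clawzip_kwargs())
```

Keeping the routing in one place means every chain and agent in your codebase picks up claw.zip from a single configuration point.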
More Guides