
Build AI Apps with Claude API + LangChain: A Complete 2026 Developer Guide

LangChain is one of the most popular frameworks for building LLM-powered applications, and Claude is widely regarded as one of the strongest models for coding and reasoning tasks. Together, they make a powerful stack for shipping everything from simple chatbots to complex AI agents. This guide walks you through 6 core use cases with LangChain + Claude API — all code is tested and ready to run (langchain-anthropic 1.4.0).

Dev Guides · Developer Guide · Technical Docs · AI Applications · 8 min read
Published 2026.04.04

Contents

  1. Environment Setup
  2. Basic Usage
  3. Streaming Output
  4. Prompt Templates + Chain
  5. Tool Use (Function Calling)
  6. Multi-Turn Conversations
  7. Structured Output
  8. Project: CLI Chatbot
  9. What to Explore Next
  10. Wrapping Up

1. Environment Setup

1.1 Install Dependencies

pip install langchain langchain-anthropic langchain-core

Tested versions: langchain-anthropic 1.4.0 · langchain 1.2.14 · langchain-core 1.2.23

1.2 Get Your API Key

Get your key from ClaudeAPI.com in three simple steps:

  • Create an account — sign up with your email at ClaudeAPI.com
  • Add credits — pay-as-you-go, start with as little as $5
  • Get your key — Dashboard → Create Token → Select your model → Copy key

⚠️ Your API key is only shown once at creation. Save it immediately. Keys start with sk-.

1.3 Configuration

The recommended approach is to manage your key via environment variables (the export commands below are for Linux/macOS; on Windows, use setx or set instead):

export ANTHROPIC_API_KEY="sk-your-Key"
export ANTHROPIC_BASE_URL="https://claudeapi.com"

Or pass the parameters directly in code:

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    model="claude-sonnet-4-6",
    api_key="sk-your-Key",
    base_url="https://claudeapi.com",  # Direct access, no VPN required
    max_tokens=1024
)

Key parameters:

| Parameter | Value | Description |
| --- | --- | --- |
| model | claude-sonnet-4-6 | Recommended for daily use; best cost-performance ratio |
| api_key | sk-xxx | Obtain your key at claudeapi.com |
| base_url | https://api.claudeapi.com | Direct-connect endpoint; no proxy needed |
| max_tokens | 1024 | Maximum output token count; adjust as needed |

1.4 Available Models

| Model Name | Positioning | Input Price | Output Price |
| --- | --- | --- | --- |
| claude-sonnet-4-6 | All-round balanced; daily go-to | $3 / M tokens | $15 / M tokens |
| claude-opus-4-6 | Strongest reasoning; complex tasks | $5 / M tokens | $25 / M tokens |
| claude-haiku-4-5-20251001 | Ultra-fast and lightweight; simple tasks | $1 / M tokens | $5 / M tokens |

Prices are displayed in real time on the ClaudeAPI.com dashboard and may change.
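With the table above, a back-of-the-envelope cost check is easy to script. This is a sketch with prices hard-coded from the table; always verify against the live dashboard:

```python
# USD per million tokens (input, output), taken from the pricing table above
PRICES = {
    "claude-sonnet-4-6": (3.00, 15.00),
    "claude-opus-4-6": (5.00, 25.00),
    "claude-haiku-4-5-20251001": (1.00, 5.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Rough USD cost of one request for the given model."""
    price_in, price_out = PRICES[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# e.g. a 1,000-token prompt with a 500-token reply on Sonnet
print(estimate_cost("claude-sonnet-4-6", 1000, 500))  # -> 0.0105
```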


2. Basic Usage

The simplest approach is to call Claude directly. Create a working file and add the following:

import os

from langchain_anthropic import ChatAnthropic

# ============ Step 1: Basic call ============
llm = ChatAnthropic(
    model="claude-sonnet-4-6",
    api_key=os.environ.get("ANTHROPIC_API_KEY"),  # read from the environment variable set above
    base_url="https://claudeapi.com",
    max_tokens=1024
)

response = llm.invoke("Describe the advantages of Python in one sentence")
print(response.content)

Run the script from your terminal. Example output:

Python's concise syntax and rich ecosystem allow developers to quickly tackle everything from data analysis to AI development.

invoke() is the most basic calling method — pass in a string or a list of messages, and it returns the complete AI response.


3. Streaming Output

When AI responses are long, streaming lets users see the output in real time. Only one line needs to change in your script:

for chunk in llm.stream("List the 5 most commonly used Python built-in libraries with a brief description"):
    print(chunk.content, end="", flush=True)

Just swap invoke() for stream() and you get token-by-token output — ideal for chat interfaces, CLI tools, or any scenario that needs real-time feedback.


4. Prompt Templates + Chain

In real applications, you rarely pass raw strings directly. The more common pattern is to manage prompts with Prompt Templates and chain components together using LangChain’s Chain syntax:

from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate

llm = ChatAnthropic(
    model="claude-sonnet-4-6",
    api_key="sk-your-Key",
    base_url="https://claudeapi.com",
    max_tokens=1024
)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a senior {language} developer. Answer questions concisely and clearly."),
    ("human", "{question}")
])

chain = prompt | llm
response = chain.invoke({"language": "Python", "question": "What is list comprehension?"})
print(response.content)

Key concepts:

  • ChatPromptTemplate: manages system/user message templates with variable interpolation
  • | pipe operator: LangChain's LCEL syntax for chaining multiple components into a pipeline
  • chain.invoke(): pass in a variable dictionary; it auto-fills the template and calls the model
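To build intuition for what the | operator does, here is a toy plain-Python analogue of LCEL composition. This is not LangChain's actual implementation; each "component" is just a callable:

```python
class Pipeline:
    """Toy analogue of an LCEL chain: invoke() feeds the input
    through each step in order."""
    def __init__(self, *steps):
        self.steps = steps

    def __or__(self, other):
        # `pipeline | step` returns a longer pipeline, like LCEL's `prompt | llm`
        return Pipeline(*self.steps, other)

    def invoke(self, value):
        for step in self.steps:
            value = step(value)
        return value

# The "template" step fills variables; the "model" step here is just str.upper
fill = Pipeline(lambda d: f"You are a {d['language']} dev. Q: {d['question']}")
chain = fill | str.upper
print(chain.invoke({"language": "Python", "question": "What is a list?"}))
# -> YOU ARE A PYTHON DEV. Q: WHAT IS A LIST?
```

In real LangChain, prompt | llm works the same way conceptually: the prompt template's output becomes the model's input.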

5. Tool Use (Function Calling)

Letting Claude call your own defined functions is the foundational capability for building AI Agents:

from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a specified city."""
    # In production, replace with a real weather API (e.g. OpenWeatherMap)
    weather_data = {"Beijing": "Sunny, 25°C", "Shanghai": "Cloudy, 22°C"}
    return weather_data.get(city, f"{city}:No data available")

llm = ChatAnthropic(
    model="claude-sonnet-4-6",
    api_key="sk-your-Key",
    base_url="https://claudeapi.com",
    max_tokens=1024
)

llm_with_tools = llm.bind_tools([get_weather])
response = llm_with_tools.invoke("What's the weather like in Beijing today?")

if response.tool_calls:
    for tc in response.tool_calls:
        print(f"Tool called: {tc['name']}, Args: {tc['args']}")
        result = get_weather.invoke(tc["args"])
        print(f"Result: {result}")

Example output:

Tool called: get_weather, Args: {'city': 'Beijing'}
Result: Sunny, 25°C

Key steps:

  1. Use the @tool decorator to define tool functions — the docstring is sent to Claude as the tool description
  2. Use bind_tools() to attach the tools to the LLM
  3. Check response.tool_calls to retrieve the tool call requests

⚠️ The weather data in this example is mocked. In production, connect to a real weather API.
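Note that the example above only prints the tool result. In a full agent loop, you would send each result back to Claude (wrapped in a ToolMessage) so the model can compose a final answer. The dispatch step itself is plain Python; here is a sketch, where dispatch_tool_calls and the registry are illustrative names, not LangChain APIs:

```python
def dispatch_tool_calls(tool_calls, registry):
    """Run each tool call requested by the model and collect
    (tool_call_id, result) pairs, ready to be wrapped in ToolMessage
    objects and appended to the conversation.

    tool_calls: list of {"name": ..., "args": ..., "id": ...} dicts,
    the shape of response.tool_calls. registry maps tool names to callables.
    """
    results = []
    for tc in tool_calls:
        fn = registry[tc["name"]]
        results.append((tc["id"], fn(**tc["args"])))
    return results
```

With LangChain tools you would use get_weather.invoke(tc["args"]) as in the example above; the registry pattern just makes the name-to-function lookup explicit when you have several tools.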


6. Multi-Turn Conversations

Building a chatbot requires maintaining conversation context across turns:

from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage, AIMessage, SystemMessage

llm = ChatAnthropic(
    model="claude-sonnet-4-6",
    api_key="sk-your-Key",
    base_url="https://claudeapi.com",
    max_tokens=1024
)

history = [SystemMessage(content="You are a helpful and friendly assistant.")]

def chat(user_input: str) -> str:
    history.append(HumanMessage(content=user_input))
    response = llm.invoke(history)
    history.append(AIMessage(content=response.content))
    return response.content

print(chat("Hi, my name is Alex and I'm a Python developer."))
print(chat("What's my name and what do I do?"))

Example output:

Nice to meet you, Alex! Python is a fantastic language — how can I help you today?
Your name is Alex, and you're a Python developer.

💡 Heads up: The longer the conversation, the more tokens get consumed. In production, consider capping the history length (e.g. keep only the last 10 turns) to control costs.
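A minimal trimming helper along those lines (the trim_history name and the 20-message default are illustrative, not part of LangChain):

```python
def trim_history(history, max_messages=20):
    """Keep the first (system) message plus the most recent
    max_messages entries, dropping the middle of long conversations."""
    if len(history) <= max_messages + 1:
        return history
    return [history[0]] + history[-max_messages:]
```

Call it after each exchange, e.g. history = trim_history(history), before the next llm.invoke(history).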


7. Structured Output

Get Claude to return structured JSON data that your application can process directly:

from langchain_anthropic import ChatAnthropic
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import ChatPromptTemplate

llm = ChatAnthropic(
    model="claude-sonnet-4-6",
    api_key="sk-your-Key",
    base_url="https://claudeapi.com",
    max_tokens=1024
)

parser = JsonOutputParser()
prompt = ChatPromptTemplate.from_messages([
    ("system", "Return strict JSON only. Do not include any other text."),
    ("human", "Generate a fictional user profile with four fields: name, age, city, and hobby.")
])

chain = prompt | llm | parser
result = chain.invoke({})

print("Return type:", type(result).__name__)
print("Full result:", result)
print("Sample values -> name:", result["name"], "| city:", result["city"])

Example output:

{"name": "Emily Carter", "age": 28, "city": "Austin", "hobby": "Photography"}

JsonOutputParser automatically extracts and parses the JSON from Claude’s response into a Python dictionary — no manual parsing needed.


8. Project: CLI Chatbot

Let’s put everything together and build a fully working chatbot with multi-turn memory + streaming output:

from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage, AIMessage, SystemMessage

def create_chatbot():
    llm = ChatAnthropic(
        model="claude-sonnet-4-6",
        api_key="sk-your-Key",
        base_url="https://claudeapi.com",
        max_tokens=2048
    )
    history = [SystemMessage(content="You are a professional coding assistant. Answer clearly and concisely.")]

    print("=" * 50)
    print(" Claude Coding Assistant (type 'quit' to exit)")
    print("  Powered by ClaudeAPI.com")
    print("=" * 50)

    while True:
        user_input = input("\nYou: ").strip()
        if user_input.lower() in ("quit", "exit", "q"):
            print("Goodbye!")
            break
        if not user_input:
            continue
        history.append(HumanMessage(content=user_input))
        print("\nClaude: ", end="", flush=True)
        full = ""
        for chunk in llm.stream(history):
            print(chunk.content, end="", flush=True)
            full += chunk.content
        print()
        history.append(AIMessage(content=full))
        # Cap history: keep SystemMessage + last 20 exchanges (40 messages)
        if len(history) > 41:
            history = [history[0]] + history[-40:]

if __name__ == "__main__":
    create_chatbot()

Save it as chatbot.py and run python chatbot.py to try it out.

Example session:

==================================================
  Claude Coding Assistant (type 'quit' to exit)
  Powered by ClaudeAPI.com
==================================================
You: hello
Claude: Hey! Great to meet you. What can I help you with today? 😊
You: what should I learn in Python?
Claude: Great question! Here are the core modules you should cover to get hands-on with Python:
  --

Code highlights:

  • Streaming output — llm.stream(history) prints tokens as they arrive, so responses feel instant
  • History capping — automatically trims to the last 20 exchanges to prevent token overflow
  • System message preserved — the first system instruction is always kept when trimming
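Beyond simple capping, older turns can be compressed into a single summary message. Here is a sketch of that policy, with summarize left as a plug-in callable (in practice an LLM call that condenses the older messages; all names here are illustrative):

```python
def compress_history(history, summarize, keep_recent=6):
    """Replace everything between the system message (index 0) and the
    most recent keep_recent messages with one summary message produced
    by the summarize callable."""
    if len(history) <= keep_recent + 1:
        return history
    older = history[1:-keep_recent]
    return [history[0], summarize(older)] + history[-keep_recent:]
```

With LangChain messages, summarize would typically call the model on the older turns and return a SystemMessage or AIMessage containing the summary.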

9. What to Explore Next

Once you’re comfortable with the 6 core patterns, here’s where to go deeper:

| Topic | What it does | Best for |
| --- | --- | --- |
| RAG (Retrieval-Augmented Generation) | LangChain + vector store + Claude | Enterprise knowledge bases, document Q&A |
| Agent | Let Claude plan and execute multi-step tasks autonomously | Workflow automation, data analysis |
| LangGraph | Build stateful, multi-step AI pipelines | Complex business workflows |
| LangServe | Deploy any Chain as a REST API in one command | Backend service integration |
| Multi-model routing | Route requests to different models by task | Cost optimization |

10. Wrapping Up

In this guide, you’ve covered the 6 essential LangChain + Claude API patterns:

1. Basic invocation — one call with invoke()
2. Streaming output — real-time responses with stream()
3. Prompt templates + Chains — manage prompts, compose pipelines
4. Tool Use — let Claude call your own functions
5. Multi-turn conversation — stateful chatbot with context memory
6. Structured output — get clean JSON back from Claude

All examples share the same two config values:

api_key="sk-your-Key"               # get yours at ClaudeAPI.com
base_url="https://claudeapi.com"    # direct access, no VPN needed

Get started now: Sign up at ClaudeAPI.com, grab your API key, top up via card or PayPal, and ship your first LangChain + Claude app in under 5 minutes.


FAQ

Q: What’s the difference between langchain-anthropic and anthropic?

anthropic is Anthropic’s official SDK that calls the underlying API directly. langchain-anthropic is LangChain’s wrapper around it — it exposes the ChatAnthropic class and plugs seamlessly into the LangChain ecosystem (Prompts, Chains, Tools, Parsers, etc.). You can use both in the same project; they don’t conflict.

Q: Do I really only need to change base_url?

Yes. Add base_url="https://api.claudeapi.com" to any ChatAnthropic instance and everything else stays exactly the same. It’s fully compatible with the native Anthropic SDK format — zero other changes required.

Q: Can Tool Use functions be async?

Absolutely. Define your tool function with async def and use ainvoke() / astream() instead. This is ideal for I/O-heavy operations like HTTP requests or database queries.

Q: How do I handle multi-turn conversations eating too many tokens?

Three common strategies:

1. Cap history length — keep only the last N turns (this guide uses 20)
2. Periodic summarization — compress older history into a summary message
3. Model routing by turn — use Haiku for early turns, switch to Sonnet for critical exchanges

Q: What if JsonOutputParser fails to parse?

Claude occasionally wraps JSON in explanatory text, which breaks parsing. Fix it with:

1. Reinforce the system prompt: "Return JSON only. No other text."
2. Catch OutputParserException and retry
3. Switch to with_structured_output() — added in LangChain 1.x and significantly more reliable
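For option 2, a minimal fallback before retrying is to pull the first JSON object out of a mixed reply. This is an illustrative helper, not part of LangChain, and it uses naive brace matching that ignores braces inside string values:

```python
import json

def extract_json(text: str) -> dict:
    """Return the first balanced {...} block in text, parsed as JSON.
    Raises ValueError if no complete object is found."""
    start = text.find("{")
    if start == -1:
        raise ValueError("no JSON object found")
    depth = 0
    for i in range(start, len(text)):
        if text[i] == "{":
            depth += 1
        elif text[i] == "}":
            depth -= 1
            if depth == 0:
                return json.loads(text[start:i + 1])
    raise ValueError("unterminated JSON object")
```

For anything beyond quick scripts, prefer with_structured_output(), which handles this at the API level.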

