LangChain as LLM Provider
LiveKit provides excellent native plugins for OpenAI, Anthropic, Google, and many other providers. LangChain adds value in three specific scenarios: provider flexibility (swap models by changing one line), reusing existing LangChain chains and tools, and LangGraph orchestration for complex workflows. This chapter covers when to use LangChain, how the wrapper works, and how to configure it.
What you'll learn
- When LangChain adds value over native LiveKit plugins (and when it does not)
- How to install and configure `livekit-plugins-langchain`
- How the `LangChainLLM` wrapper translates between LangChain and LiveKit
- How to swap LLM providers without changing agent logic
- How function calling works through the wrapper automatically
When to use LangChain vs native plugins
| Scenario | Recommendation |
|---|---|
| Simple voice agent with OpenAI or Anthropic | Native plugin — fewer dependencies, simpler setup |
| Need to swap providers frequently or A/B test models | LangChain — uniform interface across 60+ providers |
| Existing LangChain codebase to integrate | LangChain — reuse your investment |
| Complex multi-step workflows | LangGraph via LangChain plugin |
| Maximum performance, minimum overhead | Native plugin — one fewer abstraction layer |
LangChain is optional, not required
If you do not need provider abstraction, existing chain reuse, or LangGraph orchestration, the native plugins are simpler with fewer dependencies. You can also mix approaches — use a native STT plugin with a LangChain LLM wrapper and a native TTS plugin.
Installation and setup
```shell
# Install the LiveKit LangChain plugin
pip install "livekit-plugins-langchain"

# Install the LangChain provider you want
pip install langchain-openai
# or: pip install langchain-anthropic
# or: pip install langchain-google-genai
```

Building a voice agent with LangChain
The integration is two lines: create a LangChain chat model and wrap it with LangChainLLM. Everything else — STT, TTS, agent instructions, tools — works exactly the same as native plugins.
```python
from livekit.agents import AgentServer, Agent, AgentSession
from livekit.plugins import deepgram, cartesia
from livekit.plugins.langchain import LangChainLLM
from langchain_openai import ChatOpenAI

server = AgentServer()

langchain_llm = ChatOpenAI(
    model="gpt-4o",
    streaming=True,
    temperature=0.7,
)
lk_llm = LangChainLLM(langchain_llm)

@server.rtc_session
async def entrypoint(session: AgentSession):
    await session.start(
        agent=Agent(
            instructions="""You are a helpful voice assistant.
            Keep responses concise and conversational.""",
        ),
        room=session.room,
        stt=deepgram.STT(model="nova-3"),
        llm=lk_llm,
        tts=cartesia.TTS(voice="<voice-id>"),
    )

if __name__ == "__main__":
    server.run()
```

Streaming must be enabled
Always set streaming=True on your LangChain model. Voice agents need token-by-token streaming so TTS can begin synthesizing audio before the full response is generated. Without streaming, the agent waits for the complete response before speaking, adding seconds of latency.
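The effect is easy to see with a toy simulation (no LiveKit or LangChain required). The snippet below fakes an LLM that produces tokens at a fixed delay: with streaming, the first token (the point at which TTS could start speaking) arrives after one delay; without streaming, the caller waits for the entire response.

```python
import time

def stream_tokens(tokens, delay=0.02):
    """Simulate an LLM that yields one token at a time."""
    for tok in tokens:
        time.sleep(delay)
        yield tok

def first_token_latency(gen):
    """Time until the first token arrives (when TTS could start)."""
    start = time.monotonic()
    first = next(gen)
    return first, time.monotonic() - start

tokens = ["Hello", ",", " world", "!"]

# Streaming: the first token arrives after ~1 delay.
first, streaming_latency = first_token_latency(stream_tokens(tokens))

# Non-streaming: the caller waits for the whole response (~4 delays).
start = time.monotonic()
full_response = "".join(stream_tokens(tokens))
non_streaming_latency = time.monotonic() - start
```

With real models the gap is seconds, not milliseconds: time-to-first-token is typically a small fraction of total generation time.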
Swapping providers
Because the wrapper accepts any LangChain chat model, you can make the provider configurable with an environment variable:
```python
import os

from livekit.plugins.langchain import LangChainLLM

def get_llm():
    provider = os.environ.get("LLM_PROVIDER", "openai")
    if provider == "openai":
        from langchain_openai import ChatOpenAI
        return ChatOpenAI(model="gpt-4o", streaming=True)
    elif provider == "anthropic":
        from langchain_anthropic import ChatAnthropic
        return ChatAnthropic(model="claude-sonnet-4-20250514", streaming=True, max_tokens=300)
    elif provider == "google":
        from langchain_google_genai import ChatGoogleGenerativeAI
        return ChatGoogleGenerativeAI(model="gemini-2.0-flash", streaming=True)
    else:
        raise ValueError(f"Unknown provider: {provider}")

lk_llm = LangChainLLM(get_llm())
```

Set `LLM_PROVIDER=anthropic` in your environment and your voice agent switches to Claude without any code changes. The rest of the pipeline — STT, TTS, agent instructions, tools — remains untouched because the wrapper handles all translation. This is particularly useful for A/B testing different models.
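In practice that looks like the following, assuming the agent lives in a file named `agent.py` (the filename is illustrative):

```shell
# Run with the default provider (openai)
python agent.py

# Switch to Claude for this run only, with no code changes
LLM_PROVIDER=anthropic python agent.py
```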
Function calling through the wrapper
When you register tools on your LiveKit agent, the LangChainLLM wrapper translates them into LangChain's tool format automatically. You do not need to define tools twice.
```python
from livekit.agents import Agent, function_tool, RunContext
from livekit.plugins.langchain import LangChainLLM
from langchain_openai import ChatOpenAI

langchain_llm = ChatOpenAI(model="gpt-4o", streaming=True)
lk_llm = LangChainLLM(langchain_llm)

@function_tool
async def get_weather(context: RunContext, city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: The city name to check weather for.
    """
    return f"The weather in {city} is sunny and 72 degrees."

agent = Agent(
    instructions="You are a helpful assistant. Use the weather tool when asked.",
    tools=[get_weather],
)
```

The LiveKit framework converts `get_weather` into a tool schema and passes it to the LangChain model through the wrapper. Function calling works transparently.
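To make "converts into a tool schema" concrete, here is a simplified, hypothetical sketch of that kind of derivation. This is not LiveKit's actual implementation (the real one also maps Python type hints to JSON Schema types); it only illustrates how a name, a docstring, and a signature become an OpenAI-style tool description.

```python
import inspect

def sketch_tool_schema(fn):
    """Illustrative only: derive a simplified OpenAI-style tool schema
    from a function's signature and docstring."""
    params = [
        name for name in inspect.signature(fn).parameters
        if name != "context"  # the RunContext is injected, never shown to the model
    ]
    return {
        "name": fn.__name__,
        # First docstring line becomes the tool description
        "description": (inspect.getdoc(fn) or "").splitlines()[0],
        "parameters": {
            "type": "object",
            # Simplification: treat every parameter as a string
            "properties": {p: {"type": "string"} for p in params},
            "required": params,
        },
    }

async def get_weather(context, city: str) -> str:
    """Get the current weather for a city."""
    return f"The weather in {city} is sunny and 72 degrees."

schema = sketch_tool_schema(get_weather)
```

The resulting dict names the tool `get_weather`, describes it from the docstring, and lists `city` as its one required parameter, which is exactly the shape a chat model needs to decide when and how to call it.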
TypeScript integration
```typescript
import { AgentSession, Agent } from "@livekit/agents";
import { LangChainLLM } from "@livekit/agents-plugin-langchain";
import { ChatOpenAI } from "@langchain/openai";

const langchainLLM = new ChatOpenAI({
  modelName: "gpt-4o",
  streaming: true,
  temperature: 0.7,
});

const lkLLM = new LangChainLLM(langchainLLM);

// Use lkLLM in your agent session exactly like a native plugin
const session = new AgentSession({
  llm: lkLLM,
});
```

Package names differ in TypeScript
In Python: `livekit-plugins-langchain` and `langchain-openai`. In TypeScript: `@livekit/agents-plugin-langchain` and `@langchain/openai`.
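Mirroring the pip commands earlier, the TypeScript install would look something like this (package names taken from the imports above; verify the exact versions against the LiveKit docs):

```shell
npm install @livekit/agents @livekit/agents-plugin-langchain @langchain/openai
```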
What you learned
- LangChain is most valuable for provider flexibility, existing chain reuse, or LangGraph orchestration
- The `LangChainLLM` wrapper adapts any LangChain chat model for LiveKit's voice pipeline in two lines
- Streaming must be enabled for acceptable voice latency
- LiveKit tools work automatically through the wrapper with no additional configuration
- Native LiveKit plugins remain the simpler choice for straightforward single-provider agents
Next up
In the next chapter, you will use LangGraph to build stateful, multi-step conversation flows — intent classification, conditional routing, memory, and checkpointing.