LangChain
LangChain is a framework for developing applications powered by language models. It enables applications that are context-aware and that can use a language model to reason about the context they are given. The main value propositions of LangChain are its components and its off-the-shelf chains.
Use Cases
LangChain supports various use cases, including:
- Autonomous Agents: Long-running agents that take multiple steps to accomplish an objective.
- Document Question Answering: Answering questions based on specific documents.
- Chatbots: Creating chatbots using language models.
- Querying Tabular Data: Using language models to query data stored in a tabular format.
- Interacting with APIs: Enabling language models to interact with APIs.
- Extraction: Extracting structured information from text.
- Summarization: Summarizing longer documents into shorter chunks of information.
- Evaluation: Evaluating generative models using language models.
Modules
LangChain provides several modules that serve as the building blocks of any LangChain-powered application. These modules include:
- Models: Supported model types and integrations.
- Prompts: Prompt management, optimization, and serialization.
- Memory: State persistence between calls of a chain/agent.
- Indexes: Loading, querying, and updating external data.
- Chains: Structured sequences of calls.
- Agents: Chains with high-level directives for decision-making.
- Callbacks: Logging and streaming intermediate steps of any chain.
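To make the Memory idea concrete (state persisted between calls), here is a toy, stdlib-only sketch. `BufferMemory`, `save_context`, and `history` are invented names for illustration; LangChain's own memory classes have a different API.

```python
class BufferMemory:
    """Toy memory: records each (human, ai) turn so that later calls
    can see the prior conversation. A sketch of the concept only, not
    LangChain's actual memory API."""

    def __init__(self):
        self.turns = []

    def save_context(self, human: str, ai: str):
        """Record one conversational turn."""
        self.turns.append((human, ai))

    def history(self) -> str:
        """Render the accumulated turns as a transcript string."""
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.turns)


memory = BufferMemory()
memory.save_context("Hi", "Hello!")
memory.save_context("What is LangChain?", "A framework for LLM apps.")
print(memory.history())
```

In a real chain, a transcript like this would be injected into the prompt on each call so the model sees earlier turns.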
Example Usages
Sending chats
```python
from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatOpenAI

# Create a prompt template for a chat conversation
prompt = ChatPromptTemplate.from_template("tell me a joke about {foo}")

# Create a ChatOpenAI model
model = ChatOpenAI()

# Create a chain by combining the prompt and model
chain = prompt | model

# Invoke the chain by providing the input values
result = chain.invoke({"foo": "bears"})

# Print the content of the result
print(result.content)
```
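The `|` in `prompt | model` builds a pipeline in which each stage's output feeds the next stage. As a rough stdlib-only sketch of that idea (the `Step` class below is invented for illustration and is not LangChain's actual `Runnable` implementation):

```python
class Step:
    """Minimal stand-in for a pipeline stage: wraps a function and
    supports composition with the | operator. A sketch of the idea
    behind pipe-style chains, not LangChain's real implementation."""

    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Composing two steps yields a new step that runs them in order.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)


# A toy "prompt" stage and a toy "model" stage
prompt = Step(lambda inputs: "tell me a joke about " + inputs["foo"])
model = Step(lambda text: f"[model reply to: {text}]")

chain = prompt | model
print(chain.invoke({"foo": "bears"}))
# -> [model reply to: tell me a joke about bears]
```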
Forwarding chat results as the input for another chat
```python
from operator import itemgetter

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema import StrOutputParser

# Create prompt templates for two chat conversations
prompt1 = ChatPromptTemplate.from_template(
    "What is the name of a celebrity that was born in {year_of_birth}?"
)
prompt2 = ChatPromptTemplate.from_template(
    "Give me 3 interesting facts about {celebrity_name}."
)

# Create a ChatOpenAI model
model = ChatOpenAI()

# Create a chain for the first chat conversation
chain1 = prompt1 | model | StrOutputParser()

# Create a chain for the second chat conversation,
# using the output of the first chain as input
chain2 = (
    {"celebrity_name": chain1, "year_of_birth": itemgetter("year_of_birth")}
    | prompt2
    | model
    | StrOutputParser()
)

# Invoke the second chain by providing the input values
result = chain2.invoke({"year_of_birth": 1980})

# Print the result
print(result)
```
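`itemgetter` comes from Python's standard library, not LangChain: in the dict step above it pulls `year_of_birth` out of the original input so that value stays available alongside `chain1`'s output. Its behavior in isolation:

```python
from operator import itemgetter

# itemgetter("key") returns a callable that extracts that key
# from whatever mapping it is given.
get_year = itemgetter("year_of_birth")
inputs = {"year_of_birth": 1980}
print(get_year(inputs))
# -> 1980
```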
Parsing chat message output
```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import ChatPromptTemplate
from langchain.schema import BaseOutputParser


class CommaSeparatedListOutputParser(BaseOutputParser):
    """Parse the output of an LLM call to a comma-separated list."""

    def parse(self, text: str):
        """Parse the output of an LLM call."""
        return text.strip().split(", ")


# Define a template for the system message
template = """You are a helpful assistant who generates comma separated lists.
A user will pass in a category, and you should generate 5 objects in that category in a comma separated list.
ONLY return a comma separated list, and nothing more."""

# Define a template for the user's message
human_template = "{text}"

# Create a chat prompt using the system and human templates
chat_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", template),
        ("human", human_template),
    ]
)

# Create a chain by combining the chat prompt, ChatOpenAI model, and the output parser
chain = chat_prompt | ChatOpenAI() | CommaSeparatedListOutputParser()

# Invoke the chain by providing the input values
result = chain.invoke({"text": "countries"})

# Print the result
print(result)
```
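The parsing step itself needs no model call, so it can be checked on its own. Here the same class is reproduced without the LangChain base class so it runs standalone:

```python
class CommaSeparatedListOutputParser:
    """Standalone copy of the parsing logic, without the LangChain base class."""

    def parse(self, text: str):
        """Split the model's text output on ', ' after trimming whitespace."""
        return text.strip().split(", ")


parser = CommaSeparatedListOutputParser()
print(parser.parse("red, orange, yellow, green, blue"))
# -> ['red', 'orange', 'yellow', 'green', 'blue']
```

In the full chain, this `parse` method receives the raw text of the model's reply, so `chain.invoke(...)` returns a Python list rather than a message object.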