Cookbook 1 - Concepts
Cookbook 2 - Usecases
There are two types of language models, which in LangChain are called:
- LLMs: this is a language model which takes a string as input and returns a string
- ChatModels: this is a language model which takes a list of messages as input and returns a message
Most LLM applications do not pass user input directly into an LLM. Usually they will add the user input to a larger piece of text, called a prompt template, that provides additional context on the specific task at hand.
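For example, a minimal template sketch with PromptTemplate (the product-naming wording is just illustrative):

```python
from langchain.prompts import PromptTemplate

# A template with one input variable; the user input is slotted into a larger prompt
prompt = PromptTemplate(
    input_variables=["product"],
    template="You are a naming consultant. Suggest one good name for a company that makes {product}.",
)

print(prompt.format(product="colorful socks"))
```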
An output parser transforms the LLM output into structured formats such as JSON.
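A sketch with StructuredOutputParser (the schema fields are made up for illustration):

```python
from langchain.output_parsers import StructuredOutputParser, ResponseSchema

# Describe the fields you want back from the model
response_schemas = [
    ResponseSchema(name="answer", description="answer to the user's question"),
    ResponseSchema(name="source", description="source used to answer the question"),
]
parser = StructuredOutputParser.from_response_schemas(response_schemas)

# Injected into the prompt so the LLM knows to reply as JSON
format_instructions = parser.get_format_instructions()

# Later: parser.parse(llm_output) -> {"answer": ..., "source": ...}
```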
- Temperature: the randomness of the model's output. With temperature=0 the model will yield the same result for the same prompt.
Prompts
Either a string or a list of messages
from langchain.prompts import ChatPromptTemplate
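A chat prompt sketch following the classic translation example from the docs:

```python
from langchain.prompts import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

system_prompt = SystemMessagePromptTemplate.from_template(
    "You are a helpful assistant that translates {input_language} to {output_language}."
)
human_prompt = HumanMessagePromptTemplate.from_template("{text}")

chat_prompt = ChatPromptTemplate.from_messages([system_prompt, human_prompt])

# Fill the variables and get back a list of chat messages
messages = chat_prompt.format_prompt(
    input_language="English", output_language="French", text="I love programming."
).to_messages()
```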
Chat Messages
- System: helpful background context that tells the AI what to do
- Human: Messages that are intended to represent the user
- AI: Messages that show what the AI responded with
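A quick sketch passing all three message types to a chat model (the conversation content is made up):

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import SystemMessage, HumanMessage, AIMessage

chat = ChatOpenAI(temperature=0.7)

response = chat([
    SystemMessage(content="You are a nice AI bot that helps users figure out what to eat."),
    HumanMessage(content="I like tomatoes, what should I eat?"),
    AIMessage(content="You should try a caprese salad."),
    HumanMessage(content="What is another dish you would recommend?"),
])
# `response` is an AIMessage
```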
Selectors
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
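A sketch of selecting the few-shot examples most similar to the user input (FAISS as the vector store; the antonym examples are illustrative):

```python
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "sunny", "output": "gloomy"},
]
example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)

# Embed the examples and keep only the k most similar to the user input
example_selector = SemanticSimilarityExampleSelector.from_examples(
    examples, OpenAIEmbeddings(), FAISS, k=2
)

similar_prompt = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input",
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
print(similar_prompt.format(adjective="warm"))
```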
Chains
Combine different LLM calls and actions automatically.
from langchain.chains import LLMChain
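A minimal LLMChain sketch combining a model and a prompt template (the naming prompt is illustrative):

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("colorful socks"))
```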
Simple Sequential Chains
Easy chains where you can use the output of one LLM call as the input to another. Good for breaking up tasks.
from langchain.llms import OpenAI
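A sketch where the first chain's output becomes the second chain's input (the prompts are made up):

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = OpenAI(temperature=0.7)

# Chain 1: pick a dish for a location
prompt_one = PromptTemplate(
    input_variables=["location"],
    template="Name one classic dish from {location}. Respond with only the dish name.",
)
chain_one = LLMChain(llm=llm, prompt=prompt_one)

# Chain 2: write a recipe for whatever chain 1 returned
prompt_two = PromptTemplate(
    input_variables=["dish"],
    template="Write a short, simple recipe for {dish}.",
)
chain_two = LLMChain(llm=llm, prompt=prompt_two)

overall_chain = SimpleSequentialChain(chains=[chain_one, chain_two], verbose=True)
print(overall_chain.run("Rome"))
```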
Summarize Chain
Easily run through numerous long documents and get a summary. Check out this video for other chain types besides map-reduce.
from langchain.chains.summarize import load_summarize_chain
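A map-reduce summarization sketch, splitting first so each chunk fits the context window (the file name is a placeholder):

```python
from langchain.llms import OpenAI
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains.summarize import load_summarize_chain

llm = OpenAI(temperature=0)

with open("long_document.txt") as f:  # placeholder file
    long_text = f.read()

# Split the long text into Documents small enough for the model
text_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=200)
docs = text_splitter.create_documents([long_text])

# map_reduce: summarize each chunk, then summarize the summaries
chain = load_summarize_chain(llm, chain_type="map_reduce", verbose=True)
print(chain.run(docs))
```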
Agents: dynamically call chains based on user input
# example uses SerpAPI, which scrapes Google search results
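Expanding that comment into a minimal agent sketch (needs SERPAPI_API_KEY and OPENAI_API_KEY set; the question is illustrative):

```python
from langchain.llms import OpenAI
from langchain.agents import load_tools, initialize_agent, AgentType

llm = OpenAI(temperature=0)

# Toolkit: a list of tools the agent can choose from
tools = load_tools(["serpapi", "llm-math"], llm=llm)

agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("Who is the current CEO of OpenAI, and what is 2 raised to the 10th power?")
```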
Tools
A ‘capability’ of an agent. This is an abstraction on top of a function that makes it easy for LLMs (and agents) to interact with it. Ex: Google search.
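A sketch of wrapping a plain Python function as a tool an agent could call (the function is a made-up example):

```python
from langchain.agents import Tool

def get_word_length(word: str) -> str:
    """Toy function an agent could call."""
    return str(len(word))

word_length_tool = Tool(
    name="word-length",
    func=get_word_length,
    description="Useful for counting the number of characters in a word.",
)
# Pass [word_length_tool, ...] to initialize_agent just like the loaded tools above
```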
Toolkit
Groups of tools that your agent can select from
from langchain.agents import load_tools
Memory: Add state to chains and agents
Chat Message History
from langchain.memory import ChatMessageHistory
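A minimal sketch of keeping state by hand with ChatMessageHistory and replaying it into a chat model:

```python
from langchain.chat_models import ChatOpenAI
from langchain.memory import ChatMessageHistory

chat = ChatOpenAI(temperature=0)
history = ChatMessageHistory()

history.add_user_message("What is the capital of France?")
# Send the accumulated messages to the model
ai_response = chat(history.messages)
history.add_ai_message(ai_response.content)

# The next call now includes the previous turns as state
history.add_user_message("And what is its population?")
print(chat(history.messages).content)
```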
Models
Language Model
Text in and text out
from langchain.llms import OpenAI
lm = OpenAI(model_name="text-ada-001")
Chat Model
Takes a series of messages and returns a message output
from langchain.chat_models import ChatOpenAI
chat = ChatOpenAI(temperature=1)
Text Embedding Model
Change text into a vector (a series of numbers that hold the semantic ‘meaning’ of the text).
embeddings = OpenAIEmbeddings()
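A quick sketch of turning a sentence into a vector (OpenAI's default ada-002 embedding returns 1536 numbers; the sentence is illustrative):

```python
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
vector = embeddings.embed_query("Hi! It's time for the beach")

print(len(vector))   # embedding dimension
print(vector[:5])    # first few numbers carrying the text's 'meaning'
```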
Indexes: Structuring documents so LLMs can work with them
Document Loaders
from langchain.document_loaders import HNLoader  # Hacker News loader
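A sketch of loading a Hacker News item (the item URL is a placeholder; any HN thread works):

```python
from langchain.document_loaders import HNLoader  # Hacker News loader

# Placeholder thread URL, swap in any Hacker News item
loader = HNLoader("https://news.ycombinator.com/item?id=1")
documents = loader.load()

print(len(documents))
print(documents[0].page_content[:200])  # the loaded text
print(documents[0].metadata)            # e.g. source URL and title
```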
Text Splitters
If the documents are too long for the LLM, you need to split them up into chunks. Text splitters help with this.
from langchain.text_splitter import RecursiveCharacterTextSplitter
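A sketch of chunking a long file into overlapping pieces (the file name and chunk sizes are placeholders):

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

with open("long_document.txt") as f:  # placeholder file
    long_text = f.read()

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=500,    # max characters per chunk
    chunk_overlap=50,  # overlap so context isn't cut mid-thought
)
texts = text_splitter.create_documents([long_text])

print(f"{len(texts)} chunks")
print(texts[0].page_content)
```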
Retrievers
An easy way to fetch relevant documents and combine them with LLMs. The most widely supported retriever is the VectorStoreRetriever.
from langchain.vectorstores import FAISS
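A sketch of the VectorStoreRetriever pattern over FAISS (toy strings stand in for real document chunks; requires faiss-cpu):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings()

# In practice these would be the chunks produced by the text splitter above
db = FAISS.from_texts(
    ["The beach is best visited at sunset.", "Bring sunscreen and water."],
    embeddings,
)
retriever = db.as_retriever()

relevant_docs = retriever.get_relevant_documents("when should I go to the beach?")
print(relevant_docs[0].page_content)
```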
VectorStores
Databases to store vectors. Tables with a column for embeddings and a column for metadata
embedding_list = embeddings.embed_documents([text.page_content for text in texts])
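A sketch of storing texts with a metadata column in FAISS (the rows and metadata are made up; requires faiss-cpu):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings()

# Each row: the text (embedded into the vector column) plus a metadata column
db = FAISS.from_texts(
    ["LangChain makes it easy to chain LLM calls.", "FAISS stores vectors in memory."],
    embeddings,
    metadatas=[{"source": "notes", "row": 1}, {"source": "notes", "row": 2}],
)

results = db.similarity_search("how do I store embeddings?", k=1)
print(results[0].page_content, results[0].metadata)
```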
Feature Store
Feast
from feast import FeatureStore
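A rough sketch of templating fresh feature values into a prompt; the repo path and feature names follow the Feast quickstart and are assumptions, not anything from these notes:

```python
from feast import FeatureStore
from langchain.prompts import PromptTemplate

# Assumes a local Feast repo; path and feature names are from the Feast quickstart
store = FeatureStore(repo_path="./my_feature_repo/feature_repo")

prompt = PromptTemplate.from_template(
    "Driver {driver_id} has an acceptance rate of {acc_rate}. Write them a short note."
)

def prompt_for_driver(driver_id: int) -> str:
    # Pull up-to-date feature values for this entity from the online store
    features = store.get_online_features(
        features=["driver_hourly_stats:acc_rate"],
        entity_rows=[{"driver_id": driver_id}],
    ).to_dict()
    return prompt.format(driver_id=driver_id, acc_rate=features["acc_rate"][0])
```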