LangChain for LLM App Development



The LangChain for LLM Application Development course focuses on using the LangChain Python/TypeScript framework to streamline the creation of large language model (LLM) applications. The course highlights key processes specific to Retrieval-Augmented Generation (RAG), emphasizing three crucial steps:

Building a Vector Database :

  • Extracting information from documents to construct a vector database.

Query and Similar Document Retrieval :

  • Employing queries to identify similar documents efficiently.

Parsing Documents to the LLM :

  • Processing the selected documents through the LLM to generate meaningful responses.
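The three steps above can be sketched without any framework at all. In this illustrative pure-Python sketch, word overlap stands in for real embeddings, and all names are made up for the example:

```python
import re

# Toy "embedding": a bag of lowercase words (a stand-in for real vectors).
def embed(text):
    return set(re.findall(r"\w+", text.lower()))

# Jaccard similarity stands in for cosine similarity between embeddings.
def similarity(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

documents = [
    "The sun hoodie offers UPF 50 sun protection.",
    "The winter parka is insulated with down.",
]
# 1. Build the "vector" database: one embedding per document.
index = [(doc, embed(doc)) for doc in documents]

# 2. Retrieve the documents most similar to the query.
def retrieve(query, k=1):
    q = embed(query)
    ranked = sorted(index, key=lambda pair: similarity(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# 3. Parse the retrieved documents into a prompt for the LLM.
def build_prompt(query):
    context = "\n".join(retrieve(query))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"
```

Real RAG replaces `embed` with an embedding model and `index` with a vector store, but the data flow is the same.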

The course introduces the LangChain framework, covering:

LLM Model Compatibility :

  • Compatibility with various LLM models, such as OpenAI.
from langchain.chat_models import ChatOpenAI
chat = ChatOpenAI(temperature=0.0, model="gpt-3.5-turbo")

Reusable Templates :

  • Obtaining results based on customizable prompts with reusable templates.
template_string = """Translate the text \
that is delimited by triple backticks \
into a style that is {style}. \
text: ```{text}```"""

from langchain.prompts import ChatPromptTemplate

prompt_template = ChatPromptTemplate.from_template(template_string)

customer_style = """American English \
in a calm and respectful tone"""

customer_email = """
Arrr, I be fuming that me blender lid \
flew off and splattered me kitchen walls \
with smoothie! And to make matters worse, \
the warranty don't cover the cost of \
cleaning up me kitchen. I need yer help \
right now, matey!
"""

customer_messages = prompt_template.format_messages(
    style=customer_style,
    text=customer_email)
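Under the hood, a reusable prompt template is essentially a format string with named slots that can be filled for each request. A framework-free sketch of the same pattern (the backtick delimiters are omitted here for simplicity):

```python
# A reusable template is just a format string with named placeholders.
template = ("Translate the text that is delimited by triple backticks "
            "into a style that is {style}. text: {text}")

def format_prompt(style, text):
    """Fill the template's named slots; reuse it for any style/text pair."""
    return template.format(style=style, text=text)

calm = format_prompt("American English in a calm and respectful tone",
                     "Arrr, I be fuming that me blender lid flew off!")
pirate = format_prompt("a polite tone that speaks in English Pirate",
                       "The warranty does not cover cleaning expenses.")
```

`ChatPromptTemplate` adds variable validation and chat-message structure on top of this basic idea.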

Result Parsing :

  • Parsing results into dictionaries or JSON format.
from langchain.output_parsers import ResponseSchema
from langchain.output_parsers import StructuredOutputParser
gift_schema = ResponseSchema(name="gift",
                             description="Was the item purchased\
                             as a gift for someone else? \
                             Answer True if yes,\
                             False if not or unknown.")
delivery_days_schema = ResponseSchema(name="delivery_days",
                                      description="How many days\
                                      did it take for the product\
                                      to arrive? If this \
                                      information is not found,\
                                      output -1.")
price_value_schema = ResponseSchema(name="price_value",
                                    description="Extract any\
                                    sentences about the value or \
                                    price, and output them as a \
                                    comma separated Python list.")

response_schemas = [gift_schema,
                    delivery_days_schema,
                    price_value_schema]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
format_instructions = output_parser.get_format_instructions()
# messages: a chat prompt formatted to include format_instructions
response = chat(messages)
output_dict = output_parser.parse(response.content)
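What the parsing step does can be sketched in a few lines: find the JSON block in the model's markdown reply and load it into a dict. The reply below is a fabricated example:

```python
import json
import re

def parse_structured(reply):
    """Extract the first JSON object from a (possibly fenced) model reply."""
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    return json.loads(match.group(0))

fence = "`" * 3  # markdown code fence, built up to keep this snippet readable
reply = (
    "Here is the extraction:\n"
    f"{fence}json\n"
    '{"gift": true, "delivery_days": 2, "price_value": ["pretty affordable!"]}\n'
    f"{fence}"
)
output_dict = parse_structured(reply)
```

`StructuredOutputParser` additionally validates the keys against the declared `ResponseSchema`s.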

Memory Module :

  • Utilizing a Memory module to enable LLMs to remember context, particularly useful for chatbots.
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

llm_model = "gpt-3.5-turbo"
llm = ChatOpenAI(temperature=0.0, model=llm_model)
memory = ConversationBufferMemory()
conversation = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=True
)

memory.save_context({"input": "Hi"}, 
                    {"output": "What's up"})
##-- Window Memory --##
from langchain.memory import ConversationBufferWindowMemory
memory = ConversationBufferWindowMemory(k=1)               
memory.save_context({"input": "Hi"},
                    {"output": "What's up"})
memory.save_context({"input": "Not much, just hanging"},
                    {"output": "Cool"})
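The window memory's behavior is easy to see in a plain-Python sketch: only the last k exchanges survive, everything older is dropped.

```python
class WindowMemory:
    """Keep only the last k (input, output) exchanges: a sliding window."""
    def __init__(self, k):
        self.k = k
        self.exchanges = []

    def save_context(self, inputs, outputs):
        self.exchanges.append((inputs["input"], outputs["output"]))
        self.exchanges = self.exchanges[-self.k:]   # drop everything older

    def load(self):
        return "\n".join(f"Human: {i}\nAI: {o}" for i, o in self.exchanges)

memory = WindowMemory(k=1)
memory.save_context({"input": "Hi"}, {"output": "What's up"})
memory.save_context({"input": "Not much, just hanging"}, {"output": "Cool"})
```

With k=1, only the last exchange remains; the "Hi" exchange is gone.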

##-- Token Buffer --##
from langchain.memory import ConversationTokenBufferMemory
llm = ChatOpenAI(temperature=0.0, model=llm_model)

memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=50)
memory.save_context({"input": "AI is what?!"},
                    {"output": "Amazing!"})
memory.save_context({"input": "Backpropagation is what?"},
                    {"output": "Beautiful!"})
memory.save_context({"input": "Chatbots are what?"}, 
                    {"output": "Charming!"})
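The token-buffer variant instead trims from the front until the history fits a token budget. A sketch where word count stands in for real token counting:

```python
class TokenBufferMemory:
    """Drop the oldest exchanges until the history fits max_token_limit.
    Word count stands in for a real tokenizer in this sketch."""
    def __init__(self, max_token_limit):
        self.max_token_limit = max_token_limit
        self.exchanges = []

    def _tokens(self):
        return sum(len(f"{i} {o}".split()) for i, o in self.exchanges)

    def save_context(self, inputs, outputs):
        self.exchanges.append((inputs["input"], outputs["output"]))
        while self._tokens() > self.max_token_limit and len(self.exchanges) > 1:
            self.exchanges.pop(0)   # evict the oldest exchange first

memory = TokenBufferMemory(max_token_limit=10)
memory.save_context({"input": "AI is what?!"}, {"output": "Amazing!"})
memory.save_context({"input": "Backpropagation is what?"}, {"output": "Beautiful!"})
memory.save_context({"input": "Chatbots are what?"}, {"output": "Charming!"})
```

This is why `ConversationTokenBufferMemory` takes an `llm` argument: it needs the model's tokenizer to count tokens accurately.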

##-- Summary Memory --##
from langchain.memory import ConversationSummaryBufferMemory

# create a long string
schedule = "There is a meeting at 8am with your product team. \
You will need your powerpoint presentation prepared. \
9am-12pm have time to work on your LangChain \
project which will go quickly because Langchain is such a powerful tool. \
At noon, lunch at the Italian restaurant with a customer who is driving \
from over an hour away to meet you to understand the latest in AI. \
Be sure to bring your laptop to show the latest LLM demo."

memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=100)
memory.save_context({"input": "Hello"}, {"output": "What's up"})
memory.save_context({"input": "Not much, just hanging"},
                    {"output": "Cool"})
memory.save_context({"input": "What is on the schedule today?"}, 
                    {"output": f"{schedule}"})

conversation = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=True
)

conversation.predict(input="What would be a good demo to show?")

Chaining Input/Output :

  • Employing the Chain module to link input/output across different LLM responses, including the router chain for routing specific responses to designated prompts.
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain
from langchain.chains import SimpleSequentialChain

llm = ChatOpenAI(temperature=0.9, model=llm_model)

# prompt template 1
first_prompt = ChatPromptTemplate.from_template(
    "What is the best name to describe \
    a company that makes {product}?"
)

# Chain 1
chain_one = LLMChain(llm=llm, prompt=first_prompt)
# prompt template 2
second_prompt = ChatPromptTemplate.from_template(
    "Write a 20-word description for the following "
    "company: {company_name}"
)
# chain 2
chain_two = LLMChain(llm=llm, prompt=second_prompt)
overall_simple_chain = SimpleSequentialChain(chains=[chain_one, chain_two],
                                             verbose=True)
from langchain.chains import SequentialChain
llm = ChatOpenAI(temperature=0.9, model=llm_model)

# prompt template 1: translate to english
first_prompt = ChatPromptTemplate.from_template(
    "Translate the following review to English:"
    "\n\n{Review}"
)
# chain 1: input= Review and output= English_Review
chain_one = LLMChain(llm=llm, prompt=first_prompt,
                     output_key="English_Review")
second_prompt = ChatPromptTemplate.from_template(
    "Can you summarize the following review in 1 sentence:"
    "\n\n{English_Review}"
)
# chain 2: input= English_Review and output= summary
chain_two = LLMChain(llm=llm, prompt=second_prompt,
                     output_key="summary")
# prompt template 3: detect the language
third_prompt = ChatPromptTemplate.from_template(
    "What language is the following review:\n\n{Review}"
)
# chain 3: input= Review and output= language
chain_three = LLMChain(llm=llm, prompt=third_prompt,
                       output_key="language")

# prompt template 4: follow up message
fourth_prompt = ChatPromptTemplate.from_template(
    "Write a follow-up response to the following "
    "summary in the specified language:"
    "\n\nSummary: {summary}\n\nLanguage: {language}"
)
# chain 4: input= summary, language and output= followup_message
chain_four = LLMChain(llm=llm, prompt=fourth_prompt,
                      output_key="followup_message")
# overall_chain: input= Review 
# and output= English_Review,summary, followup_message
overall_chain = SequentialChain(
    chains=[chain_one, chain_two, chain_three, chain_four],
    input_variables=["Review"],
    output_variables=["English_Review", "summary", "followup_message"],
    verbose=True
)
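The pattern behind SequentialChain is simply passing a dict of named variables from step to step, with each step adding one output key. A framework-free sketch with trivial stand-ins for the LLM calls (the uppercase "translation" is obviously fake):

```python
# Each "chain" reads named variables and adds exactly one output key.
def chain(input_keys, output_key, fn):
    def run(variables):
        result = fn(*[variables[k] for k in input_keys])
        return {**variables, output_key: result}
    return run

# Stand-ins for LLM calls; only the data flow matters here.
to_english = chain(["Review"], "English_Review", lambda r: r.upper())
summarize  = chain(["English_Review"], "summary", lambda r: r[:20])
detect     = chain(["Review"], "language", lambda r: "French")
follow_up  = chain(["summary", "language"], "followup_message",
                   lambda s, l: f"[{l}] Thanks for: {s}")

variables = {"Review": "Je trouve le gout mediocre."}
for step in [to_english, summarize, detect, follow_up]:
    variables = step(variables)
```

Matching `output_key` names to the next prompt's input variables is what wires the chains together.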


The Chain module also includes RouterChain, which routes each input to the most suitable of several specialized prompts.

Loader and Q/A Chain :

  • Creating Question-and-Answer LLMs using the Loader and Q/A chain.
from langchain.document_loaders import CSVLoader
loader = CSVLoader(file_path=file)
docs = loader.load()

from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()

from langchain.vectorstores import DocArrayInMemorySearch
db = DocArrayInMemorySearch.from_documents(docs, embeddings)

query = "Please suggest a shirt with sunblocking"
docs = db.similarity_search(query)

## -- OR: wrap the retriever in a RetrievalQA chain
from langchain.chains import RetrievalQA
retriever = db.as_retriever()
qa_stuff = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
    verbose=True
)
query = "Please list all your shirts with sun protection in a table \
in markdown and summarize each one."
response = qa_stuff.run(query)

##-- OR: build the index and query it in one step
from langchain.indexes import VectorstoreIndexCreator
index = VectorstoreIndexCreator(
    vectorstore_cls=DocArrayInMemorySearch
).from_loaders([loader])
response = index.query(query, llm=llm)

Evaluation and Testing :

  • Using LLMs for result evaluation and for generating test examples. This is similar to the WhyLabs technique described in "Quality and Safety for LLM Applications", where whylogs helps log the data, as well as profile and monitor data issues with LLMs, covering topics such as data leakage, refusals, and prompt injections.

Evaluation Prompt

from langchain.evaluation.qa import QAGenerateChain

file = 'OutdoorClothingCatalog_1000.csv'
loader = CSVLoader(file_path=file)
data = loader.load()

example_gen_chain = QAGenerateChain.from_llm(ChatOpenAI(model="gpt-3.5-turbo"))
new_examples = example_gen_chain.apply_and_parse(
    [{"doc": t} for t in data[:5]]
)
from langchain.evaluation.qa import QAEvalChain
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo")
eval_chain = QAEvalChain.from_llm(llm)

examples = [
    {
        "query": "Do the Cozy Comfort Pullover Set have side pockets?",
        "answer": "Yes"
    },
    {
        "query": "What collection is the Ultra-Lofty 850 Stretch Down Hooded Jacket from?",
        "answer": "The DownTek collection"
    }
]
predictions = qa.apply(examples)  # qa: the retrieval QA chain built earlier

graded_outputs = eval_chain.evaluate(examples, predictions)
for i, eg in enumerate(examples):
    print(f"Example {i}:")
    print("Question: " + predictions[i]['query'])
    print("Real Answer: " + predictions[i]['answer'])
    print("Predicted Answer: " + predictions[i]['result'])
    print("Predicted Grade: " + graded_outputs[i]['text'])
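The grading loop itself reduces to comparing each prediction against its reference answer. In this sketch, exact-substring matching stands in for QAEvalChain's LLM-based judgment, and the data is illustrative:

```python
def grade(example, prediction):
    """Toy grader: CORRECT if the reference answer appears in the prediction."""
    if example["answer"].lower() in prediction["result"].lower():
        return "CORRECT"
    return "INCORRECT"

examples = [{"query": "Do the Cozy Comfort Pullover Set have side pockets?",
             "answer": "Yes"}]
predictions = [{"query": examples[0]["query"],
                "result": "Yes, the pullover set has side pockets."}]

graded_outputs = [{"text": grade(e, p)} for e, p in zip(examples, predictions)]
```

An LLM grader is preferred in practice because correct answers rarely match the reference string exactly.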

Agents Module :

  • Leveraging ReAct prompting in the Agents module to enable LLMs to perform various actions.
from langchain.agents.agent_toolkits import create_python_agent
from langchain.agents import load_tools, initialize_agent
from langchain.agents import AgentType
from langchain.tools.python.tool import PythonREPLTool
from langchain.python import PythonREPL
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo")
tools = load_tools(["llm-math","wikipedia"], llm=llm)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
    verbose=True)

## Math example
agent("What is the 25% of 300?")

## Wikipedia example
question = "Tom M. Mitchell is an American computer scientist \
and the Founders University Professor at Carnegie Mellon University (CMU)\
what book did he write?"
result = agent(question) 
## Python Agent
agent = create_python_agent(
    llm,
    tool=PythonREPLTool(),
    verbose=True
)
customer_list = [["Harrison", "Chase"],
                 ["Lang", "Chain"],
                 ["Dolly", "Too"],
                 ["Elle", "Elem"],
                ]
agent.run(f"""Sort these customers by \
last name and then first name \
and print the output: {customer_list}""")
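For reference, the sort the Python agent is asked to produce is a one-liner in plain Python: each entry is [first, last], so sort by last name, then first name.

```python
customer_list = [["Harrison", "Chase"],
                 ["Lang", "Chain"],
                 ["Dolly", "Too"],
                 ["Elle", "Elem"]]

# Sort by last name (index 1), breaking ties by first name (index 0).
sorted_customers = sorted(customer_list, key=lambda c: (c[1], c[0]))
```

The agent writes and executes equivalent code in its Python REPL tool.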
## Create your own tool
from langchain.agents import tool
from datetime import date

@tool
def time(text: str) -> str:
    """Returns today's date; use this for any \
    questions related to knowing today's date. \
    The input should always be an empty string, \
    and this function will always return today's \
    date - any date mathematics should occur \
    outside this function."""
    return str(date.today())

agent = initialize_agent(
    tools + [time],
    llm,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
    verbose=True)

try:
    result = agent("whats the date today?")
except Exception:
    print("exception on external access")
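Underneath initialize_agent, ReAct is a loop: the model emits a thought and an action, the runtime runs the named tool, and the observation is appended to the transcript until a final answer appears. A minimal sketch with a scripted stand-in for the LLM (the formats here are simplified, not LangChain's actual prompt format):

```python
# Tools the agent may call, by name.
tools = {"calculator": lambda expr: str(eval(expr, {"__builtins__": {}}))}

# A scripted stand-in for the LLM: one tool call, then a final answer.
def fake_llm(transcript):
    if "Observation:" not in transcript:
        return "Thought: I need arithmetic.\nAction: calculator[300 * 0.25]"
    return "Final Answer: 75.0"

def react_agent(question, max_steps=5):
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        reply = fake_llm(transcript)
        if "Final Answer:" in reply:
            return reply.split("Final Answer:")[1].strip()
        # Parse "Action: tool[argument]" and run the tool.
        tool_name, arg = reply.split("Action: ")[1].split("[", 1)
        observation = tools[tool_name](arg.rstrip("]"))
        transcript += f"\n{reply}\nObservation: {observation}"
    return "gave up"

answer = react_agent("What is 25% of 300?")
```

A real agent sends the growing transcript back to the model each turn; only the scripted model is fake here.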

In essence, the course provides a comprehensive overview of the LangChain framework, empowering developers to rapidly create and enhance LLM applications with diverse functionalities.

#llm #chatgpt #langchain #deeplearning-ai