Resources for Large Language Models
After some learning, I have tried to summarise my understanding into different sections and written them in [[LLM]]

Top Tech News

Amazon Q
Google Vertex Generative AI Studio
Google Duet


It is easier if we have some knowledge of NLP or embeddings. I would recommend the Databricks course, which starts from NLP fundamentals and goes all the way to putting LLMs into production.

What is an LLM

LLM stands for Large Language Model, a model that aims to predict the next word. There is a long history behind LLMs, from word2vec, LSTM, ELMo, Transformer, and BERT, to PaLM, GPT (Generative Pre-trained Transformer), for which an interesting blog post explains how GPT-3 works, and Gemini, the multimodal model from Google.

Any of these models leads to similar results; the main difference is the dataset it is trained on.

The model tries to predict the probability distribution of the next word. After training on a very large dataset with a massive number of parameters, it can be used for most of our work. On the other hand, since it is still next-word prediction, it can produce hallucinated results.
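The next-word idea can be illustrated with a toy sketch. The candidate words and scores below are invented for illustration, not from a real model:

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

# Hypothetical scores a model might assign to candidate next words
# after the prompt "The cat sat on the".
logits = {"mat": 3.2, "sofa": 2.1, "roof": 1.4, "moon": -0.5}
probs = softmax(logits)

# Sampling from the distribution (instead of always taking the top word)
# is what makes generation varied -- and occasionally wrong.
next_word = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
```

Because the model samples from a distribution rather than looking facts up, a fluent but false continuation is always possible; that is the root of hallucination.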

What we should know about it

We can read a good, comprehensive blog post by Eugene compiling the patterns for building LLM-based systems, and also a great blog post by Chip addressing the open challenges in LLM research.

To start using it

  • The easiest way to start is to use ChatGPT
  • Then we can learn the different types of prompting techniques
  • Then we may want to apply LLMs to our own data, so I recommend starting to learn about RAG
  • Then we may want to create our own application; I would recommend starting to learn about LangChain or LlamaIndex
  • At this point, I recommend starting to read the information below, to understand the overview, approaches, and techniques of LLMs.

In the first few courses, we learn about LangChain from DeepLearning.AI. These courses are easy to follow, and we can start building our PoC right away.

  • LangChain for LLM App Development

    The LangChain for LLM Application Development course focuses on utilizing the LangChain Python/TypeScript framework to streamline the creation of Language Model (LLM) applications. The c...
    : Link
  • LangChain Chat with Your Data
    The LangChain Chat with Your Data course provides a step-by-step exploration of [[RAG Essentials]], guiding users through fundamental processes. The course covers loading various document ...
    : Link
  • Functions, Tools, and Agents with LangChain

    OpenAI Function Calling

    A recent update from OpenAI adds support for function calling

    import json

    # Example dummy function hard coded to return the same weather
    # In production, this could be y...
    : Link
  • Bonus: if we have no experience building a PoC website with Python, we can check out Streamlit or Gradio
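The truncated function-calling snippet above comes from that course; a fuller sketch of the same idea might look like the following. The weather function is a made-up dummy, and the schema only illustrates the shape OpenAI's chat API expects — actually calling the model requires the `openai` package and an API key:

```python
import json

def get_current_weather(location, unit="celsius"):
    """Example dummy function hard-coded to return the same weather.
    In production, this would call a real weather API."""
    return json.dumps({"location": location, "temperature": "22", "unit": unit})

# JSON schema describing the function, in the shape the
# chat-completions API expects for function calling.
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name, e.g. Bangkok"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]

# The model does not run the function itself: it returns the function
# name plus JSON arguments, our code executes the function, and we send
# the result back to the model in a follow-up message.
args = json.loads('{"location": "Bangkok"}')  # what the model might return
result = get_current_weather(**args)
```

The key point is that the model only *chooses* the function and fills in arguments; executing it, and deciding whether to trust the output, stays on our side.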


As mentioned by Eugene, evaluation is crucial, so we also need to learn how to evaluate our LLM or RAG pipeline.
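As a starting point, the simplest evaluation is exact-match accuracy over question–answer pairs. This is only a toy sketch with invented data; real LLM/RAG evaluation usually also needs semantic similarity or LLM-as-judge scoring, since correct answers can be phrased many ways:

```python
def exact_match_accuracy(predictions, references):
    """Fraction of predictions that match the reference answer exactly,
    after lowercasing and stripping surrounding whitespace."""
    norm = lambda s: s.strip().lower()
    hits = sum(norm(p) == norm(r) for p, r in zip(predictions, references))
    return hits / len(references)

# Hypothetical model outputs vs. gold answers.
preds = ["Paris", "4", "blue whale"]
golds = ["paris", "5", "Blue Whale"]
score = exact_match_accuracy(preds, golds)  # 2 of 3 match
```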

To maximize LLM performance

I would recommend starting with the YouTube video from OpenAI; it gives an overview of RAG and fine-tuning, along with their results.

Start with prompting
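One of the most useful prompting techniques is few-shot prompting: showing the model worked examples before the real question. A minimal sketch, with an invented sentiment task, just builds the prompt string:

```python
# Worked examples the model sees before the real query (made up here).
examples = [
    ("I loved this movie!", "positive"),
    ("Terrible service, never again.", "negative"),
]

def build_prompt(examples, query):
    """Assemble a few-shot prompt: instruction, examples, then the query."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    # End right before the answer slot, so the model completes the label.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)

prompt = build_prompt(examples, "The food was amazing.")
```

The prompt deliberately ends at `Sentiment:` so that the model's next-word prediction naturally fills in the label.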

Then, if we are focusing on RAG, we can read the following.

We can skip this for later: to go deeper on improving RAG, we can dig into how vector stores work, and the different retrieval algorithms and strategies.
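To make the vector-store idea concrete, here is a minimal in-memory sketch. The bag-of-characters "embedding" is deliberately crude and only shows the mechanics; real stores use trained embedding models and approximate nearest-neighbour search:

```python
import math

def embed(text):
    """Toy 'embedding': counts of letters a-z. Stands in for a real
    trained embedding model, purely to illustrate the mechanics."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha() and ch.isascii():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# A minimal in-memory "vector store": embed each document once,
# then rank documents by similarity to the query embedding.
docs = ["LangChain builds LLM apps", "Streamlit builds data apps", "Cats sleep a lot"]
index = [(doc, embed(doc)) for doc in docs]

query = embed("building LLM applications")
ranked = sorted(index, key=lambda item: cosine(query, item[1]), reverse=True)
top_doc = ranked[0][0]  # most similar document, used as RAG context
```

In a real RAG pipeline the `top_doc` (or top-k documents) would be pasted into the prompt as context before the user's question.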

Then it is a good time to start exploring the different models, and how to fine-tune them.

Putting the model into production

Start by reading Emerging Architectures for LLM Applications; it helps paint the big picture of the solution we should apply in production, with additional resources to read on each component.

Other Resources