If your weekend plans include catching up on AI developments and understanding Large Language Models (LLMs), I’ve prepared a 1-hour presentation on the development cycle of LLMs, covering everything from architectural implementation to the finetuning stages.

The presentation also includes an overview and discussion of the different ways LLMs are evaluated, along with the caveats of each method.

Below, you’ll find a table of contents with links to specific segments of the video, allowing you to jump directly to topics of interest:

  1. Using Large Language Models
  2. Stages of Developing an LLM
  3. Dataset Considerations
  4. Multi-Word Output Generation
  5. Tokenization Explained
  6. Pretraining Datasets
  7. Architecture of LLMs
  8. Pretraining Techniques
  9. Finetuning for Classification
  10. Instruction Finetuning
  11. Preference Finetuning
  12. Evaluating Large Language Models
  13. Rules of Thumb for Pretraining and Finetuning

It’s a slight departure from my usual text-based content, but if you find this format useful and informative, I may occasionally create and share more videos like this in the future.

Happy viewing!