Released 9/2024
MP4 | Video: h264, 1280×720 | Audio: AAC, 44.1 KHz, 2 Ch
Skill Level: Intermediate | Genre: eLearning | Language: English + srt | Duration: 1h 52m | Size: 333 MB
Large language models (LLMs) are becoming increasingly crucial across industries. In this course, instructor Harpreet Sahota offers a deep dive into the inner workings of text generation with LLMs. Learn about the role of tokenization, special tokens, and chat templates in text generation. Explore how the next token is selected and how to influence that choice, and build both a technical and an intuitive understanding of generation parameters such as temperature, top-p, top-k, repetition penalty, length penalty, and the bad words list. Discover how these parameters combine into powerful decoding strategies, including greedy search, multinomial sampling, beam search, and contrastive search. Gain hands-on experience with the Hugging Face text generation API and get a sneak peek at the NVIDIA NIM API for exploring larger models. By the end of this course, you’ll have a solid foundation in controlling text generation with LLMs, ready to apply these skills in real-world scenarios.
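As a rough illustration of the topics listed above (not material from the course itself), the following sketch shows how those generation parameters and decoding strategies map onto the Hugging Face `transformers` generate() API. The model checkpoint ("gpt2"), the prompt, and the specific parameter values are placeholder assumptions chosen only to make the example runnable.

```python
# Minimal sketch, assuming any causal LM checkpoint; values are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: stand-in for whatever model the course uses
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Large language models generate text by", return_tensors="pt")

# Multinomial sampling: do_sample=True plus temperature / top-k / top-p filtering.
sampled = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,         # sharpens or flattens the next-token distribution
    top_k=50,                # keep only the 50 most likely tokens
    top_p=0.95,              # nucleus sampling: keep tokens covering 95% of probability mass
    repetition_penalty=1.2,  # discourage tokens that have already appeared
    bad_words_ids=tokenizer(["foo"], add_special_tokens=False).input_ids,  # bad words list
    max_new_tokens=40,
)

# Greedy search: the default when do_sample=False and num_beams=1.
greedy = model.generate(**inputs, do_sample=False, max_new_tokens=40)

# Beam search: track several candidate sequences; length_penalty shapes their scores.
beams = model.generate(
    **inputs,
    num_beams=4,
    length_penalty=1.1,
    early_stopping=True,
    max_new_tokens=40,
)

# Contrastive search: penalty_alpha trades off model confidence against degeneration.
contrastive = model.generate(**inputs, penalty_alpha=0.6, top_k=4, max_new_tokens=40)

print(tokenizer.decode(sampled[0], skip_special_tokens=True))
```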