Large Language Models and Applications
Graduate course, Taiwan Tech (NTUST), Department of Electronic & Computer Engineering, 2026
This course offers a deep dive into Large Language Models (LLMs) and Generative AI, covering theoretical foundations, technical architectures, and real-world applications. Students will explore core techniques such as pre-training, transfer learning, and task-specific fine-tuning. The curriculum extends to generative models (specifically text-to-image) and their training methods, equipping students with the practical skills needed to master these cutting-edge technologies.
Outline of Lectures
- LLM Fundamentals
- The LLM Architecture
- Pre-training Models
- Supervised Fine-Tuning
- Parameter-Efficient Fine-Tuning Techniques and Design Spaces
- LoRA, QLoRA, Axolotl, DeepSpeed, Adaptation
- Preference Alignment
- RLHF: Aligning LLM responses with human expectations using preference datasets and PPO
- LLM Application Development
- LangChain
- Prompt Engineering
- Retrieval Augmented Generation (RAG)
- Agent Framework
- Multimodal Models: Text-to-Image Generation
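As a taste of the parameter-efficient fine-tuning topic in the outline, here is a minimal sketch of the LoRA idea in plain NumPy (hypothetical shapes and scaling, not tied to any specific library): instead of updating a full weight matrix, LoRA trains a low-rank pair of matrices added on top of the frozen weights.

```python
import numpy as np

# LoRA sketch: instead of updating the full weight matrix W (d_out x d_in),
# train two small matrices A (r x d_in) and B (d_out x r) with rank r << d.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4.0  # toy dimensions for illustration

W = rng.normal(size=(d_out, d_in))        # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01     # trainable low-rank factor
B = np.zeros((d_out, r))                  # zero-initialized so W' == W at start

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A, applied without
    # ever materializing the full d_out x d_in update.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# Because B starts at zero, the adapted model initially matches the base model.
assert np.allclose(lora_forward(x), W @ x)
print("trainable params:", A.size + B.size, "vs full:", W.size)
```

With rank 2 on an 8x8 layer, only 32 parameters are trained instead of 64; at realistic model sizes the savings are far larger, which is why LoRA and QLoRA pair naturally with the fine-tuning tooling (Axolotl, DeepSpeed) listed above.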
