Large Language Models and Applications

Graduate course, Department of Electronic and Computer Engineering, Taiwan Tech (NTUST), 2026

This course offers a deep dive into Large Language Models (LLMs) and generative AI, covering theoretical foundations, technical architectures, and real-world applications. Students will explore core mechanisms such as pre-training, transfer learning, and task-specific fine-tuning. The curriculum extends to generative models, in particular text-to-image models, and their training methods, equipping students with the practical skills needed to apply these cutting-edge technologies.

Outline of Lectures

  1. LLM Fundamentals
  2. The LLM Architecture
  3. Pre-training Models
  4. Supervised Fine-Tuning
  5. Parameter-Efficient Fine-Tuning Techniques and Design Spaces
    • LoRA, QLoRA, Axolotl, DeepSpeed, Adaptation
  6. Preference Alignment
    • RLHF: Aligning LLM responses with human expectations using preference datasets and PPO
  7. LLM Application Development
    • LangChain
    • Prompt Engineering
  8. Retrieval-Augmented Generation (RAG)
  9. Agent Framework
  10. Multimodal Models
  11. Text-to-Image Generation
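To give a flavor of the parameter-efficient fine-tuning techniques covered in Lecture 5, the snippet below is a minimal NumPy sketch of the LoRA idea (all shapes and hyperparameter values here are illustrative, not from any specific model): instead of updating a full weight matrix, LoRA trains a low-rank update `B @ A`, which cuts the number of trainable parameters from `d_out * d_in` down to `r * (d_out + d_in)`.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4  # hypothetical layer sizes and LoRA rank

W = rng.standard_normal((d_out, d_in))      # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-initialized
alpha = 8.0                                 # LoRA scaling hyperparameter

def lora_forward(x):
    # Frozen base path plus the scaled low-rank update.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# Because B starts at zero, the adapted layer initially matches the frozen layer.
assert np.allclose(lora_forward(x), W @ x)

full_params = W.size            # 64 * 64 = 4096
lora_params = A.size + B.size   # 4 * 64 + 64 * 4 = 512
print(f"trainable params: {lora_params} vs full fine-tune: {full_params}")
```

The zero initialization of `B` is a standard LoRA design choice: training starts exactly at the pre-trained model and only gradually departs from it as the adapter weights are learned.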