Generative AI with Large Language Models

Summary

This course introduces generative models, their taxonomy, and their applications, including tasks in natural language processing, code generation, retrieval-augmented generation, and visual scene understanding. It covers the fundamentals of the major classes of foundation models, from autoencoders to transformers, state space models, diffusion models, and multi-modal models, as well as methods for their tuning and refinement, including reinforcement learning from human feedback. Its practical component explores cutting-edge "agentic" programming: building highly flexible and autonomous systems with LLMs as core components.

Objectives

The objectives of this course are to:

  • Explain the core principles of generative models and foundation model architectures, including autoencoders, transformers, and diffusion models, among others;
  • Present model development pipelines: pretraining, evaluation, and task-specific tuning, including transfer learning strategies;
  • Apply tuning techniques such as reinforcement learning from human feedback (RLHF) and parameter-efficient methods like LoRA and QLoRA (a minimal LoRA sketch follows this list);
  • Describe prompting strategies, including zero-shot, few-shot, and in-context learning, and assess their impact on model performance on specific tasks (a prompting sketch also follows this list);
  • Identify applications of the various kinds of foundation models in natural language processing, code generation, visual scene understanding, and information retrieval, and understand how to implement them;
  • Evaluate and use agentic toolkits (e.g., LangChain, LangGraph) to build modular and scalable LLM-based applications;
  • Design and implement autonomous agents using LLMs as core reasoning and control components.
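
As a taste of parameter-efficient tuning, the sketch below wraps a causal language model with LoRA adapters using the Hugging Face transformers and peft libraries. The model name and hyperparameter values are illustrative assumptions, not course-prescribed choices.

```python
# Minimal LoRA fine-tuning setup (sketch; model name and hyperparameters
# are illustrative assumptions, not values prescribed by the course).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "gpt2"  # small model chosen only so the sketch runs on modest hardware
tokenizer = AutoTokenizer.from_pretrained(base)  # needed to prepare training batches (not shown)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA injects small trainable low-rank matrices into selected layers,
# leaving the original pretrained weights frozen.
config = LoraConfig(
    r=8,                        # rank of the low-rank update
    lora_alpha=16,              # scaling factor applied to the update
    target_modules=["c_attn"],  # GPT-2's fused attention projection layer
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

Because only the adapter matrices train, the memory and compute cost of task-specific tuning drops sharply compared with full fine-tuning; QLoRA pushes this further by quantizing the frozen base weights.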
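
Prompting strategies can be illustrated with no training at all. The snippet below contrasts a zero-shot and a few-shot prompt for a toy sentiment task; the complete function is a hypothetical placeholder for whatever text-completion client is used, not a specific API from the course.

```python
# Zero-shot vs. few-shot prompting (sketch; `complete` is a hypothetical
# stand-in for an LLM text-completion client).
def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

zero_shot = (
    "Classify the sentiment of this review as positive or negative.\n"
    "Review: The plot dragged, but the acting was superb.\n"
    "Sentiment:"
)

# In-context learning: the few-shot prompt supplies labeled examples in the
# context window, often improving accuracy with no parameter updates.
few_shot = (
    "Review: I loved every minute.\nSentiment: positive\n\n"
    "Review: A total waste of time.\nSentiment: negative\n\n"
    "Review: The plot dragged, but the acting was superb.\nSentiment:"
)

# Usage: print(complete(zero_shot)); print(complete(few_shot))
```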

Contents

  • Introduction to generative models: taxonomy, capabilities, and limitations
  • Foundation models: autoencoders, transformers, state space models, diffusion models, multimodal architectures
  • Pretraining and transfer learning: supervised fine-tuning, instruction tuning, RLHF
  • Efficient fine-tuning techniques: LoRA, QLoRA, PEFT strategies
  • Prompting and in-context learning: principles and best practices
  • Applications of generative models in NLP, code synthesis, vision, and retrieval
  • Retrieval-Augmented Generation (RAG) and moderation workflows (a minimal RAG sketch appears after this list)
  • Agentic programming: LLMs as decision-makers, planners, and tool users (an agent-loop sketch appears after this list)
  • Toolkits for building agentic systems: LangChain, LangGraph, and others
  • Design and engineering of autonomous AI agents: modularity, scalability, and limitations
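
To make the RAG workflow concrete, here is a deliberately library-free sketch: documents are embedded, the query is matched by cosine similarity, and the top passages are stitched into the prompt as grounding context. The embed and complete functions are hypothetical placeholders for an embedding model and an LLM client.

```python
# Retrieval-Augmented Generation in miniature (sketch; `embed` and
# `complete` are hypothetical placeholders, not a specific course API).
import math

def embed(text: str) -> list[float]:
    raise NotImplementedError("plug in an embedding model here")

def complete(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def answer(query: str, documents: list[str], k: int = 3) -> str:
    # 1. Retrieve: rank documents by similarity to the query embedding.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    # 2. Augment: place the top-k passages into the prompt as context.
    context = "\n\n".join(ranked[:k])
    prompt = (
        f"Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
    # 3. Generate: the LLM answers, grounded in the retrieved text.
    return complete(prompt)
```

Production systems replace the linear scan with a vector index and add moderation checks on both the retrieved context and the generated answer, but the retrieve-augment-generate shape is the same.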
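
Agentic programming treats the LLM as the controller of a loop: it inspects the task, picks a tool, observes the result, and repeats until it can answer. The sketch below is toolkit-agnostic (LangChain and LangGraph wrap this pattern in richer abstractions); llm_decide is a hypothetical placeholder for a structured-output LLM call.

```python
# A bare-bones agent loop: the LLM chooses a tool, the program executes it,
# and the observation is fed back (sketch; `llm_decide` is a hypothetical
# placeholder, not a real toolkit API).
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "calculator": lambda expr: str(eval(expr)),  # toy tool for arithmetic only
    "echo": lambda text: text,                   # trivial stand-in tool
}

def llm_decide(task: str, history: list[str]) -> dict:
    # Would ask the LLM: given the task and history so far, return either
    # {"tool": name, "input": arg} or {"answer": final_text}.
    raise NotImplementedError("plug in your LLM client here")

def run_agent(task: str, max_steps: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_steps):
        decision = llm_decide(task, history)
        if "answer" in decision:               # the LLM decides it is done
            return decision["answer"]
        tool = TOOLS[decision["tool"]]
        observation = tool(decision["input"])  # act, then record the result
        history.append(f"{decision['tool']}({decision['input']}) -> {observation}")
    return "Stopped: step budget exhausted."
```

The step budget and explicit tool registry are the modularity and limitation questions in miniature: what the agent may do is fixed by the programmer, while when and how to do it is delegated to the model.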

Requirements

  • DNN (Optional/Recommended)
  • VIS (Optional/Recommended)
  • Some background in AI Governance (Optional/Recommended)