This module introduces students to Generative AI (GenAI) from both a technical and an organizational perspective. It covers the development, deployment, and governance of GenAI systems, preparing students to apply the technology while weighing strategic, ethical, and regulatory challenges. Topics include Large Language Models (LLMs), GANs, diffusion models, and AI-powered code assistants. Through hands-on labs, students will develop real-world GenAI applications. Ethical considerations, including bias, misinformation, and AI safety, will be critically analyzed. By the end, students will have practical experience in leveraging GenAI for creative and technical problem-solving.
Introduction to Generative AI
• What is Generative AI?
• Applications in text, images, music, video, and code
• Overview of key models (GPT, Stable Diffusion, StyleGAN, etc.)
• Tools and frameworks (Hugging Face, OpenAI API, PyTorch, TensorFlow) – see the sketch below
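To illustrate the kind of tooling the introductory labs build on, the sketch below shows minimal text generation with the Hugging Face `transformers` library; the small `gpt2` checkpoint is used only as a freely available stand-in for the larger models listed above.

```python
# Minimal text-generation sketch using Hugging Face `transformers`.
# "gpt2" is a small, freely available stand-in; any causal language model
# on the Hub could be substituted.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Generative AI can be applied to",
    max_new_tokens=40,       # cap the length of the continuation
    num_return_sequences=1,  # ask for a single sample
)
print(result[0]["generated_text"])
```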
Text Generation
• NLP fundamentals (tokenization, embeddings, transformers)
• Large Language Models (LLMs) – GPT, LLaMA, Mistral, Claude
• Fine-tuning models for domain-specific tasks
• Evaluation metrics for text generation (BLEU, ROUGE, perplexity) – see the sketch below
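As an illustration of one of the evaluation metrics listed above, the sketch below computes perplexity for a short text: when labels are passed to a causal language model, `transformers` returns the mean cross-entropy loss, and perplexity is its exponential. The `gpt2` checkpoint is again only a convenient stand-in.

```python
# Perplexity sketch: exponentiate the mean cross-entropy loss of a causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "Generative models learn the distribution of their training data."
inputs = tokenizer(text, return_tensors="pt")  # tokenization step

with torch.no_grad():
    # Supplying labels makes the model return the mean cross-entropy loss.
    outputs = model(**inputs, labels=inputs["input_ids"])

perplexity = torch.exp(outputs.loss)
print(f"tokens: {inputs['input_ids'].shape[-1]}, perplexity: {perplexity.item():.2f}")
```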
Image, Video and Audio Generation
• Introduction to Generative Adversarial Networks (GANs) and VAEs
• Diffusion models (Stable Diffusion, DALL·E) – see the sketch below
• Neural networks for audio synthesis (WaveNet, Jukebox, Riffusion)
• Ethical concerns (deepfakes, bias, misuse)
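The diffusion-model labs follow the same pattern as the text labs. A minimal sketch with the Hugging Face `diffusers` library is shown below; the checkpoint name and prompt are examples, and a GPU is assumed for reasonable generation times.

```python
# Text-to-image sketch with `diffusers`; checkpoint and prompt are examples.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a CUDA GPU is assumed

image = pipe(
    "an oil painting of a lighthouse at sunset",
    num_inference_steps=30,  # fewer denoising steps trades quality for speed
).images[0]
image.save("lighthouse.png")
```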
Code Generation and Automation
• AI for software development (Codex, Copilot, AlphaCode)
• Best practices for using AI-generated code – see the sketch below
• Limitations and security concerns
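One best practice covered here is treating assistant output as untrusted until it passes tests the developer wrote themselves. The sketch below is a hypothetical illustration: the `slugify` function, its tests, and the `generated` string standing in for assistant output are all invented for the example.

```python
# Sketch: accept AI-generated code only if it passes hand-written tests,
# run in an isolated subprocess. All names and test values are illustrative.
import pathlib
import subprocess
import sys
import tempfile
import textwrap

def check_generated_function(candidate_source: str) -> bool:
    """Run a hand-written test script against candidate source code."""
    tests = textwrap.dedent("""\
        from candidate import slugify
        assert slugify("Hello World") == "hello-world"
        assert slugify("  Already--Clean  ") == "already-clean"
        print("all tests passed")
    """)
    with tempfile.TemporaryDirectory() as tmp:
        tmp_path = pathlib.Path(tmp)
        (tmp_path / "candidate.py").write_text(candidate_source)
        (tmp_path / "test_candidate.py").write_text(tests)
        result = subprocess.run(
            [sys.executable, "test_candidate.py"], cwd=tmp, capture_output=True
        )
        return result.returncode == 0

# Pretend this string came back from a code assistant.
generated = textwrap.dedent("""\
    import re
    def slugify(text):
        return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")
""")
print("accepted" if check_generated_function(generated) else "rejected")
```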
Model Fine-tuning and Customization
• Fine-tuning LLMs using LoRA and PEFT – see the sketch below
• Training lightweight models on small datasets
• Parameter-efficient techniques for domain-specific applications
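The sketch below shows what attaching LoRA adapters with the Hugging Face `peft` library looks like; `gpt2` and the hyperparameter values are placeholders, and the actual training loop (data loading, Trainer, evaluation) is left to the lab.

```python
# LoRA sketch with `peft`: only the small adapter matrices are trained,
# while the base model weights stay frozen.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    r=8,               # rank of the low-rank update matrices
    lora_alpha=16,     # scaling factor applied to the update
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # shows the tiny trainable fraction
```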
AI Business Models and Strategy
• GenAI for specific tasks within individual departments
• GenAI in full process reengineering
• How businesses leverage GenAI (cost, innovation, automation)
• AI-driven product development
• Case studies from startups and big tech
Governance, Policy, and Compliance
• AI regulations (EU AI Act, GDPR, US AI policy)
• Intellectual property and copyright in AI-generated content
• Organizational compliance strategies
AI Risk and Security Management
• Cybersecurity risks in AI systems
• AI-generated fraud and disinformation
• Risk assessment frameworks
This module will employ traditional teaching methods and learning formats such as workshops, seminars and tutorials, as well as more innovative, student-centred activities such as group problem-solving in both theoretical and practical contexts.
Participants will be encouraged to take a proactive approach to learning through case studies and simulation exercises, working both independently and in groups. In some cases participants will be expected to use online materials to supplement their studies.
The practical element of the module will be supported through supervised practical sessions. Participants will explore the characteristics, advantages and limitations of the approaches learnt by applying them to suitable case studies and simulation exercises. Where appropriate, participants will share the findings of group research with their peers through presentations. Guest lecturers from industry and academia will be invited, where appropriate, to show participants how the topics covered in this module are used within the broader field of generative AI.
Module Content & Assessment

| Assessment Breakdown | % |
|---|---|
| Formal Examination | 50 |
| Other Assessment(s) | 50 |