Module Overview

Human Centric Deep Learning

Deep Learning is one of the most important topics in Data Analytics today; however, transparency and explainability are often not implemented, resulting in models that are not trustworthy. Deep Learning is based on the long-standing feedforward neural network design but goes significantly beyond it through the use of pre-training and the development of exotic network architectures. These black-box algorithms are often harder to explain and evaluate, especially with respect to ethics and trustworthiness. This module aims to take students with a background in Machine Learning through the practical design and application of Deep Learning solutions, where at each stage transparency and trustworthiness are a key focus of a human-centred approach to developing and deploying these algorithms/models. A high degree of self-study will be used in this module, fitting the level of the students at whom it is aimed.

Module Code

HCDL H6000

ECTS Credits

10

*Curricular information is subject to change

Indicative syllabus covered in the module and/or in its discrete elements

Week 1: Review of Essentials
1.    Algorithms that are core to deep learning, including algorithm and data bias

Weeks 2 through 5: Neural Network Essentials
2.    Structure of a neural network and the Feedforward Algorithm

3.    The Backpropagation Algorithm

4.    Hyperparameter tuning, activations and losses for practical Neural Networks with model reproducibility

5.    Preventing overfitting and model evaluation, including model transparency, interpretation, and explainability (XAI). 
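The feedforward and backpropagation topics above can be sketched in a few lines of numpy. This is an illustrative example only, not module material: the XOR task, layer sizes, learning rate, and seed are all arbitrary choices.

```python
import numpy as np

# Tiny two-layer network trained on XOR: forward pass, MSE loss,
# and backpropagation (the chain rule applied layer by layer).
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for _ in range(5000):
    # Feedforward: each layer is an affine map followed by an activation.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backpropagation: gradients flow from the output layer back to W1.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)
```

The same loop is where the week-4 topics slot in: the learning rate and hidden-layer width are hyperparameters to tune, and fixing the random seed is what makes the run reproducible.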

Weeks 6 through 8: Image and Text Processing
6.    Introduction to Convolutional Neural Networks for images, with methods for explainability such as heatmaps.

7.    NLP – 1D Convolutional Neural Networks and Word Embeddings.

8.    Introduction to Recurrent Neural Networks and LSTMs, including evaluation for bias or unintended outcomes, supporting model reliability and reproducibility.
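The week-7 combination of word embeddings and 1D convolution can be sketched as follows. The vocabulary, embedding matrix, and kernels here are random placeholders, not trained values; the point is only the data flow: token ids → embedding lookup → convolution over the time axis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy vocabulary; in practice the embedding matrix E would be learned.
vocab = {"the": 0, "model": 1, "is": 2, "fair": 3}
embed_dim, kernel_size, n_filters = 5, 3, 2
E = rng.normal(size=(len(vocab), embed_dim))

def conv1d(seq, kernels):
    """Valid 1D convolution over a (time, embed_dim) sequence."""
    T, D = seq.shape
    F, K, _ = kernels.shape
    out = np.empty((T - K + 1, F))
    for t in range(T - K + 1):
        window = seq[t:t + K]  # (K, D) slice of consecutive tokens
        # Each filter produces one scalar per window position.
        out[t] = np.tensordot(kernels, window, axes=([1, 2], [0, 1]))
    return out

tokens = ["the", "model", "is", "fair"]
seq = E[[vocab[t] for t in tokens]]           # embedding lookup: (4, 5)
kernels = rng.normal(size=(n_filters, kernel_size, embed_dim))
features = conv1d(seq, kernels)               # (2 positions, 2 filters)
```

Each output row is a filter response at one window position, which is why 1D CNNs are described as detecting local n-gram patterns in text.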

Weeks 9 through 12: Deployment and XAI
9.      Developing models on GPU infrastructure (such as Cloud hardware)

10.    Production models and deployment, with evaluation to ensure that the model performs as expected both ethically and legally.

11.    Model transparency and explainability.

12.    Explainable AI (XAI) techniques such as surrogate models (for example using EBMs or LIME)
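The surrogate-model idea behind LIME in week 12 can be illustrated without any library: perturb an instance, query the black-box model, and fit a locally weighted linear model whose coefficients explain the prediction. The black-box function, instance, and kernel width below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def black_box(X):
    # Stand-in for an opaque model; in practice this would be the
    # trained network's prediction function.
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

x0 = np.array([0.3, 1.0])  # the instance whose prediction we explain

# LIME-style surrogate: sample perturbations near x0, weight them by
# proximity, and fit a linear model by weighted least squares.
X_pert = x0 + rng.normal(scale=0.1, size=(500, 2))
y_pert = black_box(X_pert)
weights = np.exp(-np.sum((X_pert - x0) ** 2, axis=1) / 0.02)

A = np.hstack([X_pert, np.ones((500, 1))])    # features + intercept
sw = np.sqrt(weights)[:, None]
coef, *_ = np.linalg.lstsq(A * sw, y_pert * sw[:, 0], rcond=None)

# coef[0] and coef[1] are the local feature importances: they
# approximate the black box's sensitivity to each input near x0.
```

Because the surrogate is linear, its coefficients are directly readable as per-feature importances for this one prediction, which is the sense in which such models make a black box locally transparent.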

Week 13: Review and assessment 
 

Module Content & Assessment
Assessment Breakdown %
Other Assessment(s): 100