In order to design sustainable, human-centred AI systems, we need to understand the ethical and societal issues of inequality, bias and discrimination that can be perpetuated through our data and data-driven systems. We also need to understand and adhere to legal obligations regarding data protection and the management of personal information (e.g. the GDPR and AI regulations in the EU). Beyond data-specific legislation, it is important to draw on ethical tools and frameworks that promote equality, safety and inclusion, protect against bias and discrimination, and are central to a sustainable, human-centred approach to the design of data-driven systems.
The objectives of this module are:
- To provide learners with the knowledge of legal obligations and ethical responsibilities needed to identify issues of safety, inequity, bias and exclusion in software and data-driven systems.
- To provide learners with strategies to mitigate or ameliorate potentially unethical or unsafe systems.
On successful completion of this module, students will have acquired the knowledge and skills needed to shape the ongoing discussion of the role of AI in society.
Moral Philosophy and Data Ethics
- Range of ethical theories, approaches, and perspectives
- Major ethical theories and their application to digital or computer ethics (deontology, virtue ethics, utilitarianism)
- Compare and contrast ethical approaches to technology design, e.g. utilitarian vs. deontological approaches
- Explore ethical frameworks (e.g. a first-principles test) to evaluate AI and other technical systems
Legal Regulations and Professional Ethics
- Legal ethics (foundations of human rights, including the right to privacy, in deontology)
- Data management responsibilities: research ethics and data collection (GDPR)
- EU legislation on the regulation of artificial intelligence
- IP law (e.g. disputes between tech companies and artists/writers/programmers over protecting their content online)
- Professional ethics and general codes of ethics
Bias, Exclusion and Discrimination (Data and Algorithmic)
- Bias in data (racism, sexism, etc.) — see the sketch after this list
- Confidence in data (dataset size)
- Visualisation biasing
- Statistical biasing
- Unacknowledged data collection (GPS tracking, microphone and camera activation without the user's consent)
- Algorithmic ethics: bias in algorithms (racism, sexism, etc.)
- Lack of explainability of some algorithms
- Value-based development
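As a concrete illustration of the kind of dataset bias examined here, the sketch below computes per-group positive-outcome rates and a disparate-impact ratio on a toy dataset. The records, group names and the 0.8 "four-fifths" threshold are illustrative assumptions, not prescribed module content.

```python
# Illustrative sketch: measuring group-level disparity in a toy dataset.
# The records, group labels, and the 0.8 threshold are assumptions for
# illustration only.

from collections import defaultdict

# Hypothetical loan-decision records: (protected_group, approved)
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def approval_rates(rows):
    """Return the approval rate per protected group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, outcome in rows:
        totals[group] += 1
        approved[group] += outcome
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(records)
print("Approval rates:", rates)

# Disparate-impact ratio: lowest group rate divided by highest group rate.
# A common (assumed) rule of thumb flags ratios below 0.8 for review.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate-impact ratio: {ratio:.2f}",
      "-> review for bias" if ratio < 0.8 else "-> within threshold")
```

On this toy data the ratio falls well below the assumed threshold, which is the kind of signal that would prompt further investigation rather than a definitive finding of discrimination.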
Strategies for Sustainable and Inclusive Technologies
- Fairness and data balancing (see the rebalancing sketch after this list)
- Active inclusion of minority stakeholders (racial minorities, gender, people with disabilities)
- Transparency and accessible XAI
- Usability ethics (dark patterns)
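The following sketch illustrates one simple data-balancing strategy: oversampling the minority class until class counts match. The labels and sampling approach are assumptions chosen for illustration; appropriate rebalancing in practice depends on the dataset and the fairness goals at hand.

```python
# Illustrative sketch: rebalancing an imbalanced dataset by oversampling
# the minority class. The example labels and strategy are assumptions.

import random
from collections import Counter

random.seed(0)

# Hypothetical labelled examples: (example_id, label), heavily imbalanced.
data = [(f"maj_{i}", "majority") for i in range(90)]
data += [(f"min_{i}", "minority") for i in range(10)]

def oversample(rows):
    """Duplicate minority-class rows (with replacement) until class counts match."""
    by_label = {}
    for row in rows:
        by_label.setdefault(row[1], []).append(row)
    target = max(len(group) for group in by_label.values())
    balanced = []
    for group in by_label.values():
        balanced.extend(group)
        balanced.extend(random.choices(group, k=target - len(group)))
    return balanced

print("Before:", Counter(label for _, label in data))
print("After: ", Counter(label for _, label in oversample(data)))
```

Oversampling is only one option; reweighting examples or collecting more representative data are alternatives, each with its own trade-offs.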
Practical Applications (topics for case studies and practical applications of theory)
- Driverless cars
- Drones
- Internet of Things
- Home assistants (Siri, etc.)
- Critical systems with safety implications (e.g. health, defence)
The module will be delivered primarily through lectures and tutorials, using any combination of discussion, case studies, problem-solving exercises, readings, seminars, and computer-based learning. Emphasis will be placed on worked examples and group discussion of exercises.
Module Content & Assessment

| Assessment Breakdown | % |
|---|---|
| Other Assessment(s) | 100 |