This project has been funded with support from the European Commission. The author is solely responsible for this publication (communication), and the Commission accepts no responsibility for any use that may be made of the information contained therein. In compliance with the GDPR framework, please note that the Partnership will only process your personal data in the sole interest and purpose of the project and without any prejudice to your rights.

Ethical Engineer Case Studies

Our Ethical Engineer Case Studies invite you to explore how artificial intelligence is shaping real-world decisions across sectors such as hiring, healthcare, and law enforcement. Each scenario highlights both the opportunities AI offers and the ethical questions it raises for individuals, organisations, and wider society.

As you explore these case studies, you are encouraged to think critically about issues such as fairness, transparency, privacy, and accountability. Rather than providing fixed answers, this compendium supports reflection and discussion, helping you consider how AI can be developed and used in responsible and ethical ways.


1. AI in the Hiring Process

AI is transforming recruitment by enabling organisations to quickly screen applications and standardise decision-making. While this can improve efficiency and consistency, it also raises concerns about bias, transparency, and fairness, particularly when systems rely on historical data that may reflect existing inequalities.

2. AI in Healthcare

AI-powered fall detection systems help monitor vulnerable individuals and alert caregivers in real time, improving safety and response times. However, their use also raises ethical questions around privacy, data security, and the balance between independent living and constant monitoring.


3. AI in Witness Testimonies

AI tools for recording witness statements can improve efficiency and capture details quickly after an incident. Yet their use in sensitive legal contexts raises concerns about accuracy, data protection, and whether automated systems can appropriately handle complex human experiences.


4. AI in Facial Recognition

Facial recognition technology is used to improve security and streamline processes such as airport boarding. Despite these benefits, it raises important ethical issues around privacy, data security, and potential bias, particularly where systems may not perform equally across different groups.



5. AI-Powered Self-Driving Taxis

AI-powered self-driving taxis promise more efficient, accessible, and sustainable transport in urban environments. However, they also raise important ethical questions around safety, accountability in the event of accidents, and the wider social impact, including job displacement and public trust in autonomous systems.


6. GenAI’s Ecological Impact

Generative AI tools offer significant benefits in productivity, creativity, and problem-solving across sectors such as education and marketing. At the same time, their development and use require substantial energy and water resources, prompting concerns about environmental sustainability and the long-term ecological cost of rapid technological growth.

7. AI in Marketing

AI is becoming a key tool in marketing, helping professionals generate content, analyse trends, and improve efficiency. While it can support creativity and reduce workload, it also raises concerns about job security, changing skill requirements, and the risk of over-reliance on automated content generation.


8. AI in Photo Assessment

AI is used to analyse trends and predict which historic images may be most appealing or profitable to promote. However, relying on data drawn from the internet can create ethical challenges, particularly when sensitive historical content is presented without proper context or when societal biases influence what is prioritised.

9. AI in Data Analytics

AI-driven data analytics can transform decision-making by uncovering patterns across large and complex datasets. Yet, these systems are not neutral and may reflect underlying biases in how data is collected and interpreted, raising concerns about fairness, privacy, and the potential for misuse or manipulation.


10. Bias in Data

Bias in data can lead to inaccurate predictions and unequal outcomes, particularly when algorithms are trained on unrepresentative or historically biased datasets. This case study highlights how different types of bias can affect decision-making and the importance of actively identifying and addressing these issues to ensure more equitable AI systems.

11. AI in Pulsatile Heart Pumps

The use of standardised medical devices, such as ventricular assist devices, can conflict with the need for personalised care. While standardisation improves accessibility and efficiency, it may not meet the needs of all patients, raising ethical questions about safety, equity, and how to balance practical constraints with individualised treatment.
