Inclusive Ethical AI in Human–Computer Interaction in Autonomous Vehicles

Abstract

Artificial Intelligence (AI) used in Autonomous Vehicles (AVs) to monitor driver alertness is a critical piece of technology: it decides when to hand control back to the human if the autonomous capability disengages. It is important that this AI be inclusive and free of bias, because treating drivers differently affects the safety of humans both inside the vehicle and on the road. This paper evaluates the AI that powers in-car driver attention systems, asking whether it treats all humans equally regardless of ethnicity, gender and age, and whether it follows the AI ethics principles of trust, transparency and fairness. Driver attention monitoring is built on two different AI models: one uses camera data to recognise humans, and the other evaluates whether the human is alert. We found that both models are biased and not inclusive of all people in all situations. We also found unethical practices in how humans are tracked for alertness: infrared sensors track their retina movements without any notion of consent or privacy for the people being monitored in the vehicle. The paper builds upon prior research on face detection outside the car, and on research showing that car cognition AI does not recognise all humans on the road equally. We present results on how the car's face identification is biased against some humans, and on how its assessment of alertness, used to hand over control during an emergency, is fundamentally flawed in its definition of alertness. We recommend mitigation techniques and call for further research building on our work to make AVs inclusive by mitigating bias in all forms of AI in AVs.

Published in: Journal of AI, Robotics & Workplace Automation

AUTHORS

Yashaswini Viswanath


Suresh Lokiah

