Worries abound about the use of (black box) machine learning (ML) algorithms. In EU circles, several regulations demand that high-risk applications of AI be sufficiently transparent to allow users to interpret the model output. Applications are classified as high-risk if they pose a high risk to the health or safety of natural persons or to their fundamental rights. Recently, Scott Robbins...
This paper concerns the question of human moral duties towards social robots—robotic companions, caregivers, pets, and other machines built to interact with human beings on a social level. The issue is a pressing one, because social robots will become considerably more sophisticated in the near future while their moral and legal standing remains contested. In the present...
In this contribution, I argue that ethical assessments of AI require a pragmatic, yet philosophically robust, reconsideration of the ‘site’ of inquiry. This requires abandoning the translation of issues of interpretability or fairness into decontextualized mathematical models in the technical field, and a more detailed engagement with actual technological deployments in the philosophical field...