
Artificial Intelligence (AI) and Algorithmic Ethics (AE)
Let’s imagine ourselves in the near future:
- We have mathematicians creating algorithms and innovating in Artificial Intelligence (AI)
- We have robotics experts implementing these algorithms in new robots
- We have a robot (a car, an airplane, an assistant …) «endowed» with AI that learns as it works (machine learning, ML) and makes increasingly complex decisions
- We do not have anyone controlling all of the above …
It is curious that, for something as important as the first three points above, no coordinated team of experts has yet been established to supervise that these AI-equipped robots are built on a solid ethical basis. Because if, in the not-too-distant future, we want robots to learn, make decisions and innovate for us, ethics must be the foundation on which the whole new field of AI rests.
The question I ask myself is: who is going to program the decisions that the new AI-equipped robots on the market will make? I know a robot will be programmed to follow legal norms and to avoid harming any human but, who will be in charge of programming how it settles the different situations that may arise, so that its decisions meet at least a minimum ethical criterion? Who will decide which algorithmic ethics is correct, desirable, or least harmful?
Regarding this ethical basis for AI-equipped robots, we can distinguish two moments:
- one of initial planning, in which we would settle which robots we want, in what fields, with what functions, and what social and individual repercussions replacing people with robots would produce in the selected areas: autonomous cars, autopilots, teachers, companionship or supervision of the elderly and children, rescue drones, the Da Vinci surgical robot …
- a later moment, once the fields in which they would act have been determined, in which we clearly specify what minimum ethical criteria these robots would have to meet and, therefore, what limits the mathematical algorithms to be developed would have to respect in order to stay within these previously established margins
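To make the second moment concrete, here is a minimal sketch (purely illustrative, not from any existing system) of the idea that a learning algorithm's candidate decisions are filtered against previously established ethical limits before being acted on. All names here (`EthicalLimit`, `filter_actions`, the example limits) are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class EthicalLimit:
    """One previously agreed margin the algorithm must stay within."""
    name: str
    allows: Callable[[str], bool]  # True if the action respects this limit

def filter_actions(candidates: List[str], limits: List[EthicalLimit]) -> List[str]:
    """Keep only candidate actions that respect every established limit."""
    return [a for a in candidates if all(limit.allows(a) for limit in limits)]

# Hypothetical limits a committee might set for a care robot:
limits = [
    EthicalLimit("no_harm", lambda a: a != "restrain_patient"),
    EthicalLimit("respect_privacy", lambda a: a != "record_video"),
]

candidates = ["remind_medication", "record_video", "call_family", "restrain_patient"]
print(filter_actions(candidates, limits))  # ['remind_medication', 'call_family']
```

The point of the sketch is the separation of roles the article argues for: the committee defines the limits; the learning algorithm only chooses among actions that pass them.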
We will have to get «ahead» of events (although I think we are already late) and start establishing the legal standards these robots must meet. Above all, it will be necessary to create committees that decide the future applications of robotics and that include experts in «minimum robotic ethics», or algorithmic ethics, to clearly establish the ethical limits of the decisions that robots may or may not make.
The first robots are already underway. It is up to us humans to establish their functions and limits.
It would be good if we started to think and talk about this, and to build a critical social mass so that the issue is included in electoral programs, explaining which fields will be subsidized and whether money will be allocated to create minimum robotic ethics committees, and so that we can vote on them.
Think about it: if, when you are older, a robot has to take care of you, I am convinced you would like to decide in advance what functions you want it to have, so you can choose the one that best suits your preferences. A personal robot will be very expensive, so it would be nice to be able to choose it, as on a Tinder-style page but for robots: select one with a good chatbot to talk to, or perhaps one that simply keeps us company, like a petbot … You may find all this futuristic, but you had better think about it, unless you like HAL 9000.
