Right from Wrong

Right and Wrong
(Image taken from pixabay)

I often read and listen to people's ideas on handling accountability in the field of artificial intelligence study and solution development. I can distill two main trends: the first is the need for formal modeling of human morality and ethics, and its subsequent coding into algorithms; the second is the need to build machine learning solutions that follow human moral and ethical rules (how, exactly, remains a black box). Both, in my opinion, are opposite poles, just like the sciences and backgrounds of their promoters: a practical example of the eternal dichotomy between practitioners and academics of the hard sciences and the soft sciences.

One can find several instruments supporting the latter trend, such as the "Statement on Artificial Intelligence, Robotics and 'Autonomous' Systems" of the European Group on Ethics in Science and New Technologies; "The Toronto Declaration: Protecting the rights to equality and non-discrimination in machine learning systems"; or the multinational research project aiming to arrive at a set of universal moral rules reflecting a common understanding shared by every society on the planet. All of them have strong foundations, but they lack the technical specifications that computer scientists and software developers would need to follow them.

On the other hand, the former trend offers some mechanisms to evaluate degrees of responsibility, or to recognize accountability after bad decisions are made, something similar to David Gunkel's "responsibility gap" theory. One example is calibration-check algorithms, which tell whether an algorithm is biased at a given point by comparing the results for a target object across different datasets (the object included). However, there is still a long path ahead before we can formalize ethical and moral frameworks for artificial intelligence solutions to work with in their learning process.
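To make the idea concrete, here is a minimal sketch of such a calibration check, assuming a model that outputs scores between 0 and 1 and two datasets containing the same kind of target object. The function names and the 0.1 tolerance are my own illustrative assumptions, not a standard implementation:

```python
# Sketch of a calibration check: a model is well calibrated on a dataset
# when its mean predicted score matches the observed positive rate.
# A large calibration difference between two datasets that contain the
# same kind of target object is one signal of bias.

def calibration_gap(scores, outcomes):
    """Mean predicted score minus observed positive rate for one dataset."""
    assert len(scores) == len(outcomes) and scores
    mean_score = sum(scores) / len(scores)
    positive_rate = sum(outcomes) / len(outcomes)
    return mean_score - positive_rate

def is_biased(scores_a, outcomes_a, scores_b, outcomes_b, tolerance=0.1):
    """Flag bias when the model is calibrated very differently on two
    datasets, even though both include the same target object."""
    gap_a = calibration_gap(scores_a, outcomes_a)
    gap_b = calibration_gap(scores_b, outcomes_b)
    return abs(gap_a - gap_b) > tolerance
```

For example, a model whose scores track outcomes well on one dataset but overshoot them on another would be flagged, which is the point of running the same object through different datasets.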

This brings me to draw attention to artificial intelligence's dependence on machine learning for its decision-making processes: the learning core is its capstone, and machine learning methods and techniques aim to simulate human behavioral systems in their procedures. That makes the very object we are criticizing a reflection of our own complex (let's not simplify it by saying poor) learning and decision-making processes.

What would be an accurate answer to the question of how effectively we teach right from wrong to children, or to other individuals? We can all agree that prisons are full of people who were held accountable after a bad decision was made. I like this particular statement because it also covers the individuals who have been erroneously convicted at trial. To verify that, one need only consult the Innocence Project records in the United States of America, to give one example, where, as a result of bad decisions, some individuals have spent an average of 32 years in prison before their innocence was proved. It is an issue that is currently costing the American government millions to amend.

Yet a large number of critics believe AI systems have to be impeccable even as they learn from our biases (present in datasets), misconceptions (introduced by supervised learning, to mention one example), and practices (limiting the machine's capability to our human context, because we are usually afraid of what we cannot control or explain). Therefore, the way we teach, learn, and experience distinguishing right from wrong as humans does not have to be the same for AI systems. In parallel, AI systems are not the ones that should be held accountable when a bad decision is made; otherwise we would be incurring in responsibility transference. Our perceptions and laws need to catch up with AI development.

One approach to tackling this problem could be to rethink how we prepare the data AI solutions will later use to learn and train, a topic left aside most of the time. Regarding data, the issues most frequently tackled relate to duplicity, noise, absence, normalization, access, privacy, and governance; not bias. Complementarily, we also need to develop new methods tuned to the conception of AI solutions as machines, not human imitations; acknowledging that we (developers, consumers…) bear the entire responsibility for any outcome produced by an AI solution, except when such AI solution was designed and created by another AI solution 🙂. Otherwise, ideals such as fairness and justice will become even more subjective in a data-driven society.
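A first step in that direction could look like the sketch below: alongside the usual checks for duplicates, noise, and missing values, audit how each group is represented in the data before anything learns from it. The function names and the 0.2 floor are hypothetical choices of mine, for illustration only:

```python
# Sketch of a pre-training data audit: measure each group's share of the
# dataset and flag groups that fall below a chosen representation floor,
# so bias gets reviewed alongside duplicity, noise, and absence.

from collections import Counter

def representation_report(records, group_key):
    """Share of each group in the dataset (records are dicts)."""
    counts = Counter(record[group_key] for record in records)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

def underrepresented(records, group_key, floor=0.2):
    """Groups whose share falls below the floor, flagged for human review."""
    report = representation_report(records, group_key)
    return sorted(group for group, share in report.items() if share < floor)
```

The point is not the threshold itself but making bias a routine, inspectable part of data preparation, in the same way we already routinely deduplicate and normalize.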

Distinguishing right from wrong: we are the ones called to do that.

How to reference: Varona, Daniel. 2019. Right from wrong. www.danielvarona.ca [Year of Creation:2019] http://www.danielvarona.ca/2019/06/07/right-from-wrong/ [Consulted on: DDMMYY].
