Expendable deaths

(Image taken from Pixabay)

Recently, I had a conversation regarding driverless cars’ decision core and the parameters they currently do not take into consideration when analyzing possible outcomes while trying to avoid a collision. We were arguing about the ethical differences around the globe regarding whose life is valued the most among the people involved in an accident. On one hand we had a pedestrian, and on the other a passenger. In our discussion, we referred to several studies from different countries where factors such as the age of both pedestrian and passenger would determine whose death would be more ethically acceptable.

Some studies show surveys inclined to favor the youngest, in contrast with other surveys exhibiting the opposite preference. The surveyed regions showed results too dissimilar, and the available research is too scarce, to adopt an inflexible position. We finally agreed it was like the tale of the father, the son, and the donkey, in which every individual who crossed their path criticized them no matter who was riding the donkey, or whether anyone was riding it at all. Unfortunately, in the case we were discussing, the focal point was not who is more comfortable but who lives and who dies.

I was extremely surprised when my interlocutor said: “Anyhow, driverless cars, once introduced into society, will drop the number of deaths from traffic accidents; in the end, it does not matter who lives and who dies, but how many deaths are saved compared with the current statistics.”

What worried me is that my friend is not a regular person with a regular job. He is someone highly educated in business, science, and public relations, whose job allows him to influence the actual implementation of research and development policies and to budget a portfolio of research and innovation projects; a person dedicated to connecting people around the world to facilitate the flow of ideas and the execution of projects.

Then I wondered if I was the one failing to see that the imperfect decision core of an artificial intelligence system was actually capable of lowering the number of deaths from traffic accidents regardless of who dies and who doesn’t. After all, these systems are superior in their imitation of our own decision making in the same situation, being able to evaluate hundreds if not thousands of scenarios in the moment of distress. And the factors those systems take into consideration are just a detail in the equation.

Nevertheless, I thought about the previous technological and scientific advances we have experienced as humanity; I thought about their methodological frameworks and concluded: why, then, did we spend money and effort on ethical approvals? If vaccines, or emerging medical treatments, are intended to lower the number of deaths from a particular virus, why do we need extensive trial periods, some of them even before experimenting on humans? Why do we need to identify and communicate the limitations and side effects of our drugs? Why have pilots and airlines documented, and turned into endless checklists, every single parameter influencing a decision?

So I understood, at least for the pilot example, that accidents should be reproducible in order to assess all outcomes, including the one that actually took place; and I comprehended that the real issue is responsibility transference: accountability. Who is responsible for the deaths in accidents where driverless cars are involved? Just as doctors are in medicine as prescribers, or pilots and airlines when a plane crash occurs (a closer example to the one I am referring to in this post).

Are we truly that comfortable with the idea of twenty deaths instead of hundreds when those twenty, depending on the factors parametrized in the driverless car’s decision-making core, might be typified as murder? This is not a far-fetched thought. It can manifest by assigning a given weight to the value of the passenger (the one who bought the driverless car, paid the dealer, pays the insurance…) and a slightly smaller weight to the pedestrian or the other driver involved in a crash, balancing the outcome in favor of one side or the other. And yet radio advertisements are already highlighting the possibility of sleeping while commuting in a driverless car for workers who live in a city different from their workplace.
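To make that concern concrete, here is a minimal sketch of how such a parametrized decision core could work. Everything in it is hypothetical: the weights, the scenario names, and the harm probabilities are invented for illustration, not taken from any real system.

```python
# Hypothetical sketch of a collision-avoidance "decision core" that scores
# candidate maneuvers. All weights and probabilities are invented.

from dataclasses import dataclass

# Tunable parameters: the passenger is weighted slightly above the pedestrian.
# This single constant is exactly where ethics enters the code.
WEIGHTS = {"passenger": 1.0, "pedestrian": 0.9}

@dataclass
class Scenario:
    name: str
    harm: dict  # probability of fatal harm per involved party

def expected_cost(scenario: Scenario) -> float:
    """Weighted expected loss of life for one candidate maneuver."""
    return sum(WEIGHTS[party] * p for party, p in scenario.harm.items())

def choose(scenarios: list[Scenario]) -> Scenario:
    """Pick the maneuver with the lowest weighted expected cost."""
    return min(scenarios, key=expected_cost)

# Two invented scenarios: swerving endangers the passenger, braking the pedestrian.
options = [
    Scenario("swerve into barrier", {"passenger": 0.5, "pedestrian": 0.0}),
    Scenario("brake in lane", {"passenger": 0.0, "pedestrian": 0.5}),
]

print(choose(options).name)  # with these weights: "brake in lane"
```

With equal harm probabilities, a 0.1 difference in one configuration constant decides who bears the risk; that is why the parametrization itself, and not only the aggregate death count, deserves scrutiny.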

Please do not get me wrong: I am not promoting any conspiracy theory, nor promoting a freezing fear of technological progress. On the contrary, my intention is more attuned to raising the need for a technical framework for artificial intelligence designers grounded in ethics and morals, and to highlighting the need to speed up our laws and public policies so they meet our current realities, addressing the possible negative outcomes of new technology in anticipation of the actual events. I am aligned with responsible design as a way of accomplishing justice from the early stages of design.

How to reference: Varona, Daniel. 2019. Expendable deaths. www.danielvarona.ca [Year of Creation:2019] http://www.danielvarona.ca/2019/06/14/expendable-deaths/ [Consulted on: DDMMYY].

Right from Wrong

(Image taken from Pixabay)

I often read or listen to people’s ideas on handling accountability in the artificial intelligence field of study and solution development. I can distill the main trends into two: the first is the need for formal modeling of human morality and ethics, and its further coding into algorithms; the second is the need to build machine learning solutions that follow human moral and ethical rules (the idea of how is still a black box). Both, in my opinion, are opposite poles, just as the sciences and backgrounds of their promoters are: a practical example of the eternal dichotomy between hard-science and soft-science practitioners and academics.

One can find several instruments supporting the latter trend, such as the “Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems” of the European Group on Ethics in Science and New Technologies; “The Toronto Declaration: Protecting the rights to equality and non-discrimination in machine learning systems”; or the multinational research project aiming to arrive at a number of universal moral rules showcasing a common understanding across every society on the planet. All of them have strong foundations but lack the technical specifications that computer scientists and software developers could follow.

On the other hand, the first trend offers some mechanisms to evaluate the degree of responsibility, or to recognize accountability after bad decisions are made; something similar to David Gunkel’s “responsibility gap” theory. One example is calibration-check algorithms, which tell whether an algorithm is biased at a given point by comparing the results for a target object across different datasets (the object included). However, there is still a long path ahead before we are able to formalize ethical and moral frameworks for artificial intelligence solutions to work with in their learning process.
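As a rough illustration of what such a calibration check can look like in practice, here is a minimal sketch, assuming binary outcomes and a simple group-wise comparison; the data, the group labels, and the 0.10 threshold are all invented for the example.

```python
# Minimal sketch of a group-wise calibration check: for each group, compare the
# model's mean predicted risk against the observed outcome rate. A large gap in
# one group but not another is a signal of bias. Data and threshold are invented.

from collections import defaultdict

def calibration_gaps(records, threshold=0.10):
    """records: iterable of (group, predicted_probability, actual_outcome)."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])  # group -> [sum_pred, sum_actual, n]
    for group, pred, actual in records:
        s = sums[group]
        s[0] += pred
        s[1] += actual
        s[2] += 1
    flagged = {}
    for group, (sum_pred, sum_actual, n) in sums.items():
        gap = abs(sum_pred / n - sum_actual / n)  # |mean prediction - observed rate|
        if gap > threshold:
            flagged[group] = round(gap, 3)
    return flagged

# Invented example: the model over-predicts risk for group "B".
data = [
    ("A", 0.30, 0), ("A", 0.70, 1), ("A", 0.50, 1), ("A", 0.40, 0),
    ("B", 0.80, 0), ("B", 0.90, 1), ("B", 0.85, 0), ("B", 0.75, 1),
]
print(calibration_gaps(data))  # {'B': 0.325}
```

A gap flagged for only one group suggests the model is systematically over- or under-estimating risk for that group, which is exactly the kind of signal these checks aim to surface.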

This brings me to call attention to artificial intelligence’s dependence on machine learning for its decision-making processes, which makes the learning core a capstone. Machine learning methods and techniques, in turn, aim to simulate human behavioral systems in their procedures, which makes the very object we are criticizing a reflection of our own, let’s not simplify it by saying poor, but complex learning and decision-making processes.

What would be an accurate answer to how effectively we teach right from wrong to children or other individuals? We can all agree prisons are full of people who were found accountable after a bad decision was made. I like this particular statement because it also includes the individuals who were erroneously convicted when judged. To verify that, one only needs to consult the Innocence Project records in the United States of America, to give just one example; there, as a result of bad decisions, some individuals have spent an average of 32 years in prison before their innocence was proven, an issue that is currently costing the American government millions to amend.

Yet a large number of critics believe AI systems have to be impeccable even when learning from our biases (present in datasets), misconceptions (introduced by supervised learning, to mention one example), and practices (limiting the machine’s capability to our human context because we are usually afraid of what we cannot control or explain). Therefore, our way of teaching, learning, and experiencing how to distinguish right from wrong as humans does not have to be the same for AI systems. In parallel, AI systems are not the ones that should be held accountable when a bad decision is made; otherwise we would be incurring in responsibility transference. Our perceptions and laws need to catch up with AI development.

An approach to tackling this problem could be to rethink how we prepare the data AI solutions will later use to learn and train, a topic left aside most of the time: when it comes to data, the frequently tackled issues are duplicity, noise, absence, normalization, access, privacy, and governance; not bias. Complementarily, we also need to develop new methods attuned to the conception of AI solutions as machines and not human imitations, acknowledging that we, developers, consumers…, bear the entire responsibility for any outcome produced by an AI solution (except when such an AI solution was designed and created by another AI solution 🙂). Otherwise, ideals such as fairness or justice will become even more subjective in a data-driven society.
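As an illustration of that point about data preparation, here is a minimal sketch of an audit step that flags representation bias next to the duplicate and missing-value checks that usually monopolize the pipeline. The column names, reference shares, and threshold are invented for the example.

```python
# Minimal data-audit sketch: alongside the usual duplicate/missing-value checks,
# flag representation bias, i.e. groups heavily under-represented relative to a
# reference share (e.g. census figures). Names and thresholds are invented.

from collections import Counter

def audit(rows, group_key, reference_shares, min_ratio=0.5):
    """rows: list of dicts; reference_shares: expected share per group."""
    issues = []

    # The usual suspects: exact duplicates and missing values.
    seen = set()
    for i, row in enumerate(rows):
        key = tuple(sorted(row.items()))
        if key in seen:
            issues.append(f"row {i}: duplicate")
        seen.add(key)
        if any(v is None for v in row.values()):
            issues.append(f"row {i}: missing value")

    # The usually ignored one: representation bias per group.
    counts = Counter(row[group_key] for row in rows)
    total = sum(counts.values())
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if observed < min_ratio * expected:
            issues.append(f"group {group!r}: {observed:.0%} of data vs {expected:.0%} expected")
    return issues

# Invented example: group "B" is under-represented and has a missing value.
rows = [
    {"group": "A", "x": 1}, {"group": "A", "x": 2}, {"group": "A", "x": 3},
    {"group": "A", "x": 1}, {"group": "B", "x": None},
]
print(audit(rows, "group", {"A": 0.5, "B": 0.5}))
```

The point of the sketch is simply that a bias check can sit in the same checklist as the deduplication and completeness checks we already run routinely; it does not require exotic machinery, only the decision to look.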

Distinguishing right from wrong: we are the ones called to do that.

How to reference: Varona, Daniel. 2019. Right from wrong. www.danielvarona.ca [Year of Creation:2019] http://www.danielvarona.ca/2019/06/07/right-from-wrong/ [Consulted on: DDMMYY].