CAN ARTIFICIAL INTELLIGENCE BE RESPONSIBLE FOR ITS ACTIONS?

        Consider the following scenario: an autonomous car is driving a family around town. The car is moving fast and fails to predict, whether because of a technical fault or an imperfect algorithm, that a group of five people is crossing against a red light just ahead of it. The self-driving system cannot brake safely in time and faces a dilemma:
    Drive into the crossing group and keep its passengers safe; or
    Swerve into a wall, sparing the crossing group but risking the lives of its passengers.
        Now, even if this dilemma is well known, debatable, and often splits popular opinion in half, it is likely to arise at least once during the long-term development of artificial intelligence. However, an intriguing but less discussed question about this dilemma is: who would be responsible for the deaths of either group, the pedestrians or the passengers? Is it the artificial intelligence itself, which made the decision that led to the accident? Or does the responsibility fall on the developers who created the algorithm?
        Giving moral responsibility to a machine is absurd to many specialists and lawyers, such as Alain Bensoussan, because responsibility is usually associated with freedom of choice. Artificial Intelligence cannot make a moral choice founded on ethical criteria: it can only answer through lines of code, rules within its system, and process data to optimize its performance and thus make the "best" decision. I will come back to this last term later in the article.
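        To make that concrete, here is a minimal, purely hypothetical sketch of what "answering through lines of code" can look like: the system does not deliberate morally, it only scores the options its developers defined and returns the cheapest one. The option names, costs, and weights below are invented for illustration and do not describe any real vehicle's logic.

```python
# Hypothetical illustration only: a rule-based "decision" is a scored lookup,
# not a moral judgement. All options and costs here are made-up assumptions.

OPTIONS = {
    # option name           -> expected "cost" assigned by the developers
    "continue_straight": {"pedestrians_at_risk": 5, "passengers_at_risk": 0},
    "swerve_into_wall":  {"pedestrians_at_risk": 0, "passengers_at_risk": 4},
}

# The weights encode the developers' priorities, not the machine's ethics.
WEIGHTS = {"pedestrians_at_risk": 1.0, "passengers_at_risk": 1.0}

def best_option(options, weights):
    """Return the option with the lowest weighted cost: pure optimization, no intent."""
    def cost(outcome):
        return sum(weights[k] * v for k, v in outcome.items())
    return min(options, key=lambda name: cost(options[name]))

print(best_option(OPTIONS, WEIGHTS))  # prints whichever rule set scores lower
```

        Whatever this sketch prints, the "choice" is entirely determined by the numbers someone wrote down; the machine itself has nothing we could meaningfully hold responsible.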
        It could be the developers' responsibility, as they are the ones who gave the machine the rules to follow and the data to process. However, the more complex AI gets, the less blame can be placed on its makers, because the algorithm is adaptive and a quick learner. Specifically, the data selected for the learning machine will significantly influence its behavior, making that behavior impossible to predict. Even if the AI runs on specific code, we cannot predict what it will learn from the often massive amounts of data it is given. So, in general, if the machine is autonomous, we cannot hold its creators responsible for its unpredictable, self-learned actions.
        A more logical answer, though one that in practice won't please the deceased's loved ones, is to place the blame on the community, on humankind itself. As mentioned, a complex Artificial Intelligence, often built on neural networks, learns from a massive amount of data and opinions produced by a selected population. For example, in the car dilemma, if 90% of humans would choose to hit the wall and save the crossing group, at the risk of killing the car's passengers, the vehicle will process this information and likely choose the same behavior, considered, again, the "best" thing to do. We could blame neither the developer nor the AI, as it only follows human rationality, or in other terms, human irrationality.
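        As a toy illustration of this point, assume the system simply aggregates surveyed human preferences and adopts the majority choice. The survey data, labels, and aggregation step below are invented for this sketch; real systems are far more involved, but the principle that the machine mirrors the population it learned from is the same.

```python
from collections import Counter

# Toy illustration: the "learning" here is just aggregating surveyed human choices.
# The responses and labels are invented for this sketch.

survey_responses = ["hit_wall"] * 90 + ["continue_straight"] * 10  # 90% vs 10%

def learn_policy(responses):
    """Adopt whichever behavior the majority of the surveyed population chose."""
    counts = Counter(responses)
    return counts.most_common(1)[0][0]

policy = learn_policy(survey_responses)
print(policy)  # "hit_wall": the car mirrors the crowd, it does not reason about it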
        Consequently, the question remains unanswered.
  To make a brief aside on the subject: once we give systems the ability to act autonomously, we immediately abandon the idea of technical neutrality. A simple object such as a knife has no more than two uses: a good one (to cut food) or a bad one (to cut anything other than food). A more threatening but still straightforward object, such as a bomb with predictable behavior, can likewise be put to ethical or unethical use. A complex, unpredictable algorithm cannot have "good" or "bad" intentions; it will simply adapt to different circumstances.
Sources:
    “Intelligence Artificielle, La Nouvelle Barbarie” by Marie David and Cédric Sauviat
    Author: Fabien Diaz