Challenges for the IT Industry: a Robot Judge as an Artificial Moral Agent, Pt2

This is the second part of an article about digital justice. In the first part, we considered the specific qualities of an AI judge and the attitudes of different groups in society toward it. Now let’s try to figure out what directions of development exist for a robot judge and how they can affect its moral qualities.

Whether we consider the risks of introducing AI systems into justice or count on the benefits they bring, sooner or later we face the question of how a digital judge should act. We obviously can’t treat it simply as a large calculator that applies the articles of existing legal codes to particular court cases. By imposing the requirement of fairness on the court, we also expect the basic concepts of good and evil, justice and injustice to fall within the competence of the electronic judge.

Speaking of machine ethics, the first thing that comes to mind is Asimov’s well-known Three Laws, which govern the behavior of AI systems and prevent them from harming humans and themselves. But what if we add a fourth or, more precisely, a “zeroth” law, which significantly expands the capabilities of AI by making it weigh the possible harm to one person against the harm to humanity as a whole? Unfortunately, even this set of rules is not enough to transform an electronic judge into an artificial moral agent (AMA).

Today, experts debate three promising ways to create AI as a moral agent. The first is the construction of neural networks based on neuromorphic engineering technologies. The idea is to create an artificial analog of human intelligence from millions of neural connections that gradually receive and process information in much the same way as the human brain. However, even if we set aside the hypothetical nature of such an assumption, from a moral perspective there are questions about the environment in which such robots will be brought up and learn about the world around them. Whose morality will they eventually inherit? What if a robot ends up carrying human weaknesses in the form of bias and a lack of objectivity in its evaluations?
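To make the inheritance problem concrete, here is a minimal, purely illustrative sketch of a single trainable neuron (nothing like real neuromorphic hardware). The case features and the labels below are hypothetical; the point is only that the rule the model learns is whatever rule the people who labeled the training data happened to apply.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical training set: (two numeric case features, verdict label assigned
# by a human annotator). Change the labels and the learned "morality" changes too.
data = [([0.9, 0.1], 1), ([0.8, 0.3], 1), ([0.2, 0.9], 0), ([0.1, 0.7], 0)]

w = [random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1)]
b, lr = 0.0, 0.5
for _ in range(2000):                       # plain gradient descent on log-loss
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        grad = p - y                        # gradient of the loss w.r.t. the logit
        w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
        b -= lr * grad

# A new case close to the "1"-labeled examples is scored close to 1:
# the model simply reproduces the annotators' judgement, whatever it was.
print(sigmoid(sum(wi * xi for wi, xi in zip(w, [0.85, 0.2])) + b))
```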

The next way involves the use of decision tree mechanisms (for example, the ID3 algorithm). Researchers such as Nick Bostrom and Eliezer Yudkowsky insist on its application in the e-justice system. In their opinion, it best meets the standards of transparency and predictability and ensures that decisions are derived from previous legal precedents.
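For readers unfamiliar with ID3, here is a minimal from-scratch sketch of the idea: at each node, the precedent base is split on the feature that yields the greatest information gain, so a verdict can always be traced transparently back through the tree. The case features, values, and verdict labels are hypothetical, invented only for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(labels).values())

def information_gain(records, labels, feature):
    """Reduction in entropy after splitting the records on one feature."""
    base, remainder = entropy(labels), 0.0
    for value in {r[feature] for r in records}:
        subset = [l for r, l in zip(records, labels) if r[feature] == value]
        remainder += (len(subset) / len(labels)) * entropy(subset)
    return base - remainder

def id3(records, labels, features):
    """Recursively pick the most informative feature and branch on its values."""
    if len(set(labels)) == 1:                 # pure node: leaf with the verdict
        return labels[0]
    if not features:                          # no features left: majority verdict
        return Counter(labels).most_common(1)[0][0]
    best = max(features, key=lambda f: information_gain(records, labels, f))
    tree = {best: {}}
    for value in {r[best] for r in records}:
        idx = [i for i, r in enumerate(records) if r[best] == value]
        tree[best][value] = id3([records[i] for i in idx],
                                [labels[i] for i in idx],
                                [f for f in features if f != best])
    return tree

# Toy "precedents": hypothetical case features and outcomes.
cases = [
    {"first_offense": "yes", "harm": "low"},
    {"first_offense": "yes", "harm": "high"},
    {"first_offense": "no",  "harm": "low"},
    {"first_offense": "no",  "harm": "high"},
]
verdicts = ["suspended", "custodial", "suspended", "custodial"]

# Builds a tree keyed on "harm": "low" leads to "suspended", "high" to "custodial".
print(id3(cases, verdicts, ["first_offense", "harm"]))
```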

However, this approach is not free of shortcomings either. It is based on the utopian idea that normative ethics can produce a global consensus about the moral norms and laws that define human life, on which the decision tree could then be built. Moreover, the adoption of such a regulatory grid would make the legal system highly dependent on society’s current pattern of morality, which can change. If, however, we allow for the possibility of adjusting the moral foundations on which the AI generates its decisions, e-justice becomes vulnerable to hackers.

Finally, the third way of producing an AMA involves the use of genetic algorithm technology. This presupposes not only consistently training the AI in the practical skill of distinguishing between good and evil in order to maintain its existence but also consolidating this knowledge in new generations by transferring the digital genetic code from the original AI system to its derivatives. However, as the Lausanne experiment of 2009 showed, a collective AI brought up this way begins to reproduce results that are contradictory from a moral perspective: from collective work and even self-sacrifice of individuals for the common good to outright forgery and manipulation of signal data for its own benefit. As a result, this produces a system of motivations as complex as human behavior, with unpredictable principles consolidated as moral imperatives.
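As a rough illustration of the mechanism (not a reconstruction of the Lausanne experiment), the sketch below evolves a population of bit-string “policies”: the fittest individuals survive, recombine, and pass their code on, and whatever behavior the fitness function happens to reward becomes fixed in later generations. The encoding and the toy fitness function are hypothetical.

```python
import random

TARGET = [1] * 16          # stand-in for "behaviour the environment rewards"

def fitness(genome):
    """Toy score: how closely a policy matches the rewarded behaviour."""
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(generations=50, pop_size=30):
    population = [[random.randint(0, 1) for _ in range(16)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # selection: keep the fittest half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children                # the next generation inherits the code
    return max(population, key=fitness)

best = evolve()
print(fitness(best), best)   # the "rewarded" behaviour dominates after a few dozen generations
```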

However, all three ways of creating AI as a potential digital judge, versed not only in the articles of legal codes but also in matters of morality and justice, run into one problem. What kind of moral agent do we want to see in the electronic judge? How much are we willing to trust it in matters of justice and humanity? In their book Moral Machines: Teaching Robots Right from Wrong, Wendell Wallach and Colin Allen note that any attempt to introduce concepts of justice, good, and evil into AI will result in one of the following three types of AMA.

The first type is implicit ethical agents with programmed virtues that give the AI no true freedom of decision-making. They force it to avoid “unethical” outcomes when making decisions, of course only within the understanding of morality embedded during its creation, which brings us back to the question of the producers’ morality and their control over the AI’s actions.

The second option is explicit ethical agents. Here, machine intelligence will be able to independently process data and weigh various possibilities when making decisions, based on a system embedded in it in which moral norms and prohibitions can be superimposed on one another. Such an AMA is capable, among other things, of making errors in the calculation and processing of information.

Finally, one day we might be able to create full ethical agents based on machine intelligence and capable of making fully conscious ethical decisions. In this case, they will not only perceive the abstract categories of justice, morality, and evil but also be aware of their own subjectivity and feel responsible for the decisions they make. What is more, they will experience a kind of guilt for mistakes made in data processing. This raises the question of whether such an intelligence will be tempted to reconsider and transform human views of justice.

Conclusion

In every case, we proceed from the requirement that the future digital judge must have some analog of moral consciousness. This consciousness would be engaged during the decision-making procedure and could adjust court sentences under the influence of certain moral norms and ethical principles, such as the principle of the humanity of justice. At the same time, it is becoming a widespread opinion that the AI of an electronic judge should simulate the same processes that occur in the mind of a human judge, only at a faster hardware level.

When developing technologies in this direction, one should not forget the wise advice of BCG’s experts, summed up in the aphorism “submarines don’t swim”: when we solve a practical problem, we are not obliged to follow existing precedents and analogs to solve it effectively. Sometimes the chosen method can be radically different from everything we already know about the area. In other words, the goal may be to create not an alternative moral system for robots but a qualitatively different decision-making mechanism that nevertheless leads to the expected fair court verdicts.

Returning to the issue of introducing AI systems into justice, we can note that the task is not just to solve a number of technical and communication problems associated with existing technologies. It is also important to determine the priorities of legal and moral norms and principles in the field of jurisdiction, and the hierarchy of their applicability in decision-making, which can be consolidated at both the national and international levels. We can also think about an alternative to the existing ethical principles behind AI decision-making mechanisms, one that would allow us to implement the ideals of fairness and humanity of justice on a safe and transparent basis.

 
