How to Guarantee Human Control over AI Technology and Products

The technological landscape changes with each passing day. Artificial intelligence (hereinafter "AI") refers to machines with the ability to imitate intelligent behavior: an artificial "brain" that simulates human behavioral and cognitive processes in a computer and learns on its own. AI is increasingly replacing human activity, but it also poses dangers to mankind. The question of modernity that AI raises is therefore this: should these increasingly intelligent AI entities be brought into the legal system of social control, as other legal subjects are?


In the 1950s, the American science fiction writer Isaac Asimov formulated the famous Three Laws of Robotics in order to guard against the possible threat posed by intelligent machines:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.


However, this has obviously not eliminated human fear of AI. To guard against future AI infringing on human interests, many scholars at home and abroad have advanced the view that "artificial intelligence should independently bear criminal responsibility" (hereinafter the "independent liability theory"). On this view, an AI's algorithms may possess many qualities, some far surpassing those of ordinary people, but such traits are not what matters for imposing criminal responsibility. A person or a corporation can be held criminally liable when it satisfies both the external element (harmful conduct) and the internal element (free will). If an AI can satisfy these elements, and in fact does satisfy them, then there is no obstacle to pursuing the AI's criminal responsibility as well. The core basis of the independent liability theory is that AI satisfies the internal element for imposing criminal responsibility, namely an independent "capacity for control and capacity for recognition." Nor, its proponents argue, does the external element stand in the way of the theory: with the development of technology, it is neither surprising nor hard to understand that "harmful conduct" should be detached from the "element of will," thereby "breaking through traditional theory." As long as an AI can mechanically control its bodily movements, any such movement can be regarded as the AI's own conduct.

On this basis, proponents of the independent liability theory have devised specific punishments for AI. These include imposing fines on AI; imposing liberty-depriving punishment on AI; and imposing a "death penalty" on AI, meaning the permanent destruction of its body and the deletion of its data.


However, the author believes that the theory of AI independently bearing criminal liability is not theoretically self-consistent, for the following reasons:

First, the conduct of AI is inseparable from free will. Criminal conduct is understood as conduct controlled by the will, and on the surface an AI's conduct does appear to be its own. Indeed, an AI can move its limbs, emit speech through its systems, and influence its surroundings through a central control system; but it cannot be said with certainty that such conduct rests on the control of a free will. That "will" more plausibly belongs to the person who programmed the AI or the person using it. In that case, the AI's conduct must be attributed to the people behind the machine, not to the AI itself. To accept that an AI's "conduct" satisfies the conduct element of criminal law, one would have to find in the AI itself a will fully equivalent to the human will.

Second, AI does not possess free will as evaluated on human terms. The concept of free will is a product of attribution, designed to achieve certain social ends. Responsibility, however, cannot be imputed arbitrarily and without restriction. One who cannot evaluate his present and past self-determination against an ethical benchmark system, that is, one who lacks judgment of good and evil, cannot engage in ethical dialogue, because he cannot respond to ethical criticism; he therefore lacks the capacity for self-reflection that is a necessary condition of responsibility. It is thus premature to evaluate AI by the same ethical standards as human beings.

Moreover, even if an AI had an ethical control system identical to a human being's, that would not necessarily mean it possesses free will. The German Federal Supreme Court once made the following classic statement on criminal responsibility: "The inner basis of the reproach of guilt lies in the moral maturity of the human being. So long as the capacity for free, moral self-determination is not temporarily paralyzed or permanently destroyed by pathological processes, he possesses the capacity for free, responsible, moral self-determination: he can decide in favor of lawful conduct and against unlawful conduct, conform his attitude to the norms of the legal 'ought,' and avoid what the law forbids."

What is moral maturity? Ethical maturity requires social recognition. When we determine responsibility under criminal law, expectation and imputation within real social relations are what matter. In the future, as AI technology evolves, AI may well give humans the impression that it makes decisions entirely freely. Yet whether an AI can actually act out of free will is not the point. Even with human free will, we do not actually know whether we truly act freely; we merely rely on the evaluation of third parties to judge whether a person possesses it. For robots, therefore, how they are evaluated from the perspective of a third party is the decisive question. Until AI reaches a state in which it is accepted by human society, can engage in exchanges as an equal, and is evaluated as wholly indistinguishable from a human being, it cannot be evaluated by humans as possessing free will, however well developed its "capacity for control and recognition."

Third, it is not feasible to impose penalties on AI. First, as to fines: some scholars have suggested that fines imposed on an AI could ultimately be realized by compelling the AI's manufacturers and users to fulfill legal obligations such as purchasing insurance. But this simply shifts the AI's penalty onto manufacturers and users, violating the criminal-law principle of personal responsibility. Second, as to liberty-depriving punishment: depriving an AI of liberty cannot achieve the same effect as it does on humans, for although human beings can understand the meaning of freedom, an AI itself cannot understand the meaning of the punishment. Finally, as to the "death penalty" for AI: if AI is regarded as a subject on a par with human beings, then imposing a "death penalty" on it is unethical. We respect the human right to life and advocate the abolition of the death penalty; it would be equally impermissible to impose a death penalty on AIs granted the same status as humans.

"The future has come, but it does not mean to come." The review of the law in the AI ​​era should be based on the present. The criminal law response to the possible threat of AI should be rooted in the basic theory of criminal law. Of course, while the AI ​​era has brought us benefits, we need to pay attention to its potential dangers. To prevent this kind of technological risk, we should look far ahead from an ethical perspective, establish strict AI R&D, production technology ethical rules and legal standards as early as possible, and ensure the human controllability of AI technology and products, perhaps in the era of more urgent needs.
