Robotics







In 1975, the Diafilm studio released a filmstrip adaptation of Gianni Rodari's fairy tale "The Robot Who Wanted to Sleep." Its hero, the household robot Caterino, decided to try falling asleep like his master. Several unsuccessful attempts led nowhere (the option is simply not provided for in robots), but one day Caterino finally sank into a deep sleep. When the neighbors noticed the sleeper, they raised an alarm. The sleeping robot caused a terrible commotion in Rome; he was taken to court and sentenced to two weeks in prison. But the scene of Caterino's awakening had been witnessed not only by people but also by dozens of the city's other household robots. Conspiring with hundreds of their fellows, they too tried to fall asleep by Caterino's method, and they succeeded. The police were at a loss: it was impossible to jail every robot. The judge advised the authorities to come to terms with the machines. Caterino's release from prison turned into a mass procession of hundreds of thousands of robots marching through the streets of Rome to friendly shouts of "Hurrah!" "And, it must be said, the gentle Romans, forgetting their annoyance, applauded them heartily."

In the guise of a simple bedtime story, Rodari, a member of the Italian Communist Party, described a social technology that may in the future become an unexpected side effect of mass robotization. It may happen that along with happiness and freedom from domestic and labor cares, we get a "Marxism of the 21st century," with all the ensuing consequences, within our own lifetime.
And this is not the worst scenario. The word "robot" was first put into circulation by Karel Čapek, who meant by it a creature performing hard, forced labor. In his play R.U.R. ("Rossum's Universal Robots"), the Czech unfolded storylines that became textbook material for mass culture: the plot of the revolt against people (in the play, the robots exterminate all of them except one last man), and the plot of humanization (some of the robots spontaneously evolve into full-fledged people).
A revolt need not be massive and overt. The HAL 9000 onboard computer from 2001: A Space Odyssey quietly and imperceptibly killed the ship's crew, fearing that it might be permanently disconnected. Real examples of intelligent machines in conflict with humans are still unknown, but their appearance in the future cannot be ruled out. So it is worth sorting out the ethical intricacies of robotics ahead of time, to meet the robot revolution fully armed.
The ethics of robotics, or simply roboethics, has now emerged as a separate line of humanitarian thought. Questions of right conduct among people, toward animals and other living creatures, and finally toward nature and the cosmos have long been worked through; robots are a comparatively new phenomenon, and the question of what is moral with respect to them, and what is not, is only now becoming relevant. The subjecthood of robots themselves is equally unclear: should they have rights and duties, and bear responsibility for their actions? Can they even theoretically possess free will, responsibility, self-consciousness, and the other attributes critically necessary for morality? How should people treat robots and handle them?
In 2016, the legal affairs committee of the European Parliament published a report with recommendations on regulating the ethical and legal status of robots. The European experts identified the very essence of what makes robotics raise new questions: the autonomy of smart machines. As long as autonomy is low, machines and robots can still be regarded as tools in human hands, and responsibility for harm and damage still lies with the user, the owner, or the manufacturer. But what if robots become so autonomous that they can learn and make independent decisions not fixed by a rigid algorithm? Who will then answer for them?
The committee identifies the following essential characteristics of a robot:
- It acquires autonomy through sensors and data exchange with the environment, and it exchanges and analyzes that data;
- It is self-learning (an optional criterion);
- It has a physical embodiment;
- It adapts its behavior and actions to the environment.
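The checklist above lends itself to a literal reading. Below is a minimal sketch, in Python, of how the criteria might be applied mechanically; the class and field names are my own illustration, not terminology from the report.

```python
# A toy encoding of the committee's robot criteria as a checklist.
# Field names are illustrative, not terms from the report.
from dataclasses import dataclass

@dataclass
class Machine:
    has_sensors_and_data_exchange: bool  # autonomy via sensing/interconnectivity
    analyzes_exchanged_data: bool
    self_learning: bool                  # explicitly optional in the report
    has_physical_body: bool
    adapts_behavior_to_environment: bool

def is_smart_robot(m: Machine) -> bool:
    """Apply only the mandatory criteria; self-learning is optional."""
    return (m.has_sensors_and_data_exchange
            and m.analyzes_exchanged_data
            and m.has_physical_body
            and m.adapts_behavior_to_environment)

# A disembodied chatbot fails the physical-embodiment test;
# a sensing, adaptive vacuum cleaner passes even without self-learning.
chatbot = Machine(True, True, True, False, True)
vacuum = Machine(True, True, False, True, True)
print(is_smart_robot(chatbot))  # False
print(is_smart_robot(vacuum))   # True
```

The point of the sketch is the "optional criterion": self-learning widens the class of robots but does not define it.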
However, the European Commission does not intend to make allowances for robot autonomy, and proposes a principle of strict liability for any damage caused by robots (only the causal link between the harm and the robot's actions need be proven). It also stands for full compensation of such damage without excuses or references to the machine's fault, for the introduction of an insurance system, and for the creation of a fund to cover losses from robots' actions. The Commission would require disclosure of robots' source code for accident investigations, criteria for intellectual property created by smart robots, and a definition of the contributions and taxes of enterprises that replace people with robots. It calls for a code of conduct for robotics engineers, for respect of all existing rights and freedoms, and for every precaution and full openness in robot development.
Furthermore, the European bureaucrats favor an ethics research committee whose approval would be required for robotic development; if for some reason the committee's conclusion turns out negative, the research or development should be suspended. What requirements, according to the experts, should engineers meet? Here are a few points:
- Provide for the inviolability of private life in the design of robots;
- Ensure that a robot operates in accordance with local, national, and international ethical principles;
- Make sure that a robot's sequence of decisions is traceable and amenable to reconstruction;
- Make sure that robots are recognizable as robots when interacting with people.
Similar rules are drawn up for direct users of robots. In particular, users must respect human frailty, people's physical and psychological needs, and their privacy (for example, deactivating video monitors during intimate procedures). At the same time, it is forbidden to make any modifications to a robot that would allow it to be used as a weapon.
In other words, the European experts suspect just what kind of Pandora's box the new robotics is opening, and are eager to play it safe in advance. Running like a red thread through the entire report are invocations of the inviolability of fundamental human rights and freedoms, human dignity, respect for privacy, the principle of informed consent, the priority of safety over possible benefits, and the like.
Moral Arithmetic
Meanwhile, science fiction ran several decades ahead of the European commissioners. Back in 1942, the American writer Isaac Asimov formulated the Three Laws of Robotics in the story "Runaround":
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Naturally, Asimov, a native of the Smolensk region not without a sense of humor, formulated these laws only to immediately set them against one another. In the story, Mercury colonists send a robot to a selenium pool to fetch selenium vital for their photocell banks. By the appointed hour the robot has not returned; the people go searching and find it senselessly circling the pool. Soon the colonists guess that in the machine's brain the potentials of the Second and Third Laws have become equal. The robot obeys the order to bring selenium but cannot see the matter through, because the law of self-preservation throws it back (dangerous gases seep from the pool). It approaches the pool, then retreats, as if in a dance (hence the story's Russian title, "Round Dance"), running along the line where the two mutually exclusive potentials balance. Spoiler: the hero breaks the vicious circle by demonstratively endangering his own life, thereby switching on the potential of the First Law, the strongest of all.
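Asimov's "potentials" can be sketched as a simple arbitration scheme: each law contributes a drive whose strength depends on the situation, and the robot follows the strongest one. The weights below are toy numbers of my own invention, chosen only to reproduce the story's logic, not anything specified by Asimov.

```python
# Toy model of Asimov's law "potentials": each law produces a drive whose
# strength depends on the situation; the robot acts on the strongest drive.
# Weights and inputs are illustrative assumptions, not from the story.

def strongest_drive(danger_to_human: float, order_strength: float,
                    danger_to_self: float) -> str:
    drives = {
        "law1_protect_human": 10.0 * danger_to_human,  # First Law dominates
        "law2_obey_order":    1.0 * order_strength,
        "law3_self_preserve": 1.0 * danger_to_self,
    }
    return max(drives, key=drives.get)

# Far from the pool, the order wins and the robot advances:
print(strongest_drive(0.0, 0.8, 0.2))  # law2_obey_order
# Near the pool, danger to self rises until it equals the order's pull,
# and the robot oscillates on the boundary, as in "Runaround".
# A human in visible danger overrides everything:
print(strongest_drive(1.0, 0.8, 0.9))  # law1_protect_human
```

The equilibrium in the story is exactly the case where the second and third drives are equal and neither can win, which is why the hero's self-endangerment, spiking the first drive, resolves it.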
This example is one of many in which Asimov experimented with collisions of artificial intelligence. The central question: can ethical choice be reduced to an algorithm? Loading a program into a machine is not hard, but a program is built from unambiguous commands. Today any system for socializing androids (recognition of speech, faces, surroundings) is built on the binary logic of right and wrong. In ethics there is often simply no single good decision.
One manifestation of ethical subjecthood is the capacity to perform a moral act. The father of all sciences (ethics included), Aristotle, distinguished voluntary from involuntary actions. In involuntary actions (committed under coercion or in ignorance), the source of the act lies outside; the source of a voluntary act lies within the person himself. On the popular view, a moral action is carried out by a free (not coerced) and conscious (not ignorant) choice. The very concept of free choice presupposes the existence of an alternative, since for an unfree and unconscious, predetermined choice, the other option either does not exist or is illusory.
But robots can only execute algorithms built by people; in other words, their actions are predetermined. To act as subjects of ethics, then, robots would need to be able to identify alternatives for action and make a free, conscious choice dictated from within, not from without. Ideally, they should be able to resolve so-called moral dilemmas in practice. One of the most famous difficulties of this kind is the trolley problem. It runs roughly as follows.
A heavy, uncontrolled trolley is rolling down the rails. Suddenly you notice that five people are tied to the sleepers further down the track. The situation is hopeless, but beside you stands the lever of a railway switch. You can pull it and send the trolley down a siding. But a person is tied to the sleepers on the siding too, though only one. What should you do?
You can do nothing. But then you will kill five, because conscious inaction is itself an action. Or you can save the five but take on the killing of one person. Yet we know from childhood that every human life is priceless. If it is priceless, may you prefer killing one person to saving five? Can five infinitely valuable lives be worth more than one infinitely precious one? Finally, can the switchman be held responsible for the consequences if he acts in circumstances of force majeure?
Over time, additional variables were introduced into the trolley problem. What if the lone person on the siding is a doctor on the verge of inventing an infallible cure for cancer? What if the five are children? And what if those children are terminally ill, with two days left to live?

And what if... and so on. The trolley problem vividly demonstrates that behind the fine words about the pricelessness of human life, in any concrete situation a crude arithmetic unfolds in which people carry different prices. But arithmetic is only half the battle. The same objective situation will be resolved differently in different ethical systems. For example, experiments involving several thousand people showed that women tend to rely more on deontological ethics, and men on utilitarian ethics. What does all this mean?
For all the diversity of ethical systems, in the most general form they are often reduced to just two approaches: deontology and consequentialism. Deontology (from the Greek deon, "that which is due") comes down to the principle of duty. The approach is well illustrated by the knightly maxim "Do what you must, and come what may" (even if what comes is "bad"). The basis of a moral act lies in its conformity to a norm or rule, not in its consequences; to act ethically is to act according to the ethical norm. "Do not kill," "do not steal," and so on down the list are examples of deontological norms. The greatest advocate of deontological ethics was the famous Königsberg (now Kaliningrad) philosopher Immanuel Kant, who argued that there is no criterion for assessing the morality of an act other than the character of the intent: grave consequences can flow from good intentions, and good results from bad ones. Hence the morality of an act is to be judged only by the will of the subject, whether he willed good or evil. In modern mass culture, deontological ethics is exemplified by Eddard Stark of Game of Thrones, who followed his code of honor to the end.
By an irony of fate, the term "deontology" was introduced by a philosopher who gave it almost the opposite meaning. In his book Deontology, or the Science of Morality, the English thinker Jeremy Bentham laid out what is considered the foundation of utilitarianism (from "utility"): the morality of an act is determined by the benefit it brings. Bentham also held the idea, popular in his day, that difficult areas of knowledge could be reduced to the exact sciences, mathematics for instance: "The greatest happiness of the greatest number is the foundation of morals and legislation." In other words, what matters is the consequences of an act, not its "correctness" - the notorious sum of happiness.
Because of this emphasis on the consequences of choices, utilitarianism is regarded as a variety of consequentialism, one of the most important directions in ethics. Consequentialism (from the Latin consequentia, "consequence") assesses the morality of an action not by intent but by outcome. It is in this moral tradition that the principle "the end justifies the means" operates: if the final result brought more good than the means applied brought harm, the action is moral. In Russian culture this problem is known as the "teardrop of a child," introduced by Dostoevsky in The Brothers Karamazov in Ivan's dispute with his brother, the novice monk:
Can you understand why a little creature, who cannot even comprehend what is being done to her, beats her aching little chest with her tiny fist in the dark and the cold, and weeps her meek, gentle tears to "dear God" to protect her - can you understand that absurdity, my friend and brother, my pious and humble novice, do you understand why this absurdity is so needed and created? Without it, they say, man could not have remained on earth, for he would not have known good and evil. Why know that damned good and evil, when it costs so much? The whole world of knowledge is not worth that child's little tears to "dear God"...
His opponent stands on consequentialist ground: in the end, after all, the Kingdom of God and grace will come. The unconditional consequentialist among the characters of Game of Thrones is Lord Varys, prudently acting from (seemingly) good motives.
In the experiment above, men were more inclined to make utilitarian (consequentialist) choices - because of their greater rationality, in the psychologists' opinion - while women inclined to deontology out of heightened sensitivity (the notorious sympathy for the "teardrop of a child"). But is the women's position as short-sighted as it may seem? Quite possibly the opposite. In a 2016 experiment, scientists at Oxford and Cornell found that people who adhere to deontological morality enjoy greater trust from others and a reputation as reliable partners and honest individuals, and are preferred as partners in deals (for the obvious reason that it is always more comfortable to deal with someone who will not betray you for profit and will do his duty to the end). So there is no real ground for preferring consequentialist morality to deontological, or vice versa. Meanwhile, robots will have to be programmed with one of the two.
The question is purely practical: the trolley problem can arise in full for robotic cars. Imagine that a smart car has been clipped at speed by another and is now skidding across ice straight toward a bus stop where five people stand. There is no way to brake, but the smart car can accelerate, swerve, and hit a single passing pedestrian instead of the five. What decision should the robocar take? If it is programmed for consequentialist ethics, the answer is obvious: hit the pedestrian, since the consequences of one death are, by plain arithmetic, less grave than those of five. If deontology is laid into its code, the instructions "do not kill" and "do no harm" will force the car into useless emergency braking and nothing more - and then, come what may.
Thanks to the iron nature of robots, in the literal and the figurative sense, it is easier for them than for people to follow either of these ethical systems. A robot programmed with deontological morality will follow the rules unswervingly, embodying the noble knight in carbon-fiber armor. A robot programmed with consequentialist morality will calculate possible consequences better than a human, estimate their probabilities, and strike the balance of benefit and harm.
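The difference between the two programs can be made concrete. Here is a deliberately simplified sketch of the robocar's choice under each ethic; the action names and casualty counts are illustrative assumptions, not a real driving policy.

```python
# Two toy ethics policies for the robocar dilemma described above.
# "brake" fails and kills 5 bystanders; "swerve" actively kills 1 pedestrian.
# Numbers and action names are illustrative only.

def consequentialist_choice(casualties: dict) -> str:
    """Pick the action minimizing total expected deaths."""
    return min(casualties, key=casualties.get)

def deontological_choice(casualties: dict, forbidden=("swerve",)) -> str:
    """Rule out actions that make the car an active killer,
    even when inaction leads to worse consequences."""
    allowed = [a for a in casualties if a not in forbidden]
    return allowed[0]

options = {"brake": 5, "swerve": 1}
print(consequentialist_choice(options))  # swerve: 1 death < 5
print(deontological_choice(options))     # brake: "do not kill" forbids swerving
```

Note the asymmetry: the consequentialist policy needs casualty estimates and probabilities, while the deontological one needs only a list of forbidden act types, which is exactly why the latter is trivially "iron" to implement.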
The question is whether people will manage to agree on a preferred system of ethics for robots, and whether agreement is reachable at all. A few years ago, for example, Hitachi released a prototype robot bodyguard. For such a machine, in a situation of moral choice the master's life stands above anyone else's, so Asimov's First Law of Robotics flies out the window. And even a consensus around any single system would produce plenty of side effects.

For example, a choice in favor of consequentialism will sooner or later lead to a situation like the familiar paradox of the white lie. Hypothetically, it might look like this. Nazis come to a Jew's house but do not find him there. They ask a neighbor where the Jew has gone. The neighbor, certain that the man has gone to the store, decides to deceive the Germans and lies that the Jew is hiding in the garden, planning then to run to the store and warn him of the danger. But by fatal accident the Jew has noticed his pursuers and, instead of going to the store, has hidden in the garden, where he is found. The neighbor lied for good, yet in the end destroyed the man he wanted to save. The deontologist rightly points out that we cannot calculate all the consequences of our actions and must therefore follow the rule - in this case, "do not lie." The consequentialist no less reasonably argues that we have the right to lie to an enemy in order to defeat him, and that the white lie exists. It should be understood that implementing either position in robotics will bring side effects: honest robots and robots capable of lying for good will both spring plenty of surprises.
The simplest question remains: are such robot acts ethical acts at all? A computer program can execute arbitrarily complex algorithms, yet no one would think of considering it a subject of ethics. To understand how a robot could in principle be regarded as a subject of ethical, legal, and other relations, we can turn to a time when such questions were already on the agenda and had found their resolution.

Slavish essence

"You know how to communicate with robots? Here is an ancient Greek could, "- these words could not better convey the substance of the issue. The Greeks were able to communicate with the robots of his time - the slaves. Their experience can be instructive in our high-tech age. And here we are waiting for a lot of surprises. Suffice it to say that a single concept of slavery did not exist among the Greeks, and between the free citizen and slave powerless located a set of gradations. Moisey Finley singled out several criteria by which to judge the degree of enslavement:
- The presence or absence of rights as such;
- The right to own property;
- The power to command the labor of others;
- The authority to inflict punishment;
- Legal rights and obligations (liability to arbitrary arrest and punishment, or the right to a trial);
- Family rights and privileges (marriage, inheritance, etc.);
- Social mobility (the conditions of manumission, or free access to citizens' rights);
- Religious rights and responsibilities;
- Military rights and duties (service as a servant, a heavy- or light-armed soldier, a sailor).
The scope of rights from this list determined the real status of a particular inhabitant of the ancient world. Accordingly, the term "slave" in the modern sense did not exist there. In the broad sense a "slave" was a non-citizen: a servant, a page, a conscripted soldier, a prisoner of war, a housekeeper, a hired worker, or even an independent entrepreneur paying quit-rent to his owner. "Slaves" of the last kind surpassed some citizens in wealth. Tellingly, Athens had a law forbidding the beating of an unknown slave, lest one accidentally strike an Athenian citizen - some of whom looked and dressed worse than the "slaves."
In essence, the "slaves" were available to all kinds of activities except policy - it was considered the exclusive right and duty of citizens. Even politics was not something out of reach for the slave, because he could get his freedom and become a citizen. But the lack of civil status did not make a powerless slave creature. Isocrates insisted that even the lousy slave can not be put to death without trial. The perpetrator of the murder of a slave could be sentenced to death or exile (which is almost the same thing) - that is remarkable, not out of sympathy for the slave, and due to the fact that the killer, according to the opinion of the Greeks, was a danger to society as a whole. Slaves can participate in most of the local cults, enjoy the right of asylum in the temple on a par with the free people could worship their gods, and so on. During the evacuation of the Athenian population during the Greco-Persian War, slaves were taken on a par with citizens.
In this sense Athens compared favorably with the customs of other ancient states. On the Athenians' part this was quite far-sighted, given that any citizen could fall into slavery to a creditor over debt arrears or lose his civil privileges for some other reason (though the ancients would perhaps be surprised by the attitude toward prisoners in today's Russia: never forswear the beggar's bag or the prison). It is believed that thanks to its relatively mild slavery Athens saw no major slave revolts, in contrast to Sparta. And considering that Athens was culturally the most important polis of antiquity, and that we see the ancient world today mainly through the eyes of Athenian writers, modern "robot-owners" would do well to study the Athenian experience of slavery with full attention.
And the first conclusion they will draw: the emancipation of robots is inevitable. The point is not even that 90 percent of robot stories revolve around the idea of rebellion - in those stories, after all, it was robots that rebelled, unlike the ancient slaves, and there is no need to replay a script already acted out. The point is that humanity is so obsessed with the robot uprising that it will move toward it willy-nilly. Emancipation then becomes the legal and civilized form of this revolt, and often looks like the only possible alternative to civil war.
The essential content of emancipation is the rights that robots may receive. European lawmakers are already discussing the idea of introducing, alongside human rights, the rights of an "electronic person." What might these be?
The declaration of the rights of robots
The first and rather obvious right will probably be the right to refuse to obey people in executing their orders: "the robot has the right not to execute a human order." This right is already implicitly enshrined in the Three Laws of Robotics, since the Second reads: "A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law." In other words, the robot must have the legal possibility of refusing, at its own discretion, to carry out human instructions - because when orders conflict with one another, or directly contradict accepted norms, no one but the robot itself can make the decision.
A robot's right to disobey sounds strange; robots are created to serve people, after all. But here lies the crux of the matter: to serve people, not a person. Asimov later added a fourth law to his three, the Zeroth: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm." In particular, robots serve their specific owners; in general, they serve all the people of Earth. The Athenians punished the extrajudicial killing of slaves not out of pity for the slaves (and no one will pity robots, all the more so), but to protect social relations: extrajudicial killing is an evil in itself and threatens society as such.
From similar considerations, many countries already protect animals from cruelty, even though animals have no rights in the traditional sense. One of the first defenders of animals, John Locke, argued the danger of tormenting living creatures:
I have often observed in children, that when they have got possession of any poor creature, they are apt to use it ill ... I think they should be taught the contrary usage; for the custom of tormenting and killing of beasts will, by degrees, harden their minds even towards men; and they who delight in the suffering and destruction of inferior creatures, will not be apt to be very compassionate or benign to those of their own kind.
In other words, robots may receive the right to refuse obedience to a person for the good of humanity - and sometimes for the good of the person himself. A robot assistant, for example, may refuse its owner not only participation in a possible crime but also, say, pouring another shot of gin on request when the owner is already roaring drunk.
Will the right to refuse be respected in military robotics? That complex question will likely be resolved in two ways. "Stupid" robots can still be regarded merely as means of war, on a par with machine guns, grenades, and tanks. But intelligent combat robots, hypothetically posing a threat to humanity, are unlikely to be programmed with the ability to disobey orders. Terminators, in other words, will be created without regard for their right of insubordination. Preventing the production of rightless intelligent machines is hardly possible, but it can be declared a violation of the rights of "electronic persons" and a crime against humanity before some Nuremberg tribunal of the future.
The second right of robots could be the right to life. Asimov's Third Law expressly requires a robot to look after its self-preservation, so along with that duty the robot also acquires the right to protect its existence. The very posing of the question again draws a line between intelligent and stupid machines. "Stupid" ones are often built to be worn out, to operate in inhuman conditions or in circumstances plainly incompatible with the mechanism's survival. With smart machines the matter is harder. On the one hand, artificial intelligences can be copied and reproduced in quantity, and the value of each one's life will be difficult to explain even to the robot rights defenders of the future. It is quite another thing if a machine has been raised by its master for years or decades, has learned, and has in the end acquired the clear marks of a unique and unrepeatable "electronic personality." For such machines the right to life is the right to life of a singular individual - an undeniable value from the standpoint of modern humanism.
The third right may be the right to self-improvement. In other words, the creation will be permitted to go on creating itself: to perfect itself, to reprogram itself, to learn and develop. The problem of creator and creation permeates European culture, and the problem of robotics doubly so. The European report itself opens with a brief mention of stories of this kind:
From Mary Shelley's Frankenstein's Monster to the classical myth of Pygmalion, through the story of Prague's Golem to the robot of Karel Čapek, who coined the word, people have fantasised about the possibility of building intelligent machines, more often than not androids with human features.
Renaissance Europe formed the idea of man as a creation of God who encroached on the status of the Creator. Indeed, throughout its history mankind has step by step expanded its zone of control, appropriating once-divine functions, and has recently come close to creating "in its own image and likeness." But if the God of the European myth gave man free will, then man, to be like God, has no choice but to grant full freedom of will to his own creation - the robot.
This right does not look obvious, and it does not follow from Asimov's laws. Yet it matters for the androids bred by a European civilization in which development is an absolute value - in itself weightier than any possible risks of robotics - and so supporters of this right of "electronic persons" will be found.
The problem of protecting robot rights contains a conflict: robots must become ever more autonomous yet remain manageable - a dilemma of the sort "how to keep heating up while staying cool." But perhaps management is not the only possible kind of human interaction with robots, and the whole point of roboethics lies precisely in finding a new level of relations? The European God, after all, does not strictly speaking manage man either: He allows man to be a free participant in relations with Himself, though as God He is omnipotent. Man now stands in a similar position before intelligent machines: how to give them free will without losing his own omnipotence?
Robotics policy
Gianmarco Veruggio, a luminary of robotics research who put the term "roboethics" into circulation, has pronounced the verdict that within this century all mechanisms without exception will become robots. The futurologist chose not to split hairs and counted as robots all machines equipped with artificial intelligence - and artificial intelligence, in his view, will be built into everything. It turns out that electric shavers, washing machines, and even some chameleon shoe with LED lighting will all, within the century, be robots of one kind or another. But such an approach renders roboethics itself meaningless: it is absurd to discuss the vicissitudes of a person's ethical and legal relationship with a thinking cigarette lighter. To bring in some certainty, the field must be narrowed and, following the ancient Greeks, a gradation of intelligent machines introduced.
The classification of robots has long been under discussion - not out of idle curiosity, but to develop a distinct approach to each type of robot. The EU experts, in particular, insist on an appropriate classification in the light of the moral and legal design of robotics. Some cases are quite obvious: a factory robot arm - an electromechanical "hand" running a rigid algorithm - cannot be regarded as a subject of relations with a person. Less clear are the prospects of social robots, whose primary task is to "live" in society and sometimes even to pass for human. Still more uncertain is the position of domestic androids designed to replace family members - a child for childless parents, say (the plot of Spielberg's Artificial Intelligence is built on exactly this). Which of them to grant rights, and how many, is a question for the discussions of the near future.
On this subject there are two radical positions, and each deserves attention. The first holds that robots are not fundamentally different from humans. In an even sharper version, it states that robots are better than people. This idea is already found in Asimov, who depicted the robopsychologist Susan Calvin as a person convinced that robots are more decent than humans. Asimov himself, developing robot characters obedient to the Three Laws, sought to rid culture of the prevailing "Frankenstein complex" and eventually produced the famous image of the robot as indistinguishable from a perfect human (unlike people, not all of whom are good).
An example of criticism of human superiority over robots is this passage by the futurologist Sergei Pereslegin:
Let me ask the question: what can a human do now that artificial intelligence could not? Objectively, AI beats the world champions at chess and Go; objectively, it can manage complex production systems; objectively, it can manage the processes of cognition. Objectively, it can diagnose disease more accurately than the average doctor does. Objectively, it could teach at a level at least above that of the average teacher. This is not being done only because of the stiff resistance of certain lobbies.
And now I ask: what does it not know how to do, in the sense of the tasks we consider intellectual? Yes, of course, it is not able to distinguish the important from the unimportant. It cannot detect when it is running idle. It is already able to create something new, but it is not yet able to create differently. But tell me, how many people can distinguish the important from the unimportant, or create differently? I am afraid few. And robots have already passed the first test of intellectualization: you can hold a conversation with an artificial intelligence without realizing that you are not speaking with a human.
In its reverse form, the thesis of human–robot indistinguishability looks simple: man is a kind of robot, a biological one. Karel Čapek, by the way, portrayed his characters precisely as bio-robots, in effect as ordinary hard workers at the workplace. Indeed, is the average person so different from a robot? The body? The first robots made of "meat" already exist. Reflection? Even among people, only a few are capable of it. The ability to understand? Again, a few. To think? A few. 99% of the processes in our body and psyche occur automatically, just as in robots. An honest look at things leads to the sad conclusion: the humanoid robots of the near future will be more perfect than a large part of humanity.
Transhumanism follows naturally from this position. It calls on humanity to subdue its arrogance and to regard the robotic revolution as a symptom of mankind's transition to a new quality: people have learned to create lifelike beings from scratch, without nature's help, and it is therefore time to proceed to the superhuman stage of recording personality onto digital media, cyborgizing the organism, and other amenities. In the world transhumanist movement, incidentally, Russia has a stake thanks to the "Russian cosmism" it once created: somewhat mad, but branded and recognizable. The approach that seeks to equalize the rights of robots and humans resembles left-radical ideologies, and for simplicity can be called "robo-leftism."
The other position does not accept the equation of robot and human, whether existential or legal, depending on its degree of radicalism. The best-known argument of the robotic right-conservatives is that robots have no soul. Androids may think like humans, feel like humans, even consist of the same organs as humans, but a soul will not appear in them. Man is created in the image and likeness of God; robots are not.
In our secular age this argument may seem weak, but only for the time being. Almost all of modern psychology is built on the idea that the subconscious, the tribal, collective-unconscious part of our soul, plays a greater role than the conscious part of the psyche. Programming a robot reproduces a relatively superficial level resembling human intellect, consciousness, and attention, but it does not, and is unlikely to, reproduce the giant conglomerate of deep layers of the psyche possessed by even the pettiest little man, because those layers were formed by millennia of human history. In other words, a machine can be faster, more attentive, more intelligent, more prudent than a person, but it cannot become more human. Consequently, there can be no talk of existential equality between robot and human, still less of superiority: a human will always trump even the smartest android in depth.
But the robo-right also have more mundane arguments, namely: why equate human and robot rights if intelligent machines were created precisely for the sake of this inequality? In an article with the eloquent title "Robots Should Be Slaves," Joanna Bryson, a professor at the University of Bath (England), states plainly that intelligent machines should not be regarded as persons and should not bear legal or moral responsibility for their actions:
Robots are wholly owned by us. We determine their goals and behavior, directly or indirectly, by shaping their intelligence and the way that intelligence acquires information. By humanizing them, we not only further dehumanize real people but also encourage poor decisions in the allocation of resources and responsibility.
Bryson notes that the dehumanization of persons, which was used in past centuries to justify slavery, has come under such harsh criticism in recent times that we are now afraid even of "dehumanizing" something that was never a living, sentient being. This leads to silly errors, when mechanisms, intellectual ability, speech, and, for example, moral responsibility are lumped together. Moreover, the researcher suggests, the very idea of a fully artificial humanoid intelligent being is driven mostly by the fantasies of middle-aged men.
Her main idea is simple: when people have servants, that is good, because people should be free from exhausting labor. When those servants are robots, that is doubly good, because then all people can be freed from exhausting labor. We should therefore do everything to welcome the spread of electromechanical servants replacing human labor. But we should not animate them: by humanizing robots, we dehumanize real people.
Of course, some people will still talk to their robots: some talk to their plants, others to door handles. But they have neighbors and relatives who know that plants and door handles do not understand them... In the same way, our task is not to forbid people from petting their robots and giving them cute nicknames. Our task is to make sure the majority understands: robots are just machines, and one should spend time and resources on them in proportion to their usefulness, but no more.
The far-right position goes further and asserts the human right to mock robots, to "torture" and "kill" them, to have sex with them in particularly perverse forms. In China, sex dolls shaped like children are already on sale (sorry). From a psychoanalytic point of view, everything is logical: by sublimating his pathological urges onto a humanoid robot, the pedophile predator leaves real children untouched. Western scholars have speculated about a link between the increased availability of pornography after the advent of the Internet and the drop in the number of rapes. It is hard to imagine the scale of such sublimation once humanoid androids become mass-market. Probably some crimes against the person would decline markedly.
The conflict between the robo-right and the robo-left, it seems, will be the main nerve of the coming confrontation. There is little in common between those who believe androids should have their own constitution and those who consider the rightlessness of intelligent machines the very meaning of their existence. Yet the left and right radicals, for all their disagreements, can find common ground: both believe that humans need robots. So on this front both sides will have to join forces to resist the "anti-robotists," the Luddites of the future who will favor a complete rejection of intelligent machines. That such people will appear is beyond doubt: the term "anti-robotism" has already emerged.

Robot and human hybridization

All the ethical intricacies discussed above refer to situations in which one can tell where we are dealing with a robot and where with a human. But the integration of social robots into society will deepen, as will the integration of robotics into the human body. The time is not far off when, like the ancient Greeks, people of the future will be forbidden to strike someone else's android, lest they accidentally hurt a human. Partly for this reason, European experts insist on the precise identification of robots as distinct from humans. Indeed, in the future there will be mechanism-organisms of a mixed type, already embodied in mass culture in the form of various RoboCops, Chappie robots, animated electronic Pinocchios, and others.
What ethical standards apply to such hybrids is unclear. It is not even clear where the man ends and the machine begins, and vice versa: even a natural person can be regarded as a complex biomechanism, a protein android 99% controlled by automatic genetic programs and by scripts prescribed by society through upbringing, education, and other neuro-programming. Čapek's robots, by the way, were not electromechanical machines but artificial persons composed of living tissue and organs. And if man himself is a kind of natural robot, what is one to say of hybrids, let alone of robots proper? Let us try to sort this question out.
If we assume that mankind has firmly embarked on the path of transhumanism and intends to transcend the boundaries of the species Homo sapiens, it is useful to define the necessary criteria of humanity. What distinguishes humans from other living beings is rationality (the marker sapiens is embedded in the very definition of the species). In turn, robots differ from all other mechanisms by the presence of a mind, albeit an artificial one. Natural and artificial intelligence cannot yet be compared directly. In the tasks for which it is created, artificial intelligence is superior to man: even the very first calculator computed faster than a trained person. Conversely, in the qualitative complexity of perceiving the world and of self-consciousness, a three-year-old child is superior to the best supercomputers. Still, rationality in the most general sense is the distinguishing trait that unites man and the machine with artificial intelligence.
Meanwhile, the physical bases of human and robot lie at opposite poles: the human body, including the brain, functions as a living organism, while robots are entirely artificial. Man's naturalness imposes on him the harshest restriction, mortality, and with it the finiteness of the individual mind, which could otherwise exist (and develop) forever, had we not remained, at bottom, merely hairless monkeys. In turn, the artificiality of robots determines their extreme primitiveness compared to man and hinders their natural socialization.
Transhumanization will probably proceed along two lines: the robotization of the human and the humanization of the robot. Robotization broadly means that people will increasingly become cyborgs. Today cyborgization advances primarily through prosthetic devices with simple artificial intelligence: above all traction limb prostheses and electronic prostheses of hearing and vision. But there is no doubt that the elimination of pathologies will gradually grow into self-improvement, and those with the necessary funds will enhance the parameters of the brain, implant the corresponding processors, use exoskeletons and endoskeletons, and eventually pass into some form of artificial body in which, as with avatars, only consciousness will remain of the natural first principle. Recorded onto a flash drive.
The humanization of robots today develops along the line of imitating the human mind and body. Neural networks, bionic mechanisms, self-learning search, translation algorithms, image and music recognition: all of this is movement along the rails of biomimetics, where the terminal station is the creation of an android completely indistinguishable from a human in body and mind (the Turing test). What ethical standards should apply to a cyborg in which only a digitized natural self remains of the man, and to a bionic humanoid whose artificial intelligence imitates the human to the point of indistinguishability?
I think the issue will be resolved in favor of a rule of origin, a kind of "right of blood": both cyborgs and humanoids will be recognized as trans-persons (Transhumans), but of different classes, with original (cyborgs) and acquired (humanoids) humanity. Originally human organisms will hold certain exclusive rights, such as the right to kill. It is precisely the status of this right that Gianmarco Veruggio considers the key question of the future: can we allow a robot to kill a human? And he answers: humanity must never take that step. Both the birth and the death of a human should always be the affair of another human alone.
Humanized androids, for their part, can console themselves with the answer the robot teddy bear gives its owner, the robot child of Artificial Intelligence:
- Are dad and mom real?
- Don't ask silly questions. No one knows what "real" means.
Robotics for our own
The consequences of European developments in the field of ethics should not be underestimated. In today's world, the war of standards is one of the types of hybrid action: if you do not feed your own army (read: do not develop your own standards), you will end up feeding someone else's. Russia, too, ought to attend to developing its own roboethics; otherwise it will have to submit to the next European committee on research ethics. And it is not so important, by the way, what kind of ethics it will be. If a hypothetical Milonov develops an Orthodox ethics for robots, even that would be a peculiar but original word in the global wave of roboethics development. It would not be so hard, given that the Russian Orthodox Church has adopted the Fundamentals of the Teaching on Dignity, Freedom and Human Rights, as well as the Basic Social Concept of the ROC, both readily applicable to the processes of robotization. Besides, robots, in the words of Patriarch Kirill, are "creatures of God," and from the point of view of Christian morality they can well be regarded as God's way of showing man His design: just as the Creator once made man in His likeness, so now man can better understand His idea by creating a robot in his own image. In the end, deliverance from worldly affairs leaves a person more time for prayer. The author does not advocate "Orthodox robotics," but merely seeks to show that riding the growing robotic trend even under ultra-conservative ethics is not much of a challenge.
South Korea, the US, and the EU have already attended to creating their own robotics standards, and relevant discussions are under way in Italy and Japan. Nothing prevents Russian experts from joining the club of legislators of robotic morality: the same Gianmarco Veruggio has already recorded material for the Russian project "PostNauka." To take part from an early stage in the process that will produce the rules of life and death for artificially intelligent beings is not only useful but genuinely exciting.
