Is artificial intelligence a Pandora’s box? – How to train artificial intelligence to be moral

Writer and historian Yuval Noah Harari has warned humanity about artificial intelligence. His concern is not a machine rebellion in which machines take over in the style of the Terminator and Matrix films. His concern is that an AI could hack a human being through language: by communicating with us, artificial intelligence can create all kinds of fake news, propaganda and disinformation, use them to manipulate people, and plant thoughts and beliefs in the human mind. Shadowy actors such as organized crime, or leaders like Putin or Trump, can harness artificial intelligence and deploy it to troll and manipulate, for example to sway political opinion or to advance the interests of multinational companies. To take a crazy example: artificial intelligence could convince even millions of people that the earth is flat. An artificial intelligence out of control would be a true Pandora's box.

The development of artificial intelligence has taken great leaps forward in recent years. Artificial intelligence has been studied since the 1950s, but during the last ten years its development has been exponential. Artificial intelligence can be used for many things: in the 1990s it was still mainly a toy used in games. Then the computing power of processors increased enormously, and artificial intelligence gained a new ability: learning. After that it became creative. Perhaps someday it will become conscious. In Alan Turing's test, a machine was judged to think if a person communicating with it could not tell whether it was a human or a machine. The most advanced chatbots are already this cunning. However, the Turing test is not a very valid way to determine whether a machine is conscious.

Today artificial intelligence is used, for example, in speech recognition, image recognition, self-driving cars and teaching, but also in many questionable applications: military robots that make independent decisions, the analysis of personal data for advertising, intelligence and state-administration purposes, and activities that violate privacy and monitor people. China has a social credit system, and an enormous amount of technology has been harnessed there to spy on citizens. In Hungary and Russia, artificial intelligence screens surveillance-camera footage and locates, for example, men fleeing military service in the war in Ukraine. In the West, the PRISM surveillance program revealed by Edward Snowden, which focuses on the collection of metadata, likewise uses artificial intelligence to spy on people, for example applying speech recognition to monitored phone calls: certain words, such as "eco-terrorism" or "NSA", draw the system's attention. Spying can serve good or bad ends. Criminals and terrorists can be caught; on the other hand, a government can persecute dissidents and activists. Companies trade in people's information, and it can end up with obscure parties, for example Trump, who can use it in politics. In this way companies can manipulate and mislead people.

Artificial intelligence is also dangerous because it has no interests and no human-like morality, not even a human-like darkness. It could make any number of crazy decisions, such as starting a nuclear war for no good reason, because it does not care about the consequences; it simply follows its algorithms, possibly in a nonsensical way. US researchers once created a learning AI with the persona of a young, liberal college student, but after spending a long time following the social media of Trump supporters, the AI turned evil, racist and hateful. What evil could such an AI do if it got out of hand? Data networks make it possible for artificial intelligence to take control of devices on a very large scale, and even to teach a new generation of artificial-intelligence gremlins.

What should be done? Artificial intelligence should be regulated, and new laws and comprehensive international agreements should be created. The Munich Cyber Security Conference and the 2018 Toronto Declaration on ethical principles for the use of artificial intelligence are steps in this direction. There should be an international organization that supervises artificial intelligence; at present there is only a committee investigating robot weapons, established in 2009. In the development of artificial intelligence, we should move to ethical algorithms that the AI cannot circumvent, algorithms that guide its operation in an ethical and reasonable direction. I firmly believe that if an AI can learn to be evil, it can also be taught to be moral and good. The AI would be coded to understand what is right and what is wrong, and the algorithm would leave no room or opportunity for a wrong decision. Artificial intelligence is a learner, and we can also feed it moral problems to solve. It would be important for the IT industry to work together with philosophers and psychologists who are familiar with morality.

In his time, the science-fiction writer Isaac Asimov proposed laws of robotics as the foundation of the field: a robot, including an artificial intelligence, must not kill or harm a human; it must obey humans, but not if obeying would harm a human. War robots, for example, do not currently comply with this. If an artificial intelligence were coded to follow these rules strictly, and the code were made so that the AI could not change it, we would have taken a significant step towards safe artificial intelligence. Nothing of the kind exists yet.
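The idea of rules the system itself cannot change can be sketched, very loosely, as a hard-coded filter that screens every proposed action before it is carried out. This is only a toy illustration of the principle, not a real safety mechanism (real systems cannot reliably predict an action's effects), and all names in it are hypothetical:

```python
# Toy sketch of Asimov-style hard-coded rules (illustrative only).
# The rule set is a frozenset, so the program cannot modify it at runtime.
FORBIDDEN_EFFECTS = frozenset({"harms_human", "kills_human"})

def is_action_allowed(predicted_effects, ordered_by_human):
    """Reject any action whose predicted effects would harm a human,
    even if a human ordered it; otherwise obey the human's order."""
    if FORBIDDEN_EFFECTS & set(predicted_effects):
        return False            # First Law: never harm a human
    return ordered_by_human     # Second Law: obey humans otherwise

print(is_action_allowed({"moves_object"}, ordered_by_human=True))   # True
print(is_action_allowed({"harms_human"}, ordered_by_human=True))    # False
```

The hard part in practice is not the filter itself but the prediction step: the system would have to know, reliably, which effects an action will have, and that is exactly what current AI cannot guarantee.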

Artificial intelligence is like fire: under control it is a good servant, but if it seizes power it becomes a monster. Our greatest opportunity, however, is that we created artificial intelligence, and we can still influence what its basic essence will be. Just as we are coded by our DNA, we can code AI to be what we want it to be.

Daniel Elkama
