
Artificial superintelligence: fact or science fiction?
Applications that exceed the human capacity to think, that are aware of themselves and of what they are, and whose decisions are hard to predict. Stephen Hawking and Elon Musk warned of them, while others ridiculed the warnings. Is this a reality, or just a myth?
Can robots become smarter than humans?
Few questions are as confusing and controversial as this one. We have certainly heard it on more than one occasion, and in more than one form. Will there come a day when robots take over the world? Can machines become smarter than their makers?
Each time the question was asked, we received a different answer.
Let's leave science fiction aside; it is, as they say, just fiction. Let's start from 1997, the year the supercomputer Deep Blue defeated world chess champion Garry Kasparov. The match ended in the computer's favor: two wins for Deep Blue against one for Kasparov, with three draws.
While some saw this result as an early warning that machines would one day control humans, others considered the comparison unfair, because this kind of contest depends only on the volume of stored information and the speed of retrieving and processing it, and has nothing to do with intelligence and cognition.
Computers can perform calculations and process information much faster than humans, and the numbers confirm it. The Tianhe-2 supercomputer, currently the fastest in the world, can perform 33.6 quadrillion (33.6 million billion) calculations per second.
Even the most optimistic estimate of human processing power, offered by the scientist Chris Westbury of the University of Alberta, places human capabilities far below those of computers.
It is certain that computers process information faster than humans do. But is intelligence simply the speed of information processing?
Intelligence, scientists agree, is an entirely different kind of activity. There is logical intelligence, emotional intelligence and social intelligence, along with the ability to learn and adapt.
The superiority of computers over humans in processing information does not mean that they possess logical and emotional intelligence, or that they can adapt and learn as humans do.
A person placed in new circumstances can draw on the information and memories accumulated in the past to adapt to situations never experienced before, and to devise solutions that were not previously stored in memory.
In 2009, scientists at Cornell University built a program that observed and analyzed the motion of a pendulum, and, using basic tools its makers had programmed into it, the program managed within a single day to deduce fundamental laws of physics!
In one day, the program deduced what took us humans thousands of years to discover. But it did so, and we stress this, only with the tools and information its makers fed it. The machine always relies on specific tools to perform specific tasks.
But the software itself cannot develop its own tools or adapt the way our brain does. That, at least, is what most scientists believe so far.
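The Cornell system used evolutionary symbolic regression over full mathematical expressions. As a rough illustration of the underlying idea, here is a minimal Python sketch, written for this article and not taken from their work: it simulates a frictionless pendulum and then searches a tiny, hand-picked bank of candidate terms for a combination that stays constant over time, i.e., a conservation law. The term bank, the simulation constants and the pairwise search are all illustrative simplifications.

    import itertools
    import numpy as np

    # Toy version of the Cornell experiment. The real system searched
    # arbitrary expression trees; this sketch only tries a tiny term
    # bank, but the goal is the same: find a combination of observed
    # quantities that stays invariant over time, a conservation law.
    g, L, dt = 9.81, 1.0, 0.001  # gravity, pendulum length, time step

    # Simulate a frictionless pendulum (semi-implicit Euler integration).
    theta, omega, samples = 1.2, 0.0, []
    for _ in range(20000):
        omega += -(g / L) * np.sin(theta) * dt
        theta += omega * dt
        samples.append((theta, omega))
    theta_s, omega_s = np.asarray(samples).T

    # Candidate terms the program is allowed to combine (its "tools").
    terms = {
        "omega^2": omega_s ** 2,
        "cos(theta)": np.cos(theta_s),
        "theta^2": theta_s ** 2,
        "sin(theta)": np.sin(theta_s),
    }

    # For each pair (f, h), fit h = a*f + b by least squares. A near-zero
    # residual means h - a*f is constant along the whole trajectory,
    # which is exactly what "conserved quantity" means.
    results = []
    for (name_f, f), (name_h, h) in itertools.combinations(terms.items(), 2):
        a, b = np.polyfit(f, h, 1)
        residual = np.std(h - (a * f + b)) / np.std(h)
        results.append((residual, f"{name_h} = {a:.3f} * {name_f} + {b:.3f}"))

    results.sort()
    print("best invariant found:", results[0][1])
    # The winner links cos(theta) and omega^2: energy conservation in
    # disguise, since 0.5 * L**2 * omega**2 - g * L * cos(theta) is constant.

Scaled up from four hand-picked terms to a search over arbitrary expressions, this is essentially how a program could rediscover in a day laws that took humans centuries, while still depending entirely on the tools its makers gave it.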
The problem is not whether a machine can simulate the brain; it is that we do not yet know exactly how our own brain works. How can scientists simulate something we do not yet understand?
A historical perspective
The scientist and author Ray Kurzweil, a director of engineering at Google, once predicted that it was only a matter of time before computer systems capable of "self-awareness" were developed; that is, systems able to analyze and develop their own capabilities to improve their performance. He went further still, igniting controversy with an article in which he predicted that robots would be smarter than their makers by 2029!
In the euphoria of innovation and scientific success, researchers have tended to make statements that some regard as somewhat exaggerated, so it is not surprising that they have drawn much criticism. In 1958, for example, the American Herbert Simon, who later won the Nobel Prize in Economics, declared that within ten years a machine would become world chess champion, unless the rules excluded it from international competition.
Unfortunately for Simon, progress soon slowed. In 1965, a ten-year-old child managed to beat a computer at chess, and in 1966 a report commissioned by the US Senate pointed to what it called the intrinsic limitations of machine translation. Artificial intelligence then received bad press from which it did not recover for about a decade.
Research did not stop, however; it took a new direction, focusing on psychology, particularly on memory and on attempts to explain the mechanisms of understanding and to simulate them on computers, and attention turned to the role of knowledge in logical reasoning. This led to techniques for the "semantic representation of knowledge", which developed considerably in the mid-seventies, and in turn to the development of so-called expert systems, so named because they encode the knowledge of professional experts in order to reproduce their way of thinking. These systems raised great hopes in the early eighties thanks to the many applications they produced, for example in medical diagnosis, where the machine outperformed doctors in diagnosing diseases.
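To make the idea of an expert system concrete, here is a minimal sketch of the classic architecture: knowledge captured as hand-written if-then rules, applied to a set of known facts by a forward-chaining inference loop. The symptoms, rules and conclusions below are invented for illustration and do not come from any real medical system.

    # Minimal forward-chaining expert system: knowledge lives in
    # hand-written if-then rules, and the engine keeps applying them
    # until no new facts appear. Rules and facts are invented examples.
    RULES = [
        ({"fever", "cough"}, "respiratory_infection"),
        ({"respiratory_infection", "chest_pain"}, "suspected_pneumonia"),
        ({"suspected_pneumonia"}, "recommend_chest_xray"),
    ]

    def infer(facts):
        """Forward chaining: fire rules until a fixed point is reached."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    print(infer({"fever", "cough", "chest_pain"}))
    # Adds: respiratory_infection, suspected_pneumonia, recommend_chest_xray

The strength and the weakness of this design are the same thing: every rule had to be extracted from a human expert by hand, which is precisely the bottleneck that machine learning later removed.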
All of this became possible with the breakthrough in machine learning that accompanied improved algorithm design, enabling computers to collect data and knowledge and, in effect, to reprogram themselves automatically from their own experience.
What happened next can be seen everywhere around us, in industrial applications covering one sector after another, from the recognition of fingerprints, faces and speech to the identification of smells. Today, smart applications are used in media, music and literary creativity, as well as across industrial sectors and other disciplines, often in hybrid systems that combine humans and machines.
The rise of artificial intelligence
Developers did not merely deliver computer-powered applications; the allure of science fiction imposed itself again and returned to the forefront. By the end of the nineties, artificial intelligence had become linked to humanoid robots that blend human and machine, a trick meant to suggest that the machine has emotions, especially with the development of robots capable of speech.
Since 2010, the growing power of machines has made it possible to exploit big data with deep learning techniques based on neural networks. As a result, a flood of applications appeared, capable of recognizing speech, distinguishing images, understanding natural language, and driving cars and planes. We now speak of a renaissance of artificial intelligence, whose capabilities in some tasks have come to exceed those of humans.
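As a rough illustration of what "deep learning with neural networks" means at its smallest scale, the sketch below trains a tiny two-layer network by gradient descent to learn the XOR function, using nothing but numpy. Real speech and vision systems differ enormously in scale and architecture; the network size, learning rate and task here are illustrative choices, but the mechanism (weighted sums, nonlinearities, and learning from error) is the same.

    import numpy as np

    # A tiny two-layer neural network learning XOR: the same mechanism
    # that, scaled up massively, powers speech and image recognition.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # hidden layer
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # output layer

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5  # learning rate
    for step in range(5000):
        # Forward pass.
        h = np.tanh(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2)
        # Backward pass: gradients of the squared error.
        dp = (p - y) * p * (1 - p)
        dh = (dp @ W2.T) * (1 - h ** 2)
        # Gradient-descent update.
        W2 -= lr * (h.T @ dp)
        b2 -= lr * dp.sum(axis=0)
        W1 -= lr * (X.T @ dh)
        b1 -= lr * dh.sum(axis=0)

    print(np.round(p.ravel(), 2))  # should approach [0, 1, 1, 0]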
A machine beat the world chess champion in 1997; in 2016, other machines outperformed one of the world's best Go players and excellent poker players. Computers now prove, or help prove, mathematical theorems, and knowledge is built automatically, using machine learning techniques, from huge volumes of data measured in terabytes and petabytes.
Thanks to machine learning techniques, a UNESCO report states, "machines can now recognize and transcribe speech, much as a secretary taking dictation once did; others accurately identify faces or fingerprints from among tens of millions, and read texts written in natural language. Thanks to these techniques, there are now autonomous cars, and machines able to diagnose melanoma better than dermatologists can, from photographs of skin moles taken with mobile phones. Robots are replacing human combatants in wars, and the mechanization of production chains in factories keeps increasing."
The second paradigm shift is the link between artificial intelligence and biotechnology: scientists use these techniques to determine the function of certain biomolecules, notably proteins and genomes, from the sequence of their components, amino acids in the case of proteins and nucleotide bases in the case of the genome.
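A heavily simplified illustration of the sequence-to-function idea: a protein is a string over a twenty-letter amino-acid alphabet, so it can be converted into numeric features, here counts of short subsequences known as k-mers, and then compared or fed to a learned model. The sequences and the similarity measure below are toy inventions; real sequence-analysis tools work at vastly greater scale and sophistication.

    from collections import Counter

    # A protein is a string over the 20-letter amino-acid alphabet. One
    # classic trick is to describe it by its k-mer counts, turning a
    # variable-length sequence into features a model can compare or
    # learn from. Sequences below are invented for illustration only.
    def kmer_counts(seq, k=3):
        return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

    def similarity(a, b):
        """Fraction of shared 3-mers: a crude stand-in for a learned model."""
        ca, cb = kmer_counts(a), kmer_counts(b)
        shared = sum((ca & cb).values())
        return shared / max(1, min(sum(ca.values()), sum(cb.values())))

    known_enzyme = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
    candidate = "MKTAYIAKQRQISFVKAHFSRQLEERLGLIEVA"
    unrelated = "GGGGSSSSPPPPLLLLAAAAVVVVTTTTNNNN"

    print(similarity(known_enzyme, candidate))  # high: likely same family
    print(similarity(known_enzyme, unrelated))  # zero: no shared 3-mers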
The decline of the human role
All this, according to many scientists, still does not give artificial intelligence the ability to perceive. That conclusion may sound more scientific than the warnings about robots controlling humans, which carry the flavor of science fiction. Yet if we look closely, the scientists who reject the idea of machine control are arguably the more distant from science, since they start from a view that sanctifies the mind and treats it as a mysterious world.
The mind is much simpler than that: it is just a machine that stores information and memories, learns from them and develops. This agrees with the view of the prominent contemporary physicist, the Briton Stephen Hawking, who backed the Centre for the Study of Existential Risk in Cambridge before his death in 2018.
Hawking revealed his fear of what he called "a doomsday on which artificial intelligence rebels against humans, leading to their extermination or, at best, their enslavement."
Regardless of which of the two camps proves correct, and although "modern technologies are still far from escaping the human grasp," the concern is justified. Social changes have begun to appear, especially over the past two years, as the Covid-19 pandemic hastened the substitution of technology for human labor, destroying jobs on a large scale: self-driving cars, home delivery services, and robots performing jobs inside the home.
Fears of "machine takeover and human decline" are not science fiction; the feeling has spread through specialist scientific circles and among those who follow them. One of the most famous investors in the West, Elon Musk, has declared that "artificial intelligence is the greatest threat to our existence as humans," comparing thinking machines to "nuclear weapons" and "the devil."
And if the Swedish philosopher Nick Bostrom of the University of Oxford reassures the world that science will not manage to invent machines with an intelligence surpassing human intelligence "before the year 2075," pushing the date back in time does not deny the danger; it confirms it.
Despite this concern, Dr. Eric Horvitz, a researcher at Microsoft, says he is optimistic that humanity will benefit from ongoing artificial intelligence research, and even believes that such research may help compensate for human failings.