How scary is AI?

It’s horror movie season. People all over the world watch horror movies and visit haunted houses to scare themselves for fun. Monsters we control can be entertaining, but things we can’t control are what truly frighten us. For some people, uncertainty about the future is the nightmare; for others, it’s AI that makes their skin crawl. The thought of an AI behaving unexpectedly can be chilling (we've all seen the ending of The Terminator, right?), but how likely is that to happen in the real world? Whatever you think of horror movies like these, the fact is that AI is here to stay, and there’s no reason to worry. Below, we debunk some of the most popular myths that make people uneasy about AI, so you can rest easy knowing there are no Cylons plotting to destroy the world.

Movies are entertainment, not reality

We watch movies for entertainment, but sometimes their plots hit a little too close to home. Have you ever seen your own life, dreams, or fears portrayed on screen? Think of "The Terminator," "Blade Runner," "2001: A Space Odyssey" and "The Matrix." What do these movies have in common? They all depict a future in which AI takes over the world. Most AI myths are the product of this kind of cinematic exaggeration:

  • AI will take over the world
  • AI will rule humanity
  • AI will become sentient and think like humans
  • AI will evolve to advanced levels beyond human control

In these movies, the world is usually a dystopia: cities in ruins, machines running rampant, and humans cowering in the desolate distance. Their storylines share a common theme: AI systems and robots designed to help humans suddenly become sentient, decide they have been treated unfairly, band together to fight back, and set out to remake the world in their own image, leaving only a handful of determined warriors to save humanity. Although those underdog warriors usually win in the end and take the world back for mankind, the premise is enough to plant seeds of unease about the future in some viewers. Fortunately, movies are just that: fiction. For a start, it's worth stepping back and remembering that we are still in the early stages of AI development. The human mind is extraordinarily complex, and while some people may believe that AI programs and robots can think for themselves, that simply isn't the case.

The AI revolution has not yet succeeded; AI still has work to do

Let's look at how AI actually works. The data life cycle for training an AI algorithm includes the following stages (a minimal sketch follows the list):

  1. Obtain training data based on the end application;
  2. Label the data;
  3. Feed the annotated data into the model;
  4. Confirm that the model behaves as expected;
  5. Start the cycle again.

The great thing about machine learning is that, given enough data, a model can recognize and respond to new data that is a variation of the data it was trained on. Faced with a scenario it was never trained for, the model simply won't act. For example, if you train an AI model to recognize food in photos, it won't magically be able to write a symphony just because you instruct it to compose music. Extend this to the idea of an AI robot taking over the world: you would first have to provide it with the right training data, covering every scenario involved in learning to take over the world. Since nobody wants AI robots to take over the world, no AI is trained that way.

Furthermore, the human brain has no single, fixed way of thinking, while a robot's "thinking" stays exactly as it was trained; humans teach it how to think (or, more accurately, how to operate). Robots cannot generate genuinely new ideas: they simply lack the breadth of training data needed to develop sentience or human-level intelligence.

One could argue that the Terminator robots succeeded in taking over the world because they were trained as military machines. True, but don't forget how much trouble the Terminator had finding Sarah Connor, killing everyone with the same name along the way. (Note: Sarah Connor is the protagonist of the "Terminator" series and the mother of John Connor, future leader of the human resistance.) The Terminator's AI had neither the right skills nor enough training data to identify the correct person on the first try; a human wouldn't make that mistake. Like robots in the real world, robots in the movies have flaws and limitations: the Terminator failed at its one and only mission, and had it succeeded the movie would have been over in five minutes. Movies are good at fiction, but just to be safe, Elon Musk founded a company called Neuralink in part to make sure no Terminator/Blade Runner-style robots ever take over the world.
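To make the "food model can't write a symphony" point concrete, here is a small, hypothetical illustration: a toy classifier trained only on (made-up) food features can only ever answer with the labels it has seen, no matter what input it is given. The features, labels, and choice of a nearest-neighbor model are assumptions for the sake of the example, not a real image model.

```python
# A model can only map inputs onto the labels it was trained with.
# Given an out-of-distribution input, it still picks one of its known
# labels -- it cannot invent a new kind of output.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Pretend these feature vectors were extracted from food photos.
food_features = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
food_labels = ["pizza", "pizza", "salad", "salad"]

model = KNeighborsClassifier(n_neighbors=1).fit(food_features, food_labels)

# An input unlike anything in training (say, audio features, not a photo):
unknown_input = np.array([[42.0, -7.0]])
print(model.predict(unknown_input))   # still answers "pizza" or "salad"
print(model.classes_)                 # the only outputs it will ever produce
```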


Reprinted from: blog.csdn.net/Appen_China/article/details/134830967