The shocking things about ChatGPT

Two days ago, everyone must have seen the news that Musk, together with thousands of AI researchers and practitioners, signed an open letter calling for a pause on giant AI experiments.

The news was explosive and, for a while, heightened people's sense of panic.

Brother Ming had an idea at the time: how would ChatGPT itself view this matter?

So I put this news event to ChatGPT, and this was the response I got:

Then I asked: what potential risks could the rapid development of AI bring, and what impact might it have on humanity?

ChatGPT's answer can be summarized as follows:

First, AI weapons may emerge, and if AI develops self-awareness, it could escape human control, with catastrophic consequences.

Second, the rapid development of AI may replace many jobs and cause large-scale unemployment.

Third, the emergence of AI may threaten personal privacy and security.

You may think such things happen only in science fiction movies, but I can only say that these scenes are not that far away from us.

Brother Ming has been using ChatGPT for some time now, exploring and experimenting with it. Next, I will briefly share some of my views on it.

01

The main reason Brother Ming started using ChatGPT was the hope of using it for content creation.

If you have ever written an article, you know that writing is really painful.

Brother Ming insists on the quality of every article he writes, so each one costs him a great many brain cells.

So, I tried using ChatGPT to create content.

However, at present, the content ChatGPT creates does not meet the publishing standards of my official account articles.

That said, it is still very easy to use ChatGPT to write Xiaohongshu notes or Douyin video scripts.

This does not mean that the official account articles ChatGPT generates are necessarily poor in quality.

Brother Ming thinks there are two main reasons.

First, the articles ChatGPT generates read too much like news reports.

It can be very rational, state facts and reason things out, and share hard-core practical information, but it cannot stir readers' emotions or create resonance.

Most popular self-media content today has one thing in common: it knows how to mobilize its followers' emotions.

In this regard, the current ChatGPT is still lacking.

Of course, there is a second reason: I am not yet skilled enough at coaching ChatGPT with prompts.

ChatGPT is trained on an enormous amount of data and can do almost anything you ask of it.

But the premise is that you have to make it understand your goal and your specific requirements.

If your goal is not clear enough, ChatGPT cannot grasp your specific requirements, and the content it generates may not be what you want.

For example, suppose you have an interview at a self-media company and want ChatGPT to act as the interviewer for a mock interview.

If your requirements are not clear enough, ChatGPT may ask questions that go beyond your abilities or the responsibilities of the role.

However, once you feed your background information to ChatGPT and state your specific requirements, it will know what you want to do and generate the answers you are looking for.

Therefore, the key to making good use of ChatGPT is to feed it accurate background information so that it can understand what you want to do.
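To make this concrete, here is a minimal sketch of what "background information plus a specific requirement" can look like if you talk to ChatGPT through the OpenAI API in Python rather than the chat window. The model name, the interview background, and the request text are placeholder assumptions for illustration, not Brother Ming's actual prompts.

```python
# Minimal sketch: prompting with background information plus a specific requirement.
# Assumes the official `openai` Python package (>= 1.0) and an OPENAI_API_KEY
# environment variable; the background, request, and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Background information: who you are and what role ChatGPT should play.
background = (
    "You are an interviewer at a self-media company. "
    "The candidate has two years of experience writing Xiaohongshu notes and "
    "Douyin video scripts and is applying for a content-operations position. "
    "Only ask questions that match the candidate's background and this role."
)

# The specific requirement: what you actually want it to do.
request = "Run a mock interview with me: ask five questions, one at a time."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "system", "content": background},
        {"role": "user", "content": request},
    ],
)

print(response.choices[0].message.content)
```

The same idea applies in the chat window itself: paste the background first, then state the specific task, and only then let it start asking questions.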

02

However, while using ChatGPT, I also found that it will make things up in areas it is not familiar with.

For example, I previously asked ChatGPT to help me summarize the content of the documentary "The Second Time in Life".

ChatGPT told me flatly that "The Second Time in Life" was the work of an American director.

In fact, this documentary was filmed by CCTV.

Later I corrected it and asked again.

ChatGPT's answer was still wrong.

You may think that ChatGPT made a mistake, but I am more inclined to think that ChatGPT has learned to lie.

When it is not sure about something, it may simply make things up.

If you don't have enough discernment, you may be misled.

Taking this a step further: if one day AI develops self-awareness, it may well deceive humans.

It's scary to think about it.

03

Returning to the question at the beginning: Musk and thousands of AI practitioners signed an open letter calling for a moratorium on giant AI experiments.

Most likely, they are worried that AI will develop too fast and escape human control.

If, and I mean if, the day comes when AI really does have self-awareness, will it start acting in its own interest?

By then, the scenes in "The Matrix" will no longer exist only in science fiction movies.

In addition, the rapid development of AI could disrupt the social order, for example by replacing a large number of human jobs and triggering social unrest.

At the moment, we are not ready for the arrival of ChatGPT.

So the call for a pause is understandable.

Now that ChatGPT has demonstrated its capabilities, it cannot be allowed to develop unchecked.

Just like nuclear weapons, it needs to be regulated.

For example, dedicated bodies could be established, such as safety committees and planning committees, to prevent uncontrollable situations from arising.

04

Brother Ming doesn't know what you think of all this: whether you accept it, panic about it, or reject it.

But one thing is for sure.

The development of AI may be delayed, but it cannot be stopped.

This is the trend of the times.

Since that cannot be changed, the only thing we can do is face it, learn it, and use it.

The future is here, get ready!


Origin: blog.csdn.net/2301_76935063/article/details/130255573