Fengciyuan's Tech Tea House: A Flare Gun from the Future


A long time ago, a friend asked me: now that tech news and information are so abundant, why do you still bother writing?

Here is how I see it. A news item that can be condensed into a piece of information is really just the end result of a chain of events, and that result will in turn produce more new results. The processes of arriving at the result and producing the next ones are hidden behind the information, invisible and ambiguous.

If we only want the general idea, and our interest in tech content is merely seasoning for chats with friends, then today's abundant information is certainly enough. But if you want to make science and technology your study and career, an ability you genuinely understand and command, then you need a process of identifying, reasoning about, and predicting from tech information.


Humans are highly imaginative animals who like to picture the whole situation from the signals they see. But what actually happened beneath the flare, and what will happen next, is often rather complicated and not quite what we imagined. It is like the feudal lords seeing the beacon smoke and assuming the kingdom was in peril, when in fact King You of Zhou had lit the fires merely to win a smile from his consort Baosi.

Tech information is sometimes like this kind of flare. We need not only to see it, but also to identify and analyze it.

Today I will pick a few news items and talk with you about the uncertain futures beneath the flares.

AI will exterminate humanity

Should it be banned?


The first item is no longer news, but it caused quite a stir at the time.

At the end of May, more than 350 executives, experts, and professors in the AI field signed an open letter warning that AI may pose an extinction risk to humanity. The signatories included big names such as OpenAI CEO Sam Altman, the "father of ChatGPT," and DeepMind founder and CEO Demis Hassabis.

So some friends said: these AI insiders themselves admit AI will destroy humanity, so what are we waiting for? Ban it quickly, or Terminator and The Matrix will soon play out in real life.


But look at it from another angle: so many industry executives warned us to guard against AI running out of control, yet which of them led by example and gave up their own AI business? None, obviously.

Such warnings are not uncommon in the history of AI, or in the history of technology generally. On the one hand, it is an industry insider's duty to draw society's attention to the risk of losing control. On the other hand, it is also the expected thing to say under the current climate of correctness in Europe and the United States.

A large part of our fear of AI comes from science fiction literature and film, but in truth the disordered development of any industry can have devastating effects. That is true of chemicals, energy, manufacturing, and even entertainment. Regulating development is of course important, but regulation does not mean banning, let alone panicking.

Fire is dangerous too, yet learning to use it is what marked humanity's farewell to its ape form.

So, don't be afraid of AI.


Free large models

Scared yet?

Another hot topic of recent days is the open-sourcing of Llama 2. The discussion it has sparked inside the AI industry seems even bigger than the one outside it.

The logic of the controversy is easy to understand: free, open-source large models have appeared, so are the closed-source large models that cost so much money to build now wasted effort? Noticeably, some friends who have just entered the AI industry, or just invested in AI projects on the strength of the large-model boom, are very anxious about this.


This is actually hard to argue. From the history of software development, open source is only one competitive strategy; it suits some fields and not others. Not all software eventually becomes open source, and open-source large models have plenty of problems, such as being unable to meet the security, privacy, and controllability needs of many enterprise users. At the same time, open source drives down the profit margins of algorithm suppliers and weakens their service capabilities, making it harder to satisfy user needs. Judging just from the decade since deep learning algorithms rose to prominence, most mainstream algorithm models have been closed-source.

In addition, the capabilities of open-source models are generally not strong, so it is difficult for open-source large models to disrupt the industry order over the long term. The article "Large models: open source can't kill closed source" explains this in detail.

In fact, for friends who have just entered this field, the thing to worry about is not the impact of open source, but that large models, like much basic software, will eventually consolidate from many players into a few. How to keep your own value intact through that process is the question worth paying attention to.

Musk makes a move

A reshuffle of the Western internet?


Another hot topic of the past two days is Musk's announcement of his Super X plan. As Twitter's renaming accelerates, the general belief is that Musk will turn the "new Twitter" into a super app on the "Weibo + WeChat" model.

Out of conviction in Musk's ability to get things done, many friends believe the internet in Europe and the United States is about to be reshuffled, and that this may even affect the internet in China to some degree.


Personally, I am more cautious. If we set aside Musk's personal aura and look only at the projects he has been involved in, most of them, Tesla aside, have progressed slowly and delivered poor commercial results. Of course, this is partly because those projects are generally ahead of their time. The new Twitter must not only fight a melee with Meta (as an aside, who knows when the cage match between the two CEOs will actually take place), but also withstand pressure from Google and Apple.

At the super-system level of the Western internet, the most monopolistic power is not any single terminal but Google, which is entrenched at the base layer beneath multiple terminals. Its ubiquity is something even Apple struggles to match.

There is reason to believe that, with Musk's enormous traffic and personal appeal, the new Twitter will see sharp short-term growth. But long-term competition is likely precisely what "Super X" is not good at.

Of course, the new Twitter will inevitably incorporate more of the intelligent capabilities brought by xAI. This leaves much to the imagination, and it will likely become the target of the next round of copycat products in China's tech circles.


GPT-4 is getting dumber. Is AI in trouble?

Recently there has been another piece of not-so-good news for AI: GPT-4 seems to have gotten dumber.

On July 20, research teams from Stanford University and UC Berkeley reported that, comparing the March and June versions of GPT-4, performance had declined on math problems, code generation, and visual reasoning tasks.

OpenAI soon responded to this claim on its blog, saying that although most metrics have improved, GPT-4 may indeed perform worse on some tasks.


Many voices then appeared again. Some felt that GPT-4, the standard-bearer, was no longer any good: is the AI boom fizzling? Others leaned toward conspiracy theories, believing OpenAI had degraded it deliberately.

Of course we do not know the real problem behind this phenomenon, but we can discuss one relatively benign explanation: GPT is, at its core, a model that is continually re-optimized on feedback. When the volume of feedback drops, and especially when high-quality feedback is lacking, its capability may degrade.
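As a purely illustrative toy (nothing here reflects OpenAI's actual training pipeline; the function and numbers are invented for this sketch), the intuition can be shown with a single "quality" parameter estimated from noisy user feedback: plentiful, clean feedback pins the estimate down, while sparse, noisy feedback leaves it unreliable.

```python
# Toy sketch: a model tunes one "answer quality" parameter by averaging
# thumbs-up/down-style user feedback. This is NOT how GPT-4 is trained;
# it only illustrates why less (and noisier) feedback hurts.
import random


def tune_from_feedback(true_quality: float, n_feedback: int,
                       noise: float, seed: int = 0) -> float:
    """Estimate a quality score by averaging noisy feedback samples."""
    rng = random.Random(seed)
    if n_feedback == 0:
        return 0.0  # no feedback at all: the model learns nothing
    samples = [true_quality + rng.gauss(0, noise) for _ in range(n_feedback)]
    return sum(samples) / n_feedback


true_q = 0.8
# Plentiful, relatively clean feedback: standard error ~0.005
rich = tune_from_feedback(true_q, n_feedback=10_000, noise=0.5)
# Sparse, noisy feedback: standard error ~0.63, estimate can land anywhere
sparse = tune_from_feedback(true_q, n_feedback=10, noise=2.0)

print(f"rich-feedback estimate:   {rich:.3f}")
print(f"sparse-feedback estimate: {sparse:.3f}")
```

Real feedback-driven fine-tuning pipelines are vastly more complex, but the statistical point carries over: fewer and noisier preference signals mean a worse-calibrated model.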

Things may have moved in this direction because of OpenAI's increasingly complex and strict usage policies, and because more and more high-quality large models have officially launched, diverting the traffic once concentrated on GPT.

A teacher friend once told me that when the top student's lead is too large, it is actually bad for the whole class. If one AI is getting dumber, perhaps it means the AIs in the class have, on the whole, gotten smarter?

Miaoya Camera is on fire

Should you go all in on ID photos?


Back in China, the gratifying news is that an AI application has finally taken off. Miaoya Camera has attracted enormous attention in a short time, and of course it has also triggered a series of discussions.

Among these discussions, the one we find least necessary goes like this: the value of AI in ID photos is now obvious, so invest immediately and go take on Hippocampus, the Chinese ID-photo studio chain.


This is a textbook case of seeing the trees but not the forest. With a little imagination, you will find that the application changes brought by large models are countless, and generating ID photos is only a tiny one of them.

Rather than rushing to go all in just because ID photos have taken off, it is better to think through the underlying logic, application costs, and business models of large models, and then discover what other similar needs can be filled.

Native applications of large models are the greatest source of imagination this round of the AI boom can offer, so do not be fooled into pinning that great opportunity on a passport photo.

In short, behind all kinds of information lie many uncertainties about the future. We need to take the long view, and must not treat the present moment as a golden rule, or the current excitement as a law.

See what can be seen, know what can be known, and reach what has not yet been reached. That is the best state for us in the age of intelligence.



Source: blog.csdn.net/R5A81qHe857X8/article/details/132013869