A "departure" nine years ago: laying the foundation for multi-modality and competing for large-scale models

Original: Tan Jing


There are few secrets about the technical routes behind the world's large AI models; the viable routes can be counted on one hand.

And yet the world-famous GPT-4 is full of secrets.

These two statements are not contradictory. Why?

It is like answering the question, "How do you build a lithography machine?"

"Every mathematical formula, physical law, and working principle you need can be found in the library of any science and engineering university, but actually building a lithography machine is a completely different matter. The engineering problems to be solved along the way number in the hundreds of thousands."

The lithography-machine analogy comes from Dr. He Xiaodong, formerly Principal Researcher at the Deep Learning Technology Center of Microsoft Research Redmond, and currently Vice President of JD.com and head of the Intelligent Service and Product Department at JD Technology.

By pushing technology to its limits, human intelligence is opening the magic box of "machine intelligence".

Many scientists have, more than once, distilled the mystery behind that pursuit from their own life experience.

In July 2021, I collected nine pieces of experience that Dr. He Xiaodong shared at the JD AI Research Institute. I revisit them often, and there is always something to gain.


I share them here with his permission.

In these nine pieces of experience, Dr. He Xiaodong not only re-emphasized the importance of "engineering ability", but also passed on, without reservation, what he sees as the true essence of scientific research to his researchers.

Today, the explosion of large models has pushed AI engineering practice to a new peak. In Dr. He Xiaodong's view, progress in AI at the level of scientific principles is inseparable from engineering carried out to the extreme.

It is a matter of needing both, not choosing one.

History has repeatedly shown that technology is the core of innovation, but it also needs resources and management to deliver the expected results. Innovation is therefore not a solo performance by technology, but an ensemble with resources and management.

In today's world, one person with a pen can still win the Nobel Prize in Literature, but it is absolutely impossible for one person to build a competitive AI model with hundreds of billions of parameters.


Technology observers should pay attention to the details that are easily overlooked.

When a technical route builds up explosive momentum, the citation counts of its foundational papers from years earlier suddenly climb.

Five years on, citations of a 2018 paper on the attention mechanism ("Bottom-Up and Top-Down Attention") have quietly climbed (4,028 citations as of press time).

The academic value of this paper lies in raising a fairly basic question at a higher level: "How do you align language and image information across modalities at the semantic level?"

If you are excited about multimodal technology, keep the word "alignment" in mind; it will come up again.

A brief literature review shows that this paper is the culmination of three earlier papers and proposes a highly innovative attention mechanism. One of those three, "Hierarchical Attention Networks", had 4,953 citations as of this article's publication. As a rule of thumb, an AI paper with more than 1,000 citations already counts as highly cited.

In hindsight, three papers building up strength and one delivering the blow make this "three plus one" set a milestone.
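For readers who want the mechanics rather than the citation counts, here is a minimal, illustrative sketch of the idea behind query-conditioned ("top-down") attention over pre-extracted image region features ("bottom-up", for example from an object detector). The layer sizes, feature dimensions, and toy inputs are my own assumptions for the example, not the paper's exact implementation.

```python
# Illustrative sketch only: a query-conditioned ("top-down") attention layer over
# pre-extracted image region features ("bottom-up"). Shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownAttention(nn.Module):
    def __init__(self, region_dim: int, query_dim: int, hidden_dim: int = 512):
        super().__init__()
        self.proj_v = nn.Linear(region_dim, hidden_dim)   # project region features
        self.proj_q = nn.Linear(query_dim, hidden_dim)    # project the language query
        self.score = nn.Linear(hidden_dim, 1)             # one attention logit per region

    def forward(self, regions: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
        # regions: (batch, num_regions, region_dim); query: (batch, query_dim)
        joint = torch.tanh(self.proj_v(regions) + self.proj_q(query).unsqueeze(1))
        weights = F.softmax(self.score(joint), dim=1)      # (batch, num_regions, 1)
        return (weights * regions).sum(dim=1)              # attended image feature

# Toy usage: 36 detected regions with 2048-d features, a 1024-d question encoding.
attn = TopDownAttention(region_dim=2048, query_dim=1024)
pooled = attn(torch.randn(2, 36, 2048), torch.randn(2, 1024))
print(pooled.shape)  # torch.Size([2, 2048])
```

The point of the sketch is simply that the language side decides which image regions matter, which is one concrete way to begin "aligning" the two modalities.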


Interestingly, among all papers published at CVPR over the past five years, the "Bottom-Up" paper ranks in the top twenty by citations.

What is more interesting is that, among those top twenty papers, only "Bottom-Up" is about multimodality.

Put another way, within the top twenty it ranks first in multimodal technology (the other nineteen are all pure computer vision, lol).

The foundational work behind this multimodal paper came from He Xiaodong and the team behind JD Cloud's Yanxi artificial intelligence application platform.

CVPR ranks fourth among all journals and conference proceedings worldwide. Countless AI researchers race against its deadline just to get "a ticket".

Measured by the h5-index, the research value of the important work (not all of the work) published at CVPR is already on par with the journals Nature and Science.

From the publication of the first paper in 2014 to the present, time has not stood still; nine years have rushed by.

The importance of multimodal technology to large models is self-evident, and time waits for people with different ideas to arrive, by different paths, at the same destination.

Of these nine years, 2018 was particularly important.

That year, He Xiaodong became executive vice president of the JD AI Research Institute.

That year, Dr. He Xiaodong's team used the AttnGAN algorithm to generate a "photo" of a bird.

You could call it the "ancestral bird" of AI text-to-image generation.


It was a short-billed bird with red feathers and a white belly, plump and cute, with two thick black eyebrows, looking rather like the protagonist of the globally popular game "Angry Birds". Dr. He Xiaodong told me that he likes to keep souvenirs of milestone work, and for a while this little bird was his phone's lock screen.

That year, time seemed to open a door. Through the crack he saw a larger space, one he had never seen before, and he was confident he could reach it.

Dr. He Xiaodong said: "It is not just my team that must take the multimodal route to large models; other teams will have to take it too."

"If you take the multimodal large-model route, you must decide at which level to do multimodal fusion," he emphasized.

Obviously, this tests the decision-making ability of a research team's leader.

"This is a scientific question," Dr. He said. I added: "A scientific question very close to application."

Dr. He Xiaodong nodded in approval.

Hearing this, my heart surged.

I have always believed that in the world of large models, multimodal technology still holds great opportunities.

I asked Dr. He several exciting technical questions:

1. To achieve emergence in multimodal large models, is the current Transformer architecture sufficient, or does it need changes at the foundational level?

Dr. He said it may or may not be necessary; it still needs to be explored and studied.

2. Is alignment done at the semantic level or at the data level?

Dr. He's view: at the semantic level, or even lower.
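To make "alignment at the semantic level" concrete, here is a minimal sketch of contrastive image-text alignment in the spirit of CLIP-style training: both modalities are projected into one shared embedding space and matched pairs are pulled together. The tiny linear "encoders", dimensions, and loss setup are assumptions of mine for illustration, not Dr. He's or JD's actual method.

```python
# Illustrative sketch of semantic-level alignment: project image and text features
# into one shared space and train with a symmetric contrastive (InfoNCE) loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveAligner(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=768, embed_dim=256):
        super().__init__()
        self.img_head = nn.Linear(img_dim, embed_dim)   # stand-in for a vision tower
        self.txt_head = nn.Linear(txt_dim, embed_dim)   # stand-in for a language tower
        self.logit_scale = nn.Parameter(torch.tensor(2.659))  # learnable log-temperature

    def forward(self, img_feats, txt_feats):
        img = F.normalize(self.img_head(img_feats), dim=-1)
        txt = F.normalize(self.txt_head(txt_feats), dim=-1)
        logits = self.logit_scale.exp() * img @ txt.t()   # pairwise similarities
        targets = torch.arange(img.size(0))               # i-th image matches i-th text
        # Pull matched pairs together, push mismatched pairs apart, in both directions.
        return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

model = ContrastiveAligner()
loss = model(torch.randn(8, 2048), torch.randn(8, 768))
print(float(loss))
```

The design point worth noticing is that the matching happens in a learned semantic space, not on raw pixels or raw tokens.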


In my view, the starting point for multimodal large models is the large language model.

That is to say, to a certain extent, the research and engineering capabilities behind a large language model form the solid foundation of a large model.

At the start, the goal of He Xiaodong and the Yanxi team's large language model work was stronger language ability, especially language generation. That ability was quickly put to use inside JD.com; to put it bluntly, product copywriting kept getting better.

The team's original large-model work includes K-PLUG, a model with one billion parameters; the K stands for knowledge. It has been developed and promoted since 2019 and matured in 2021.

After all, JD.com is a company that is good at using technology to drive its retail business. On top of the basic capabilities of JD's AI application platform, applications such as content review, search-by-photo, and product marketing copy generation emerged as the need arose.

For example, in JD Mall the workload of product marketing copywriting is heavy, so copy generation is a necessity. It already covers third-level product categories (for example, apparel, women's clothing, dresses), reaching more than 3,000 of them.

Taking stock of the overall workload, the K-PLUG model has cumulatively generated more than 3 billion words, directly bringing in at least 300 million yuan in revenue.

I learned from Dr. Wu and Dr. Zhang on the team that the copywriting scenario has an interesting aspect: generated copy must still pass manual review, and the pass rate works like a report card. The current score is 95 out of 100, because the pass rate has exceeded 95%.

For large models, the normal state of industrial scenarios is "demanding".

The team found that many industrial applications set extremely high requirements on the faithfulness and reliability of generated content. Marketing copy cannot simply pile up words of praise; praise that is actually true matters most.

Copy for everyday refrigerators talks about green, energy-saving cooling, but that does not apply to luxury refrigerators, whose selling point is not energy efficiency.

In the era of traditional language models, such phrases would likely be pasted on anyway. For merchants along the refrigerator supply chain, it is unacceptable to force non-existent "highlights" onto a product in name only.

He Xiaodong and the Yanxi team do not commit to a single technical route; behind the large model lies a great deal of experimental work, because innovation itself involves multiple attempts.

The team's original large-model work also includes a multimodal text generation model. In other words, these two existing types of large models will be important components of JD's future industrial large models.

The team's large-model roadmap is about advancing not just individual scenarios but whole industries.

So, what is the team's current focus, and what is its future vision?

The current focus is AGI, and the first step is to build a large general-purpose language model.

The second step is to build a multimodal large model (at this step, a decision must be made about the level at which to do multimodal fusion).

Dr. He Xiaodong said that next, the team will start with text-to-image technology.

"Text-to-image will be a very good application to pull the research forward," Dr. He Xiaodong said. "Although this is a scientific problem, we still want an application to provide that pull."

This is also a very pragmatic approach. For Dr. He, industrial deployment has always been a relentless pursuit.

The third step: as general intelligence advances, beyond the key multimodal technology, digital intelligence will move into the physical world. Whether it is a robotic arm, a robot, or a self-driving car, endowing a machine in the physical world with general intelligence would be a giant leap.

In the future, it will be as if everyone has graduated from one of the four houses of Hogwarts, Harry Potter's alma mater. Does that sound frightening?

At a meeting at Yale University in the United States in 2017, Dr. He Xiaodong had an exciting exchange with the Boston Dynamics robotics team.

Dr. He Xiaodong asked: what would happen if multimodal cognitive intelligence were installed in a robot dog?

For example: go to the shop next door and buy me a bottle of Coke. That is an easy task for a human child but a hard one for a robot dog, requiring orientation and recognition in complex environments plus "skills" such as reasoning, arithmetic, and dialogue.

Can it get into the store?

Does it recognize Coke?

What if it comes back with chewing gum instead?

The robot dog might even rationalize its "fancy mistakes": bold in action, abundant in error.

Dr. He Xiaodong's view is that, compared with the earlier stage of perceptual intelligence, cognitive intelligence faces a steeper learning curve, that is, a more difficult learning process.

The more difficult it is, the harder it is to predict when humanity will make it through this "mountain road".

Where does the difficulty lie?

Once you reach the level of cognitive intelligence, learning itself becomes hard. At the level of perceptual intelligence, you can tell the computer clearly: this is a mistake, correct it. Labeling data is publishing the answer key, and the trial-and-error mechanism is clear.

With cognitive intelligence, that no longer works.

People often say, "There are a thousand Hamlets in a thousand people's hearts." At the level of cognitive intelligence the situation becomes subtle and complicated: AI must grasp the complexity of affairs and the breadth of meaning. Everyone reads a painting differently, and descriptions from every angle may all be correct, so how do you design the training?

We have run into this problem, and the American company OpenAI must have run into it too. Human feedback is a very important technology: humans may only be able to give rather coarse feedback, and find it hard to provide very detailed annotations.
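One common way to turn such coarse feedback into a training signal is to ask annotators only which of two answers is better, then fit a reward model to those pairwise preferences (a Bradley-Terry style objective). The sketch below is illustrative; the encoder dimension and the tiny scorer are assumptions of mine, and this is not a claim about OpenAI's or JD's exact pipeline.

```python
# Illustrative sketch of learning from coarse human feedback: annotators only say
# "answer A is better than answer B", and a reward model is trained on that pair.
import torch
import torch.nn as nn
import torch.nn.functional as F

reward_model = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 1))

def preference_loss(chosen_repr: torch.Tensor, rejected_repr: torch.Tensor) -> torch.Tensor:
    # chosen_repr / rejected_repr: (batch, 768) encodings of the preferred and
    # dispreferred responses (from some frozen encoder, assumed here).
    r_chosen = reward_model(chosen_repr).squeeze(-1)
    r_rejected = reward_model(rejected_repr).squeeze(-1)
    # Maximize the score margin of the preferred answer: -log sigmoid(r_chosen - r_rejected)
    return -F.logsigmoid(r_chosen - r_rejected).mean()

loss = preference_loss(torch.randn(4, 768), torch.randn(4, 768))
loss.backward()  # gradients flow only into the small reward model
print(float(loss))
```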

Lately, many people have been pessimistic about the endless growth in compute, data, and parameters required by large models, worrying that a new round of technological monopoly may form.

Small and medium-sized enterprises, already stretched thin, cannot build a world-leading large model from scratch; what they want is to "use" one. On this point, Dr. He offered an optimistic judgment.

Dr. He described two steps.

The first step is a tall one, hard to climb.

At this level, someone must build a general-purpose large model with strong general knowledge, which is extremely difficult and expensive.

Once the large model has strong capabilities in information compression, logical judgment, and reasoning, the next step becomes much lower.

The technical principle behind lowering the "threshold" is that once the large model is capable enough, the next step is only "fine-tuning", and the compute cost drops accordingly.
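A minimal sketch of why that second step is cheap, assuming a parameter-efficient method such as LoRA-style low-rank adapters on a generic linear layer: the expensive pretrained weights stay frozen, and only a small number of new parameters are trained. The layer size and rank are assumptions for the example, not a description of any particular JD model.

```python
# Illustrative sketch: freeze the pretrained weight and learn only a small
# low-rank update (LoRA-style), so fine-tuning needs far less compute and memory.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # the expensive part stays frozen
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.lora_a.t() @ self.lora_b.t())

layer = LoRALinear(nn.Linear(4096, 4096))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")  # a tiny fraction of the frozen base
```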

This is where the benefits to industry show up.

Enterprises in low-margin industries and SMEs low in the supply chain get the chance to use "large models". In this way the digital and technological divide is not widened; instead, inclusive value is created.


Start with a question: "What time today will the mobile phone I ordered yesterday arrive?"

From a technical point of view, this question is short and its intent is clear.

Unfortunately, ChatGPT cannot answer it.

Because the answer is not in public information.

To answer it, ChatGPT would need to know where the order was placed and have access to the e-commerce business systems: order placement, order management, warehousing, and logistics.

There is no doubt that a relatively independent "territory" has its own unique scenarios and data.

And there are undoubtedly thousands of such "territories".

At JD.com, it is far from enough to rely on technical results like "good lab metrics" or "first place in a competition".

Because JD.com holds the shopping experience to a high standard: human customer service is not allowed to provide poor service, and robots even less so. The journey from technology to service therefore has to be verified very strictly inside JD.com, and the verification logic is direct comparison with human service.

If the "service level" is poor, the technology is finished; it cannot be used.

Here are three examples to convey these "domain-specific problems".

First, JD's intelligent customer service has a metric called the "first-sentence hang-up rate". It is easy to understand: when the voice on the phone greets you with a stiff, unmistakably robotic accent, you not only distrust it, you want to hang up.

People get hung up on too, but the hang-up rate of intelligent customer service must come close to that of human customer service.

When I schedule delivery of a large home appliance and hear a cold robotic voice the moment I pick up, the call gets hung up instantly, and that genuinely delays things.

The second example is after-sales.

When a user arrives with an after-sales problem, the problem must be solved as fast as possible. At that moment the customer service does not need to "keep people around with sweet talk"; it needs to grasp the person's urgency quickly and offer a satisfactory solution.

Chatting at length or chatting casually is not a requirement for after-sales intelligent customer service.

To sum it up in technical language: human-machine dialogue usually has a clear purpose, and it must fully resolve the customer's needs across pre-sales and after-sales consultation, price protection, transactions, payment, delivery, and returns and exchanges.

The third example is the 400 hotline. When users call to complain, nobody drafts a speech and reads it aloud; they say whatever they want, think while talking, and stop whenever they please.

Can you understand half-finished sentences, inverted sentences, and talk that goes around in circles?

If I misspeak and then correct myself, can you follow?

If someone is talking nearby and a TV is playing, can you tell the voices apart?

These are the hard parts of voice interaction.

In this third example, interruptions in spoken conversation are common, but they used to be a technical difficulty. For instance, after the intelligent agent finishes speaking it is the human's turn, yet the human may still be thinking.

A Jewish proverb says, "Man thinks, God laughs."

How does the robot know the other party has finished speaking?

For example, is it enough to write a rule like "if the other party pauses for 2 seconds, they are done"? It is hard to write a rule that feels right to most people.

He Xiaodong and the Yanxi team solved this with a multimodal turn-taking decision model. The principle is to build a dynamic decision model over multimodal signals such as the voice signal, pause duration, semantic completeness, and intonation, to judge whether the person has finished speaking or is still thinking, and to wait until they have finished before responding.

There is nothing more respectful and polite than that.
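A minimal sketch of what such a turn-taking decision model could look like, assuming just three hand-crafted cues (pause length, a semantic-completeness score, an intonation slope) fused by a tiny classifier. The features, architecture, and example values are my own assumptions for illustration; the team's actual model is not described in that much detail here.

```python
# Illustrative sketch of an end-of-turn decision model: fuse a few multimodal cues
# into one "has the caller finished speaking?" probability.
import torch
import torch.nn as nn

class EndOfTurnClassifier(nn.Module):
    def __init__(self, n_features: int = 3, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, 1), nn.Sigmoid()
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, 3) = [pause_seconds, semantic_completeness, pitch_slope]
        return self.net(features).squeeze(-1)

clf = EndOfTurnClassifier()
cues = torch.tensor([[1.8, 0.95, -0.6],    # long pause, complete sentence, falling pitch
                     [0.4, 0.30,  0.2]])   # short pause, fragment: probably still thinking
print(clf(cues))  # untrained probabilities; in practice trained on labeled dialogues
```

In deployment, a model like this would run continuously on the live call, so the agent answers only after the decision crosses a confidence threshold rather than after a fixed 2-second silence.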

Most people think customer service is just a dialogue robot backed by a large model with strong language and writing skills, but that is not the case.

JD needs multimodal large models.

At JD there are 400-hotline calls (voice), product photos (images), installation guides (video), and glowing reviews (text).

"Modality" is an academic term, and its more precise, older roots are tied to "signal". Simply put, different types of data are different "modalities".

This multimodal information requires multimodal large models to process.

So there is no need to wonder what kind of large model will grow at JD.com: multimodality is the technical route that matches the needs of its business scenarios, and by extension the industries closest to JD.com, such as retail and finance.

JD Technology's Intelligent Service and Product Department exists because JD's growing customer service business needed a dedicated technical team to take on all internal customer service work and solve it with "intelligence". Over the years, that technology and capability accumulated into a usable product platform: the Yanxi platform.

"Our platform (the JD Cloud Yanxi artificial intelligence application platform) has more than 40 independent subsystems, more than 3,000 intents, and 30 million high-quality question-and-answer knowledge points," said Dr. Wu from He Xiaodong's team.

JD.com's technical experience in full-scale intelligent services, coupled with years of practice across JD's retail, logistics, health, and other businesses, now delivers an average of tens of millions of intelligent interactions per day.

Almost in passing, the lightweight model tasks (information extraction, speech recognition, dialect speech recognition, keyword recognition, semantic recognition, sentiment analysis) have already been "won".

The growth of JD.com's business has brought "three highs": high demands on real scenarios, high demands on user experience, and high demands on large-scale service.

Therefore, internal R&D on the hard technical problems started long ago; content generation, complex semantic understanding and intent recognition, and multi-turn dialogue decision-making and reasoning are the focus.

Dr. He Xiaodong is a highly influential scientist in natural language processing and cross-modal intelligence. In the AI 2000 list of the world's most influential AI scholars, he was selected in three fields (NLP, Speech, IR) simultaneously, one of only sixty people worldwide.

He is a professor and an IEEE Fellow. Despite his strong academic background, he pays particular attention to the application prospects of technology. His team's technical accumulation rests on more than 200 academic papers, nearly 40,000 citations, and a real-world proving ground of 580 million users. For those capable of taking on the challenge, the greater the difficulty, the higher the technical level it demonstrates.

On May 6, 2023, the 12th Wu Wenjun Artificial Intelligence Science and Technology Award was officially announced. The JD Cloud Yanxi team won the award for technological progress for "key technologies of task-oriented intelligent dialogue interaction and their large-scale industrial application".

"It has generated more than 2 billion yuan in direct economic benefits and good social benefits, and promoted the rapid development of retail, logistics, finance, government affairs, and other related industries," the organizing committee commented.

At the same time, Dr. He Xiaodong won the Outstanding Contribution Award of the Wu Wenjun Artificial Intelligence Science and Technology Award.

"His patience was an encouragement."

"He is good at pointing out the direction, and in discussion he can always find the essence of the problem, helping us open our minds." That is how Dr. Wu and Dr. Fan, who work under Dr. He Xiaodong, described him.

The layout of JD.com's AI model work can be read from the names of its cutting-edge labs. Some researchers come from the Image-Text Lab, some from the Foundation Model and System Lab, some from the Cross-Modal Vision Generation Lab, and some will come from the future Machine Intelligence Lab. Exploration and discussion are encouraged here; issuing orders and rigid short-sightedness are not welcome.

Let us return to the first sentence of this article.

On the important question of the large-model technical route, whether Decoder-Only or Encoder-Decoder wins, no one can draw hasty conclusions at present.

Although GPT-4, which takes the Decoder-Only route, is in the lead for now, there is no guarantee that Google will not stage a comeback one day and write a grand chapter of "Google vs. Microsoft: A History of the Great AI Large-Model Reversal".


He Xiaodong and the Yanxi team understand the three conditions for developing industrial large models this way:

First, understand the business logic, and understand it deeply.

Vertical scenarios have their own barriers; understand the business, understand the industry, and work through them step by step.

Second, be able to touch the business. Only by operating the business do you have data, which you then feed to the large model to develop unique capabilities.

Third, keep the data flywheel turning, in a cycle of feedback and optimization.

These three points are both the essence and the constraint. The commanding heights of large models are a contest among the strong, and the respective competitive advantages of industrial and general-purpose large models come from here.

Grasping the cognitive laws of an industry has never been easy: however much you understand today is a measure of how hard the road was yesterday.

Grow from mistake after mistake, and channel all that experience into understanding and correct results.

Every transition follows certain laws, including the shift from the consumer internet to the industrial internet. Technology companies like JD.com, companies with supply chain thinking, had advantages in those years, but that did not guarantee a steady victory.

People inside JD hold a similar view:

Although we were born in retail, every time we enter a retail sub-segment we learn from scratch. Early on it was home appliances, then fresh food (7FRESH), then a big push into offline retail. Retail is an enormous scenario; every track is different and has its own solutions. Going deep into an industry cannot rely on imagination alone, and glib commentary is easy to produce but useless.

"Use general data to train the large model's common-sense ability to a sufficient level, then use accurate, small amounts of industry data, and finally deliver it to the industry in the form of an industrial large model," Dr. He Xiaodong said.

With both the technology and its significance in place, how do He Xiaodong and the Yanxi team understand the relationship between large models and the applications above them?

Data still holds an incomparably important position in the development of large models, which will undoubtedly strengthen the competitive advantage of industrial large models. The large model is by far humanity's most intelligent AI-native product, and it has the power to upend the existing ecosystem of the SaaS layer.

Among all technology companies, JD.com is strongest in the retail industry and the retail supply chain. It understands that the high dynamism of retail demands agile delivery, and that serving retail in the form of SaaS is the most appropriate way.

The common parts of industry needs can be distilled, the capabilities of the digital-intelligence supply chain can be replicated, and hundreds of scenarios can be empowered.

For example, agricultural products and e-commerce have grown increasingly intertwined: the search pattern "place of origin + specialty agricultural product" has grown for four consecutive years among the top search terms of JD app consumers.

Over the past five years, consumption of geographical-indication agricultural products has grown by an average of 36% per year, 4 percentage points higher than the overall growth rate for agricultural products; consumption of geographical-indication fresh produce has grown by an average of 41%, 7 percentage points higher than the overall rate for fresh produce.

Sales growth requires an efficient supply chain and advanced marketing methods, which is also one of the key directions for the future deployment of JD's industrial large model.

Those closest to the need have the greatest opportunity. With industrial large models behind it, JD.com has a chance to grow a leading business whose market value rivals Salesforce's.

In some ways, Salesforce is the company that defined SaaS; SaaS as we know it exists because of the American company Salesforce.

On top of an industrial large model, everyone can use the SaaS suite, not only to open stores and do business, but also to run sales and service well across industries. From goods to payments and logistics, from back-end customer service to front-end shopping-guide marketing, there are services covering the full user life cycle. Beyond offering its own SaaS products (modules), the platform must also allow third-party development. Only then can the ecosystem of the industrial large model truly take hold.

Industries develop as the social division of labor develops, and thousands of enterprises in vertical industries will certainly use large models in the future. Who will build them?

The opportunity is in sight, and what is to come can still be pursued.

- End -


Read more:

AI framework series:

1. The group of people who engage in deep learning frameworks are either lunatics or liars (1)

2. The group of people who engage in AI frameworks 丨 Liaoyuanhuo, Jia Yangqing (2)

3. Those who engage in AI frameworks (3): the fanatical AlphaFold and the silent Chinese scientists

4. The group of people who engage in AI framework (4): the prequel of AI framework, the past of big data system

Note: (3) and (4) have not been published yet, and will meet you in the form of book publishing.

Comic series:

1.  Interpretation of the Silicon Valley Venture Capital A16Z "Top 50" data company list

2.  AI algorithm is a brother, isn't AI operation and maintenance a brother?

3.  How did the big data's social arrogance come about?

4.  AI for Science, is it "science or not"?

5.  If you want to help mathematicians, how old is AI? 

6.  The person who called Wang Xinling turned out to be the magical smart lake warehouse

7.  It turns out that the knowledge map is a cash cow for "finding relationships"?

8.  Why can graph computing be able to positively push the wool of the black industry?

9.  AutoML: Saving up money to buy a "Shan Xia Robot"?

10.  AutoML : Your favorite hot pot base is automatically purchased by robots

11. Reinforcement learning: Artificial intelligence plays chess, take a step, how many steps can you see?

12.  Time-series database: good risk, almost did not squeeze into the high-end industrial manufacturing

13.  Active learning: artificial intelligence was actually PUA?

14.  Cloud Computing Serverless: An arrow piercing the clouds, thousands of troops will meet each other

15.  Data center network : data arrives on the battlefield in 5 nanoseconds

16. Being late is not terrible; what is terrible is that no one else is late: the data center network "rolls up" AI

17.  Is it joy or sorrow? AI actually helped us finish the Office work

AI large model and ChatGPT series:

18. ChatGPT fire, how to set up an AIGC company, and then make money?

19.  ChatGPT: Never bully liberal arts students

20.  How does ChatGPT learn by analogy? 

21.  Exclusive丨From the resignation of the great gods Alex Smola and Li Mu to the successful financing of AWS startups, look back at the evolution of the "underlying weapon" in the era of ChatGPT large-scale models

22.  Exclusive 丨 Former Meituan co-founder Wang Huiwen is "acquiring" the domestic AI framework OneFlow, and wants to add a new general from light years away

23.  Is it only a fictional story that the ChatGPT large model is used for criminal investigation?

24.  Game of Thrones of the Large Model "Economy on the Cloud"

25.  Deep chat丨Fourth Paradigm Chen Yuqiang: How to use AI large models to open up the trillion-scale traditional software market?

26. CloudWalk's large-scale model: what is the relationship between the large model and the AI ​​platform? Why build an industry model?

DPU chip series:

1.  Building a DPU chip, like a dream bubble? 丨 Fictional short stories

2.  Never invest in a DPU?

3.  How does Alibaba Cloud perform encryption calculations under the support of DPU?

4.  Oh CPU, don’t be tired, brother CIPU is helping you on the cloud

Long article series:

1.  I suspect that JD.com’s mysterious department Y has realized the truth about the smart supply chain

2. Supercomputers and artificial intelligence: supercomputers in big countries, unmanned pilots



Finally, a brief introduction to me, the editor-in-chief.

I'm Tan Jing, and I write about science, technology, and popular-science topics.

To discover the stories of our times,

I chase down the leading figures of technology and stake out technology companies.

Occasionally I write fiction and draw comics.

Life is short; don't take shortcuts.

Original writing is not easy; thank you for sharing.

If you want to keep reading my articles, just follow "Dear Data".


Reposted from: blog.csdn.net/weixin_39640818/article/details/130895966