Records of reading and studying good texts in June

See how ChatGPT has been trending in recent months

https://blog.csdn.net/csdnnews/article/details/130878125?spm=1000.2115.3001.5927

This is a good time to be an AI developer, so why not experiment more?

Peking University Professor Chen Zhong talks about the future of AI: approaching AGI and integrating into the metaverse, open source is the top priority!

https://blog.csdn.net/csdnnews/article/details/130838259?spm=1000.2115.3001.5927

When ChatGPT was released at the end of November last year, few could have imagined that a sweeping change would start from it.

With its powerful language understanding and generation capabilities, ChatGPT has exceeded 100 million monthly active users within two months of its launch, attracting widespread attention from industry and academia. The large-scale model technology represented by ChatGPT is even considered to have opened the era of AI 2.0:

▶ Bill Gates: ChatGPT has great historical significance, no less than the birth of the Internet or personal computers;

▶ Microsoft CEO Satya Nadella: For knowledge workers, ChatGPT is a revolution;

▶ Sam Altman, founder of OpenAI: The multi-modal AI model is expected to become a technology platform after the mobile Internet;

▶ ……

At the same time, the sudden arrival of the AI 2.0 era has sparked many discussions in the industry: What kind of technological change will large-model technology trigger? How can the developers and enterprises caught up in it get on board smoothly? How far are we from real AGI (artificial general intelligence)?

"Maybe the people who did the pre-training large model at the beginning didn't have much confidence and didn't set the goal very high, but the results it emerged exceeded people's expectations, so it played a shocking role in the third wave of artificial intelligence .”

Intel announces a 1-trillion-parameter AI large model, planned for completion in 2024

Reportedly, the Intel Aurora genAI model will be built on two frameworks: NVIDIA's Megatron and Microsoft's DeepSpeed.

▶ Megatron: a framework for distributed training of large language models, optimized specifically for Transformers; it supports not only the data parallelism of traditional distributed training but also model parallelism.

▶ DeepSpeed: focused on optimizing the training of large-scale deep learning models; by improving scale, speed, cost, and usability, it unlocks the training of models with hundreds of billions of parameters and greatly advances large-model training (a toy sketch of the two parallelism styles follows below).
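
To make the distinction concrete, here is a toy numpy sketch of the two styles. It illustrates the idea only and does not use the Megatron or DeepSpeed APIs:

```python
import numpy as np

# Conceptual illustration of data parallelism vs model parallelism,
# using a single matrix multiply standing in for one network layer.

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))   # a batch of 8 examples, 4 features each
W = rng.normal(size=(4, 6))   # the weight matrix of one layer

# Data parallelism: each "worker" holds a full copy of W and processes
# a shard of the batch; outputs (or gradients) are combined afterwards.
shards = np.split(X, 2)                            # two workers
data_parallel = np.vstack([s @ W for s in shards])

# Model parallelism: each "worker" holds a slice of W (here, half the
# output columns) and processes the full batch; outputs are concatenated.
W_slices = np.split(W, 2, axis=1)
model_parallel = np.hstack([X @ w for w in W_slices])

# Both strategies reproduce the single-device result.
assert np.allclose(data_parallel, X @ W)
assert np.allclose(model_parallel, X @ W)
```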

In addition to these two frameworks, the Aurora genAI model will be powered by the Aurora supercomputer, which Intel designed for Argonne National Laboratory and which has finally taken shape after repeated delays.

According to public information, the Aurora supercomputer is powered by Intel Xeon CPU Max and GPU Max series chips, with a total of 10,624 nodes, 63,744 Ponte Vecchio GPUs, 21,248 Sapphire Rapids Xeon CPUs, 1,024 Distributed Asynchronous Object Storage (DAOS) nodes, and 10.9 PB of DDR5 Optane persistent memory.

New job | Using GPT to sell products for an electronics factory

https://t.cj.sina.com.cn/articles/view/6286736254/176b7fb7e01901df3u?sudaref=www.ruanyifeng.com&display=0&retcode=0

Remarks

No more The Godfather, no more The Wizard of Oz, just 15 seconds of human stupidity.

– A Hollywood screenwriter, talking about his views on TikTok

Tech Enthusiast Weekly (Issue 160): The Dilemma of Middle-Aged Code Farmers

In 2008, after graduating from Harbin Institute of Technology, I came to Shanghai with a classmate from my dormitory. After working at Shanda Games for a few years, he returned to his hometown of Guangzhou, and we rarely kept in touch.

Some time ago I needed to ask him something, so we chatted about how things are now. Both his undergraduate and master's degrees are in computer science; he currently works at a game company in Guangzhou and is still writing code. We are both 35, and I wanted to know what the market looks like for middle-aged code farmers in this age bracket.

He told me a few things. First, as everyone suspects, the overtime is heavy. Monday to Friday he basically leaves work at 10:00 pm. When a project launches or has a major update, it is two or three in the morning, sometimes all night. He works Saturdays as well.

His current company is rather stingy. Game companies do not pay R&D staff a very high base salary; a large part of income comes from project bonuses. Last year a project at his company went live, and just before launch the entire project team was disbanded, with people either dismissed or reassigned to other teams. The company does this to cut costs and pay out fewer bonuses. Many companies do this; there is no way around it, and employees are always in the weaker position.

Then there is what I, and many others, am curious about: a computer-science master's graduate from a 985 university (a top-tier Chinese university) with 12 years of work experience, how much does he earn? He told me his monthly salary is a bit over 30,000 yuan after tax, but he did not say exactly how much, and I did not ask about the bonus.

I am a freelancer and worry about next month's income, so I had assumed that working in a company would be relatively stable. He said that every worry a freelancer has, he has as an office worker too, including the worry of being laid off. Layoffs are a topic no middle-aged code farmer, indeed no middle-aged professional, can avoid. For professionals aged 35 to 40, if you have not reached middle management, your labor is very expensive, and it is a better deal for the company to "optimize" you out and hire fresh graduates: they have more stamina, are more obedient, and execute better. When many companies lay people off, the first candidates considered are middle-aged, lower-level employees. As a middle-aged person, if you do not normally work overtime and your performance review happens to be poor, you may be optimized out.

When we first graduated, many classmates and colleagues probably thought: write code for a few years, get good at it, then move into management. Later, some really did switch to management, but more changed careers and stopped being code farmers altogether, because as they aged they could not keep up physically. Moving into management is, after all, for a minority: there are too many monks and too little porridge, only a few positions exist, and some people are unsuited to management and prefer writing code. Even if you are promoted into management, going further up is harder still; in most cases you reach the middle level and the top remains out of reach. So middle managers face the same mid-life crisis described above.

There are now many code farmers in their 30s and 40s. The good news is that a small group, like my classmate, are still writing code. The bad news is that many companies are harsher on middle-aged coders: they cost more and are easily "optimized" out. That is the status quo.

Let me offer a few suggestions for young code farmers.

(1) Keep accumulating. Whether it is text, video, projects, or code, you must accumulate something. Outside your job there must be something you can build up slowly. In the first few years it may not pay off much, but you had better stick with it. I think accumulation is a very powerful force, more important than learning ability, because as you grow older your learning ability declines while the industry and its technology iterate quickly and new things keep appearing. Having to keep learning forever is very difficult.

(2) Make yourself hard to replace. Companies design elaborate processes and systems precisely so that every employee is replaceable: when someone leaves, a substitute can be found quickly and the company keeps running. The individual's strategy is the opposite of the company's: make it hard for the company to find someone to replace you. If replacing you would take a long time or a large cost, then you are indispensable.

(3) Keep an open mind and be good at listening. Everyone's knowledge is limited, the world is diverse, and every conversation is a collision of understandings. Many people are simply unwilling to accept other people's points of view and are very stubborn. I do not mean you should agree with others mindlessly, but that you should be willing to try or verify their views. This creates more opportunities for you; there is no way forward in bull-headedness. The status quo of most code farmers in China is not optimistic, and if you do not think about these things, the situation may become even less so.

Tech Enthusiast Weekly (Issue 109): The Value of Podcasts

https://www.ruanyifeng.com/blog/2020/05/weekly-issue-109.html

Spotify recently bought the exclusive rights to the Joe Rogan podcast for a reported $100 million.

"Podcast" is the Chinese transliteration of podcast, which refers to the Internet audio program of talk, mainly for users to listen to. Joe Rogan's program is one of the most influential podcasts in the United States. Each episode interviews a guest, and the two sit and talk. A single episode has been listened to by more than 10 million people.

Podcasts cost very little to produce; after all, how much can talking cost? The sky-high price of 100 million US dollars is unprecedented. It is hard to imagine an Internet talk show being worth that much money.

The lesson of this deal is that we may have greatly underestimated the potential of podcasting. It is a highly engaging medium, an underappreciated gold mine.

Compared with other media, the biggest feature of podcasts is that when you listen alone (especially with headphones), the host is speaking right into your ears; it is the medium physically closest to its audience. In real life, only your closest friends and relatives speak to you one-on-one like that. Podcasts therefore easily create intimacy with listeners and win long-term loyal subscribers.

This in turn demands that the host be very sincere; otherwise the in-your-ear effect is lost and listeners are easily put off. Another advantage of podcasts is that they can be consumed while walking, driving, or lying down, so the occasions for listening far exceed those for video.

In my opinion, podcasting may be the next hot spot of the Chinese Internet. The current domestic hotspot is livestream shopping, which is really the Internet version of TV shopping, and its audience will always be limited: how many people want to watch promotional programs? A well-produced talk show would have a much larger audience.

Some will say podcasts are not feasible in China because domestic content regulation is strict and talk shows cannot be made. But precisely because supply is insufficient, domestic audiences have an especially large demand for content. In the past, Duzhe ("Reader") magazine could sell 10 million copies of a single issue, which illustrates the point. At present there are very few good talk shows in China, which is abnormal for a population of 1.4 billion. There are many podcastable topics with guaranteed audiences: relationships, life reflections, family life, football, movies, financial (or real-estate/stock/lottery) analysis, and so on.

Not everyone can do a podcast, though. Talk shows place especially high demands on the host, who must have warmth and life experience and speak fluently, accessibly, and engagingly. The young men and women who currently fill the livestreaming platforms cannot do podcasts.

Exclusive dialogue with the father of Python: the human brain is the ceiling of software development efficiency

https://blog.csdn.net/programmer_editor/article/details/127083989?ops_request_misc=&request_id=&biz_id=102&utm_term=%E5%AF%B9%E8%AF%9Dpython%E4%B9%8B%E7%88%B6&utm_medium=distribute.pc_search_result.none-task-blog-2allsobaiduweb~default-0-127083989.142v88control_2,239v2insert_chatgpt&spm=1018.2226.3001.4187

The origin of Python and its 30-year development history
Zou Xin: Python is the first programming language for many people. How did you start learning programming?

Guido: I first started learning programming in Amsterdam in 1974 and 1975. The first language I learned was ALGOL 60, and I learned several other languages later, but my favorite was Pascal, a very elegant language. Along the way I gradually learned what features a programming language should have and how each behaves on specific problems. For example, ALGOL 60 has no string type: if you wanted to define an identifier, you had to process strings in some magical way, and the magic worked differently on different input hardware. You know, we entered code on punched cards then, and every card machine was different. Pascal, by contrast, is very good at handling strings. I found Pascal very elegant, a language that helps programmers program efficiently.

Zou Xin: In the early 1990s, you created Python as a personal interest project during the Christmas holidays. At that time, did you ever think that Python would shine so brilliantly one day? How do you see Python today?

Guido: At that time I had a task to complete at work: writing a large number of small tools with very similar functions in C. I was annoyed by the repetitiveness of writing very similar C code and thought it would be nice to have a language better than C so I could get things done quickly. Eventually I simply invented Python myself. At the time I just wanted a "glue language" to stick already-written C applets together into new tools.

I actually had no expectations for Python's later development. I thought it would be just like the many failed projects I had done; there was nothing special about it. Python's early growth was in fact very slow. The reason it later won people over is that in the late 1990s many scientists starting to do scientific computing used it, as I had, as a "glue language" to call code originally written in Fortran or C++. For those scientists, Python is a very handy tool.

Comparing current Python with the earliest version, you will find the language has hardly changed. The class declaration has changed slightly; print was a statement from the beginning through Python 2 and only became a function in Python 3; functions did not have keyword arguments at the beginning but gained them later; and there are the double-underscore magic methods (dunder/magic methods), and so on. But overall, current Python is not very different from the original; it is very close in syntax, semantics, and essence.
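
For readers newer to Python, a small illustration of the changes Guido lists, written in current Python 3 syntax (the old Python 2 form is shown in a comment):

```python
# print: a statement through Python 2, a function since Python 3.
# Python 2:   print "hello"
print("hello")

# Keyword arguments came later in Python's history; modern Python even
# supports keyword-only parameters (note the bare *):
def greet(name, *, excited=False):
    return f"Hello, {name}{'!' if excited else '.'}"

print(greet("world", excited=True))

# Double-underscore ("dunder") magic methods let user classes hook into
# built-in syntax, e.g. equality comparison:
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)

print(Point(1, 2) == Point(1, 2))  # True
```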

Zou Xin: People who first encounter Python are often curious about the mandatory code indentation. If you did it all over again, would you drop the indentation requirement?

Guido: Code indentation was not actually invented by me; colleagues at the time inspired me. The reason Python requires indentation is that the code editors of 30 years ago could not indent code well, so I wanted to force programmers to format code correctly, ensuring that the programmer's visual understanding of the code stays consistent with the compiler's interpretation of it. This is actually very important. A few years ago, Apple had a very serious security vulnerability caused by a statement whose placement did not match the if-else logic the programmer actually intended, as shown in Figure 1. Admittedly, strictly requiring indentation is a bit exaggerated; using curly braces instead would not have been impossible.

[Figure 1: the misleadingly indented code behind Apple's "goto fail" SSL vulnerability]
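
To illustrate Guido's point, a short sketch: in a brace language, the indentation a reader sees can disagree with the control flow the compiler executes (the shape of Apple's bug, widely known as "goto fail", is paraphrased in the comments), while in Python the two cannot diverge:

```python
# In a brace language, indentation can lie. Apple's bug was roughly
# this shape (C-style pseudocode in comments):
#
#     if (err != 0)
#         goto fail;
#         goto fail;   /* indented like the if-body, but always runs */
#
# In Python the equivalent misreading is impossible: the block structure
# the reader sees is exactly the structure the interpreter executes.

def check(err):
    if err != 0:
        return "fail"
        # any statement indented here is inside the if-block, full stop
    return "ok"

print(check(0))  # ok
print(check(1))  # fail
```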

Zou Xin: What is the core of software development? Is it using "tools that create tools" to greatly improve efficiency?

Guido: I think the really cool thing about programming is building new software on top of earlier software, so that programmers writing code today can borrow from what came before. For example, suppose I need to write a Python program that performs sentiment analysis on tweets. Though I have never done anything like it, I am sure it could be done in an afternoon with Google and Copilot, because I am surely not the first to do it. You can see that the way we build software is very different from 20 or 30 years ago.

Software is actually composed of multiple layers, a bit like biological evolution. On the one hand, the way DNA is encoded has not changed for a billion years, just like bits, bytes, pointers, and memory in computer systems. Hundreds of millions of years ago, the algae cells of the Cryptozoic era already had their own DNA codes, and an algal cell was like a small computer. On the other hand, the cells found in those algae fossils do not differ much morphologically from modern cells such as human tissue cells. Different kinds of cells form different organs, organs eventually form human beings, and humans form human society. The same is true of software. I think the most important event in software development was connecting computers through networks, so that large, multi-level complex systems could be built from simple, small structures. There is high-level structure within simplicity, and it carries unimaginable flexibility and possibility.

Underneath, software is really pointers, memory, and computation, and it is very important for a programmer to understand these lowest-level concepts, at least to be aware of them while learning to program. It is like arithmetic: you can use a calculator to get a result, but if you do not understand the principles of arithmetic and accidentally press a wrong button, you will get a wrong final answer without even knowing anything is wrong. If you understand basic arithmetic, you can roughly judge whether the calculator's display is correct. If you know a little more mathematics, you can split a problem arithmetic cannot solve directly into several small arithmetic problems, solve those, and compute the final answer. So instead of treating the computer as a magic box, we need to understand how it works. This may not make you the most powerful programmer, but you will understand the weaknesses and constraints of software and computers better than programmers who lack the basic concepts, and so use software tools better and avoid stupid mistakes.

"I'm not a futurist, I'm more focused on the present"

Dialogue with Monty, the father of MySQL: Code should be written until 100 years old

Writing code until 100

There seems to be an age wall in the programmer world, for example the much-discussed 35-year-old crisis: either move into management or leave, which brings many developers anxiety and pressure. Some say that at 35 you are unlikely to lose your job even without a management position, but if you are still writing code on the front line at 45, you may face unemployment.

Monty believes there is a big mistake in how programmers' careers develop. As they grow older, many programmers choose to move into management, but he believes development managers in enterprises are easily replaced, while becoming an excellent programmer is harder and contributes more value to the company. MariaDB is willing to give programmers more responsibility and higher salaries so that they can keep developing along the technical track.

In the MariaDB community there is a developer over 80 who is still writing code, which should inspire many developers. But Monty points out that most startup developers in Finland are young; MariaDB may be an exception, since in the MariaDB server team 80% of developers are around 40 or older. "It is a great thing to be able to retain these experienced older people."

Monty is deeply respected and loved by developers around the world, not only because of his remarkable achievements in the field of open source and database, but also because of his love and enthusiasm for open source and technology, which has influenced and changed many people. During the interview, it was easy to be infected by his enthusiasm for technology. When talking about MariaDB performance and response speed, he said with a very confident smile: "as fast as a rocket."

There are many beating notes in everyone's life. For Monty, the sound of keys tapping out code is perhaps that beat, and also his "best thing". He says writing code is a rare gift, and he intends to keep writing until he is 100.

AI is revolutionizing software engineering

In 2021, ASE presented its "Most Influential Paper" award to a classic paper published at ASE 2007 by Peking University chair professor Xie Tao and his then doctoral student Suresh Thummalapenta. ASE, ICSE, and ESEC/FSE rank as the three top international software engineering conferences, and Xie Tao is among the first Chinese authors ever to receive the ASE Most Influential Paper award.

In this paper, Xie Tao and his student proposed a method of using machine learning to improve software development efficiency, combining large-scale code search, machine learning, and data mining. This makes him one of the earliest scholars to work on intelligent software engineering.

After 14 years of studying and teaching overseas, Xie Tao took the opportunity to return to Peking University as a chair professor to continue high-level research. As one of the earliest scholars to work on intelligent software engineering, he has a deep understanding of AI and its applications. Of ChatGPT he said: "ChatGPT is great progress in models. It can keep the conversation going and let users clearly express what they really want; merely clarifying the requirements takes the effectiveness and usability of AI a big step forward." He believes China will be able to build its own ChatGPT in the future.

In the past two or three years, large models have drawn great attention in intelligent software engineering. Copilot's stunning debut showed everyone the potential of large models in code generation, code review, defect detection, and more. Add ChatGPT's interactive dialogue, and what new directions and opportunities does AI technology offer the development of basic software? What are the prospects for China's basic software industry? Below we present the in-depth dialogue between CSDN founder & chairman Jiang Tao and Peking University chair professor Xie Tao.

Jiang Tao: Artificial intelligence is not only a helper for programmers; it also eliminates repetitive work for some people, and some data analysis work may gradually be taken over by machines, though engineers building basic software need not worry too much. What is basic software, and what is the state of its domestic development?

Xie Tao: Definitions of basic software vary across the industry, but the basic consensus is that it includes operating systems, programming languages, compilers, database management systems, office software, and browsers. Basic software in the broad sense also includes development tools, testing and operations tools, and so on. Because office software is so widely used and occupies a key position in daily work, it has become part of the base and is now considered an important part of basic software. In addition, some industrial software underpins industrial applications and is counted as basic software for specific industries.

Many sub-fields of domestic basic software face choke points, such as operating systems and general-purpose industrial software like MATLAB. You might ask: operating systems such as Linux and Android are open source, so where is the choke point? A large part of the reason is ecosystem constraints, which mean the right to speak does not rest with us.

Domestic operating systems have developed well in recent decades, with consistent national support. But the "heart" (kernel) of today's mainstream domestic operating systems is still Linux. Chinese programmers and companies contribute a large share to the Linux kernel, and the contribution of the domestic giant Huawei now even ranks first; yet coming back to the key phrase above, the right to speak, the situation is still many contributors but few core decision-makers. However, China does have a certain say in emerging open source fields such as big data, AI, and cloud native. China's new generation of technical forces is catching up: enterprises often cannot wait for others to lead, so they invest in and cultivate strong core R&D talent, and thus achieve results in emerging fields.

Jiang Tao: How do you define core basic-software talent?

Xie Tao: Here is an example. Before returning to Peking University, I taught in the computer science department at UIUC (the University of Illinois at Urbana-Champaign). The department has a PhD graduate from 2005 named Chris Lattner, whose doctoral advisor Vikram Adve is a former colleague of mine. The LLVM compiler infrastructure they launched is, alongside GCC, one of the three major compilers, and Chris Lattner later became known as the father of LLVM. His advisor told me that Chris built the LLVM compiler infrastructure during his PhD, and after graduating he was set on going to industry. His evident ability won him many offers, and he finally joined Apple. Chris's thinking at the time was: "Whoever allows and supports me to keep developing LLVM, that is where I will go!" Apple fully supported the work, so LLVM developed strong momentum. Here you can see the characteristics of core basic-software talent.

Jiang Tao: Artificial intelligence is now applied very widely. Water can carry a boat or overturn it: AI has become a helper for some and a sharp weapon in the hands of criminals. CCTV once revealed that a fraud ring used AI to make robocalls, placing 17 million calls; in the end more than 800,000 people were taken in and a total of 180 million yuan was defrauded. The victims could not tell from the voice on the phone that it was a robot. Is there AI technology that can help identify robo-harassment calls and protect ordinary people?

Xie Tao: Big data and AI are increasingly used by criminals such as fraud rings, causing many incidents of harm to users, and absolute prevention is difficult. As with offense and defense in security, we can only raise the threshold of fraud to achieve a degree of prevention and control. At present AI has made the threshold for launching attacks and scams very low, producing highly realistic synthetic voices and highly realistic conversation content. It is genuinely hard for ordinary users to guard against these AI fraud methods.

Let me share my own anti-fraud experience as an example. A scammer once texted me asking for the phone number of an industry colleague. This way of asking is actually uncommon; most people do not just send a text asking for someone else's number. If the way something happens is unnatural and uncommon, there is likely a problem, so be careful.

The Challenges of Open Source AI

https://csdnnews.blog.csdn.net/article/details/131098681?spm=1000.2115.3001.5927

In the near future we will witness fundamental changes in how we interact, how labor is exchanged, and even how society is organized. Personalized AI entities (call them "Ghosts") promise to become personal beings for each of us and to connect us into a network of other AI systems around the world. These AI systems will provide many services; we can think of them as tools that extend our cognition, not just assistants. Businesses and organizations will likely have their own "Ghosts" to improve collaboration among members. Beyond the social aspect, associative memory networks with recurrent connections may give AI systems memory, and these individual "Ghosts" may even form unique identities. AI systems built on consensus algorithms may also emerge, leading to decentralized autonomous AI. While none of this has materialized yet, we can already envision some coming economic changes and trends.

Suddenly nothing is real, and the virtual world is so lifelike that it is genuinely worrying. I do not know how we are supposed to see the world anymore; who knows what is true and what is not.

– Hany Farid, professor at the University of California, on how easy AI is making deepfakes

At the end of the day, blogs, podcasts, and short videos are where a person expresses themselves, their way of saying "this is me" in digital form.

– "Will Artificial Intelligence Kill Blogging?" "

If you are original, you can sidestep the competition. Basically, if you're competing with someone else, it's because you're doing the same thing. If everyone does things differently, there can be less or no competition. So, don't imitate others.

– Naval, a famous American venture capitalist

One has to specialize in something to make money. I always tell my kids: You need to learn a certain skill and be better at it so that someone will pay you. Then you pay someone to help you do the things that you find boring or difficult.

– Hacker News reader

British scientists have studied what makes humans feel bored. They found that the most boring person in the world has these characteristics: their occupation is data entry, their hobbies are religion and watching TV, and they live in a small remote town.

– "Researchers find the most boring people in the world"

Will AI kill blogging?

26 Apr, 2023

It may feel to some that blogging is under threat by the likes of ChatGPT and other large language models. It’s so easy to generate decent-enough writing that many professional writers are quickly having to change the way they operate. On top of that, I’ve had to update Bear’s review process to catch an increasing deluge of well-written spam. Despite this, I’m not too concerned. AI generated content certainly threatens content marketing and the rest of their ilk, but that threat doesn’t hold true for personal blogging.

First off, the economic incentives don’t line up. No-one reads content marketing SEO goop for fun. However, reading an essay by a blogger, especially one you’re familiar with, about their personal experience is fundamentally different. We like to see into the experience of others, understand how they think, and develop (sometimes para-social) relationships with these writers.

While fighting spam on Bear, the easiest way for me to spot generated content was to check whether the blog itself was cohesive and “taken care of”. But the biggest tell was whether they were advertising anything (which is pretty obvious).

Furthermore, the creative process behind crafting a blog is a meaningful and rewarding experience for the blogger. Through this process, they can reflect on their own thoughts, learn from their experiences, and engage with their audience in a way that is genuinely human.

Ultimately, blogs are a person’s place to express themselves. Their planting of a flag in the digital realm. It is their shout into the void, their way of saying “this is me”.

Thoughts on Computer Science

I often recall watching the tide: the whole town scrambling to gaze out over the river. The surging water seemed to drain the sea dry, amid the roar of ten thousand drums.
The tide-riders stood facing the crest of the waves, red flags in hand and never wetted. Since then I have watched it again in dreams, and woken with my heart still pounding.

Every few years the IT industry has a wave or a bubble, and a new wave has already arrived. Hide? How far can you run? Better to be a tide-rider and stand bravely at the crest of the wave.

Why AI Will Save the World

Will AI put us out of work? Will AI "kill" humans? Whenever an important new technology emerges, people worry about the threats it brings. Against this backdrop, the author argues that although the risks of AI are real, so are enormously consequential opportunities.

First, let's briefly explain what artificial intelligence (AI) is. AI is the application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in a human-like manner. AI is a computer program like any other: it runs, accepts input, processes it, and generates output. AI's output is useful in a wide range of fields, including programming, medicine, law, and the creative arts. And like other technologies, AI is owned and controlled by people.

As for what AI is not: in short, AI is not killer software or a robot that suddenly comes to life and decides to murder humans or otherwise destroy everything, nothing like what you see in the movies.

A simple description of what AI can be: a way to make everything we care about better.

Why panic?

In stark contrast to the above-mentioned positive views, the public is currently full of fear and paranoia about AI.

We often hear that AI will kill us all in one way or another, ruin our society, take away our jobs, create massive inequality, and help evil.

How to explain this divergence from near-utopia to horrific dystopia?

Historically, every major new technology, from the light bulb to the car, the radio, and the Internet, has sparked a moral panic in which people believed it would destroy the world, society, or both. Over the decades we have watched this pessimism play out time and time again. In fact, the current AI scare is not even the first.

Granted, many new technologies do have bad consequences, but often the same technology can do us great good in other ways as well. A moral panic does imply some concern.

But moral panics are inherently irrational, amplifying legitimate concerns to hysterical levels, which ironically makes it harder to confront truly serious concerns.

So, is AI now facing a full-blown moral panic?

Many have exploited this moral panic to make calls for policy action, including new AI restrictions, regulations, and laws. These people have further fueled a moral panic by making extremely dramatic public statements about the dangers of AI, while they act like disinterested defenders of the public interest.

But are they really defenders of humanity?

Are they right or wrong?

Economists have observed a long-term pattern in such reform movements. Participants fall into two categories, "Baptists" and "smugglers" (bootleggers), borrowing from the American Prohibition of the 1920s:

"Baptist" refers to true believers in social reform who believed that alcohol was destroying the moral fabric of society and that new restrictions, regulations, and laws were needed to prevent social catastrophe.

Applied to the risks of AI, "Baptists" refer to the group of people who believe that AI will indeed bring disaster.

"Smugglers" are self-serving opportunists who use new restrictions, regulations, and laws to take advantage of their competitors to financially benefit.

During Prohibition in the United States, "smugglers" were the bootleggers who made huge profits selling illegal alcohol to Americans.

Applied to the risks of AI, "smugglers" can profit by erecting regulatory barriers that form a cartel of government-backed AI vendors, shielding them from competition with startups and open source.

It has been suggested that some people are both "Baptists" and "smugglers", especially those paid by universities, think tanks, activist groups, and media outlets to attack AI. If you are paid or funded to fuel the AI panic, you are a "smuggler".

The problem with "Smugglers" is that they will win. "Baptists" are nothing but naive ideologues and "smugglers" are operators, so often the result of a reform movement like this is that "smugglers" get what they want, like regulatory capture, competitive segregation, and the formation of cartel alliances, leaving only the "Baptists" wondering what had gone wrong with their push for social progress.

In fact, we lived through a shocking example not long ago: banking reform after the 2008 global financial crisis. The "Baptists" told us we needed new laws and regulations to break up the "too big to fail" banks and prevent another crisis. So the US Congress passed the Dodd-Frank Act of 2010, ostensibly the "Baptists'" wish, but in practice exploited by the "smugglers". The result: the banks that were too big to fail in 2008 are even bigger now.

So in practice, even when the "Baptists" are right, they are used by the "smugglers", who end up the ultimate beneficiaries.

Today, the development of AI regulation is repeating history.

However, establishing each actor's identity and questioning their motives is not enough. We must also weigh the arguments of the "Baptists" and the "smugglers" on their merits.

AI Risk #1: Will AI Kill Us All?

AI Risk #2: Will AI Destroy Our Society?

AI Risk #3: Will AI Take Our Jobs?

AI Risk #4: Will AI Cause Severe Inequalities?

AI Risk #5: Will AI Help Bad People Do Bad Things?

The development of AI began in the 1940s, around the same time as the advent of computers. The first scientific paper on neural networks (the architecture of today's AI) was published in 1943. In the past 80 years, entire generations of AI scientists were born, went to school, worked, and died without seeing the rewards we are getting today. Each of them is a legend.

Today more and more engineers are working to make AI a reality, many of them young; their grandparents or even great-grandparents may have been involved in its creation. Yet fear-mongering and doom-mongering portray them as reckless villains. They are not reckless villains; each of them is a hero. My company and I will support them as much as we can, and we support them and their work 100%.

Original link: https://blog.csdn.net/csdnnews/article/details/131218734

The brain will replace the developer's keyboard! Can humans and AI "go both ways"?

"New Programmer": This issue of Technology Radar has added the latest hotspot: ChatGPT (as shown in Figure 2). It's listed on the technology radar as "evaluation" rather than the more mature "experimental" level. What risks and challenges do you think it faces?

Kristan: Rather than risks and challenges, it is more that the world currently lacks enough practical experience to evaluate ChatGPT comprehensively. Humans have never before interacted conversationally with large language models, and this needs more time and observation. Our expert team has done many proofs of concept with ChatGPT, but use in daily production environments is still limited.

"New Programmer": In the technology radar, the domain-specific large language model is also listed as an "evaluation" level technology (as shown in Figure 3). Does it face the same ethical and legal issues as the general language model? For example, does it discriminate against certain social groups? Words like "doctor" and "programmer" are associated with men, or "nurse" and "homemaker" with women.

Kristan: I think any language model, any code, has the potential for bias. Some biases may be conscious, but in most cases they are unconscious. In your example, people unconsciously associate a role with a certain gender, and that bias is naturally built into the model.

This is a problem that must first be addressed in the text. We need people to question those unconscious biases and to analyze and process biased text, because such biases are often hard to detect, and different people carry different unconscious biases. Teams with diverse perspectives are therefore more likely to identify and address them.

Is "GPT-N" necessarily stronger? Experts warn: When human data runs out, AI models may become more and more stupid

https://m.thepaper.cn/newsDetail_forward_23467960

The researchers reached this conclusion by studying the probability distributions of text-to-text and image-to-image generative AI models:

"Models are irreversibly flawed when they are trained using content generated by (other) models."

That is, "Model Collapse".

What is model collapse?

Essentially, "model collapse" occurs when the data generated by an AI large model ends up polluting the training set of subsequent models.

"Model collapse refers to a regressive learning process in which, over time, the model begins to forget impossible events because the model is poisoned by its own projection of reality," the paper reads.

A hypothetical scenario helps in understanding the problem. Suppose a machine learning (ML) model is trained on a dataset of 100 cat pictures: 10 cats with blue fur and 90 with yellow fur. The model learns that yellow cats are more common, but it also renders blue cats somewhat more yellowish than they really are, and when asked to generate new data it returns some results resembling "green cats". Over successive training generations, the initial signature of blue fur fades, shifting from green toward yellow. This gradual distortion and eventual loss of minority data features is "model collapse".
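
A minimal simulation of this feedback loop (an illustration of the idea only, not the paper's experimental setup): each "generation" is fitted to samples drawn from the previous generation's model, and the spread of the distribution drifts as rare tail events are under-sampled:

```python
import numpy as np

# Each generation fits a Gaussian to samples drawn from the previous
# generation's model, then the next generation trains on *its* output.

rng = np.random.default_rng(42)
mu, sigma = 0.0, 1.0          # the "real" data distribution
for gen in range(1, 11):
    samples = rng.normal(mu, sigma, size=200)   # data from current model
    mu, sigma = samples.mean(), samples.std()   # refit the next model
    print(f"generation {gen:2d}: mu={mu:+.3f}  sigma={sigma:.3f}")

# Over many generations sigma drifts (typically downward): the chain
# progressively forgets the tails of the original distribution.
```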

CORDIC Algorithm: An Efficient Method for Computing Trigonometric Function Values

http://www.longluo.me/blog/2023/06/07/CORDIC-algorithm/
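
The linked post covers the details; as a quick taste, here is a minimal Python sketch of CORDIC in rotation mode, computing sine and cosine from adds, shifts, and a small arctangent table (the property that makes the algorithm attractive on hardware without multipliers):

```python
import math

N = 32
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]

# Each micro-rotation scales the vector by sqrt(1 + 2^-2i); pre-divide
# by the accumulated gain so the final vector has unit length.
K = 1.0
for i in range(N):
    K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_sin_cos(theta):
    """Return (sin(theta), cos(theta)) for theta in roughly [-pi/2, pi/2]."""
    x, y, z = K, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0 else -1.0          # rotate toward z == 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return y, x

s, c = cordic_sin_cos(math.pi / 6)
print(s, c)   # approx 0.5 and 0.866
```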

How to make a picture QR code

https://stable-diffusion-art.com/qr-code/

The world works day and night to make you someone else; if you want to be yourself, it means fighting the hardest battle there is.

– E. E. Cummings (EE Cummings), a famous American poet in the 20th century

People relied on machines, hoping they would bring more freedom, but that only allowed those who own the machines to enslave them.

– Frank Herbert, author of the science fiction novel Dune

Visual Information Theory

https://colah.github.io/posts/2015-09-Visual-Information/
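
The linked post builds visual intuition for entropy, cross-entropy, and KL divergence; as a quick companion sketch (not from the post itself), the bare arithmetic for a small example distribution looks like this:

```python
import math

def entropy(p):
    """H(p) = sum of -p * log2(p): average length of an optimal code."""
    return sum(-pi * math.log2(pi) for pi in p if pi > 0)

def cross_entropy(p, q):
    """Average length when events follow p but the code was built for q."""
    return sum(-pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.125, 0.125]
q = [0.25, 0.25, 0.25, 0.25]
print(entropy(p))                        # 1.75 bits
print(cross_entropy(p, q))               # 2.0 bits
print(cross_entropy(p, q) - entropy(p))  # KL divergence: 0.25 bits
```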

Prompt Engineering Guide

https://www.promptingguide.ai/zh
Prompt engineering is a relatively new discipline focused on developing and optimizing prompts, helping users apply large language models (LLMs) to a wide variety of scenarios and fields of study. Prompt-engineering skills help users better understand the capabilities and limitations of LLMs.

Researchers use prompt engineering to improve LLMs' ability to handle complex tasks such as question answering and arithmetic reasoning (a couple of illustrative prompt shapes follow below). Developers use it to design robust prompting techniques that integrate efficiently with LLMs and other tools.
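
As an illustration of what such prompts look like, here is a small sketch; the example texts are invented for demonstration and are not taken from the guide:

```python
# Zero-shot: the task is stated directly, with no examples.
zero_shot = (
    "Classify the sentiment of this review as positive or negative.\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)

# Few-shot: a handful of worked examples precede the real query.
few_shot = """Classify the sentiment of each review.

Review: Absolutely love it, works perfectly.
Sentiment: positive

Review: Broke within a week, very disappointed.
Sentiment: negative

Review: The battery died after two days.
Sentiment:"""

# For arithmetic reasoning, a chain-of-thought style prompt asks the
# model to show intermediate steps before the final answer.
cot = (
    "Q: A canteen had 23 apples, used 20, then bought 6 more. "
    "How many apples are there now?\n"
    "A: Let's think step by step."
)

print(few_shot)
```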

Prompt engineering is not just about designing and developing prompts. It covers a range of skills and techniques for interacting with and building on LLMs: it plays an important role in interfacing with and understanding these models, and users can apply it to improve LLM safety or to augment LLMs with domain knowledge and external tools.

Motivated by the strong interest in large language models, we have written this new prompt engineering guide, covering related papers, study guides, models, lectures, reference materials, LLM capabilities, and prompt-engineering tools.

Compilers and Language Design

https://www3.nd.edu/~dthain/compilerbook/

Tech Enthusiast Weekly (Issue 212): Life's Not Short

Life is short, but not if you know how to use it well.

Life is just enough to realize one of your dreams, provided you pour all your energy into it from the start.

If you waste time and fail to focus, you accomplish nothing and then life is over.

The real problem is not that life is short, but that we waste too much time.

The most surprising thing is that people don't value their time. You don't let people steal your money, but you let people steal your time.

If you allow yourself to be distracted by unimportant random things, even if you live a thousand years, you will get nowhere.

Von Neumann, the inventor of the modern computer, died in 1957 at the age of 53. He was extremely busy all his life, with all kinds of matters coming his way.

He repeatedly put off things he wanted to do, always saying he would do them later when he had time, without ever saying when.

He once said, for example, that he wanted to write a big treatise on von Neumann algebras, a field of mathematics he had pioneered himself. However, when World War II broke out, his interests changed, and he turned to applied mathematics in the service of the war, and was also involved in government consultation and advice.

From the outbreak of World War II until the 1950s, most of his time was spent not on academic research but on policy consulting for the US military.

His colleagues at the institute and the university lamented this. They felt he was wasting his time and talent: policy consulting could be handed to others, while his mathematical genius should be spent on academic research no one else could do.

Not long after he joined the AEC (the US Atomic Energy Commission), he was diagnosed with cancer. Within two years, he was dead.

Initially optimistic about his cancer, he remained active in government affairs. But after a period of treatment, the doctors could do no more and told him plainly that little time remained.

Only then did he panic: his life was about to end with so much unfinished. He tried to seize the time and concentrate on the subject he was studying, the theory of automata. But it was too late; the cancer progressed faster and faster, and he never finished the work.

Even at this point he promised to deliver a series of lectures at Yale University, which of course never materialized.

He had great ambitions for automata theory, believing it would be his life's greatest work. The field was entirely his creation, combining mathematical logic, information theory, and biology, and would have a major impact on humanity. But alas, he kept putting other things first.

After his death, interviewed colleagues again remarked that his talent had been wasted: only about 30 of his years were actually spent working, and most of the last 10 went to government consulting projects rather than the kind of academic research only he could do.

It is not that he did not know this; it was simply his character to study many things at once. Once something new interested him, he would put down the work at hand and say he would come back to it later. Unfortunately, life left him no time for "later".

I believe life has not set aside that time for you and me either. Time allowed to be wasted on trivial uses is lost forever. Only by protecting your time and focusing in one direction will life not be so short.

There is a famous saying: Programming is thinking, not typing. After years of programming, I often feel like I type too much and think too little.

– "How to Control the Metacognitive Process of Programming?"

Human language is the interface used to describe problems. The clearer and more precise your language, the easier it is to describe and solve problems.

– "Less technical content"

The reason we failed as a startup was because we changed our approach from making what people wanted to making what we wanted people to want.

– Eric Migicovsky, creator of smart watch Pebble

How to control the metacognitive process of programming?

https://lambdaisland.com/blog/2022-02-17-the-fg-command

Posted on Thu, 17 Feb 2022
By Chen Tingting

There is a famous saying: programming is thinking, not typing. After doing this job long enough, I sometimes feel I type too much. Humans use their brains whether thinking or typing, so I take "typing" as a metaphor: typing is the kind of activity we do unconsciously, through muscle memory, rather than through conscious attention. Many of us, with enough experience, run on a fair amount of muscle memory. But are we doing enough thinking when we need to?

We humans make many decisions in daily life, from buying a car to naming a function. Evolution made us animals that can delegate parts of decision-making to the subconscious, what we might call a background process in the brain, so that we can focus on what matters. The question is: what if the subconscious is not doing a good job? Most of the time it is no big deal, and escalating the issue back to foreground thinking fixes it. For example, you press the key to evaluate a form but mistakenly evaluate the wrong form; you notice, and your foreground thinking decides what to do next. But sometimes you know, before you even start, that your subconscious will not handle the work well. What can you do to keep it from taking over? Do we have something like the Linux fg command, which brings a process from the background to the foreground?

I am a little embarrassed to admit that although I have been programming for a decade, sometimes I hit a bug and my brain spends a few hours in panic mode before returning to analysis mode. Why? My theory is that I built my junior-developer skills by solving problems in panic mode: frantic searching, trial and error, and so on. Years of experience trained my behavior and muscle memory to debug in panic mode. Now I want to fix that. Two methods help me control my subconscious:

1. Explain the code to another person (the rubber duck method).
2. Ask yourself some pre-prepared questions.

Remote Work Taught Me the Rubber Duck Method
When working with the Gaiwan team, we work entirely remotely. Sometimes, when I want to talk to someone, I have to wait. A few times, when I hit a bug and wanted to ask for help, I wrote down all the details about the code, the environment, and how I had tried to fix it. After pasting my bug report into Gaiwan's communication channel and taking a 15-minute break, something magical happened: the muse arrived. I fixed the bug quickly and edited the message I had posted. The rubber duck method really works!

In fact, I think thinking-by-explaining goes by different names: rubber ducking, literate programming, the Feynman technique, and so on. They are all similar things.

The Right Questions Are Like an fg Command for Your Brain

Now, when I hit a bug, I ask myself three questions:

1. Am I chasing this bug in a scientific way?
2. Do I have a view of the system appropriate to the scope of this problem?
3. Do I have the necessary telemetry tools?
When I write functions, write modules, or prepare a deployment, I run into a similar problem: too much typing, too little thinking.

Here are some questions I have come up with for myself, still in alpha (a small Python rendering of a few of them follows the lists):

Questions for writing functions:

1. Should I separate commands and queries?
2. Do I have a defensive design for conditionals and errors (e.g. logs, try/catch)?
3. Do pre-asserts and the function name describe the purpose?
4. Did I add proper docstrings to the function?

Questions for writing modules:

1. Should I explicitly specify this module's API and keep it distinct from the internal functions?
2. Is all unnecessary dead code removed?
3. Did I design or use appropriate Clojure records to model immutable domain concepts?

Questions for integration and deployment:

1. Did I add appropriate tests for domain functions?
2. Did I design proper feedback messages in the install script?
3. Could some of the manual setup steps be replaced with automated scripts?

The point about records needs a little more explanation: for immutable concepts like URIs, dates, or connections, representing them with well-crafted records keeps implementation details hidden in the appropriate layer. These immutable concepts tend to have a fixed set of related operations that can be modeled with protocols.
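
The article's own examples are Clojure-flavored; as an illustration only, here is a rough Python rendering of a few checklist items: command/query separation, docstrings, and an immutable "record" (a frozen dataclass standing in for the Clojure records mentioned above):

```python
from dataclasses import dataclass

# Command/query separation: the query computes and returns a value with
# no side effects; the command performs side effects and returns nothing.
def overdue_items(items, today):
    """Return the items whose due date is before `today` (pure query)."""
    return [it for it in items if it["due"] < today]

def send_reminders(items, mailer):
    """Send one reminder per item (command: side effects only)."""
    for it in items:
        mailer.send(it["owner"], f"Reminder: {it['name']} is overdue")

# An immutable "record" for a domain concept: implementation details
# stay behind a small, fixed set of operations.
@dataclass(frozen=True)
class DateRange:
    start: str
    end: str

    def contains(self, day: str) -> bool:
        return self.start <= day <= self.end

items = [{"name": "book", "due": "2023-06-01", "owner": "a@example.com"}]
print(overdue_items(items, "2023-06-15"))
print(DateRange("2023-06-01", "2023-06-30").contains("2023-06-15"))
```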

I find that explaining and asking questions help me effectively. They train my subconscious: the more analytical thinking I use, the better my divergent thinking gets. Do you use questions like these too? Tell us yours.
