A Year of Self-Studying AI (1): The Ins and Outs of Technical Book Publishing

After nearly a year of self-studying AI, I've found that it has opened a new technical door for me. I'm no longer limited to the field of vulnerability attack and defense; more importantly, AI can be applied to many fields, not even limited to computing, so there is much more that can be done. During this period I also used NLP (natural language processing) techniques to build several models for processing text, with fairly good results.

1. There are many AI books, but few are suitable for beginners

AI is a theory-heavy subject, so there is a lot to learn. Although materials abound, very few are suitable for getting started.

There are plenty of theoretical books on AI, but few that can actually teach you to write code (setting aside the mathematical theory involved in AI).

I once read a very popular book on natural language processing, and I even complained about it in my WeChat Moments:

[Screenshot: my complaint in WeChat Moments]

If you have no AI background and want to quickly develop your own AI models, then I recommend two books: "Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow" and "Deep Learning with Python" (whose author is the creator of Keras).

2. Long book lists are mostly recommended by people who have never read them

If you ask for a beginner's AI book recommendation, many people will follow the crowd and suggest the "Flower Book" (Goodfellow's Deep Learning), the "Watermelon Book" (Zhou Zhihua's Machine Learning), and the like, which are really not suitable for getting started. I once saw a post on Zhihu:

[Screenshots: a Zhihu post recommending the same book twice under two different Chinese titles]

If you look carefully at the English titles, you will see they are the same book. In fact, these are the first and second editions of the same work; the Chinese titles differ only because they were released by different publishers. At a glance, you can tell the recommender has never actually read the book.

3. Writing a book is high effort, low return

Technical books generally don't sell in large volumes: 5,000 copies is considered normal, and tens of thousands counts as a best-seller. A book is usually written in one's spare time, and it takes at least a year or two to get it published (I have been writing one book for four years now, and I really don't want to write anymore). What about royalties? The industry standard is around 8% of the list price, sometimes negotiable a bit higher. Do the math on how much you can earn from sales, then factor in piracy, and the picture is even bleaker. Writing a book is simply not a way to make money; it brings more fame than profit, and if the book is bad, you may get plenty of criticism instead.
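To make the royalty math concrete, here is a minimal sketch. The 8% rate and the 5,000-copy "normal" figure come from the text above; the ¥89 list price and the 30,000-copy best-seller figure are my own illustrative assumptions:

```python
# Rough royalty estimate for a technical book.
# Assumptions (illustrative): list price ¥89, industry-standard 8% royalty.
def royalty(list_price: float, copies_sold: int, rate: float = 0.08) -> float:
    """Gross royalty income (before tax) from book sales."""
    return list_price * rate * copies_sold

# "Normal" sales of 5,000 copies vs. an assumed best-seller at 30,000 copies.
normal = royalty(89, 5_000)       # ≈ ¥35,600
bestseller = royalty(89, 30_000)  # ≈ ¥213,600
print(f"normal: ¥{normal:,.0f}, best-seller: ¥{bestseller:,.0f}")
```

Spread over the year or two (or four) of spare-time writing it takes, even the best-seller case is modest income, which is exactly the author's point.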

If someone loudly announces at the outset that he is writing a book, there is a 90% chance the project will die. I have seen several such cases in the security circle.

As for those who publish a book within a few months, the quality is usually not high, or they are "mass-producing" books under the name of a team or company. There is also a strange phenomenon in the circle: books whose content is mediocre yet whose sales are impressive. This reminds me of a story (I don't know whether it's a joke or the truth):

[Screenshot: the story]

4. AI technology develops too fast for books to keep up

ChatGPT became popular this year, and many companies are following up with their own development. Before long, all kinds of "Large Model xxx" titles will probably appear. Even so, I feel AI books are published too slowly.

In many cases, by the time a technical book is published, it is already obsolete or close to it.

Recently, while studying the application of graph neural networks to program analysis, I found that there are not many books on graph neural networks. You can find maybe six or seven on JD.com, but none of them teaches you how to actually build one; most are theory plus literature review.

Later, I found Stanford University's CS224W "Machine Learning with Graphs," which is very good. The course authors also developed the graph neural network library PyG, which combines theory with practice and is suitable for beginners.

I hope a practical book on graph neural networks will be published in China in the future; either a translation or an original work would do.



Origin blog.csdn.net/riusksk/article/details/129471103