Oxford University professor Michael Wooldridge: How the AI community has viewed neural networks over 40+ years

Recently, Michael Wooldridge, a professor of computer science at Oxford University, posted a thread of 30 tweets in one go. Drawing on his book collection, he traced how the artificial intelligence community has viewed neural networks over more than 40 years, and the thread drew attention from many colleagues and prominent figures in the AI field. Yann LeCun, for example, praised it: "Nice thread on the perception of neural nets by the AI community over 4 decades."

The editors contacted Professor Michael Wooldridge for permission to translate the thread. The main points are compiled below, with relevant pictures of the books mentioned added:

I'm sure you'll agree that some in the neural network community feel rueful about the way the field was treated over the past 40 years, especially the years of silence that followed the connectionism/PDP (Editor's Note: Parallel Distributed Processing) boom of the 1980s and 1990s.

My office was recently relocated, and as I reorganized my book collection I went back through a ridiculous number of AI textbooks, reviewing how each of them treated neural networks and whether it was reasonable to dismiss neural networks early on.

First up: Gödel, Escher, Bach (1979) by Douglas Hofstadter. Hofstadter is a Pulitzer Prize winner and was hugely influential in the 1980s, and this book explores the connections between artificial intelligence, logic, music, and art.

GEB (editor's note: an abbreviation of the title; the Chinese edition is known as "Ji Yi Bi") is an intellectual journey, a truly unique and mind-expanding work. I suspect many students have been drawn into AI/logic/mathematics after reading this book, so hats off to Hofstadter; I haven't seen anything like it since.

GEB "does" talk about neural structures (p. 339), and doesn't shy away from big questions (like how does the brain lead to thought?) In fact, it also addresses a lot of big questions, but doesn't mention "Perceptron Rosenblatt", nor There is no mention (perhaps by luck) of the book "Perceptrons" by Minsky & Papert (although other work by both authors is cited).

(Editor's Note: Rosenblatt's perceptron, introduced in 1958, was the first neural network to be described fully as an algorithm; the book "Perceptrons" carefully analyzed the capabilities and limitations of single-layer neural networks typified by the perceptron, proving that the perceptron cannot solve even simple linearly inseparable problems such as XOR, which killed off research interest in neural networks at the time.)
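
(Editor's note: a minimal sketch of the linear-separability point, written for this translation rather than taken from any of the books discussed; the data, learning rule, and epoch count are illustrative.)

```python
# Single-layer perceptron on XOR: the four points are not linearly
# separable, so the perceptron learning rule never finds a solution.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])  # XOR labels

w = np.zeros(2)
b = 0.0
for epoch in range(100):
    errors = 0
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)   # threshold unit
        update = target - pred       # perceptron learning rule
        w += update * xi
        b += update
        errors += abs(update)
    if errors == 0:                  # never happens for XOR
        break

print([int(w @ xi + b > 0) for xi in X])  # never equals [0, 1, 1, 0]
```

(Replace XOR with a linearly separable function such as AND and the same loop converges after a few epochs.)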

Next up is the 3-volume Handbook of Artificial Intelligence: Vols. 1 (1981) and 2 (1982) edited by Barr & Feigenbaum, and Vol. 3 (1982) edited by Cohen & Feigenbaum. I have a soft spot for them.

Re-reading these books was a deep nostalgia trip for me: I studied them intently as an undergraduate (1985-89), and they give an insightful picture of the AI world of the 1970s, much of which still has practical relevance and historical value today.

Volume 3 covers vision and learning, and there is an introduction to perceptrons in the chapter on pattern recognition... Gradient descent also comes up (p. 376), which slightly surprised me; it is presented very much with the flavor of "hill climbing".
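
(Editor's note: a minimal illustration of gradient descent, added for this translation and not taken from the Handbook; the function and step size are arbitrary. Hill climbing is the same idea with the sign flipped: here we step downhill along the gradient.)

```python
# Gradient descent on f(w) = (w - 3)^2, whose minimum is at w = 3.
def grad(w):
    return 2 * (w - 3)        # derivative of (w - 3)^2

w = 0.0                       # starting point
lr = 0.1                      # step size
for _ in range(50):
    w -= lr * grad(w)         # move against the gradient

print(round(w, 4))            # prints 3.0: converged to the minimum
```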

However, the vast majority of the vision/learning material is about symbolic methods, and neural networks are simply pigeonholed under pattern recognition. This is surprising from today's vantage point, but that was 40 years ago.

Next came Elaine Rich's 1983 book Artificial Intelligence. I think this was the first really widely adopted AI textbook, and it was the textbook for my own undergraduate AI course. It's well written, it covers a lot of ground, and it was "very" influential.

Rich's book shaped the design of AI courses, and its structure is still the norm: problem solving/search; game playing; knowledge representation; natural language processing; perception; learning; applications. Even today, you can find AI courses with this structure everywhere.

Rich's book is a classic that remains fascinating to this day. However, its treatment of the perceptron is just one paragraph long, and the perceptron is written off as never having achieved any degree of success (p. 363). This was only a few years before connectionism/PDP grabbed everyone's attention.

This was followed by a special issue of the journal Artificial Intelligence, published under the title "Foundations of Artificial Intelligence" (MIT Press, ed. D. Kirsh, 1992, based on a 1987 symposium).

Among the included papers, the symbolic approach dominates (Lenat, Feigenbaum, Newell...), but interestingly there is some pushback against the logicist approach ("Rigor mortis" is the title of one article criticizing logicist AI) and a backlash against "knowledge-based AI."

Neural networks are barely mentioned; Rod Brooks seems to be the only one who does, and his article "Intelligence Without Representation" is brilliant and clear-cut, arguably a clarion call for our generation of AI researchers.

But Rod's discussion of neural networks is mainly there to position his own work (reactive/behavioural AI). Rod was still a long way from logic/knowledge-based AI, yet his AI research stood at a higher level than the general neural AI research of the time.

Matt Ginsberg's "Essentials of Artificial Intelligence" was published in 1993. Matt's book clearly shows that the field of AI had matured and become more algorithmic/mathematical since Rich's textbook.

It includes a very clear explanation of simple neural networks (starting on p. 310), but no algorithms such as gradient descent/backpropagation. No explicit rationale is given for leaving these out, but perhaps the most important reason was the author's concern about scalability:

"… Neural methods … face serious difficulties if we need 10^12 threshold elements!" (p. 313)

Well, the cumulative effect of Moore's Law has caught many of us out. A salutary lesson.

Finally, Russell & Norvig's classic "AI: A Modern Approach". I remember a ripple of excitement in the AI ​​community when the book was released in 1995. This is a stunning piece of academic work.

There is no other artificial intelligence book quite like it, because it manages to tell a coherent story about this crazy field with its endless, dizzying strands of thought. I've written several textbooks myself and know how hard that is. It is a major achievement, and a unique one.

The 1st edition of AIMA has a solid treatment of neural networks (Chapter 19): extremely clear descriptions of perceptrons, multilayer feedforward networks, backpropagation, and so on, with proper mathematical definitions plus practical algorithms.
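
(Editor's note: a minimal sketch in the spirit of that chapter, written for this translation rather than taken from AIMA; the network size, learning rate, and iteration count are arbitrary. Unlike the single-layer perceptron above, a two-layer network trained with backpropagation does learn XOR.)

```python
# Two-layer sigmoid network trained by backpropagation on XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)    # hidden layer, 4 units
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)    # output layer

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (squared-error loss)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent updates
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # typically close to [0, 1, 1, 0]
```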

However, the book points out many problems with neural networks and argues that they "do not form a suitable basis for current artificial intelligence".

The latest edition of AIMA (the 4th) was published recently: I am very proud to say that I contributed to the chapter on multi-agent systems.

The new edition includes a "fantastic" treatment of neural networks contributed by GAN inventor Ian Goodfellow. It's brilliantly written, and does full justice to how far AI had progressed by 2020.

So, the conclusion? Yes, neural networks were clearly regarded as of marginal interest and uncertain value by many in the AI community for a long time. I don't think that was necessarily unreasonable, except where the naysayers wrote the approach off as hopelessly impractical...

Personally, I think of neural networks as an approach that is "complementary" to other directions in AI, and one that has clearly proven its worth in certain domains. I did not foresee what has happened over the past decade, so one thing is certain: AI will continue to surprise us, and we will laugh at many of the opinions we hold now.

(Editor's note: Michael Wooldridge's final summary was rather diplomatic. One commenter on his Twitter was more pointed, and Wooldridge endorsed the comment. The commenter said: "Your conclusion shows three things: 1) don't blindly trust so-called experts, who often clearly miss the point; 2) transformative progress is often unpredictable; and 3) without big data and GPUs, people still wouldn't have much respect for neural networks.")

About Michael Wooldridge

Michael Wooldridge is Head of the Department of Computer Science at the University of Oxford and participated in the development of the AlphaGo project. His research focuses on multi-agent systems. He is a Fellow of the Association for Computing Machinery (ACM), the Association for the Advancement of Artificial Intelligence (AAAI), and the European Association for Artificial Intelligence (EurAI), and served as President of the International Joint Conferences on Artificial Intelligence (IJCAI) from July 2015 to August 2017. An academic pioneer in artificial intelligence, he has made major contributions to multi-agent systems, and his book "An Introduction to MultiAgent Systems" is used as a computer science textbook by many universities in China and abroad.

Origin blog.csdn.net/lionkingcz/article/details/125537735