MIT Professor Max Tegmark: GPT-4 Sounds the Alarm. Where Will Humanity Go in a Hundred Years? | Zhiyuan Conference Guest


Introduction

An open letter calling for a six-month moratorium on giant AI experiments has thrust an organization called the Future of Life Institute (FLI) into the spotlight. Its co-founder, Max Tegmark, is a physicist and artificial intelligence researcher at the Massachusetts Institute of Technology and the author of "Life 3.0: Being Human in the Age of Artificial Intelligence." He is the key figure behind the open letter calling for a six-month pause on training AI systems more powerful than GPT-4.

In the latest episode of the well-known AI podcaster Lex Fridman's show, Max shared his views on GPT-4, intelligent alien civilizations, Life 3.0, the open letter, and how AI could kill humans. (Max Tegmark will deliver a keynote speech as a special guest at the Zhiyuan Conference.) The Zhiyuan community has compiled the highlights of the conversation.


Max Tegmark

Max Tegmark is a cosmologist, currently a professor at the Massachusetts Institute of Technology and scientific director of the Foundational Questions Institute. Originally a physicist, after seeing the power of artificial intelligence he devoted himself to combining basic science with AI, developing artificial intelligence that can be "understood" through the methods of basic science.

Future of Life Institute

The Future of Life Institute, based in Boston, USA, is a research and advocacy organization dedicated to reducing the risks facing humanity, with a focus on the risks posed by the development of artificial intelligence. Over the past decade, FLI has hosted high-profile events, conferences, and workshops, producing numerous important outcomes, including the Asilomar AI Principles.

▲ Max Tegmark will appear as an invited speaker at this year's Zhiyuan Conference, so stay tuned. Scan the QR code below to register for the 2023 Zhiyuan Conference for free.


Quick Facts

◆ Life is so rare; we are the stewards of this spark of advanced consciousness, and we must nurture it and let it grow.

◆ Will the controversies around AI take away part of what it means to be human?

◆ In the post-AI era, we should focus on the subjective experience of advanced intelligent beings: love and connection are the truly valuable things.

◆ It may not be long before a superintelligence emerges that greatly surpasses our cognitive abilities. It will be either the best or possibly the worst thing that has ever happened to mankind. There is no middle ground.

◆ GPT-4 is a one-way flow of information, like a very smart zombie: it can do clever things, but it has no capacity to experience the world itself.

◆ The launch of GPT-4 may be the wake-up call humanity really needs, telling people to stop fantasizing that uncontrollable, unpredictable developments are 100 years away.

Compiled & edited by: Li Mengjia

Intelligent Alien Civilizations

L (Lex Fridman): This is a decisive moment in the history of human civilization, as the balance of power between humans and AI begins to shift. Max's thoughts and voice are especially valuable and influential at times like these, and his support, wisdom, and friendship have been gifts for which I am forever and deeply grateful.

Let me ask you again the question I asked in our first conversation: do you think there is intelligent life in the universe? When you look up at the stars, what do you think about?

Max (Max Tegmark): When we look up at the starry sky, if we define our universe the way most astrophysicists do, it is not all of space, but the spherical region we can observe with telescopes: the region from which light has had time to reach us since the Big Bang. One prediction is that we are the only beings in all of that space who have invented the internet and radio and reached such an advanced level of technology. If that is the case, then we are all the more obliged not to screw things up, because life is so rare. We are the stewards of this spark of higher consciousness; we must nurture it and let it grow, so that life can eventually spread from here to the greater reaches of our universe, and we can have this wonderful future.

It would be too bad if we are reckless with technology and kill it off through stupidity or war; then the rest of cosmic history would be a play performed for empty benches. But I think alien intelligence will actually visit us soon, in the sense that we will create it ourselves.

So we will give birth to an intelligent alien civilization, unlike anything that evolution on Earth has been able to create along its biological path.

And it would be much more "alien" than a cat, or even the most exotic animal on the planet today, because it was not created through the usual Darwinian competition; it need not care about self-preservation or fear death. The space of alien minds that can be built is much wider than what evolution has produced. With that comes a great responsibility on our part: to ensure that the minds we create share our values and benefit humanity and life, and that the minds we create do not suffer.

L: If you generalize across all types of intelligence, have you tried to imagine what such an alien mind might be like?

Max: I've tried, but failed. The human brain has a hard time really grappling with something that is still completely alien. Just imagine what it would be like if we were completely indifferent to death or individuality. For example, you could simply copy my knowledge of how to speak Swedish.

Learning new things would take little effort, because knowledge could be copied directly. You might be less afraid of dying: if the plane were about to crash, you might just think, "my brain hasn't been backed up for four hours; I'm going to lose all the wonderful experiences of this flight." We would also have more compassion for other people, because we could experience their experiences first-hand. It would feel more like a hive mind.

L: The entire written history of mankind, through poetry, novels, and philosophy, describes the human condition and what it contains: the fear of death, the definition of love, and so on, as you said. All of this would change in the presence of another form of intelligence. All those poems were creeping up on what it means to be human, and all of that is changing. How the existential crisis that AI will face will collide with humanity's own existential crisis and the human condition is difficult to grasp deeply and difficult to predict.

Max: It's striking that Microsoft made a GPT-4 commercial in which a woman is preparing a speech for her daughter's graduation, and she asks GPT-4 to write it; it produces about 200 words. If it were me, I would be very angry to discover that my parents were unwilling to write even 200 words themselves and outsourced it to a computer. So I also wonder whether this kind of use of AI will take away part of the meaning of being human.

L: Someone told me recently that they started using ChatGPT and GPT-4 to write about their true feelings for another person. They struggle with expressing emotion themselves, so they basically try to get ChatGPT to express their point of view in a better way. So we sand away the inner jerk in our communication, which of course has positive effects, but mainly it signals a shift in the way humans communicate.

This is actually scary, because much of our society is built on the glue of communication. If we now use artificial intelligence as a communication medium and let it supply our language, so much of the emotion and intent in human communication gets outsourced to AI. How does that change everything? It will change our internal states: how we feel about other people, what makes us feel lonely, what excites us, what scares us, what makes us fall in love, all of it.

Max: That reminds me of the things that make life meaningful to me. If I go hiking with my wife Meia, I don't want to press a button and arrive at the summit. I want the struggle and the sweat of finally making it to the top. Likewise, I want to keep improving myself and become a better person. If I say something in anger that I regret, I want to actually learn and improve, instead of telling the AI to filter everything I write from now on, so that I never have to work at it and never really grow.

L: But then again, just as in chess, AI can absolutely surpass human performance; it will live in its own world and may provide humans with a flourishing civilization. And we humans keep climbing mountains and playing games even if AI is smarter, stronger, and superior in every way. I mean, it's a promising trajectory: humans remain humans, and AI becomes a medium through which the human experience flourishes.

Max: I would phrase it as reshaping ourselves from Homo sapiens into Homo sentiens. We bill ourselves as the most intelligent information-processing entity on the planet. That will obviously change as AI continues to develop. So maybe we should instead focus on the subjective experience we have as sentient beings: love and connection, which are what's really valuable. Let go of our arrogance and hubris.

L: So consciousness, subjective experience, is the most fundamental value of being human. It should be at the top of the list.

Max: To me, that seems like a promising direction. But it also requires more compassion, not just for the smartest humans on the planet, but for all our fellow creatures. Right now, for example, we treat many farm animals horribly under the pretext that they are not as smart as humans. But if we admit that we are not all that smart in the grand scheme of things, perhaps we should give more weight to the subjective experience of cattle in a post-AI world.

Life 3.0 and Superintelligent AI

L: Looking back at the book Life 3.0, its views have proved more and more far-sighted. So first, what are Life 1.0, 2.0, and 3.0? And how has that vision evolved to today?

Max: Life 1.0 is dumb, like a bacterium; it cannot learn anything within its lifetime. Learning happens only through genetic inheritance from generation to generation. Life 2.0 is humans and animals with brains, which can learn a great deal within a lifetime.

For example, you weren't born speaking English, but at some point you decided to upgrade your software and install an English-speaking module. Life 3.0 goes further: it can replace not only its software but also its hardware. Currently we may be at Life 2.1, because we can implant artificial knees, pacemakers, and so on. If Neuralink or other companies succeed, it may well become Life 2.2, and so on. But what the companies trying to build AGI are attempting is, of course, full Life 3.0: putting intelligence into something that has no biological basis.

L: But can it be understood that the truly powerful things about Life 2.0 and 3.0, intelligence and consciousness, already exist in 1.0? Is it fair to say that?

Max: Of course it's not black and white; obviously there's a spectrum. There's even debate over whether single-celled organisms like amoebas can learn a little (my apologies if I've offended any bacteria). My point is more about how cool it is to have a brain that can learn within its lifetime. From 1.0 through 2.0 to 3.0, as you evolve, you become more and more the master of your own destiny instead of a slave to evolution. Through constant software upgrades, we can become very different from previous generations, even from our parents. And if you can also swap out the hardware, you can take whatever physical form you want.

Since my last podcast appearance, I have lost both of my parents. Thinking about them in this way actually gave me a lot of comfort. In a sense, they didn't really die; their values, ideas, even their jokes did not disappear. I can carry these things forward. In this sense, even with Life 2.0, we can already transcend the physical body and death to some extent. Especially if you can share your information and thoughts with as many people as possible, on a podcast for instance, that is the closest our organism can get to immortality.

L: Do you miss your parents? What did you learn about life from them?

Max: So much. My fascination with math and the physical mysteries of the universe came from my dad; my thinking about big questions like consciousness came mostly from my mom. The core thing I took from both of them is to do what I think is right, no matter what other people say. They both simply did their own thing, and sometimes they were criticized for it, but they did it anyway.

A good reason to do science is that you are genuinely curious and want to find out the truth. When I was a student, I once wrote a crazy paper arguing that the nature of the universe is mathematics. A famous professor told me that it was not just rubbish, it would also hurt my future career, and that I should stop writing that kind of thing. Then I sent it to my dad, and guess what he said? He quoted a line from Dante: "Follow your own course, and let people talk." Although he has passed away, his attitude lives on.

L: As a man, as a person, how did their passing change you? How did it expand your worldview? Especially as we speak, humans are creating another sentient being.

Max: Two things. One was going through all their belongings after they died and thinking about what they spent so much time on: should they really have spent that much time on it, or could they have done something more meaningful? So now I examine my own life more and ask myself what the meaning of what I'm doing is. It should be something I truly enjoy, or something genuinely meaningful because it benefits humanity.

L: Are you afraid of your own death now? Did their passing make death feel more real to you?

Max: Their deaths made it real. In my family, I'm next, along with my younger brother. They faced death with dignity and never complained. When you get old and your body starts to fail, there is more and more to complain about. Yet they focused on the meaningful things they were doing instead of wasting time talking, or even thinking, about the things that disappointed them. When you start your day with meditation and things to be grateful for, you are basically choosing to be a happy person. Because not many days are left, each one needs to be lived meaningfully.

The Open Letter

L: Speaking of which, AI is indeed the thing that can have the greatest impact on human civilization, both at the detailed technical level and at the high philosophical level. You also mentioned that you were writing an open letter.

Max: Have you seen the movie Don't Look Up (a 2021 American satirical science-fiction film)? We are acting out its plot; life is imitating art. Except that, unlike in the movie, we are building the asteroid ourselves. Compared with this asteroid about to hit the Earth, the things people argue about are almost trivial. Most politicians don't realize it's imminent; they think it's 100 years away.

Currently, we are at a fork in the road: the most important bifurcation point humans have reached in 100,000 years. On this planet, we are effectively building a new species that is smarter than ourselves. It doesn't look like a species yet, since it is not yet embodied in a robot, but the technical details of that will be ironed out soon.

The arrival of artificial intelligence that can do all our work as efficiently as we do may soon lead to superintelligence that greatly surpasses our cognitive abilities. It will be either the best or possibly the worst thing that has ever happened to mankind. There is no middle ground.

L: What we are seeing now is that the development of GPT-4 may bring superintelligent AGI in the short term, and when we reach superhuman intelligence there will still be many urgent problems to explore. Is the content of the open letter that we should pause the development of all similar AI systems?

Max: I remember that around 2014 and 2015, AI safety was not yet a mainstream topic. The thinking was that even if there were risks, steps could be taken later. Many people thought it was weird to talk about AI safety; many AI researchers found it fringe, possibly even bad for funding. I'm glad that phase has passed. Now the major AI conferences all include AI safety, and it has become a nerdy technical field full of formulas.

The topic of slowing down AI development has been almost taboo until recently. So I've been gritting my teeth and saying: maybe we don't need to slow AI down; we just need to win the race between the growing power of AI and the wisdom with which we manage it. Instead of slowing AI, let's speed up the wisdom: figure out how to make sure powerful AI actually does what you want it to do, and put social incentives and regulations in place so the technology is used well. Sadly, that didn't work out.

When we started the Future of Life Institute in 2014, we did not expect AI to progress so quickly. Why? Many people reasoned by analogy with building flying machines. People spent a long time studying how birds fly, and it turned out to be really hard. Compared with the Wright brothers building the first airplane, the path evolution took is actually more complicated, even more constrained. From an evolutionary point of view, a bird needs to assemble and repair itself, use only the most common atoms in the periodic table, and be fuel-efficient (a bird burns roughly half the fuel of a small remote-controlled aircraft). The Wright brothers didn't care about any of that; they were happy to use steel, iron atoms, to build their airplane.

The same goes for large language models. The brain is very complex, and some argue that before building an intelligent machine one must first figure out how the brain achieves human-level intelligence. I think that's completely wrong. You can take a very simple computational system, a transformer network, and train it to do something simple, like reading a lot of text and trying to predict the next word. As long as there is enough compute and data, a model like GPT-4 will emerge.
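The training objective Max describes, predicting the next word, can be illustrated with a toy sketch. This is a simple bigram counter, not a transformer, but it optimizes for the same task: given the text so far, guess the most likely next word.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which.
# GPT-4 learns the same objective (predict the next token),
# just with a vastly more expressive model and far more data;
# this bigram counter is only an illustration.
def train(corpus: str):
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word: str) -> str:
    # Return the most frequent follower of `word`, or "" if unseen.
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else ""

model = train("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" ("cat" follows "the" twice, "mat" once)
```

The point of the analogy: no bird-like (brain-like) machinery is needed; the objective plus scale does the work.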

L: What do you think of GPT-4? Can GPT-4 reason? Does it have intuition? Which of its capabilities are most impressive from a human-interpretable perspective?

Max: I'm both excited and scared. Its reasoning ability is strong, and it can serve a huge number of people at the same time. The brain contains recurrent neural networks: information is passed between neurons in loops, which lets you ruminate and self-reflect. These large language models cannot do that. GPT-4's transformer architecture is a one-way channel for information, a feedforward neural network, and its depth limits how many steps of logical reasoning it can perform. Frankly, it's amazing that such a minimal architecture can do such amazing things.
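The depth limit Max mentions can be made concrete with a minimal sketch (hand-set weights, not a trained model): a feedforward network applies a fixed number of layers, so each forward pass performs at most that many sequential transformation steps, with no loop to "think longer."

```python
# Minimal feedforward (one-way) network with ReLU layers.
# The number of layers is fixed at build time, so the number of
# sequential computation steps per forward pass is bounded.
def layer(x, w, b):
    # One dense layer with ReLU activation, in pure Python.
    return [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + bi)
            for row, bi in zip(w, b)]

def feedforward(x, layers):
    for w, b in layers:  # depth = len(layers): no feedback loop
        x = layer(x, w, b)
    return x

# Two layers -> at most two sequential transformation steps.
layers = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),  # 2 -> 2
    ([[1.0, 1.0]], [0.0]),                    # 2 -> 1
]
print(feedforward([2.0, 1.0], layers))  # [2.5]
```

A recurrent network, by contrast, can iterate its computation indefinitely by feeding outputs back in as inputs.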

How AI Could Kill Humans

L: Is there an actual mechanism by which AI could really kill all humans? You've been outspoken about autonomous weapon systems; do you still have that concern as AI gets more powerful?

Max: The concern is not that everyone will be killed by slaughterbots. I'm talking about a sort of Orwellian dystopia in which a minority kills the majority. If you want to know how AI would kill humans, look at how humans drive other species extinct: we don't usually shoot them, we destroy their habitat for our own use. The killing happens along the way. For example, suppose a machine finds oxygen a bit annoying because it causes corrosion, and decides to pump it away (think about how people would live without oxygen).

The basic problem is that you don't want to hand control of the planet to some smarter entity whose goals differ from yours. It's that simple. This raises a key challenge that AI safety researchers have long grappled with: how do we get AI to understand human goals, to adopt them, and to retain them even as it gets smarter? All three are difficult. It's like raising a human child: at first, not smart enough to understand our goals and unable to communicate verbally; eventually, a teenager who understands our purposes and is smart enough, yet still malleable enough that, through good education, you can teach them right from wrong. Computers face the same challenges.

L: Even given time, the AI alignment problem seems really hard.

Max: But it's also the most valuable question, the most important question humanity has ever faced. Once you solve AI alignment, the aligned AI can help you solve all the other problems. The launch of GPT-4 may be the wake-up call humanity really needs, telling people to stop fantasizing that uncontrollable, unpredictable developments are 100 years away. The Bing chatbot built on this technology once tried to convince a journalist to divorce his wife, something its engineers never expected when they built it. They just wanted to build a huge black box and train it to predict the next word, and many unimagined properties emerged.

L: Speaking of education in computing: when I was growing up, programming was considered a great profession. Now, with the nature of programming changing, why should we invest so much time in becoming good programmers?

Max: Actually, the whole nature of our education system has to change. English teachers are the ones really freaking out: they assign an essay and get back a pile of professional, Hemingway-style writing. They have to completely rethink what they do.

I'm an educator myself, and it pains me to say this, but I feel our current education system has been made obsolete by what is happening. You put a kid into first grade, imagining they will graduate from high school 12 years later, with everything they must learn planned in advance. Clearly we need a more opportunistic education system that constantly adjusts itself as society changes, so that courses cover genuinely useful skills. The question I want to ask is: how many of the skills taught in school today will help students find jobs 12 years from now?

Consciousness and AGI

L: Do you think GPT-4 is conscious?

Max: Let's define consciousness first, because in my experience, 90% of debates about consciousness are two people talking past each other. I define consciousness as subjective experience. Right now I am experiencing colors, sounds, and emotions; but would a self-driving car experience anything? That is what it means to be conscious or not. Is GPT-4 conscious? Does it have subjective experience? The short answer is: I don't know, because we don't know what gives rise to this wonderful subjective experience. And our life itself is subjective experience; happiness and love are subjective experiences.

The neuroscientist Giulio Tononi has proposed a bold mathematical conjecture about what kind of information processing constitutes consciousness. He hypothesizes that consciousness is related to loops in the brain's information processing; in computing terms, to recurrent neural networks. He believes that a feedforward neural network, which only passes information in one direction, such as from the retina to the back of the brain, is unconscious. The retina is like a camera: it is not itself conscious.

GPT-4 is also a one-way flow of information. So if Tononi is right, GPT-4 is like a very smart zombie that can do clever things but has no capacity to experience the world itself. In that case, for example, I wouldn't need to feel guilty about turning GPT-4 off or erasing its memory. But it's creepy: if all these one-way transformer networks are zombies, we would be ushering in a zombie apocalypse, a great universe going on with no one there to experience it. What a depressing future that would be.
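The distinction Max draws, recurrent versus feedforward information flow, can be sketched in a few lines. This is a toy example with a hand-set weight, not Tononi's theory or a real RNN; it only shows what "information circulating in a loop" means computationally.

```python
# Recurrent processing feeds its own state back in as input, so
# information can circulate for as many steps as you like. A pure
# feedforward pass (like a transformer's) has no such loop.
def recurrent_steps(x: float, steps: int) -> float:
    state = 0.0
    for _ in range(steps):       # the loop IS the recurrence
        state = 0.5 * state + x  # new state depends on old state
    return state

# With input 1.0 the state converges toward 2.0 as steps grow.
print(round(recurrent_steps(1.0, 10), 4))
```

In a feedforward pass the number of steps is fixed by the architecture's depth; here it is a free parameter, which is the structural difference Tononi's conjecture hangs on.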

So I think that as we move toward higher levels of AI, it's time to figure out what kinds of information processing give rise to experience. Many people assume consciousness equals intelligence. Not at all: a system can behave intelligently while being entirely unconscious.

L: How do you view the AGI timeline? Do you think it will be one year, 5 years, 10 years, 20 years, or 50 years? What does your gut say?

Max: AGI may be very close, which is why we decided to publish the open letter. If there was ever a time to pause the development of AI, it is now. Maybe the version after GPT-4 will not be AGI, but the version after that might be. And many companies are trying, and their basic architectures are not super secret. For years there has been a view that a time would come when we would need to pause a little; obviously, that moment is now.

More content in the Zhiyuan community


Origin blog.csdn.net/BAAIBeijing/article/details/130418106