The father of ChatGPT admits that GPT-5 does not exist. Why is OpenAI always so honest? | Digital Details

A ChatGPT prequel

Source: Ai Faner (WeChat ID: ifanr)

Recently, OpenAI CEO Sam Altman refuted rumors about GPT-5 at a public event.

He said that OpenAI is not training GPT-5, and has instead been doing other work built on top of GPT-4.

OpenAI is a very interesting organization. Unlike big companies such as Microsoft and Google, OpenAI has never shied away from talking about its ideas and products, and has always maintained a candid attitude.

Why is OpenAI so special? Why can this organization of only a few hundred people build AI products that shock the world? This 10,000-word article by Karen Hao may reveal the secret of OpenAI's success.

This article was published in MIT Technology Review in February 2020, four months before the release of GPT-3, and two years and nine months before the release of ChatGPT.

Text: Karen Hao @ MIT Technology Review / Translation: f.chen @ 真实fund

It's a story of how the pressures of competition erode idealism.

Every year, OpenAI employees vote on when artificial general intelligence (AGI) will finally arrive. Internally this is mostly seen as a fun team-building activity, and employees' estimates vary widely. But it is worth noting that, in a field that still debates whether human-like autonomous AI systems are even possible, half the lab bets that AGI is likely to arrive within 15 years.

After just four years of development, OpenAI has become one of the world's leading AI research laboratories. On the one hand, it has built a reputation for consistently publishing compelling research, putting it on par with other AI giants like Alphabet's DeepMind; on the other hand, it is also a Silicon Valley darling, co-founded by Elon Musk and legendary investor Sam Altman.

But most of all, OpenAI is revered for its mission. Its stated goal is to be the first to achieve AGI: not in order to rule the world, but to ensure that the technology is developed safely and that its benefits are distributed equitably to everyone.

If the technology is left to develop unchecked, AGI could easily spin out of control. Narrow intelligence, the clumsy AI around us today (as of 2020), has already shown us many examples: we now know that algorithms are biased and fragile, that they can be abused to deceive and cause serious harm, and that the cost of developing and running them concentrates the power to control them in the hands of a few. Predictably, without the careful guidance of a benevolent shepherd, AGI could prove disastrous.

OpenAI hopes to be that benevolent shepherd, and it has carefully crafted its image accordingly: in a field dominated by rich and powerful giants, it was founded as a non-profit organization.

The organization's first statement said this distinction would allow it to "create value for everyone, not shareholders." Its charter (a document so sacred that employees' pay partly depends on how well they adhere to it) goes further, declaring that OpenAI's "primary fiduciary duty is to humanity." Achieving AGI safely is so important, it adds, that if another organization comes close to getting there first, OpenAI will stop competing and cooperate with it instead. This narrative proved so attractive to investors and the media that in July 2019, Microsoft invested $1 billion in the lab.

However, three days spent in OpenAI's offices and nearly three dozen interviews with past and present employees, collaborators, friends, and other experts in the field suggest a different picture: there is a misalignment between what the company publicly espouses and how it operates behind closed doors. Over time, fierce competition and mounting funding pressure have eroded its founding ideals of transparency, openness, and collaboration.

Many who work or have worked for the company insisted on anonymity for the report because they were not authorized to speak or feared reprisal. Interviews with them suggest that OpenAI, despite its lofty vision, is obsessed with keeping secrets, protecting its image and maintaining the loyalty of its employees.

Since its inception, the goal of AI as a field has been to understand human intelligence and then re-create it. In 1950, the famous British mathematician and computer scientist Alan Turing posed a now-famous question in his paper: "Can machines think?" Six years later, a group of scientists gathered at Dartmouth College and formalized the discipline.

"It's one of the most fundamental questions in the history of knowledge, right?" said Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence (AI2), a Seattle-based nonprofit AI research lab. Whether we understand the origin of the universe is the same as whether we understand matter."

The problem is that the definition of AGI has always been vague. No one can really describe exactly what it would look like or what, at a minimum, it should be able to do; nor is it clear that there is only one kind of general intelligence, of which human intelligence may be just a subset. Opinions also differ on what achieving AGI would be for. In the more romanticized view, a machine intelligence unencumbered by the need for sleep or the inefficiency of human communication could help solve complex problems such as climate change, poverty, and hunger.

But the consensus in the AI field is that such advanced capabilities are decades, if not centuries, away, if it is possible to develop them at all. Many also worry that pursuing AGI overzealously could backfire: in the late 1970s and again in the early 1990s, AI research overpromised and underdelivered, funding dried up almost overnight, and an entire generation of researchers was left deeply scarred.
"The field feels like a backwater," said Peter Eckersley, a former research director for the Partnership on AI, an industry group of which OpenAI is a member.

In this context, OpenAI made its high-profile debut on December 11, 2015. It was not the first lab to publicly declare that it wanted to achieve AGI (DeepMind had done so five years earlier and was acquired by Google in 2014), but it seemed different. First, its staggering startup funding caused a sensation: it would receive $1 billion in seed capital from private investors, including Musk, Altman, and PayPal co-founder Peter Thiel, and the star-studded list of backers drew plenty of media attention. Second, its first batch of employees was also notable:

- Greg Brockman, former CTO of the payments company Stripe, would serve as CTO;

- Ilya Sutskever, who studied under AI pioneer Geoffrey Hinton, would serve as research director;

- a group of young researchers, recruited from top universities and other companies, would make up the core technical team.

(In February 2018, Musk announced that he was leaving the company over disagreements about OpenAI's direction. The following year, Altman stepped down as president of the startup accelerator Y Combinator to become OpenAI's CEO.)

But most importantly, OpenAI's non-profit status amounted to a statement: "It is important to have a leading research institution which can prioritize a good outcome for all over its own self-interest. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world." Nothing was spelled out, but the implication was clear: a lab like DeepMind cannot serve humanity, because it is constrained by commercial interests and closed off; OpenAI, by contrast, is open.

In an increasingly privatized research environment focused on short-term financial gains, OpenAI offered a new way to fund solutions to the biggest problems. "It was a ray of hope," said machine-learning expert Chip Huyen, who has followed the lab's journey closely.

At the intersection of 18th and Folsom Streets in San Francisco, OpenAI's office looks like a mysterious warehouse. The historic building has dull gray cladding and tinted windows, with most of the shades drawn; the words "PIONEER BUILDING", in faded red paint, are tucked inconspicuously around the corner, a vestige of its former owner, Pioneer Truck Works.

But its interior is bright and airy. The ground floor has a few communal spaces and two meeting rooms: one, suitable for larger meetings, is called "A Space Odyssey"; the other is called "Infinite Jest", and that was the space I was confined to during my visit. I was barred from the second and third floors, which house everyone's desks, several robots, and pretty much everything interesting. People came down to meet me when it was time for their interviews, and an employee kept a careful eye on me between meetings.

The sky was blue on the day I met Brockman, but he looked tense and wary. "We've never given anyone this much access before," he said with a tentative smile. He wears a clean, no-fuss haircut and, like many others at OpenAI, casual clothes.

Brockman, 31, grew up in rural North Dakota in what he describes as a "focused, quiet childhood": milking cows, gathering eggs, and teaching himself to love math. In 2008, he entered Harvard University intending to double major in math and computer science, but quickly grew impatient to get out into the real world. A year later he dropped out and enrolled at MIT instead, only to drop out again a few months later, this time for good; he moved to San Francisco and never looked back.

Brockman took me to lunch to get me out of the office during a company-wide meeting. At the coffee shop across the street, he talked about OpenAI with enthusiasm, sincerity, and excitement, repeatedly comparing its mission to landmark achievements in the history of science. He has an easy, likable charisma as a leader, and he shared the books that have impressed him, dwelling on one of Silicon Valley's favorite narratives, America's race to the moon. "One of my favorite stories is the one about the janitor," he said.

He went on to recount the famous, possibly apocryphal, story: "Kennedy walked up to him and asked him, 'What are you doing?' and he said, 'Oh, I'm helping put a man on the moon!'" Then there was the transcontinental railroad ("actually the last large-scale project to be done entirely by human hands... a project of enormous scale and enormous risk") and Edison's incandescent lamp ("a group of distinguished experts said it would never work, and a year later it shipped"), and so on.

Brockman is well aware of the gamble OpenAI has taken on, and of the cynicism and scrutiny it attracts. But each time he brings it up, his message is clear: people can doubt us all they want; that is the price we pay for being bold.

Those who joined OpenAI in its early days remember the energy, excitement, and sense of purpose. The team was small, everyone knew everyone, and the management style was loose and informal. The structure was flat: anyone could contribute ideas and join the debate.

Musk played no small part in building that collective mythology. "What he said to us was, 'I get it, AGI might be far away, but what if it's not? What if there's even just a 1% or 0.1% chance of it happening in the next five to ten years? Shouldn't we think about it very carefully?'" recalled UC Berkeley professor Pieter Abbeel, who worked at OpenAI with several of his students during its first two years.

But this informal way of operating also left the team's direction somewhat blurry. According to a report in The New Yorker, when Altman and Brockman met Dario Amodei, then a Google researcher, in May 2016, Amodei told them that nobody understood what OpenAI was doing, and the team itself didn't seem to know either. "Our goal right now ... is to do what's best," Brockman said at the time. "It's a little vague."

Nonetheless, Amodei joined the team a few months later. His sister, Daniela Amodei, had previously worked with Brockman, so he already knew many of OpenAI's members. Two years later, at Brockman's invitation, Daniela joined as well. "Imagine: we started with nothing," Brockman said. "We just had this ideal that we wanted AGI to go well."

By March 2017, 15 months after launching, company management realized that the team needed to be more focused. So Brockman and a few other core members began drafting an internal document that would pave the way for AGI.

But they soon ran into a problem: as the team studied trends in the field more closely, they realized that staying a non-profit was financially unsustainable. The computational resources that competitors were using to achieve breakthrough results were doubling every 3.4 months. "To keep up," Brockman said, they would need enough capital to match or exceed this exponential ramp-up, and that required a new organizational model that could amass money quickly while somehow staying true to the mission.
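To put that doubling rate in perspective, here is a small back-of-the-envelope calculation of my own (illustrative only, not from the article); a 3.4-month doubling time corresponds to roughly an order of magnitude of compute growth every year:

```python
# Illustrative only: what a 3.4-month doubling time in training compute implies.
# The 3.4-month figure is the one cited above; the rest is simple arithmetic.

doubling_months = 3.4

def growth_factor(months: float) -> float:
    """Multiplicative growth in compute over the given number of months."""
    return 2 ** (months / doubling_months)

print(f"Growth over 1 year:  ~{growth_factor(12):.1f}x")    # roughly 11.5x
print(f"Growth over 5 years: ~{growth_factor(60):,.0f}x")   # roughly 200,000x
```

Any organization trying to keep pace would therefore need its compute budget to grow by about an order of magnitude per year, which is the economic pressure described above.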

Unbeknownst to the public and most employees, it was on this premise that OpenAI published its new charter in April 2018. The document restates the lab's core values, but subtly shifts the wording to reflect the new situation.

In addition to promising to "avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power," it also stressed the need for resources: "We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."

“We spent a lot of time internally iterating on the principles with our employees,” Brockman said. “Even as we change our structure, these things have to stay the same.”

The organizational change came in March 2019, when OpenAI shed its purely non-profit status by creating a "capped-profit" arm: a for-profit entity under the supervision of the non-profit, with returns to investors capped at 100 times their investment. Shortly after, it announced the $1 billion investment from Microsoft, though it did not reveal at the time that the sum was split between cash and credits for Azure, Microsoft's cloud computing platform.

Unsurprisingly, the move sparked a wave of accusations that OpenAI had reneged on its mission. Shortly after the announcement, a user on Hacker News asked how a 100x return could be considered a limit: "Early investors in Google have received roughly a 20x return on their capital. Your bet is that you'll have a corporate structure which returns orders of magnitude more than Google... but you don't want to 'unduly concentrate power'? How will this work? What exactly is power, if not the concentration of resources?"
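To make the numbers in that comparison concrete, here is a small illustrative calculation of my own (the dollar amount is hypothetical; the multiples are the ones quoted above):

```python
# Hypothetical illustration of the 100x return cap versus a ~20x return.
investment = 10_000_000        # a hypothetical $10M stake
cap_multiple = 100             # OpenAI LP's stated cap on investor returns
google_early_multiple = 20     # rough figure cited in the Hacker News comment

print(f"Maximum return under the 100x cap: ${investment * cap_multiple:,}")
print(f"The same stake at roughly 20x:     ${investment * google_early_multiple:,}")
# Returns above the cap are meant to flow back to the non-profit.
```

In other words, the cap only binds once an investment has already multiplied a hundredfold, which is the crux of the commenter's objection.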

The move also rattled many employees, who voiced similar concerns. To soothe internal unease, management drafted an FAQ as part of a series of highly confidential transition documents. One question read: "Can I trust OpenAI?" The answer began with "Yes," followed by a paragraph of explanation.

The charter is the backbone of OpenAI and the basis for all of the lab's strategies and actions. During our lunch, Brockman recited it like scripture, offering it as the explanation for every aspect of the company's existence (he clarified midway through one recitation: "I know every line because I spent a lot of time poring over the charter to get it exactly right, not because I was reading it before this meeting."). He offered answers to questions like these:

- How will you ensure that people continue to live meaningful lives as you develop ever higher levels of technological capability? "As we wrote, we think its impact should be to give everyone economic freedom, to let them find new opportunities that aren't imaginable today."

- How will you structure yourselves to distribute AGI equitably to humanity? "I think a public utility is the best analogy for the vision we have. But again, it's all governed by the charter."

- How will you compete to reach AGI first without compromising safety? "I think there is absolutely this important balance, and our best shot at that is what's in the charter."

For Brockman, strict adherence to the document is what makes OpenAI's organizational structure work. Internal alignment is treated as critical: with rare exceptions, all full-time employees are required to work out of the same office. For the policy team, and especially its director Jack Clark, this means a life split between San Francisco and Washington, D.C., shuttling frequently between the two. But Clark doesn't mind; in fact, he agrees with the approach. It's the idle moments spent together, like lunch with colleagues, that help keep everyone on the same page, he says.

In many respects, this approach has clearly paid off: the company has a remarkably uniform culture. Employees work long hours and talk about their work constantly over meals and at social events; many go to the same parties and subscribe to the rational philosophy of "effective altruism"; they joke about their lives in machine-learning terms: "What is your life a function of?" "What are you optimizing for?" "Everything is basically a minmax function."

To be fair, other AI researchers like to do this too, but those familiar with OpenAI agree that, more than others in the field, its employees treat AI research not as a job but as an identity. (In November 2019, Brockman married his girlfriend of a year, Anna, in the office, against a backdrop of flowers arranged in the shape of OpenAI's logo; Sutskever officiated, and a robotic arm served as ring bearer.)

But by mid-2019, the charter was no longer just lunchtime conversation. Shortly after the switch to the capped-profit structure, management introduced a new pay structure based in part on how well each employee understood the mission. In a spreadsheet called the "Unified Technical Ladder", alongside columns such as engineering expertise and research direction, the last column outlines the culture-related expectations for every level:

- Level 3: "You understand and internalize OpenAI's charter."

- Level 5: "You ensure that all projects you and your team members work on are consistent with the charter."

- Level 7: "You are responsible for maintaining and improving the charter, and for holding others in the organization accountable for doing the same."

For most people, the first time they heard of OpenAI was February 14, 2019. That day, the lab announced impressive new research: a model that could effortlessly generate convincing essays and articles. Feed it a sentence from "The Lord of the Rings" or the opening of a (fake) news story about Miley Cyrus shoplifting, and it would churn out paragraph after paragraph in the same style.

But there was a catch: The researchers said the model, called GPT-2, was too dangerous to be released. If such a powerful technology falls into the wrong hands, it can easily be weaponized and used to mass-produce disinformation.

Scientists pushed back immediately. Some said OpenAI was pulling a publicity stunt: GPT-2 was nowhere near advanced enough to be a threat, and if it were, why announce its existence and then withhold it from public scrutiny? "It seemed like OpenAI was marketing people's fear of AI," said Britt Paris, an assistant professor at Rutgers University who studies AI-generated disinformation.

By May, OpenAI had changed its mind, announcing plans for a "phased release." Over the next few months, it gradually rolled out increasingly powerful versions of GPT-2. During that time, it also worked with several research institutions to examine the potential for misuse of the algorithm and develop countermeasures. Ultimately, OpenAI released the full code in November, saying "so far there is no strong evidence of abuse."

As public accusations of publicity-seeking continued, OpenAI insisted that GPT-2 was not a gimmick but a carefully considered experiment, agreed on after a series of internal discussions and debates. The consensus was that even if the caution was slightly overdone this time, the action would set a precedent for handling more dangerous research. Besides, the charter had predicted that safety and security concerns would gradually force the lab to "reduce our traditional publishing in the future."

This was the argument the policy team spelled out in its six-month follow-up blog post on GPT-2; I sat in on one of the discussions about it. Policy research scientist Miles Brundage highlighted a point in the shared Google doc: "I think this must be part of the success-story framing. The theme of this section should be: we did a great thing, now some people are replicating it, and that is evidence of its value."

But the media hype around OpenAI and GPT-2 also followed a pattern that has alarmed the wider AI community. Over the years, the lab's big research announcements have repeatedly been accused of feeding the AI hype cycle, and critics have more than once accused it of exaggerating its results to the point of mischaracterization. For these reasons, many in the field have tended to keep OpenAI at arm's length.

That hasn't stopped the lab from continuing to devote resources to boosting its public image. In addition to research papers, OpenAI publishes its results in carefully written and maintained company blog posts, where all work is done in-house, including text writing, production of multimedia materials, and design of each cover image.

For a while, it was even producing a documentary about one of its projects, intended to rival the 90-minute film about DeepMind's AlphaGo. The effort was eventually spun out into an independent production, partly financed by Brockman and his wife Anna (I also agreed to appear in the documentary, without compensation, to provide technical explanation and context for OpenAI's achievements).

Opposition has kept growing, and so have internal discussions about how to address it. Employees were frustrated by the constant outside criticism, and management worried it would erode the lab's influence and its ability to hire the best people. An internal document acknowledged the problem and sketched an outreach strategy:



- The "Policy" section states : "In order to be influential at the policy level, we need to be seen as the most trusted source of information in the field of machine learning research and AGI." Not only is endorsement necessary to earn such a reputation, but it amplifies our messaging.” “Explicitly treating the machine learning community as a communications stakeholder, changing our tone and external messaging, is only possible if we choose to do so intentionally.” Fight them."

There was another reason GPT-2 sparked such a backlash: people felt OpenAI was walking back its earlier promises of openness and transparency. When news of the for-profit transition came a month later, the withheld research fueled even more suspicion. Had the technology been kept secret in preparation for licensing it later?

What people did not know, however, was that this was not the only time OpenAI had chosen to withhold research. It had kept another effort entirely secret.

There are two prevailing technical theories about what it will take to reach AGI. In one, all the necessary techniques already exist; it is just a matter of figuring out how to scale and combine them. In the other, a completely new paradigm is needed, and deep learning, the dominant technique in AI today, will not be enough.

Most researchers fall somewhere between these two extremes, but OpenAI has almost always sat at the scale-and-combine end. Most of its breakthroughs have come from pouring far more computational resources into technical innovations developed in other labs.

Brockman and Sutskever deny that this is their only strategy, but the lab's closely guarded research suggests otherwise. A team called "Foresight" runs experiments to test how far AI capabilities can be pushed by training existing algorithms with ever larger amounts of data and computing power. For leadership, the results of these experiments confirmed their instinct that the lab's all-in, compute-driven strategy is the right approach.
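As a rough sketch of what such a scaling experiment can look like in practice (my own illustration with made-up numbers, not OpenAI's actual data or method), one common analysis is to train the same model family at increasing scale and fit a power law to the resulting losses:

```python
# Hypothetical scaling analysis: fit a power law, loss ≈ a * compute^b (b < 0),
# to made-up results from training the same algorithm at increasing compute budgets.
import numpy as np

compute = np.array([1.0, 4.0, 16.0, 64.0, 256.0])   # relative compute budgets (fictional)
loss = np.array([4.1, 3.4, 2.9, 2.5, 2.2])          # fictional validation losses

# Linear regression in log-log space gives the power-law exponent and prefactor.
b, log_a = np.polyfit(np.log(compute), np.log(loss), 1)
a = np.exp(log_a)

print(f"Fitted power law: loss ≈ {a:.2f} * compute^{b:.3f}")
# Extrapolate to a run 10x larger than the biggest one so far.
print(f"Predicted loss at {compute[-1] * 10:.0f}x compute: {a * (compute[-1] * 10) ** b:.2f}")
```

A smooth fit of this kind is the sort of evidence that would make a compute-heavy strategy look attractive.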

These results were kept from the public for roughly six months, because OpenAI regards this knowledge as its main competitive advantage. Employees and interns were explicitly instructed not to reveal them, and those who left signed non-disclosure agreements.

It was not until January 2020 that the team, without the usual fanfare, quietly posted a paper on one of the main open-source repositories for AI research. It did not attract much attention, and those who had experienced the intense secrecy around the experiments did not know what to make of the change. Notably, a paper with similar results from different researchers had been posted a few months earlier.

This degree of secrecy was never OpenAI's original intent, but it has since become habitual. Over time, the leadership has moved away from its founding belief that openness is the best way to build beneficial AGI.

Now, the importance of keeping quiet is impressed on everyone who works with or for the lab, including never speaking to reporters without explicit permission from the communications team. After my initial visits, as I began reaching out to employees, I received an email from the head of communications reminding me that all interview requests had to go through her.

When I declined, saying this would undermine the validity of what people told me, she instructed employees to keep her informed of my outreach. Afterward, Clark, a former journalist, sent a Slack message praising everyone for keeping a tight lid on things while a reporter was "sniffing around."

In a statement responding to this heightened secrecy, an OpenAI spokesperson pointed to a section of the charter: "We anticipate that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research." The spokesperson added that each release goes through an information-risk review, and that the lab wants to release its results slowly in order to understand the potential risks and impacts first.

One of OpenAI's biggest secrets is the project it plans to work on next. Sources described it to me as an AI system trained on images, text, and other types of data (a multimodal model), using massive computational resources.

A small team has been assigned to the initial attempt, with the expectation that other teams, and their work, will eventually fold in. On the day the project was announced at an all-company meeting, interns were not allowed to attend. People familiar with the plan offer an explanation: leadership believes this is the most promising route to AGI.

The person in charge of OpenAI's strategy is Dario Amodei, a former Google employee and current research director. He struck me as a more anxious version of Brockman, equally genuine and sensitive, but with an erratic nervous energy. He looked a little distant as he spoke, his brow furrowed, a hand tugging at his curly hair involuntarily.

Amodei divides the lab's strategy into two parts. The first part determines how OpenAI plans to reach advanced AI capabilities, which he likens to an investor's "portfolio": different OpenAI teams are placing different bets. The language team, for example, is betting on the theory that AI can develop a substantial understanding of the world through pure language learning, while the robotics team is advancing the opposing theory that intelligence needs a body, a physical embodiment, in order to develop.

As in a portfolio, not every bet is weighted equally, but for the sake of scientific rigor, all of them should be tested before being discarded. Amodei points to GPT-2 to explain why it is important to keep an open mind: "Pure language is a direction that the field, and even some of us, were somewhat skeptical of. But now we see that this is really promising."

Over time, as some bets rise above others, they will attract more intense effort. They will then cross-pollinate and combine, the goal being fewer and fewer teams that eventually collapse into a single technical direction toward AGI. This is exactly the process that OpenAI's latest top-secret project has reportedly begun.

Amodei also explained that the second part of the strategy focuses on how to make ever more capable AI systems safe. This includes making sure they reflect human values, can explain the logic behind their decisions, and do not harm humans in the process of learning. Teams dedicated to these safety goals try to develop methods that can be applied across projects. Techniques developed by the interpretability team, for example, could be used to expose the logic behind GPT-2's sentence constructions or a robot's movements.

Amodei admits that this part of the strategy is somewhat ad hoc, built less on established theory than on gut feeling. "At some point we're going to reach AGI, and until then I want to feel good about these systems operating in the world," he said. "Anywhere I currently don't feel good, I build and recruit a team to focus on that problem."

However conflicted Amodei may be between openness and secrecy, he seemed sincere when he said it; the possibility of failure appears to genuinely disturb him.

"We're in an awkward position: we don't know what AGI will look like, and we don't know when it's going to happen," he said. Security researchers, they often have a different perspective than I do, and I want that difference and diversity because that's the only way to find and fix all the problems."

The problem is that OpenAI, in reality, has little of that "variation and diversity", a fact driven home on the third day of my visit. During the one lunch I was allowed to have with employees, I sat down at the table that looked the most diverse. Less than a minute later, I realized that the people eating there did not actually work for OpenAI at all, but for Neuralink, the brain-computer interface company founded by Musk, which shares the building and cafeteria with OpenAI.

According to a lab spokesperson, of the more than 120 employees OpenAI reported, 25 percent were women or non-binary. There were also two women on the executive team, she said, and the leadership team was 30 percent women, though she did not specify who counted as part of those teams. (All four C-suite executives, including Brockman and Altman, were white men. Of the more than 112 employees I identified on LinkedIn and other sources, the overwhelming majority were white or Asian. Note: these figures may have changed since publication.)

To be fair, this lack of diversity is typical of AI as a whole. Last year, a report from AI Now, a New York-based research institute, found that women made up just 18 percent of authors at leading AI conferences, 20 percent of AI faculty positions, and 15 percent and 10 percent of research staff at Facebook and Google, respectively. "There is still a lot of work to be done across academia and industry," OpenAI's spokesperson said. "Diversity and inclusion is something we take seriously and are continually working to improve, through initiatives such as our partnerships with WiML and Girl Geek and our Scholars program."

OpenAI has indeed been trying to broaden its talent pool, launching a remote Scholars program for underrepresented minorities in 2018. But although the scholars reported a positive experience, only two of the original eight became full-time employees, and the most common reason for declining to stay was the requirement to live in San Francisco. For Nadja Rhodes, a former scholar who is now a machine-learning engineer at a New York-based company, the city simply had too little diversity.

But if diversity is a problem for the AI industry as a whole, it is a far more fundamental one for a company whose mission is to spread the technology evenly to everyone. The fact is that OpenAI lacks representation from precisely the groups most at risk of being left out.

How, exactly, does OpenAI plan to "distribute the benefits of AGI to everyone"? This is the least clearly articulated part of the mission Brockman so often cites, and leadership tends to speak about it vaguely. (In January 2020, Oxford University's Future of Humanity Institute, in collaboration with OpenAI, released a report proposing to distribute benefits by sharing a percentage of profits, but the authors also noted "significant unresolved questions" about how this would be implemented.) "This is the biggest problem with OpenAI, in my opinion," said a former employee, who spoke on condition of anonymity.

"They're using complex technical practices to try to use AI to solve social problems," said Britt Paris of Rhodes University. "It seems that they don't actually have the ability to really understand social problems. They just know that this is a good starting point to make money."

Brockman agrees that OpenAI will ultimately need both technical and social expertise to achieve its mission. But he disagrees that the social problems need to be solved from the very beginning. "How exactly do you bake in ethics, or these other perspectives? And when do you bring them in, and how? One strategy you could pursue is to try to bake in everything you might possibly need from the beginning," he said. "I don't think that strategy is likely to succeed."

First, he said, there needs to be a clear understanding of what AGI will look like, and that's when "it's time to make sure we understand the consequences."

In the summer of 2019, a few weeks after the switch to the capped-profit model and the $1 billion Microsoft investment, OpenAI's leadership assured employees that these changes would not functionally alter the lab's approach to research. Microsoft's values, they said, were well aligned with the lab's own; any commercialization efforts were far off, and exploring fundamental questions would remain at the core of the work.

For a while, those assurances seemed to hold, and projects continued as before. Many employees didn't even know what promises, if any, had been made to Microsoft.

But in 2020, the pressure of commercialization intensified, and the need for revenue-generating research no longer felt like something in the distant future. Sharing his 2020 vision for the lab privately with employees, Altman made the message clear: OpenAI needs to make money in order to do research, not the other way around.

This is a difficult but necessary trade-off, leadership has said, one the lab had to make for lack of wealthy philanthropic donors. By contrast, Seattle-based AI2, a non-profit ambitiously advancing fundamental AI research, is funded by a self-sustaining (at least for the foreseeable future) pool of money left by the late billionaire Paul Allen, best known for co-founding Microsoft with Bill Gates.

But the truth is that OpenAI faces this trade-off not only because it lacks money, but because of its strategic choice to try to reach AGI first. That pressure has pushed it into decisions that look increasingly far removed from its original intentions. In its eagerness to attract money and talent it leans into hype, it keeps its research under wraps in hopes of holding the upper hand, and it chases a computationally heavy strategy, not because that is the only way to AGI, but because it looks like the fastest.

However, OpenAI remains a bastion of AI talent and cutting-edge research, gathering people who genuinely strive to work for the benefit of humanity. In other words, it still has what matters most, and there is still time for it to change.

When I interviewed Rhodes, the former remote scholar, I asked her what was the one thing about the lab I shouldn't leave out of this article. She hesitated: "I think, in my opinion, some of the problems come from the environment it faces, and some come from the kind of people it tends to attract and the people it leaves out."

"But in my opinion, it's doing some right things, and I feel like the people there are genuinely trying to do it."


Source: blog.csdn.net/lqfarmer/article/details/130295152