OpenAI, Google, Microsoft and others set up US$10 million AI safety fund

Google, Microsoft, OpenAI and Anthropic issued a joint statement appointing Chris Meserole, formerly of the Brookings Institution, as the first executive director of the Frontier Model Forum, and announced the establishment of a $10 million AI Safety Fund "to promote ongoing research on tool development to help society effectively test and evaluate the most capable AI models."

The Frontier Model Forum was jointly founded by Microsoft, OpenAI, Google and Anthropic in July this year. It is an industry organization focused on ensuring the safe and responsible development of cutting-edge artificial intelligence models. This forum is designed to help:

  • Advance AI safety research to promote the responsible development of frontier models and minimize potential risks;
  • Identify safety best practices for frontier models;
  • Share knowledge with policymakers, academics, civil society and others to advance responsible AI development;
  • Support efforts to develop AI applications that help address society's greatest challenges.

The AI Safety Fund will support independent researchers from around the world affiliated with academic institutions, research institutions and startups, the announcement states. Initial funding comes from Anthropic, Google, Microsoft and OpenAI, along with other philanthropic partners.

The fund's primary focus will be to support the development of new model evaluation techniques for testing the potentially hazardous capabilities of frontier systems. "We believe that increased funding in this area will help raise safety standards and provide insights into the mitigations and controls that industry, government and civil society need to address the challenges posed by AI systems."

The fund will solicit proposals in the coming months. The Frontier Model Forum also plans to form an advisory board to help guide its strategy and priorities.


$10 million is not a small amount, but in the context of AI safety research, and compared with what members of the Frontier Model Forum spend on commercial activities, this amount "appears quite conservative." Technology media outlet TechCrunch pointed out that this year alone, Anthropic has raised billions of dollars from Amazon to develop the next generation of its AI assistant, after Google had also invested $300 million in the company.

The fund is also small compared to other AI safety funds. Open Philanthropy, a funding and research foundation co-founded by Facebook co-founder Dustin Moskovitz, has donated about $307 million to AI safety, according to an analysis by the blog LessWrong.

The Survival and Flourishing Fund, a philanthropic fund, has also donated approximately US$30 million to AI safety projects. The National Science Foundation says it will spend $20 million on AI safety research over the next two years, with part of that funding coming from Open Philanthropy.

The Frontier Model Forum hinted at a larger fund as the next step. If that comes to fruition, it could have a real opportunity to advance AI safety research—provided we trust that the fund's explicitly for-profit backers won't unduly restrict the research. But no matter how you slice it, the initial funding appears too limited to accomplish much.

Source: www.oschina.net/news/263552/frontier-model-forum-ai-safety