Hugging Face partners with Wiz Research to improve AI security

We are excited to announce that we are partnering with Wiz with the goal of improving the security of our platform and the entire AI/ML ecosystem.

Wiz is a cloud security company that helps its customers build and maintain software securely. Wiz researchers have worked with Hugging Face on the security of our platform and shared their findings. With the release of this research, we are taking the opportunity to highlight some related Hugging Face security improvements.

  • More information on Wiz’s partnership with Hugging Face https://www.wiz.io/blog/wiz-and-hugging-face-address-risks-to-ai-infrastruct

Hugging Face recently integrated Wiz for vulnerability management, an ongoing proactive process that ensures our platform is protected from security vulnerabilities. Additionally, we use Wiz for Cloud Security Posture Management (CSPM), which allows us to securely configure our cloud environment and monitor it to ensure its security.

One of our favorite Wiz features is its holistic view of vulnerabilities, from storage to compute to network. We run multiple Kubernetes (k8s) clusters with resources across multiple regions and cloud providers, so it is very helpful to have a central report with the complete context of each vulnerability in a single place. We have also built on their tooling to automatically remediate issues detected in our products, in particular in Spaces.

During the course of a joint effort, Wiz's security research team identified weaknesses in our sandboxed compute environment by using pickle files to run arbitrary code within the system. As you read this blog and Wiz's security research, keep in mind that we have resolved all issues related to this exploit and continue to remain vigilant in our threat detection and incident response processes.

Hugging Face Security

At Hugging Face, we take security seriously. As artificial intelligence advances rapidly, new threat vectors seem to emerge every day. Even as Hugging Face announces multiple partnerships and business relationships with some of the biggest names in technology, we remain committed to empowering our users and the AI community to responsibly experiment with and operate AI/ML systems and technologies. We are dedicated to securing our platform and to driving the democratization of AI/ML, so that the community can contribute to and be part of this paradigm shift that will impact us all. We are writing this blog to reaffirm our commitment to protecting our users and customers from security threats. Below, we also discuss Hugging Face's philosophy on supporting the controversial pickle format, and the shared responsibility of moving away from it.

There are also many exciting security improvements and announcements coming in the near future. These publications will not only discuss the security risks faced by the Hugging Face platform community, but also cover systemic security risks of AI and best practices for mitigating them. We remain committed to the security of our products, our infrastructure, and the AI community. Stay tuned for follow-up security blog posts and white papers.

Open source security collaboration and tools for the community

We place a high value on transparency and collaboration with the community, which includes participating in the identification and disclosure of vulnerabilities, jointly solving security issues, and developing security tools. The following are examples of security outcomes achieved through collaboration that help the entire AI community reduce security risks:

  • Picklescan was developed in collaboration with Microsoft. The project was started by Matthieu Maitre, and since we had a version of the same tool internally, we joined forces and contributed to Picklescan. To learn more about how it works, see the following documentation page: https://hf.co/docs/hub/en/security-pickle

  • Safetensors, developed by Nicolas Patry, is a safer alternative to pickle files. Safetensors was audited by Trail of Bits in a collaborative project with EleutherAI and Stability AI.

    https://hf.co/docs/safetensors/en/index

  • We have a robust bug bounty program that attracts many great researchers from around the world. Researchers who identify security vulnerabilities can join our program by contacting [email protected].

  • Malware scanning: https://hf.co/docs/hub/en/security-malware

  • Secrets scanning: for more information, see https://hf.co/docs/hub/security-secrets

  • As mentioned earlier, we also work with Wiz to reduce platform security risks.

  • We are launching a series of security publications to address security issues facing the AI/ML community.
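To illustrate the idea behind pickle scanners such as Picklescan, here is a minimal stdlib-only sketch (not Picklescan's actual implementation). It walks a pickle's opcode stream with `pickletools.genops`, which never executes the payload, and flags globals imported from suspicious modules:

```python
import pickletools

# Modules whose appearance as a pickle global is an immediate red flag.
# (Illustrative list, not Picklescan's real rule set.)
SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "builtins"}

def scan_pickle(data: bytes) -> list:
    """Statically scan a pickle byte stream and return suspicious globals."""
    findings = []
    strings = []  # recent string opcodes, consumed by STACK_GLOBAL
    for opcode, arg, _pos in pickletools.genops(data):
        if isinstance(arg, str):
            strings.append(arg)
        if opcode.name == "GLOBAL":
            # Protocol <= 3: arg is "module name" as one string
            module = arg.split(" ")[0]
            if module in SUSPICIOUS_MODULES:
                findings.append(arg)
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            # Protocol >= 4: module and name were pushed as strings
            module, name = strings[-2], strings[-1]
            if module in SUSPICIOUS_MODULES:
                findings.append(f"{module} {name}")
    return findings
```

The key point is that the scan inspects opcodes only; unlike `pickle.loads`, it cannot trigger the embedded code.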

Security best practices for open source AI/ML users

AI/ML introduces new attack vectors, but for many of these attacks, mitigations have long existed and are well known. Security professionals should ensure that relevant security controls are applied to AI resources and models. Additionally, here are some resources and best practices for working with open source software and models:
  • Know your contributors: only use models from trusted sources, and pay attention to commit signatures. https://hf.co/docs/hub/en/security-gpg
  • Don't use pickle files in production
  • Use Safetensors: https://hf.co/docs/safetensors/en/index
  • Review the OWASP Top 10: https://owasp.org/www-project-top-ten/
  • Enable MFA on your Hugging Face account
  • Establish a secure development lifecycle that includes code reviews by security professionals or engineers with appropriate security training.
  • Test models in non-production and virtualized test/development environments.
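To make the "Use Safetensors" recommendation concrete, here is a pure-stdlib sketch of the safetensors file layout: an 8-byte little-endian header length, a JSON header mapping tensor names to dtype/shape/offsets, then raw tensor bytes. This is for illustration only; in practice, use the `safetensors` library itself:

```python
import json
import struct

def save_safetensors(path, tensors):
    """tensors: dict of name -> (dtype_str, shape, raw_bytes)."""
    header, blobs, offset = {}, [], 0
    for name, (dtype, shape, data) in tensors.items():
        header[name] = {"dtype": dtype, "shape": shape,
                        "data_offsets": [offset, offset + len(data)]}
        blobs.append(data)
        offset += len(data)
    hjson = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(hjson)))  # 8-byte LE header size
        f.write(hjson)
        for blob in blobs:
            f.write(blob)

def load_safetensors(path):
    """Returns dict of name -> (dtype_str, shape, raw_bytes)."""
    with open(path, "rb") as f:
        (n,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(n))
        buf = f.read()
    return {name: (meta["dtype"], meta["shape"],
                   buf[meta["data_offsets"][0]:meta["data_offsets"][1]])
            for name, meta in header.items()}
```

Because loading is just JSON parsing plus byte slicing, there is no mechanism for a model file to execute code, which is what makes the format safe by construction.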

Pickle files – a security risk that cannot be ignored

Pickle files have been the focus of Wiz's research, as well as other recent publications by security researchers about Hugging Face. Pickle files have long been considered a security risk; for more information, see our documentation: https://hf.co/docs/hub/en/security-pickle

Despite these known security flaws, the AI/ML community still frequently uses pickle files (or similar easily exploitable formats). Many of these use cases are low risk or are for testing purposes only, making the familiarity and ease of use of pickle files more attractive than safer alternatives.
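To see why pickle is risky, consider a minimal demonstration. Pickle serializes objects by recording how to reconstruct them, and `__reduce__` lets an object name an arbitrary callable to invoke at load time. The harmless `eval` below stands in for what an attacker would replace with a shell command or network call:

```python
import pickle

class Malicious:
    # pickle calls __reduce__ to decide how to serialize this object;
    # on load, the returned callable is invoked with the given args.
    def __reduce__(self):
        # eval stands in for os.system("..."), exfiltration, etc.
        return (eval, ("6 * 7",))

blob = pickle.dumps(Malicious())   # serializing is harmless
result = pickle.loads(blob)        # loading EXECUTES eval("6 * 7")
# result == 42; the original object is gone entirely
```

This is why "scan before load" is not a theoretical nicety: by the time `pickle.loads` returns, any embedded code has already run.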

As an open source artificial intelligence platform, we have the following options:

  • Completely disable pickled files
  • Do nothing with pickled files
  • Find a middle ground that allows for pickling while reasonably and realistically mitigating the risks associated with pickled files

We currently choose the third option, a compromise. This choice is a burden on our engineering and security teams, but we have put significant effort into mitigating the risks while allowing the AI community to use the tools of its choice. Some of the key mitigations we have implemented against pickle-related risks include:

  • Create clear documentation outlining risks
  • Develop automated scanning tools
  • Using scanning tools to identify and flag models with security vulnerabilities, with clear warnings
  • We even provide a secure alternative to pickle (Safetensors)
  • We have also made Safetensors a first-class citizen on our platform, to protect community members who may not be aware of the risks
  • In addition to the above, we have had to significantly segment and harden the areas where models are executed, to address potential vulnerabilities within them
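One common mitigation pattern for code that must still load pickles is a restricted unpickler that only resolves an explicit allow-list of globals. The sketch below follows the pattern described in the Python `pickle` documentation; it is an illustration, not a description of Hugging Face's internal tooling, and even allow-listing is not bulletproof, which is why moving to Safetensors remains preferable:

```python
import io
import pickle

# Explicit allow-list of (module, name) globals the unpickler may resolve.
# Illustrative choice; a real deployment would curate this carefully.
SAFE_GLOBALS = {("collections", "OrderedDict")}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        # Refuse anything not explicitly allow-listed
        raise pickle.UnpicklingError(
            f"global '{module}.{name}' is forbidden")

def restricted_loads(data: bytes):
    """Like pickle.loads, but refuses non-allow-listed globals."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Plain containers (lists, dicts, strings, numbers) load normally, while any pickle that tries to import a global outside the allow-list is rejected before the callable can run.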

We intend to remain at the forefront of protecting and securing the AI community. Part of this work will be monitoring and responding to risks related to pickle files. While gradually sunsetting support for pickle is not out of the question, we will do our best to balance the impact of any such decision on the community.

It is worth noting that the upstream open source community, as well as large tech and security firms, have been largely silent when it comes to contributing solutions, leaving Hugging Face to define its philosophy alone and to invest heavily in developing and implementing mitigations that are both acceptable and viable.

Closing remarks

While writing this blog post, I spoke extensively with Nicolas Patry, the creator of Safetensors, and he asked me to issue a call to action to the open source AI community and AI enthusiasts:

  • Proactively start replacing your pickle files with Safetensors. As mentioned earlier, pickle contains inherent security flaws and may become unsupported in the near future.
  • Keep opening issues/PRs about security upstream in your favorite libraries, to push for secure defaults upstream wherever possible.

The AI industry is changing rapidly, and new attack vectors and vulnerabilities are discovered all the time. Hugging Face has a one-of-a-kind community, and we work closely with all of you to help us maintain a secure platform.

Please remember to disclose security vulnerabilities/bugs responsibly through the appropriate channels, to avoid potential legal liability and breaking the law.

Want to join the discussion? Reach out to us at [email protected], or follow us on LinkedIn/Twitter.


Original English article: https://hf.co/blog/hugging-face-wiz-security-blog

Original authors: Josef Fukano, Guillaume Salou, Michelle Habonneau, Adrien, Luc Georges, Nicolas Patry, Julien Chaumond

Translator: xiaodouzi

