The 23 Asilomar AI Principles: The Philosophy of AI Ethics




Research Issues

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.
2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:
- How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
- How can we grow our prosperity through automation while maintaining people's resources and purpose?
- How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
- What set of values should AI be aligned with, and what legal and ethical status should it have?
3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.
4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.
5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and Values
6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
14) Shared Benefit: AI technologies should benefit and empower as many people as possible.
15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.


Longer Term Issues
19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.


Reposted from blog.csdn.net/qq_34107425/article/details/103775430