
Should We Develop Nonhuman Intelligence?

互莲健谈 2024-03-19



Thousands of experts from the technology community have signed an open letter calling for a pause on giant AI experiments.
Signatories include industry leaders such as Elon Musk, CEO of Tesla and Twitter; Apple co-founder Steve Wozniak; and Yoshua Bengio, head of the Montreal Institute for Learning Algorithms (Mila).
The letter calls on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months, and urges that if such a pause cannot be enacted, governments should step in and impose one.


Original letter:

https://futureoflife.org/open-letter/pause-giant-ai-experiments/

Full text of the letter:


AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.

互联杂谈 editor's postscript:

Humanity may at last be about to create something that surpasses itself.
Is that a blessing or a curse?
Blessing or curse, it is already very hard for humanity to stop...
Because the world is not unified, it is hard for humanity to reach consensus (the Russia-Ukraine conflict is proof enough), so AI research will not stop. And once AI can evolve on its own...

