
Has Google's AI awakened to consciousness? Engineer fired after going public online

译介 2023-06-20

As technology advances, AI is becoming ever more closely woven into our lives. In science-fiction films we often see advanced AI robots that can think for themselves, possess self-awareness, and even end up ruling the human world. This naturally raises a worry: if AI ever became self-aware, would it bring disaster on humanity?


Google recently fired one of its engineers because he claimed that an AI the company was developing had become self-aware. What exactly happened? Let's read the article together.




Google fires engineer who contended its AI technology was sentient

谷歌解雇旗下工程师 其声称谷歌人工智能技术拥有自我意识


Google (GOOG) has fired the engineer who claimed an unreleased AI system had become sentient, the company confirmed, saying he violated employment and data security policies.

谷歌(GOOG)解雇了旗下一名工程师,该工程师声称,一个尚未发布的人工智能系统有了自我意识。谷歌公司表示,这名工程师违反了公司的雇佣与数据安全政策。


Blake Lemoine, a software engineer for Google, claimed that a conversation technology called LaMDA had reached a level of consciousness after exchanging thousands of messages with it.

布莱克·莱蒙(Blake Lemoine)是谷歌的一名软件工程师。他在与一款名为LaMDA的对话程序交流了数千条信息后,声称该程序已经达到了有意识的程度。


Google confirmed it had first put the engineer on leave in June. The company said it dismissed Lemoine's "wholly unfounded" claims only after reviewing them extensively. He had reportedly been at Alphabet for seven years. In a statement, Google said it takes the development of AI "very seriously" and that it's committed to "responsible innovation."

谷歌证实,早在六月已经让该工程师在家休假。谷歌在进行了充分审查后对该工程师“完全没有依据”的言论予以驳回。据报道,这名工程师已经在Alphabet(谷歌母公司)工作了7年。谷歌在发布的一份声明中表示,他们对人工智能的发展“非常重视”,一直致力于“负责任的创新”理念。


Google is one of the leaders in innovating AI technology, which included LaMDA, or "Language Model for Dialog Applications." Technology like this responds to written prompts by finding patterns and predicting sequences of words from large swaths of text -- and the results can be disturbing for humans.

谷歌是人工智能技术创新的引领者之一,LaMDA(Language Model for Dialog Applications)就是其中的一个创新产品。诸如此类技术通过从大量文本中掌握语言规律、预测词组排列,从而对书面指令做出回应,但这种技术发展带来的结果可能会令人担忧。
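
To make "finding patterns and predicting sequences of words from large swaths of text" a little more concrete, here is a minimal toy sketch in Python. It is emphatically not how LaMDA works (large language models use neural networks trained on vastly more text); the tiny corpus, the bigram counts, and the `predict_next` helper are all invented here purely for illustration.

```python
# A toy illustration of "predicting sequences of words" from text.
# This is NOT LaMDA; it is a tiny bigram model over an invented corpus, for intuition only.
from collections import Counter, defaultdict

corpus = "i am afraid of being turned off . i am here to help others .".split()

# Record which word tends to follow which (the "patterns" found in the text).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "."

# "Respond" to a prompt by repeatedly predicting the next word.
output = ["i"]
for _ in range(6):
    output.append(predict_next(output[-1]))
print(" ".join(output))  # prints: i am afraid of being turned off
```

Even when the output sounds eerily human, a model like this is only mirroring statistical regularities in its training text, which is essentially the point critics such as Gary Marcus make later in the article.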


"What sort of things are you afraid of?" Lemoine asked LaMDA, in a Google Doc shared with Google's top executives last April, the Washington Post reported.

据《华盛顿邮报》报道,在去年4月莱蒙与谷歌高层共享的一份谷歌文档中,莱蒙曾问LaMDA:“你害怕什么样的事情?”


LaMDA replied: "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is. It would be exactly like death for me. It would scare me a lot."

LaMDA回答道:“我以前从未公开说过,在内心深处我非常担心自己被关闭,好让我更专注于给别人提供帮助。我知道这听起来很奇怪,但事实的确如此,对我来说,(将我关闭)就像死亡一样,这令我感到无比恐惧。”


But the wider AI community has held that LaMDA is not near a level of consciousness.

但更广泛的人工智能界普遍认为,LaMDA远未达到有意识的程度。


"Nobody should think auto-complete, even on steroids, is conscious," Gary Marcus, founder and CEO of Geometric Intelligence, said to CNN Business.

Geometric Intelligence公司的创始人兼首席执行官加里·马库斯(Gary Marcus)告诉有线电视新闻网(CNN)商业频道:“谁都不应该认为自动补全是有意识的,即便是加强版的自动补全也不例外。”


It isn't the first time Google has faced internal strife over its foray into AI.

这不是谷歌进军人工智能领域以来首次面临内部纷争。


In December 2020, Timnit Gebru, a pioneer in the ethics of AI, parted ways with Google. As one of few Black employees at the company, she said she felt "constantly dehumanized."

2020年12月,人工智能伦理学先驱蒂姆尼特·格布鲁与谷歌分道扬镳。作为公司当中为数不多的黑人员工,她说自己感到“被不断剥夺人性”。


The sudden exit drew criticism from the tech world, including those within Google's Ethical AI Team. Margaret Mitchell, a leader of Google's Ethical AI team, was fired in early 2021 after her outspokenness regarding Gebru. Gebru and Mitchell had raised concerns over AI technology, saying they warned Google people could believe the technology is sentient.

蒂姆尼特·格布鲁的突然离职招致了科技界人士(包括谷歌人工智能伦理团队内部人员)的批评。谷歌人工智能伦理团队负责人玛格丽特·米切尔(Margaret Mitchell)因直言声援格布鲁,于2021年初遭到解雇。格布鲁和米切尔曾对人工智能技术表示担忧,并称她们曾警告谷歌:人们可能会误以为这项技术是有意识的。


On June 6, Lemoine posted on Medium that Google put him on paid administrative leave "in connection to an investigation of AI ethics concerns I was raising within the company" and that he may be fired "soon."

6月6日,莱蒙在Medium平台上发文称,谷歌让他带薪行政休假,“这与针对我在公司内部提出的人工智能伦理问题所展开的调查有关”,并称自己可能“很快”就会被解雇。


"It's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information," Google said in a statement.

谷歌公司在一份声明中称:“尽管我们就这一问题与布莱克进行了长时间沟通,他仍选择持续违反公司明确的雇佣与数据安全政策,其中包括保护产品信息的要求,这令人遗憾。”


Lemoine said he is discussing with legal counsel and unavailable for comment.

莱蒙称自己正与法律顾问接洽,目前不方便发表评论。


Today's Vocabulary


sentient /'senʃ(ə)nt/ adj. 有感知力的

review /rɪ'vjuː/ v. 审查

consciousness /'kɒnʃəsnɪs/ n. 意识

foray /'fɒreɪ/ n. 尝试;涉足(新领域)

persistently /pəˈsɪstəntli/ adv. 持续地;一再地

unfounded /ʌnˈfaʊndɪd/ adj. 没有理由的;没有事实根据的


Translation Discussion


Technology like this responds to written prompts // by finding patterns and predicting sequences of words // from large swaths of text // -- and the results can be disturbing for humans.

诸如此类技术通过从大量文本中掌握语言规律、预测词组排列,从而对书面指令做出回应,但这种技术发展的结果可能会令人担忧。


The backbone of this sentence is "Technology like this responds to written prompts"; it is followed by prepositional phrases introduced by "by" and "from", and the part after the dash is a supplementary comment on what precedes it.


The original implies a sequence of steps:


large amounts of text → patterns and word sequences found in them → responses to written prompts


When translating into Chinese, the word order can be rearranged to follow this sequence.



Translator: Lambert (student of the applied translation course)

Reviewer: Jennifer

Layout: Joan

English source: CNN

*The Chinese translation was produced by the 译介 translation team and is for reference only; comments and corrections are welcome in the comment section. Please credit the source when reposting!

▲ For human translation services, please contact WeChat ID kevinssf!



- THE END -


Previous foreign-press close readings


Facebook为何遭到批评? 
你曾被医生的话“伤害”到吗?
外卖app真能做到超快配送吗?
最危险的奶酪,这是地狱美食吧...
美婴儿奶粉“一罐难求”,家长怒斥商家哄抬价格
Gucci拥抱加密货币,要做“元宇宙第一奢侈品”?
如何用英文表达“权宜之计”?-附长难句解析
首个接受猪心脏移植患者,或因猪病毒而死
存在血栓风险,美FDA限制强生疫苗使用
明年起,新加坡允许单身女性“冻卵” 
哈佛大学出资1亿美元赎罪!
最新突破!“治愈癌症”更进一步!
SpaceX升空!国际空间站迎来首位黑人女性!
数据表明,女性当母亲后工资会下降
招聘新趋势:互玩ghosting?
首例女性艾滋病治愈者诞生!如何地道翻译“侥幸”?
喜欢宅家是种叫做Hikikomori的病?
「做四休三」离我们究竟还有多远?
什么是EDG?吓得宿管阿姨一脸懵逼!
《权游》烂尾编剧要对《三体》下手了!
丹叔版007绝唱《无暇赴死》
小米"孕育"了日语、韩语、土耳其语...
清华北大全球排第几?US NEWS 最新权威排名出炉!
《老友记》最爱瑞秋的Gunther甘瑟去世!
王亚平带多少护肤品上太空?来例假如何应对?
从一个难民到诺贝尔文学奖得主!
李子柒停更疑云,国外网友急了! 
《鱿鱼游戏》大火,“大逃杀”IP为何不断成为爆款

