
When artificial intelligence runs out of control, a real-life "Matrix" won't be far off | Bilingual Reading

2017-08-15 | Financial Times | FT每日英语 (FT Daily English)

Machines could conceivably acquire the ability to shape and control the future on their own terms, and such an upheaval would require no premeditated malice.


We are handing more and more work over to artificial intelligence, yet research has found that even "designers have difficulty decoding the behaviour of their own robots simply by observing them". Once machine intelligence surpasses that of its human creators and the "singularity", the tipping point at which AI runs out of control, is reached, we may truly be facing "Game Over".




If I were to approach you brandishing a cattle prod, you might at first be amused. But, if I continued my advance with a fixed maniacal grin, you would probably retreat in shock, bewilderment and anger. As electrode meets flesh, I would expect a violent recoil plus expletives.



Photo credit: Getty Images


Given a particular input, one can often predict how a person will respond. That is not the case for the most intelligent machines in our midst. The creators of AlphaGo — a computer program built by Google’s DeepMind that decisively beat the world’s finest human player of the board game Go — admitted they could not have divined its winning moves. This unpredictability, also seen in the Facebook chatbots that were shut down after developing their own language, has stirred disquiet in the field of artificial intelligence.



As we head into the age of autonomous systems, when we abdicate more decision-making to AI, technologists are urging deeper understanding of the mysterious zone between input and output. At a conference held at Surrey University last month, a team of coders from Bath University presented a paper revealing how even “designers have difficulty decoding the behaviour of their own robots simply by observing them”.



Photo credit: Getty Images


The Bath researchers are championing the concept of “robot transparency” as an ethical requirement: users should be able to easily discern the intent and abilities of a machine. And when things go wrong — if, say, a driverless car mows down a pedestrian — a record of the car’s decisions should be accessible so that similar errors can be coded out.



Other roboticists, notably Professor Alan Winfield of Bristol Robotics Laboratory at the University of the West of England, have similarly called for “ethical black boxes” to be installed in robots and autonomous systems, to enhance public trust and accountability. These would work in exactly the same way as flight data recorders on aircraft: furnishing the sequence of decisions and actions that precede a failure.

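To make the flight-recorder analogy concrete, here is a minimal sketch in Python of what such a logger might look like. Everything in it, including the class names, the fields recorded and the ring-buffer design, is an illustrative assumption rather than any published "ethical black box" specification.

```python
from collections import deque
from dataclasses import dataclass
from time import time

@dataclass
class DecisionRecord:
    """One sensed-state / decision / rationale triple (fields are illustrative)."""
    timestamp: float
    sensor_state: dict
    decision: str
    rationale: str

class EthicalBlackBox:
    """Bounded log of a robot's most recent decisions.

    Like a flight data recorder, it keeps a sliding window of events so
    that the sequence of decisions and actions preceding a failure can
    always be reconstructed.
    """
    def __init__(self, capacity: int = 10_000):
        # Oldest entries fall off automatically, as on a flight recorder.
        self._log: deque[DecisionRecord] = deque(maxlen=capacity)

    def record(self, sensor_state: dict, decision: str, rationale: str) -> None:
        self._log.append(DecisionRecord(time(), sensor_state, decision, rationale))

    def last(self, n: int = 100) -> list[DecisionRecord]:
        """Return the n most recent records for post-incident analysis."""
        return list(self._log)[-n:]

# Usage: the control loop logs every decision at the moment it acts.
box = EthicalBlackBox()
box.record({"lidar_min_range_m": 1.2, "speed_mps": 8.3},
           decision="brake",
           rationale="pedestrian detected within stopping distance")
print(box.last(10))
```

The bounded buffer mirrors the flight-recorder design choice: the newest decisions overwrite the oldest, so the window immediately preceding any incident is always available for investigators.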


Photo credit: Getty Images


Many autonomous systems, of course, are unseen: they lurk behind screens. Machine-learning algorithms, grinding mountains of data, can affect our success at securing loans and mortgages, at landing job interviews, and even at being granted parole.



For that reason, says Sandra Wachter, a researcher in data ethics at Oxford university and the Alan Turing Institute, regulation should be discussed. While algorithms can correct for some biases, many are trained on already-skewed data. So a recruitment algorithm for management is likely to identify ideal candidates as male, white and middle-aged. “I am a woman in my early 30s,” she told Science, “so I would be filtered out immediately, even if I’m suitable . . . [and] sometimes algorithms are used to display job ads, so I wouldn’t even see the position is available.”

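To see mechanically how a model inherits such a skew, consider this toy sketch. The data is invented and the features deliberately crude; it is not Wachter's example or any real recruiting system, only an illustration of a classifier learning gender and age as proxies for "suitable candidate".

```python
# Toy sketch with invented data: a screening model fitted to historically
# biased hiring records. "is_male" and age perfectly track past outcomes,
# so the model learns them as proxies for suitability.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per candidate: [is_male, age, years_experience]; label 1 = hired.
X = np.array([
    [1, 45, 10], [1, 50, 12], [1, 48, 11], [1, 42, 9],   # men: hired
    [0, 32, 10], [0, 35, 12], [0, 31, 11], [0, 33, 9],   # women: rejected
])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])

model = LogisticRegression().fit(X, y)

# Two candidates with identical experience; only gender and age differ.
candidates = np.array([[1, 45, 10],   # man in his mid-40s
                       [0, 32, 10]])  # woman in her early 30s
print(model.predict_proba(candidates)[:, 1])
# The woman's predicted "hire" probability comes out far lower, although
# nothing about her experience distinguishes her: the bias was in the data.
```

Nothing in the code discriminates explicitly; the skew enters entirely through the training labels, which is why correcting the data, not just the algorithm, matters.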


The EU General Data Protection Regulation, due to come into force in May 2018, will offer the prospect of redress: individuals will be able to contest completely automated decisions that have legal or other serious consequences.



Photo credit: Getty Images


There is an existential reason for grasping precisely how data input becomes machine output — “the singularity”. This is the much-theorised point of runaway AI, when machine intelligence surpasses that of human creators. Machines could conceivably acquire the ability to shape and control the future on their own terms.



There need not be any premeditated malice for such a leap — only a lack of human oversight as AI programs, equipped with an ever-greater propensity to learn and the corresponding autonomy to act, begin to do things that we can no longer predict, understand or control. The development of AlphaGo suggests that machine learning has already mastered unpredictability, if only at one task. The singularity, should it materialise, promises a rather more chilling version of Game Over.









Do you think artificial intelligence poses a threat?

