
【TED Talk 217】In the Age of Artificial Intelligence, We Need to Hold On to Human Ethics More Than Ever

littleflute 笛台 2021-10-05



 Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns -- and in ways we won't expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics." TEDTalks is a daily video podcast of the best talks and performances from the TED Conference, where the world's leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design -- plus science, business, global issues, the arts and much more.

 

Speaker: Zeynep Tufekci
Talk: In the age of artificial intelligence, we need to hold on to human ethics more than ever


 

 

Transcript


So, I started my first job as a computer programmer in my very first year of college -- basically, as a teenager. 


Well, I laughed, but actually, the laugh's on me. Nowadays, there are computational systems that can suss out emotional states and even lying from processing human faces. Advertisers and even governments are very interested. 


I had become a computer programmer because I was one of those kids crazy about math and science. But somewhere along the line I'd learned about nuclear weapons, and I'd gotten really concerned with the ethics of science. I was troubled. 


However, because of family circumstances, I also needed to start working as soon as possible. So I thought to myself, hey, let me pick a technical field where I can get a job easily and where I don't have to deal with any troublesome questions of ethics. So I picked computers. 

We're asking questions like, "Who should the company hire?" "Which update from which friend should you be shown?" "Which convict is more likely to reoffend?" "Which news item or movie should be recommended to people?" 


Look, yes, we've been using computers for a while, but this is different. This is a historical twist, because we cannot anchor computation for such subjective decisions the way we can anchor computation for flying airplanes, building bridges, going to the moon. 


Are airplanes safer? Did the bridge sway and fall? There, we have agreed-upon, fairly clear benchmarks, and we have laws of nature to guide us. We have no such anchors and benchmarks for decisions in messy human affairs. 


To make things more complicated, our software is getting more powerful, but it's also getting less transparent and more complex. Recently, in the past decade, complex algorithms have made great strides. 

And the system learns by churning through this data. And also, crucially, these systems don't operate under a single-answer logic. They don't produce a simple answer; it's more probabilistic: "This one is probably more like what you're looking for." 

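To make "probabilistic" concrete, here is a minimal sketch of such a system, assuming synthetic data, invented feature names, and scikit-learn: it is trained by churning through examples and returns a probability rather than a single answer.

```python
# A toy "learned" system: everything here (data, features, model) is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # 200 past examples, 3 made-up features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic "was this relevant?" labels

model = LogisticRegression().fit(X, y)         # the system churns through the data

candidate = rng.normal(size=(1, 3))
p = model.predict_proba(candidate)[0, 1]       # not yes/no: a probability
print(f"probably what you're looking for: p = {p:.2f}")
```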

Now, the upside is: this method is really powerful. The head of Google's AI systems called it, "the unreasonable effectiveness of data." The downside is, we don't really understand what the system learned. In fact, that's its power. This is less like giving instructions to a computer; it's more like training a puppy-machine-creature we don't really understand or control. 


So this is our problem. It's a problem when this artificial intelligence system gets things wrong. It's also a problem when it gets things right, because we don't even know which is which when it's a subjective problem. We don't know what this thing is thinking. 


So, consider a hiring algorithm -- a system used to hire people, using machine-learning systems. Such a system would have been trained on previous employees' data and instructed to find and hire people like the existing high performers in the company. Sounds good. 

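Here is a hedged sketch of the hiring setup just described: train on previous employees' records, labeled by who became a high performer, then rank applicants by how much they resemble those people. The features, labels, and model choice are all assumptions for illustration.

```python
# Sketch of "find people like our existing high performers"; data is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
past_employees = rng.normal(size=(500, 8))                 # 8 anonymous application features
high_performer = (past_employees[:, 2] > 0.3).astype(int)  # historical performance label

model = GradientBoostingClassifier().fit(past_employees, high_performer)

applicants = rng.normal(size=(10, 8))
scores = model.predict_proba(applicants)[:, 1]             # resemblance to "our best people"
print("interview order:", np.argsort(scores)[::-1])
```

Nothing in this sketch mentions gender, race, or health, and the paragraphs below explain why that is no guarantee of anything.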
I was broke, so free lunch. I always went. I later realized what was happening. My immediate managers had not confessed to their higher-ups that the programmer they hired for a serious job was a teen girl who wore jeans and sneakers to work. I was doing a good job, I just looked wrong and was the wrong age and gender. 


So hiring in a gender- and race-blind way certainly sounds good to me. But with these systems, it is more complicated, and here's why: Currently, computational systems can infer all sorts of things about you from your digital crumbs, even if you have not disclosed those things. 
They can infer your sexual orientation, your personality traits, your political leanings. They have predictive power with high levels of accuracy. Remember -- for things you haven't even disclosed. This is inference. 

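A minimal sketch of that kind of inference, on synthetic data: the attribute is never part of what anyone shared, yet a model recovers it from incidental behavior that happens to correlate with it.

```python
# Predicting an undisclosed trait from "digital crumbs"; all data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_people(n):
    crumbs = rng.integers(0, 2, size=(n, 50))                      # 50 binary behavior signals
    trait = (crumbs[:, [3, 17, 41]].sum(axis=1) >= 2).astype(int)  # correlates with a few crumbs
    return crumbs, trait

X_train, y_train = make_people(1000)
X_new, y_new = make_people(300)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on people who never disclosed it:", model.score(X_new, y_new))
```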

"What if it's hiring aggressive people because that's your workplace culture?" You can't tell this by looking at gender breakdowns. Those may be balanced. And since this is machine learning, not traditional coding, there is no variable there labeled "higher risk of depression," "higher risk of pregnancy," "aggressive guy scale." 
Not only do you not know what your system is selecting on, you don't even know where to begin to look. It's a black box. It has predictive power, but you don't understand it. 

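The talk offers no recipe here, but one way to see why "where to begin" is so hard: from outside, all you can do is query the box and watch the output move. A toy probe follows, with an invented stand-in for the opaque model.

```python
# Probing a black box one input at a time; the box itself is a made-up stand-in.
import numpy as np

def black_box(x):
    """Stand-in for an opaque scoring service we can only query."""
    return 1.0 / (1.0 + np.exp(-(0.9 * x[0] - 2.0 * x[3])))

base = np.zeros(8)
for i in range(8):
    probe = base.copy()
    probe[i] = 1.0                              # nudge one feature
    print(f"feature {i}: score moves by {black_box(probe) - black_box(base):+.3f}")
```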

"What safeguards," I asked, "do you have to make sure that your black box isn't doing something shady?" She looked at me as if I had just stepped on 10 puppy tails.

Another problem is this: these systems are often trained on data generated by our actions, human imprints. Well, they could just be reflecting our biases, and these systems could be picking up on our biases and amplifying them and showing them back to us, while we're telling ourselves, "We're just doing objective, neutral computation." 

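A small sketch of that reflection effect, on synthetic data: the training labels encode a human bias (group 1 was never hired, however skilled), and the learned model reproduces it while looking like neutral computation.

```python
# Bias in, bias out: the model below learns from biased human decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
skill = rng.normal(size=2000)
group = rng.integers(0, 2, size=2000)             # attribute that should be irrelevant
hired = ((skill > 0) & (group == 0)).astype(int)  # biased past decisions: group 1 never hired

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

skilled = np.column_stack([np.full(100, 1.5), np.zeros(100)])
print("hire rate, skilled group 0:", model.predict(skilled).mean())
skilled[:, 1] = 1.0
print("hire rate, skilled group 1:", model.predict(skilled).mean())
```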

Researchers found that on Google, women are less likely than men to be shown job ads for high-paying jobs. And searching for African-American names is more likely to bring up ads suggesting criminal history, even when there is none. Such hidden biases and black-box algorithms that researchers uncover sometimes but sometimes we don't know, can have life-altering consequences. 


In Wisconsin, a defendant was sentenced to six years in prison for evading the police. You may not know this, but algorithms are increasingly used in parole and sentencing decisions. He wanted to know: How is this score calculated? It's a commercial black box. The company refused to have its algorithm be challenged in open court. 


She was wrong, she was foolish, but she was also just 18. She had a couple of juvenile misdemeanors. Meanwhile, that man had been arrested for shoplifting in Home Depot -- 85 dollars' worth of stuff, a similar petty crime. But he had two prior armed robbery convictions. But the algorithm scored her as high risk, and not him. 


Two years later, ProPublica found that she had not reoffended. It was just hard to get a job for her with her record. He, on the other hand, did reoffend and is now serving an eight-year prison term for a later crime. Clearly, we need to audit our black boxes and not have them have this kind of unchecked power. 

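One concrete form such an audit can take, loosely modeled on the comparison ProPublica ran: among people who did not reoffend, were different groups flagged as high risk at different rates? The records below are random placeholders; on real data, a gap between the two printed rates is the red flag.

```python
# Auditing a black-box risk score for unequal false positives; data is synthetic.
import numpy as np

def false_positive_rate(score, reoffended, in_group, threshold=7):
    """Of people in the group who did NOT reoffend, the share flagged high risk."""
    innocent = in_group & (reoffended == 0)
    return (innocent & (score >= threshold)).sum() / max(innocent.sum(), 1)

rng = np.random.default_rng(5)
score = rng.integers(1, 11, size=1000)       # the opaque model's 1-10 risk score
reoffended = rng.integers(0, 2, size=1000)   # outcomes observed two years later
group_a = rng.integers(0, 2, size=1000).astype(bool)

print("false positive rate, group A:", false_positive_rate(score, reoffended, group_a))
print("false positive rate, group B:", false_positive_rate(score, reoffended, ~group_a))
```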

Audits are great and important, but they don't solve all our problems. Take Facebook's powerful news feed algorithm -- you know, the one that ranks everything and decides what to show you from all the friends and pages you follow. Should you be shown another baby picture? 


The story of Ferguson wasn't algorithm-friendly. It's not "likable." Who's going to click on "like?" It's not even easy to comment on. Without likes and comments, the algorithm was likely showing it to even fewer people, so we didn't get to see this. 

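A toy version of the ranking logic at issue, with invented numbers mirroring the Ferguson / ice-bucket contrast; the scoring rule is an assumption, not Facebook's actual formula.

```python
# Rank stories purely by predicted engagement; all numbers are invented.
stories = [
    {"title": "friend's baby photo",  "pred_likes": 120, "pred_comments": 15},
    {"title": "ice bucket challenge", "pred_likes": 300, "pred_comments": 40},
    {"title": "protest coverage",     "pred_likes": 8,   "pred_comments": 2},
]

def engagement(story):                            # assumed weighting
    return story["pred_likes"] + 3 * story["pred_comments"]

for story in sorted(stories, key=engagement, reverse=True):
    print(f'{engagement(story):4d}  {story["title"]}')
# The hard-to-"like" story sinks to the bottom and reaches even fewer people.
```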

Instead, that week, Facebook's algorithm highlighted this, which is the ALS Ice Bucket Challenge. Worthy cause; dump ice water, donate to charity, fine. But it was super algorithm-friendly. The machine made this decision for us. A very important but difficult conversation might have been smothered, had Facebook been the only channel. 


Now, finally, these systems can also be wrong in ways that don't resemble human systems. Do you guys remember Watson, IBM's machine-intelligence system that wiped the floor with human contestants on Jeopardy? It was a great player. But then, for Final Jeopardy, Watson was asked this question: "Its largest airport is named for a World War II hero, its second-largest for a World War II battle." 

In May of 2010, a flash crash on Wall Street fueled by a feedback loop in Wall Street's "sell" algorithm wiped a trillion dollars of value in 36 minutes. I don't even want to think what "error" means in the context of lethal autonomous weapons. 

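A toy feedback loop of the kind blamed for that flash crash: an automated sell rule reacts to the very price drop its own selling causes. Every parameter below is invented; the point is only the self-reinforcing dynamic.

```python
# Self-reinforcing "sell" loop; all constants are invented for illustration.
price, holdings = 100.0, 1_000_000
for minute in range(36):
    drawdown = (100.0 - price) / 100.0               # how far price has fallen
    sell = holdings * min(0.05 + 2 * drawdown, 0.5)  # falling prices trigger heavier selling
    holdings -= sell
    price *= 1 - sell / 2_000_000                    # selling pressure pushes price down further
    print(f"minute {minute:2d}: price {price:7.2f}, sold {sell:,.0f} shares")
```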

So yes, humans have always made biases. Decision makers and gatekeepers, in courts, in news, in war ... they make mistakes; but that's exactly my point. We cannot escape these difficult questions. We cannot outsource our responsibilities to machines. Artificial intelligence does not give us a "Get out of ethics free" card. 


Data scientist Fred Benenson calls this math-washing. We need the opposite. We need to cultivate algorithm suspicion, scrutiny and investigation. We need to make sure we have algorithmic accountability, auditing and meaningful transparency. We need to accept that bringing math and computation to messy, value-laden human affairs does not bring objectivity; rather, the complexity of human affairs invades the algorithms. 

 

