Over the past decade, the line between "public" and "private" has blurred, both online and in real life. Alessandro Acquisti explains what privacy means and why it matters. In this thought-provoking and slightly unsettling talk, he shares details of recent and ongoing research, including one project showing just how easy it is to match a stranger's photo to highly sensitive personal information about them.
"Why Privacy Matters!"
https://v.qq.com/txp/iframe/player.html?vid=p0394lcigop&width=500&height=375&auto=0
Speaker: Alessandro Acquisti
Transcript
00:12
I would like to tell you a story connecting the notorious privacy incident involving Adam and Eve, and the remarkable shift in the boundaries between public and private which has occurred in the past 10 years.
00:28
You know the incident. Adam and Eve one day in the Garden of Eden realize they are naked. They freak out. And the rest is history.
00:39
Nowadays, Adam and Eve would probably act differently.
00:44
[@Adam Last nite was a blast! loved dat apple LOL]
00:46
[@Eve yep.. babe, know what happened to my pants tho?]
00:48
We do reveal so much more information about ourselves online than ever before, and so much information about us is being collected by organizations. Now there is much to gain and benefit from this massive analysis of personal information, or big data, but there are also complex tradeoffs that come from giving away our privacy. And my story is about these tradeoffs.
01:15
We start with an observation which, in my mind, has become clearer and clearer in the past few years, that any personal information can become sensitive information. Back in the year 2000, about 100 billion photos were shot worldwide, but only a minuscule proportion of them were actually uploaded online. In 2010, only on Facebook, in a single month, 2.5 billion photos were uploaded, most of them identified. In the same span of time, computers' ability to recognize people in photos improved by three orders of magnitude. What happens when you combine these technologies together: increasing availability of facial data; improving facial recognition by computers; but also cloud computing, which gives anyone in this theater the kind of computational power which a few years ago was only the domain of three-letter agencies; and ubiquitous computing, which allows my phone, which is not a supercomputer, to connect to the Internet and perform hundreds of thousands of face metrics there in a few seconds? Well, we conjecture that the result of this combination of technologies will be a radical change in our very notions of privacy and anonymity.
02:35
To test that, we did an experiment on Carnegie Mellon University campus. We asked students who were walking by to participate in a study, and we took a shot with a webcam, and we asked them to fill out a survey on a laptop. While they were filling out the survey, we uploaded their shot to a cloud-computing cluster, and we started using a facial recognizer to match that shot to a database of some hundreds of thousands of images which we had downloaded from Facebook profiles. By the time the subject reached the last page on the survey, the page had been dynamically updated with the 10 best matching photos which the recognizer had found, and we asked the subjects to indicate whether he or she found themselves in the photo.
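For readers curious about the mechanics, here is a minimal sketch of the kind of matching step the experiment describes, written with the open-source Python `face_recognition` library. It is an illustration under assumed inputs (a probe photo and a folder of downloaded profile photos), not the researchers' actual system.

```python
# Minimal sketch of the matching step described above: encode one webcam
# shot, encode a gallery of downloaded profile photos, and return the 10
# closest faces. Uses the open-source `face_recognition` library; all
# file paths here are illustrative.
import face_recognition
import numpy as np

def top_matches(probe_path, gallery_paths, k=10):
    probe = face_recognition.load_image_file(probe_path)
    probe_encodings = face_recognition.face_encodings(probe)
    if not probe_encodings:
        return []  # no face detected in the webcam shot

    gallery_encodings, kept_paths = [], []
    for path in gallery_paths:
        encodings = face_recognition.face_encodings(
            face_recognition.load_image_file(path))
        if encodings:
            gallery_encodings.append(encodings[0])
            kept_paths.append(path)
    if not gallery_encodings:
        return []

    # Smaller distance = more similar face; keep the k best matches.
    distances = face_recognition.face_distance(
        np.array(gallery_encodings), probe_encodings[0])
    best = np.argsort(distances)[:k]
    return [(kept_paths[i], float(distances[i])) for i in best]
```

In the study, the equivalent of `gallery_paths` was a cloud-hosted database of hundreds of thousands of Facebook profile photos, and the ten best matches were shown back to the subject before the survey ended.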
03:20
Do you see the subject? Well, the computer did, and in fact did so for one out of three subjects.
03:29
So essentially, we can start from an anonymous face, offline or online, and we can use facial recognition to give a name to that anonymous face thanks to social media data. But a few years back, we did something else. We started from social media data, we combined it statistically with data from U.S. government social security, and we ended up predicting social security numbers, which in the United States are extremely sensitive information.
03:56
Do you see where I'm going with this? So if you combine the two studies together, then the question becomes, can you start from a face and, using facial recognition, find a name and publicly available information about that name and that person, and from that publicly available information infer non-publicly available information, much more sensitive information, which you link back to the face? And the answer is, yes, we can, and we did. Of course, the accuracy keeps getting worse. [27% of subjects' first 5 SSN digits identified (with 4 attempts)] But in fact, we even decided to develop an iPhone app which uses the phone's internal camera to take a shot of a subject and then upload it to a cloud and then do what I just described to you in real time: looking for a match, finding public information, trying to infer sensitive information, and then sending it back to the phone so that it is overlaid on the face of the subject, an example of augmented reality, probably a creepy example of augmented reality. In fact, we didn't develop the app to make it available, just as a proof of concept.
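The SSN result rests on the fact that, before randomization in 2011, the first three digits of a Social Security number (the "area number") were assigned by state, and the next two (the "group number") were issued in a predictable order over time. The sketch below illustrates only the first step of that inference, with a small illustrative subset of the area-number table; it is not the authors' statistical model.

```python
# Illustrative sketch of why pre-2011 SSNs were predictable: the area
# number was tied to the state of issuance, which for most people is
# their birth state, often discoverable from social media. Knowing the
# birth state and date therefore narrows the first five digits sharply.
# The ranges below are a small illustrative subset, not the full table.
AREA_NUMBERS = {
    "New Hampshire": range(1, 4),     # 001-003
    "New York": range(50, 135),       # 050-134
    "Pennsylvania": range(159, 212),  # 159-211
}

def candidate_prefixes(birth_state):
    """Enumerate plausible 3-digit SSN area numbers for a birth state."""
    return [f"{n:03d}" for n in AREA_NUMBERS.get(birth_state, [])]

print(candidate_prefixes("New Hampshire"))  # ['001', '002', '003']
```

For someone born in a small state, the area number is nearly determined outright; combining it with the date-driven group-number sequence is what let the study recover 27% of subjects' first five digits within four attempts.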
04:57
In fact, take these technologies and push them to their logical extreme. Imagine a future in which strangers around you will look at you through their Google Glasses or, one day, their contact lenses, and use seven or eight data points about you to infer anything else which may be known about you. What will this future without secrets look like? And should we care?
05:24
We may like to believe that the future with so much wealth of data would be a future with no more biases, but in fact, having so much information doesn't mean that we will make decisions which are more objective. In another experiment, we presented to our subjects information about a potential job candidate. We included in this information some references to some funny, absolutely legal, but perhaps slightly embarrassing information that the subject had posted online. Now interestingly, among our subjects, some had posted comparable information, and some had not. Which group do you think was more likely to judge our subject harshly? Paradoxically, it was the group who had posted similar information, an example of moral dissonance.
06:15
Now you may be thinking, this does not apply to me, because I have nothing to hide. But in fact, privacy is not about having something negative to hide. Imagine that you are the H.R. director of a certain organization, and you receive résumés, and you decide to find more information about the candidates. Therefore, you Google their names and in a certain universe, you find this information. Or in a parallel universe, you find this information. Do you think that you would be equally likely to call either candidate for an interview? If you think so, then you are not like the U.S. employers who are, in fact, part of our experiment, meaning we did exactly that. We created Facebook profiles, manipulating traits, then we started sending out résumés to companies in the U.S., and we detected, we monitored, whether they were searching for our candidates, and whether they were acting on the information they found on social media. And they were. Discrimination was happening through social media for equally skilled candidates.
07:19
Now marketers like us to believe that all information about us will always be used in a manner which is in our favor. But think again. Why should that be always the case? In a movie which came out a few years ago, "Minority Report," a famous scene had Tom Cruise walk in a mall and holographic personalized advertising would appear around him. Now, that movie is set in 2054, about 40 years from now, and as exciting as that technology looks, it already vastly underestimates the amount of information that organizations can gather about you, and how they can use it to influence you in a way that you will not even detect.
08:04
So as an example, this is another experiment actually we are running, not yet completed. Imagine that an organization has access to your list of Facebook friends, and through some kind of algorithm they can detect the two friends that you like the most. And then they create, in real time, a facial composite of these two friends. Now studies prior to ours have shown that people don't recognize any longer even themselves in facial composites, but they react to those composites in a positive manner. So next time you are looking for a certain product, and there is an ad suggesting you to buy it, it will not be just a standard spokesperson. It will be one of your friends, and you will not even know that this is happening.
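The composite step can be approximated very simply. Below is a minimal sketch that blends two face photos pixel by pixel using the Pillow imaging library; real morphing systems align facial landmarks first, so treat this as the simplest possible stand-in, with hypothetical file paths.

```python
# A minimal sketch of the "facial composite" idea: blend two (roughly
# aligned) face photos into one. Production systems warp the images so
# facial landmarks coincide before blending; plain pixel averaging is
# the simplest stand-in for illustration.
from PIL import Image

def composite(path_a, path_b, out_path="composite.png"):
    a = Image.open(path_a).convert("RGB")
    b = Image.open(path_b).convert("RGB").resize(a.size)
    Image.blend(a, b, alpha=0.5).save(out_path)  # 50/50 blend of the two faces
```

The unsettling part is not the blending itself but the targeting: the two inputs would be chosen algorithmically as your two best-liked friends, making the resulting "spokesperson" familiar without being recognizable.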
08:49
Now the problem is that the current policy mechanisms we have to protect ourselves from the abuses of personal information are like bringing a knife to a gunfight. One of these mechanisms is transparency, telling people what you are going to do with their data. And in principle, that's a very good thing. It's necessary, but it is not sufficient. Transparency can be misdirected. You can tell people what you are going to do, and then you still nudge them to disclose arbitrary amounts of personal information.
09:23
So in yet another experiment, this one with students, we asked them to provide information about their campus behavior, including pretty sensitive questions, such as this one. [Have you ever cheated in an exam?] Now to one group of subjects, we told them, "Only other students will see your answers." To another group of subjects, we told them, "Students and faculty will see your answers." Transparency. Notification. And sure enough, this worked, in the sense that the first group of subjects were much more likely to disclose than the second. It makes sense, right? But then we added the misdirection. We repeated the experiment with the same two groups, this time adding a delay between the time we told subjects how we would use their data and the time they actually started answering the questions.
10:09
How long a delay do you think we had to add in order to nullify the inhibitory effect of knowing that faculty would see your answers? Ten minutes? Five minutes? One minute? How about 15 seconds? Fifteen seconds were sufficient to have the two groups disclose the same amount of information, as if the second group now no longer cared about faculty reading their answers.
10:36
Now I have to admit that this talk so far may sound exceedingly gloomy, but that is not my point. In fact, I want to share with you the fact that there are alternatives. The way we are doing things now is not the only way they can be done, and certainly not the best way they can be done. When someone tells you, "People don't care about privacy," consider whether the game has been designed and rigged so that they cannot care about privacy, and realizing that these manipulations occur is already halfway toward being able to protect yourself. When someone tells you that privacy is incompatible with the benefits of big data, consider that in the last 20 years, researchers have created technologies to allow virtually any electronic transaction to take place in a more privacy-preserving manner. We can browse the Internet anonymously. We can send emails that can only be read by the intended recipient, not even the NSA. We can even have privacy-preserving data mining. In other words, we can have the benefits of big data while protecting privacy. Of course, these technologies imply a shifting of cost and revenues between data holders and data subjects, which is why, perhaps, you don't hear more about them.
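"Privacy-preserving data mining" may sound abstract, so here is one textbook example: the Laplace mechanism from differential privacy, which answers aggregate queries while masking any single individual's contribution. This is a generic sketch of the technique, not one of the specific systems the speaker alludes to.

```python
# One concrete form of privacy-preserving data mining: the Laplace
# mechanism from differential privacy. A count query gets random noise
# calibrated to its sensitivity (1 for a count), so an analyst learns
# the aggregate while any single person's presence stays deniable.
import numpy as np

def private_count(values, predicate, epsilon=0.1):
    """Noisy count of records matching `predicate` (sensitivity = 1)."""
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [23, 35, 41, 29, 52, 47, 31]
print(private_count(ages, lambda age: age > 40))  # true answer 3, plus noise
```

Smaller `epsilon` means stronger privacy but noisier answers; the point is that the accuracy/privacy trade-off is tunable rather than all-or-nothing, which is exactly the "benefits of big data while protecting privacy" claim.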
11:58
Which brings me back to the Garden of Eden. There is a second privacy interpretation of the story of the Garden of Eden which doesn't have to do with the issue of Adam and Eve feeling naked and feeling ashamed. You can find echoes of this interpretation in John Milton's "Paradise Lost." In the garden, Adam and Eve are materially content. They're happy. They are satisfied. However, they also lack knowledge and self-awareness. The moment they eat the aptly named fruit of knowledge, that's when they discover themselves. They become aware. They achieve autonomy. The price to pay, however, is leaving the garden. So privacy, in a way, is both the means and the price to pay for freedom.
12:50
Again, marketers tell us that big data and social media are not just a paradise of profit for them, but a Garden of Eden for the rest of us. We get free content. We get to play Angry Birds. We get targeted apps. But in fact, in a few years, organizations will know so much about us, they will be able to infer our desires before we even form them, and perhaps buy products on our behalf before we even know we need them.
13:20
Now there was one English author who anticipated this kind of future where we would trade away our autonomy and freedom for comfort. Even more so than George Orwell, the author is, of course, Aldous Huxley. In "Brave New World," he imagines a society where technologies that we created originally for freedom end up coercing us. However, in the book, he also offers us a way out of that society, similar to the path that Adam and Eve had to follow to leave the garden. In the words of the Savage, regaining autonomy and freedom is possible, although the price to pay is steep. So I do believe that one of the defining fights of our times will be the fight for the control over personal information, the fight over whether big data will become a force for freedom, rather than a force which will covertly manipulate us.
14:26
Right now, many of us do not even know that the fight is going on, but it is, whether you like it or not. And at the risk of playing the serpent, I will tell you that the tools for the fight are here, the awareness of what is going on, and in your hands, just a few clicks away.
14:48
Thank you.
14:49
(Applause)