Bund Interview | A Conversation with Nobel Laureate Michael Spence on AI Development and Regulation
Over recent decades, the astonishing pace of technological progress does not appear to have significantly boosted global economic growth. The emergence and breakthroughs of generative artificial intelligence (AI), exemplified by ChatGPT, have raised hopes that the technology will bring profound change to human society.
1. In your view, which areas will be most profoundly affected by AI? Will AI bring "milestone" change to human society, and in what ways will its influence be felt?
2. How will AI affect overall labor productivity, and how will it change the labor market?
3. Some argue that a bubble has already formed in the AI industry. What is your view?
4. What do you see as the most notable risks of generative AI, and how can they best be contained?
5. What are the difficulties in building an AI regulatory framework?
6. How can a coordinated, unified, and effective transnational AI governance architecture be established?
7. Globally, how can this round of technological innovation, represented by AI, be steered toward inclusive growth, rather than merely giving a few countries or groups a greater digital advantage?
8. This year's Bund Summit features multiple sessions on technological innovation and productivity. Which sessions or topics interest you most? What are your expectations for the future of the Bund Summit?
English Interview Transcript
Q1: In your view, what are the areas that have been most profoundly reshaped by AI technologies? Will AI bring about a "milestone" revolution for human society?
Michael SPENCE: AI is in its early stages. The AI revolution goes back to language and speech recognition, then image and object recognition, and now we have the amazing breakthrough of the large language generative AI models. That's very recent – the research started in 2017 with that famous paper by eight authors. Some of them were from Google, and now they may all have their own companies.
Most people, including me, think that the potential in terms of productivity and improved performance, on a slightly broader front, in a very, very wide array of areas, is very high. Now, is that a guess, or a forecast? Yes, at this stage it is, because there are literally thousands and thousands of experiments and explorations being conducted to find out how to use these things.
The straight answer to the question is that the sequence of AI breakthroughs, and likely future ones, could result in a transformative change in the economy. In terms of where it lands, the large language models basically land in what Sundar Pichai, the CEO of Alphabet, has called the knowledge economy. There's more to do. We need breakthroughs in robotics in order to expand, yet again, the digital footprint.
But I think it's a huge thing that generative AI has a couple of really interesting characteristics. One is that, really for the first time, the AI switches domains easily in response to very simple kinds of prompts. If you ask it about the Italian Renaissance and then switch to mathematics and so on, it goes with you. That's very unusual and new. It runs counter to what was, until recently, the conventional wisdom in artificial intelligence – that AIs perform best in restricted domains. The second is that it's accessible. You don't need much technical training in order to use it. You do need technical training to create it.
So, yes, I think it's potentially revolutionary, and we are in the early, experimental stages of both use and regulation.
Q2: How will AI technologies impact the overall productivity and the labor market?
Michael SPENCE: There will certainly be aspects of work, particularly in the knowledge economy, in which artificial intelligence systems will do a better job. One way to think about this is: when does the AI outperform the human, and when does the human outperform the AI? There's a set of things that AIs do either faster or more accurately, and so there will be some displacement – that is, some degree of automation.
Overall, at least for the foreseeable future, the most likely use of AIs is something that I call the powerful digital system model: it takes over part of the job, but it doesn't take the human out of the equation. When you think of the applications, are we going to get rid of analysts in asset management and turn it over to AIs? I doubt it. Are nurses and doctors going to have AIs that assist them in their work? Yes, probably. Are we going to get rid of them? No.
Erik Brynjolfsson at Stanford wrote an influential essay, which I have referenced, arguing that there is a bit of a bias he calls the Turing Trap – a lean in the direction of automation and replacing human activity. While it would be crazy to deny some of that, I think it is more likely, and better, to focus on augmenting human performance using these powerful systems – I hope that's where we go. I'm not terribly worried about a massive loss of employment opportunity, frankly. The more likely outcome is a kind of large productivity gain. Maybe there will be transitory employment problems associated with that.
Having said that, that's kind of at the macro level. At the micro level, there will be a lot of change in jobs using these digital tools, and so there could be a fair amount of disruption. Generative LLMs are going to write first drafts of stuff. If you think about a doctor who spends an enormous amount of time recording what he or she has done, having a first draft that's reasonably accurate can reduce the time to get the report done by 80%. That's just pure gain. Do we need fewer doctors? I doubt it. For people who write copy for media and communications, I can imagine there will be a significant employment effect.
Again, it's too early to tell. I think it would be wrong to assume that there will be no micro impacts of that kind. Let me give another example. These models are capable of producing "drafts" of computer code. That's been demonstrated. That will make software engineers who write code more efficient. If that were a big effect, maybe we would need fewer programmers. But we live in an age where software is going to drive everything, so there will be huge incremental demand as well. The net effect is hard to know in advance.
So, the question is, sitting in an armchair, can you figure out which of those effects is bigger? I don’t think so. There’s a lot to wait and see. My personal view is that it’s not likely that you’ll see massive loss of employment opportunities just because AIs have demonstrated themselves to be reasonably capable.
Now, if you go on 25 years, and these things have been tested and run, I think it gets a lot harder to guess what the world would look like and what the relationship between humans and digital machines is going to be. So, I cannot go there, but I think of the implementation of the true potential of this occurring over the next 10 years or so, and in that 10 years, we’re going to see mostly a kind of productive collaboration between machine and humans.
Q3: There is the view that AI is already in the middle of a bubble. Do you share that concern?
Michael SPENCE: I'm not too worried about it. The question is what you mean by "bubble". Some people think a bubble is when the market gets disconnected from reality and there is really nothing there, right? I don't think that's the case for AI. Are the markets pretty frothy in the sense of valuations? Probably yes. So, if by a bubble you mean they've overshot the mark relative to the current underlying reality, you could make a good case for that.
We have precedent for this in the internet bubble more than 20 years ago. At that time, people realized that these digital technologies were pretty powerful and would transform aspects of the economy. In the early stages there were companies created that didn't make any sense and eventually failed; there were valuations that didn't make any sense and eventually came down. But the forecast itself – even though it was an excessively exuberant one – that this was going to have a profound effect on our economy, on e-commerce, fintech, etc., wasn't wrong; it's just that the timing wasn't quite right. It always takes longer than expected. Recognizing the potential and seeing it realized and implemented occur on different time scales.
I think we are in the same situation now. I don't think the forecast that this is going to be very impactful on the economy, and more broadly than that, is wrong. Is it going to have some potentially risky downside effects? Yes. Is it going to happen tomorrow? No. It takes far longer than the initial speculation suggests for these things to become reality. Business models have to change, organizations have to change, people need to change their behavior and acquire new skills – all of that is not something that happens in a single quarter.
So, I think the valuations are pretty high right now, but the underlying reality is that they are anticipating that something pretty fundamental is happening.
Q4: What are the most notable risks that generative AI creates, and how to bring them under control?
Michael SPENCE: There's a large set of risks. I don't know where to start.
First of all, there's a new set of data issues related to security, privacy, and whose rights are recognized in the way data is used. And that's not new. What's new is that generative AI – the big models – are trained on essentially the entire Internet, on virtually everything that's out there in digital form. That raises the question of whether there are any rights associated with that, or whether the LLMs have free run of everything that has been published on the internet. That needs to be thought through.
Second, generative AI is a fairly powerful tool for creating fake news and fraud, and for influencing people. So there are regulatory issues with respect to that, and there are real risks associated with it.
Then there are idiosyncratic risks that I don't worry so much about. There is a well-known case in the United States involving hallucinations. A lawyer used generative AI to produce a brief that was presented to the court, and the generative AI, unfortunately for this lawyer, cited legal precedents that it had made up – that is, they don't exist. These are called hallucinations in the AI world. The lawyer didn't know that, presented the brief to the court, and got into serious trouble, because he told the court there was a whole bunch of precedents and legal cases that actually don't exist. These models will produce hallucinations, and one has to be careful. The creators of these generative AI models are obviously well aware of this.
Factual accuracy is important in many contexts, and there hallucinations are a problem. But in the creative industries, making stuff up isn't so bad. If the model makes up a new song or a new picture or a new video, there are attribution issues associated with that for sure, but it might just give a creative artist new ideas. There's a fair amount of feedback to that effect. In some contexts, these hallucinations are actually good rather than bad. New ideas create new things.
There is one other thing I might mention, though. A balanced agenda with respect to AI needs to focus on two things. One is the set of risks and potential misuses of the technology. The second, which I'm afraid is getting too little attention, is a set of policies designed to make this powerful new technology accessible and usable in its positive applications – call it productivity for a moment – across the entire economy. Big tech companies and big banks in China and the United States, and maybe in Europe, will have the resources to explore building use cases and applications on top of the AI models using the application programming interfaces (APIs) being created. But what about small- and medium-sized businesses? Are they going to be fine? Is the market system going to get the job done by itself?
I don't think that is a safe assumption. In past rounds of digital technology penetration, we've seen a pattern of divergence both across sectors and across companies. The tech sector and the finance sector tend to be pretty advanced, at least in the United States, while some other sectors are seriously lagging behind. That divergence could emerge here.
For me, the other part of a balanced agenda is associated not with protecting us from negative outcomes and risks, but with making sure that the positive outcomes are dispersed and diffused widely in the economy, so that we don't get unfortunate competitive outcomes – backwaters, places where it doesn't penetrate. So, I'm hoping that we can rebalance the agenda in the direction of diffusion and widespread adoption.
We really want the surge in productivity, which we desperately need – at least in the West, because we have declining productivity, declining growth and all kinds of supply-side headwinds. A productivity surge, if it can be achieved, would be a major change in the supply-side constraints on growth. If we don't get it, it will be much more difficult to achieve other objectives such as massive investments in the energy transition and climate change. Think about it: we have rising sovereign debt levels, rising interest rates, declining fiscal space, and now we are supposed to spend an extra 3 to 4 trillion dollars a year on the energy transition. It's fairly easy to see that productivity and growth would be a major boost in addressing other crucial objectives: sustainability and inclusiveness.
So, in order to achieve that, you really need this technology not to be adopted only by the tech leaders while everybody else is behind, you need the whole system. And generative AI is potentially applicable essentially to everyone in the economy. So that would be the other part of the agenda. It’s not regulatory. It’s more promotion, public sector support, diffusion of information technology, skills training, etc, that would help ensure that kind of outcome.
Q5: What are the major bottlenecks in building an AI regulatory framework?
Michael SPENCE: It depends on where you are. There are really big differences. Europe, which tends not to have the main players that are generating this technology, is moving fairly aggressively to regulate artificial intelligence. What they don't have is the same kind of incentive that exists in China and the United States to make sure that regulation doesn't get in the way of innovative uses, experimentation, exploration and all that sort of thing.
On the other hand, there are common elements. The data question has to be addressed. Are there limits that need to be placed on the use of generative AI with respect to publicly available data? I don't think the answers have emerged yet. Furthermore, I don't think the answers that do emerge will be the same in different places, because the political systems and the cultures differ as you go from one place to another. The role of government in China with respect to digital is very different in principle from the role of government in places like the United States.
So, there are two issues there. One is finding the balance between regulation and innovation and whatever other values deemed important in the society. And then in the international arena, we need international institutions to mediate the process, trying to make these things match together as much as we can – and this can be very difficult.
Balancing security and innovation is a hard problem. I don’t think anybody would argue that AI doesn’t have anything to do with national security, regardless of whose national security you are talking about, which means that it’s almost inevitable that there are going to be restrictions.
Inevitably, we are going to see restrictions on the flow of technology and the products that support it. I think the challenge is to limit those restrictions cooperatively to the extent we can. I suspect that's the approach most governments will try to take.
I don’t think sensible people realistically want to use national security as an excuse for a massive shutdown in trade and technology transfer on a global basis. While it’s not an easy challenge to ring-fence the technologies that are essential to national security, it is worth the effort.
In the international arena, let me give an example. Artificial intelligence is starting to be used to increase the transparency of global supply chains. They are very complex. It's almost impossible for humans to figure out all the things that are going on, but the AIs have a reasonable contribution to make in that area. And that process is starting. But that means AIs are going to be operating on what is quintessentially data-dependent international trade and commerce. It's almost surely true that we are going to need principles and regulatory structures that define the boundaries for that. That's just one example.
So, the bottom line is that we are in early enough stages. We don't have a precise forecast of how, and in what sequence, the AIs are going to hit the economy. We don't really, on the other side, have a very precise guess as to how the regulatory processes that are underway will turn out. And the range of opinions is quite large. Some people think these AIs are an existential threat to humanity, but I don't think that's the majority view. And because it's so new, all of us, in some sense, are educating ourselves about both risks and opportunities. You can't imagine there is some kind of short-run easy resolution to this at all. The goal is to maintain an open mind and a sense of balance.
Q6: How can countries establish a coordinated, unified and effective transnational governance framework?
Michael SPENCE: This is not going to be doable in an environment dominated by nationalism. So we need the international institutions; maybe we even need new ones. As digital becomes even more important, we need existing, and possibly new, international institutions to intermediate and manage the interactions.
We need the international institutions to be the forum, to mediate the discussion, and to make sure that the scope is inclusive.
The current trend toward marginalization of the international institutions as a side effect of geopolitical tensions and rising nationalism is counterproductive. That’s a negative for everybody. This is an area where you need a forum where you explore options with respect to how to cooperate.
Another way to go about it is to say, look, there's not much we can do about this – it's just part of the scenery. But one thing we can agree on is that there are a few areas where we have important common interests. The one that is most cited, correctly, is the climate challenge. So, another way to go about this is to ask what we need to do to make sure we don't disrupt the technology transformation and its diffusion that are needed to accomplish the energy transition and move to a sustainable global economy. We could focus on that, and just agree to disagree on other things. That would be a major step forward.
And AI plays a role in all of these things. The way to go about this may not be to focus entirely just on AI, but to focus on real challenges in which AI is a component and let the cooperative agreements emerge from focusing on common problems including what kinds of AI do we really need to spread around the world in order to achieve the objectives that we all share.
Q7: How can we make sure that the current round of technological innovation, represented by AI, supports inclusive growth across the world, as opposed to making a minority of countries or people more digitally privileged?
Michael SPENCE: That's an important challenge. It's part of the balanced agenda: to focus on widespread benefits as well as widespread accessibility and use. The technology itself is not necessarily problematic, because it's different – it's accessible, and ordinary people can use it. It's not that hard to learn to write prompts.
But if you look at it from the global point of view, we do need some kind of plausible plan that prevents an outcome where the great powers race forward and everybody else stands watching from the sidelines. That's a bit too extreme, because there is a lot of entrepreneurial and innovative activity in a wide range of emerging economies, for example, and even in Europe, which is clearly behind both China and the United States in terms of advanced technology. They'll bring some of this along, unless we introduce extraordinary and currently non-existent barriers.
But there is a real agenda here, and it's part of the agenda of restoring inclusiveness to global growth patterns. The more vulnerable emerging economies face debt distress, potential defaults, and restructuring needs; they have limited fiscal space because the pandemic used it all up; and climate shocks are very difficult for them to deal with – they are difficult for everybody, but especially with limited and shrinking resources. The list of climate shocks that have gotten attention is so long now that it's hard to remember them all. Just south of us, in northeastern Libya, the current estimate is that more than 20,000 people died in floods when dams broke.
This isn't directly related to AI, but I think there is a pattern: we've lost the forward momentum that we had for many years in the global economy in terms of inclusiveness – the rapid growth of emerging economies, the rising middle class and so on. Not everywhere. Now there is a set of countries that are vulnerable and at risk of being left behind. And AI is part of that story. We are moving to economies in which machines do more and more things, whether in production or in the massive service sectors. These economies need new growth models. The labor-intensive production and assembly that drove growth in many countries, including China, at least for a while, may not work as the digital economy grows. And so we need this technology to be spread and adopted across the entire global economy, but as far as I know that's not high on anyone's agenda at the moment. Hopefully we'll get there.
Q8: The 5th Bund Summit features multiple sessions themed around technological innovation and the productivity revolution. Which sessions or topics are of most interest to you? What are your expectations for the event going forward, especially given that you sit on our International Advisory Council?
Michael SPENCE: I'm looking forward to the session I'm in, which is focused on comparing the macroeconomic conditions and responses in the major parts of the global economy – that should be interesting. I look forward to listening to the other participants.
But these multiple sessions on technology transformation – in fintech and finance in general, for example – tend to be an important focus of the Bund Summit. So I'm looking forward to those, though there are just so many interesting sessions.
My expectations are always the same. If you get people with experience and relevant knowledge together and they share their ideas, the main output is that everybody comes away with a new way of seeing the world and a new set of things they might want to pursue that are beneficial not only to themselves and their organizations but also to a larger cause, if I can put it that way.
Unfortunately, the combination of the pandemic and the geopolitical tensions has dramatically reduced the amount of interaction we have. Those interactions are really important. We need to share ideas and know what's going on in each other's backyard. So, the Bund Summit is an important event and organization that promotes that, and I'm glad to be part of it.
Focusing on the topics Michael SPENCE cares most about – technological innovation, technology transformation, and fintech – this year's Bund Summit features plenary sessions, Bund Roundtables, and closed-door meetings, with topics including "Empowering the Real Economy and Jointly Building Sci-Tech Finance", "A Glimpse of the Future: New Technologies Open Up a New World", and "Financial Support for Sci-Tech Innovation and Technology-Driven Financial Innovation". At the Bund Roundtable themed "The Impact of Frontier Technologies on Financial Development and Regulation", participants will exchange views on how new technologies such as AI affect the economy and finance, and how regulatory capacity can be strengthened.
The 5th Bund Summit is co-hosted by the China Finance 40 Forum (CF40) and the China Center for International Economic Exchanges (CCIEE) under the theme "China and the World on a New Journey: Recovery and Challenges". The Summit continues to focus on four themes – green development, international finance, asset management, and fintech – and upholds its "international" and "professional" positioning, contributing to Shanghai's development into an international financial center with global influence, to China's participation in international governance as a constructive force, and to bridging differences, enhancing mutual trust, and building consensus in the international community.
Follow the China Finance 40 Forum WeChat account and the Bund Summit official website for the latest highlights and the full agenda of the 5th Bund Summit.
Layout editor: 马欣雨 | Managing editors: 瑟瑟, 李俊虎
Writers: 宥朗, 瑟瑟 | Translation: 佳茜
Visual design: 李盼, 东子
Production supervisors: 李俊虎, 潘潘