
Inside the Industry | Why We Are Still Light Years Away From Full Artificial Intelligence

2016-12-16 德先生



Editor's note: The flood of news coverage and media hype about robots, drones, self-driving cars and other areas of artificial intelligence would have us believe that the AI future is already within reach. But is that really the case? Clara Lu, marketing manager at ViSenze, a well-known Singapore-based AI company, argues that the promised land of artificial intelligence is still nowhere in sight. Our visions of a glorious AI future rest on a flawed premise: that we understand human intelligence and consciousness. In reality, our knowledge of intelligence, consciousness, and even what the human mind is remains in its infancy.




The future is here… or is it?

 

With so many articles flooding the media about how humans are on the cusp of full AI (artificial intelligence), it's no wonder we believe that the future — full of robots, drones and self-driving vehicles, with ever-diminishing human control over these machines — is right on our doorstep.

 

But are we really approaching the singularity as fast as we think we are?

 

It's not hard to have that impression when the likes of Elon Musk, Stephen Hawking, and leading university departments and research centers around the world are deeply concerned about the potential risks posed by AI and are taking action now to avoid a doomsday scenario in the near future. They predict that by the year 2030, machines will develop consciousness through the application of human intelligence.

 

In fact, Dr. Hawking told the BBC, “The development of full artificial intelligence could spell the end of the human race.” In other words, the future is here, and it may soon outstrip us.

 

Yet the truth is, we are far from achieving true AI — something as reactive, dynamic, self-improving and powerful as human intelligence. And I'm not talking about 100 years kind of far, but possibly centuries, millennia, or perhaps we might never get there at all.

 

Here are some reasons.

 

Intelligence does not equate to superintelligence

 

Full AI, or superintelligence, should possess the full range of human cognitive abilities. This includes self-awareness, sentience and consciousness, as these are all features of human cognition.

 

Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”

 

Today's AI exists only in narrow form: each system specializes in one area. For instance, there is AI that can beat the world chess champion at chess, but that is the only thing it does.

 

Even where scientists have built neural networks that mimic the intricate layers through which the brain perceives, analyzes information and builds concepts, they do not know exactly what is going on inside them, or why the networks interpret things the way they do.
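To make the idea of "layers" concrete, here is a minimal sketch (an illustration of the point above, not code from the article) of a tiny feed-forward network in NumPy; the layer sizes, random weights and the forward function are all hypothetical:

```python
# Illustrative sketch only: a tiny two-layer feed-forward network.
import numpy as np

rng = np.random.default_rng(0)

# "Layers" are just weight matrices; information flows input -> hidden -> output.
W1 = rng.normal(size=(4, 8))   # input layer (4 features) -> hidden layer (8 units)
W2 = rng.normal(size=(8, 2))   # hidden layer -> output layer (2 scores)

def forward(x):
    hidden = np.tanh(x @ W1)   # non-linear transformation, loosely brain-inspired
    return hidden @ W2         # output scores

x = rng.normal(size=(1, 4))    # one example with 4 input features
print(forward(x))

# The network's "knowledge" is nothing but these numeric weights.
# Inspecting W1 and W2 tells you almost nothing about *why* a trained
# network makes a given decision -- the interpretability gap described above.
```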

 


 

“I don’t see any sign that we’re close to a singularity,” said Ernest Davis, a New York University computer scientist. “While AI can trounce the best chess or Jeopardy player and do other specialized tasks, it’s still light years behind the average 7-year-old in terms of common sense, vision, language and intuition about how the physical world works.”

 

The failure to recognize the distinction between this narrow intelligence and full AI could be contributing to the existential worries of Hawking and Musk, both of whom believe that we are already well on the path toward developing full AI.

 

Our own understanding of intelligence and superintelligence is limited

 

“To achieve the singularity, it isn’t enough to just run today’s software faster,” Microsoft co-founder Paul Allen wrote in 2011. “We would also need to build smarter and more capable software programs. Creating this kind of advanced software requires a prior scientific understanding of the foundations of human cognition, and we are just scraping the surface of this.”

 

Essentially, the most extreme promises of full AI are based on a flawed premise: that we understand human intelligence and consciousness.

 

Most experts who study the brain and mind generally agree on at least two things: We do not know, concretely and unanimously, what intelligence is, and we do not know what consciousness is.

 

Neuroscience and neuropsychology don’t provide a definition of human intelligence — rather, they have many. Different fields, even different researchers, identify intelligence in disparate terms.

 

Currently, AI experts are working within a specific definition of intelligence, namely the ability to learn, recognize patterns, display emotional behaviors and solve analytical problems. However, this is just one definition of intelligence in a sea of congested, vaguely formed ideas about the nature of cognition.

 

And if we as humans don’t understand intelligence, how do we create computers capable of “intelligence”?

 

The human brain is too complex to duplicate

 

In their attempts to duplicate the human brain and how it works, scientists have tried either to replicate the brain directly or to build systems inspired by it.

 

The human brain has around 100 billion neurons, with on the order of 100 trillion connections between them. So far, the best attempt to artificially map a living brain has come from the OpenWorm project. The team behind it has managed to map the roundworm Caenorhabditis elegans' 302 neurons into a computer simulation that powers the movement of a simple LEGO robot.

 

While a remarkable feat in itself, mechanically recreating a few hundred neurons is light years away from rebuilding an entire human brain, a biologically complex structure of staggering scale — let alone understanding how to synthesize human consciousness and intelligence.
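For a sense of the scale gap being described, here is a back-of-envelope comparison using the figures quoted above (the script itself is purely illustrative):

```python
# Back-of-envelope comparison of the figures mentioned in the text.
worm_neurons  = 302              # C. elegans neurons mapped by OpenWorm
human_neurons = 100_000_000_000  # ~100 billion neurons in a human brain

print(f"scale gap: ~{human_neurons / worm_neurons:,.0f}x")  # ~331,125,828x
```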

 

Naturally, we don’t have to model our AI systems after the human brain; however, if we aim for AI that is smarter than humans, then it is implied that it will at least have to surpass the human brain in one capacity or another. Perhaps most importantly, the brain is also the best benchmark we have available.

 

Limitations in computing power

 

There is a lot of talk in scientific circles about quantum computing being the technology that will push our AI journey forward.

 

Quantum computers have been a dream for many years because conventional computers are neither powerful nor fast enough to simulate the human brain, which is why tech giants like Google have built their own quantum computers specialized for this kind of work.

 

However, quantum computing is still very much a mystery to us, and a notoriously difficult beast to tame. Unlike the bits in a conventional computer, which are always either 0 or 1, a quantum bit (qubit) can exist in a superposition of both states at once. This means that scientists must contend with all the quirky properties of quantum mechanics in order to program quantum computers correctly.
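As a rough illustration of superposition (a hypothetical sketch, not from the article), a single qubit can be written as two amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1:

```python
# Illustrative sketch only: a single qubit as two amplitudes (alpha, beta)
# with |alpha|^2 + |beta|^2 = 1.  A classical bit would be exactly [1, 0]
# ("0") or [0, 1] ("1"); a qubit can sit in between.
import numpy as np

qubit = np.array([1, 1]) / np.sqrt(2)       # equal superposition of 0 and 1

p0, p1 = np.abs(qubit) ** 2                 # Born rule: measurement probabilities
print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")  # 0.50 / 0.50

# Measuring collapses the superposition to a definite 0 or 1 at random;
# much of the difficulty of programming real quantum hardware is keeping
# such fragile states coherent long enough to compute with them.
```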

 

Moreover, test results so far have not been nearly as impressive as Google claimed. Quantum computers are extremely difficult to program and highly unpredictable — a problem that will take quite some time to solve.

 

A huge leap to full AI

 

Technology can advance in unprecedented, accelerating ways; we saw it in the rapid mechanization of the industrial revolution, and in the internet's dramatic transformation of how we communicate.

 

However, our knowledge of intelligence, consciousness and even what the human mind is remains in its infancy, and these gaps in knowledge will surely push back the projected AI timeline.

 

The AI we have built so far is still one step — albeit a large one — away from receiving information and truly understanding it. It is a huge leap from advanced technology to the artificial creation of consciousness.

 

Companies developing anything close to AI technology can only persevere in their search for true AI in the many, many years to come.


Source: TechCrunch





