“There are many ways of being smart that aren’t smart like us.” These are the words of Patrick Winston, a leading voice in the field of artificial intelligence. Although his idea is simple, its significance has been lost on most people thinking about the future of work. Yet this is the feature of AI that ought to preoccupy us the most.
From the 1950s to the 1980s, during the “first wave” of AI research, it was generally thought that the best way to build systems capable of performing tasks to the level of human experts or higher was to copy the way that experts worked. But there was a problem: human experts often struggled to articulate how they performed many tasks.

Chess-playing was a good example. When researchers sat down with grandmasters and asked them to explain how they played such fine chess, the answers were useless. Some players appealed to “intuition”, others to “experience”. Many said they did not really know at all. How could researchers build a chess-playing system to beat a grandmaster if the best players themselves could not explain how they were so good?

A turning point came in 1997. Garry Kasparov, the then world chess champion, was beaten by IBM’s supercomputer, Deep Blue. What was most remarkable was how the system did it. Deep Blue did not share Mr Kasparov’s “intuition” or “experience”. It won by dint of sheer processing power and massive data storage capability.
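Deep Blue’s advantage was exhaustive computation rather than human-style intuition. As a minimal sketch of that idea (a toy take-away game rather than chess, and in no way Deep Blue’s actual code), a program can play perfectly simply by searching every line of play to the end:

```python
# Illustrative sketch: exhaustive game-tree search for a toy take-away game
# (players alternately remove 1-3 stones; whoever takes the last stone wins).
# The machine "plays well" not by intuition but by checking every continuation.

from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(stones):
    """Return (move, True) if the player to move can force a win by taking
    1-3 stones, else (any_legal_move, False)."""
    for take in (1, 2, 3):
        if take == stones:
            return take, True        # taking the last stone wins outright
        if take < stones and not best_move(stones - take)[1]:
            return take, True        # this move leaves the opponent losing
    return 1, False                  # every reply loses against best play

print(best_move(10))                 # the search finds the forced win, if any
```

No position is ever “understood”; the program simply enumerates outcomes, which is the sense in which such a system plays a different game from a grandmaster.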
There then followed AI’s “second wave”, which we are in today. Google’s AI, AlphaGo, has just finished a five-game series of Go against Lee Se-dol, perhaps the best player of the game alive. Until recently, most researchers thought we were at least ten years away from a machine victory. Yet AlphaGo beat Mr Lee in four of the five games. It did not have his genius or strategic insight; it relied on what are known as “deep neural networks”, driven, once again, by processing power and data storage. Like Deep Blue, AlphaGo was in a sense playing a different game.
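The phrase “deep neural networks” can sound mysterious, but at bottom such a network is layers of weighted sums passed through simple non-linear functions. The sketch below is a toy illustration in pure Python with invented weights; AlphaGo’s real networks are deep models trained on millions of positions, but the principle of mapping a position encoding to a score is the same:

```python
# Toy "value network" sketch (invented weights, not AlphaGo's architecture):
# a board encoding goes in, a learned score comes out. No strategic insight
# is represented anywhere -- only arithmetic on trained parameters.

import math

def dense(inputs, weights, biases):
    """One fully connected layer: out_j = sum_i(in_i * w[j][i]) + b[j]."""
    return [sum(i * w for i, w in zip(inputs, row)) + b
            for row, b in zip(weights, biases)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def value_network(board, params):
    """Score a position in (0, 1), read as a win probability."""
    h = relu(dense(board, params["w1"], params["b1"]))
    (logit,) = dense(h, params["w2"], params["b2"])
    return 1.0 / (1.0 + math.exp(-logit))  # sigmoid squashes to (0, 1)

# Hypothetical hand-set weights for a 3-feature toy position encoding.
params = {
    "w1": [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]],
    "b1": [0.0, 0.1],
    "w2": [[1.2, -0.7]],
    "b2": [0.05],
}
print(value_network([1.0, 0.0, 0.5], params))
```

In a real system the weights are not hand-set but fitted to vast quantities of game data, which is exactly why processing power and data storage, not human-like reasoning, are what drive it.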
In retrospect, we can see that early researchers made the mistake we now call the “AI fallacy”: they assumed that the only way to perform a task to the standard of a human expert is to replicate the approach of human specialists. Today, many commentators are repeating the same mistake in thinking about the future of work. They fail to realise that in the future systems will out-perform human beings not by copying the best human experts, but by performing tasks in very different ways.
Consider the legal world. Daniel Martin Katz, a law professor, has designed a system to predict the voting behaviour of the US Supreme Court. It can perform as well as most specialists, but it does not mirror the judgement of a human being. Instead it draws on data that captures six decades of Court behaviour.
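The pattern behind a predictor like this can be shown in a few lines: rather than modelling a justice’s reasoning, score each possible outcome by how often it occurred in past cases with overlapping features. The feature names and data below are invented for illustration and are not drawn from Professor Katz’s actual system:

```python
# Illustrative sketch only: outcome prediction from historical case records,
# with hypothetical features and made-up data. The point is the method --
# no legal reasoning is modelled, only regularities in past decisions.

from collections import Counter

# (issue_area, lower_court_ruling, petitioner_type) -> observed outcome
past_cases = [
    (("economic", "liberal", "business"), "reverse"),
    (("economic", "liberal", "business"), "reverse"),
    (("economic", "conservative", "person"), "affirm"),
    (("civil_rights", "conservative", "person"), "reverse"),
    (("civil_rights", "liberal", "state"), "affirm"),
]

def predict(case):
    """Weight each past outcome by how many features it shares with the
    new case, then return the outcome with the highest total weight."""
    scores = Counter()
    for features, outcome in past_cases:
        overlap = sum(a == b for a, b in zip(case, features))
        scores[outcome] += overlap
    return scores.most_common(1)[0][0]

print(predict(("economic", "liberal", "person")))
```

Scaled up to sixty years of real data and richer features, this lookup-and-weigh approach is how a system can match specialists without mirroring anyone’s judgement.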
We see similar developments in other parts of the economy. Millions of people in the US use online tax preparation software, not a personal interaction with an accountant, to file their returns. Autodesk’s “Project Dreamcatcher” generates computerised designs, not by mimicking the creativity of an architect, but by sifting through a vast number of possible designs and selecting the best option. And IBM’s Watson helps to diagnose cancer, not by copying the reasoning of a doctor, but by trawling enormous bodies of medical data.
All this does not herald the “end of work”. Rather, it points to a future that is very different from the one most experts are predicting. It is often said that because machines cannot “think” like human beings, they can never be creative; that because they cannot “reason” like human beings, they can never exercise judgement; or that because they cannot “feel” like human beings they can never be empathetic. For these reasons, it is claimed, there are a great many tasks that will always require human beings to perform them.
But this is to fail to grasp that tomorrow’s systems will handle many tasks that today require creativity, judgement or empathy, not by copying us, but by working in entirely different, unhuman ways. The set of tasks reserved exclusively for human beings is likely to be much smaller than many expect.