【Technical】How should the Wi-Fi network at a large launch event be built?


Reposted from an article on Zhihu; a good read.

—————

Deploying a Wi-Fi network is far more complex than most people imagine. Dropping in a few dozen APs does not multiply the bandwidth a few dozen times; on the contrary, naively scattering dozens of APs makes them contend with one another until the network becomes barely usable. A full treatment of the problem would fill a book, so what follows is only a brief and necessarily incomplete outline. Wi-Fi planning for a large event should proceed in these steps:

Site survey: first obtain the venue size, head count, and attendee distribution from the organizer, including a floor plan, and form a rough estimate of the network's scale and layout. Plan for at least one client device per person. The old rule of thumb was 0.5 clients per person, but with smartphones and tablets everywhere, expect the figure to move toward 1.5 to 2: a phone and a laptop or tablet may well be online at the same time.

Bandwidth estimate: a launch event must keep fairly lightweight Internet applications usable, so budget at least 500 kbps of usable bandwidth per device. On top of that, consider the nature of the event. At a press conference many people upload video, so the allocation must be revisited: each attendee should have at least one device guaranteed 1 Mbps. A small panel discussion needs far less. The figure below lists the bandwidth typically required by common applications.

From these two steps you can compute each zone's bandwidth demand; the next step is AP planning. Although 802.11g nominally offers 54 Mbps, at most about 25 Mbps is actually usable, meaning at most 50 devices browsing the web simultaneously (and at that point client contention already makes the experience miserable, so halve the number in practice). 802.11n guarantees only about 35 Mbps to most phones, but 70 Mbps or more to laptops and other MIMO-capable clients. On these numbers, mark on the map the area each AP should cover. To keep the experience acceptable, in practice cap each AP at half the device count just computed.
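The arithmetic above can be sketched in a few lines. This is an illustrative calculation only; the function name and defaults are my own, with the 500 kbps per-device budget, the 25 Mbps usable 11g capacity, and the 50% derating taken from the text.

```python
import math

def aps_needed(attendees, devices_per_person=1.5,
               kbps_per_device=500, usable_mbps_per_ap=25.0,
               derate=0.5):
    """Rough AP count: total demand divided by the derated per-AP capacity."""
    devices = attendees * devices_per_person
    total_mbps = devices * kbps_per_device / 1000.0
    effective = usable_mbps_per_ap * derate   # halve capacity for contention
    return math.ceil(total_mbps / effective)

print(aps_needed(3000, devices_per_person=0.5))  # old 0.5-device rule: 60 APs
print(aps_needed(3000))                          # 1.5-device estimate: 180 APs
```

With the older 0.5-devices-per-person assumption, a 3,000-person event already needs dozens of 11g APs, which is in the same range as the "around 100, at least 50" estimate given later in the article.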


Next, assign channels. Only the 2.4 GHz band can be used domestically, and although it nominally has 11 channels (13 in some countries), only channels 1, 6, and 11 are mutually non-overlapping. Tile the coverage map with these three channels so that same-channel cells overlap as little as possible. Where non-overlapping coverage cannot be achieved, consider sector antennas to subdivide the area into sectors.
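The channel-tiling step is essentially graph colouring with three colours. Below is a minimal greedy sketch; the AP names and the adjacency input are hypothetical, and real planners also weigh signal overlap, not just adjacency.

```python
def assign_channels(neighbors):
    """Greedily give each AP one of the non-overlapping 2.4 GHz channels.

    neighbors: dict mapping an AP name to the set of APs it interferes with.
    """
    channels = {}
    for ap in sorted(neighbors):               # deterministic order
        used = {channels[n] for n in neighbors[ap] if n in channels}
        free = [c for c in (1, 6, 11) if c not in used]
        # If all three channels are taken nearby, reuse channel 1 and plan
        # to split that area with sector antennas instead.
        channels[ap] = free[0] if free else 1
    return channels

plan = assign_channels({
    "A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}, "D": {"C"},
})
```

For the fully-connected triangle A-B-C the three channels are all needed; D only neighbours C, so it can reuse channel 1.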


With channels assigned, deploy the wireless network on site (in practice a site survey, accounting for walls and all kinds of reflectors, should precede the theoretical work above; it is omitted here for simplicity). Prefer high-gain antennas, but turn each AP's transmit power down so that its footprint matches the planned cell. Note that more power is not better: each AP should cover its own planned area and nothing beyond it. Field-measurement tools may be needed to evaluate the deployment and catch dead spots caused by multipath interference. The figure below contrasts an 11g and an 11n deployment over the same area; red marks poor coverage, and 11n clearly suppresses multipath far better. In a few years, compatibility permitting, leave 11g mode off entirely at deployment; for now, compensate by tuning the 11g APs' power and antennas, or even by adding extra APs.

·Wired network planning and deployment: every 11g AP should have at least a 100 Mbps uplink, and every 11n AP a gigabit uplink. The Internet egress must also have enough capacity in both directions: upstream for sending out live material, downstream for idle browsing and reference lookups, reserved as some fraction of the total computed in step two (depending on the nature of the event). Domestically, also plan for egress to multiple carriers.

·SSID allocation: except in the rare case where users have seats assigned in advance, there is no way to pin users to a particular AP, so the common practice is to give every AP the same SSID. This raises thin-AP and AP-controller management questions, which differ from vendor to vendor and are not enumerated here.

·User authentication and bandwidth control: to keep freeloaders off the network, give users at least basic authentication, for example handing out usernames and passwords against admission tickets, and cap each account's bandwidth usage. Both authentication and bandwidth management usually require additional servers.

·Rejecting weak-signal clients: thanks to @魏冰然 and @曹梦迪; our exchanges convinced me this point is important enough to add separately. Use the client signal strength measured at each AP to steer clients to a suitable AP: if an AP can hear a client but too weakly to sustain some floor data rate, refuse the association from that AP, so that one lumbering teammate cannot hog the airtime (in the time he sends 1 bit you could send 54!) and drag the whole AP down to its lowest rate.
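The admission rule described here can be sketched as a simple RSSI gate. The dBm thresholds and rate floors below are illustrative values of my own choosing, not figures from the article; real controllers use vendor-specific tables.

```python
# Illustrative mapping: minimum RSSI (dBm) needed to hold a given rate (Mbps).
RATE_FLOOR = {-70: 24, -77: 12, -82: 6}

def admit(rssi_dbm, min_rate_mbps=12):
    """Admit a client only if its signal supports at least min_rate_mbps."""
    for threshold, rate in sorted(RATE_FLOOR.items(), reverse=True):
        if rssi_dbm >= threshold:
            return rate >= min_rate_mbps
    return False   # too weak for any rate we are willing to serve
```

A client at -65 dBm is admitted; one at -80 dBm would only manage the 6 Mbps floor and is refused, leaving it to associate with a closer AP instead.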


Only then is even a fairly simple Wi-Fi network fully deployed. The link you gave no longer resolves, so going by news reports that 3,000 people attended, my guess is that the organizer underestimated attendee bandwidth demand, for some combination of these reasons:

·Too few APs (around 100 were needed, 50 at the very least)

·Poor AP planning (too many packet collisions)

·Or overloaded APs (too many users per AP)

·Or an overloaded AP controller (unable to service that many AP association/disassociation requests at once)

·Or an overloaded authentication server (unable to authenticate that many users at once)

·Or too little egress capacity (by my estimate, at least 300 Mbps up and 300 Mbps down)

·Chaotic management (no per-client traffic caps).

 

The Super Bowl's organizers had to serve 73,208 users and deployed 700-odd access points capable of supporting 30,000 simultaneous connections (a lower ratio, but at a game that exciting, people spend far less time staring at their phones than they would at a press conference).

American classrooms are frightening too: wall-to-wall Apple laptops, and that's before counting the tablets and phones in everyone's bags!

 


Facebook invents an intelligence test for machines

Source: http://www.newscientist.com

John is in the playground. Bob is in the office. Where is John? If you know the answer, you’re either a human, or software taking its first steps towards full artificial intelligence. Researchers at Facebook’s AI lab in New York say an exam of simple questions like this could help in designing machines that think like people.

Computing pioneer Alan Turing famously set his own test for AI, in which a human tries to sort other humans from machines by conversing with both. However, this approach has a downside.

“The Turing test requires us to teach the machine skills that are not actually useful for us,” says Matthew Richardson, an AI researcher at Microsoft. For example, to pass the test an AI must learn to lie about its true nature and pretend not to know facts a human wouldn’t.

These skills are no use to Facebook, which is looking for more sophisticated ways to filter your news feed. “People have a limited amount of time to spend on Facebook, so we have to curate that somehow,” says Yann LeCun, Facebook’s director of AI research. “For that you need to understand content and you need to understand people.”

In the longer term, Facebook also wants to create a digital assistant that can handle a real dialogue with humans, unlike the scripted conversations possible with the likes of Apple’s Siri.

Similar goals are driving AI researchers everywhere to develop more comprehensive exams to challenge their machines. Facebook itself has created 20 tasks, which get progressively harder – the example at the top of this article is of the easiest type. The team says any potential AI must pass all of them if it is ever to develop true intelligence.

Each task involves short descriptions followed by some questions, a bit like a reading comprehension quiz. Harder examples include figuring out whether one object could fit inside another, or why a person might act a certain way. “We wanted tasks that any human who can read can answer,” says Facebook’s Jason Weston, who led the research.

Having a range of questions challenges the AI in different ways, meaning systems that have a single strength fall short.

The Facebook team used its exam to test a number of learning algorithms, and found that none managed full marks. The best performance was by a variant of a neural network with access to an external memory, an approach that Google’s AI subsidiary DeepMind is also investigating. But even this fell down on tasks like counting objects in a question or spatial reasoning.

Richardson has also developed a test of AI reading comprehension, called MCTest. But the questions in MCTest are written by hand, whereas Facebook’s are automatically generated.

The details for Facebook’s tasks are plucked from a simulation of a simple world, a little like an old-school text adventure, where characters move around and pick up objects. Weston says this is key to keeping questions fresh for repeated testing and learning.

But such testing has its problems, says Peter Clark of the Allen Institute for Artificial Intelligence in Seattle, because the AI doesn’t need to understand what real-world objects the words relate to. “You can substitute a dummy word like ‘foobar’ for ‘cake’ and still be able to answer the question,” he says. His own approach, Aristo, attempts to quiz AI with questions taken from school science exams.

Whatever the best approach, it’s clear that tech companies like Facebook and Microsoft are betting big on human-level AI. Should we be worried? Recently the likes of Stephen Hawking, Elon Musk and even Bill Gates have warned that AI researchers must tread carefully.

LeCun acknowledges people’s fears, but says that the research is still at an early stage, and is conducted in the open. “All machines are still very dumb and we are still very much in control,” he says. “It’s not like some company is going to come out with the solution to AI all of a sudden and we’re going to have super-intelligent machines running around the internet.”


Test Hillstone's intelligent next-generation firewall, get a free iPhone 6

On 5 February, Hillstone Networks, a leading Chinese network-security vendor, released a new version of its intelligent next-generation firewall (NGFW). President and CEO Luo Dongping personally presented Hillstone's latest results in defending against unknown threats, and announced an open call for 20 network administrators to test the new release free of charge.

The call is broad: any network administrator or network-security enthusiast may apply. Signing up is simple: answer a few questions on the registration page and fill in genuine personal information. Applications close on 20 March, after which a review panel will select 20 testers for the final test. Hillstone will give each final tester an iPhone 6.

The test covers all of the intelligent NGFW's headline features, such as unknown-threat analysis, abnormal-traffic monitoring, application identification, system management, network monitoring, and the full range of security functions. Hillstone hopes testers will run the product in real environments and carry out the relevant feature tests under the guidance of Hillstone engineers.

The person in charge of the programme said Hillstone mainly wants feedback from users on the product, so testers are expected to take part in the related feedback surveys and offer test suggestions. As for the selection criteria, she said Hillstone is looking for testers who have a real test environment in which the firewall can be deployed in tap or inline mode, who have basic networking knowledge and experience with network-security appliances, who can configure the device (with guidance if needed), and who are willing to share and discuss the whole testing process.

Hillstone's intelligent NGFW identifies unknown threats through behaviour-based threat analysis, helping customers catch the 0-day exploits, APTs, and malware variants that the next-generation firewalls and "sandboxing" currently on the market miss, flushing out hidden risks on the intranet and limiting damage before an unknown threat strikes. The new release debuts two intelligent engines, an unknown-threat detection engine and an abnormal-behaviour detection engine, together with a brand-new security-visualisation interface and policy linkage as its highlights.


Indoor positioning is poised to transform retail

'Indoor positioning brings Internet-style tracking into physical space.'

 

 

 

You have just tossed a jar of peanut butter into your cart when your smartphone chimes. You glance at the screen: "Save $1 on jelly." It feels like mind-reading.

Convenient? Certainly. Creepy? A little.

That is one possible picture of indoor positioning, a fast-developing technology that already lets retailers track shoppers' movements through a store with unprecedented precision. Many big-box stores are now fitted with equipment that sniffs out shoppers' smartphones and logs their movements: if someone lingers a few minutes in the shoe section, the store knows it.

 

The technology finally gives brick-and-mortar retailers the ability to compete with online stores, where behavioural advertising mines a visitor's browsing history to recommend products. Before long, a pharmacy or home-improvement store will be able to pitch paper towels or lumber the same way.

"Before a shopper reached the checkout, we used to know very little about what they did inside the store," says Todd Sherman, chief marketing officer at Point Inside, a startup in Bellevue, Washington. "With indoor positioning you can see what they're interested in and where they go." Point Inside is one of some twenty companies that have raised venture capital to improve indoor tracking and advertising.

 

US retailers including Nordstrom, Family Dollar, and American Apparel have already trialled a variety of indoor positioning systems, which locate shoppers using cameras, sound waves, even magnetic fields. In September 2013, Apple added a feature called iBeacon to its smartphones: the phone emits a Bluetooth Low Energy signal, likewise designed for indoor use.

 

The most widely used method today intercepts the Wi-Fi signals given off by shoppers' smartphones and triangulates each phone's position to within a few metres. Every phone carries a unique identifier, its MAC address; by collecting it, a store can build up a behavioural profile of its repeat customers.
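A toy version of this method: convert each sensor's RSSI reading into an approximate distance with a log-distance path-loss model, then intersect three distance circles to locate the phone. The radio constants here are assumptions for illustration; real systems fit them per site and smooth heavily.

```python
def rssi_to_metres(rssi, tx_at_1m=-40.0, path_loss_exp=2.0):
    """Log-distance path-loss model (constants are illustrative)."""
    return 10 ** ((tx_at_1m - rssi) / (10 * path_loss_exp))

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Closed-form intersection of three distance circles in 2-D."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = 2 * (x2 - x1); b = 2 * (y2 - y1)
    c = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d = 2 * (x3 - x2); e = 2 * (y3 - y2)
    f = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    x = (c * e - f * b) / (e * a - b * d)
    y = (c * d - a * f) / (b * d - a * e)
    return x, y
```

With three sensors at (0, 0), (10, 0), and (0, 10) metres and exact distances, the phone at (3, 4) is recovered; with noisy RSSI-derived distances, the answer lands within a few metres, which matches the accuracy quoted above.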

 

Forest City Enterprises, which owns or manages nearly 20 shopping centres, is using phone signals to monitor foot traffic in most of them. The data, the company says, helps it decide where to move escalators that impede shoppers, and records how long visitors linger after a fashion show or a concert. Stephanie Shriver-Engdahl, Forest City's vice president of digital strategy, says the company wants answers to questions such as "Do customers buy one soda and head straight to their car, or do they stick around?" In future, she says, foot-traffic data could inform the rents the company charges.

Indoor positioning remains a hard problem, and it may not take off the way some of its boosters hope. "The technology exists and could spread, but marketers won't necessarily take it up," says Greg Sterling, an analyst at Opus Research. "Some of the promise of indoor location may never be realised."

Don Dodge, a Google executive who has invested in several indoor-positioning companies, thinks otherwise: the technology will be "bigger than GPS or web mapping," he argues, because people spend most of their time indoors, where GPS signals are usually too weak to be usable.

Google has already extended Google Maps indoors, mapping museums, airports, and large malls in 17 countries, including Tai Po Mega Mall in Hong Kong. Google evidently expects indoor maps to come into their own once its head-mounted computer, Google Glass, goes on sale. "Indoor location is going to be huge," Dodge says. "It will disrupt retail and couponing like nothing before."

 

Before then, though, retailers may have to weather a privacy debate. Nordstrom suffered a public-relations stumble last year: it used a Wi-Fi system built by Euclid Analytics to track shoppers' movements in 17 stores, and customers who read the notice posted at the entrance promptly complained of an invasion of privacy.

Nordstrom says the trial ended months ago. "By and large it went smoothly. We learned some things and moved on," says spokesman Colin Johnson. "But we also recognised the need to keep testing and opening up new capabilities; that's the only way to keep improving and keep up with our customers."

Since the Nordstrom episode, retailers have been reluctant to admit they use indoor positioning. Yet RetailNext, which offers "comprehensive in-store analytics," says its products are in use in thousands of stores belonging to 100 large retailers, and Euclid Analytics likewise claims 100 customers, Home Depot among them.


Google's executive chairman: the Internet will disappear, and the Internet of Things will be able to do everything!

Introduction

The Internet will disappear and the Internet of Things (IoT) will be able to do everything: so declared Google's executive chairman recently. Accenture, for its part, projects that the industrial IoT will create $14.2 trillion of new output worldwide by 2030. Startling indeed; but if you work in real estate you may feel like swearing, because never mind the IoT, property's march onto the mobile Internet has barely begun, and here is Google's executive chairman saying the Internet itself is about to vanish. It is hard not to feel left behind. Then again, real estate, for all its enormous output, is still crudely operated and managed compared with manufacturing, and its mobile-Internet-driven transformation has only just started. In another part of the map, however, buildings and residential communities are key venues and platforms for the IoT. Once property development, smart homes, smart buildings, and the mobile Internet converge, this will be a major IoT market. A better life becomes "anything is possible" precisely through such cross-domain fusion.

Just as the Internet concept was riding high in China's capital markets, the Davos forum in Switzerland handed down its death sentence. Eric Schmidt, executive chairman of Internet giant Google, boldly predicted at a recent panel that the Internet will disappear and a highly personalised, highly interactive, and fascinating world, the Internet of Things, will be born. Coming from him, the remark is self-subverting. "I will answer very simply that the Internet will disappear," he said.

I. The Internet will disappear; the IoT will be able to do everything

Schmidt said there will be a huge number of IP addresses, sensors, and wearables, and things you interact with without even sensing them, with you all the time. "Imagine you walk into a room, and the room is dynamic. And with your permission and all of that, you are interacting with the things going on in the room."

The change, he said, is an unprecedented opportunity for technology companies: "The world will become very personal, very interactive, and very, very interesting." In his view, "all bets right now are on the smartphone-app infrastructure; a whole new set of competitors seem poised to deliver apps for smartphones, which have become supercomputers. I view that as a completely open market."

The US market-research firm Gartner forecasts that by 2020 the IoT will generate $30 billion a year in market profit, with 25 billion devices connected to it and the number still growing fast. That enormous potential has become a new growth engine for US technology companies: Cisco, AT&T, Axeda, Amazon, Apple, General Electric, Google, IBM and others are all racing for leadership of the IoT industry.

 

II. How the high-tech 500 are scrambling into the IoT

At the 2015 International CES, which closed on 9 January, the IoT was one of the biggest attractions. Smart-home, digital-health, and connected-car products showed IoT technology genuinely serving everyday smart living.

"The Internet of Things is not a trend; it is reality." In his CES keynote, Samsung Electronics president and CEO Boo-keun Yoon made the IoT a core direction for Samsung, and disclosed a timetable: by 2017 all Samsung televisions will be IoT devices, and within five years all Samsung hardware will support the IoT.

Chip giant Qualcomm likewise unveiled its IoT plans at CES. President Derek Aberle said Qualcomm has brought 15 IoT devices to more than 30 countries, spanning digital eyewear, child trackers, smartwatches and more, and will use the smartphone as a pivot to expand into connected cars, healthcare, and wearables.

Manufacturing giants also want to stake out leadership in the IoT. Last October, General Electric announced an IoT alliance with a group of technology heavyweights, seeking broad support for its Predix platform, software designed to make IoT endpoints of every kind intelligent.

Other collaborations are under way worldwide. Intel has teamed up with the city of San Jose, playing to its strengths to advance the city's "Green Vision" plan. Gregg Berkeley, Intel's global director of IoT business development and sales, said Intel is in talks with twenty or thirty partners around the world, some in Asia and some across Europe, on using Intel's IoT technology to build smart cities.

 

III. How exactly do the IoT and the Internet differ?

As an extension of the Internet, the IoT uses communication technology to link sensors, controllers, machines, people, and things in new ways, connecting people to things and things to things. Its twin needs, cloud computing on the information side and sensing devices on the physical side, make alliances across the industry an inevitable trend, and open unlimited possibilities for practical applications.

Over the past year, cloud computing and big data continued to ferment, and the IoT emerged as another major trend. Many readers are unsure how the IoT and the Internet are related, so let's take a look. What is the Internet? It is the vast network formed by linking networks to one another. The IoT, from the English "the Internet of Things," is the network in which things are connected to things.

The IoT is defined as a network that uses information-sensing devices such as RFID, infrared sensors, GPS, and laser scanners to connect any object to the Internet under agreed protocols, exchanging information and communicating so that objects can be intelligently identified, located, tracked, monitored, and managed. Put simply, the IoT is a ubiquitous network built on top of the Internet. Its essential foundation and core is still the Internet: over wired and wireless networks fused with the Internet, information about objects is transmitted accurately and in real time.

IV. The IoT is a new arena, one vastly bigger than the Internet

Over twenty-odd years, the Internet solved information sharing and interaction, overturned many traditional business models almost overnight, and turned product sellers into sellers of content and services: a remarkable industrial achievement. Lei Jun said long ago: "In the future there will be no so-called Internet companies; every company will become an IoT company." Big enough an arena?

In terms of the division of labour, though, the Internet is only one part of the IoT, chiefly the IT-services part. Because the IoT "connects everything" ("connecting everything" is the first signpost to the future that Pony Ma proposed at the 2013 WE Summit), it has many properties the Internet lacks. The Internet already connects all people and all information content and provides standardised services; the IoT must additionally contend with the fusion of heterogeneous hardware, diverse application scenarios, and differences in people's habits. Compared with the Internet, the IoT needs deeper content and services and more differentiated, more human-centred applications, in keeping with our never-ending pursuit of a better service experience, an unchanging, hard demand.

One can therefore assert that in the future every company will be an IoT company: enjoying the conveniences of the IoT, using IoT tools and technology, making IoT products, and providing people with IoT services.

V. Key IoT technologies

Looking at the characteristics of the IoT, experts identify three key technologies in its applications:

1. Sensor technology. This is also a key technology in computer applications. As everyone knows, the vast majority of computer processing to date operates on digital signals, so ever since computers have existed, sensors have been needed to convert analogue signals into digital ones that the computer can process.

2. RFID tags. Also a kind of sensor technology, RFID fuses radio-frequency technology with embedded technology, and has broad application prospects in automatic identification and in goods and logistics management.

3. Embedded systems. A composite technology spanning computer hardware and software, sensor technology, integrated-circuit technology, and applied electronics. After decades of evolution, intelligent terminals built on embedded systems are everywhere, from the MP3 player at your side to satellite systems in aerospace. Embedded systems are changing daily life and driving industrial production and the defence industry. If we liken the IoT to a human body, the sensors are the eyes, nose, skin and other sense organs; the network is the nervous system that carries information; and the embedded system is the brain, classifying and processing the information it receives. The analogy vividly captures the place and role of sensors and embedded systems in the IoT.

In short, the IoT concept builds on the Internet concept, extending and expanding the user end to communication between any objects, so that objects can exchange information with one another. The IoT and the Internet are thus mutually dependent. (Compiled from Yang Yang's report for The Paper, "Google's executive chairman boldly predicts: the Internet will disappear and the IoT will be able to do everything," and from East Silicon Valley.)


【Liu Ting】Natural language processing and intelligent question answering

Excerpted from the Weibo of Yang Jing (Lillian)

【Liu Ting】Professor at Harbin Institute of Technology (HIT) and director of its Research Center for Social Computing and Information Retrieval; deputy dean of HIT's School of Computer Science, 2010-2014. Council member of the China Computer Federation (CCF) and former deputy chair of CCF YOCSEF headquarters; executive council member of the Chinese Information Processing Society of China and chair of its Social Media Processing committee. Served as an expert on the steering group of the 11th Five-Year-Plan national 863 key project "Multilingual processing technology centred on Chinese," and was selected for the Ministry of Education's New Century Excellent Talents programme in 2012. His main research directions are social computing, information retrieval, and natural language processing. He has completed, or is carrying out, more than 20 national-level projects (973 subprojects, NSFC key projects, 863 projects, and others), has published over 80 papers in major journals and conferences at home and abroad, and won the 2010 Qian Weichang First Prize for Chinese Information Processing Science and Technology and a 2012 Heilongjiang Province Second Prize for Technological Invention.

【Liu Ting】Hello everyone, I'm Liu Ting of HIT. My thanks to our host Yang Jing for this online-sharing opportunity. On 1-2 November 2014, the Third National Conference on Social Media Processing (SMP 2014) was held in Beijing, with 12 invited talks and an audience of more than 800; the conference gave a thorough picture of research progress in social media processing, and participation was very enthusiastic. SMP 2015 will be held at South China University of Technology in Guangzhou in November 2015; you are all welcome to follow it.

Tonight I'd like to talk mainly about topics related to natural language processing and intelligent question answering, since these are probably closest to the "Jing Salon" theme of artificial intelligence. Experts and group members, please join the discussion and correct me where I'm wrong.

 

IBM Watson and intelligent question-answering systems

 

【Yang Jing lillian】Professor Liu has made his mark in natural language processing and data mining. Tencent, Baidu, IBM, iFLYTEK, ZTE and other companies all work with him, and he has built a film box-office prediction system based on Sina Weibo.

In recent years, IBM and other companies have shifted their strategic centre to cognitive computing, and Watson is in essence an intelligent question-answering system. Professor Liu, could you share your R&D experience in this area?

 

【Liu Ting】Our lab is HIT's Research Center for Social Computing and Information Retrieval; our technical ideal is "understand language, understand society." In other words, our main research directions are natural language processing (NLP) and NLP-based social computing; in this session I'll focus on NLP.

In 1950, Turing published his epoch-making paper "Can machines think?" and proposed the famous Turing test as the criterion for whether a machine has human intelligence. In 2011, IBM's deep question-answering system Watson (DeepQA), named after the company's founder, defeated the top human players on Jeopardy!, America's most popular quiz show.

【Bai Shuo】Is "deep" judged from outward impressions, or does it take certain internal capabilities for a system to count as deep?

【Liu Ting】Professor Bai, I'd say depth comes in degrees. Watson's so-called "deep QA" is deep relative to earlier keyword search, and only to a limited extent. The questions Watson handles are all simple factoid questions in fairly regular form, such as "Who was the US president during World War II?"

【Bai Shuo】Ask who the wife of the US president during World War II was, and I suspect it would struggle.

【Liu Ting】In the same vein, in 2011 Apple shipped the Siri voice dialogue system in the iPhone 4S, causing a stir in the industry. Baidu, iFLYTEK, and Sogou followed with similar voice assistants. Recently, though, user activity on voice assistants has been unremarkable; they never met the expectation of becoming the mainstream form of search on mobile.

At some Internet companies, the voice assistant is now essentially in maintenance mode rather than a main product direction. The core reasons: first, speech technology is relatively mature, but language technology still has plenty of room to improve, so the assistants cannot understand and answer users' free-form questions; second, for everyday lookups, menu-and-touch interaction is simply more convenient.

So neither IBM Watson nor Apple Siri is anywhere near human-level language understanding; intelligent question answering still has a long road ahead.

 

【Hu Yingzhi】@Liu Ting We surveyed this question. I don't know how it is abroad, but most people here feel a bit awkward muttering at their phone in public, and they often have to fix misrecognised words. If it's a single question and answer, a direct phone call is easier; no voice assistant needed.

【Liu Ting】IBM Watson is expanding into medicine, law, and other domains, bringing in more inference mechanisms. Cognitive computing has become the most important banner IBM has raised since Smarter Planet and services computing.

【Yang Jing lillian】So the deep QA system has turned into a smart medical system. Why has no Chinese company built this kind of cognitive-computing medical system?

【Liu Ting】I believe similar systems from Chinese companies will appear before long; Baidu's "Xiaodu" robot recently appearing on Jiangsu TV's "Open Sesame" is a start. For now, though, China's Internet companies seem more interested in micro-innovation and business-model innovation than in investing in technology-dense products and services that take years of accumulation. IBM spent four years building Watson with a concentrated team of experts and engineers, including scholars from some top US universities; that "years spent sharpening one sword" approach is worth learning from.

 

【Yang Jing lillian】One question. Baidu's material says the Xiaodu robot does natural language processing on top of speech recognition, while Watson used visual recognition (scanning the questions off the screen). How did Watson actually take the questions?

【Liu Ting】Watson accepts neither speech nor video input, so during the match the organisers had to feed the question text into Watson for it to understand. Moreover, Watson used only data it had already stored and was not connected to the Internet during play; staying offline also avoided any suspicion of cheating. That said, with today's OCR, having the machine scan printed questions would not be hard either.

【Yang Jing lillian】I see. But it would connect to its own servers, right? Can Watson be thought of as a small supercomputer?

【Bai Shuo】Meaning the servers were also deployed at the venue.

 

【Luo Shengmei】Professor Liu, what are the core technologies of what IBM calls cognitive computing?

【Liu Ting】Mr. Luo, for IBM's core cognitive-computing technologies you can consult recent talks by IBM experts, for example the report by Dr. Shen Xiaowei, director of IBM Research China, at the 2014 China National Computer Congress (CNCC).

 

 

Gaokao robots and human-like intelligent systems

 

【Liu Ting】The national 863 programme is pushing forward a project on a human-like exam-answering system, with the goal of sitting China's college entrance examination (gaokao) in three years. During evaluation, the system is likewise barred from the Internet; the supporting technology it needs for answering must be stored in the exam robot's memory beforehand.

【Yang Jing lillian】That's iFLYTEK's gaokao project. HIT and iFLYTEK have a joint lab; is it doing the related research?

【Liu Ting】The human-like exam system currently being planned under 863 comprises 9 topics, evaluated against the liberal-arts gaokao. iFLYTEK vice president Hu Yu is the chief scientist, and Professor Qin Bing of our lab leads the Chinese-language-paper answering system. Chinese is the hardest subject: reading comprehension and essay writing demand abilities such as inference and creativity.

【Liu Ting】Why launch projects like Watson and the gaokao robot at all? It starts from the shortcomings of search engines. Their success over massive data has masked the problems of semantics. In massive-scale retrieval, a larger volume of data can by itself raise accuracy. Ask "Who is the author of Gone with the Wind?": if the indexed text only says "Mitchell wrote Gone with the Wind," keyword matching can hardly produce the answer. But the Web's data is massive and redundant; as text keeps accumulating, some document will eventually say "The author of Gone with the Wind is the American writer Mitchell," and then simple surface matching suffices to connect question and answer. Massive data has thus temporarily papered over deep problems we never actually solved, semantics among them.
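Liu's point about redundancy can be shown with a toy retriever: a surface pattern fails against "Mitchell wrote Gone with the Wind," but succeeds as soon as the corpus happens to restate the fact in the question's own phrasing. The corpus and the pattern here are my own illustration, not a real QA system.

```python
import re

corpus = [
    "Mitchell wrote Gone with the Wind.",                      # no surface match
    "The author of Gone with the Wind is Margaret Mitchell.",  # redundant restatement
]

def answer(question_pattern, docs):
    """Return the first capture of question_pattern found in any document."""
    for doc in docs:
        m = re.search(question_pattern, doc)
        if m:
            return m.group(1)
    return None

who = answer(r"author of Gone with the Wind is ([A-Z][\w ]+?)\.", corpus)
```

Against the first sentence alone the pattern returns nothing, which is exactly the gap that deeper semantic analysis (or inference, for questions like the Kennedy one below) is meant to close.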

【Bai Shuo】"What year was the author of Gone with the Wind born?" is the same: it papers over inference and context linking.

【Yang Jing lillian】Could it be that with enough massive data, the answer you want can always be found somewhere?

【Bai Shuo】No.

【Liu Ting】In search engines, massive data masks the lack of intelligent inference; but in fine-grained knowledge QA of the gaokao kind, massive data alone is far from enough. That lays the deeper difficulties of language understanding and inference bare before researchers, and pushes the technology to develop in depth.

For example, a user asks: "Who was the British prime minister when Kennedy was president?" Quite possibly no page on the entire Web answers this directly; it has to be obtained by inference. People actually want to ask far more complex questions all the time, but because search engines understand only keywords, free-form questions degenerate into keyword queries.

 

【Hu Yingzhi】So question-answering bots like Microsoft's Xiaoice are a rather primitive form?

【Liu Ting】QA systems come in two broad kinds. One aims at knowledge acquisition or task completion: finishing the task and ending the dialogue quickly marks success. The other aims at chat and emotional connection: success is keeping the conversation going, making users feel the machine across from them has human common sense and feelings, and offering emotional comfort. We would class Microsoft's Cortana as the former and Xiaoice as the latter.

【Hu Benli】Words and language are only a partial expression of the concepts in the mind.

【Yang Jing lillian】Some background material:

According to Japan's Asahi Shimbun, "Torobo-kun," the robot whose goal is to pass the University of Tokyo entrance examination by 2021, sat Japan's national university-entrance mock exam this autumn. Its score is still far below Todai's bar, but it improved on last year.

Torobo-kun is an artificial-intelligence project launched in 2011 by Japan's National Institute of Informatics and other institutions, with the goal of "getting into" the University of Tokyo by fiscal 2021. This was its second mock exam, following last year's.

According to Yoyogi Seminar, which ran the mock exam, the compulsory subjects cover 7 areas including English, Japanese, mathematics, world history, Japanese history, and physics, with a full score of 900 (200 each for English and Japanese, 100 for each of the others). Torobo-kun scored 386; its deviation score (a standard Japanese measure of a student's ability relative to the mean) was 47.3, up from last year's 45.1.

With this score, it is estimated that Torobo-kun would have an 80 percent or better chance of admission to 472 of Japan's 581 private universities. The researchers judge its academic ability "probably already on par with an ordinary high-school senior."

Its English and Japanese scores improved, suggesting a liberal-arts bent. NTT took part in developing the English component, deploying not only its database of 100 billion words but also technology from its smartphone dialogue apps. In dialogue-type cloze questions, for example, Torobo-kun now judges from the tone of the conversation or the speaker's feelings, which lifted its score. Still, a Yoyogi Seminar official said: "If the target is Todai, it must score 90 percent. Frankly, Torobo-kun needs to work much harder."

Its science, however, is clearly weak. On function problems in mathematics, Torobo-kun cannot sketch the figure on a graph the way a person does, because it is incapable of intuitive understanding. The same goes for problems about the motion of objects: idealisations such as ignoring an object's size or assuming zero friction are beyond it, reportedly because it considers such assumptions utterly impossible in reality.

Beyond the 7 compulsory subjects, Torobo-kun also sat politics and economics, where it could not understand notions such as "democracy." Textbooks, it is said, do not spell out social common sense like majority rule and simple-majority voting, so Torobo-kun is unfamiliar with them, and consequently cannot grasp concepts such as social justice either.

The project's leader, Professor Noriko Arai of the National Institute of Informatics, says: "Probing the limits of artificial intelligence is, you might say, the purpose of this project. Working out how people and machines can get along well is a key to Japan's economic growth."

 

【Liu Ting】We noticed the story our host just shared too. Japan's fifth-generation computer project failed, yet the Japanese remain passionate about robotics and AI; getting a robot into the University of Tokyo by 2021 is an exciting goal.

【Bai Shuo】Better to ask it the other way round: what does the Fifth Generation project's failure tell us about today's AI fever?

【Liu Ting】Attention to AI advances in waves. In my view, expectations for AI are currently running high; once this crest passes, scholars will be prompted to reflect more calmly.

 

【Yang Jing lillian】By that logic, our robots should then be able to get into Peking University and Tsinghua?

【Liu Ting】Getting into PKU or Tsinghua is the mark of very high intelligence and extremely difficult. For that vision to become reality will take close cooperation between academics and industrial R&D people, and will also depend on further improvement of the computing environment over the years ahead.

【Yang Jing lillian】iFLYTEK's gaokao robot is a liberal-arts student and won't sit the science track? Does that make natural language processing the step at which machines come closest to human intelligence?

【Liu Ting】A liberal-arts student.

【Bai Shuo】Don't even think about the science track. Getting elementary-school word problems right is hard enough already.

【Yang Jing lillian】A strange paradox: computers with such formidable compute can't even do word problems...

【Liu Ting】Mathematicians I've spoken to say that as long as a word problem is accurately converted into mathematical formulas, they have plenty of ways to make the machine solve it automatically. So even for a science subject like mathematics, understanding the language remains the key obstacle.

【Yang Jing lillian】So gaokao robots can only be liberal-arts students for the next 20 years? But why can Japan put a robot into Todai in 2021, also in liberal arts?

【Liu Ting】Japan's 2021 goal is also the liberal-arts track, the same as China's.

【Yang Jing lillian】Which amply explains why the first people machines replace are journalists and other liberal-arts types...

 

Why can't robots learn common sense?

 

【Hu Benli】Moreover, natural language is imprecise; making a machine that can only compute precisely express itself imprecisely is harder than the other way round.

【Bai Shuo】Word problems rest on a mass of latent common-sense assumptions. A person knows them without being told; a machine doesn't know unless it is told.

【Yang Jing lillian】Surely common sense can be learned?

【Zhou Zhihua】Common sense is the problem Turing Award winner John McCarthy spent the second half of his life on. Pessimistically speaking, I don't expect to see hope of it before I retire. Just passing through, saw common sense being discussed, thought I'd say a word.

【Yang Jing lillian】@Zhou Zhihua You mean there's no hope of machines learning common sense within 20 years?

【Zhou Zhihua】I haven't even seen an approach that feels promising. Of course, one can't rule out some prodigy suddenly being born who parts the clouds.

【Bai Shuo】Acquiring common sense is harder than reasoning over it.

【Liu Ting】On common sense, my view is this: common sense in theory and knowledge acquisition in engineering practice may differ considerably. As researchers in applied technology, we are relatively optimistic about common-sense knowledge acquisition.

Collective intelligence keeps contributing knowledge at scale, through Wikipedia, Baidu Knows, and the like; Google's Knowledge Graph automatically distils knowledge from these crowd-sourced natural-language descriptions, and has made striking progress.

【Bai Shuo】I put it misleadingly. Explicit common sense you need only tell the machine; implicit common sense is the kind you discover only when a problem bites and you realise you never told the machine. So acquiring explicit common sense challenges stamina, not intellect, while acquiring implicit common sense still challenges intellect today.

 

 

【Yang Jing lillian】If machines cannot learn common sense, how can they diagnose patients? Language understanding is hard enough, but reasoning from common sense looks even harder, to the point of being considered impossible.

【Yang Jing lillian】Then why do Hawking and Tesla CEO Musk keep crying that the sky is falling? What is there to fear from an "artificial intelligence" that cannot even possess common sense?

【Liu Ting】On 8 June 2014, a computer passed the Turing test for the first time: the bot "Eugene Goostman," playing a 13-year-old Ukrainian boy, was judged to be human by 33 percent of the judges in an international Turing-test competition.

【Liu Ting】Scholars now point out that in Turing tests the machines deliberately imitate human behaviour, down to slow mental arithmetic and slips of the tongue; impersonating a Ukrainian teenager uses a non-native language to cover disfluency, and a young age to cover gaps in knowledge.

【Wang Tao, iQiyi】The boxy robot in Interstellar converses with real wit and humour. What key problems still have to be solved to reach that level of intelligence? Language understanding, the capacity for humorous dialogue...

【Liu Ting】One of the core problems of intelligent question answering is semantic analysis of natural language.

【Bai Shuo】I once posed a concrete problem and asked Professor Sun Maosong to pass it to the deep-learning heavyweights; I never heard how they responded. The problem: take palindromic strings as positive examples and non-palindromes as negatives, and use deep learning to learn a classifier that recognises palindromes.

 

Affective computing and box-office prediction

 

【Wang Tao, iQiyi】Is deep learning effective for semantic analysis? Or must one rely on the traditional machinery of knowledge bases and inference?

【Liu Ting】Deep learning has in recent years become a research wave across speech, vision, and natural language processing, drawing broad attention from academia and industry. Compared with its successes in speech and vision, its application to NLP has so far produced only initial results.

Among the NLP technologies underpinning intelligent QA, current hot topics include semantic analysis, affective computing, and textual entailment, while techniques for irony, metaphor, humour, and astroturfing detection are attracting more and more researchers.

EMNLP, one of the field's major international conferences, was this year jokingly dubbed "EmbeddingNLP." (Note: embedding techniques are deep learning's most visible presence in NLP.)

Natural language is itself an abstract representation of human cognition; compared with low-level input signals like speech and images, it already has considerable representational power. It is therefore understandable that deep learning's help to NLP has been less immediate than its help to speech and vision.

Our lab's Language Technology Platform (LTP), developed over more than a decade, is now open source and offered as a cloud service known as Language Cloud. Interested group members can try out the current state of syntactic and semantic analysis in its online demo: http://www.ltp-cloud.com

 

【Yang Jing lillian】Affective computing, now that's fun. Could you quantify my WeChat friends by sentiment and rank them?

【Liu Ting】Sentiment analysis is a current hot spot in NLP. Before social media rose, language processing concentrated on texts about objective facts, such as news corpora. With social media, the public pours out its feelings online: praise and criticism of social events and product quality, and joy, anger, sorrow, fear, or surprise about trending topics.

Today's sentiment-analysis techniques can compute your followers' degree of emotional attachment to you, and the proportions supporting or opposing each of your views. Our lab has built a Weibo emotion map, http://qx.8wss.com/, which analyses large volumes of Weibo text in real time to observe how netizens' emotions in different regions shift with events of every kind.
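A minimal lexicon-vote sketch of the emotion-classification idea described here; the lexicon is a toy of my own, and systems like LTP or the emotion map use far richer models than word lookup.

```python
from collections import Counter

# Toy emotion lexicon: word -> emotion label (my own illustrative choices).
EMOTION_LEXICON = {
    "happy": "joy", "delighted": "joy",
    "angry": "anger", "furious": "anger",
    "afraid": "fear", "sad": "sadness",
}

def dominant_emotion(text):
    """Label a post with the most frequent emotion among its lexicon hits."""
    hits = Counter(EMOTION_LEXICON[w] for w in text.lower().split()
                   if w in EMOTION_LEXICON)
    return hits.most_common(1)[0][0] if hits else "neutral"
```

Bai Shuo's objection below is exactly what this sketch lacks: it counts emotion words without working out whom or what the emotion targets.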

Type "happy birthday" in WeChat today, and birthday cakes flutter down the screen. In future, whenever the text of your chat carries an emotion, the machine will be able to recognise it and match it with animation and music.

Machines that can understand, and even simulate, human emotion are an important step in the march toward human-like systems.

 

【Hu Benli】A deep account of how natural language is produced and understood will have to wait for brain science, including an understanding of the brain's cognitive processes and mechanisms; without discovery and understanding through simulation, breakthrough progress will be hard to come by.

 

【Yang Jing lillian】Emotional attachment is an interesting measure. One could smart-filter WeChat groups: keep those with high attachment, delete those with low. Same for public accounts. Professor Liu, do you then see affective computing as one of the breakthrough directions for cognitive computing?

【Zhu Jin】@Yang Jing lillian Forgive my bluntness, but a machine's smart filtering is bound to be a feeble-minded decision. As long as programming in its present form exists, creation in the true sense is hard to imagine.

【Bai Shuo】For attachment, you first have to work out whom, and what, a given sentiment is aimed at. Opposing corrupt officials doesn't mean opposing the emperor; opposing the group's black sheep doesn't mean opposing the group owner.

【Liu Ting】Ha, Professor Bai is describing the problem of identifying the opinion target (say, "the car"), and a target also has multiple aspects ("the car's styling, power, fuel economy," and so on).

【Liu Ting】On the cognitive computing our host just raised: we see four advanced stages of computing: perceptual computing, cognitive computing, decision computing, and creative computing.

Speech recognition and image recognition belong to the perceptual layer; language understanding, image and video understanding, knowledge inference, and data mining belong to cognitive computing. Once humanity understands the world and discerns its regularities, the next need is to predict society's future course and support decisions; higher still is creative computing, for example the automatic essay writing we are developing. Emotion is closely bound to cognition and should sit in the cognitive layer.

We have been researching Chinese box-office prediction for more than two years, and Baidu recently began predicting box office too; that research belongs to decision computing.

【Yang Jing lillian】Baidu's prediction for The Golden Era reportedly met its Waterloo. With that film, the chief problem was an art-house picture marketed as a mass-market one, overestimating the market's receptiveness.

【Liu Ting】For The Golden Era, Baidu predicted 230 million yuan; our lab's "Eight-Dimensional Social Space-Time" (http://yc.8wss.com) predicted 80 million; the actual box office was 52 million. We are now working on stock prediction: investor sentiment reflected in social media provides new data support for it, and the link between major breaking events and price moves is another important instrument.

Professor Bai is chief engineer of the Shanghai Stock Exchange as well as a top expert in computing, the most authoritative voice on computational technology in the securities markets; in this research we will have to learn from him.

【Yang Jing lillian】On Professor Bai's reasoning, quantitative trading should gradually displace retail traders, so the influence of sentiment should keep shrinking; at least its weight won't stay as high as it used to be.

【Bai Shuo】Say rather that sentiment is now fully exposed within range of the quantitative arsenal.

 

【Liu Ting】On box-office prediction: we use consumption-intent recognition based on semantic analysis of natural language to count, fairly accurately and before a film opens, the crowd on Weibo expressing the intention to see it. That is one cornerstone of our lab's predictor.

【Zhu Jin】Suppose a film is wretchedly made, but nobody has seen it yet and the producers market it in the usual way: can the machine predict that the box office will be terrible? Simplest case of all: release exactly the same content as a new film under a similar name; will the machine predict the same box office as the first time? And a third time under yet another name? The title is impressive; all the publicity is impressive. The point is: shouldn't the predicting machine watch the film first before it guesses? And can the machine really understand a film?

【Bai Shuo】Zhu, the people buying tickets are by and large people who haven't seen it. For decision-making, starting from the behaviour of those who haven't seen it is quite understandable.

【Liu Ting】Box-office prediction does sometimes misfire, mainly because of aggressive Weibo marketing by the producers, the impact of competing releases, ill-timed public statements by the film's principals, and the like.

Our lab is also researching causal analysis. In Big Data, Mayer-Schönberger argues that correlation is what matters and causation can be ignored; we believe that mining causal relations will play a crucial role in human decision-making and deserves deep study.

For instance, if The Golden Era underperformed because, as Yang Jing puts it, "an art film was marketed as a mass-market film," then how do we use big data to verify that this was really the principal cause, and whether other hidden, important causes remain undiscovered? The answer would give future film marketing vital decision support.

 

【Yang Jing lillian】The market is sometimes irrational. Just look at Lost in Thailand, or films like Tiny Times. Somehow, low-brow domestic films best suit mass taste, yet among American blockbusters even Interstellar can sweep China. Word-of-mouth analysis probably matters too, not just publicity: friends' opinions shape viewing decisions, as do Mtime and Douban ratings.

【Wang Tao, iQiyi】What our host says matches what I hear chatting with iQiyi colleagues. We bought Transformers: Age of Extinction this year and viewing was middling, while the cheap Lost in Thailand the masses loved. Tiny Times was designed for the post-90s generation, So Young for the post-80s: that is why their box office ran hot.

【Yang Jing lillian】Possibly two distinct audience markets; a cross-analysis is needed.

【Bai Shuo】The hits all share common traits, but the traits have nothing to do with IQ. The masses are no fools, though not all of them are highbrow either. Even viewed from on high, a low-quality film can have perfectly good reasons to be embraced by the masses. Relativity again.

 

 

【Bai Shuo】What worries me is that the whole prediction field is trending toward judging a method by single cases; measures like precision and recall have vanished. A very dangerous sign.

【Zhu Jin】@Bai Shuo As I understand it, the prediction is made before the premiere. Day-one box office may match the machine's prediction, but moviegoers are no fools: the moment the first screening ends, word spreads that the film stinks and isn't worth seeing. Will later audiences still crowd into the cinema the way the prediction says? To my mind, the key to box office is still the film's quality, and the quality is unknown until the film is seen.

【Liu Ting】@Zhu Jin Short-term prediction is easy and long-term prediction hard, because many factors interfere as events unfold. There are two kinds of prediction: ex-ante prediction, and prediction of the next stage from what has already been learned as events progress.

【Zhu Jin】@Liu Ting Funny, I'd have thought the long term easier to guess, because time smooths out the fluctuations.

 

 

【Yang Jing lillian】Professor Liu, to sum up: which breakthroughs in cognitive computing are you most optimistic about? A judgment of the trend from your vantage point, please.

【Liu Ting】I am a researcher in NLP and social media processing, so my field of view is limited.

Trends in NLP technology: 1. from syntactic analysis toward deep semantic analysis; 2. from single-sentence analysis toward discourse (wider context); 3. bringing in the human factor, acquiring knowledge through crowdsourcing and similar means; 4. from the analysis of objective facts to subjective affective computing; 5. the application in NLP of machine-learning techniques, deep learning above all.

The liberal-arts gaokao robot is only a means of testing intelligence and advancing the field; once its technology is cracked, it will, like Watson or even better, drive a series of major applications in education, healthcare, and other domains.

These views represent not just me but several colleagues in my lab: Professor Qin Bing and Dr. Zhao Yanyan in text mining and sentiment analysis, Associate Professor Che Wanxiang in NLP, Professor Zhang Yu and Dr. Zhang Weinan in question answering, and PhD student Ding Xiao and lecturer Jing Dong in social media processing. I hope the experts here will give them pointers in the future.

 


弯曲直说 (Tektalk Straight Talk): A Conversation on Turing, Episode 1


Revealed: Chai Jing's smog investigation "Under the Dome" (complete)


Facebook AI Director Yann LeCun on His Quest to Unleash Deep Learning and Make Machines Smarter

Source: http://spectrum.ieee.org

Artificial intelligence has gone through some dismal periods, which those in the field gloomily refer to as “AI winters.” This is not one of those times; in fact, AI is so hot right now that tech giants like Google, Facebook, Apple, Baidu, and Microsoft are battling for the leading minds in the field. The current excitement about AI stems, in great part, from groundbreaking advances involving what are known as “convolutional neural networks.” This machine learning technique promises dramatic improvements in things like computer vision, speech recognition, and natural language processing. You probably have heard of it by its more layperson-friendly name: “Deep Learning.”

Few people have been more closely associated with Deep Learning than Yann LeCun, 54. Working as a Bell Labs researcher during the late 1980s, LeCun developed the convolutional network technique and showed how it could be used to significantly improve handwriting recognition; many of the checks written in the United States are now processed with his approach. Between the mid-1990s and the late 2000s, when neural networks had fallen out of favor, LeCun was one of a handful of scientists who persevered with them. He became a professor at New York University in 2003, and has since spearheaded many other Deep Learning advances.


More recently, Deep Learning and its related fields grew to become one of the most active areas in computer research. Which is one reason that at the end of 2013, LeCun was appointed head of the newly-created Artificial Intelligence Research Lab at Facebook, though he continues with his NYU duties.


LeCun was born in France, and retains from his native country a sense of the importance of the role of the “public intellectual.” He writes and speaks frequently in his technical areas, of course, but is also not afraid to opine outside his field, including about current events.

IEEE Spectrum contributor Lee Gomes spoke with LeCun at his Facebook office in New York City. The following has been edited and condensed for clarity.

IEEE Spectrum: We read about Deep Learning in the news a lot these days. What’s your least favorite definition of the term that you see in these stories?

Yann LeCun:
My least favorite description is, “It works just like the brain.” I don’t like people saying this because, while Deep Learning gets an inspiration from biology, it’s very, very far from what the brain actually does. And describing it like the brain gives a bit of the aura of magic to it, which is dangerous. It leads to hype; people claim things that are not true. AI has gone through a number of AI winters because people claimed things they couldn’t deliver.

Spectrum: So if you were a reporter covering a Deep Learning announcement, and had just eight words to describe it, which is usually all a newspaper reporter might get, what would you say?

LeCun: I need to think about this. [Long pause.] I think it would be “machines that learn to represent the world.” That’s eight words. Perhaps another way to put it would be “end-to-end machine learning.” Wait, it’s only five words and I need to kind of unpack this. [Pause.] It’s the idea that every component, every stage in a learning machine can be trained.

Spectrum: Your editor is not going to like that.

LeCun:
Yeah, the public wouldn’t understand what I meant. Oh, okay. Here’s another way. You could think of Deep Learning as the building of learning machines, say pattern recognition systems or whatever, by assembling lots of modules or elements that all train the same way. So there is a single principle to train everything. But again, that’s a lot more than eight words.

Spectrum: What can a Deep Learning system do that other machine learning systems can’t do?

LeCun: That may be a better question. Previous systems, which I guess we could call “shallow learning systems,” were limited in the complexity of the functions they could compute. So if you want a shallow learning algorithm like a “linear classifier” to recognize images, you will need to feed it with a suitable “vector of features” extracted from the image. But designing a feature extractor “by hand” is very difficult and time consuming.

An alternative is to use a more flexible classifier, such as a “support vector machine” or a two-layer neural network fed directly with the pixels of the image. The problem is that it’s not going to be able to recognize objects to any degree of accuracy, unless you make it so gigantically big that it becomes impractical.

Spectrum: It doesn’t sound like a very easy explanation. And that’s why reporters trying to describe Deep Learning end up saying…

LeCun: …that it’s like the brain.

Spectrum: Part of the problem is that machine learning is a surprisingly inaccessible area to people not working in the field. Plenty of educated lay people understand semi-technical computing topics, like, say, the PageRank algorithm that Google uses. But I’d bet that only professionals know anything detailed about linear classifiers or vector machines. Is that because the field is inherently complicated?

LeCun: Actually, I think the basics of machine learning are quite simple to understand. I’ve explained this to high-school students and school teachers without putting too many of them to sleep.

A pattern recognition system is like a black box with a camera at one end, a green light and a red light on top, and a whole bunch of knobs on the front. The learning algorithm tries to adjust the knobs so that when, say, a dog is in front of the camera, the red light turns on, and when a car is put in front of the camera, the green light turns on. You show a dog to the machine. If the red light is bright, don’t do anything. If it’s dim, tweak the knobs so that the light gets brighter. If the green light turns on, tweak the knobs so that it gets dimmer. Then show a car, and tweak the knobs so that the red light get dimmer and the green light gets brighter. If you show many examples of the cars and dogs, and you keep adjusting the knobs just a little bit each time, eventually the machine will get the right answer every time.

The interesting thing is that it may also correctly classify cars and dogs it has never seen before. The trick is to figure out in which direction to tweak each knob and by how much without actually fiddling with them. This involves computing a “gradient,” which for each knob indicates how the light changes when the knob is tweaked.

Now, imagine a box with 500 million knobs, 1,000 light bulbs, and 10 million images to train it with. That’s what a typical Deep Learning system is.
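LeCun's knob analogy can be run literally: a two-knob linear unit trained by nudging each knob along the gradient of the squared error. This is a toy of mine, not code from the interview; the "red light" is the sign of the weighted sum.

```python
def predict(w, x):
    """+1 lights the 'dog' lamp, -1 the 'car' lamp."""
    s = w[0] * x[0] + w[1] * x[1]
    return 1 if s > 0 else -1

def train(samples, lr=0.1, epochs=20):
    """Tweak the two knobs a little after every example (LMS rule)."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in samples:
            s = w[0] * x[0] + w[1] * x[1]
            err = y - s
            # The gradient tells us which way, and how much, to turn each knob.
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
    return w

# Hypothetical "features": dogs score high on the first knob's input,
# cars on the second.
data = [((1.0, 0.2), 1), ((0.9, 0.1), 1), ((0.1, 1.0), -1), ((0.2, 0.8), -1)]
w = train(data)
```

Scale the two knobs up to 500 million, the two lamps up to 1,000, and the four examples up to 10 million images, and this loop is, in spirit, the Deep Learning system LeCun describes; the gradient for every knob is what backpropagation computes.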

Spectrum: I assume that you use the term “shallow learning” somewhat tongue-in-cheek; I doubt people who work with linear classifiers consider their work “shallow.” Doesn’t the expression “Deep Learning” have an element of PR to it, since it implies that what is “deep” is what is being learned, when in fact the “deep” part is just the number of steps in the system?

LeCun: Yes, it is a bit facetious, but it reflects something real: shallow learning systems have one or two layers, while deep learning systems typically have five to 20 layers. It is not the learning that is shallow or deep, but the architecture that is being trained.

Spectrum: The standard Yann LeCun biography says that you were exploring new approaches to neural networks at a time when they had fallen out of favor. What made you ignore the conventional wisdom and keep at it?

LeCun: I have always been enamored of the idea of being able to train an entire system from end to end. You hit the system with essentially raw input, and because the system has multiple layers, each layer will eventually figure out how to transform the representations produced by the previous layer so that the last layer produces the answer. This idea—that you should integrate learning from end to end so that the machine learns good representations of the data—is what I have been obsessed with for over 30 years.

Spectrum: Is the work you do “hacking,” or is it science? Do you just try things until they work, or do you start with a theoretical insight?

LeCun: It’s very much an interplay between intuitive insights, theoretical modeling, practical implementations, empirical studies, and scientific analyses. The insight is creative thinking, the modeling is mathematics, the implementation is engineering and sheer hacking, the empirical study and the analysis are actual science. What I am most fond of are beautiful and simple theoretical ideas that can be translated into something that works.

I have very little patience for people who do theory about a particular thing simply because it’s easy—particularly if they dismiss other methods that actually work empirically, just because the theory is too difficult. There is a bit of that in the machine learning community. In fact, to some extent, the “Neural Net Winter” during the late 1990s and early 2000s was a consequence of that philosophy; that you had to have ironclad theory, and the empirical results didn’t count. It’s a very bad way to approach an engineering problem.

But there are dangers in the purely empirical approach too. For example, the speech recognition community has traditionally been very empirical, in the sense that the only stuff that’s being paid attention to is how well you are doing on certain benchmarks. And that stifles creativity, because to get to the level where if you want to beat other teams that have been at it for years, you need to go underground for four or five years, building your own infrastructure. That’s very difficult and very risky, and so nobody does it. And so to some extent with the speech recognition community, the progress has been continuous but very incremental, at least until the emergence of Deep Learning in the last few years.

Spectrum: You seem to take pains to distance your work from neuroscience and biology. For example, you talk about “convolutional nets,” and not “convolutional neural nets.” And you talk about “units” in your algorithms, and not “neurons.”

LeCun: That’s true. Some aspects of our models are inspired by neuroscience, but many components are not at all inspired by neuroscience, and instead come from theory, intuition, or empirical exploration. Our models do not aspire to be models of the brain, and we don’t make claims of neural relevance. But at the same time, I’m not afraid to say that the architecture of convolutional nets is inspired by some basic knowledge of the visual cortex. There are people who indirectly get inspiration from neuroscience, but who will not admit it. I admit it. It’s very helpful. But I’m very careful not to use words that could lead to hype. Because there is a huge amount of hype in this area. Which is very dangerous.

Spectrum: Hype is bad, sure, but why do you say it’s “dangerous”?

LeCun: It sets expectations for funding agencies, the public, potential customers, start-ups and investors, such that they believe that we are on the cusp of building systems that are as powerful as the brain, when in fact we are very far from that. This could easily lead to another “winter cycle.”

And then there is a little bit of “cargo cult science” in this. This is a Richard Feynman expression. He talked about cargo cult science to describe things that look like science, but basically are not.

Spectrum: Give me some examples.

LeCun: In a cargo cult, you reproduce the appearance of the machine without understanding the principles behind the machine. You build radio stations out of straw. The cargo cult approach to aeronautics—for actually building airplanes—would be to copy birds very, very closely; feathers, flapping wings, and all the rest. And people did this back in the 19th century, but with very limited success.

The equivalent in AI is to try to copy every detail that we know of about how neurons and synapses work, and then turn on a gigantic simulation of a large neural network inside a supercomputer, and hope that AI will emerge. That’s cargo cult AI. There are very serious people who get a huge amount of money who basically—and of course I’m sort of simplifying here—are pretty close to believing this.

Spectrum: Do you think the IBM True North project is cargo cult science?

LeCun: That would be a little harsh! But I do believe that some of the claims by the IBM group have gone a bit too far and were easily misinterpreted. Some of their announcements look impressive on the surface, but aren’t actually implementing anything useful. Before the True North project, the group used an IBM supercomputer to “simulate a rat-scale brain.” But it was just a random network of neurons that did nothing useful except burn cycles.

The sad thing about the True North chip is that it could have been useful if it had not tried to stick too close to biology and not implemented “spiking integrate-and-fire neurons.” Building a chip is very expensive. So in my opinion—and I used to be a chip designer—you should build a chip only when you’re pretty damn sure it can do something useful. If you build a convolutional net chip—and it’s pretty clear how to do it—it can go into a lot of devices right away. IBM built the wrong thing. They built something that we can’t do anything useful with.

Spectrum: Any other examples?

LeCun: I’m going to get a lot of heat for this, but basically a big chunk of the Human Brain Project in Europe is based on the idea that we should build chips that reproduce the functioning of neurons as closely as possible, and then use them to build a gigantic computer, and somehow when we turn it on with some learning rule, AI will emerge. I think it’s nuts.

Now, what I just said is a caricature of the Human Brain Project, to be sure. And I don’t want to include in my criticism everyone who is involved in the project. A lot of participants are involved simply because it’s a very good source of funding that they can’t afford to pass up.

Spectrum: How much more about machine learning in general remains to be discovered?

LeCun: A lot. The type of learning that we use in actual Deep Learning systems is very restricted. What works in practice in Deep Learning is “supervised” learning. You show a picture to the system, and you tell it it’s a car, and it adjusts its parameters to say “car” next time around. Then you show it a chair. Then a person. And after a few million examples, and after several days or weeks of computing time, depending on the size of the system, it figures it out.
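The loop LeCun describes, show an example, compare the output with the label, adjust the parameters, can be sketched as a toy perceptron in Python. This is a hypothetical illustration on made-up 2-D data, not anything from an actual Deep Learning system:

```python
import random

random.seed(0)

# Toy labeled data: label is 1 when x + y > 1, with a small band around
# the boundary removed so the two classes are cleanly separable.
points = [(random.random(), random.random()) for _ in range(400)]
data = [((x, y), 1 if x + y > 1 else 0)
        for x, y in points if abs(x + y - 1) > 0.1]

w0, w1, b = 0.0, 0.0, 0.0   # the parameters the training loop will adjust
lr = 0.1                     # learning rate

# "Show" each example; when the guess is wrong, nudge the weights toward it.
for _ in range(50):
    for (x, y), label in data:
        guess = 1 if w0 * x + w1 * y + b > 0 else 0
        err = label - guess          # 0 when right, +1 or -1 when wrong
        w0 += lr * err * x
        w1 += lr * err * y
        b += lr * err

accuracy = sum(((w0 * x + w1 * y + b > 0) == (label == 1))
               for (x, y), label in data) / len(data)
print(accuracy)
```

Real systems replace the single linear unit with millions of units in many layers and the error rule with backpropagated gradients, but the show-compare-adjust structure is the same.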

Now, humans and animals don’t learn this way. You’re not told the name of every object you look at when you’re a baby. And yet the notion of objects, the notion that the world is three-dimensional, the notion that when I put an object behind another one, the object is still there—you actually learn those. You’re not born with these concepts; you learn them. We call that type of learning “unsupervised” learning.

A lot of us involved in the resurgence of Deep Learning in the mid-2000s, including Geoff Hinton, Yoshua Bengio, and myself—the so-called “Deep Learning conspiracy”—as well as Andrew Ng, started with the idea of using unsupervised learning more than supervised learning. Unsupervised learning could help “pre-train” very deep networks. We had quite a bit of success with this, but in the end, what ended up actually working in practice was good old supervised learning, but combined with convolutional nets, which we had over 20 years ago.

But from a research point of view, what we’ve been interested in is how to do unsupervised learning properly. We now have unsupervised techniques that actually work. The problem is that you can beat them by just collecting more data, and then using supervised learning. This is why in industry, the applications of Deep Learning are currently all supervised. But it won’t be that way in the future.

The bottom line is that the brain is much better than our model at doing unsupervised learning. That means that our artificial learning systems are missing some very basic principles of biological learning.

Spectrum: What are some of the reasons Facebook was interested in setting up an AI lab?

LeCun: Facebook’s motto is to connect people. Increasingly, that also means connecting people to the digital world. At the end of 2013, when Mark Zuckerberg decided to create Facebook AI Research, the organization I direct, Facebook was about to turn 10 years old. The company thought about what “connecting people” would entail 10 years in the future, and realized that AI would play a key role.

Facebook can potentially show each person on Facebook about 2,000 items per day: posts, pictures, videos, etc. But no one has time for this. Hence Facebook has to automatically select 100 to 150 items that users want to see—or need to see. Doing a good job at this requires understanding people, their tastes, interests, relationships, aspirations and even goals in life. It also requires understanding content: understanding what a post or a comment talks about, what an image or a video contains, etc. Only then can the most relevant content be selected and shown to the person. In a way, doing a perfect job at this is an “AI-complete” problem: it requires understanding people, emotions, culture, art. Much of our work at Facebook AI focuses on devising new theories, principles, methods, and systems to make machines understand images, video, speech, and language—and then to reason about them.

Spectrum: We were talking earlier about hype, and I have a hype complaint of my own. Facebook recently announced a face-verification algorithm called “DeepFace,” with results that were widely reported to involve near-human accuracy in facial recognition. But weren’t those results only true with carefully curated data sets? Would the system have the same success looking at whatever pictures it happened to come across on the Internet?

LeCun: The system is more sensitive to image quality than humans would be, that’s for sure. Humans can recognize faces in a lot of different configurations, with different facial hair and things like that, which computer systems are slightly more sensitive to. But those systems can recognize humans among very large collections of people, much larger collections than humans could handle.

Spectrum: So could DeepFace do a better job of looking through pictures on the Internet and seeing if, say, Obama is in the picture than I could?

LeCun: It will do it faster, that’s for sure.

Spectrum: Would it be more accurate?

LeCun: Probably not. No. But it can potentially recognize people among hundreds of millions. That’s more than I can recognize!

Spectrum: Would it have 97.25 percent accuracy, like it did in the study?

LeCun: It’s hard to quote a number without actually having a data set to test it on. It completely depends on the nature of the data. With hundreds of millions of faces in the gallery, the accuracy is nowhere near 97.25 percent.

Spectrum: One of the problems here seems to be that computer researchers use certain phrases differently than lay people. So when researchers talk about “accuracy rates,” they might be talking about what they get with curated data sets. Whereas lay people might think the computers are looking at the same sorts of random pictures that people look at every day. But the upshot is that claims made for computer systems usually need to be much more qualified than they typically are in news stories.

LeCun: Yes. We work with a number of well-known benchmarks, like Labeled Faces in the Wild that other groups use as well, so as to compare our methods with others. Naturally, we also have internal datasets.

Spectrum: So in general, how close to humans would a computer be at facial recognition, against real pictures like you find on the Internet?

LeCun: It would be pretty close.

Spectrum: Can you attach a number to that?

LeCun: No, I can’t, because there are different scenarios.

Spectrum: How well will Deep Learning do in areas beyond image recognition, especially with issues associated with generalized intelligence, like natural language?

LeCun: A lot of what we are working on at Facebook is in this domain. How do we combine the advantages of Deep Learning, with its ability to represent the world through learning, with things like accumulating knowledge from a temporal signal, which happens with language, with being able to do reasoning, with being able to store knowledge in a different way than current Deep Learning systems store it. Currently with Deep Learning systems, it’s like learning a motor skill. The way we train them is similar to the way you train yourself to ride a bike. You learn a skill, but there’s not a huge amount of factual memory or knowledge involved.

But there are other types of things that you learn where you have to remember facts, where you have to remember things and store them. There’s a lot of work at Facebook, at Google, and at various other places where we’re trying to have a neural net on one side, and then a separate module on the other side that is used as a memory. And that could be used for things like natural language understanding.

We are starting to see impressive results in natural language processing with Deep Learning augmented with a memory module. These systems are based on the idea of representing words and sentences with continuous vectors, transforming these vectors through layers of a deep architecture, and storing them in a kind of associative memory. This works very well for question-answering and for language translation. A particular model of this type called “Memory Network” was recently proposed by Facebook scientists Jason Weston, Sumit Chopra, and Antoine Bordes. A somewhat related idea called the “Neural Turing Machine” was also proposed by scientists at Google/Deep Mind.
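The idea of representing sentences as continuous vectors and storing them in an associative memory can be shown with a deliberately tiny sketch. The "embeddings" below are hand-made for illustration; a real Memory Network learns such vectors end-to-end from data:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cosine(u, v):
    # Similarity between two vectors, independent of their lengths.
    return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

# An associative memory: each stored fact is keyed by its vector.
# (Vectors are invented here; a trained system would learn them.)
memory = {
    "Paris is the capital of France":   [0.9, 0.1, 0.0],
    "Berlin is the capital of Germany": [0.1, 0.9, 0.0],
    "The cat sat on the mat":           [0.0, 0.1, 0.9],
}

def answer(query_vec):
    # Retrieval = return the stored fact whose vector best matches the query.
    return max(memory, key=lambda fact: cosine(memory[fact], query_vec))

print(answer([0.8, 0.2, 0.1]))   # a query pointing in the "France" direction
```

Retrieval by similarity in a continuous space, rather than exact symbol matching, is what lets these systems generalize to paraphrased questions.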

Spectrum: So you don’t think that Deep Learning will be the one tool that will unlock generalized intelligence?

LeCun: It will be part of the solution. And, at some level, the solution will look like a very large and complicated neural net. But it will be very different from what people have seen so far in the literature. You’re starting to see papers on what I am talking about. A lot of people are working on what’s called “recurrent neural nets.” These are networks where the output is fed back to the input, so you can have a chain of reasoning. You can use this to process sequential signals, like speech, audio, video, and language. There are preliminary results that are pretty good. The next frontier for Deep Learning is natural language understanding.
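The defining feature of a recurrent net, feeding the previous state back in alongside each new input, fits in a few lines. This is a toy single-unit cell with hand-picked weights, not a trained model:

```python
import math

def rnn_step(state, x, w_state=0.5, w_in=1.0):
    # The new hidden state mixes the previous state with the current input
    # and squashes the result; this feedback loop is the "recurrence".
    return math.tanh(w_state * state + w_in * x)

state = 0.0
trace = []
for x in [1.0, 0.0, 0.0, 0.0]:   # a single impulse followed by silence
    state = rnn_step(state, x)
    trace.append(round(state, 3))

# The impulse echoes through the state for several steps before fading,
# which is how the network carries information forward in time.
print(trace)
```

Stacking many such units and learning the weights from sequences is what gives recurrent nets their "chain of reasoning" over speech, video, and language.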

Spectrum: If all goes well, what can we expect machines to soon be able to do that they can’t do now?

LeCun: You might perhaps see better speech recognition systems. But they will be kind of hidden. Your “digital companion” will get better. You’ll see better question-answering and dialog systems, so you can converse with your computer; you can ask questions and it will give you answers that come from some knowledge base. You will see better machine translation. Oh, and you will see self-driving cars and smarter robots. Self-driving cars will use convolutional nets.

Spectrum: In preparing for this interview, I asked some people in computing what they’d like to ask you. Oren Etzioni, head of the Allen Institute for Artificial Intelligence, was specifically curious about Winograd Schemas, which involve not only natural language and common sense, but also even an understanding of how contemporary society works. What approaches might a computer take with them?

LeCun: The question here is how to represent knowledge. In “traditional” AI, factual knowledge is entered manually, often in the form of a graph, that is, a set of symbols or entities and relationships. But we all know that AI systems need to be able to acquire knowledge automatically through learning. The question becomes, “How can machines learn to represent relational and factual knowledge?” Deep Learning is certainly part of the solution, but it’s not the whole answer. The problem with symbols is that a symbol is a meaningless string of bits. In Deep Learning systems, entities are represented by large vectors of numbers that are learned from data and represent their properties. Learning to reason comes down to learning functions that operate on these vectors. A number of Facebook researchers, such as Jason Weston, Ronan Collobert, Antoine Bordes, and Tomas Mikolov have pioneered the use of vectors to represent words and language.

Spectrum: One of the classic problems in AI is giving machines common sense. What ideas does the Deep Learning community have about this?

LeCun: I think a form of common sense could be acquired through the use of predictive unsupervised learning. For example, I might get the machine to watch lots of videos where objects are being thrown or dropped. The way I would train it would be to show it a piece of video, and then ask it, “What will happen next? What will the scene look like a second from now?” By training the system to predict what the world is going to be like a second, a minute, an hour, or a day from now, you can train it to acquire good representations of the world. This will allow the machine to know about the constraints of the physical world, such as “Objects thrown in the air tend to fall down after a while,” or “A single object cannot be in two places at the same time,” or “An object is still present while it is occluded by another one.” Knowing the constraints of the world would enable a machine to “fill in the blanks” and predict the state of the world when being told a story containing a series of events. Jason Weston, Sumit Chopra, and Antoine Bordes are working on such systems here at Facebook using the “Memory Network” I mentioned previously.
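A radically simplified stand-in for that video-prediction setup: simulate a falling object, then fit a model that answers "what will the velocity be next?" The model ends up encoding the physical constant that governs the data. Everything here is invented for illustration:

```python
# Simulate an object dropped in gravity, sampled every 0.1 seconds.
dt, g = 0.1, 9.8
velocities = [g * dt * i for i in range(50)]  # speed grows linearly in free fall

# Predictive model: v_next = v_now + delta.  "Training" means choosing the
# delta that minimizes squared prediction error, which is just the mean step.
pairs = list(zip(velocities, velocities[1:]))
delta = sum(v_next - v for v, v_next in pairs) / len(pairs)

# The learned parameter recovers g * dt: by predicting the future, the
# model has implicitly "discovered" gravity from the data alone.
print(round(delta, 3))
```

No one told the model about gravity; predicting the next state forced it to internalize the regularity, which is exactly the bet behind predictive unsupervised learning.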

Spectrum: When discussing human intelligence and consciousness, many scientists often say that we don’t even know what we don’t know. Do you think that’s also true of the effort to build artificial intelligence?

LeCun: It’s hard to tell. I’ve said before that working on AI is like driving in the fog. You see a road and you follow the road, but then suddenly you see a brick wall in front of you. That story has happened over and over again in AI; with the Perceptrons in the ’50s and ’60s, then the syntactic-symbolic approach in the ’70s, and then the expert systems in the ’80s, and then neural nets in the early ’90s, and then graphical models, kernel machines, and things like that. Every time, there is some progress and some new understanding. But there are also limits that need to be overcome.

Spectrum: Here’s another question, this time from Stuart and Hubert Dreyfus, brothers and well-known professors at the University of California, Berkeley: “What do you think of press reports that computers are now robust enough to be able to identify and attack targets on their own, and what do you think about the morality of that?”

LeCun: I don’t think moral questions should be left to scientists alone! There are ethical questions surrounding AI that must be discussed and debated. Eventually, we should establish ethical guidelines as to how AI can and cannot be used. This is not a new problem. Societies have had to deal with ethical questions attached to many powerful technologies, such as nuclear and chemical weapons, nuclear energy, biotechnology, genetic manipulation and cloning, information access. I personally don’t think machines should be able to attack targets without a human making the decision. But again, moral questions such as these should be examined collectively through the democratic/political process.

Spectrum: You often make quite caustic comments about political topics. Do your Facebook handlers worry about that?

LeCun: There are a few things that will push my buttons. One is political decisions that are not based on reality and evidence. I will react any time some important decision is made that is not based on rational decision-making. Smart people can disagree on the best way to solve a problem, but when people disagree on facts that are well established, I think it is very dangerous. That’s what I call people on. It just so happens that in this country, the people who are on the side of irrational decisions and religious-based decisions are mostly on the right. But I also call out people on the left, such as those who think GMOs are all evil—only some GMOs are!—or who are against vaccinations or nuclear energy for irrational reasons. I’m a rationalist. I’m also an atheist and a humanist; I’m not afraid of saying that. My idea of morality is to maximize overall human happiness and minimize human suffering over the long term. These are personal opinions that do not engage my employer. I try to have a clear separation between my personal opinions—which I post on my personal Facebook timeline—and my professional writing, which I post on my public Facebook page.

Spectrum: You’ve already expressed your disagreement with many of the ideas associated with the Singularity movement. I’m interested in your thoughts about its sociology. How do you account for its popularity in Silicon Valley?

LeCun: It’s difficult to say. I’m kind of puzzled by that phenomenon. As Neil Gershenfeld has noted, the first part of a sigmoid looks a lot like an exponential. It’s another way of saying that what currently looks like exponential progress is very likely to hit some limit—physical, economical, societal—then go through an inflection point, and then saturate. I’m an optimist, but I’m also a realist.

There are people that you’d expect to hype the Singularity, like Ray Kurzweil. He’s a futurist. He likes to have this positivist view of the future. He sells a lot of books this way. But he has not contributed anything to the science of AI, as far as I can tell. He’s sold products based on technology, some of which were somewhat innovative, but nothing conceptually new. And certainly he has never written papers that taught the world anything on how to make progress in AI.

Spectrum: What do you think he is going to accomplish in his job at Google?

LeCun: Not much has come out so far.

Spectrum: I often notice when I talk to researchers about the Singularity that while privately they are extremely dismissive of it, in public, they’re much more temperate in their remarks. Is that because so many powerful people in Silicon Valley believe it?

LeCun: AI researchers, down in the trenches, have to strike a delicate balance: be optimistic about what you can achieve, but don’t oversell what you can do. Point out how difficult your job is, but don’t make it sound hopeless. You need to be honest with your funders, sponsors, and employers, with your peers and colleagues, with the public, and with yourself. It is difficult when there is a lot of uncertainty about future progress, and when less honest or more self-deluded people make wild claims of future success. That’s why we don’t like hype: it is made by people who are either dishonest or self-deluded, and makes the life of serious and honest scientists considerably more difficult.

When you are in the kind of position of Larry Page and Sergey Brin and Elon Musk and Mark Zuckerberg, you have to prepare for where technology is going in the long run. And you have a huge amount of resources to make the future happen in a way that you think will be good. So inevitably you have to ask yourself those questions: What will technology be like 10, 20, or 30 years from now? It leads you to think about questions like the progress of AI, the Singularity, and questions of ethics.

Spectrum: Right. But you yourself have a very clear notion of where computers are going to go, and I don’t think you believe we will be downloading our consciousness into them in 30 years.

LeCun: Not anytime soon.

Spectrum: Or ever.

LeCun: No, you can’t say never; technology is advancing very quickly, at an accelerating pace. But there are things that are worth worrying about today, and there are things that are so far out that we can write science fiction about it, but there’s no reason to worry about it just now.

Spectrum: Another question from a researcher. C++ creator Bjarne Stroustrup asks, “You used to have some really cool toys—many of them flying. Do you still have time for hobbies or has your work crowded out the fun?”

LeCun: There is so much fun I can have with the work. But sometimes I need to build things with my hands. This was transmitted to me by my father, an aeronautical engineer. My father and my brother are into building airplanes as well. So when I go on vacation in France, we geek out and build airplanes for three weeks.

Spectrum: What is the plane that is on your Google+ page?

LeCun: It’s a Leduc, and it’s in the Musée de l’Air near Paris. I love that plane. It was the first airplane powered by a ramjet, which is a particular kind of jet engine capable of very high speed. The SR-71 Blackbird, perhaps the fastest plane in the world, uses hybrid ramjet-turbojets. The first Leduc was a prototype that was built in France before World War II, and had to be destroyed before the Germans invaded. Several planes were built after the war. It was a very innovative way of doing things; it was never practical, but it was cool. And it looks great. It’s got this incredible shape, where everything is designed for speed, but at the expense of the convenience of the pilot. The noise from the ramjet must have been unbearable for the pilot.

Spectrum: You tell a funny story in a Web post about running into Murray Gell-Mann years ago, and having him correct you on the pronunciation of your last name. You seemed to be poking gentle fun at the idea of the distinguished-but-pompous senior scientist. Now that you’re becoming quite distinguished yourself, do you worry about turning out the same way?

LeCun: I try not to pull rank. It’s very important when you lead a lab like I do to let young people exercise their creativity. The creativity of old people is based on stuff they know, whereas the creativity of young people is based on stuff they don’t know. Which allows for a little wider exploration. You don’t want to stunt enthusiasm. Interacting with PhD students and young researchers is a very good remedy against hubris. I’m not pompous, I think, and Facebook is a very non-pompous company. So it’s a good fit.


Repost: Mao Yunan steps in, former H3C president Wu Jingchuan is out, and the H3C affair keeps escalating

The biggest story in the telecom industry today is the removal of Wu Jingchuan, the former president of H3C.

The trouble began on January 16, when HP announced that Mao Yunan, chairman of HP China, would also serve as chairman of H3C Technologies. H3C has been through three changes of ownership without its employees sharing in any of the gains, and their anger finally boiled over: this morning, large numbers of employees at the Hangzhou headquarters and the branch offices walked out and rallied to protest HP's appointment, demanding that management hold an all-hands meeting and listen to employees' concerns.

The jokers on WeChat already have an explanation for the strike: "H3C" stands for "Huawei 3 Changes". The company has already changed hands three times, and a fourth change is one too many. On its way from RMB 200 million to RMB 12 billion in sales, H3C went through three changes of ownership:

First: In November 2003, Huawei, anxious to settle its patent dispute with Cisco in the US market, formed the joint venture Huawei-3Com with 3Com. Huawei held 51% and 3Com 49%; for its stake, 3Com contributed US$160 million in cash plus its businesses in China and Japan. At the then exchange rate of about 8 RMB to the dollar, and given that those China and Japan businesses were already in decline, the joint venture's overall valuation was roughly RMB 3 billion.
Second: In November 2006, after several rounds of bidding against Huawei, 3Com acquired 100% of H3C for 1.88 billion, taking sole control; the company's valuation at the time was roughly 3.837 billion.
Third: On November 12, 2009, HP announced a US$2.7 billion (about RMB 17.5 billion) all-cash acquisition of 3Com to enter the telecom equipment market, bringing H3C under the HP umbrella.
From 2003 to 2009, then, H3C's valuation grew nearly six-fold, and 3Com made out handsomely. When HP took over, the word was that H3C would move toward full employee ownership. Instead, according to reports circulating online, HP planned last year to sell H3C to the state-owned China Electronics Corporation (CEC); the reported asking price is US$5 billion for a 51% stake, which would put H3C's total valuation at about RMB 62 billion. If CEC's purchase goes through, HP will have netted RMB 44.5 billion in six years, a 254% gain on its investment.
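The profit claim in the paragraph above is straightforward arithmetic on the article's own figures (amounts in billions of RMB; the roughly 6.3 RMB-per-dollar rate is implied by the article's numbers, not supplied here):

```python
hp_paid = 17.5   # 2009: HP pays US$2.7B (about RMB 17.5B) for 3Com
implied = 62.0   # reported US$5B asking price for 51% values H3C at ~RMB 62B
gain = implied - hp_paid

print(gain)                          # RMB 44.5 billion over six years
print(round(gain / hp_paid * 100))   # about a 254% gain
```

Note the article counts HP's entire 3Com purchase price against H3C's implied value alone, so the true return on the H3C piece would differ somewhat.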

In response to HP's moves, H3C employees staged a work stoppage organized by vice president Wang Wei.

On January 23, H3C vice president Wang Wei left the company over the "H3C affair."

On January 26, HP CEO Meg Whitman announced that Wu Jingchuan would step down as CEO of H3C. Wu, alongside Mao Yunan and Matt Greenly, would sit on the H3C board as vice chairman and serve as an adviser to Whitman on networking strategy in China and globally. Whitman also announced the appointment of Cao Xiangying, previously H3C's chief operating officer, as CEO, effective immediately.

On February 9, a signing ceremony was held in Hangzhou marking DT Dream's (数梦工场) move into Yunqi Town. Hangzhou DT Dream Technology Co., founded in February 2015 to provide big-data services, was started by Wang Wei, the former H3C vice president and head of marketing.

On February 14, Valentine's Day, came the dramatic ending:

The H3C board has decided to terminate all relationships between H3C and Wu Jingchuan, effective immediately. HP has likewise terminated all of its relationships with her. Under the applicable law, Ms. Wu remains bound by her non-compete and non-solicitation obligations, and H3C will take active steps to ensure these obligations are honored.

The H3C Board of Directors, February 14, 2015
