大数据跟“所有人”什么关系(下) ——写给“普通人”看的“Big Data Concerns You How”



废话不多说,续前文。

 

三、哎呀我说人类啊

本小节标题请用二手玫瑰乐队《命运(生存)》一曲中“哎呀我说命运呐”对应的旋律来发声。

彩笔黄是个很矫情的人,表现之一是不喜欢回答一些明明属于哲学范畴、却被世俗的搞不清楚状况的人问出来,然后还不得不回答的问题。比如常见于门卫大哥小哥的“你是谁?”、“你从哪里来?”、“你要到哪里去?”,[抓狂]真的是完全不知道要如何开口。

随着社会生活内容的增多,这种奇葩问题的来源也越来越多,招架不住。比如美容店、理发店或者美甲的地方,这类服务往往要以小时计(一困就是若干小时的节奏)。服务人员通常会找话题。常见的除了“你从哪里来?”之外,笔者遭遇最多的就是“你是做什么的?”

而这个问题,真!的!很!复!杂!

简单地说,笔者是做“数据分析”的。但在笔者看来,这四个字说!了!等!于!没!说!啊!笔者很抵触这样明摆着会造成误解的答案。可是,第一次被问时不说,便会有“下次”。这些天真的人们不知道有没有想清楚便执着地强迫我给出答案(我不知道他们为什么觉得我一定要说,大约正如他们不知道我为什么会不想说)。于是彩笔只得勉强挤出那四个字“数据分析”。

然而,出人意料,彩笔给出答案之后,对方会仿佛什么都知道了似的“哦~~~~~”(“我还以为是什么呢,这个我知道啊”之类的)。这让笔者很不舒服。

彩笔黄深知自己给不出可以让人“哦~~~~~”的答案(并深刻质疑这个世界上究竟是否存在对数据分析这个职业的一致的认识)。So,作为信息源的我并没有输出“数据分析”的清晰认识,为什么接收方会认为自己“知道了”呢?彩笔很痛苦。

痛点1:我热爱数据分析这个职业,但是谈到对“数据分析”本体的认识,目前的现状还很复杂,不是百废待兴,而是群魔乱舞;彩笔不知道为什么要在一个休闲放松的时候、如何跟一个处理身体美容的孩子介绍这个话题(我知道是我考虑问题太严肃了,我就是不想敷衍)。

痛点2:在我明明没有说清楚的前提下,那些人,凭什么以为自己理解了!(彩笔觉得这很荒谬,也让彩笔很气愤)

 

说了这么一大段有的没的(但绝对不是可有可无的),是为了引出下面要用到的这种先进的价值观:

1. 世界上也许根本没有communication,只有每个人兀自地talking。就像笃信外星人存在的组织和个体不间断地向外太空发送信号一样,漫无目的,却满怀希望。地球上的人类个体之间是不是也是这样:我们都以为自己会被理解而不停地向外界发送有关我们自己的描述。因为视野中从不缺少“同类”,所以我们会过高地估计被理解的可能性。以为自己是在跟对面的人communicating,但其实只不过是大家各自你说一句、我说一句(如果有人固执地把这种行为叫做communication)。

2. 所以我们根本没资格谈论understanding,在生存或相处中,我们的选项只有一个:compromise。

 

如此粗略地介绍一个道听途说、但是很伟大的价值观,彩笔无非是想表达:即便你找到一个人(你认为的、或你认为业内公认的能够把大数据的前世今生说清道明的人,或者不是一个人,而是一个可以实现上述功能的虚拟对象),TA说的,你又能领悟多少(当然你永远可以觉得,你领悟了,或多或少)?

 

四、一个虚构的例子

究竟什么是大数据?让我们看个虚构的例子(网上转载,出处不明,如有需要,原作者可与本人联系):

某披萨店的电话铃响了,客服人员拿起电话。

客服:***披萨店。您好,请问有什么需要我为您服务?

顾客:你好,我需要一份……

客服:先生,麻烦您告诉我您的会员卡号。

顾客:16846146***

客服:陈先生,您好!您是住在泉州路一号12楼1205室,您家的电话是2646****,您公司电话是4666****,您的手机是1391234****。请问您想用哪一个电话付费?

顾客:你为什么知道我所有的电话号码?

客服:陈先生,因为我们联机到CRM系统。

顾客:我想要一个海鲜披萨。

客服:陈先生,海鲜披萨不适合您。

顾客:为什么?

客服:根据您的医疗记录,您的血压和胆固醇都偏高。

顾客:那你们有什么可以推荐的?

客服:您可以试试我们的低脂健康披萨。

顾客:你怎么知道我会喜欢吃这种的?

客服:您上星期一在中央图书馆借了一本《低脂健康食谱》。

顾客:好,那我要一个家庭特大号披萨,要付多少钱?

客服:99元,这个足够您一家六口吃了。但您母亲应该少吃,她上个月刚刚做了心脏搭桥手术,还处在恢复期。

顾客:那可以刷卡吗?

客服:陈先生,对不起。请您付现款。因为您的信用卡已经刷爆了,您现在还欠银行4807元,而且还不包括房贷利息。

顾客:那我先去附近的提款机提款。

客服:陈先生,根据您的记录,您已经超过今日提款限额。

顾客:算了,你们直接把披萨送到我家吧,家里有现金。你们多久会送到?

客服:大约30分钟。如果您不想等,可以自己骑车来。

顾客:为什么?

客服:根据我们CRM全球定位系统的车辆行驶自动跟踪系统记录,您登记有一辆车号为SB-748的摩托车,而目前您正在解放路东段华联商场右侧骑着这辆摩托车。

顾客当场晕倒。

 

这是个极一般的网络段子而已,语言很粗糙,逻辑也有很多经不起推敲的地方。但话糙理不糙,其中涉及一个社会人日常生活的若干方面:经济信息、健康信息、实时的位置信息,还有现代社会基础服务如图书馆,甚至包括个体之间的关系信息(或曰户籍信息?),以及基础规则(比如健康与饮食、历史经济信息与将来的经济行为的对应等)。任何一个人(一般人/普通人),都很难讲自己的信息完全游离于上述系统之外。
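上面段子里客服“什么都知道”的效果,本质上只是按同一个标识(比如会员卡号)把分散在各个系统里的记录拼在一起。下面是一个极简的示意(数据、字段与函数名均为虚构,仅表达思路,绝非任何真实CRM系统的实现):

```python
def join_records(member_id, *sources):
    """按 member_id 把多个数据源中的记录汇总成一份画像。"""
    profile = {"member_id": member_id}
    for source in sources:
        # 每个数据源都是 {会员号: 记录} 的映射;查不到就跳过
        profile.update(source.get(member_id, {}))
    return profile

# 虚构的三个“系统”:CRM、医疗记录、图书馆借阅
crm = {"16846146": {"name": "陈先生", "phone": "1391234xxxx"}}
health = {"16846146": {"blood_pressure": "偏高", "cholesterol": "偏高"}}
library = {"16846146": {"recent_book": "《低脂健康食谱》"}}

profile = join_records("16846146", crm, health, library)
```

真实系统当然复杂得多,但“按同一个ID关联多源数据”这个骨架是一样的,段子里的每一句“我们知道”,对应的都是多拼进来一个数据源。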

 

所以,对于大多数人,大数据是一种全新的服务形式,而已。LET IT FLOW,放松全身心,JUST ENJOY IT,不需要想太多。

 

写在最后的话

管中窥豹,可见一斑。简单地说,这篇文章的意思是:如果你不知道大数据是什么,那么,说不定你压根就不需要知道大数据是什么。或者,你知道的也是不准确的。而且即便找到“别人”跟你说,你也不会明白。对于其工作逻辑来讲,就这么回事儿。把它当作你的苹果手机:你不需要因为不了解它的供应链、制造商的运作而感到焦虑,你会用它打电话、发短信、接入互联网享受五彩斑斓的Web服务,就够了。


大数据跟“所有人”什么关系(上) ——写给“普通人”看的“Big Data Concerns You How”


 

写在前面的话

很少见地,这是一篇散文。(之所以少见,是因为笔者求学那些年写所谓的“考试作文”时,散文从未得过高分——彩笔从未排除是评判标准的问题。)

 

一、我有一个朋友

该青年在广州某大型政府投资性企业工作,为人靠谱(我会乱说),领导看好,目测有一个Promising的将来(我其实不懂,不过少女们弱弱地发一下花痴就好,此人已订婚)。

这样一个朋友(本应跟彩笔的二笔人生毫不搭界),偶尔会以请吃饭为诱饵,创造机会跟笔者扯一段什么是大数据之类的话题。可见他原本的工作内容、以及社交圈子跟大数据的相关程度有多稀疏;也可见他是有多绝望——找到彩笔这种边缘人。

这个朋友代表这样一类人:

1. “社会意识”开始觉醒;

2. 部分精神世界开始脱离低级趣味;

3. 有意愿跻身于“大数据”这个ongoing时代潮流中;然而,

4. 无法摆脱“术业有专攻”的桎梏,“专业能力”不对路,至少目前做不到为“大数据”的宏伟蓝图的实现添砖加瓦。

若是在以前(对不起,彩笔也不知道至少要追溯到多久之前),“隔行如隔山”,现如今这种大范围的跨行业“乱入”应该完全不能想象吧。所以,《世界是平的》这种内容的书会成为比尔盖茨推荐的畅销书,还竟然可以“再来一本”。这种现象的出现,也便由意料之外,变为情理之中了。

这个又热又挤又平的世界,好像可以让人不费吹灰之力就可以看到其他行业的人在忙什么。这种零高度差创造出一种“好像很容易”、“我也可以做到啊”之类的幻觉。于是,越来越多的人开始觉得,自己就是那个厨子,以为“不想当裁缝的司机,不是好厨子”的励志的句子说的正是自己。

笔者本想说,每个人做好自己的本分就够了。然而转念一想,如何在这个扁平的毫无遮拦的世界里界定“本分”?有难度。同时,作为一个自由主义者,笔者坚定地维护每个人“天马行空”、“异想天开”以及做任何不切实际的梦的自由,即便倾向于以结果为导向的笔者找不到论据JUSTIFY他们的各种妄想。

因为上文的种种(也会因为下文的种种),每次SENSE到这位大叔是要FEED有关大数据的内容而请求见面时,彩笔黄就会很纠结。且不说彩笔能说出来的东西不多,彩笔也更加不知道大数据concerns him how,不知道要跟他说什么。钱可以浪费,但粮食不能浪费。这样的饭,彩笔吃不下[委屈]。

 

二、我有一个同学

笔者求学期间的一个同学,现在是鄙人母校的在读博士。当年(2012年6月末至7月初的时候),我们一同在武汉大学图书情报与档案管理研究生暑期学校,听了张李义老师的讲座“大数据背景下的信息处理工具与方法”。有生之年for the first time,聆听有关“大数据”的福音。自那时始,大数据便渐渐“走进了彩笔的内心深处”,扎根下来,变成了彩笔黄的一生挚爱,谁都抢不走。然而,截然相反地,这位同学却在随后的时间里,走上了一条几乎完全相反的路。用她的话说:“大数据对于我们这种没有数据处理技术,没有数据处理设备,更压根就没有数据的(一般)人来说,就是个坑(绕着走)。”

 

/*下面是一段插叙*/

笔者在之前的一篇文章中提到过(我就是不说是哪篇),大数据不过是在新的技术水平下,某旧物展现出来的新姿态。所谓“老树发新芽”。“老树”是经典的数理统计理论,以及在漫长的实践过程中添加和反复验证的方法体系。“新芽”便是分布式、云计算这些我本身就一知半解、说出来你以为你懂但其实你根本不可能懂的“新技术”了。

如果一定要用一句简短的话表述什么是大数据,笔者会选择:“大数据即全部的数据。”

什么是“全部的数据”呢?假设你经营一家上世纪八九十年代的“小卖部”,售卖烟酒糖茶等种类有限的货品。这时,“全部的数据”可以是每种货品的成本、单价、销售、库存等等(抱歉笔者并没有实际经营经验,这里只是列举几个作为例子,看官可结合自身丰富的经验自行脑补“等等”的部分)。然后,继续假设你经营的是一家21世纪的跨国连锁超市,对于你来说“全部的数据”仍然是:每种货品的成本、单价、销售、库存等等(注:一个字都没有改动哦)。
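上面“一个字都没有改动”的说法,可以用一小段示意代码表达:小卖部和跨国连锁超市的数据结构(schema)可以完全相同,差别只在数据的规模。以下字段与数字均为虚构的例子:

```python
from dataclasses import dataclass

@dataclass
class GoodsRecord:
    """一条货品数据:小卖部和连锁超市用的是同一个定义。"""
    name: str    # 货品
    cost: float  # 成本
    price: float # 单价
    sales: int   # 销量
    stock: int   # 库存

# 上世纪八九十年代的小卖部:几十条记录
corner_shop = [GoodsRecord("茶叶", 20.0, 35.0, 12, 80)]

# 21世纪的跨国连锁超市:可能是数亿条记录,但字段一字不差
chain_store = [GoodsRecord("茶叶", 18.0, 33.0, 120_000, 800_000)]

def gross_margin(r: GoodsRecord) -> float:
    """毛利率:同一个公式同时适用于两种规模的经营。"""
    return (r.price - r.cost) / r.price
```

换言之,“大数据时代”改变的不是数据的定义,而是这张表从几十行变成几亿行之后,存储和计算方式不得不随之改变。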

是的,我们惊喜地发现了,这个以数据命名的时代(“大数据时代”)并没有新数据的诞生。或许一些行业的资深业内人士仍然能够想到一些现在“能看到”并且当年“看不到”的数据类型。彩笔黄的解释是,这种数据类型或许是新的,但是这些数据类型所描述的对象是一直存在的。(有些诡辩了是不是)

这些业内人士的疑问可以很好的引出我们的下一个环节:是什么让这些新的“数据类型”诞生(成为可能)?答曰:“(新)技术。”

彩笔实在是不想讨论什么是大数据,所以以上内容虽未得尽表,但也到此为止。

/*插叙的内容结束*/

 

终于回到我的同学这里。虽然我们分别决定走上不同的路,但是我们对大数据的一些基本认识是一致的(上述插叙的内容啦)。她之所以决定绕路,是因为认识到大数据的门槛其实很高(在这个普遍认为“THE WORLD IS FLAT”的人世间,能意识到这里实属不易)。而她本人的兴趣、专业的技能等,不足以将她提升至与大数据共舞的高度。

 

 

写到这里,笔者决定暂且告一段落。总结一下文章进行到这里的思路:

在文章中,笔者介绍了笔者身边两类人的代表,描述了他们对于大数据的态度(趋之若鹜VS避之唯恐不及)以及这种态度的形成过程,期望能够引发有关Big Data Concerns Everybody How的思考。这两类人即便不能完全cover现在社会上的所有人,但“所有人”都可以在这两个分类中找到部分自己的归属感。接下来便可以自行思考Big Data Concerns Yourself How了。因为笔者确定有下文,也确定下文的内容与解答这个问题完全无关(突然觉得这个系列——虽然确定下来只写上下两篇——应该叫做“彩笔黄的价值观输出”)。


我们之专利与你共享

6月12日,2014

 


 

作者:Elon Musk;译者:硅谷寒

原文网址:http://www.teslamotors.com/blog/all-our-patent-are-belong-you

 

就在昨天,位于Palo Alto的Tesla总部大厅里,还有一面墙,其上挂满了专利。但现在,那些专利已不复存在。今天,为了电动汽车技术的前进,我们以“开源浪潮”之精神,把那些专利通通移走。

我们当年之所以创建Tesla Motors,是为了让“可持续发展”的时代加速到来。如果我们自己已经扫清了制造超凡电动汽车的障碍,但却用专利在身后布下陷阱雷石,以阻挡他人制造,那么这种举措相对于我们的初衷而言,无异于南辕北辙。未来,Tesla将不会首先利用专利,向任何善意使用我们专利的人们,发起诉讼。

当年,在我初次创业,建立Zip2的时候,我曾以为专利是个宝,并且非常努力地去申请它。或许在很久以前,专利的确算是个宝,但是在现如今,专利所起到的作用不过是为了扼杀产业发展,巩固巨头公司之地位,它能给那些专利讼棍们带来财富,却不能使真正的发明者得益。在离开Zip2之后【译者注:1999年Zip2被康柏以3亿美元收购】,我渐渐意识到,获得专利仅仅意味着你买了一张可以打官司的乐透彩票,从此我便决定尽可能地不去申请专利。

然而,在Tesla,我们曾经被动地去申请了很多专利,之所以如此,是因为我们担心那些巨型汽车厂商会山寨我们的技术,并利用它们在大规模量产和市场营销上的优势,把我们击垮。现在,我们意识到,之前的这种担心,错到不能再错。因为,电动汽车产业与我们之前所担心的状况截然相反,而且相当不乐观:电动汽车(或者其它任何非汽油车)在主流的汽车厂商眼里,渺小到几乎不存在,其销量还远远不到整个汽车销量的1%。

大的汽车厂商只不过是在很有限的范围内生产一点点电动车,甚至有些厂商根本就没生产过这些所谓的“零排放”汽车。

全球每年约生产1亿台新汽车,而总的保有量约是20亿台。既然世界上还存在如此之多的汽油车,那么对于Tesla来说,想要在短时间内解决“碳排放危机”,根本是不可能的。当然,这同时也意味着,电动汽车的潜在市场无比巨大。所以,我们的真正竞争,并不是来自于那些产量如同涓涓细流一般的“非Tesla”电动车,而是那些每天之产量都像滔天洪水一般的汽油车。

我们深信,无论Tesla,还是其它生产电动车的厂商,甚至是整个世界,都会从一个共享的、快速发展的技术平台上获益良多。

历史已经无数次地证明:对于那些堪称“一生之敌”的商业竞争者而言,专利能起到的防范作用非常之小。因此,所谓的技术领导力,并不是由专利来决定,而是取决于一个公司吸引并激励顶级工程师的能力。现在,我们把开源的精神移植到专利上来,并且深信,这非但不会减弱Tesla的优势,反而更会加强我们在业界的地位。

 



The Trick That Makes Google’s Self-Driving Cars Work

本文转载自http://www.theatlantic.com  原作者:

Google’s self-driving cars can tour you around the streets of Mountain View, California.

I know this. I rode in one this week. I saw the car’s human operator take his hands from the wheel and the computer assume control. “Autodriving,” said a woman’s voice, and just like that, the car was operating autonomously, changing lanes, obeying traffic lights, monitoring cyclists and pedestrians, making lefts. Even the way the car accelerated out of turns felt right.

It works so well that it is, as The New York Times’ John Markoff put it, “boring.” The implications, however, are breathtaking.

Perfect, or near-perfect, robotic drivers could cut traffic accidents, expand the carrying capacity of the nation’s road infrastructure, and free up commuters to stare at their phones, presumably using Google’s many services.

But there’s a catch.

Today, you could not take a Google car, set it down in Akron or Orlando or Oakland and expect it to perform as well as it does in Silicon Valley.

Here’s why: Google has created a virtual track out of Mountain View.

A Googler demonstrates what the self-driving car “sees.” Note the background world on which the traffic lights and their sightlines are displayed: that’s the track.

The key to Google’s success has been that these cars aren’t forced to process an entire scene from scratch. Instead, their teams travel and map each road that the car will travel. And these are not any old maps. They are not even the rich, road-logic-filled maps of consumer-grade Google Maps.

They’re probably best thought of as ultra-precise digitizations of the physical world, all the way down to tiny details like the position and height of every single curb. A normal digital map would show a road intersection; these maps would have a precision measured in inches.

But the “map” goes beyond what any of us know as a map. “Really, [our maps] are any geographic information that we can tell the car in advance to make its job easier,” explained Andrew Chatham, the Google self-driving car team’s mapping lead.

“We tell it how high the traffic signals are off the ground, the exact position of the curbs, so the car knows where not to drive,” he said. “We’d also include information that you can’t even see like implied speed limits.”

Google has created a virtual world out of the streets their engineers have driven. They pre-load the data for the route into the car’s memory before it sets off, so that as it drives, the software knows what to expect.

“Rather than having to figure out what the world looks like and what it means from scratch every time we turn on the software, we tell it what the world is expected to look like when it is empty,” Chatham continued. “And then the job of the software is to figure out how the world is different from that expectation. This makes the problem a lot simpler.”
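The "compare against expectation" idea Chatham describes can be sketched roughly like this, as a toy grid world (all names and data here are hypothetical illustrations, not Google's actual software):

```python
# Prior map, built offline: position -> static object the car should expect.
prior_map = {
    (0, 0): "curb",
    (1, 0): "traffic_light",
    (2, 0): "curb",
}

# What the sensors report right now.
observation = {
    (0, 0): "curb",
    (1, 0): "traffic_light",
    (1, 1): "pedestrian",  # not in the prior map -> dynamic object
    (2, 0): "curb",
}

def diff_against_map(prior, seen):
    """Return only the observations that differ from the pre-loaded map.

    Everything that matches the prior is static scenery the software can
    ignore; what remains is the (much smaller) dynamic part of the scene
    that must be processed in real time.
    """
    return {pos: obj for pos, obj in seen.items() if prior.get(pos) != obj}

dynamic = diff_against_map(prior_map, observation)
print(dynamic)  # {(1, 1): 'pedestrian'}
```

The design choice is the point: instead of interpreting the whole scene from scratch, the expensive work is moved offline into map-building, and the on-board problem shrinks to computing a diff.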

While it might make the in-car problem simpler, it vastly increases the amount of work required for the task. A whole virtual infrastructure needs to be built on top of the road network!

Very few companies, maybe only Google, could imagine digitizing all the surface streets of the United States as a key part of the solution of self-driving cars. Could any car company imagine that they have that kind of data collection and synthesis as part of their core competency?

Chris Urmson, a former Carnegie Mellon professor who runs Google’s self-driving car program, oozed confidence when asked about the question of mapping every single street where a Google car might want to operate. “It’s one of those things that Google, as a company, has some experience with our Google Maps product and Street View,” Urmson said. “We’ve gone around and we’ve collected this data so you can have this wonderful experience of visiting places remotely. And it’s a very similar kind of capability to the one we use here.”

So far, Google has mapped 2,000 miles of road. The US road network has something like 4 million miles of road.

“It is work,” Urmson added, shrugging, “but it is not intimidating work.” That’s the scale at which Google is thinking about this project.

All this makes sense within the broader context of Google’s strategy. Google wants to make the physical world legible to robots, just as it had to make the web legible to robots (or spiders, as they were once known) so that they could find what people wanted in the pre-Google Internet of yore.


In fact, it might be better to stop calling what Google is doing mapping, and come up with a different verb to suggest the radical break they’ve made with previous ideas of maps. I’d say they’re crawling the world, meaning they’re making it legible and useful to computers.

Self-driving cars sit perfectly in-between Project Tango—a new effort to “give mobile devices a human-scale understanding of space and motion”—and Google’s recent acquisition spree of robotics companies. Tango is about making the “human-scale” world understandable to robots and the robotics companies are about creating the means for taking action in that world.

The more you think about it, the more the goddamn Googleyness of the thing stands out: the ambition, the scale, and the type of solution they’ve come up with to this very hard problem. What was a nearly intractable “machine vision” problem, one that would require close to human-level comprehension of streets, has become a much, much easier machine vision problem thanks to a massive, unprecedented, unthinkable amount of data collection.

Last fall, Anthony Levandowski, another Googler who works on self-driving cars, went to Nissan for a presentation that immediately devolved into a Q&A with the car company’s Silicon Valley team. The Nissan people kept hectoring Levandowski about vehicle-to-vehicle communication, which the company’s engineers (and many in the automotive industry) seemed to see as a significant part of the self-driving car solution.

He parried all of their queries with a speed and confidence just short of condescension. “Can we see more if we can use another vehicle’s sensors to see ahead?” Levandowski rephrased one person’s question. “We want to make sure that what we need to drive is present in everyone’s vehicle and sharing information between them could happen, but it’s not a priority.”

What the car company’s people couldn’t or didn’t want to understand was that Google does believe in vehicle-to-vehicle communication, but serially over time, not simultaneously in real-time.

After all, every vehicle’s data is being incorporated into the maps. That information “helps them cheat, effectively,” Levandowski said. With the map data—or as we might call it, experience—all the cars need is their precise position on a super accurate map, and they can save all that parsing and computation (and vehicle to vehicle communication).

There’s a fascinating parallel between what Google’s self-driving cars are doing and what the Andreessen Horowitz-backed startup Anki is doing with its toy car racing game. When you buy Anki Drive, they sell you a track on which the cars race, which has positioning data embedded. The track is the physical manifestation of a virtual racing map.

Last year, Anki CEO (and like Urmson, a Carnegie Mellon robotics guy) Boris Sofman told me knowing the racing environment in advance allows them to more easily sync the state of the virtual world in which their software is running with the physical world in which the cars are driving.

“We are able to turn the physical world into a virtual world,” Sofman said. “We can take all these physical characters and abstract away everything physical about them and treat them as if they were virtual characters in a video game on the phone.”

Google did not allow reporters to film the self-driving car software at work while the cars were moving, but I managed to snag this video of the software running through the window of one car parked outside the Computer History Museum. To orient you: the busy street to the left is Shoreline Boulevard. Watch for the pedestrian (a photographer) who walks up to the car.

Of course, when there are bicyclists and bad drivers involved, navigating the hybrid virtual-physical world of Mountain View is not easy: the cars still have to "race" around the track, plotting trajectories and avoiding accidents.

The Google cars are not dumb machines. They have their own set of sensors: radar, a laser spinning atop the Lexus SUV, and a suite of cameras. And they have some processing on board to figure out what routes to take and avoid collisions.

This is a hard problem, but Google is doing the computation with what Levandowski described at Nissan as a "desktop" level system. (The big computation and data processing are done by the teams back at Google's server farms.)

What that on-board computer does first is integrate the sensor data. It takes the data from the laser and the cameras and integrates them into a view of the world, which it then uses to orient itself (with the rough guidance of GPS) in virtual Mountain View. "We can align what we're seeing to what's stored on the map. That allows us to very accurately—within a few centimeters—position ourselves on the map," said Dmitri Dolgov, the self-driving car team's software lead. "Once we know where we are, all that wonderful information encoded in our maps about the geometry and semantics of the roads becomes available to the car."

The lasers and cameras of a Google self-driving car.

Once they know where they are in space, the cars can do the work of watching for and modeling the behavior of dynamic objects like other cars, bicycles, and pedestrians.

Here, we see another Google approach. Dolgov's team uses machine learning algorithms to create models of other people on the road. Every single mile of driving is logged, and that data fed into computers that classify how different types of objects act in all these different situations. While some driver behavior could be hardcoded in ("When the lights turn green, cars go"), they don't exclusively program that logic, but learn it from actual driver behavior.

Just as we know that a car pulling up behind a stopped garbage truck is probably going to change lanes to get around it, 700,000 miles of logged driving data have helped the Google algorithm understand that the car is likely to do such a thing.
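The learned-behavior idea can be illustrated with a toy frequency model (purely hypothetical data and names; Google's actual models are far more sophisticated than counting):

```python
from collections import Counter, defaultdict

# Hypothetical driving logs: (situation, action another road user then took).
logs = [
    ("behind_stopped_truck", "change_lane"),
    ("behind_stopped_truck", "change_lane"),
    ("behind_stopped_truck", "wait"),
    ("green_light", "go"),
    ("green_light", "go"),
]

# Learn, rather than hardcode, how objects behave in each situation.
model = defaultdict(Counter)
for situation, action in logs:
    model[situation][action] += 1

def predict(situation):
    """Return the most common action for a situation and its empirical probability."""
    counts = model[situation]
    action, n = counts.most_common(1)[0]
    return action, n / sum(counts.values())

action, p = predict("behind_stopped_truck")
# action == "change_lane", p == 2/3
```

Even "when the lights turn green, cars go" falls out of the counts instead of being programmed in, which is the contrast the article draws between hardcoded rules and learned models.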

Most driving situations are not hard to comprehend, but what about the tough ones or the unexpected ones? In Google's current process, a human driver would take control, and (so far) safely guide the car. But fascinatingly, in the circumstances when a human driver has to take over, what the Google car would have done is also recorded, so that engineers can test what would have happened in extreme circumstances without endangering the public.

So, each Google car is carrying around both the literal products of previous drives—the imagery and data captured from crawling the physical world—as well as the computed outputs of those drives, which are the models for how other drivers might behave.

There is a literal big red button in the console.

There is, at least in an analogical sense, a connection between how the Google cars work and how our own brains do. We think about the way we see as accepting sensory input and acting accordingly. Really, our brains are making predictions all the time, which guide our perception. The actual sensory input—the light falling on retinal cells—is secondary to the prior experience that we've built into our brains through years of experience being in the world.

That Google's self-driving cars are using these principles is not surprising. That they are having so much success doing so is.

Peter Norvig, the head of AI at Google, and two of his colleagues coined the phrase "the unreasonable effectiveness of data" in an essay to describe the effect of huge amounts of data on very difficult artificial intelligence problems. And that is exactly what we're seeing here. A kind of Googley mantra concludes the Norvig essay: "Now go out and gather some data, and see what it can do."

Even if it means continuously and neverendingly driving 4 million miles of roads with the most sophisticated cars on Earth and then hand-massaging that data—they'll do it.

That's the unreasonable effectiveness of Google.


科技一周~梦想的星空

梦想的星空

2014/06/08

犹记那年夏末秋初,我独自一人,怯生生地来到美国,其时年纪尚轻,正是慵懒读书的时节。未料,转瞬之间,已是兔走乌飞,流指十年。我的生活像是按照既定的剧本,淡淡地发展下来,纵不曾大富大贵,却也收获了平凡的幸福。现在,当一切都安定下来,我回思过往,不知怎地,那早年的时光却又陡然丰满,年轻时的理想与愿景也在不自觉间细腻起来。我曾尝试用心灵叩问自己:“下一个十年,我究竟会去哪儿,是悠然硅谷下,还是追逐梦想间?”后来,我明白了,这并不是一个需要回答的问话,因为当我发问的时候,就已注定了自己追梦一生的征程。人诚如此,每一个有着使命的科技公司亦然:当梦想开始奔跑的时候,星空也不是极限。

  • Apple的梦想绝不是做出“西半球最好的手机”,而是“万物互联”。6月2日,在WWDC大会上,Apple推出最新的移动操作系统iOS 8,尤为引人注目的是,iOS 8的SDK里提供了两套颇具创新的API Framework:HealthKit、HomeKit[1]。HealthKit是一个面向可穿戴式设备的开发平台,它的出现,几乎肯定了Apple要在下半年推出iWatch的猜测。HomeKit则是面向智能家居的开发平台,其目标是提高家庭电器的数据互通性。可以说,借助HealthKit和HomeKit,Apple将会把基于iOS的移动设备打造成穿戴电子和家庭电器的数据中心,起到“互联万物”(IoT,Internet of Things)的作用。许多科技业者因为Apple没能在此次大会上推出创新性的硬件产品,而大失所望,其实,这慢慢地将会成为习惯:Apple以WWDC作为软件产品的发布会,以及开发者的沟通会,而创新性的硬件设备都会有一个单独的发布会在每年的下半年举行。
  • WWDC上,Tim Cook在谈论HomeKit的时候,大屏幕上显示出了整屏的合作伙伴,其中有三家芯片领域的巨头公司:TI、Broadcom、Marvell。最近几年,这三家芯片公司都在极力向IoT领域进军,TI和Broadcom更是不惜砍掉自己的移动通讯基带部门,而把注意力集中到IoT。再加上Intel、联发科也在不久前宣布进军IoT,一时之间,IoT成为芯片公司的必争之地。事实上,在移动芯片被Qualcomm和联发科两家公司垄断的背景下,其它芯片公司也只能另辟战场,绕路图存。当然,IoT也的确有希望成为下一个“万亿美元”的市场,谁能够占领IoT领域,谁就会成为像Intel和Qualcomm一样的霸主。值得肯定的是,Marvell动作非常迅速,在WWDC的第二天,就宣布了自己的IoT单芯片解决方案:EZ-connect[2]。EZ-connect方案可以方便地嵌入HomeKit协议,从而成为“準iOS认证”(Ready for iOS Certified)的方案,基于此方案的终端产品可以快速申请Apple的认证。
  • Google,作为当下最具创新精神的科技公司,它的梦想与使命又是什么呢?先看一则来自纽约长岛的新闻:10岁的华裔小姑娘,Audrey Zhang,在本周结束的第七届中小学生Google涂鸦大赛上(Doodle 4 Google Contest),击败超过10万名参赛者,成为唯一的全国冠军[3]!Audrey Zhang的涂鸦作品,名字叫做“Back to Mother Nature”,描述了一种想象性的水净化世界。她在作品的介绍里说:“我画出了一种可转换性的水净化系统,可以令污水在自然界里转化为净水,让我们的世界变得更美好”。这最后一句话,恰好与Google的使命不谋而合。Google在2004年的IPO Letter[4]里说,公司的梦想是:让自己的服务惠及尽可能多的人们,并大幅改善人类之生活(To develop services that significantly improve the lives of as many people as possible)。这也就是我们为什么能看到Google的服务触及了几乎所有领域(互联网、人工智能、移动互联、高空联网、无人汽车、穿戴式设备、金融、快递、万物互联)的原因。Google每年一封简短的公开信,不仅仅承载着科技的使命,更为我们解释了自己的梦想。这世界,即便你拥有了彪悍的人生,也需要做出合理的解释。

Sobug白帽安全众测平台

在国外安全圈,对于安全漏洞的认识越来越趋向于为漏洞付费(No More Free Bugs),这体现了行业对安全研究价值的认可。随着认知的统一,国内的互联网公司也纷纷成立了类似微软MSRC的漏洞中心,比如行业标杆的腾讯TSRC、360安全应急响应中心(QSRC)、新浪SSRC、阿里巴巴ASRC、百度BSRC,甚至京东的JSRC。虽然各家对漏洞的价值认知不同,但是却也热闹异常。

问题来了,互联网生态圈中,大鳄自然是财大气粗,有成百上千的安全人员来重视自身的安全,但对于其余99%的中小型公司来说,招募合格的安全人员成了一种奢侈;而面对互联网中无时无刻不在的安全威胁,他们该如何应对?

在新浪微博上,业内知名人士Benjurry振臂呐喊“如何让白帽子更有尊严地合法赚钱”;在新闻联播上,屡屡报道黑客通过技术手段非法牟利而锒铛入狱;在各大媒体上,安全问题的互相披露成为了厂商之间无解而又无奈的公关手段。

漏洞众测这种新模式已经对传统安全厂商的服务项目产生了冲击,会引起行业的进一步洗牌,传统安全公司的技术优势一下子就不复存在,这也许就是互联网的威力。

对于传统安全公司来说,未来需要更多的并购来保住自己的地盘,指望公司内的创新已是痴人说梦;而对于互联网入口来说,任何层面上与客户相关的业务,都会被赋予新的价值、给予高溢价。

Sobug的愿景是成为连接安全专家与厂商的网络安全众测平台,组织安全专家检测授权厂商的安全问题,同时厂商基于检测效果对安全专家进行奖励。创始人冷焰来自腾讯TSRC,毅然放弃了大型互联网公司的优厚薪水,开始了sobug的创业之路。

下图是Sobug上线一周的数据,看得出来分析的角度非常互联网化。

企业安全市场足够大,相信通过精细化的运营,创新者赢。

 

 

