
Facebook AI Director Yann LeCun on His Quest to Unleash Deep Learning and Make Machines Smarter

Originally reposted from: http://spectrum.ieee.org

Artificial intelligence has gone through some dismal periods, which those in the field gloomily refer to as “AI winters.” This is not one of those times; in fact, AI is so hot right now that tech giants like Google, Facebook, Apple, Baidu, and Microsoft are battling for the leading minds in the field. The current excitement about AI stems, in great part, from groundbreaking advances involving what are known as “convolutional neural networks.” This machine learning technique promises dramatic improvements in things like computer vision, speech recognition, and natural language processing. You probably have heard of it by its more layperson-friendly name: “Deep Learning.”

Few people have been more closely associated with Deep Learning than Yann LeCun, 54. Working as a Bell Labs researcher during the late 1980s, LeCun developed the convolutional network technique and showed how it could be used to significantly improve handwriting recognition; many of the checks written in the United States are now processed with his approach. Between the mid-1990s and the late 2000s, when neural networks had fallen out of favor, LeCun was one of a handful of scientists who persevered with them. He became a professor at New York University in 2003, and has since spearheaded many other Deep Learning advances.


More recently, Deep Learning and its related fields grew to become one of the most active areas in computer science research. Which is one reason that at the end of 2013, LeCun was appointed head of the newly-created Artificial Intelligence Research Lab at Facebook, though he continues with his NYU duties.


LeCun was born in France, and retains from his native country a sense of the importance of the role of the “public intellectual.” He writes and speaks frequently in his technical areas, of course, but is also not afraid to opine outside his field, including about current events.

IEEE Spectrum contributor Lee Gomes spoke with LeCun at his Facebook office in New York City. The following has been edited and condensed for clarity.

IEEE Spectrum: We read about Deep Learning in the news a lot these days. What’s your least favorite definition of the term that you see in these stories?

Yann LeCun: My least favorite description is, “It works just like the brain.” I don’t like people saying this because, while Deep Learning gets an inspiration from biology, it’s very, very far from what the brain actually does. And describing it like the brain gives a bit of the aura of magic to it, which is dangerous. It leads to hype; people claim things that are not true. AI has gone through a number of AI winters because people claimed things they couldn’t deliver.

Spectrum: So if you were a reporter covering a Deep Learning announcement, and had just eight words to describe it, which is usually all a newspaper reporter might get, what would you say?

LeCun: I need to think about this. [Long pause.] I think it would be “machines that learn to represent the world.” That’s eight words. Perhaps another way to put it would be “end-to-end machine learning.” Wait, it’s only five words and I need to kind of unpack this. [Pause.] It’s the idea that every component, every stage in a learning machine can be trained.

Spectrum: Your editor is not going to like that.

LeCun: Yeah, the public wouldn’t understand what I meant. Oh, okay. Here’s another way. You could think of Deep Learning as the building of learning machines, say pattern recognition systems or whatever, by assembling lots of modules or elements that all train the same way. So there is a single principle to train everything. But again, that’s a lot more than eight words.

Spectrum: What can a Deep Learning system do that other machine learning systems can’t do?

LeCun: That may be a better question. Previous systems, which I guess we could call “shallow learning systems,” were limited in the complexity of the functions they could compute. So if you want a shallow learning algorithm like a “linear classifier” to recognize images, you will need to feed it with a suitable “vector of features” extracted from the image. But designing a feature extractor “by hand” is very difficult and time consuming.

An alternative is to use a more flexible classifier, such as a “support vector machine” or a two-layer neural network fed directly with the pixels of the image. The problem is that it’s not going to be able to recognize objects to any degree of accuracy, unless you make it so gigantically big that it becomes impractical.

Spectrum: It doesn’t sound like a very easy explanation. And that’s why reporters trying to describe Deep Learning end up saying…

LeCun: …that it’s like the brain.

Spectrum: Part of the problem is that machine learning is a surprisingly inaccessible area to people not working in the field. Plenty of educated lay people understand semi-technical computing topics, like, say, the PageRank algorithm that Google uses. But I’d bet that only professionals know anything detailed about linear classifiers or support vector machines. Is that because the field is inherently complicated?

LeCun: Actually, I think the basics of machine learning are quite simple to understand. I’ve explained this to high-school students and school teachers without putting too many of them to sleep.

A pattern recognition system is like a black box with a camera at one end, a green light and a red light on top, and a whole bunch of knobs on the front. The learning algorithm tries to adjust the knobs so that when, say, a dog is in front of the camera, the red light turns on, and when a car is put in front of the camera, the green light turns on. You show a dog to the machine. If the red light is bright, don’t do anything. If it’s dim, tweak the knobs so that the light gets brighter. If the green light turns on, tweak the knobs so that it gets dimmer. Then show a car, and tweak the knobs so that the red light gets dimmer and the green light gets brighter. If you show many examples of cars and dogs, and you keep adjusting the knobs just a little bit each time, eventually the machine will get the right answer every time.

The interesting thing is that it may also correctly classify cars and dogs it has never seen before. The trick is to figure out in which direction to tweak each knob and by how much without actually fiddling with them. This involves computing a “gradient,” which for each knob indicates how the light changes when the knob is tweaked.

Now, imagine a box with 500 million knobs, 1,000 light bulbs, and 10 million images to train it with. That’s what a typical Deep Learning system is.
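The knob-tweaking procedure LeCun describes is gradient descent on a classifier's parameters. A minimal sketch in plain Python, shrunk to two "knobs" and one light; the toy features, labels, and learning rate here are illustrative inventions, not anything from the interview:

```python
import math

# Toy data: (features, label); label 1 = "dog", 0 = "car". The two feature
# numbers are hypothetical stand-ins for whatever is extracted from the camera.
data = [((1.0, 0.2), 1), ((0.9, 0.1), 1), ((0.2, 1.0), 0), ((0.1, 0.8), 0)]

# The "knobs": one weight per feature, plus a bias.
w = [0.0, 0.0]
b = 0.0
lr = 0.5  # how far to tweak the knobs on each example

def brightness(x):
    """How bright the 'dog' light is for input x, as a number in (0, 1)."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# Show the examples many times, nudging every knob along its gradient:
# the light gets brighter for dogs and dimmer for cars, a little at a time.
for _ in range(200):
    for x, y in data:
        err = brightness(x) - y     # gradient of the log loss w.r.t. z
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

print(round(brightness((1.0, 0.2)), 3))  # a dog: the light should be bright
print(round(brightness((0.2, 1.0)), 3))  # a car: the light should be dim
```

The "without actually fiddling with them" part is exactly what the gradient buys: the `err * x` terms tell each knob which way to move without trying every setting.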

Spectrum: I assume that you use the term “shallow learning” somewhat tongue-in-cheek; I doubt people who work with linear classifiers consider their work “shallow.” Doesn’t the expression “Deep Learning” have an element of PR to it, since it implies that what is “deep” is what is being learned, when in fact the “deep” part is just the number of steps in the system?

LeCun: Yes, it is a bit facetious, but it reflects something real: shallow learning systems have one or two layers, while deep learning systems typically have five to 20 layers. It is not the learning that is shallow or deep, but the architecture that is being trained.

Spectrum: The standard Yann LeCun biography says that you were exploring new approaches to neural networks at a time when they had fallen out of favor. What made you ignore the conventional wisdom and keep at it?

LeCun: I have always been enamored of the idea of being able to train an entire system from end to end. You hit the system with essentially raw input, and because the system has multiple layers, each layer will eventually figure out how to transform the representations produced by the previous layer so that the last layer produces the answer. This idea—that you should integrate learning from end to end so that the machine learns good representations of the data—is what I have been obsessed with for over 30 years.

Spectrum: Is the work you do “hacking,” or is it science? Do you just try things until they work, or do you start with a theoretical insight?

LeCun: It’s very much an interplay between intuitive insights, theoretical modeling, practical implementations, empirical studies, and scientific analyses. The insight is creative thinking, the modeling is mathematics, the implementation is engineering and sheer hacking, the empirical study and the analysis are actual science. What I am most fond of are beautiful and simple theoretical ideas that can be translated into something that works.

I have very little patience for people who do theory about a particular thing simply because it’s easy—particularly if they dismiss other methods that actually work empirically, just because the theory is too difficult. There is a bit of that in the machine learning community. In fact, to some extent, the “Neural Net Winter” during the late 1990s and early 2000s was a consequence of that philosophy; that you had to have ironclad theory, and the empirical results didn’t count. It’s a very bad way to approach an engineering problem.

But there are dangers in the purely empirical approach too. For example, the speech recognition community has traditionally been very empirical, in the sense that the only stuff that’s being paid attention to is how well you are doing on certain benchmarks. And that stifles creativity, because to get to the level where you can beat other teams that have been at it for years, you need to go underground for four or five years, building your own infrastructure. That’s very difficult and very risky, and so nobody does it. And so to some extent with the speech recognition community, the progress has been continuous but very incremental, at least until the emergence of Deep Learning in the last few years.

Spectrum: You seem to take pains to distance your work from neuroscience and biology. For example, you talk about “convolutional nets,” and not “convolutional neural nets.” And you talk about “units” in your algorithms, and not “neurons.”

LeCun: That’s true. Some aspects of our models are inspired by neuroscience, but many components are not at all inspired by neuroscience, and instead come from theory, intuition, or empirical exploration. Our models do not aspire to be models of the brain, and we don’t make claims of neural relevance. But at the same time, I’m not afraid to say that the architecture of convolutional nets is inspired by some basic knowledge of the visual cortex. There are people who indirectly get inspiration from neuroscience, but who will not admit it. I admit it. It’s very helpful. But I’m very careful not to use words that could lead to hype. Because there is a huge amount of hype in this area. Which is very dangerous.

Spectrum: Hype is bad, sure, but why do you say it’s “dangerous”?

LeCun: It sets expectations for funding agencies, the public, potential customers, start-ups and investors, such that they believe that we are on the cusp of building systems that are as powerful as the brain, when in fact we are very far from that. This could easily lead to another “winter cycle.”

And then there is a little bit of “cargo cult science” in this. This is a Richard Feynman expression. He talked about cargo cult science to describe things that look like science, but basically are not.

Spectrum: Give me some examples.

LeCun: In a cargo cult, you reproduce the appearance of the machine without understanding the principles behind the machine. You build radio stations out of straw. The cargo cult approach to aeronautics—for actually building airplanes—would be to copy birds very, very closely; feathers, flapping wings, and all the rest. And people did this back in the 19th century, but with very limited success.

The equivalent in AI is to try to copy every detail that we know of about how neurons and synapses work, and then turn on a gigantic simulation of a large neural network inside a supercomputer, and hope that AI will emerge. That’s cargo cult AI. There are very serious people who get a huge amount of money who basically—and of course I’m sort of simplifying here—are pretty close to believing this.

Spectrum: Do you think the IBM TrueNorth project is cargo cult science?

LeCun: That would be a little harsh! But I do believe that some of the claims by the IBM group have gone a bit too far and were easily misinterpreted. Some of their announcements look impressive on the surface, but aren’t actually implementing anything useful. Before the TrueNorth project, the group used an IBM supercomputer to “simulate a rat-scale brain.” But it was just a random network of neurons that did nothing useful except burn cycles.

The sad thing about the TrueNorth chip is that it could have been useful if it had not tried to stick so close to biology by implementing “spiking integrate-and-fire neurons.” Building a chip is very expensive. So in my opinion—and I used to be a chip designer—you should build a chip only when you’re pretty damn sure it can do something useful. If you build a convolutional net chip—and it’s pretty clear how to do it—it can go into a lot of devices right away. IBM built the wrong thing. They built something that we can’t do anything useful with.

Spectrum: Any other examples?

LeCun: I’m going to get a lot of heat for this, but basically a big chunk of the Human Brain Project in Europe is based on the idea that we should build chips that reproduce the functioning of neurons as closely as possible, and then use them to build a gigantic computer, and somehow when we turn it on with some learning rule, AI will emerge. I think it’s nuts.

Now, what I just said is a caricature of the Human Brain Project, to be sure. And I don’t want to include in my criticism everyone who is involved in the project. A lot of participants are involved simply because it’s a very good source of funding that they can’t afford to pass up.

Spectrum: How much more about machine learning in general remains to be discovered?

LeCun: A lot. The type of learning that we use in actual Deep Learning systems is very restricted. What works in practice in Deep Learning is “supervised” learning. You show a picture to the system, and you tell it it’s a car, and it adjusts its parameters to say “car” next time around. Then you show it a chair. Then a person. And after a few million examples, and after several days or weeks of computing time, depending on the size of the system, it figures it out.

Now, humans and animals don’t learn this way. You’re not told the name of every object you look at when you’re a baby. And yet the notion of objects, the notion that the world is three-dimensional, the notion that when I put an object behind another one, the object is still there—you actually learn those. You’re not born with these concepts; you learn them. We call that type of learning “unsupervised” learning.

A lot of us involved in the resurgence of Deep Learning in the mid-2000s, including Geoff Hinton, Yoshua Bengio, and myself—the so-called “Deep Learning conspiracy”—as well as Andrew Ng, started with the idea of using unsupervised learning more than supervised learning. Unsupervised learning could help “pre-train” very deep networks. We had quite a bit of success with this, but in the end, what ended up actually working in practice was good old supervised learning, but combined with convolutional nets, which we had over 20 years ago.

But from a research point of view, what we’ve been interested in is how to do unsupervised learning properly. We now have unsupervised techniques that actually work. The problem is that you can beat them by just collecting more data, and then using supervised learning. This is why in industry, the applications of Deep Learning are currently all supervised. But it won’t be that way in the future.

The bottom line is that the brain is much better than our model at doing unsupervised learning. That means that our artificial learning systems are missing some very basic principles of biological learning.

Spectrum: What are some of the reasons Facebook was interested in setting up an AI lab?

LeCun: Facebook’s motto is to connect people. Increasingly, that also means connecting people to the digital world. At the end of 2013, when Mark Zuckerberg decided to create Facebook AI Research, the organization I direct, Facebook was about to turn 10 years old. The company thought about what “connecting people” would entail 10 years in the future, and realized that AI would play a key role.

Facebook can potentially show each person on Facebook about 2,000 items per day: posts, pictures, videos, etc. But no one has time for this. Hence Facebook has to automatically select 100 to 150 items that users want to see—or need to see. Doing a good job at this requires understanding people, their tastes, interests, relationships, aspirations and even goals in life. It also requires understanding content: understanding what a post or a comment talks about, what an image or a video contains, etc. Only then can the most relevant content be selected and shown to the person. In a way, doing a perfect job at this is an “AI-complete” problem: it requires understanding people, emotions, culture, art. Much of our work at Facebook AI focuses on devising new theories, principles, methods, and systems to make machines understand images, video, speech, and language—and then to reason about them.

Spectrum: We were talking earlier about hype, and I have a hype complaint of my own. Facebook recently announced a face-verification algorithm called “DeepFace,” with results that were widely reported to involve near-human accuracy in facial recognition. But weren’t those results only true with carefully curated data sets? Would the system have the same success looking at whatever pictures it happened to come across on the Internet?

LeCun: The system is more sensitive to image quality than humans would be, that’s for sure. Humans can recognize faces in a lot of different configurations, with different facial hair and things like that, which computer systems are slightly more sensitive to. But those systems can recognize humans among very large collections of people, much larger collections than humans could handle.

Spectrum: So could DeepFace do a better job of looking through pictures on the Internet and seeing if, say, Obama is in the picture than I could?

LeCun: It will do it faster, that’s for sure.

Spectrum: Would it be more accurate?

LeCun: Probably not. No. But it can potentially recognize people among hundreds of millions. That’s more than I can recognize!

Spectrum: Would it have 97.25 percent accuracy, like it did in the study?

LeCun: It’s hard to quote a number without actually having a data set to test it on. It completely depends on the nature of the data. With hundreds of millions of faces in the gallery, the accuracy is nowhere near 97.25 percent.

Spectrum: One of the problems here seems to be that computer researchers use certain phrases differently than lay people. So when researchers talk about “accuracy rates,” they might be talking about what they get with curated data sets. Whereas lay people might think the computers are looking at the same sorts of random pictures that people look at every day. But the upshot is that claims made for computer systems usually need to be much more qualified than they typically are in news stories.

LeCun: Yes. We work with a number of well-known benchmarks, like Labeled Faces in the Wild that other groups use as well, so as to compare our methods with others. Naturally, we also have internal datasets.

Spectrum: So in general, how close to humans would a computer be at facial recognition, against real pictures like you find on the Internet?

LeCun: It would be pretty close.

Spectrum: Can you attach a number to that?

LeCun: No, I can’t, because there are different scenarios.

Spectrum: How well will Deep Learning do in areas beyond image recognition, especially with issues associated with generalized intelligence, like natural language?

LeCun: A lot of what we are working on at Facebook is in this domain. How do we combine the advantages of Deep Learning, with its ability to represent the world through learning, with things like accumulating knowledge from a temporal signal, as happens with language; with being able to do reasoning; and with being able to store knowledge in a different way than current Deep Learning systems store it? Currently with Deep Learning systems, it’s like learning a motor skill. The way we train them is similar to the way you train yourself to ride a bike. You learn a skill, but there’s not a huge amount of factual memory or knowledge involved.

But there are other types of things that you learn where you have to remember facts, where you have to remember things and store them. There’s a lot of work at Facebook, at Google, and at various other places where we’re trying to have a neural net on one side, and then a separate module on the other side that is used as a memory. And that could be used for things like natural language understanding.

We are starting to see impressive results in natural language processing with Deep Learning augmented with a memory module. These systems are based on the idea of representing words and sentences with continuous vectors, transforming these vectors through layers of a deep architecture, and storing them in a kind of associative memory. This works very well for question-answering and for language translation. A particular model of this type called “Memory Network” was recently proposed by Facebook scientists Jason Weston, Sumit Chopra, and Antoine Bordes. A somewhat related idea called the “Neural Turing Machine” was also proposed by scientists at Google DeepMind.
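The "associative memory" idea can be sketched as a soft lookup: a query vector is compared against stored key vectors, and the memory returns a blend of the stored values weighted by how well each key matches. This is a generic sketch of that read operation, not the actual Memory Network model; the vectors and dimensions below are illustrative:

```python
import math

def softmax(scores):
    """Turn similarity scores into attention weights that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def memory_read(query, keys, values):
    """Soft associative lookup: blend stored values by query/key similarity."""
    weights = softmax([dot(query, k) for k in keys])
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# Three stored "facts", each a (key, value) pair of small illustrative vectors.
keys = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]
values = [[5.0, 0.0], [0.0, 5.0], [-5.0, 0.0]]

# A query close to the first key mostly retrieves the first stored value.
result = memory_read([4.0, 0.1], keys, values)
print([round(x, 2) for x in result])
```

Because the read is a smooth weighted sum rather than a hard index, the whole lookup is differentiable, which is what lets it be trained end to end alongside the neural net.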

Spectrum: So you don’t think that Deep Learning will be the one tool that will unlock generalized intelligence?

LeCun: It will be part of the solution. And, at some level, the solution will look like a very large and complicated neural net. But it will be very different from what people have seen so far in the literature. You’re starting to see papers on what I am talking about. A lot of people are working on what’s called “recurrent neural nets.” These are networks where the output is fed back to the input, so you can have a chain of reasoning. You can use this to process sequential signals, like speech, audio, video, and language. There are preliminary results that are pretty good. The next frontier for Deep Learning is natural language understanding.
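The feedback loop LeCun describes can be sketched in a few lines: a recurrent net's hidden state at one step is fed back in at the next, so the network carries information along the sequence. The weights below are random placeholders rather than a trained model; the dimensions are arbitrary:

```python
import math
import random

random.seed(42)

DIM_IN, DIM_HID = 3, 4

# Random placeholder weights; a real system would learn these by backpropagation.
W_in = [[random.uniform(-0.5, 0.5) for _ in range(DIM_IN)] for _ in range(DIM_HID)]
W_rec = [[random.uniform(-0.5, 0.5) for _ in range(DIM_HID)] for _ in range(DIM_HID)]

def rnn_step(x, h):
    """One recurrent step: the new state depends on the input AND the previous state."""
    return [
        math.tanh(
            sum(W_in[i][j] * x[j] for j in range(DIM_IN))
            + sum(W_rec[i][j] * h[j] for j in range(DIM_HID))
        )
        for i in range(DIM_HID)
    ]

# Process a short sequence; the final state summarizes everything seen so far.
h = [0.0] * DIM_HID
for x in [[1, 0, 0], [0, 1, 0], [0, 0, 1]]:
    h = rnn_step(x, h)
print([round(v, 3) for v in h])
```

The state `h` after the last step depends on every earlier input, which is what makes such networks suitable for the sequential signals (speech, video, language) LeCun mentions.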

Spectrum: If all goes well, what can we expect machines to soon be able to do that they can’t do now?

LeCun: You might perhaps see better speech recognition systems. But they will be kind of hidden. Your “digital companion” will get better. You’ll see better question-answering and dialog systems, so you can converse with your computer; you can ask questions and it will give you answers that come from some knowledge base. You will see better machine translation. Oh, and you will see self-driving cars and smarter robots. Self-driving cars will use convolutional nets.

Spectrum: In preparing for this interview, I asked some people in computing what they’d like to ask you. Oren Etzioni, head of the Allen Institute for Artificial Intelligence, was specifically curious about Winograd Schemas, which involve not only natural language and common sense, but also even an understanding of how contemporary society works. What approaches might a computer take with them?

LeCun: The question here is how to represent knowledge. In “traditional” AI, factual knowledge is entered manually, often in the form of a graph, that is, a set of symbols or entities and relationships. But we all know that AI systems need to be able to acquire knowledge automatically through learning. The question becomes, “How can machines learn to represent relational and factual knowledge?” Deep Learning is certainly part of the solution, but it’s not the whole answer. The problem with symbols is that a symbol is a meaningless string of bits. In Deep Learning systems, entities are represented by large vectors of numbers that are learned from data and represent their properties. Learning to reason comes down to learning functions that operate on these vectors. A number of Facebook researchers, such as Jason Weston, Ronan Collobert, Antoine Bordes, and Tomas Mikolov have pioneered the use of vectors to represent words and language.
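The contrast LeCun draws, symbols as meaningless bit strings versus vectors that carry properties, can be illustrated with hand-made toy embeddings. Real systems learn these vectors from data and use hundreds of dimensions; the numbers below are purely illustrative:

```python
import math

# Toy 3-d "embeddings" (hand-made for illustration). The dimensions here
# loosely encode animate-ness, size, and mechanical-ness.
vec = {
    "dog": [0.9, 0.3, 0.0],
    "cat": [0.9, 0.2, 0.1],
    "car": [0.0, 0.6, 0.9],
    "truck": [0.0, 0.9, 0.9],
}

def cosine(a, b):
    """Similarity between two vectors; 1.0 means the same direction."""
    d = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return d / (na * nb)

# Unlike opaque symbols, vectors expose similarity structure:
print(round(cosine(vec["dog"], vec["cat"]), 3))  # high: related concepts
print(round(cosine(vec["dog"], vec["car"]), 3))  # low: unrelated concepts
```

With symbols, "dog" and "cat" are as unrelated as "dog" and "car"; with learned vectors, relatedness becomes something a function can compute on, which is the sense in which reasoning reduces to learning functions over vectors.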

Spectrum: One of the classic problems in AI is giving machines common sense. What ideas does the Deep Learning community have about this?

LeCun: I think a form of common sense could be acquired through the use of predictive unsupervised learning. For example, I might get the machine to watch lots of videos where objects are being thrown or dropped. The way I would train it would be to show it a piece of video, and then ask it, “What will happen next? What will the scene look like a second from now?” By training the system to predict what the world is going to be like a second, a minute, an hour, or a day from now, you can train it to acquire good representations of the world. This will allow the machine to know about the constraints of the physical world, such as “Objects thrown in the air tend to fall down after a while,” or “A single object cannot be in two places at the same time,” or “An object is still present while it is occluded by another one.” Knowing the constraints of the world would enable a machine to “fill in the blanks” and predict the state of the world when being told a story containing a series of events. Jason Weston, Sumit Chopra, and Antoine Bordes are working on such systems here at Facebook using the “Memory Network” I mentioned previously.
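Predictive training of this kind can be sketched on a one-dimensional physics toy: watch objects fall, and learn to predict the next position from the recent ones. The setup, data, and one-parameter model below are illustrative inventions (real systems predict video frames), but they show how a physical constraint, here gravity, emerges from prediction alone:

```python
# Objects dropped from various heights, sampled every dt seconds:
# y = y0 - 0.5 * g * t^2 (the "videos" the machine gets to watch).
dt, g = 0.1, 9.8

def trajectory(y0, steps=6):
    return [y0 - 0.5 * g * (k * dt) ** 2 for k in range(steps)]

data = [trajectory(y0) for y0 in (10.0, 20.0, 50.0)]

# Predictor: next = 2*prev - prev2 + c, i.e. a constant-velocity guess plus
# a learned correction c. For constant acceleration the true c is -g*dt^2.
c = 0.0
lr = 0.1
for _ in range(100):
    for traj in data:
        for k in range(2, len(traj)):
            pred = 2 * traj[k - 1] - traj[k - 2] + c
            err = pred - traj[k]
            c -= lr * err  # gradient step on the squared prediction error

print(round(c, 4))            # learned correction per step
print(round(-c / dt ** 2, 2)) # the gravitational acceleration it implies
```

Nobody labeled these trajectories; the system was only asked "what comes next," yet the parameter it learns encodes the rule that thrown objects fall.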

Spectrum: When discussing human intelligence and consciousness, many scientists often say that we don’t even know what we don’t know. Do you think that’s also true of the effort to build artificial intelligence?

LeCun: It’s hard to tell. I’ve said before that working on AI is like driving in the fog. You see a road and you follow the road, but then suddenly you see a brick wall in front of you. That story has happened over and over again in AI; with the Perceptrons in the ’50s and ’60s, then the syntactic-symbolic approach in the ’70s, and then the expert systems in the ’80s, and then neural nets in the early ’90s, and then graphical models, kernel machines, and things like that. Every time, there is some progress and some new understanding. But there are also limits that need to be overcome.

Spectrum: Here’s another question, this time from Stuart and Hubert Dreyfus, brothers and well-known professors at the University of California, Berkeley: “What do you think of press reports that computers are now robust enough to be able to identify and attack targets on their own, and what do you think about the morality of that?”

LeCun: I don’t think moral questions should be left to scientists alone! There are ethical questions surrounding AI that must be discussed and debated. Eventually, we should establish ethical guidelines as to how AI can and cannot be used. This is not a new problem. Societies have had to deal with ethical questions attached to many powerful technologies, such as nuclear and chemical weapons, nuclear energy, biotechnology, genetic manipulation and cloning, information access. I personally don’t think machines should be able to attack targets without a human making the decision. But again, moral questions such as these should be examined collectively through the democratic/political process.

Spectrum: You often make quite caustic comments about political topics. Do your Facebook handlers worry about that?

LeCun: There are a few things that will push my buttons. One is political decisions that are not based on reality and evidence. I will react any time some important decision is made that is not based on rational decision-making. Smart people can disagree on the best way to solve a problem, but when people disagree on facts that are well established, I think it is very dangerous. That’s what I call people on. It just so happens that in this country, the people who are on the side of irrational decisions and religious-based decisions are mostly on the right. But I also call out people on the left, such as those who think GMOs are all evil—only some GMOs are!—or who are against vaccinations or nuclear energy for irrational reasons. I’m a rationalist. I’m also an atheist and a humanist; I’m not afraid of saying that. My idea of morality is to maximize overall human happiness and minimize human suffering over the long term. These are personal opinions that do not engage my employer. I try to have a clear separation between my personal opinions—which I post on my personal Facebook timeline—and my professional writing, which I post on my public Facebook page.

Spectrum: You’ve already expressed your disagreement with many of the ideas associated with the Singularity movement. I’m interested in your thoughts about its sociology. How do you account for its popularity in Silicon Valley?

LeCun: It’s difficult to say. I’m kind of puzzled by that phenomenon. As Neil Gershenfeld has noted, the first part of a sigmoid looks a lot like an exponential. It’s another way of saying that what currently looks like exponential progress is very likely to hit some limit—physical, economical, societal—then go through an inflection point, and then saturate. I’m an optimist, but I’m also a realist.

There are people that you’d expect to hype the Singularity, like Ray Kurzweil. He’s a futurist. He likes to have this positivist view of the future. He sells a lot of books this way. But he has not contributed anything to the science of AI, as far as I can tell. He’s sold products based on technology, some of which were somewhat innovative, but nothing conceptually new. And certainly he has never written papers that taught the world anything on how to make progress in AI.

Spectrum: What do you think he is going to accomplish in his job at Google?

LeCun: Not much has come out so far.

Spectrum: I often notice when I talk to researchers about the Singularity that while privately they are extremely dismissive of it, in public, they’re much more temperate in their remarks. Is that because so many powerful people in Silicon Valley believe it?

LeCun: AI researchers, down in the trenches, have to strike a delicate balance: be optimistic about what you can achieve, but don’t oversell what you can do. Point out how difficult your job is, but don’t make it sound hopeless. You need to be honest with your funders, sponsors, and employers, with your peers and colleagues, with the public, and with yourself. It is difficult when there is a lot of uncertainty about future progress, and when less honest or more self-deluded people make wild claims of future success. That’s why we don’t like hype: it is made by people who are either dishonest or self-deluded, and makes the life of serious and honest scientists considerably more difficult.

When you are in the kind of position that Larry Page, Sergey Brin, Elon Musk, and Mark Zuckerberg are in, you have to prepare for where technology is going in the long run. And you have a huge amount of resources to make the future happen in a way that you think will be good. So inevitably you have to ask yourself those questions: what will technology be like 10, 20, 30 years from now? It leads you to think about questions like the progress of AI, the Singularity, and questions of ethics.

Spectrum: Right. But you yourself have a very clear notion of where computers are going to go, and I don’t think you believe we will be downloading our consciousness into them in 30 years.

LeCun: Not anytime soon.

Spectrum: Or ever.

LeCun: No, you can’t say never; technology is advancing very quickly, at an accelerating pace. But there are things that are worth worrying about today, and there are things that are so far out that we can write science fiction about it, but there’s no reason to worry about it just now.

Spectrum: Another question from a researcher. C++ creator Bjarne Stroustrup asks, “You used to have some really cool toys—many of them flying. Do you still have time for hobbies or has your work crowded out the fun?”

LeCun: There is so much fun I can have with the work. But sometimes I need to build things with my hands. This was transmitted to me by my father, an aeronautical engineer. My father and my brother are into building airplanes as well. So when I go on vacation in France, we geek out and build airplanes for three weeks.

Spectrum: What is the plane that is on your Google+ page?

LeCun: It's a Leduc, and it's in the Musée de l'Air near Paris. I love that plane. It was the first airplane powered by a ramjet, a kind of jet engine capable of very high speed. The SR-71 Blackbird, perhaps the fastest plane in the world, used hybrid turbo-ramjets. The first Leduc was a prototype built in France before World War II that had to be destroyed before the Germans invaded. Several planes were built after the war. It was a very innovative way of doing things; it was never practical, but it was cool. And it looks great. It's got this incredible shape, where everything is designed for speed at the expense of the pilot's convenience. The noise from the ramjet must have been unbearable for the pilot.

Spectrum: You tell a funny story in a Web post about running into Murray Gell-Mann years ago, and having him correct you on the pronunciation of your last name. You seemed to be poking gentle fun at the idea of the distinguished-but-pompous senior scientist. Now that you’re becoming quite distinguished yourself, do you worry about turning out the same way?

LeCun: I try not to pull rank. It’s very important when you lead a lab like I do to let young people exercise their creativity. The creativity of old people is based on stuff they know, whereas the creativity of young people is based on stuff they don’t know. Which allows for a little wider exploration. You don’t want to stunt enthusiasm. Interacting with PhD students and young researchers is a very good remedy against hubris. I’m not pompous, I think, and Facebook is a very non-pompous company. So it’s a good fit.


Dissecting Glibc Memory Management: A Source-Code Analysis of Ptmalloc2


Cyber Security and US-China Relations


Repost: Mao Yunan Steps In, Former H3C President Wu Jingchuan Is Out, and the H3C Affair Keeps Escalating

The story drawing the most attention in the telecom industry today is the removal of former H3C president Wu Jingchuan.

It began on January 16, when HP announced that Mao Yunan, chairman of HP China, would also serve as chairman of H3C (H3C Technologies). After three changes of ownership in which employees never shared in any of the gains, H3C's staff finally ran out of patience. This morning large numbers of employees staged strikes and rallies at the Hangzhou headquarters and at branch offices to protest HP's appointment, demanding that management hold an all-hands meeting to hear employees out.

The wits on WeChat Moments already have a take: why is H3C on strike? H3C stands for "Huawei 3 changes"; the company has already changed hands three times, and a fourth is one too many. On its way from RMB 200 million to RMB 12 billion in revenue, H3C went through three changes of ownership:

First: In November 2003, Huawei, under pressure to settle its patent dispute with Cisco in the US market, formed the joint venture Huawei-3Com. Huawei held 51% and 3Com 49%, with 3Com paying for its stake with US$160 million in cash plus its China and Japan businesses. At the then exchange rate of roughly 8 RMB to the dollar, and given that 3Com's China and Japan businesses were already fading, H3C's overall valuation at the time was about RMB 3 billion.
Second: In November 2006, after several rounds of bidding against Huawei, 3Com acquired 100% of H3C for 1.88 billion, taking absolute control of H3C, at an overall valuation then of roughly 3.837 billion.
Third: On November 12, 2009, HP announced a US$2.7 billion (about RMB 17.5 billion) all-cash acquisition of 3Com to enter the telecom-equipment market, bringing H3C under HP.
So from 2003 to 2009, H3C's valuation grew nearly sixfold, and 3Com made a fortune. According to what was said when HP took over, H3C was to move toward full employee ownership. According to reports circulating online, HP last year planned to sell H3C to the state-owned China Electronics Corporation (CEC); the reported asking price is US$5 billion for a 51% stake, which would put H3C's market value at RMB 62 billion. If CEC's purchase goes through, HP will have netted RMB 44.5 billion in six years, a 254% gain on its investment.
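The arithmetic behind those last figures can be sanity-checked. A quick illustrative calculation follows; the RMB/USD rate of about 6.33 is my inference from the article's own numbers, not something the article states:

```python
# Sanity-check the reported H3C deal figures.
# Amounts are in units of 亿 RMB (100 million RMB).
USD_TO_RMB = 6.33                     # assumed 2015 exchange rate

stake_price = 50 * USD_TO_RMB         # $5B asking price for a 51% stake
implied_value = stake_price / 0.51    # implied total market value
hp_cost = 175                         # HP's 2009 price for 3Com, per the article

# HP's net gain = cash from the 51% sale, plus the value of the
# retained 49%, minus what it paid for 3Com in 2009.
hp_gain = stake_price + implied_value * 0.49 - hp_cost

assert abs(implied_value - 620) < 5            # article: ~620亿 market value
assert abs(hp_gain - 445) < 5                  # article: ~445亿 net gain
assert abs(hp_gain / hp_cost - 2.54) < 0.02    # article: 254% return
```

All three of the article's headline numbers are mutually consistent under that exchange-rate assumption.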

In response to HP's moves, H3C employees staged a work stoppage organized by vice president Wang Wei.

January 23: H3C vice president Wang Wei left the company over the "H3C affair."

January 26: HP CEO Meg Whitman announced that Wu Jingchuan would step down as H3C CEO. Wu joined the H3C board as vice chairman, alongside Mao Yunan and Matt Greenly, and would serve as Whitman's advisor on networking strategy in China and worldwide. Whitman also named Cao Xiangying, previously H3C's chief operating officer, as CEO, effective immediately.

February 9: a signing ceremony was held in Hangzhou for DTDream's (数梦工场) move into Yunqi Cloud Town. Hangzhou DTDream Technology Co., Ltd., a big-data services company founded in February 2015, was started by Wang Wei, the former H3C vice president and head of marketing.

February 14, Valentine's Day, a dramatic ending:

The H3C board of directors has decided to terminate all relationships between H3C and Wu Jingchuan, effective immediately. HP has likewise terminated all of its relationships with her. Under the applicable law, Ms. Wu remains bound by her non-compete and non-solicitation obligations, and H3C will take active steps to see that they are enforced.

H3C Board of Directors, February 14, 2015


Revealed: Zynga China Shuts Down; Rereading "Zynga's Great Defeat"

On the afternoon of February 11 word leaked out, and on the 12th it was made official: Zynga China was being disbanded. The earnings report that followed missed expectations, and the stock kept sliding.

From founding to disappearance, Zynga China lasted less than five years. Zynga, a social-game maker with global reach, rose with the boom in web games and declined with the shift to mobile. It listed on NASDAQ at $10 in 2011, but after only a few months of enthusiasm the stock slid steadily, hovering around $2-3 for years. After acquiring XPD Media (希佩德), Zynga set up its Beijing studio, launched a web version of Draw Something (《你画我猜》) with Sina Weibo, and later released Zynga City (《星佳城市》) and other games with Tencent Weibo. But by 2013 Zynga had gone quiet in China: Solstice Arena (《至日竞技场》), 《战争之石》, and War of the Fallen (《堕落战争》) all failed to make much of a mark.

Today we reprint a 2013 GameLook piece, "Zynga's Great Defeat," originally titled "The Rise and Fall of a Social-Gaming Empire: Written as Zynga Lays Off Staff Once Again."

June 2013 | GameLook | Cao Jinming

About the author: Cao Jinming worked at Zynga from 2010 to 2012 as producer and product lead for Zynga games published on the Facebook and Tencent platforms.

Introduction

Just past 7 a.m., I was woken by WeChat messages. In the "Zynga Leavers' Children's Alliance" (离佳儿童联盟, the WeChat group of former Zynga China employees), someone broke the news that Zynga was laying off 18% of its staff, closing three more studios, and that the stock had plunged 12%, and the group erupted in sighs. One veteran who had just left cried out, "I only put my shares up for sale yesterday, and today they fall this much!"

This was not Zynga's first round of mass layoffs. Last October it cut more than 100 people and went on to close several studios, including Boston, Austin, and the Japan office, while the stock fell 70% from its $10 IPO price. Zynga, once the empire of social gaming, the idol that other social-game companies worshipped and imitated: how did it decline so fast? Plenty of people wonder. Most Wall Street analyses blame the slide on the mass migration of Facebook users from web pages to mobile, a new battlefield where Zynga failed to gain an edge. In my view, though, that is only the external cause; the bigger problems lay inside Zynga. As a former insider I am sometimes asked about this by friends in the game industry, and it has indeed stirred up many memories and reflections.

The Glory Days

To study Zynga's decline, first look at the heights it once occupied. Strictly speaking, Zynga's peak ran from early 2010 through the first half of 2011. In that year and a half, stories about Zynga regularly led the major tech media, and Wall Street paid special attention to what was billed as "the biggest IPO since Google went public." Zynga itself, like a young, rich prince with a golden future, seemed untouchable, looking down on the entire social-game market. Flagship titles such as FarmVille and Mafia Wars commanded more than half of Facebook's game users; at the most extreme point, nine of the top ten games on Facebook were Zynga's. More important still, in that era social games genuinely had an extraordinary capacity to mint money. How much money was Zynga making? One example: in my first week at Zynga, I received a company-wide email from headquarters announcing that FarmVille had just set a new single-day revenue record: US$10 million! Yes, you read that right. Not one million, not five million, but ten million dollars, in one day, from a single title. That figure would put even today's hottest mobile games, the Clash of Clans and Puzzle & Dragons crowd, to shame. Then again, with FarmVille's daily active users reaching 40 million, the number sounds less sensational.

Hand in hand with the money machine went a company visibly on the rise and luxuries that arrived ahead of schedule. When I visited headquarters on business in early 2011, the atmosphere was busy yet relaxed. A team that had ground through a week of overtime would be taken by its department head straight to Las Vegas for the weekend, expenses reimbursed by the company, of course. Every Friday night there was a banquet: prime Kobe beef laid out on ice sculptures carved into the word "Zynga," or a seafood spread built around Alaskan king crab. Zynga had not yet gone public and the money kept pouring in, so cost control was simply not a concern. And even without an IPO, many early employees holding large stock grants had already made fortunes trading on the secondary market, buying mansions and fine cars.

Meanwhile CityVille, Zynga's big follow-up to FarmVille, had just launched, its user numbers looked set to surpass its predecessor's, and the whole company was buzzing over it. In fairness, CityVille was not the first city-building simulation on Facebook, but it was without question the finest: exquisite art, rich storylines, and innovative systems gave it five-star production quality. It went on to overtake its elder sibling FarmVille, reaching 100 million monthly active users and more than 460 million cumulative installs. I remember attending headquarters' Q4 2010 quarterly review, whose invited guest was Facebook CEO Mark Zuckerberg, making it an American version of a "summit of the two Marks" (Zynga's CEO being Mark Pincus). Zuckerberg told the meeting that CityVille was driving Facebook's own employees crazy: "What's more infuriating is that they all play in full-screen mode; they don't even bother pretending to work," to roars of laughter. Right after him, Mark Pincus took the stage and announced that Zynga was entering IPO preparations, bringing the room to a peak of excitement.

It is fair to say that from 2010 to early 2011, apart from the widely noted weakness of "over-reliance on Facebook," Zynga showed no other clear danger signals. As long as nothing went badly wrong in the Facebook relationship, it seemed, Zynga would thrive for a long time to come. And didn't Mark Zuckerberg's appearance at a Zynga quarterly meeting prove that even this worry was unnecessary?

Product Problems Surface

If I had to pick a single turning point from boom to bust, I would choose the autumn of 2011. That October Zynga released Mafia Wars 2, the sequel to its once red-hot mob-themed game Mafia Wars. As one of Zynga's early titles, Mafia Wars had been an exceptional earner; plenty of players had happily spent fortunes in what was, on its face, a crude HTML-based game. Expectations for the sequel ran correspondingly high throughout the company.

Yet obvious problems surfaced during the internal test phase before launch. The art was rough, hardly what you would call polished, and bugs were everywhere: after a few minutes of play you often had to refresh the browser and restart. But the game had already been in development for nearly two years (half of that spent building content-management tools), and the marketing and pre-launch buildup were under way, so it was pushed out anyway, unfinished.

A few days after launch, Zynga's mighty traffic machine, the cross-promotion system, kicked in. Millions of users were funneled into Mafia Wars 2 from Mafia Wars and Zynga's many other games, and together with an ad blitz on Facebook this gave the title well over ten million users almost overnight (5 million daily actives). I remember the tech media racing to report that "Mafia Wars 2 sets a new record for user-growth speed."

The game's fatal flaws soon showed themselves: retention plunged and users fled. Not even Zynga's formidable analytics could save a product doomed from the start. Zynga halted the cross-promotion and even tried steering Mafia Wars 2 players back to the original, but in vain: players who churn out of a sequel almost never return to its predecessor. In the end Mafia Wars 2 was left with only a hundred thousand or so users, and the once-profitable Mafia Wars had lost its user base as well.

This flop was arguably the first major failure since Zynga's founding. Earlier, the war-themed Empires & Allies had disappointed on revenue, but at least its inventive gameplay won recognition and its retention held up. Mark Pincus repeatedly voiced disappointment, even anger, at the Mafia Wars 2 outcome, and Zynga afterwards imposed strict conditions on cross-promotion: a new game could draw traffic from other Zynga titles only after its numbers at least "looked good."

The Mafia Wars 2 failure cast a shadow over the IPO. In December 2011 Zynga listed on NASDAQ; I watched the live stream with colleagues in the Beijing office. The SEC filings had already revealed that Zynga's margins were far lower than outsiders had guessed (mainly because operating costs had ballooned), and combined with the congenital defect of dependence on Facebook, the big IPO broke its issue price the very first day, to the dismay of many investors. And that was only the beginning.

Then, in mid-2012, came The Ville, another product Zynga had poured enormous manpower and money into. Its predecessor, FamilyVille, had been in development since 2010 as a social game built around a virtual-life concept, but repeated design do-overs kept it from shipping. Then EA's The Sims Social launched in August 2011 and threw Zynga completely off balance: a game with almost exactly the same theme and positioning, it rapidly won a large audience with its higher production quality and fresh play experience, forcing Zynga to rework its plans yet again and steer toward The Sims Social's design.

The result was that The Ville, arriving almost a year later, looked like nothing so much as a more crudely made The Sims Social. Had I been Mark Pincus, I would never have let it ship. But whether because its lead was Zynga's decorated veteran Mark Skaggs or for some other reason, Pincus seemed quite confident in the product, even cheering for it several times in company-wide email. Soon after launch it was handed a huge share of promotion resources (8 million daily actives), but the outcome was worse than Mafia Wars 2: users fled, the game was shut down after only half a year in operation, and the team was hit hard in the layoffs that followed. Worse, the excessive resemblance to The Sims Social got Zynga sued by its old rival EA; the suit eventually fizzled out, but it saddled Zynga with a reputation as a copycat.

Perhaps those two failures made Zynga overly cautious. Many of its later products show a "clone plus micro-innovation" approach, most visibly Dream Heights (modeled on NimbleBit's Tiny Tower) and Bubble Safari (modeled on King's Bubble Witch Saga), both of which drew protests from the original developers. To be fair, even at its most derivative an American company operates well above the shameless domestic cloning that does not even bother to change the art style. Still, an excess of look-alike products (CastleVille, the FrontierVille expansion, Adventure World, the sequel to Treasure Isle) and me-too mechanics sharply blunted Zynga's ability to attract new users, keep old ones, and generate revenue.

At the same time, Zynga's leadership put too much faith in the longevity of casual simulation games on Facebook and failed to notice how quickly player interest was shifting (or noticed but failed to react quickly), so it is hardly surprising that the company missed the best window as mid-core games rose on Facebook.

Major Strategic Missteps

Beyond weak products, Zynga made several major strategic missteps along the way that accelerated its decline.

1. The Draw Something acquisition

This deal has to be counted among Zynga's blunders, and everyone knows the facts: two weeks after launch Draw Something had millions of active users; Zynga "decisively" bought it for US$180 million; then its user numbers collapsed, draining away before the game contributed anything to Zynga. Investors pilloried the deal, and together with other bad news it sent the stock plunging toward $2.

On this one, I think Zynga's biggest mistake was not buying too early or too fast (honestly, I rather admire the speed of the decision) but misjudging why Draw Something had taken off. $180 million is expensive for any game, let alone a technically modest mobile title. What Zynga really prized was Draw Something's potential as a platform. My guess is the board read its popularity as pure social demand: this apparent "most social game ever made" would have high platform value, with players staying on it long-term to interact with friends. The board was wrong. Players were hooked mainly by the novelty and fun of UGC (user-generated content), while the game's own mechanics carried the fatal flaw of repetitiveness. Once the content stopped feeling fresh, players naturally drained away fast.

2. Building its own social-game platform, Project Z

In its early days as a public company Zynga was constantly criticized by investors for "over-reliance on Facebook." Its response was to build its own game platform, so players could reach Zynga's games without going through Facebook. But just as with Draw Something, I believe Zynga misjudged the essential reason social games catch fire. Social games live on social platforms; the ordinary user's first need is social, and the game comes second. The vast majority of social games cannot satisfy users' social needs by themselves, so once cut off from a social platform like Facebook, the purely game-driven demand becomes negligible, and it is no surprise that such a standalone platform failed to grow.

3. The "cross-platform" strategy

When mobile became "an opportunity even the blind could see," Zynga could hardly ignore it. To grow its share of mobile games, besides acquisitions, Zynga chose a cross-platform strategy: every new Zynga game had to be designed from the outset with both web and mobile versions, and the experience had to be completely identical. I have no objection to cross-platform as such; what I firmly oppose is crude, mechanical porting. Anyone with industry experience knows how enormously web and mobile games differ. Whether a given game even suits cross-platform is one question; "delivering a completely uniform experience" gets things exactly backwards. True cross-platform design plays to each platform's characteristics, giving each platform's players the experience that best fits it, so that all of them enjoy the same fun. Ignore that, and cross-platform becomes a shackle on product design, yielding misfits that please no platform at all.

Each of these "blunders" did Zynga real damage, yet in my view none of them is the root cause that sank this supercarrier. The fundamental problem lay in the corporate culture and in the company's attitude toward game design.

Science Collides with Art

If a game is a fusion of science and art, Zynga clearly planted itself at the "science" extreme; specifically, the other weapon Zynga is famous for in the industry: data analytics and data-driven development.

Every product manager at Zynga spent most of the working day with data. Thanks to a powerful data warehouse, A/B-testing tools, and analytical methods, Zynga rapidly accumulated a large store of "best practices" for game development; combined with the "release train" model of rapid development, its iteration and improvement speed reached unprecedented levels. Most Zynga products shipped updates twice a week, sometimes three or even four times. One hour after a new feature went live, the product manager was expected to start analyzing the data and proposing further improvements. In data-driven, iterative development, Zynga led the entire game industry.
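Zynga's internal tooling is not public, but the hour-after-launch readout described here boils down to comparing a metric between a control and a variant cohort. A minimal sketch using a two-proportion z-test (the function name and the numbers are illustrative assumptions, not Zynga's actual pipeline):

```python
import math

def ab_readout(conv_a, n_a, conv_b, n_b):
    """Compare conversion counts of control (A) and variant (B) with a
    two-proportion z-test; returns (relative lift, z score, p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    lift = (p_b - p_a) / p_a
    return lift, z, p_value

# Hypothetical first-hour numbers: 100k users per arm,
# 4.0% vs 4.3% conversion on the new feature.
lift, z, p = ab_readout(conv_a=4_000, n_a=100_000, conv_b=4_300, n_b=100_000)
print(f"lift={lift:+.1%}  z={z:.2f}  p={p:.4f}")
```

With cohorts this large, even a 0.3-point difference is strongly significant, which is why a PM could act on data an hour after launch.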

In fact, unlike most Chinese game companies, which separate design (策划) from operations, Western game companies usually expect a product manager to cover both: a more sensible arrangement in itself, since it greatly reduces communication costs. But because Zynga leaned so heavily on data, many product managers had unremarkable instincts for game design, and some barely played games at all, so it is no wonder they shipped features plainly contrary to players' interests. Since both roles had the authority to "revise" the game, product managers and game designers (closer to the Chinese 策划 role) often clashed sharply, and because the company ran on data-above-all doctrine, the product manager usually prevailed.

If PM-versus-designer clashes were merely the unavoidable friction of daily work, the mental shackles that "data above all" clamped onto product design across the company were a real problem. Over-reliance on data to validate design decisions meant Zynga dared not attempt anything the data had not already validated. That is why, from 2012 on, so many Zynga games "borrowed" the core mechanics of other successful games. For a new game to pass internal review, it first had to show it was using a "proven mechanic"; otherwise it was branded "too risky." Conversely, any feature the data showed to be effective was quickly rolled out to every product, regardless of whether the design hurt the user experience. For example, many later Zynga products carried a "Picture Wall" feature: early in a session, a dialog packed with friends' avatars popped up, and with a single click the player sent viral invitations to 50 friends, often without knowing what had just happened. First used in Bubble Safari, it was shown by the data to lift K-factor (the key metric of virality) and was then made all but mandatory in subsequent new products. To this day I consider it a dreadful experience.
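The K-factor mentioned above has a standard back-of-the-envelope form: invites sent per player multiplied by the rate at which those invites convert into new players, with K > 1 meaning self-sustaining viral growth. A minimal sketch (the rates are made-up illustrations, not Zynga data):

```python
def k_factor(invites_per_user, invite_conversion):
    """K = average invites each player sends * fraction of invites that
    convert into new players; K > 1 means self-sustaining growth."""
    return invites_per_user * invite_conversion

# Even a "Picture Wall" blasting 50 invites per player needs at least
# a 2% acceptance rate before the game grows on its own:
assert k_factor(50, 0.02) >= 1.0    # break-even
assert k_factor(50, 0.015) < 1.0    # decays without paid installs
```

This is why the feature moved the metric: multiplying the invite count by 50 makes even tiny acceptance rates add up, at the cost of the player experience described above.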

In March 2012, Zynga's market-research group keenly spotted a game called Candy Crush Saga (yes, that one) rising fast on Facebook, and its analysis concluded that match-3-plus-map-progression (meta-map) games had enormous room to grow on Facebook and mobile. To win the Beijing studio more chances at a greenlight, I led a small team through two months of overtime to design a match-3 map-progression prototype and finish a demo, ready to start the formal pitch process. Even before we began, "old hands" at headquarters kept telling me to stress in the product pitch that our game was a micro-innovation on a "proven mechanic," or it would very likely be rejected as too risky. Under that premise our prototype kept most of Candy Crush's gameplay and wrapped it in a theme and world I thought genuinely fun: a young man who dreams of becoming a top chef travels the world learning to cook the local delicacies, but must complete the tasks his master sets him...

In May 2012 the Beijing studio's art director and I flew to the San Francisco headquarters to pitch the new game. Zynga had moved into its new building, right across from my former employer Adobe, far grander than the offices it used to rent. But after visiting several teams I found morale and energy far below what they had been: if the Zynga of early 2011 felt "busy, relaxed, exhilarated," the Zynga in front of me felt "exhausted, oppressed, listless." To our astonishment, many headquarters teams were stuck in the mire of development, their products repeatedly torn down and rebuilt yet still failing internal review, their leads thoroughly demoralized. Someone kindly advised us, before the formal pitch, to call on the old hands of the "product review committee" and sound out their views first. We took the advice and arranged one-on-one meetings with the company's most authoritative figures, only to be dismayed by flatly opposing opinions. The business-side leaders generally felt our game still contained too many "unproven designs" and carried too much risk, while the senior game designers thought the prototype too similar to Candy Crush and short on innovation. No wonder so many products inside the company could not get approved: the decision-making layer had split into two opposing camps. In the end, judging our odds of success genuinely low, we abandoned the pitch and flew back to Beijing weary at heart. Later the review committee approved another match-3 map-progression game proposed by the Seattle studio, but that product has yet to ship; whether it has been killed at some milestone review, I do not know.

It was also on this trip that I heard, to my real shock, that Zynga's chief game designer Brian Reynolds was about to leave. Reynolds is a game-design master by any measure, lead designer of Civilization II among other titles, and at Zynga he personally designed FrontierVille, a game widely praised by players and one I loved myself. Many of its classic touches were borrowed by later social games; the arcing pop of reward items when you click an object (internally we called them "Doobers") came from that game. The day before I left the company, Zynga's chief creative officer Mike Verdu, formerly a producer on Command & Conquer, announced he was leaving to found a startup. The departures of Brian and Mike marked Zynga's across-the-board retreat in game design. Its self-developed products afterwards were indeed ever shorter on creativity and soul, and it could only chase good titles through publishing deals.

To be fair, data-driven development and rapid iteration are both excellent methodologies. But a game is never just numbers; above all it is a feeling, the artistic side of the medium made tangible. Ignore players' emotions entirely, judging and solving every problem from the data alone, and in the end you lose the players' support. And not daring to innovate for lack of supporting data was the deepest shackle on Zynga's road.

Zynga's Overseas Strategy
Read the full article »


Breaking: Head of R&D for H3C's Network-Security Product Line Departs

Breaking news: Liu Yu, president of R&D for H3C's network-security product line, has left the company. Earlier, marketing president Wang Wei had already departed!


Revealed: The 2015 U.S. National Security Strategy (full text)


The Natural Science First Prize, Illustrated: Transparent Computing

Original link: http://news.sciencenet.cn/htmlnews/2015/1/311393.shtm

Interpreting the Natural Science First Prize: the "cloud" era of transparent computing

Figure: functional diagram of the Meta OS "super operating system"

Figure: how transparent computing extends the von Neumann architecture

Zhang Yaoxue and his team (photo: Yang Yanfei)

An original computer-science achievement proposed, defined, designed, and implemented by Chinese scientists has won the crown of China's basic research, the 2014 State Natural Science Award, First Class. The winner is "transparent computing," a theory and model of network computing that foreign peers have described as "preceding and subsuming cloud computing." Zhang Yaoxue, professor at Tsinghua University, president of Central South University, and member of the Chinese Academy of Engineering, together with his two research teams, spent twenty years honing this one sword, combining theoretical innovation with the needs of ordinary users to create the first computing technology driven by China.

Farewell, von Neumann

For more than half a century, IT people have treated von Neumann's name as gospel. His classic single-machine stored-program architecture laid the foundation of the modern computer and long dominated the mainstream. But with the rapid growth of Internet technology and the arrival of the big-data era, the limitations of the von Neumann architecture have grown ever more apparent, producing a series of problems: weak network security, complexity for users, and an industry chain controlled by others.

"Today's computer architecture is single-machine. The more you pile onto a single machine, the larger and more complex the operating system becomes, and vulnerabilities are unavoidable," Zhang Yaoxue explains. From the 1970s on, as local-area networks developed, computing gradually entered a networked stage, but the interconnection of computers rested mainly on protocols, with no major breakthrough in the theoretical foundations.

How to break the constraints of the von Neumann architecture and obtain secure, efficient results bearing an independent "Chinese" stamp of intellectual property? The question bears not only on the future of computing technology; it has become a major issue for national and social development.

The original transparent-computing result, which Zhang calls "luck," has drawn attention at home and abroad for its revolutionary modification of the von Neumann architecture. The work in fact began in 1991, when Zhang, then teaching at Tsinghua University, devoted himself to innovation in computer architecture and computing models. Seven years later his national "863" program project succeeded, producing China's first network computer (NPC).

At the time the Internet had only just begun to spread, with China connected for less than five years, and the IT world still lived in the operating-system-is-king "Microsoft era," yet the sharp-eyed Zhang had already sensed the deep impact networks would have on the computer itself. After moving application software onto servers, he began trying to remove the operating system from the already pared-down terminal, proposing a computing model without an operating system.

In von Neumann's classic design, data and programs are stored without distinction in the same computer's memory and executed by the CPU, forming a strict hierarchy of chip layer, interface layer, operating system, application software, and network layer. A computer without an operating system is hard to imagine; not only is the machine's processing serial, but the network, like the "cloud," sits exposed at the outer layer.

In 2004 Zhang formally proposed the idea of "transparent computing." At its core is the separation of data storage, computation, and management, together with the "double-cross" principle (cross-terminal, cross-platform) and the idea of service on demand. This framework was established a full three years before the concept of "cloud computing" was proposed abroad.

Facing the seemingly paradoxical tension between openness and security, Zhang's team proposed a bold change: lowering the network interconnection plane from the relatively high network layer down to the interface layer between the chip and the operating system. This not only pried open the "black box" between chip and operating system controlled by foreign vendors; because protective code is designed into the interface layer, it also greatly reduced the risk of attacks and security vulnerabilities threatening the software layers above.

Academic observers hold that this way of seeking protection from the bottom of the computer system creatively changes nearly 70 years of fixed thinking and product structure in the field, can in theory defend the system against all virus attacks, lets users employ network services with confidence, and raises the overall security level of computing.

Going a step further, the team extended the computer bus into the network and single-machine serial processing into multi-machine parallel processing, achieving a "space-time extension" of the von Neumann architecture. With that, a new architecture adapted to the network era had taken shape.

The Chinese scholars' work has attracted close attention from the mainstream computer-science community; more than 30 universities and research institutes worldwide now conduct follow-up research in the area. ACM Fellow Marshall C. Yovits has called transparent computing the "Zhang protocol synthesis method," while another ACM Fellow, Zhang Hui, a co-founder of peer-to-peer networking and professor of computer science at Carnegie Mellon University, goes so far as to predict that transparent computing will replace the traditional von Neumann architecture that has controlled the thought and practice of computer systems for at least 60 years.

A Chinese "Super Operating System"

With the operating system removed from the terminal, how does a user operate the computer? Zhang defined and proposed a "super operating system" called Meta OS: in short, "an operating system that manages operating systems."

As early as the beginning of the 1990s, Zhang noticed that to keep up with trends or upgrade their equipment, people invested large sums every year in replacing terminals and upgrading applications. "If we had a machine that never needed upgrading and could obtain new applications directly, wouldn't that expense become unnecessary?" he asked himself.

When networks and mobile terminals became widespread, new problems appeared. "Take the iPhone: it is a closed system. If you want to use software on China Mobile's platform, today you simply cannot. The significance of our result is that it can break through the limits of the two systems and let software developed for either be used freely."

In Zhang's view, a theory that cannot guide practice is necessarily hollow. Indeed, watching the harm done to users and industry by the closed ecosystems born of hardware-software bundling and deep mutual dependence was the original motivation for his research on network computing and, ultimately, for the theory of transparent computing.

From 2009 the team extended transparent computing to the mobile Internet, and by 2010 they had finally built a model. In October 2012 the International Journal of Cloud Computing introduced this new network computing operating system in a special issue more than a hundred pages long, causing a stir in international circles.

In the transparent-computing laboratory of Central South University's computer-science department, this reporter saw the transparent-computing terminal the researchers call "Xiaobao." It looks like nothing more than a display, smaller than an all-in-one PC, with near-bare-metal capacity.

Staff demonstrated the use of AutoCAD, a large piece of software. It turns out that no data or programs are stored on the terminal; everything resides on servers, and the computer shows only virtual icons. In use, once you enter Meta OS you can call up Windows, Linux, iOS, or other operating systems on demand. Startup speed is no different from usual, and you can edit synchronously with users on the tablets and smartphones nearby.

What puzzled this reporter was that the familiar download-and-install step seemed to vanish "imperceptibly." As long as the network is up, logging in from any point in the world delivers the same user experience, with no need to worry about heavy data usage; the carefully engineered back end displays the power and charm of the revised architecture.

Zhang told this reporter that Meta OS sits between the traditional operating system and the underlying hardware chips, comparable to the BIOS layer of a traditional desktop computer. Not only does the terminal's memory footprint shrink sharply, the system also achieves multi-level caching, letting users obtain services from different operating-system platforms through a near-bare-metal little terminal and greatly lowering both the demands on the user's terminal and the constraints of any one platform.

Beyond the "Replacement Strategy"

The transparent-computing mode of computation has been vividly dubbed "streaming computation," delivering services the way water and electricity are delivered. This is precisely the "cloud computing" blueprint sketched by Internet giants like Google, here proposed and realized first by Chinese scientists, carried out of the laboratory, and applied in China's education, communications, medical, metallurgical, and other industries, drawing close attention from China Mobile, Alibaba, Tencent, Huawei, and Lenovo.

This original Chinese result in network computing has drawn serious attention from the international IT giant Intel. Since 2007 Intel has maintained a dedicated research team to track the work and promote it energetically. At Intel's 2012 global information-technology summit, then-president Renée James spent an hour introducing Zhang Yaoxue and transparent computing, predicting that "the coming decade will be the decade of transparent computing." Promoting a technology not of Intel's own invention so prominently is unprecedented in Intel's history and exceedingly rare anywhere in global IT.

Even so, Zhang remains clear-eyed about the gap between China and the rest of the world in information technology. "The CPUs used domestically today are basically not made by us but abroad, so whether the computers we use are actually secure, we ourselves do not know." His team's research, he says, has now worked out the structural instructions of foreign CPUs: "We still don't know very clearly what is inside the door, but at least we are standing guard at the doorway."

In Zhang's view, the theoretical and practical breakthroughs of transparent computing matter to industry chiefly in this way: they change the "replacement strategy" that China's operating-system development has always followed, can address the major problem of national network security and the long dependence of operating-system technology on others, and carry the prospect of forming a new IT industry chain and pulling other industries through upgrades.

On the development of China's IT industry, Zhang speaks with deep feeling. "In the past, when another company built a CPU, we wanted to build a CPU to replace it; when someone had the Windows operating system, we wanted to build an operating system to replace it." In IT, he says, replacement is in fact a brutally hard strategy, "because the established companies are thoroughly mature in user base, capital, and market; you cannot easily shake them." The answer he gives is coexistence.

"Transparent computing is an extension of stored-program computing, not a complete replacement," Zhang says. It "does not destroy, does not oppose, does not eliminate" the existing architecture, but it will spawn many new kinds of terminals, changing business models and the way software is used.

Looking back on the team's research, Zhang admits that at first they simply concentrated on the problem itself, refusing to merely imitate and replace, with no thought of producing something that would shake the world. "Starting from real problems, some clear motivations and original ideas emerged." They then drilled into that motivation and those ideas for twenty years straight.

Throughout, support from the National Natural Science Foundation of China played an important role. In 1995, 1997, and 2009, the transparent-computing line of research won Foundation grants three times; more than a decade of sustained support laid a solid base for the continuing work.

"The Foundation's 'neither too much nor too little' funding and its mandate to encourage exploration and tolerate failure push researchers to shed restlessness, respect science, stay loyal to their ideals, and immerse themselves in scholarship, laying a solid research foundation at the age when creativity is at its peak," Zhang says. Stay true to the original aspiration and you reach the end: perhaps that is the answer behind the "luck." (Original title: Transparent Computing: Creating the Next "Cloud" Era; Interpreting the 2014 State Natural Science Award, First Class)


The Top 50 "Paper-Flooding Champions" in Computer Systems

The top 50 institutions flooding the computer-systems literature. Among them:
#1. MIT (6)
#2. Stanford (5)
#3. Washington (4)
#4. Wisconsin, Texas, UCSD (3)
#7. Berkeley, CMU, Columbia, MPI-SWS, Michigan, Princeton (2)
