Hacker News
What's Worked in Computer Science (danluu.com)
135 points by cjbprime on Nov 24, 2015 | 63 comments


The claim about RISC being a No is bizarre. RISC has definitely won, for two big reasons: 1. Intel redesigned its x86 processors to execute using a RISC model internally, and AMD uses the same idea. A processor architecture is defined by its execution model (not its external encoding). 2. ARM is more ubiquitous than Intel, and ARM is a RISC architecture. In fact, ARM may be the dominant processor architecture in the next 10 years.

RISC is resoundingly a Yes.


The author addresses this:

> It’s possible to nitpick RISC being a no by saying that modern processors translate x86 ops into RISC micro-ops internally, but if you listened to talk at the time, people thought that having an external RISC ISA would be so much lower overhead that RISC would win, which has clearly not happened. Moreover, modern chips also do micro-op fusion in order to fuse operations into decidedly un-RISC-y operations.
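A rough sketch of the internal translation the quote describes. The instruction format below is invented for illustration (real x86 decoders are vastly more complex), but it shows the load/op/store split that turns a memory-operand CISC instruction into register-to-register micro-ops:

```python
# Toy sketch, not a real x86 decoder: how a CISC-style memory-operand
# instruction might be split into RISC-like micro-ops internally.
def decode_to_micro_ops(instr):
    """Split a simplified 'add [mem], reg' form into load/op/store micro-ops."""
    op, dst, src = instr
    if op == "add" and dst.startswith("["):  # memory destination needs load + store
        addr = dst.strip("[]")
        return [
            ("load", "tmp", addr),    # bring the memory operand into a register
            ("add", "tmp", src),      # the ALU work is register-to-register
            ("store", addr, "tmp"),   # write the result back to memory
        ]
    return [instr]                    # register-only ops pass through unchanged

print(decode_to_micro_ops(("add", "[0x1000]", "rbx")))
# [('load', 'tmp', '0x1000'), ('add', 'tmp', 'rbx'), ('store', '0x1000', 'tmp')]
```

Micro-op fusion, which the quote also mentions, runs in the opposite direction: the same pipelines will merge adjacent micro-ops back into wider internal operations when that's cheaper.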


The success criteria are not mentioned at all, so there is no clear way of settling this.

Yes, the brand "RISC" failed, but does that mean the idea behind RISC presented by the computer science community failed?

I'm no processor expert, but it sounds to me like the paradigm behind RISC is both theoretically and practically sound, and is a key component of most modern CPUs.


I would disagree. Intel adds more and more special instructions for niche uses (crypto, HPC, virtualization, etc.) for the sake of efficiency. This becomes increasingly necessary for performance improvements, because frequency scaling has hit its limits for now and manycore is about to (see Dark Silicon).


One point of RISC was better production yield due to higher die regularity. By this criterion RISC clearly won, and this is also one of the driving factors behind the CISC-as-microcoded-RISC approach.


> A processor architecture is defined by its execution model (not it's external encoding).

Not quite. Here's how these terms are actually used in computer engineering and computer architecture:

"Microarchitecture": what the implementation of a given architecture looks like under the hood.

"Instruction set architecture": the externally visible characteristics of the processor from the software's point of view.

"Architecture" and "CPU architecture" usually mean ISA.

"Processor architecture" is not a widely used term, but the first several hits on Google refer to the "instruction set architecture" meaning.


AVR, PIC, NEC, and the like are classic RISC CPUs, and they are already dominant by a huge margin.


> In fact, ARM may be the dominant processor architecture in the next 10 years.

Are you claiming it isn't dominant now? What kind of chip is in all those set-top boxes people use these days? How about mice?


Set-top boxes are predominantly MIPS, with only a small percentage being ARM.


We write web services in Haskell every day, and it's much, much easier to refactor, and runs much faster, than the PHP that came before it. This is at large scale for a mature tech company, too.

We've also had good success with Erlang for messaging, although its standard libraries have annoying bugs.

Based on this, I'd say: Functional programming: yes. Fancy type systems: maybe.

The reason for "maybe" is that we use ByteString a little too happily and don't newtype enough to prevent all bugs ahead of time...

Then again, C++ templates are definitely fancy type systems, and we ship lots of important code on that every day, too!
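The newtype discipline mentioned above (from Haskell) can be sketched in Python with `typing.NewType`: give semantically different byte strings distinct types so a checker like mypy catches mix-ups. The names here (`UserId`, `SessionToken`, `lookup_user`) are hypothetical, purely for illustration:

```python
from typing import NewType

# Sketch of the Haskell "newtype" idea: distinct static types for byte
# strings with different meanings, so a type checker can flag mix-ups that a
# ByteString-everywhere style would silently allow.
UserId = NewType("UserId", bytes)
SessionToken = NewType("SessionToken", bytes)

def lookup_user(uid: UserId) -> str:
    # Hypothetical lookup; a real service would hit a database here.
    return {UserId(b"u1"): "alice"}.get(uid, "unknown")

uid = UserId(b"u1")
token = SessionToken(b"deadbeef")
print(lookup_user(uid))  # prints "alice"
# lookup_user(token) would be flagged by mypy, though it still runs at
# runtime: that's the cost of doing this trick in a dynamically checked
# language rather than in Haskell.
```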


Yeah, I've been doing Scala professionally for 5+ years, and not just stringy types (free monads are everywhere once you start looking for them), so this was a weird thing to read.


If you don't mind me asking, what company is this that uses Haskell every day?


I have a video in my watch queue that shows Facebook is:

https://youtu.be/sl2zo7tzrO8


He's talking about IMVU.



which annoying bugs in erlang's stdlib are you referring to?


This article discusses "today", yet it has no publication date. Well, it mentions "2015" in the conclusion, but that's it.

This trend to publish without a date is ill-conceived.


Author is slyly redefining "what's worked" to "what's widely used".


What are some examples of things that "worked" but are not "widely used"? Erlang?


From this article's list: "fancy" type systems, functional programming, RISC (!). Formal methods, albeit in specific domains. Security? (I'm not even sure how we can say whether "security works" or not.)

Erlang is a good specific example. Haskell and OCaml are others. (I've seen first hand just how incredibly well OCaml works at Jane Street, even if almost nobody else uses it.)

SMT solvers are another cool example. They're used by certain kinds of researchers (including "applied" research like at security companies), but most people haven't even heard of them. At the same time, they let you do some absurd things surprisingly easily that seem intractable. Everything from exhaustively verifying algorithms and invariants to automatically detecting and debugging bugs on the fly to running code backwards. And they make a whole bunch of less absurd but still useful constraint-based problems virtually trivial. Lots of people would benefit from them—if they knew about them and cared.
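To make the "exhaustively verifying algorithms" point concrete, here is the kind of check an SMT solver answers symbolically, brute-forced over a small bounded domain instead (the midpoint functions are a standard example, not from the article): do two ways of averaging 8-bit unsigned ints always agree?

```python
from itertools import product

# What an SMT solver does symbolically, done by brute force over 8-bit ints:
# exhaustively check whether two midpoint computations always agree.
def naive_mid(lo, hi):
    return ((lo + hi) & 0xFF) // 2  # the sum wraps around at 8 bits

def safe_mid(lo, hi):
    return lo + (hi - lo) // 2      # no intermediate overflow

def find_counterexample():
    for lo, hi in product(range(256), repeat=2):
        if lo <= hi and naive_mid(lo, hi) != safe_mid(lo, hi):
            return lo, hi           # first input where the versions differ
    return None

print(find_counterexample())  # (1, 255): the sum 256 wraps to 0
```

An SMT solver (Z3, CVC5, etc.) finds such counterexamples without enumerating the domain, which is what makes the approach scale to 64-bit invariants that are hopeless to brute-force.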

Another great example is the Nix package manager which does package manager magic, but hasn't caught on yet. This is partly a function of its limited ecosystem and documentation, but it does work and I know people using it in production.

Hell, for some definition of "widely used", Emacs and VIM fall under this banner. They're somewhat more popular than functional programming and the like, but, in the grand scheme of things, they're a narrow niche among everyday programmers.


Functional programming, even when not in, strictly speaking, functional programming languages (MLs, Haskell, lisps, Erlang), has worked. It's moving more and more into mainstream languages. Either as a sublanguage (Linq) or by piecemeal incorporation of its concepts (pattern matching, anonymous functions, TCO so recursion can be used penalty-free, etc.).
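The piecemeal incorporation described above is easy to see in Python, which is not a functional language but absorbed anonymous functions, higher-order functions, and folds from the FP tradition (and, as of 3.10, structural pattern matching):

```python
from functools import reduce

# Functional concepts living comfortably in a mainstream imperative language.
squares = list(map(lambda x: x * x, range(5)))       # anonymous function + map
evens = list(filter(lambda x: x % 2 == 0, squares))  # higher-order filter
total = reduce(lambda acc, x: acc + x, squares, 0)   # explicit fold / reduce

print(squares, evens, total)  # [0, 1, 4, 9, 16] [0, 4, 16] 30
```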


> Functional programming, even when not in, strictly speaking, functional programming languages (MLs, Haskell, lisps, Erlang), has worked

How do you know? By "worked" the author means "has unambiguously turned out to be a good idea with very significant benefits". That some people really like something and are able to write good, working programs with it doesn't mean it's "worked" in the sense discussed.

First, each of these languages is "functional" in a very different way, so much so that even their inclusion in the same category is tenuous. Second, none of these (except maybe Erlang) has been used extensively enough in large-scale projects to give us a definitive answer regarding the actual benefits.

That functional concepts are finding their way into the mainstream is very true, but some of these have also been associated with OO for a very long time now (in fact, they've been a part of OO longer than three of the four FP languages you list have existed). Smalltalk, the most OO of OO languages, had anonymous functions from the beginning, and would tell you that anonymous functions are just as much OO as functional. TCO is not very mainstream, and as Guy Steele demonstrated, is just as relevant for OO as it is for functional. Also, adoption of some concepts doesn't mean FP as a paradigm has worked (the bigger problem is that FP doesn't even have a definition; it is continuously redefined to mean whatever it is people associate with languages that call themselves FP).

In short, we're not sure quite yet, hence "maybe".


I agree with you that from the perspective of the article Functional Programming is still a "maybe" as it doesn't have the same sort of following or use in large-scale projects as OO, but I suppose it might be the perspective of the article that is incorrect.

Whatsapp has demonstrated the power of FP with Erlang, and other programming languages like Elixir are starting to gain traction as they improve on the older frameworks and make development more "comfortable".

Whether they will get enough traction and garner enough of a following to become a "Yes" from this article's perspective is difficult to say, but I am confident people will see the light of FP sooner than they think. I feel like it won't be long until FP is the new cool kid on the block; I'm just not sure who will be responsible for leading that.


> Whatsapp has demonstrated the power of FP with Erlang

Whatsapp demonstrated the power of actors and lightweight threads with Erlang. FP had little to do with that.


Good point... I suppose that is also what Elixir is bringing to the party.


"Is Erlang object oriented? Joe Armstrong: Smalltalk got a lot of the things right. So if your question is about what I think about object oriented programming, I sort of changed my mind over that. I wrote an article, a blog thing, years ago - Why object oriented programming is silly. I mainly wanted to provoke people with it. They had a quite interesting response to that and I managed to annoy a lot of people, which was part of the intention actually. I started wondering about what object oriented programming was and I thought Erlang wasn't object oriented, it was a functional programming language.

Then, my thesis supervisor said "But you're wrong, Erlang is extremely object oriented". He said object oriented languages aren't object oriented. I might think, though I'm not quite sure if I believe this or not, but Erlang might be the only object oriented language because the 3 tenets of object oriented programming are that it's based on message passing, that you have isolation between objects and have polymorphism.

Alan Kay himself wrote this famous thing and said "The notion of object oriented programming is completely misunderstood. It's not about objects and classes, it's all about messages". He wrote that and he said that the initial reaction to object oriented programming was to overemphasize the classes and methods and under emphasize the messages and if we talk much more about messages then it would be a lot nicer. The original Smalltalk was always talking about objects and you sent messages to them and they responded by sending messages back.

But you don't really do that and you don't really have isolation which is one of the problems. Dan Ingalls said yesterday (I thought it was very nice) about messaging that once you got messaging, you don't have to care where the message came from. You don't really have to care, the runtime system has to organize the delivery of the message, we don't have to care about how it's processed. It sort of decouples the sender and the receiver in this kind of mutual way. That's why I love messaging.

The 3 things that object oriented programming has is messaging, which is possibly the most important thing. The next thing is isolation and that's what I talked about earlier, that my program shouldn't crash your program, if the 2 things are isolated, then any mistakes I make in my program will not crash your program. This is certainly not true with Java. You cannot take 2 Java applications, bung them in the JVM and one of them still halts the machine and the other one will halt as well. You can crash somebody else's application, so they are not isolated.

The third thing you want is polymorphism. Polymorphism is especially regarding messaging, that's just there for the programmer's convenience. It's very nice to have for all objects or all processes or whatever you call them, to have a printMe method - "Go print yourself" and then they print themselves. That's because the programmers, if they all got different names, the programmer is never going to remember this, so it's a polymorphism. It just means "OK, all objects have a printMe method. All objects have a what's your size method or introspection method."

Erlang has got all these things. It's got isolation, it's got polymorphism and it's got pure messaging. From that point of view, we might say it's the only object oriented language and perhaps I was a bit premature in saying that object oriented languages are about. You can try it and see it for yourself."

http://www.infoq.com/interviews/johnson-armstrong-oop
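Armstrong's three tenets (messaging, isolation, polymorphism) can be caricatured in a few lines of Python. This toy has no real concurrency or fault isolation, and the actor and message names are invented, but it shows why a shared message like "print_me" works across unrelated objects:

```python
from queue import Queue

# Toy actors: the only way in is a message; any actor can answer "print_me".
class Actor:
    def __init__(self):
        self.mailbox = Queue()

    def send(self, msg):
        self.mailbox.put(msg)  # no shared state, just messages

    def drain(self):
        replies = []
        while not self.mailbox.empty():
            replies.append(self.handle(self.mailbox.get()))
        return replies

class Counter(Actor):
    def __init__(self):
        super().__init__()
        self.n = 0

    def handle(self, msg):
        if msg == "inc":
            self.n += 1
        return f"Counter({self.n})" if msg == "print_me" else None

class Greeter(Actor):
    def handle(self, msg):
        return "Greeter: hello" if msg == "print_me" else None

c, g = Counter(), Greeter()
for m in ("inc", "inc", "print_me"):
    c.send(m)
g.send("print_me")
print(c.drain(), g.drain())  # both understand "print_me": polymorphism by message
```

Erlang gets the parts this sketch fakes for free: each process has its own heap, so a crash in one cannot corrupt another.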


I'm in the "yes" camp for FP. At the very least it is accepted as a good thing to add to non-FP languages these days (Java 8), which is a good enough indicator for me. While not strictly FP, FP ideas have influenced some major successes like, say, MapReduce.
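The FP lineage in MapReduce is easy to show: a pure map phase, a shuffle that groups by key, and a pure reduce phase. A minimal word-count sketch (the three-function split here is illustrative; a real system distributes each phase):

```python
from collections import defaultdict
from functools import reduce

# Word count in the MapReduce shape: map -> shuffle (group by key) -> reduce.
def map_phase(docs):
    return [(word, 1) for doc in docs for word in doc.split()]

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: reduce(lambda a, b: a + b, values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["to be or not", "to be"])))
print(counts)  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

Because map and reduce are pure functions, the framework can rerun any failed shard without coordination, which is the FP idea doing real work.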


What Java added has been part of OOP for 30 years, since before ML (well, around the same time) and Haskell were even invented (let alone Erlang). So I'd put "anonymous functions" in the yes column, but not sure about FP yet.


My point is that since it was finally added to the most "mainstreamy" non-functional language there is, there's some mainstream acceptance. It's not my POV but an interesting data point.


Erlang is 29 years old


Happy birthday! I was being approximate, though, and it turns out that my approximation was way off. Anonymous functions have been part of OO for 43 years, so still long before Erlang (and Haskell).


I think the same could be said of the fancy type system segment (although I'm not entirely sure why those were disjoint -- they so frequently go hand in hand).


Yes. Should've included that. The type theory that came from functional languages is moving into the mainstream languages, too.


I think he's defining "worked" to be "clearly demonstrated to be significantly beneficial for a wide range of applications in numerous production uses in the industry". Seems reasonable.


Speaking as someone currently in the "enamored with capabilities" phase, I'd like to know more about how people get disillusioned with them and whether it can be prevented. Is it something we just need to keep trying until we get it right, or are they somehow fundamentally unworkable despite their elegance?

Also, does the increasing importance of power consumption affect the RISC vs CISC issue? Naïvely I would expect that to work in favor of RISC.


I would put the author in the "doesn't know what he's talking about" camp in regard to capabilities. The same could probably be said regarding the claims that distributed computing works and that maybe we're OK at security now. All of these feel like fields in their infancy.

Capabilities are among the few systems that can solve authorization decisions involving three or more principals. Ambient authority systems, e.g. systems based on ACLs, are inherently broken when dealing with 3 or more principals (see http://waterken.sourceforge.net/aclsdont/):

"The ACL model is unable to make correct access decisions for interactions involving more than two principals, since required information is not retained across message sends. Though this deficiency has long been documented in the published literature, it is not widely understood."

If you're looking for a place this arises in practice, look no further than the same-origin policy in web browsers, and the complexity of three principal interactions where one principal is a user, another is a web site, and the third is a malicious web site.

That's not to say we should abandon the same-origin policy, but we need authorization primitives that seamlessly span multiple principals.

Projects like Capsicum are adding capabilities to OSes like Linux and FreeBSD.

Cap'n Proto is demonstrating what's possible with a modern implementation of CapTP.

I think the need for authorization-centric (as opposed to identity-centric) access control systems is becoming increasingly clear. Capabilities (particularly in the CapTP sense) are one realization of this idea but there are others.
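The core distinction from the linked ACL critique can be sketched in a few lines: under ambient authority, a deputy names a resource and a separate check decides, so it can be confused into acting for the wrong principal; a capability is an unforgeable reference that *is* the permission. The file paths and "logger" role below are hypothetical:

```python
import io

def log_ambient(filename, line, acl):
    # ACL style: the function holds ambient authority to open anything, and a
    # name-based check decides. A confused deputy can be handed a name it is
    # allowed to open on behalf of a principal who should not be.
    if filename not in acl["logger"]:
        raise PermissionError(filename)
    ...  # writing elided in this sketch

def log_capability(sink, line):
    # Capability style: the caller can only pass a writable object it already
    # possesses. There is no name to check, so nothing to confuse.
    sink.write(line + "\n")

buf = io.StringIO()  # the capability: a writable stream
log_capability(buf, "hello")
print(repr(buf.getvalue()))  # 'hello\n'
```

The browser same-origin example above is exactly the name-based pattern: three principals, decisions made from strings, and an entire vulnerability class (CSRF) that capability-shaped designs avoid by construction.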


I would put your comment in the "doesn't understand what the author is saying" camp. When he says something has "worked" he doesn't mean it has promising working prototypes or has the potential to improve things. He means that the benefits have been very, very clearly demonstrated to be very significant and widely applicable. And "clearly demonstrated" means their impact has been shown in numerous production settings.

As this is the case for distributed systems but not for capabilities, the author's categorization seems spot on, even if current work on distributed systems is "broken", when "broken" has its common modern connotation of "has (possibly much) room for improvement".


To reiterate, because you missed my original point: The author thinks we're doing okay at security, and capabilities failed. We aren't doing okay at security, and capabilities are still a promising solution.

The solutions where capabilities are truly most promising have no embedded competitors. I'm looking at things like SELinux here. Few other solutions (except e.g. Macaroons) are actually capable of making correct authorization decisions in scenarios involving 3+ principals: the competition is broken, and vicariously, so are most authorization systems which try to solve 3+ principal authorization problem correctly.

You may as well be arguing that memory safe / garbage collected languages lost to C circa 1995. C is broken and programs written in C will always be full of holes as compared to equivalent programs in memory safe languages just in the same way as authorization systems not built to handle 3+ principals will make wrong decisions. This is why it's 2015 and we're still dealing with CSRF.

If you implement SELinux at your job like I do, and have some informed criticism about how Capsicum is unneeded because SELinux is great, I'd love to hear it! But I'm guessing that isn't the case... I am also guessing it's an area the OP is not particularly informed about.


> you missed my original point

I think you have simply misunderstood what the author is actually saying. He is not saying what you think he is, and I don't think you disagree with him at all.

> The author thinks we're doing okay at security

Where does he say that? He says, "security still isn’t a first class concern for most programmers", and puts it in the "No" column.

> capabilities are still a promising solution.

The author doesn't dispute that. He says nothing about the promise certain technologies hold. In fact, he says: "I’m much more optimistic about research areas that haven’t yielded much real-world impact (yet), like capability based computing and fancy type systems. It seems basically impossible to predict what areas will become valuable over the next thirty years."

The article, however, is not concerned with possible solutions, even those that show great promise. It is only and solely concerned with solutions that have been conclusively shown to work well in the field by having a wide applicability and usage in numerous production projects. As capabilities -- so he says and you don't seem to dispute -- aren't there yet (he emphasizes the "yet"), they belong in the "no" column. That they show great promise -- or even maybe contain the only solution to problems we haven't been able to tackle -- bears absolutely no relevance to the issue of whether they "have worked" or not. It is a statement of fact that they haven't yet, and you don't seem to dispute that.

Similarly, putting something in the "no" column doesn't mean that something has lost. The author makes that abundantly clear. A no may well turn out to be a yes in time. A no simply means "not yet" (while "maybe" means that it may in fact be "working" now, we just don't have enough information to conclusively say).

The article therefore voices no criticism on the validity of certain technologies at all. That, too, the author makes abundantly clear. It is nothing more than a list of those technologies that to this day have (or haven't yet) been conclusively proven to provide significant benefits and wide applicability in the field. Those that only show promise belong in the "no" category. It is an inventory, not a critique.

You have no argument (so it seems) either with me or the author. In fact, I have no knowledge of this subject at all: I have no idea what capabilities are, I have never heard of Capsicum (or SELinux for that matter), and I have never even worked on security-related issues. It was just clear to me that you are responding to criticism that is simply not there, and responding with great force by dismissing the author (who is very well versed in technology), which is rarely a good idea, especially if you don't carefully read what he has to say.


> Where does he say that? He says, "security still isn’t a first class concern for most programmers", and puts it in the "No" column.

He put it in the "Maybe" column for 2015. Try searching for "Security".

So Capsicum patches landed in the Linux kernel mainline, Kenton Varda is shipping Cap'n Proto and Sandstorm.io, and capabilities get a "no" but security gets a "maybe"? K.

> I have no knowledge on this subject at all: I have no idea what capabilities are, I have never heard of Capsicum (or SELinux for that matter) and have never even worked on security related issue.

Well that explains a lot...


You still fail to say where you disagree with what the article says as opposed to disagreeing with what you think the article says. It's not a matter of knowing the subject matter but of text comprehension. Sorry, I missed the "maybe" on security, but he explains that "a handful of projects with very high real world impact get a lot of mileage out of security research". With all due respect to Cap'n Proto and Sandstorm.io, neither have "very high real world impact". If you think capabilities are widely used in production, in multiple domains, and have had a significant and conclusive contribution to the industry, then you'd be in disagreement with the author, and I'm sure he'd gladly change his description in light of the new information. Like I said, it's an inventory, not a critique.


Yeah, all that stuff is why I'm enamored with them :). I didn't know about CapTP, though. I'll have to look at that more closely.


What exactly are capabilities, in this context? It's not something I've been acquainted with. Wikipedia gave me a short article that appears to be about schemes for tagged memory, which I vaguely remember hearing about for a hot minute in one of my intro survey CS classes, and then never again.



Thanks. That section of the original article kind of assumes you know what is being referred to, and I wasn't eager to go through the 80 page pdf that was linked.

Trying to google Capabilities+Computer+Science turns up a lot of noise.


I am not sure I would categorize type systems as a maybe. Dynamically typed languages succeeded only because they are simple and popular with beginners (Python, PHP, JavaScript), and hence are popular by number of programmers, or because they are the only way to be cross-platform (JavaScript). But I am not convinced that dynamic languages "work". They are great for simple things but completely break down on large projects.


First of all, this isn't so much about CS as about computer engineering. Next, as pointed out already, this selection is quite random and subjective.


What a strange random list of things big and small loosely related to computers.

So "bitmaps" work? "Software engineering" does not.

Sure things change in this industry. But you really need a bit more focus to capture what is going on and what drives it.


In 1995, "Software Engineering" would usually have implied CASE tools and heavyweight UML. Its current "success" is largely based on a bit of redefinition toward testing, team interactions, and common values. They are rather different notions.


Yes. Whereas "bitmaps" here probably refers to pixel-level raster graphics, "software engineering" is an entire field of its own.

If anything, we are moving away from raster graphics toward vector graphics, declarative approaches, and display schemes that adapt to the device and its resolution.


What about objects/subtypes? A 'Yes' in 1999, but these days we've learned the hard way that there are at least 2 contexts where objects don't quite work:

  - relational databases 
    (objects & classes are organized in trees, 
     and a tree is a sin in RDB land)
  - concurrency & parallelism
so a No? (although some people keep on trying)


> "DEC started a project to do dynamic translation from x86 to Alpha; at the time the project started, the projected performance of x86 basically running in emulation on Alpha was substantially better than native x86 on Intel chips."

I've heard this several times, but would love to read more about it. Anyone know where to find more information?


https://en.wikipedia.org/wiki/FX!32

(page 7) http://web.stanford.edu/class/cs343/resources/fx32.pdf

note that it was a 500 MHz DEC Alpha vs a 200 MHz Pentium


> In retrospect, the reason RISC chips looked so good in the 80s was that you could fit a complete RISC microprocessor onto a single chip, which wasn’t true of x86 chips at the time.

This is simply not true. In fact, the first RISC I saw, the 88000, would not fit on one chip.


Software engineering is a loaded term. It covers the range from formal planning bureaucracies to seat-of-the-pants extreme programming. Any software company that makes it into its second decade has probably hacked together something that works.


As usual, read this as "what's popular in Computer Science" more than "what's worked". Lots of things listed in the Maybe and No category (both the original table and in the article) do work but aren't super-popular and some things listed as "Yes" like bitmaps and subtypes are bad but ubiquitous.

Apart from my usual spiel on how popularity is not particularly meaningful[1], this is also an example of a fascinating idea from poker: "results oriented thinking"[2]. It's a terrible name, but a crucial concept: you can do everything right and still fail or you can do things wrong and still succeed. In Poker it means you can bet correctly and still lose to the luck of the draw and vice-versa; you can't generalize from a single example or even a handful of examples. Of course, this is exactly what so many people in industry do. Results oriented thinking is endemic in practical software engineering.

We even have our own name for it, or at least part of it: the IBM effect. "Nobody ever got fired for buying IBM."[3] If you do the same old thing and fail, well, things happen. If you try something new and fail, well that thing you tried has to be terrible! Never do it again! Of course, in reality, those "things that happen" still dominate, but that's not how people make decisions. Then it's all magnified through institutional memory and "common wisdom".

It's also magnified because there's a certain kind of personality that values pragmatism above all else and categorically opposes anything new, idealistic or academic. (To be fair, pretty much the opposite of me :P.) The tricky thing is that they have a point... some of the time. But they'll use any sort of failure to rationalize their view, and because decision makers tend to be so results-oriented this sort of rationalization has an outsized influence.

Putting this all together both this article and Butler Lampson's original argument tell us a lot more about the social dynamics of software engineering than they do about computer science. There are still interesting insights to be had, but certainly not the ones the article is arguing for.

[1]: https://news.ycombinator.com/item?id=10567962

[2]: Unfortunately, I haven't found a single good description of "results oriented thinking" as the phrase is used here. This blog post is okay, but the first part plays dubious semantic games with words that you may as well ignore: http://randomdirections.com/why-being-results-oriented-is-ac...

[3]: Hilariously, this was the first Google result for the phrase: https://en.wikipedia.org/wiki/Fear,_uncertainty_and_doubt


> As usual, read this as "what's popular in Computer Science" more than "what's worked"

I don't think so. By "worked" he means that the benefits have been very, very clearly demonstrated to be very significant and widely applicable. And "clearly demonstrated" means their impact has been shown in numerous production settings.

> Lots of things listed in the Maybe and No category (both the original table and in the article) do work but aren't super-popular

Not in the sense he means, which doesn't mean popular, but rather "clearly demonstrated to provide great benefits in the field with wide applicability".

> and some things listed as "Yes" like bitmaps and subtypes are bad but ubiquitous.

I don't know what you mean by "bad". If by bad you mean "have some serious drawbacks" or "can be possibly replaced by something better", then sure, but that doesn't mean they haven't worked. If by "bad" you mean that their benefits haven't been clearly demonstrated, then I'd disagree about your conclusion on bitmaps and subtypes.

> It's also magnified because there's a certain kind of personality that values pragmatism above all else and categorically opposes anything new, idealistic or academic.

Or is simply wary of incorporating ideas that haven't been sufficiently proven into products with billions of dollars on the line. OTOH, there is another kind of personality that confuses pure research with battle-tested ideas, and think that something that is shown to have benefits in research settings will surely have those benefits in the field without worse side-effects.

> decision makers tend to be so results-oriented

Shouldn't they be? Would you rather the people making the decisions on the software used to run your power stations, airports, banks, and government embrace the "new" and be "idealistic or academic"?

Research is research, and the industry is the industry, and ideas have a very, very long maturation process of going from the research stage to the "demonstrably works at a large scale in the industry" phase. That's a good thing, and that's what it's like in all disciplines. Software is rather unique in that it has people who think this shouldn't be the case. Thank God they're not the decision makers.

> We even have our own name for it, or at least part of it: the IBM effect

And now, ironically, you're "spreading FUD" against an approach that is simply skeptical of things touted to be "the next big thing". Some of us have been in this industry long enough to know that most great ideas turn out not to be. That's not "nobody ever got fired for buying IBM", but "let's not bet billions of dollars and possibly people's lives on the flavor of the week just yet until we gather more evidence".


> Shouldn't they be?

No. And the people who make nuclear power plants and planes certainly don't engage in results-oriented thinking: you could easily take out half the safety features they build and not run into any catastrophes, but that's not an acceptable risk. Software companies, on the other hand, are perfectly happy taking on long-term and long-tail risk if it's more or less worked in the past, in large part because failure is far more forgivable. (Which, to be clear, is a good thing in and of itself, but does incentivize some unfortunate kinds of behavior.)

The whole point is that good decisions lead to bad outcomes and bad decisions lead to good outcomes all the time, but politics pushes people to react strongly to results, ignoring this fact. This is quite different from other fields in engineering which by necessity, tradition, culture or even regulation operate differently.

As I said, "results oriented thinking" is a terrible name which leads to confusion, but it's what we have. Unfortunately I can't think of a better one and if I did it probably wouldn't catch on. (Which, amusingly, is an example of [2] above :).)


> And the people who make nuclear powerplants and planes certainly don't engage in results-oriented thinking: you could easily take out half the safety features they build and not run into any catastrophes

As someone who was once there (not power plants, but ensuring the safety of systems whose failure could endanger the lives of many people), it's much more complicated than that. If by "results oriented" you mean "look at nothing but the bottom line", I think you're disrespecting a lot of people by assuming (with little evidence) that this is indeed the process. I can tell you that it isn't, and not just in safety-critical software. Of course, there may be some bad actors, as in every field, but I've seen no evidence to suggest that "decision makers" are on average any worse at their job than anyone else. We were results-oriented in the sense that we preferred not to let anything that didn't have industry-proven benefits into our system.

Ironically, it is sometimes the safety-critical systems (well, some classes of them) that can afford to take more risk, because they have the resources to build a new system and run it in production alongside the old one, only with its outputs not piped to real actuators (we called it a "shadow system"), and do that for a long time (a year if not more) before flipping the switch. "Plain" software usually can't afford to be so patient.
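The shadow-system pattern described above can be sketched in a few lines (a toy illustration; `legacy_controller`, `candidate_controller` and the tolerance are all hypothetical names, not anything from an actual system): both systems see identical inputs, only the proven system's output reaches the actuators, and divergences are logged as evidence before flipping the switch.

```python
def legacy_controller(reading):
    # Proven control law: simple proportional response.
    return 2.0 * reading

def candidate_controller(reading):
    # New implementation under evaluation; deliberately diverges slightly.
    return 2.0 * reading + 0.001

def run_shadow(readings, tolerance=0.01):
    actuator_commands = []   # what actually gets executed
    discrepancies = []       # evidence gathered before the switchover
    for r in readings:
        live = legacy_controller(r)
        shadow = candidate_controller(r)
        actuator_commands.append(live)   # only the old system acts
        if abs(live - shadow) > tolerance:
            discrepancies.append((r, live, shadow))
    return actuator_commands, discrepancies
```

Only after the discrepancy log stays empty (or explainably small) for long enough does the candidate get promoted to drive the actuators.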

> The whole point is that good decisions lead to bad outcomes and bad decisions lead to good outcomes all the time

Of course. Our disagreement is on what constitutes a good decision.

> politics pushes people to react strongly to results, ignoring this fact.

I don't think so. I've been a decision maker (though not alone, and sometimes without the final say) in both safety-critical settings and less risky ones, and it's often not politics but the lack of a better metric. I can assure you that we extensively studied the reasons for each failure or success to the best of our abilities and never looked just at the bottom line if we had any more information. We took quite a few calculated risks (changed our numerical algorithms, chose the then-unproven real-time Java over C++ for a safety-critical, hard real-time system, and more), but always opted for things that had been proven in the industry (or tried to prove them ourselves). It's not politics that led us to that, but lots of experience: things that work in the lab fail in the field more often than not. Ignoring that fact is just stupid (or, rather, shows a lack of experience in the field), especially when the stakes are high (and they're always high to some degree in the industry).

Besides, why risk anything on something that doesn't have strong evidence of significant benefits? We look to people like you to show us those benefits, and if you don't -- or you think you've found clear benefits, but those benefits simply aren't significant enough for us -- why take any risk? We gladly risk quite a bit when there's good reason to believe the payoff will be large enough. Even that much (let alone battle-testing) is something some of the things you'd put in the "works" category have yet to demonstrate.

That is something we in the industry tell researchers all the time. If you're content doing pure research, do it and don't whine about us not adopting your work. If not, and you would like us to use your stuff, there's a lot more you need to do than your pure research: extensive applied research with empirical results, and, most importantly, an understanding of the industry's needs (e.g. in the industry something is not adopted merely because it is "better"; it is adopted only if it is better enough to offset the adoption cost plus the associated risk). What you can't do is have it both ways -- do only the pure research and whine about us not using it. Doing so shows a lack of understanding of what the industry is, and reflects badly on those researchers much more than on the industry (which, BTW, has made amazing achievements in software).

The industry adopts brand new stuff all the time when the researchers do their job properly (well, to be fair, for some it's a lot easier). Suppose I'm a decision maker and someone says to me, my new sorting algorithm would be 20% faster for the data you're sorting. Well, the adoption cost is nearly zero, I can test the payoff rather easily, there is risk involved but it's mitigated by my ability to switch back, so I find the 20% payoff high enough and I go for it. This happens all the time. In fact, millions of people will soon switch to a brand new -- and rather ambitious -- GC algorithm (granted, one that's been tested for nearly a decade).

But then some other guy shows up and says, my new programming language would make your development better. How much better? I ask. Much! he says. Talk to me in numbers, I say. By how much would my development cost be reduced? Hmm, he says. I'm not sure; 30%... maybe? So I look at the adoption cost (huge), risk (very high), and find that 30% to be rather low, but let's suppose it's borderline good enough. Only he's not even sure about the 30%, because he has no experience with large systems and he's never even tested his claim. Maybe it's 5%, maybe 70%, and maybe -20%! So let me tell you that in that case, I will be very results oriented, look around, see that practically no one else uses that language and those who do see payoffs nowhere near high enough, so I pass[1]... and that's when you say I'm motivated by politics, hostile to academia and afraid of new stuff.

[1]: True, it isn't fair. Technologies with high adoption costs incur a much higher testing burden and are required to yield much higher payoffs than those with lower adoption costs. But that's no one's fault, and just the way it is. Researchers in those fields should keep on working, realizing that their payoffs may be high enough for industry adoption only every 20 years instead of every 5, as in the low-adoption-cost cases.


Bad title. The list of topics covers a very small fraction of the activity in computer science. The original Lampson title was fine. Why change it?


Dependency injection.

- No one ever


RPC has won indeed, in many different ways. Classic RISC made its way into embedded, low-power devices and clearly won, at least by the sheer number of chips produced. Formal methods settled strongly into hardware design [1]. Distributed computing made a very strong comeback (map-reduce, clouds, all that). So, in general, the list is very outdated. It was already outdated in 1999.

[1] https://www.cl.cam.ac.uk/~jrh13/slides/nasa-14apr10/slides.p...


I wouldn't say RPC won.

We don't use Remote Procedure Calls much; we do Remote Endpoint Calls. Every endpoint still has to be considered, specified, secured, and protected against DoS. None of that is automated -- endpoint specification systems, for example, famously didn't work. We don't fire arbitrary procedures on remote machines. In the end we use protocols, not procedures.
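A toy sketch of that distinction (entirely hypothetical; a JSON-over-the-wire encoding is assumed): locally, `add` is just a call, while the remote "procedure" is really an endpoint handler in which parsing, validation and error signaling -- the protocol -- are all explicit and hand-written.

```python
import json

def add(a, b):
    # Local procedure: just a call, nothing to specify or secure.
    return a + b

def handle_request(raw):
    # Remote "endpoint": every protocol concern is explicit --
    # parsing, method dispatch, argument validation, error signaling.
    try:
        req = json.loads(raw)
        if req.get("method") != "add":
            return json.dumps({"error": "unknown method"})
        a, b = req["params"]
        return json.dumps({"result": add(a, b)})
    except (ValueError, KeyError, TypeError):
        return json.dumps({"error": "malformed request"})
```

None of the error paths in the handler exist in the local call, which is the point: the endpoint is a protocol contract, not a transparent procedure.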



