Hacker News

The web is not open and becoming increasingly less so.

People love to talk about how the web is about open standards and such, but it really is rather closed.

It's driven less by standards and more by de-facto implementations. Soon we can get rid of the standards committee and just talk to the implementers of WebKit to define the "standard".

And I think even worse has been the wholesale discounting of plugins. I still strongly believe that being tied to JavaScript as really the only client side language is a mistake. It's not a great IL, and limiting the language for such a pervasive platform is scary. A powerful plugin model would be, IMO, one of the best things for a truly powerful web.

I wish the web were more open. I wish that browsers were a truly extensible runtime that specified lower-level abstractions and allowed more innovation at the top of the stack.

It feels like we're walking into the dark age of the internet.



This comment captures a lot, perhaps not in the way Ken intended, but I feel compelled to respond.

Let's start with the thesis statement: "The web is not open and becoming increasingly less so."

On its face, this statement is not only false, it is painfully so. Sort of like saying the world is not round and becoming increasingly less so. Pretty much anyone would say "the world is clearly round, and it's impossible to change that." Similarly, there is absolutely nothing preventing Ken, or anyone else, from building an entirely different "web" just like Tim Berners-Lee did back at CERN. So the definition of the word 'open' here clearly needs some additional verbiage.

The next statement helps a bit, "It's driven less by standards and more by de-facto implementations. Soon we can get rid of the standards committee and just talk to the implementers of webkit to define the 'standard'."

This deliciously captures a debate that has raged for 30 years at least. "Which came first, the standard or the code?"

Back when I was young and impressionable, the debate was between something called the "ISO 7 layer networking model" and "TCP/IP". You see, the International Standards Organization had decided that the world needed a global networking standard, and so they got their best standards engineers together to come up with what they gloriously christened "Open Systems Interconnection", or the OSI set of protocols. Meanwhile, a scrappy bunch of network engineers and hacker types in a loose-knit organization called the Internet Engineering Task Force were building networks that spanned countries; they wrote code and brought it to the meetings, debated what worked well and what didn't, and then everyone went back and wrote more code, etc.

The forces of evil ignored the IETF and focused on the ISO working groups, since the latter were making standards and the former were just playing around with code.

As it turned out, working code tended to trump standards. Debating changes with working code ('version A does / version B doesn't') beat debating in the abstract ('it should / it might'): in the abstract process, changes got made to standards based on convincing arguments that had never been tried or experienced in practice. The result was that the OSI standards had a lot of stuff in them to avoid issues that weren't issues, and were missing responses to things that actually were issues.

A number of people found the 'code first, standard later' methodology superior for that reason, assuming the code was available and unencumbered by patent or licensing restrictions. The latter of course became a much bigger problem when the focus switched to the IETF and the 'big guns' started their usual games.

My first response is then: "open" means anyone can implement and contribute new stuff. And by that definition the web is very open. However, since the community favors an implementation model over a theoretical standards model, the 'cost' of influencing change is that you have to write code, as opposed to making a good argument. And that disenfranchises people without good coding skills.

The second part of this screed is prefaced with this: "And I think even worse has been the wholesale discounting of plugins." Which speaks to the other side effect of "open" as in "we don't make any boundaries we don't have to."

From a mass adoption point of view, the more variability you have in an experience, the harder it is to capture the larger audience. This is why cars and motorcycles are all driven in basically the same way, televisions have 'channels' and 'volume', and browsers have an address bar and bookmarks.

The unification of the structure allows for mass learning and talking in generalizations that remain true without device-specific knowledge. Can you imagine how hard it would be to write a driver's handbook if every vehicle had its own customizable UI and indicators?

So as a technology matures the variability is removed and the common practices are enshrined into the structure.

What that means in a practical sense is that if you're trying to push the envelope in such a mature technology you will face higher and higher resistance. However, you are always allowed to create an entirely new way of doing things.

This isn't the 'dark age' it's the post renaissance start of the industrial revolution. Except instead of widely accessible books we've now got widely accessible information and a de-facto portal for accessing it.


I think this is unfair to the parent. Comparing web standards to the OSI model is really not the same thing, in my view, at all. Web "standards" are much more specific, prescriptive, and detailed than the OSI model.

It's hard to say whether the parent is correct about the dark age, but such a thing has clearly been the case in the past with regard to standards. There was a time in the not-so-distant past when browser vendors, particularly MS, did not care much at all about conforming to any sort of standards, and created a mess which things like jQuery were partially created to solve. So I think there is real ground for the parent's point. The issue is whether it is really still getting worse.

One thing I think is different from previous years is that the programming community is less accepting, I think, of totally non-standard, even weird, proprietary implementations.


You could argue that the IETF has become everything the ISO working groups used to be, and that 'rough consensus and running code' is long gone.

I seem to flip flop back and forth on this, personally. Some days the standards process seems bright and cheery, at other times, I fear Apple/Google/Microsoft are running the show.


"You could argue that the IETF has become everything the ISO working groups used to be, and that 'rough consensus and running code' is long gone."

Yes you could; I was there in the trenches trying to urge them not to go down that road. I really lost it when we had gotten Sun to release all claim to any rights to the XDR or RPC code libraries so that the IETF could endorse it as an 'independent' standard. There were enough implementations out there, and a regular connectathon that tested interoperability. Paul Leach, who made it his life's work to prevent any sort of standardization of RPC, successfully rallied Microsoft to overwhelm the otherwise rational members of the IETF and derail that process. It was so blatant, and so petty. I remember telling Vint Cerf that I marked that day, and those events, as the death of the IETF as a body with any integrity. Many working groups soldiered along and did well until they became 'strategic' to some big player, and then their integrity too was ripped out of them. Sort of like that movie 'Invasion of the Body Snatchers.' Very sad.

Over the years I've toyed briefly with starting a new networking working group.


Sadly, a group of backwards people stopped WebDB just because SQLite being fast, fully featured, and in the public domain was not enough for them.


Let's step back from the "open" buzzword and look at the empirical facts.

Less than a decade ago you made sure your web sites ran well in Internet Explorer, a closed source browser that was allowed to stagnate. IE took the W3C's standards as more like "guidelines" and not a specification.

- Today, every major web browser (except IE) uses an open source rendering engine (or the browser itself is open source).

- Every major web framework and library is open source.

- Most of the servers running the web are powered by an open source OS.

- The standards bodies are actually working faster than ever on new versions of ECMAScript and HTML.

- IE's marketshare is smaller than ever.

Years ago some guys had a crazy idea to make a browser for KDE. Today it's powering much of the desktop web and almost ALL of the mobile web. Perhaps I just like a good love story, but it seems like this is a pretty great achievement for "open". Now, it seems pretty disingenuous to say "the web is not open" and even more so to say that just because more people are working on the same open source project that it's, "becoming increasingly less [open]."

[edited for formatting]


... and before IE it was Netscape: people didn't like that either, but you (and most people who talk about browser history) seem to forget or ignore that :(. If you go back and read through the W3C mailing lists people really really hated Netscape (the by-far dominant web browser at the time, for which books on HTML would have sections dedicated to optimizing for and would even go as far as to say being Netscape-only was fine) for seemingly making up HTML as they went along (almost all of the stuff in HTML that is deprecated, including all of the markup that was for style and presentation only, were Netscape-only HTML extensions) and refusing to take up the charge of CSS. Microsoft was even occasionally described as the potential savior that would come in with a second implementation that paid attention to them (and in fact you then find a ton of praise on the list from Microsoft publishing open DTDs from IE).

Despite all of this, Netscape (a company whose business model at the time relied on selling web browsers and getting contracts with ISPs to bundle their software with subscriptions) managed to get Microsoft's hands slapped so hard by the justice department for having the gall to give away a web browser as part of an operating system (something we now all take for granted: no one complains that Apple pushes Safari with OS X, nor do nearly enough people scream loudly about the fact that alternative web browsers on iOS are only possible if you use Apple's rendering engine in a crippled mode, defeating the purpose, despite Apple having near-monopoly status on the mobile web) that Microsoft never quite got back the courage to keep moving forward given the new constraints they were under. Thankfully, in the process, Netscape still died, and from its ashes arose the idea that an open-source web browser would be interesting and viable, leading to the ecosystem we have today.


Well, to be honest, why did Opera once have ads?

And on Netscape and CSS: http://news.ycombinator.com/item?id=2108940


The "almost all of the mobile web" part is really bad, and exactly resembles the earlier IE situation on the desktop. I hope Mozilla will shift the balance there again.


The problem with saying "WebKit is the new IE" is that IE was allowed to stagnate because it was a singular browser with a dominant position in the market. When that position was achieved, it was no longer necessary for the company maintaining it to continue to compete.

WebKit, in contrast, isn't controlled by any one company. The people using it have access to the source, and more often than not are contributing to the project themselves.

I don't think the competition is going to end; it's just going to change form.


Stagnation was only one aspect of the problem. Another big aspect was the generally poor quality of sites created with only IE in mind. The same often happens with WebKit on mobile.


Honestly, in my view "guidelines" is even a strong word for what Microsoft did with the browser and "standards." This is not to hate on MS at all, but even though you could, I suppose, argue IE has gotten much better, I don't really see a reason for it to exist anymore. It's been such a bad boy and has so few redeeming features that I really think the "blue e" on the desktop should be made into simply a shortcut to whatever your default browser is, be it Chrome, Firefox, Opera, or whatever (but not IE, because its development, in my view, should stop).


Here's another interesting way to think about it: look at the mobile web on iOS. Does it matter that WebKit is open source and forkable? No, since Apple actively prevents "alternative" visions of what the web should be since they just don't allow custom builds of WebKit/JS on there. Any browser you make can only add superficial features, forcing you to use the built in WebKit. You may have a great idea that would instantly make everyone want to use web apps on iOS instead of native apps, but since you 1) can't ship that browser on iOS, and 2) can't convince Apple to commit that change to WebKit proper, you are effectively locked out.

Compare this to a plugin-driven environment vs a standards-driven environment. Say what you like about Flash, but it was able to guerrilla video onto the web without anyone's permission.


> ...look at the mobile web on iOS. Does it matter that WebKit is open source and forkable?

The version of WebKit used on iOS is actually not open source and forkable; Apple works around even WebCore, which is LGPL: rather than releasing code changes for iOS-specific features, they release binary .o files users can link in.

Hell: Chrome for Android isn't even open source. People tend to totally forget that "WebKit is open source" is meaningless in the general case, as the BSD license allows specific forks to be closed, and all of the mobile ones hold stuff back.


WebCore/JSCore for iOS are indeed open source, but yes certain pieces are released as .o's that you just link with it, and additionally it is released as source dumps instead of nicely managed versioning. If you go to http://www.opensource.apple.com/release/ios-61/ and download WebCore you will see plenty of source in there. The point is that you could for example meaningfully edit WebCore (for example adding python scripting support, or perhaps even putting in ogg support, whatever), link it with the .o pieces, and have an interesting new product, but still not be able to release it on iOS due to the rules.


(edit: This comment's first paragraph is wrong, as pointed out in the reply below; in fact, this functionality is not in WebKit, it is in WebCore. However, it is still closed source, and it is not available on the website that was linked to: you cannot find the source code for anything that makes WebKit on iOS work on iOS, as it is all closed. The point stands.)

WebCore != WebKit. You were saying that WebKit was open source and forkable: no, WebKit isn't. There is a library used by WebKit called WebCore that is, but WebCore doesn't provide most of the juicy iOS functionality; Apple actually seems to actively avoid touching WebCore, lest their lives become more difficult due to it being under LGPL.

The reason, then, that I brought up WebCore was as a demonstration that even for things where Apple must release at least some source code (as WebCore is under LGPL), they still weasel around it: WebKit, which is under BSD, has no such protection, and you will note that there is simply nothing available for it on iOS at all.

(Trust me: I routinely download all of opensource.apple.com, to find not just new packages but redactions, and have scripts to reconstruct git repositories out of the various tarballs for key projects, which I then export for others in our community to more easily be able to work off of; you forward me there as if I haven't heard of it... ;P.)


No, trust me: I was on the original iPhone team and was responsible for Mobile Safari and "WebKit" on iOS, and you are pretty confused about the project.

For starters, "WebKit" is an overloaded term. There is "WebKit" the framework, which is a bridge between the actual gears (WebCore and JSCore) and applications. In other words, it is an API: an increasingly thin Obj-C layer on top of all the C++ that does the real work. Then there is the "WebKit Project", which is an umbrella term for all that stuff together (WebKit/WebCore/JSCore). Chrome, for example, uses neither WebKit proper (if I recall correctly) nor the engine in JSCore (opting for V8 instead), and yet it is still considered a "WebKit browser". That's because what makes you "behave" like WebKit is WebCore, which actually handles the DOM, rendering, CSS, etc. So saying that Apple releases WebKit for iOS is perfectly acceptable terminology, even if you want to be pedantic about it. Now, I don't know what you define as "juicy functionality", but I can assure you that WebCore is not just some helper library or something; WebCore more or less IS WebKit. It is certainly enough for you to be able to build your own custom browser for iOS. In fact, even if the iOS version were completely closed source, you could still take the desktop 100% open source WebKit and port it to the phone (just like Nokia and Google did for their phones).

So I guess I'm missing the relevancy of your point. If you just wanted to rant that Apple doesn't open source as much as it should, then I sympathize, but it really has nothing to do with the point I was making that due to the separate restrictive nature of the App Store policies, it doesn't matter if WebKit is or isn't open source because you aren't allowed to ship a custom browser engine anyways (at least not one that runs JavaScript).


So, I just spent some time digging around in these libraries, to figure out where I might be wrong about this, and it seems like this is my mistake: a bunch of functionality I thought was in WebKit is actually in the closed-source parts of WebCore. Specifically, I'm talking about the tiled rendering (TileCache.mm) and all of the multitouch logic (PlatformTouchEventIPhone.mm). Even simple things like the copy/paste support are redacted, as is scrolling (yes: scrolling). For other platforms, all of this code is available.

This closed source part contains tons of simply critical things, such as how to interact with the iPhone's memory management notification system, how to manage the network state transitions, how to interact with embedded video... pretty much everything about MobileSafari that makes it MobileSafari as opposed to a less-than-half-operational desktop version of Safari with a touch screen (which would suck) is closed source.

I'm therefore quite sorry that I thought that this stuff was in the other library, but my point about the "open source and forkable" stands, and I think it stands pretty well: I can't fork WebCore and make meaningful changes to it for this system. In fact, even for people who have access to the system's internals (we jailbreak users), the few people who used to recompile WebCore for it (the Hebrew community) gave up and moved to writing Substrate extensions instead.

To be clear: I consider these iOS-specific things to be "the juicy parts" of MobileSafari: if you want to compete, you have to have really strong compatible answers for them. It isn't sufficient to take a desktop copy of WebKit and recompile it, because if you did that you'd just have a totally unusable browser experience... you wouldn't even have as much functionality as embedding a UIWebView in your application and attempting to externally script it.

So, yes: you are right that these are in WebCore, and that my complaint about "WebCore != WebKit" was wrong. The reason I made that argument was to try to reconcile your insistence that WebKit for iOS was open source with the reality that there really is no source code available from Apple for anything but "something that renders HTML (slowly, and missing features)"; my reconciliation was wrong, but the reason for it is still correct: WebKit for iOS, including WebCore, is not "open source and forkable" enough to make a web browser.

To the extent to which one can then put in the elbow grease to add back all of these missing parts in order to make a web browser, honestly one may as well be starting with any other rendering engine that isn't yet ported to this platform. Therefore, if WebKit for desktop systems is relevant to this discussion, then Gecko being open source is equally as relevant and libcurl in general being open source is largely as relevant. You can also build web browsers out of those.

Given this, I'm having an increasingly difficult time trying to figure out where your correction to kenjackson's argument was: you tried to give him a different way to think about it (in a way that might undermine his argument that WebKit is a defacto implementation due to an inability to install your own copy on iOS), but it is starting to seem like you just agree with him? Am I simply misunderstanding why you were responding to him?


Yes that is the misunderstanding, I do agree with him (you can see another response to him here where we continue agreeing: http://news.ycombinator.com/item?id=5214213 ). I was not correcting him by saying "Here's another way to think about it", I was offering another interpretation of why kenjackson is right by looking at the case of mobile in particular.

I chose to focus on mobile for iOS precisely because here you have the greatest example of how open source is helpless and irrelevant. Even if Apple were to satisfy all your requirements for WebKit being open source, you still would not be allowed to compile it and ship it in your app, let alone modify it and ship it. Even if I write the best browser ever for iOS, I am not allowed to ship it on iOS. This is why I keep coming back to it not mattering whether you can or can't fork WebKit for iOS, just like it doesn't matter whether you can or can't write a completely new engine from scratch for iOS, just like it doesn't matter whether you can or can't fork Firefox for iOS: due to the nature of the platform, the web is closed on iOS PERIOD. Apple is THE gatekeeper of all features that enter the iOS web. Continuing to agree with kenjackson, that is why he is right that a better runtime or plugin system is ultimately more important for the web to be open than source code being released: as long as I can have a direct relationship with the user, where they can install a plugin and modify the behavior of their browser, there is a shot for non-dominant market players to always influence the direction of the web (the same way Adobe created the video revolution of the web without needing to own a browser or a cell phone or any other way of forcing people to use their tech).

I was really confused why you kept arguing with me about how open source WebKit is, when my point was "whether or not it's open source and you can fork it, it doesn't matter, because the web is closed on iOS for deeper reasons".


And that's bad why? So the web only works with one rendering engine. One rendering engine that's open source and can be used and modified by anyone for any purpose.

Standards are great for things like protocols (even languages), but an entire web browser is a tad more complicated than TCP or even C++. No two browsers have ever implemented HTML/JS/CSS perfectly and they never will. If that's the case, then what's the point of a "standard" anyway?


One rendering engine that has some serious limitations, like not being very parallelizable, because of highly entrenched implementation choices.

Which means that if you want hardware capable of rendering the web it can't be low-power highly-parallel hardware; it has to be high-power-consumption fast-serial-operation hardware. Why is that bad? I guess that's a matter of perspective. I think that would be a terrible outcome, personally.

I should point out that I'm not aware of any compiler that has implemented C++ perfectly, and I doubt any ever will given that it's a moving target. So why bother having multiple compilers or a C++ standard at all? For example, why does the WebKit argument not apply to gcc? And note that in compiler-land not being able to compile some codebases is OK as long as you can compile the codebase your user cares about, while the Web equivalent (only rendering some websites but not others) is a lot more problematic, because the typical compiler user compiles fewer different codebases than they visit websites. And also because using different compilers for different codebases is a lot simpler than using different browsers for different websites.


You know that you can always start an open source project that fixes these parallelization issues and start building out an engine that is better, right? It'd probably be a 5-7+ year project, but it certainly is doable.

In fact, it's possible that the poor parallelization support will be the Achilles' heel of WebKit on a long enough time scale.

This is no different from the Achilles' heel of the DOM: procedural-style immediate-mode graphics instead of retained-mode graphics. Browser apps will never compete with iOS apps in terms of user experience until this procedural approach is replaced with a declarative, functional reactive approach.

Think long term. The Windows hegemony eventually buckled under its own weight. There's no reason to think that WebKit won't eventually do the same on a long enough time scale. Figure out what will lead to its collapse, because that is an opportunity. In fact, letting WebKit lead the way allows you to learn all the ways in which WebKit does it wrong. WebKit will continue to trailblaze on the interface, but it doesn't have to be the be-all and end-all of implementations for those interfaces.

Between Tizen and B2G, there is plenty of innovation in the web browser space. I just hope that transclusion is always considered a first-class citizen in this brave new world.


> you can always start an open source project that fixes these parallelization issues and start building out an engine that is better

Sure. We (Mozilla) are doing that right now.

> It'd probably be a 5-7+ year project

If there is no WebKit monoculture. If there is, such that the project has to duplicate WebKit bugs after reverse-engineering them, then it's a lot longer, if possible at all (because some of the bugs are parallelism bottlenecks).

Which is precisely my point. A WebKit monoculture would make it less possible to start such an open source project.

> Think long term.

You mean the one in which we're all dead?

Even if a hypothetical WebKit monoculture "merely" delays the advent of more-parallel rendering engines by 20 years, as opposed to preventing it altogether, that's still a huge loss in my book.


I don't know if it does or does not. Maybe. Was the Opera browser engine doing anything to provide a more parallel engine option? If not, it arguably wasn't helping in this respect either.

Since WebKit is open source, can't you just submit bugfixes for the bugs that are parallelism bottlenecks? Seems like that would make a lot more sense than coding another engine to accommodate those bugs. If it truly is a bug, there shouldn't be any problem with submitting a bugfix and getting it accepted.

Since you're working on Firefox, are there any examples of WebKit "bugs" that prevent parallelism that you could not fix yourself? Does Mozilla have a team of WebKit engineers whose sole job is to fix WebKit so that those monoculture problems are mitigated and don't become a problem for other browser engines? At the end of the day, all you guys need to defend is the interface, not the implementation. Fixing each other's engines before bugs become features seems like a good way to accomplish this. ref: http://xkcd.com/1172/

The most important abstraction to fix isn't even a WebKit abstraction, it's a W3C abstraction. Everything was doomed from the get go because of the one-to-one relationship between the window and the document. I agree with Kay here about TBL & Co being shortsighted in what the web could have become if a richer interactive experience instead of a document based experience had been considered from inception.

Look at Twitter. That's not a document. That's an application. Each tweet in the interface is a document. Every tweet in that feed is a document that has been "transcluded" into the app Twitter built on top of a document. There needs to be a standard way to "transclude" documents with a reference URL that allows interactivity. The fact that the only hyperlinking option we have today is the <a> tag is unfortunate. There need to be more ways of hyperlinking than an <a> tag. You need to be able to window directly to a document or document fragment at a different URL. The #hashanchors aren't sufficient, since they only describe a beginning, not an end, to the fragment being excerpted. iFrames kind of provide an alternative, but this was never explored properly. The host app should also be able to provide a cached copy of the contents of any sub-document, for performance and to guarantee that a copy of the referred document is always available in the parent context.
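To make the begin/end idea concrete, here is a hypothetical sketch (the anchor convention and function name are made up; nothing like this is standardized) of extracting a fragment delimited by two named anchors, in Python for brevity:

```python
def transclude_fragment(html: str, start_id: str, end_id: str) -> str:
    """Toy version of begin/end fragment transclusion: return the markup
    between two empty named anchors. A real standard would need a proper
    HTML parser plus an addressing scheme, not string search."""
    start_tag = f'<a id="{start_id}"></a>'
    end_tag = f'<a id="{end_id}"></a>'
    i = html.find(start_tag)
    j = html.find(end_tag)
    if i < 0 or j < 0 or j < i:
        raise ValueError("fragment anchors not found in order")
    return html[i + len(start_tag):j]

doc = '<p>intro</p><a id="t1"></a><p>tweet one</p><a id="t2"></a><p>outro</p>'
print(transclude_fragment(doc, "t1", "t2"))  # -> <p>tweet one</p>
```

The hard parts a real standard would have to settle (addressing, sandboxing, style isolation, caching of the referred document) are exactly what this toy version ignores.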

edit: downvote? srsly? without a response? downvoting is for comments that don't contribute to the conversation, not for comments you simply don't agree with.


You're arguing that it's not necessarily terrible if Opera switches to WebKit, in terms of monoculture issues.

That may well be true. But what others in this thread are arguing is that it would also not be terrible if everyone else switched to WebKit too, and I believe they're wrong about that.

> can't you just submit bugfixes for the bugs that are parallelism bottlenecks?

You mean bugs like being written in C++ and not architected around parallelism?

Bolting on parallelism is _hard_. Have you ever tried doing that with an existing multi-million-line C++ codebase? I've tried with Gecko, and I've spoken with people who have tried with WebKit, as well as reading a good bit of WebKit source, and it's not really feasible unless you drop everything else and focus all your energy on it. And maybe not even then.

> are there any examples of WebKit "bugs" that prevent parallelism that you could not fix yourself

And get the patches actually accepted in WebKit? Lots.

Again, you seem to think the problem is some small issues here and there, whereas the problem is more that you have lots of code touching non-const data through C++ pointers, which means that if you try to parallelize you either get tons of data races or lock contention that kills performance or most likely both.

> At the end of the day, all you guys need to defend is the interface, not the implementation

As long as there are multiple implementations. Unless you include in "interface" everything that's web-visible, but policing that is a huge endeavor that no one working on browser implementations has the manpower for.


Assuming that no webpages are reliant on those bugs, sure. One of the problems with everyone targeting a single rendering engine is that they become reliant on the bugs of that engine, to the point that it becomes difficult to make any changes without breaking compatibility. Look at IE, for instance, especially the IE7-compatibility mode in later versions which wasn't IE7 compatible.


Hell, don't look at IE, look at Windows: the Windows guys, having written a correct allocator, had to add logic to check if SimCity was running and turn on a special flag to fall back to the old allocator's behavior.

http://ianmurdock.com/platforms/on-the-importance-of-backwar...

Now multiply that story for every badly written WebKit site that relies on some backward ass crazy bug that no maintainer sees fit to fix.


As an example, I personally tracked down where legacy color parsing lives in the classic Netscape source: http://stackoverflow.com/questions/8318911/why-does-html-thi...

It is so subtle that even Netscape's own Gecko rewrite did not get it completely right the first time: https://bugzilla.mozilla.org/show_bug.cgi?id=121738


And to stretch that a bit, the same applies to operating systems, text editors, heck, clothing colors, clothing styles, food, etc.

No choice and no diversity has always been a bad idea, and it has never, ever led to efficiency or progress. It has always, every time, led to the opposite.


This is an incredibly short-sighted comment. The engine is just a client application. There are millions of websites on the other end of the line, and any changes to web standards affect them too. That's why it's incredibly harmful to treat any single implementation as a de-facto standard.

We need (good) web standards, because we need consistent web architecture where features work together and somebody does long-term planning.


A monoculture by itself has a very bad property.

It's easy to use, and to misuse, on a large scale. What runs the same everywhere is great for app developers and crackers alike.

Its sheer size makes it a tempting target for all kinds of dubious organizations: not just for exploiting existing bugs, but for introducing new exploits into the source.


Web being open has nothing to do with the number of rendering engines for browsers. Even having zero of them would not affect the openness of the web in any way. Firewalls would.

> It's driven less by standards and more by de-facto implementations.

That was always the case. In fact, one of the goals of the WHATWG (the fathers of HTML5) was to standardise how even invalid markup is rendered.


> It feels like we're walking into the dark age of the internet.

The persuasiveness of your argument is harmed by this sort of melodrama.

By the way, the "Dark Ages" are named such because of a lack of written historical records from the Early Middle Ages. The negative connotation attached to the phrase by the general public is considered inaccurate by historians.


> Soon we can get rid of the standards committee and just talk to the implementers of webkit to define the "standard".

Your comment in 1995: Soon we can get rid of the standards committee and just talk to the implementers of Netscape to define the "standard".

And in 2001: Soon we can get rid of the standards committee and just talk to the implementers of IE to define the "standard".

And in 2006: Soon we can get rid of the standards committee and just talk to the implementers of Firefox to define the "standard".

...and for more comparison:

Your comment in the late 70's on computers: Soon we can get rid of the hobbyists and just talk to Apple to define the "standard".

And in 1996: Soon we can get rid of Apple and just talk to the creators of the PC to define the "standard".

And in 2010: Soon we can get rid of the PC and just talk to the creators of the iPad to define the "standard".

Just sayin'


+1

I've said for years that a pluggable JavaScript engine should be fundamental to browsers. Extending it further, plugins that allowed for alternate and complementary lower-level technologies (easily embedding a Python engine, for example) would lead to more competition and innovation.

We just had a discussion (or, I had a rant) about this at a local web meetup last night. 10 years ago it was "IE only". We're moving into "webkit only" these days, especially if you're targeting mobile users. In some ways it doesn't feel like we've progressed all that much.


If the code for the de facto implementation is open source, does it matter? Personally, I see the code and the standard as the same thing in different languages.


Of course it matters. The "standard" becomes driven by the peculiarities of a specific implementation, rather than the best thing for the base of customers that are served by the standard.

I think it's fine to have a reference implementation, but we need a broad set of implementations (with actual users) so that the standard doesn't get blinders from decisions baked into a single de-facto implementation.


I see what you mean, yes, that does make sense. To keep it honest, so to speak.


They are different things because it's unrealistic to fork WebKit and get any significant market share. So even if you make a worthwhile change to the engine, realistically you need that change to be accepted upstream by WebKit proper for it to matter.


Forking WebKit is effectively the same as writing your own browser, with regards to standardization. At the end of the day you want to get everyone to agree on the standard -- having an implementation that everyone can use only helps to sweeten the deal.


Now that I think about it, the response was more in regard to the grand-OP's desire for a lower-level extensible runtime and his lamentation about plugins than a comparison to standards. In other words, which web is more open:

1. One in which all the code is open source but there are huge hurdles to releasing your own browser, and any new feature is thus at the mercy of just a few big companies (Apple, Google, etc.).

or

2. One in which perhaps all the browsers were closed source, but adding new features to any such browser really was just a matter of referencing a script on a web page?

The question is more or less only useful as a thought experiment at this point, of course, and in particular I don't feel that the "standards" process was ever particularly open to begin with, so I don't think things have, or will, necessarily get much worse.


I like how you've stated it, and made my original premise less confrontational and more of a question about what you value as "open".


I don't think you can say the code is the standard. An implementation will have many quirks and things unrelated to the behavior we are interested in standardizing. Where do you draw the line? That would mean no implementation that wasn't simply 100% identical would conform to the standard.

You could say "Well standards document is irrelevant because no one follows them anyway" but that's another issue.


I'd be a lot more worried about proprietary codecs like H.264 and HEVC, and about DRM schemes on the web, than about having one open-source rendering engine as the "standard" for web browsers. At least you can fork WebKit, and anyone can collaborate on it anyway.


Presto was not open source. WebKit is open source.

So, the internet is becoming more open, not less.


proprietary binary plugins != open web



