I'd like to see some real-world performance numbers when compared with gzip. The article is a little overzealous in its claims that simply don't add up.
My suspicion is it's going to be marginal and not worth the added complexity for what is essentially a compression technique.
This project is a prime example of misdirected optimization. Developers should be focused on loading the correct amount of JavaScript that's needed by their application, not on trying to optimize their fat JavaScript bundles. It's just lazy engineering.
I'm guilty here: I described this format as a "compression technique" because early feedback indicated that many people assumed it was a new bytecode. However, the main objective is indeed to speed up parsing; compressing the file is a secondary goal.
> My suspicion is it's going to be marginal and not worth the added complexity for what is essentially a compression technique.
In terms of raw file size, and according to early benchmarks (which may, of course, be proven wrong as we progress), Binary AST + gzip achieves slightly better compression than minification + gzip. Unlike minification, Binary AST does not obfuscate the code.
The real gain is in parsing speed, where we see considerable speedups. I do not want to advertise detailed numbers yet, because people might take them at face value, and we are so early in the development process that they are bound to change dozens of times.
> This project is a prime example of misdirected optimization. Developers should be focused on loading the correct amount of JavaScript that's needed by their application, not on trying to optimize their fat JavaScript bundles. It's just lazy engineering.
Well, you are comparing optimizing the language vs. optimizing the code written in that language. These two approaches are and always will be complementary.
The gzip point aside (which is not an apples-to-apples comparison, since gzipping a big source does not diminish its parse time), I see the "JS devs need to stop shipping so much JS" response often. My issue with it is that multiple parties are all working toward making JS apps load and run faster. It is easy to say "developers should do better", but that can be an umbrella response to any source of performance issues.
The browser and platform vendors do not have the luxury of fiat: they cannot will away the size of modern JS apps simply because they are causing slowdown. There can be engineering advocacy, to be sure, but that certainly shouldn't preclude those vendors from attempting technical solutions.