But JavaScript is not (to cite another daft meme) "the assembler of the web", since it is itself interpreted (or JIT-compiled, or what have you) into bytecode for execution.
The point is that one should limit the level of abstraction to one that sits in the comfort zone between code that makes sense to humans and code that makes sense to the machine running it and, perhaps more importantly, to the available dev/debug tools.
No, one should limit the level of abstraction to the fastest one that is reliable enough to get the job done. There's nothing intrinsic about high abstraction layers that says they must suck for debugging what you need debugged; it just depends on what you need to do with them. If it gets your job done faster and it's reliable enough for your job, then why not? That's why higher abstractions are created in the first place.
In short, you can't instantly rule out this compiler without knowing exactly what you'd want to do with it, and exactly how good it would be at that specific problem.
I don't agree that JavaScript being good enough for one person makes it good enough for everyone.
I don't agree that limitations should be placed on abstractions; I have no clue what's happening at the CPU level when a line of JavaScript or MSIL or bla bla executes. I don't need to know what's happening at all the layers of abstraction beneath the one I'm working on, and that helps me be productive.
There might be a pragmatic argument against using compile-to-JS languages _right now_, but with things like source map support in Chrome's DevTools, this is a problem that is being solved, if it hasn't been solved already.
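To illustrate (a minimal sketch, assuming TypeScript as the compile-to-JS language and a hypothetical project layout): turning on source maps makes the compiler emit a .map file alongside the generated JS and annotate the JS with a pointer to it, so the browser's debugger can set breakpoints and step through the original source instead of the compiled output.

    // tsconfig.json -- enable source map output (hypothetical layout: src/ -> dist/)
    {
      "compilerOptions": {
        "outDir": "dist",     // compiled JavaScript goes here
        "sourceMap": true     // also emit dist/*.js.map next to each .js file
      }
    }

    // The emitted dist/app.js then ends with a comment like the one below,
    // which DevTools uses to map stack traces and breakpoints back to app.ts:
    //# sourceMappingURL=app.js.map

The same mechanism works for CoffeeScript, minifiers, and other tools that emit source maps; the debugger only cares about the map file, not which compiler produced it.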