Actually that's not true. Now that we have Phantheck, QuickCheck properties (which are a very pleasant superset of unit tests) can just be lifted to the type level with TH magic. That makes them dramatically more useful: traditionally you cannot write a function that requires its arguments to pass some test, only one that requires them to have a certain type. Now you can require arbitrary QuickCheck-able properties via types.
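To make that concrete, here is a minimal sketch of the idea. This is not Phantheck's actual API (the names Blessed, Ascending, and bless are mine), and where Phantheck runs the QuickCheck properties at compile time via TH, this sketch fakes the blessing step at runtime:

import Test.QuickCheck

data Ascending                      -- phantom tag: "output comes back sorted"
newtype Blessed prop a = Blessed a

-- QuickCheck the property once; only a passing function earns the tag.
bless :: ([Int] -> [Int]) -> IO (Maybe (Blessed Ascending ([Int] -> [Int])))
bless f = do
  result <- quickCheckResult $ \xs ->
    let ys = f xs in and (zipWith (<=) ys (drop 1 ys))
  pure $ case result of
    Success{} -> Just (Blessed f)
    _         -> Nothing

-- Downstream code demands the property in the type, not in a test suite.
minOfOutput :: Blessed Ascending ([Int] -> [Int]) -> [Int] -> Maybe Int
minOfOutput (Blessed f) xs = case f xs of
  []    -> Nothing
  (y:_) -> Just y   -- safe: the head of an ascending list is its minimum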
Note that you can still capture any property you want via non-metaprogramming types (though you may need a trusted kernel to keep things performant), but that can be trickier to work with than a test for people who are much more accustomed to tests than types.
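The non-metaprogramming version is the classic smart-constructor kernel: a small trusted module establishes the invariant once at the boundary, and everything outside gets it for free. A sketch, with illustrative names:

module Sorted (Sorted, fromList, toList) where

import qualified Data.List as L

-- Invariant: ascending order. The constructor is not exported, so the
-- only way in is fromList; code outside this module never re-checks.
newtype Sorted a = Sorted [a]

fromList :: Ord a => [a] -> Sorted a
fromList = Sorted . L.sort   -- pay the O(n log n) cost once, here

toList :: Sorted a -> [a]
toList (Sorted xs) = xs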
Also note that writing a function that should work but won't typecheck can itself reveal a flaw in the model (or in what you thought the model should be, or in what kinds of operations you thought should work).
Well, you can add operators, though you almost certainly should think about more idiomatic solutions first. It's entirely possible to write operators that allow a statement like:
arr .~ (i,j) .= arr ! (a,b) + arr ! (x,y)
You make orphan instances for Indexed and Num; then .~ is just a slight tweak on the adjust function that Indexed provides, and .= is literally a direct synonym for $ that just looks better in this context. This is actually more flexible than most languages, because now (arr .~ (i,j)) or (.~ (i,j)) can be named and applied to multiple things, should that be desirable for some reason. Also note that this code is very weird: arr is not an array but an IO action describing how to produce one, and operations on it actually produce diffing instructions to be applied elsewhere. I also have not tested the performance.
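Here's roughly what such definitions could look like over a plain Data.Map, leaving out the IO/diffing machinery described above (the fixities are illustrative, chosen so the line above parses as intended):

import Data.Map (Map, adjust, (!))

infixl 8 .~
infixr 0 .=

-- (m .~ k) is a pending write at key k, waiting for its new value;
-- here it's a tweak on Data.Map's adjust rather than the Indexed class.
(.~) :: Ord k => Map k v -> k -> v -> Map k v
(m .~ k) v = adjust (const v) k m

-- (.=) really is just ($); it exists only to read like assignment.
(.=) :: (a -> b) -> a -> b
(.=) = ($)

example :: Map (Int, Int) Int -> Map (Int, Int) Int
example arr = arr .~ (1,1) .= arr ! (1,2) + arr ! (2,1)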
These exact things are not in base because they are discouraged and not supposed to be easy. Note that the lens library provides operators more or less just like this for a wide variety of data types, in a safe and composable way.
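With lens and a Map, the library-provided spelling of the same update looks something like this (ix, .~, &, and ^?! are real lens combinators; the partial ^?! assumes both source keys are present):

import Control.Lens
import qualified Data.Map as M

-- ix (i,j) is a Traversal to the value at that key; .~ writes through it.
exampleWithLens :: M.Map (Int, Int) Int -> M.Map (Int, Int) Int
exampleWithLens arr = arr & ix (1,1) .~ (arr ^?! ix (1,2)) + (arr ^?! ix (2,1))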
That's not really a line-for-line port, or even block-for-block, since yours is never uninitialized. The original code was designed to mimic writing bad C in Perl: an uninitialized array that you then explicitly loop over to initialize to zero, and then modify along the diagonal. That sort of thing is difficult in Haskell on purpose.
size = 5   -- hypothetical; use whatever dimension the original used
x .* y = replicate y x
main = mapM_ (putStrLn . foldMap show) [0 .* x ++ 1 : 0 .* (size-1-x) | x <- [0..size-1]]
Or if you really do want mutation shorthand (because you're a crazy person) just build it:
import Data.Array.IO
import Control.Arrow ((***))

a .// l = mapM_ (uncurry $ writeArray a) l
pickFn f g (b,v) | b = g v | otherwise = f v

main = do
  arr <- newArray ((1,1), (size,size)) 0 :: IO (IOArray (Int,Int) Int)
  arr .// [((x,x),1) | x <- [1..size]]
  mapM_ (pickFn putStr putStrLn . (((==size).snd) *** show)) =<< getAssocs arr
It's not pretty, but it's not supposed to be. Do you know what is pretty? Doing things the right way.
There are already some quite advanced compilers that treat JavaScript itself as just a web assembly language; you don't technically have to wait for WebAssembly. TypeScript (some type safety), PureScript (lots of type safety), and GHCJS (MOUNTAINS of type power) all work right now, though the last is still experimental grade.
But I don't think an initial choice of implementing a Scheme would have helped. The idea of +0 vs 0 vs -0 could just as easily have happened in a Scheme, and the same goes for the reliance on stringy types. Those are symptoms of a rushed design and an unwillingness to accept the temporary sharper pain of invalidating existing bad code to produce a new, better standard (the exact same tendency, digging in deeper rather than risking the pain of turning back, is literally how spies managed to convince people to commit treason).
Then of course there's also the great risk that, just like Scheme not-in-the-browser, Scheme-in-the-browser might never have widely caught on.
> There are already some quite advanced compilers that treat JavaScript itself as just a web assembly language; you don't technically have to wait for WebAssembly.
Yeah, there's even a Common Lisp which compiles to JavaScript …
> The idea of +0 vs 0 vs -0 could just as easily have happened in a Scheme, and the same goes for the reliance on stringy types.
I'm not so sure about these specific examples: the Scheme standards have been quite clear about the numeric tower and about equality semantics.
I think your general point about the hackiness which was the web in the 90s, and the unwillingness to break stuff by fixing things holds, though. And of course it wasn't just the web: I recall that the Makefile syntax was realised to be a mistake weeks into its lifetime, but they didn't want to fix it for fear of inconveniencing a dozen users (details fuzzy).
> Then of course there's also the great risk that, just like Scheme not-in-the-browser, Scheme-in-the-browser might never have widely caught on.
I dunno — would a prototypal language have ever caught on were it not for the fact that JavaScript is deployed everywhere? I can imagine a world where everyone just used it, because it was what there was to use.
And honestly, as much as I dislike Scheme, it would have fit in _really_ well with JavaScript's original use case and feel. And if the world had had a chance to get used to a sane, homoiconic programming language, then it might have graduated to a mature, industrial-grade, sane, homoiconic language.